In sociolinguistics, a sociolect is a form of language (non-standard dialect, restricted register) or a set of lexical items used by a socioeconomic class, profession, age group, or other social group.[1][2] Sociolects involve both passive acquisition of particular communicative practices through association with a local community and active learning and choice among speech or writing forms to demonstrate identification with particular groups.[3] The term sociolect might refer to socially restricted dialects,[4] but it is sometimes also treated as equivalent to the concept of register,[5] or used as a synonym for jargon and slang.[6][7] Sociolinguists—people who study sociolects and language variation—define a sociolect by examining the social distribution of specific linguistic terms. For example, a sociolinguist would examine the use of the second-person pronoun you within a given population. If one distinct social group used yous as the plural form of the pronoun, this could indicate the existence of a sociolect. A sociolect is distinct from a regional dialect (regiolect) because social class, rather than geographical subdivision, substantiates the unique linguistic features.[8] A sociolect, as defined by leading sociolinguist and philosopher Peter Trudgill, is "a variety or lect which is thought of as being related to its speakers' social background rather than geographical background."[9]: 122 This idea of a sociolect began with the commencement of dialectology, the study of different dialects in relation to society, which has been established in countries such as England for many years, but only recently has the field garnered more attention.[10]: 26 However, as opposed to a dialect, the basic concept of a sociolect is that a person speaks in accordance with their social group, whether with regard to ethnicity, age, gender, or other factors. As William Labov once said, "the sociolinguistic view ...
is that we are programmed to learn to speak in ways that fit the general pattern of our communities."[11]: 6 What we are surrounded with in our environment therefore determines how we speak, and hence our actions and associations. The main distinction between sociolects (social dialects) and dialects proper (geographical dialects), which are often confused, is the settings in which they are created.[12] A dialect's main identifier is geography: a certain region uses specific phonological, morphosyntactic, or lexical rules.[9]: 35 Asif Agha expands the concept, describing it as "the case where the demographic dimension marked by speech are matters of geographic provenance alone, such as speaker's birth locale, extended residence and the like".[13]: 135 A sociolect's main identifier, however, is a socioeconomic class, age, gender, and/or ethnicity in a certain speech community. An example of a dialectal difference, based on region, is the use of the words soda, pop, and coke in different parts of the United States. As Thomas E. Murray states, "coke is used generically by thousands of people, especially in the southern half of the country."[14] Pop, on the other hand, is known to be a term used by many citizens in the northern half of the country. An example of a sociolect difference, based on social grouping, is the zero copula in African American Vernacular English: it occurs in a specific ethnic group but in all areas of the United States.[11]: 48 William Labov gives an example: "he here" instead of "he's here."[11]: 38 Code switching is "the process whereby bilingual or bidialectal speakers switch back and forth between one language or dialect and another within the same conversation".[9]: 23 Diglossia, a term associated with the American linguist Charles A. Ferguson, describes a sociolinguistic situation such as those that obtain in Arabic-speaking countries and in German-speaking Switzerland.
In such a diglossic community, the prestigious standard or 'High' (H) variety, which is linguistically related to but significantly different from the vernacular or 'Low' (L) varieties, has no native speakers.[9]: 389 Domain describes how "different language[s], dialects, or styles are used in different social contexts".[9]: 41 Language attitudes are "social in origin, but ... they may have important effects on language behavior, being involved in acts of identity, and on linguistic change."[9]: 73 A linguistic variable is "a linguistic unit ... initially developed ... in order to be able to handle linguistic variation. Variables may be lexical and grammatical, but are most often phonological". An example is British English (h), which is sometimes present and sometimes not.[9]: 83 Pragmatics is the meaning of a word in social context, while semantics has "purely linguistic meaning".[9]: 107 Register is "a language variety that is associated with a particular topic, subject, or activity ...." Usually it is defined by vocabulary, but it has grammatical features as well.[9]: 110 The following is an example of the lexical distinction between the Mudaliyar and the Iyengar groups of the Tamil-speaking people in India. The Iyengar group is part of the Brahmin caste, which is scholarly and higher in the caste hierarchy than the non-Brahmin, or Mudaliyar, caste.[13]: 136 The Mudaliyars use many of the same words for things that are differentiated within the Iyengars' speech. For example, as seen below, the distinction between drinking water, water in general, and non-potable water is conveyed by one word in the non-Brahmin caste and by three separate words in the Brahmin caste.
Furthermore, Agha references how the use of different speech reflects a "departure from a group-internal norm".[13]: 139 For example, if the non-Brahmin caste uses Brahmin terms in its mode of speech, this is seen as self-raising, whereas if people within the Brahmin caste use non-Brahmin speech, it is seen as pejorative.[13]: 138 The pragmatics therefore change depending on which caste uses certain words; hence, this speech system is determined by socioeconomic class and social context. Norwegian does not have a spoken standard and is heavily dependent on dialect variants. The following example shows the difference between the national written standard and a spoken variant, where the phonology and pronunciation differ. These are not sociolectic differences per se. As Agha states, "Some lexical contrasts are due to the phonological difference (e.g., R makes more consonantal and vocalic distinctions than B), while others are due to the morphological difference (e.g., difference in plural suffixes and certain verb inflections) between two varieties."[13]: 140 The chart below gives an example of diglossia in Arabic-speaking nations and where it is used. Diglossia is defined by Mesthrie as "[a] situation where two varieties of a language exist side by side".[15] Classical Arabic is known as al-fuṣḥā (الفصحى), while the colloquial dialect depends on the country. For example, šāmi (شامي) is spoken in Lebanon and parts of Syria. In many situations, there is a major lexical difference between words in classical and colloquial speech, as well as pronunciation differences, such as differences in short vowels, when the words are the same. Although a specific example of diglossia was not given, its social context is almost as important, if not more so.
For example, Halliday states that "in areas with Diglossia, the link between language and success is apparent as the higher, classical register is learned through formal education".[10]: 175 Below is an example of African American Vernacular English showing the addition of the verbal -s not just on third-person singular present-tense verbs, as in Standard American English, but also on infinitives, first-person present verbs, and third-person past perfect verbs.[11]: 49 Further examples of the phenomenon in AAVE are provided below. Below are examples of the absence of the possessive ending: -s is usually absent in AAVE, but its use follows a rule. As Labov states, AAVE speakers "use -s to indicate possession by a single noun or pronoun, but never between the possessor and the possessed."[11]: 49 "This is hers, This is mines, This is John's, but not in her book, my book, John book."[11]: 49 "Interview with Bryan A., seven years old, a struggling reader in a West Philadelphia elementary school:" Many times, within communities that contain sociolects separating groups linguistically, it is necessary to have a process whereby the independent speech communities can communicate in the same register, even if the change is as simple as a different pronunciation. The act of code-switching therefore becomes essential. Code-switching is defined as "the process whereby bilingual or bidialectal speakers switch back and forth between one language or dialect and another within the same conversation".[16]: 23 At times code-switching can be situational, depending on the situation, or topical, depending on the topic.
Halliday puts this best when he defines the role of discourse, stating that "it is this that determines, or rather correlates with, the role played by the language activity in the situation".[10]: 20 Which register is used thus depends on the situation and lays out the social context of the situation, because if the wrong register is used, the wrong context is placed on the words. Furthermore, referring back to the diglossia of the Arabic-speaking world and the Tamil caste system in India, the words used must be appropriate not only to the social class of the speaker but also to the situation, the topic, and the need for courtesy. A more comprehensive definition states: "Code-switching is not only a definition of the situation but an expression of social hierarchy."[10]: 137
https://en.wikipedia.org/wiki/Sociolect
The Parsley massacre (Spanish: el corte, "the cutting";[5] Creole: kout kouto-a, "the stabbing";[6] French: Massacre du Persil; Spanish: Masacre del Perejil; Haitian Creole: Masak nan Pèsil) was a mass killing of Haitians living in illegal settlements[7] and occupied land in the Dominican Republic's northwestern frontier and in certain parts of the contiguous Cibao region in October 1937. Dominican Army troops from different areas of the country[8] carried out the massacre on the orders of Dominican dictator Rafael Trujillo.[9] As a result of the massacre, virtually the entire Haitian population in the Dominican frontier was either killed or forced to flee across the border.[10] Many died while trying to flee to Haiti across the Dajabón River that divides the two countries on the island;[11] the troops followed them into the river to cut them down, causing the river to run with blood and corpses for several days. The massacre claimed the lives of an estimated 14,000 to 40,000 Haitian men, women, and children,[12] out of 60,517 "foreign" members of the black population in 1935,[13] meaning that one to three fifths of the Haitian population of the country, or more, may have been killed in the massacre. The name of the massacre comes from reports that Dominican troops interrogated thousands of civilians, demanding that each victim say the word "parsley" (perejil) as a shibboleth. According to the stories, if the accused could not pronounce the word to the interrogators' satisfaction, they were deemed to be Haitians and killed. However, most scholars believe this aspect of the massacre to be mythical. Dominican dictator Rafael Trujillo, a strong proponent of anti-Haitianism, made his intentions towards the Haitian community clear in a short speech he gave on 2 October 1937 during a celebration in his honor in the province of Dajabón. For some months, I have traveled and traversed the border in every sense of the word. I have seen, investigated, and inquired about the needs of the population.
To the Dominicans who were complaining of the depredations by Haitians living among them, thefts of cattle, provisions, fruits, etc., and were thus prevented from enjoying in peace the products of their labor, I have responded, 'I will fix this.' And we have already begun to remedy the situation. Three hundred Haitians are now dead in Bánica. This remedy will continue.[14] Trujillo reportedly acted in response to reports of Haitians stealing cattle and crops from Dominican borderland residents. He commanded his army to kill all Haitians living in the Dominican Republic's northwestern frontier and in certain parts of the contiguous Cibao region. Between 2 October and 8 October, hundreds of Dominican troops, who came mostly from other areas of the country, poured into the Cibao[8] and used rifles, machetes, shovels, knives, and bayonets to kill Haitians. Haitian babies were reportedly thrown in the air and caught by soldiers' bayonets, then thrown on their mothers' corpses.[15][16] Dominican troops beheaded thousands of Haitians and took others to the port of Montecristi, where they were thrown into the Atlantic Ocean to drown with their hands and feet bound, some with wounds inflicted by the soldiers in order to attract sharks.[17] Survivors who managed to cross the border and return to Haiti told stories of family members being hacked with machetes and strangled by the soldiers, and of children bashed against rocks and tree trunks.[17] The use of military units from outside the region was not always enough to expedite the soldiers' killing of Haitians. U.S. legation informants reported that many soldiers "confessed that in order to perform such ghastly slaughter they had to get 'blind' drunk."[18] Several months later, a barrage of killings and repatriations of Haitians occurred in the southern frontier.
Lauren Derby claims that a majority of those who died were born in the Dominican Republic and belonged to well-established Haitian communities in the borderlands.[19] Haitian-Dominican relations have long been strained by territorial disputes and competition for the resources of Hispaniola. Between 1910 and 1930, there was extensive migration of Haitians to the neighboring countries of the Dominican Republic and Cuba in search of work. The exact number of Haitian migrants to the Dominican Republic is not readily available, but it exceeds the estimated 200,000 who emigrated to Cuba. Several authors consider the Haiti-Dominican Republic migration corridor far more important than the Haiti-Cuba migration due to geographic proximity. On the other hand, the large influx of Haitians into the Dominican Republic further divided the complicated relationship between the two states.[20][page needed] The Dominican Republic, formerly the Spanish colony of Santo Domingo, is the eastern portion of the island of Hispaniola; it occupies five-eighths of the land and has a population of ten million inhabitants.[21] In contrast, Haiti, the former French colony of Saint-Domingue, is on the western three-eighths of the island[22][23] and has almost exactly the same population, with an estimated 200 people per square kilometre.[24] Due to the inadequate roadways connecting the borderlands to major cities, "Communication with Dominican markets was so limited that the small commercial surplus of the frontier slowly moved toward Haiti."[25] Furthermore, the Dominican government saw the loose borderlands as a liability in terms of the possible formation of revolutionary groups that could flee across the border with ease while amassing weapons and followers.[26] At first the Haitian president Sténio Vincent prohibited any discussion of the massacre and issued a statement on 15 October: "... it is declared that the good relations between Haiti and the Dominican
Republic have not suffered any damage." Vincent's initial failure to press for justice for the slain workers prompted protests in Port-au-Prince after two years of relative silence. It was known that Vincent had a cooperative relationship with, and financial support from, the Trujillo government. After a failed coup effort in December, the Haitian president was eventually forced to seek an international investigation and mediation. Unwilling to submit to an inquiry, Trujillo instead offered an indemnity to Haiti.[27] In the end, U.S. president Franklin D. Roosevelt and Haitian president Sténio Vincent sought reparations of US$750,000, of which the Dominican government paid $525,000 (US$11,483,159.72 in 2024 dollars), or around $30 per victim. Due to the corruption deeply embedded within the Haitian bureaucracy, however, survivors on average received only 2 cents each.[28] In the agreement signed in Washington, D.C., on 31 January 1938, the Dominican government defended the massacre as a response to illegal immigration by "undesirable" Haitians and recognized "no responsibility whatsoever" for the killings, with Trujillo stating that the agreement established new laws prohibiting migration between Haiti and the Dominican Republic. Trujillo's regime thus used a moment of international inquiry to legitimize his policies.[27] Thereafter, Trujillo began to develop the borderlands to link them more closely with the main cities and urban areas of the Dominican Republic.[29] These areas were modernized with the addition of modern hospitals, schools, political headquarters, military barracks, and housing projects, as well as a highway to connect the borderlands to major cities. Additionally, after 1937, quotas restricted the number of Haitians permitted to enter the Dominican Republic, and a strict and often discriminatory border policy was enacted.
Dominicans continued to deport and kill Haitians in southern frontier regions, as refugees died of exposure, malaria, and influenza.[30] Despite attempts to blame Dominican civilians, U.S. sources confirmed that "bullets from Krag rifles were found in Haitian bodies, and only Dominican soldiers had access to this type of rifle."[31] The Haitian Massacre, which is still referred to as "el corte" (the cutting) by Dominicans and as "kouto-a" (the knife) by Haitians, was therefore "... a calculated action on the part of Dominican dictator Rafael Trujillo to homogenize the furthest stretches of the country in order to bring the region into the social, political and economic fold,"[11] and to rid his republic of Haitians. Condemnation of the massacres was not limited to international sources, as a number of Trujillo's exiled political opponents also publicly spoke out against the events. In November 1937, four anti-Trujillistas were declared "unworthy Dominicans" and "traitors to the Homeland" for their comments: Rafael Brache, José Manuel Jimenes, Juan Isidro Jimenes Grullón, and Buenaventura Sánchez.[32] The popular name[33] for the massacre came from the shibboleth that Trujillo reportedly had his soldiers apply to determine whether those living on the border were native Afro-Dominicans or immigrant Afro-Haitians. Dominican soldiers would hold up a sprig of parsley and ask what it was. How the person pronounced the Spanish word for parsley (perejil) determined their fate. The Haitian languages, French and Haitian Creole, pronounce the r as a uvular approximant or a voiced velar fricative, respectively, so their speakers can have difficulty pronouncing the alveolar tap or the alveolar trill of Spanish, the language of the Dominican Republic. In addition, Spanish, unlike French or Haitian Creole, pronounces the j as a voiceless velar fricative.
If they could pronounce it the Spanish way, the soldiers considered them Dominican and let them live; if they pronounced it the French or Creole way, they were considered Haitian and murdered. The term parsley massacre was used frequently in the English-speaking media 75 years after the event, but most scholars recognize it as a misconception, as research by Lauren Derby shows that the explanation is based more on myth than on personal accounts.[34] According to some sources, the massacre killed an estimated 20,000 Haitians[35][36] living in the northern frontier—clearly at Trujillo's direct order.[citation needed] However, the exact number of victims is impossible to calculate for several reasons. The Dominican Army carried out most of the killings in isolated areas, often leaving no witnesses or few survivors. Furthermore, many bodies were either disposed of in the sea, where they were consumed by sharks, or buried in mass graves, where acidic soil degraded them, leaving nothing for forensic investigators to exhume.[37] Haitian President Élie Lescot put the death toll at 12,168; Haitian historian Jean Price-Mars cited 12,136 deaths and 2,419 injuries. The Dominican Republic's interim Foreign Minister put the number of dead at 17,000. Dominican historian Bernardo Vega estimated as many as 35,000.[2]
https://en.wikipedia.org/wiki/Parsley_massacre
A tongue twister is a phrase that is designed to be difficult to articulate properly and can be used as a type of spoken (or sung) word game. Additionally, tongue twisters can be used as exercises to improve pronunciation and fluency. Some tongue twisters produce results that are humorous (or humorously vulgar) when they are mispronounced, while others simply rely on the confusion and mistakes of the speaker for their amusement value. Some tongue twisters rely on rapid alternation between similar but distinct phonemes (e.g., s [s] and sh [ʃ]), combining two different alternation patterns,[1] familiar constructs in loanwords, or other features[which?] of a spoken language in order to be difficult to articulate.[1] For example, the following sentence was said to be "the most difficult of common English-language tongue twisters" by William Poundstone:[2]

The seething sea ceaseth and thus the seething sea sufficeth us.

These deliberately difficult expressions were popular in the 19th century. The popular "she sells seashells" tongue twister was originally published in 1850 as a diction exercise. The term "tongue twister" was first applied to this kind of expression in 1895. "She sells seashells" was turned into a popular song in 1908, with words by British songwriter Terry Sullivan and music by Harry Gifford. According to folklore, it was said to be inspired by the life and work of Mary Anning, an early fossil collector.[3] However, there is no evidence that Anning inspired either the tongue twister or the song.[4]

She sells sea-shells by the sea-shore.
The shells she sells are sea-shells, I'm sure.
For if she sells sea-shells by the sea-shore
Then I'm sure she sells sea-shore shells.

Another well-known tongue twister is "Peter Piper":

Peter Piper picked a peck of pickled peppers
A peck of pickled peppers Peter Piper picked
If Peter Piper picked a peck of pickled peppers
Where's the peck of pickled peppers Peter Piper picked

Many tongue twisters use a combination of alliteration and rhyme.
They have two or more sequences of sounds that require repositioning the tongue between syllables, and then the same sounds are repeated in a different sequence.[citation needed] An example of this is the song "Betty Botter", first published in 1899:[5]

Betty Botter bought a bit of butter.
"But," she said, "this butter's bitter!
If I put it in my batter, it will make my batter bitter!"
So she bought a bit of butter better than her bitter butter,
And she put it in her batter, and her batter was not bitter.
So 'twas better Betty Botter bought a bit of better butter.

There are twisters that make use of compound words and their stems, for example:

How much wood would a woodchuck chuck if a woodchuck could chuck wood?
A woodchuck would chuck all the wood he could chuck if a woodchuck would chuck wood.

The following twister was entered in a contest in Games Magazine in the November/December 1979 issue and was announced as the winner in the March/April 1980 issue:[6][7]

Shep Schwab shopped at Scott's Schnapps shop;
One shot of Scott's Schnapps stopped Schwab's watch.

Some tongue twisters take the form of words or short phrases which become tongue twisters when repeated rapidly (the game is often expressed in the form "Say this phrase three (or five, or ten, etc.) times as fast as you can!").[citation needed] Examples include: Some tongue twisters are used for speech practice and vocal warmup:[8]

The lips, the teeth, the tip of the tongue,
the tip of the tongue, the teeth, the lips.

Tongue twisters are used to train pronunciation skills in non-native speakers:[9]

The sheep on the ship slipped on the sheet of sleet.

Other types of tongue twisters derive their humor from producing vulgar results only when performed incorrectly:

Old Mother Hunt had a rough cut punt,
Not a punt cut rough,
But a rough cut punt.

One smart feller, he felt smart,
Two smart fellers, they both felt smart,
Three smart fellers, they all felt smart.
Some twisters are amusing because they sound incorrect even when pronounced correctly:

Are you copperbottoming those pans, my man?
No, I'm aluminiuming 'em, Ma'am.

In 2013, MIT researchers claimed that this is the trickiest twister to date:[10][11]

Pad kid poured curd pulled cold

Based on the MIT confusion matrix of 1,620 single-phoneme errors, the phoneme with the greatest margin of speech error is l [l] mistaken for r [r]. Other phonemes with a high level of speech error include s [s] mistaken for sh [ʃ], f [f] for p [p], r [r] for l [l], and w [w] for r [r], among others.[12] These sounds are most likely to transform into a similar sound when placed in the near vicinity of each other. Most of these mix-ups can be attributed to the two phonemes having similar areas of articulation in the mouth.[13] Pronunciation difficulty is also theorized to have an effect on tongue twisters.[12] For example, t [t] is thought to be easier to pronounce than ch [tʃ]. As a result, speakers may naturally transform ch [tʃ] into t [t] when trying to pronounce certain tongue twisters. Fortis and lenis are the classifications of strong and weak consonants. Some characteristics of strong consonants include:[12] It is common for more difficult sounds to be replaced with strong consonants in tongue twisters.[12] This partially determines which sounds are most likely to transform into other sounds through linguistic confusion. Tongue twisters exist in many languages, such as Spanish trabalenguas (lit. 'tongue jammer') and German Zungenbrecher (lit. 'tongue breaker'). The complexity of tongue twisters varies from language to language. For example, in Luganda, vowels differ by length, so tongue twisters exploit vowel length: "Akawala akaawa Kaawa kaawa akaawa ka wa?".
Translation: "The girl who gave Kaawa bitter coffee, where is she from?"[14] Shibboleths, that is, phrases in a language that are difficult for someone who is not a native speaker of that language to say, might be regarded as a type of tongue twister.[citation needed] An example is the Georgian baq'aq'i ts'q'alshi q'iq'inebs ("a frog croaks in the water"), in which q' is a uvular ejective. Another example, the Czech and Slovak strč prst skrz krk ("stick a finger through the throat"), is difficult for a non-native speaker due to the absence of vowels, although syllabic r is a common sound in Czech, Slovak, and some other Slavic languages. The sign language equivalent of a tongue twister is called a finger-fumbler.[15][16] According to Susan Fischer, the phrase Good blood, bad blood is a tongue twister in English as well as a finger-fumbler in ASL.[17] The one-syllable article is a form of Mandarin Chinese tongue twister written in Classical Chinese. Because Mandarin Chinese has only four tonal ranges (compared to nine in Cantonese, for example), these works sound like a single syllable repeated in different tonal ranges when spoken in Mandarin,[18] but are far more comprehensible when spoken in another dialect.
https://en.wikipedia.org/wiki/Tongue-twister
U and non-U English usage, where "U" stands for upper class and "non-U" represents the aspiring middle and lower classes, was part of the terminology of popular discourse of social dialects (sociolects) in Britain in the 1950s.[1] The different vocabularies can often appear quite counter-intuitive: the middle classes prefer "fancy" or fashionable words, even neologisms and often euphemisms, in attempts to make themselves sound more refined ("posher than posh"), while the upper classes in many cases stick to the same plain and traditional words that the working classes also use, since, confident in the security of their social position, they have no need to seek to display refinement.[2] The discussion was set in motion in 1954 by the British linguist Alan S. C. Ross, professor of linguistics at the University of Birmingham. He coined the terms "U" and "non-U" in an article on the differences social class makes in English language usage, published in a Finnish professional linguistics journal.[2] Though his article included differences in pronunciation and writing styles, it was his remark about differences of vocabulary that received the most attention. The upper-class English author Nancy Mitford was alerted and immediately took up the usage in an essay, "The English Aristocracy", which Stephen Spender published in his magazine Encounter in 1954. Mitford provided a glossary of terms used by the upper classes (some appear in the table), unleashing an anxious national debate about English class-consciousness and snobbery, which involved a good deal of soul-searching that itself provided fuel for the fires. The essay was reprinted, with contributions by Evelyn Waugh, John Betjeman, and others, as well as a "condensed and simplified version"[3] of Ross's original article, as Noblesse Oblige: an Enquiry into the Identifiable Characteristics of the English Aristocracy[4] in 1956. Betjeman's poem "How to Get On in Society" concluded the collection.
The issue of U and non-U could have been taken lightheartedly, but at the time many took it very seriously. This was a reflection of the anxieties of the middle class in Britain of the 1950s, recently emerged from post-war austerities. In particular, the media used it as a launch pad for many stories, making much more out of it than was first intended. In the meantime, the idea that one might "improve oneself" by adopting the culture and manner of one's "betters", instinctively assented to before World War II, was now greeted with resentment.[5] Some of the terms and the ideas behind them were largely obsolete by the late 20th century, when, in the United Kingdom, reverse snobbery led younger members of the British upper and middle classes to adopt elements of working-class speech, such as Estuary English or Mockney. However, many, if not most, of the differences remain very much current and can therefore continue to be used as class indicators.[6] A study in 1940 on the speaking differences between the American upper and middle classes revealed a strong similarity with the results of Ross's research. For instance, the American upper class said 'curtains', whilst the middle class used 'drapes'. Notably, the well-heeled would use 'toilet' whereas the less well-heeled would say 'lavatory', an inversion of the British usage.[7]
https://en.wikipedia.org/wiki/U_and_non-U_English
In systems engineering, the system usability scale (SUS) is a simple, ten-item attitude Likert scale giving a global view of subjective assessments of usability. It was developed by John Brooke[1] at Digital Equipment Corporation in the UK in 1986 as a tool to be used in usability engineering of electronic office systems. The usability of a system, as defined by the ISO standard ISO 9241 Part 11, can be measured only by taking into account the context of use of the system—i.e., who is using the system, what they are using it for, and the environment in which they are using it. Furthermore, measurements of usability have several different aspects: Measures of effectiveness and efficiency are also context-specific. Effectiveness in using a system for controlling a continuous industrial process would generally be measured in very different terms to, say, effectiveness in using a text editor. Thus, it can be difficult, if not impossible, to answer the question "is system A more usable than system B", because the measures of effectiveness and efficiency may be very different. However, it can be argued that given a sufficiently high-level definition of subjective assessments of usability, comparisons can be made between systems. Computing the final SUS score requires first converting the raw scores by subtracting 1 from each, then applying the following equation:[2]

SUS = 2.5 × (20 + (SUS01 + SUS03 + SUS05 + SUS07 + SUS09) − (SUS02 + SUS04 + SUS06 + SUS08 + SUS10))

SUS has generally been seen as providing this type of high-level subjective view of usability and is thus often used in carrying out comparisons of usability between systems. Because it yields a single score on a scale of 0–100, it can be used to compare even systems that are outwardly dissimilar.
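The scoring rule can be sketched in a few lines of Python. This is a minimal illustration of the formula above, not an official implementation; the function name and the list-of-ten input format are assumptions for the example.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten raw Likert
    responses (each 1-5), ordered SUS01..SUS10.

    Each raw score is first shifted down by 1; odd-numbered
    (positively worded) items are summed and even-numbered
    (negatively worded) items subtracted, then scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    shifted = [r - 1 for r in responses]
    odd = sum(shifted[0::2])   # SUS01, SUS03, SUS05, SUS07, SUS09
    even = sum(shifted[1::2])  # SUS02, SUS04, SUS06, SUS08, SUS10
    return 2.5 * (20 + odd - even)
```

For example, a respondent who answered 5 on every odd item and 1 on every even item would score 100, while all-3 answers yield the midpoint score of 50.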
This one-dimensional aspect of the SUS is both a benefit and a drawback, because the questionnaire is necessarily quite general. Lewis and Sauro[3] have suggested a two-factor orthogonal structure, which practitioners may use to score the SUS on independent Usability and Learnability dimensions. Borsci, Federici and Lauriola,[4] in an independent analysis, confirmed the two-factor structure of SUS, also showing that those factors (Usability and Learnability) are correlated. The SUS has been widely used in the evaluation of a range of systems. Bangor, Kortum and Miller[5] have used the scale extensively over a ten-year period and have produced normative data that allow SUS ratings to be positioned relative to other systems. They propose an extension to SUS to provide an adjective rating that correlates with a given score. Based on a review of hundreds of usability studies, Sauro and Lewis[6] proposed a curved grading scale for mean SUS scores.
https://en.wikipedia.org/wiki/System_usability_scale
Usability can be described as the capacity of a system to provide a condition for its users to perform tasks safely, effectively, and efficiently while enjoying the experience.[1] In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use.[2] The object of use can be a software application, website, book, tool, machine, process, vehicle, or anything a human interacts with. A usability study may be conducted as a primary job function by a usability analyst or as a secondary job function by designers, technical writers, marketing personnel, and others. It is widely used in consumer electronics, communication, and knowledge transfer objects (such as a cookbook, a document or online help) and mechanical objects such as a door handle or a hammer. Usability includes methods of measuring usability, such as needs analysis,[3] and the study of the principles behind an object's perceived efficiency or elegance. In human-computer interaction and computer science, usability studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed. Usability considers user satisfaction and utility as quality components, and aims to improve user experience through iterative design.[4] As one critic put it:

The term "user friendly" must surely rate as the inanity of the decade. When was the last time you thought of a tool as "friendly"? "Usable" and "useful" are the appropriate operative terms.

The primary notion of usability is that an object is designed with a generalized user's psychology and physiology in mind. Complex computer systems find their way into everyday life, and at the same time the market is saturated with competing brands.
This has made usability more popular and widely recognized in recent years, as companies see the benefits of researching and developing their products with user-oriented methods instead of technology-oriented methods. By understanding and researching the interaction between product and user, the usability expert can also provide insight that is unattainable by traditional company-oriented market research. For example, after observing and interviewing users, the usability expert may identify needed functionality or design flaws that were not anticipated. A method called contextual inquiry does this in the naturally occurring context of the user's own environment. In the user-centered design paradigm, the product is designed with its intended users in mind at all times. In the user-driven or participatory design paradigm, some of the users become actual or de facto members of the design team.[6] The term user friendly is often used as a synonym for usable, though it may also refer to accessibility. Usability describes the quality of user experience across websites, software, products, and environments. There is no consensus about the relation of the terms ergonomics (or human factors) and usability. Some think of usability as the software specialization of the larger topic of ergonomics. Others view these topics as tangential, with ergonomics focusing on physiological matters (e.g., turning a door handle) and usability focusing on psychological matters (e.g., recognizing that a door can be opened by turning its handle). Usability is also important in website development (web usability). According to Jakob Nielsen, "Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People don't want to wait. And they don't want to learn how to use a home page. There's no such thing as a training class or a manual for a Web site.
People have to be able to grasp the functioning of the site immediately after scanning the home page—for a few seconds at most."[7] Otherwise, most casual users simply leave the site and browse or shop elsewhere. Usability can also include the concept of prototypicality: how much a particular thing conforms to the expected shared norm. In website design, for instance, users prefer sites that conform to recognised design norms.[8] ISO defines usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." The word "usability" also refers to methods for improving ease-of-use during the design process. Usability consultant Jakob Nielsen and computer science professor Ben Shneiderman have written (separately) about a framework of system acceptability, in which usability is a part of "usefulness".[9] Usability is often associated with the functionalities of the product (cf. the ISO definition), in addition to being solely a characteristic of the user interface (cf. the framework of system acceptability, which separates usefulness into usability and utility). For example, in the context of mainstream consumer products, an automobile lacking a reverse gear could be considered unusable according to the former view, and lacking in utility according to the latter view. When evaluating user interfaces for usability, the definition can be as simple as "the perception of a target user of the effectiveness (fit for purpose) and efficiency (work or time required to use) of the interface"[citation needed]. Each component may be measured subjectively against criteria, e.g., principles of user interface design, to provide a metric, often expressed as a percentage. It is important to distinguish between usability testing and usability engineering. Usability testing is the measurement of ease of use of a product or piece of software.
In contrast, usability engineering (UE) is the research and design process that ensures a product with good usability. Usability is a non-functional requirement. As with other non-functional requirements, usability cannot be directly measured but must be quantified by means of indirect measures or attributes such as, for example, the number of reported problems with ease-of-use of a system. The term intuitive is often listed as a desirable trait in usable interfaces, sometimes used as a synonym for learnable. In the past, Jef Raskin discouraged using this term in user interface design, claiming that easy-to-use interfaces are often easy because of the user's exposure to previous similar systems, and thus that the term "familiar" should be preferred.[10] As an example: two vertical lines "||" on media player buttons do not intuitively mean "pause"—they do so by convention. This association between intuitive use and familiarity has since been empirically demonstrated in multiple studies by a range of researchers across the world, and intuitive interaction is accepted in the research community as being use of an interface based on past experience with similar interfaces or something else, often not fully conscious,[11] and sometimes involving a feeling of "magic",[12] since the source of the knowledge itself may not be consciously available to the user. Researchers have also investigated intuitive interaction for older people,[13] people living with dementia,[14] and children.[15] Some have argued that aiming for "intuitive" interfaces (based on reusing existing skills with interaction systems) could lead designers to discard a better design solution only because it would require a novel approach, and to stick with boring designs.
However, applying familiar features in a new interface has been shown not to result in boring design if designers use creative approaches rather than simple copying.[16] The throwaway remark that "the only intuitive interface is the nipple; everything else is learned"[17] is still occasionally mentioned, although any breastfeeding mother or lactation consultant can attest that it is inaccurate: the nipple does in fact require learning on both sides. In 1992, Bruce Tognazzini even denied the existence of "intuitive" interfaces, since such interfaces must be able to intuit, i.e., "perceive the patterns of the user's behavior and draw inferences."[18] Instead, he advocated the term "intuitable", i.e., "that users could intuit the workings of an application by seeing it and using it". However, the term intuitive interaction has become well accepted in the research community over the past 20 or so years and, although not perfect, it should probably be accepted and used. ISO/TR 16982:2002 ("Ergonomics of human-system interaction—Usability methods supporting human-centered design") is an International Organization for Standardization (ISO) standard that provides information on human-centered usability methods that can be used for design and evaluation. It details the advantages, disadvantages, and other factors relevant to using each usability method. It explains the implications of the stage of the life cycle and the individual project characteristics for the selection of usability methods, and provides examples of usability methods in context. The main users of ISO/TR 16982:2002 are project managers. It therefore addresses technical human factors and ergonomics issues only to the extent necessary to allow managers to understand their relevance and importance in the design process as a whole. The guidance in ISO/TR 16982:2002 can be tailored for specific design situations by using the lists of issues characterizing the context of use of the product to be delivered.
Selection of appropriate usability methods should also take account of the relevant life-cycle process. ISO/TR 16982:2002 is restricted to methods that are widely used by usability specialists and project managers. It does not specify the details of how to implement or carry out the usability methods described. ISO 9241 is a multi-part standard that covers a number of aspects of people working with computers. Although originally titled Ergonomic requirements for office work with visual display terminals (VDTs), it has been retitled to the more generic Ergonomics of Human System Interaction.[19] As part of this change, ISO is renumbering some parts of the standard so that it can cover more topics, e.g., tactile and haptic interaction. The first part to be renumbered was part 10 in 2006, now part 110.[20] IEC 62366-1:2015 + COR1:2016 and IEC/TR 62366-2 provide guidance on usability engineering specific to a medical device. Any system or device designed for use by people should be easy to use, easy to learn, easy to remember (the instructions), and helpful to users. John Gould and Clayton Lewis recommend that designers striving for usability follow three design principles: early focus on users and tasks, empirical measurement, and iterative design.[21] The design team should be user-driven and in direct contact with potential users. Several evaluation methods, including personas, cognitive modeling, inspection, inquiry, prototyping, and testing methods, may contribute to understanding potential users and their perceptions of how well the product or process works. Usability considerations, such as who the users are and their experience with similar systems, must be examined. As part of understanding users, this knowledge must "...be played against the tasks that the users will be expected to perform."[21] This includes the analysis of what tasks the users will perform, which are most important, and what decisions the users will make while using the system.
Designers must understand how the cognitive and emotional characteristics of users will relate to a proposed system. One way to stress the importance of these issues in designers' minds is to use personas, which are made-up representative users (personas are discussed further below). Another more expensive but more insightful method is to have a panel of potential users work closely with the design team from the early stages.[22] Test the system early on, and test the system on real users using behavioral measurements. This includes testing the system for both learnability and usability (see Evaluation Methods below). It is important in this stage to use quantitative usability specifications, such as the time and errors to complete tasks and the number of users to test, and to examine the performance and attitudes of the users testing the system.[22] Finally, "reviewing or demonstrating" a system before the user tests it can result in misleading results. The emphasis of empirical measurement is on measurement, both informal and formal, which can be carried out through a variety of evaluation methods.[21] Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Based on the results of testing the most recent iteration of a design, changes and refinements are made. This process is intended to ultimately improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research for informing and evolving a project, as successive versions, or iterations, of a design are implemented. The key requirements for iterative design are: identification of required changes, an ability to make changes, and a willingness to make changes. When a problem is encountered, there is no set method to determine the correct solution.
Rather, there are empirical methods that can be used during system development or after the system is delivered, though the latter is usually a more inopportune time. Ultimately, iterative design works towards meeting goals such as making the system user friendly, easy to use, easy to operate, simple, etc.[22] There are a variety of usability evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications. When choosing a method, consider cost, time constraints, and appropriateness. For a brief overview of methods, see Comparison of usability evaluation methods or continue reading below. Usability methods can be further classified into the subcategories below. Cognitive modeling involves creating a computational model to estimate how long it takes people to perform a given task. Models are based on psychological principles and experimental studies to determine times for cognitive processing and motor movements. Cognitive models can be used to improve user interfaces or to predict problem errors and pitfalls during the design process. With parallel design, several people create an initial design from the same set of requirements. Each person works independently and, when finished, shares their concepts with the group. The design team considers each solution, and each designer uses the best ideas to further improve their own solution. This process helps generate many different, diverse ideas and ensures that the best ideas from each design are integrated into the final concept. This process can be repeated several times until the team is satisfied with the final concept. GOMS stands for goals, operators, methods, and selection rules. It is a family of techniques that analyze the user complexity of interactive systems. Goals are what the user must accomplish.
An operator is an action performed in pursuit of a goal. A method is a sequence of operators that accomplishes a goal. Selection rules specify which method satisfies a given goal, based on context. Sometimes it is useful to break a task down and analyze each individual aspect separately. This helps the tester locate specific areas for improvement. To do this, it is necessary to understand how the human brain processes information, as described by the model human processor. Many studies have been done to estimate the cycle times, decay times, and capacities of each of these processors. Variables that affect these include subject age, aptitudes, ability, and the surrounding environment; for a younger adult, reasonable estimates exist for each parameter. Long-term memory is believed to have an infinite capacity and decay time.[23] Keystroke-level modeling is essentially a less comprehensive version of GOMS that makes simplifying assumptions in order to reduce calculation time and complexity. These usability evaluation methods involve the observation of users by an experimenter, or the testing and evaluation of a program by an expert reviewer. They provide more quantitative data, as tasks can be timed and recorded. Card sorting is a way to involve users in grouping information for a website's usability review. Participants in a card sorting session are asked to organize the content from a Web site in a way that makes sense to them. Participants review items from a Web site and then group these items into categories. Card sorting helps to learn how users think about the content and how they would organize the information on the Web site. Card sorting helps to build the structure for a Web site, decide what to put on the home page, and label the home page categories. It also helps to ensure that information is organized on the site in a way that is logical to users. Tree testing is a way to evaluate the effectiveness of a website's top-down organization.
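Returning to keystroke-level modeling: a KLM prediction is just a sum of per-operator times. The sketch below uses representative operator durations; published estimates vary by study and device, so the values here are illustrative assumptions:

```python
# Representative keystroke-level model (KLM) operator times in seconds.
# These are illustrative figures; published estimates vary by study.
KLM_TIMES = {
    "K": 0.20,  # keystroke, average skilled typist
    "P": 1.10,  # point at a target with a mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for a step
}

def klm_estimate(operators):
    """Predict task time as the sum of the listed operator times."""
    return sum(KLM_TIMES[op] for op in operators)

# Example: think, point at a field, click, think, then type "save".
task = ["M", "P", "B", "M", "K", "K", "K", "K"]
print(round(klm_estimate(task), 2))  # -> 4.7
```

Comparing such estimates for two candidate interaction sequences gives a quick, no-user preview of which design is likely to be faster for a skilled user.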
Participants are given "find it" tasks, then asked to drill down through successive text lists of topics and subtopics to find a suitable answer. Tree testing evaluates the findability and labeling of topics in a site, separate from its navigation controls or visual design. Ethnographic analysis is derived from anthropology. Field observations are taken at a site of a possible user, tracking the artifacts of work such as Post-It notes, items on the desktop, shortcuts, and items in trash bins. These observations also capture the sequence of work and interruptions that determine the user's typical day. Heuristic evaluation is a usability engineering method for finding and assessing usability problems in a user interface design as part of an iterative design process. It involves having a small set of evaluators examine the interface against recognized usability principles (the "heuristics"). It is the most popular of the usability inspection methods, as it is quick, cheap, and easy. Heuristic evaluation was developed to aid in computer user-interface design. It relies on expert reviewers to discover usability problems and then categorize and rate them by a set of principles (heuristics). It is widely used because of its speed and cost-effectiveness. Jakob Nielsen's list of ten heuristics is the most commonly used in industry. These are ten general principles for user interface design. They are called "heuristics" because they are more in the nature of rules of thumb than specific usability guidelines. Thus, by determining which guidelines are violated, the usability of a device can be assessed. Usability inspection is a review of a system based on a set of guidelines. The review is conducted by a group of experts who are deeply familiar with the concepts of usability in design. The experts focus on a list of areas in design that have been shown to be troublesome for users.
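For the heuristic evaluation described above, the evaluators' individual ratings are typically aggregated to prioritize fixes. A minimal sketch, assuming a hypothetical 0–4 severity scale and made-up findings:

```python
from statistics import mean

# Hypothetical severity ratings (0 = not a problem ... 4 = catastrophe)
# from three independent evaluators for each problem found.
ratings = {
    "No feedback after saving": [3, 4, 3],
    "Jargon in error messages": [2, 2, 1],
    "Undo is hard to discover": [4, 3, 4],
}

# Averaging across evaluators smooths individual judgment; problems are
# then addressed in descending order of mean severity.
for problem, scores in sorted(ratings.items(), key=lambda kv: -mean(kv[1])):
    print(f"{mean(scores):.2f}  {problem}")
```

The ranked list becomes the input to the next design iteration, with the highest-severity problems fixed first.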
Pluralistic inspections are meetings where users, developers, and human factors people meet together to discuss and evaluate a task scenario step by step. The more people who inspect the scenario for problems, the higher the probability of finding problems. In addition, the more interaction in the team, the faster the usability issues are resolved. In consistency inspection, expert designers review products or projects to ensure consistency across multiple products, checking whether a product does things in the same way as their own designs. Activity analysis is a usability method used in the preliminary stages of development to get a sense of the situation. It involves an investigator observing users as they work in the field. Also referred to as user observation, it is useful for specifying user requirements and studying currently used tasks and subtasks. The data collected are qualitative and useful for defining the problem. It should be used when you wish to frame what is needed, or "What do we want to know?" The following usability evaluation methods involve collecting qualitative data from users. Although the data collected are subjective, they provide valuable information on what the user wants. Task analysis means learning about users' goals and users' ways of working. Task analysis can also mean figuring out what more specific tasks users must do to meet those goals and what steps they must take to accomplish those tasks. Along with user and task analysis, a third analysis is often used: understanding users' environments (physical, social, cultural, and technological). A focus group is a focused discussion where a moderator leads a group of participants through a set of questions on a particular topic. Although typically used as a marketing tool, focus groups are sometimes used to evaluate usability. Used in the product definition stage, a group of 6 to 10 users is gathered to discuss what they desire in a product.
An experienced focus group facilitator is hired to guide the discussion to areas of interest for the developers. Focus groups are typically videotaped to help get verbatim quotes, and clips are often used to summarize opinions. The data gathered are not usually quantitative, but can help give an idea of a target group's opinion. Surveys have the advantages of being inexpensive, requiring no testing equipment, and reflecting the users' opinions. When written carefully and given to actual users who have experience with the product and knowledge of design, surveys provide useful feedback on the strong and weak areas of the usability of a design. Surveys are a very common method and often do not appear to be surveys at all, but simply warranty cards. It is often very difficult for designers to conduct usability tests with the exact system being designed. Cost, size, and design constraints usually lead the designer to create a prototype of the system. Instead of creating the complete final system, the designer may test different sections of the system, thus making several small models of each component of the system. Prototyping is both an attitude and an output: it is a process for generating and reflecting on tangible ideas by allowing failure to occur early.[25] Prototyping helps people to see what could be, communicating a shared vision and giving shape to the future. The types of usability prototypes may vary from paper models, index cards, and hand-drawn models to storyboards.[26] Prototypes can be modified quickly, are often faster and easier to create with less time invested by designers, and are more amenable to design changes; however, they are sometimes not an adequate representation of the whole system, are often not durable, and testing results may not parallel those of the actual system. The tool kit approach provides a wide library of methods built on a traditional programming language and is developed primarily for computer programmers.
The code created for testing in the tool kit approach can be used in the final product. However, to get the highest benefit from the tool, the user must be an expert programmer.[27] The two elements of the parts kit approach are a parts library and a method for identifying the connections between the parts. This approach can be used by almost anyone, and it is a great asset for designers with repetitive tasks.[27] A third approach is a combination of the tool kit approach and the parts kit approach: both the dialogue designers and the programmers are able to interact with this prototyping tool.[27] Rapid prototyping is a method used in the early stages of development to validate and refine the usability of a system. It can be used to quickly and cheaply evaluate user-interface designs without the need for an expensive working model. This can help remove hesitation to change the design, since it is implemented before any real programming begins. One such method of rapid prototyping is paper prototyping. These usability evaluation methods involve testing of subjects for the most quantitative data. Usually recorded on video, they provide task completion time and allow for observation of attitude. Regardless of how carefully a system is designed, all theories must be tested using usability tests. Usability tests involve typical users using the system (or product) in a realistic environment (see simulation). Observation of the user's behavior, emotions, and difficulties while performing different tasks often identifies areas of improvement for the system. While conducting usability tests, designers must decide which usability metrics they are going to measure. These metrics are often variable, and change in conjunction with the scope and goals of the project. The number of subjects being tested can also affect usability metrics, as it is often easier to focus on specific demographics.
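One way such metrics might be tallied from individual test sessions is sketched below; the record format and field names are hypothetical:

```python
from statistics import mean

def summarize_usability(sessions):
    """Summarize common usability metrics from per-participant records.

    Each session is a dict with 'completed' (bool), 'seconds' (float),
    and 'errors' (int) -- a made-up record format for illustration.
    """
    completed = [s for s in sessions if s["completed"]]
    return {
        "completion_rate": len(completed) / len(sessions),
        # Time on task is conventionally reported over successful attempts.
        "mean_time_successful": mean(s["seconds"] for s in completed) if completed else None,
        "errors_per_participant": mean(s["errors"] for s in sessions),
    }

observed = [
    {"completed": True,  "seconds": 42.0, "errors": 1},
    {"completed": True,  "seconds": 58.0, "errors": 0},
    {"completed": False, "seconds": 90.0, "errors": 4},
]
print(summarize_usability(observed))
```

Which of these numbers matter, and what thresholds count as acceptable, depends on the scope and goals of the particular project.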
Qualitative design phases, such as general usability (can the task be accomplished?) and user satisfaction, are also typically assessed with smaller groups of subjects.[28] Using inexpensive prototypes on small user groups provides more detailed information, because of the more interactive atmosphere and the designer's ability to focus more on the individual user. As designs become more complex, the testing must become more formalized. Testing equipment becomes more sophisticated and testing metrics become more quantitative. With a more refined prototype, designers often test effectiveness, efficiency, and subjective satisfaction by asking the user to complete various tasks. These categories are measured by the percentage of users who complete the task, how long it takes to complete the tasks, ratios of success to failure on the task, time spent on errors, the number of errors, satisfaction rating scales, the number of times the user seems frustrated, etc.[29] Additional observations of the users give designers insight into navigation difficulties, controls, conceptual models, etc. The ultimate goal of analyzing these metrics is to find or create a prototype design that users like and can use to successfully perform given tasks.[26] After conducting usability tests, it is important for a designer to record what was observed, in addition to why such behavior occurred, and to modify the model according to the results. It is often quite difficult to distinguish the source of the design errors from what the user did wrong. However, effective usability tests will not generate a solution to the problems, but rather provide modified design guidelines for continued testing. Remote usability testing (also known as unmoderated or asynchronous usability testing) involves the use of a specially modified online survey, allowing the quantification of user testing studies by providing the ability to generate large sample sizes, or a deep qualitative analysis, without the need for dedicated facilities.
Additionally, this style of user testing provides an opportunity to segment feedback by demographic, attitudinal, and behavioral type. The tests are carried out in the user's own environment (rather than in labs), helping to further simulate real-life scenario testing. This approach also provides a vehicle to easily solicit feedback from users in remote areas. There are two types, quantitative and qualitative. Quantitative studies use large sample sizes and task-based surveys. These types of studies are useful for validating suspected usability issues. Qualitative studies are best used as exploratory research, with small sample sizes but frequent, even daily, iterations. Qualitative testing usually allows for observing respondents' screens and verbal think-aloud commentary (screen recording video, SRV) and, for a richer level of insight, may also include the webcam view of the respondent (video-in-video, ViV, sometimes referred to as picture-in-picture, PiP). The growth in mobile devices and associated platforms and services (e.g., mobile gaming experienced 20× growth in 2010–2012) has generated a need for unmoderated remote usability testing on mobile devices, both for websites and especially for app interactions. One methodology consists of shipping cameras and special camera-holding fixtures to dedicated testers and having them record the screens of the mobile smartphone or tablet device, usually using an HD camera. A drawback of this approach is that the finger movements of the respondent can obscure the view of the screen, in addition to the bias and logistical issues inherent in shipping special hardware to selected respondents. A newer approach uses a wireless projection of the mobile device screen onto the computer desktop screen of the respondent, who can then be recorded through their webcam, producing a combined video-in-video view of the participant and the screen interactions, viewed simultaneously while incorporating the verbal think-aloud commentary of the respondents.
The think aloud protocol is a method of gathering data that is used in both usability and psychology studies. It involves getting users to verbalize their thought processes (i.e., expressing their opinions, thoughts, anticipations, and actions)[30] as they perform a task or set of tasks. As a widespread method of usability testing, think aloud provides researchers with the ability to discover what users really think during task performance and completion.[30] Often an instructor is present to prompt the user into being more vocal as they work. Similar to the subjects-in-tandem method, it is useful in pinpointing problems and is relatively simple to set up. Additionally, it can provide insight into the user's attitude, which cannot usually be discerned from a survey or questionnaire. Rapid iterative testing and evaluation (RITE)[31] is an iterative usability method similar to traditional "discount" usability testing. The tester and team must define a target population for testing, schedule participants to come into the lab, decide how the users' behaviors will be measured, construct a test script, and have participants engage in a verbal protocol (e.g., think aloud). However, it differs from these methods in that it advocates making changes to the user interface as soon as a problem is identified and a solution is clear. Sometimes this can occur after observing as few as one participant. Once the data for a participant have been collected, the usability engineer and team decide whether they will make any changes to the prototype prior to the next participant. The changed interface is then tested with the remaining users. Subjects-in-tandem (also called co-discovery) is the pairing of subjects in a usability test to gather important information on the ease of use of a product. Subjects tend to discuss the tasks they have to accomplish out loud, and through these discussions observers learn where the problem areas of a design are.
To encourage co-operative problem-solving between the two subjects, and the attendant discussions leading to it, the tests can be designed to make the subjects dependent on each other by assigning them complementary areas of responsibility (e.g., for testing of software, one subject may be put in charge of the mouse and the other of the keyboard). Component-based usability testing is an approach which aims to test the usability of elementary units of an interaction system, referred to as interaction components. The approach includes component-specific quantitative measures based on user interaction recorded in log files, and component-based usability questionnaires. Cognitive walkthrough is a method of evaluating the user interaction of a working prototype or final product. It is used to evaluate the system's ease of learning. Cognitive walkthrough is useful for understanding the user's thought processes and decision making when interacting with a system, especially for first-time or infrequent users. Benchmarking creates standardized test materials for a specific type of design. Four key characteristics are considered when establishing a benchmark: time to do the core task, time to fix errors, time to learn applications, and the functionality of the system. Once there is a benchmark, other designs can be compared to it to determine the usability of the system. Many of the common objectives of usability studies, such as trying to understand user behavior or exploring alternative designs, must be put aside. Unlike many other usability methods or types of lab studies, benchmark studies more closely resemble true experimental psychology lab studies, with greater attention to detail on methodology, study protocol, and data analysis.[32] Meta-analysis is a statistical procedure for combining results across studies to integrate the findings. The phrase was coined in 1976 to denote a quantitative literature review.
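The fixed-effect (inverse-variance) pooling at the heart of many meta-analyses can be sketched as follows; the effect sizes and variances below are illustrative, not drawn from real studies:

```python
def fixed_effect_combine(effects, variances):
    """Inverse-variance-weighted (fixed-effect) pooled estimate.

    Each study contributes an effect size and its variance; studies with
    smaller variance (more precision) get proportionally more weight.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Two hypothetical usability studies reporting the same effect measure:
pooled, var = fixed_effect_combine([0.5, 0.3], [0.04, 0.01])
print(round(pooled, 3), round(var, 4))  # -> 0.34 0.008
```

Because the pooled variance shrinks as studies are added, the combined estimate is more precise than any single study, which is what makes the procedure "very powerful" in the sense described below.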
This type of evaluation is very powerful for determining the usability of a device because it combines multiple studies to provide very accurate quantitative support.

Personas are fictitious characters created to represent a site or product's different user types and their associated demographics and technographics. Alan Cooper introduced the concept of using personas as a part of interaction design in 1998 in his book The Inmates Are Running the Asylum,[33] but had used the concept since as early as 1975. Personas are a usability evaluation method that can be used at various design stages. The most typical time to create personas is at the beginning of design, so that designers have a tangible idea of who the users of their product will be. Personas are the archetypes that represent actual groups of users and their needs, which can be a general description of a person, context, or usage scenario. The technique turns marketing data on the target user population into a few concrete conceptions of users, creating empathy among the design team, with the final aim of tailoring the product more closely to how the personas will use it. To gather the marketing data that personas require, several tools can be used, including online surveys, web analytics, customer feedback forms, usability tests, and interviews with customer-service representatives.[34]

The key benefits of usability are:

An increase in usability generally positively affects several facets of a company's output quality. In particular, the benefits fall into several common areas:[35]

Increased usability in the workplace fosters several responses from employees: "Workers who enjoy their work do it better, stay longer in the face of temptation, and contribute ideas and enthusiasm to the evolution of enhanced productivity."[36] To create standards, companies often implement experimental design techniques that create baseline levels.
Areas of concern in an office environment include (though are not necessarily limited to):[37]

By working to improve these factors, corporations can achieve their goals of increased output at lower costs, while potentially creating optimal levels of customer satisfaction. There are numerous reasons why each of these factors correlates with overall improvement. For example, making software user interfaces easier to understand reduces the need for extensive training. An improved interface tends to lower the time needed to perform tasks, and so would both raise the productivity levels of employees and reduce development time (and thus costs). The aforementioned factors are not mutually exclusive; rather, they should be understood to work in conjunction to form the overall workplace environment.

In the 2010s, usability came to be recognized as an important software quality attribute, earning its place among more traditional attributes such as performance, robustness and aesthetic appearance. Various academic programs focus on usability. Several usability consultancy companies have emerged, and traditional consultancy and design firms offer similar services. There is some resistance to integrating usability work into organisations: usability is seen as a vague concept, it is difficult to measure, and other areas are prioritised when IT projects run out of time or money.[38]

Usability practitioners are sometimes trained as industrial engineers, psychologists, kinesiologists, systems design engineers, or with a degree in information architecture, information or library science, or human–computer interaction (HCI). More often, though, they are people trained in specific applied fields who have taken on a usability focus within their organization. Anyone who aims to make tools easier to use and more effective for their desired function within the context of work or everyday living can benefit from studying usability principles and guidelines.
For those seeking to extend their training, the User Experience Professionals' Association offers online resources, reference lists, courses, conferences, and local chapter meetings. The UXPA also sponsors World Usability Day each November.[39] Related professional organizations include the Human Factors and Ergonomics Society (HFES) and the Association for Computing Machinery's special interest groups in Computer Human Interaction (SIGCHI), Design of Communication (SIGDOC) and Computer Graphics and Interactive Techniques (SIGGRAPH). The Society for Technical Communication also has a special interest group on Usability and User Experience (UUX). They publish a quarterly newsletter called Usability Interface.[40]
https://en.wikipedia.org/wiki/Usability
A continued fraction is a mathematical expression that can be written as a fraction whose denominator is a sum containing another simple or continued fraction. Depending on whether this iteration terminates with a simple fraction or not, the continued fraction is finite or infinite.

Different fields of mathematics use different terminology and notation for continued fractions. In number theory the standard unqualified use of the term refers to the special case where all numerators are 1; that case is treated in the article simple continued fraction. The present article treats the case where the numerators and denominators are sequences {a_i}, {b_i} of constants or functions. From the perspective of number theory, these are called generalized continued fractions. From the perspective of complex analysis or numerical analysis, however, they are just standard, and in the present article they will simply be called "continued fractions".

A continued fraction is an expression of the form

x = b_0 + a_1/(b_1 + a_2/(b_2 + a_3/(b_3 + ⋯))),

where the a_n (n > 0) are the partial numerators, the b_n are the partial denominators, and the leading term b_0 is called the integer part of the continued fraction.

The successive convergents of the continued fraction are formed by applying the fundamental recurrence formulas

x_n = A_n/B_n,

where A_n is the numerator and B_n is the denominator, called continuants,[1][2] of the nth convergent. They are given by the three-term recurrence relation[3]

A_n = b_n A_{n−1} + a_n A_{n−2},  B_n = b_n B_{n−1} + a_n B_{n−2}  (n ≥ 1)

with initial values

A_{−1} = 1,  A_0 = b_0,  B_{−1} = 0,  B_0 = 1.

If the sequence of convergents {x_n} approaches a limit, the continued fraction is convergent and has a definite value. If the sequence of convergents never approaches a limit, the continued fraction is divergent. It may diverge by oscillation (for example, the odd and even convergents may approach two different limits), or it may produce an infinite number of zero denominators B_n.

The story of continued fractions begins with the Euclidean algorithm,[4] a procedure for finding the greatest common divisor of two natural numbers m and n.
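The three-term recurrence for the continuants A_n and B_n translates directly into code. A minimal sketch in Python (the function name `convergents` is my own, not from the article), using exact rational arithmetic and the periodic fraction 1 + 1/(2 + 1/(2 + ⋯)) for √2 as the worked example:

```python
from fractions import Fraction

def convergents(b0, partial_num, partial_den):
    """Successive convergents A_n/B_n of b0 + a1/(b1 + a2/(b2 + ...)),
    computed with the three-term recurrence
    A_n = b_n*A_{n-1} + a_n*A_{n-2},  B_n = b_n*B_{n-1} + a_n*B_{n-2}."""
    A_prev, A = 1, b0   # A_{-1} = 1, A_0 = b0
    B_prev, B = 0, 1    # B_{-1} = 0, B_0 = 1
    out = [Fraction(A, B)]
    for a, b in zip(partial_num, partial_den):
        A_prev, A = A, b * A + a * A_prev
        B_prev, B = B, b * B + a * B_prev
        out.append(Fraction(A, B))
    return out

# 1 + 1/(2 + 1/(2 + ...)) converges to sqrt(2); its convergents are
# 1, 3/2, 7/5, 17/12, 41/29, ...
cs = convergents(1, [1] * 4, [2] * 4)
print(cs)
```

Each convergent here is the classical best rational approximation to √2, and the error of 41/29 is already below 10⁻³.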
That algorithm introduced the idea of dividing to extract a new remainder, and then dividing by the new remainder repeatedly. Nearly two thousand years passed before Bombelli (1579) devised a technique for approximating the roots of quadratic equations with continued fractions in the sixteenth century. Now the pace of development quickened. Just 34 years later, in 1613, Pietro Cataldi introduced the first formal notation for the generalized continued fraction.[5] Cataldi represented a continued fraction with dots indicating where the next fraction goes, and each & representing a modern plus sign.

Late in the seventeenth century John Wallis introduced the term "continued fraction" into the mathematical literature.[6] New techniques for mathematical analysis (Newton's and Leibniz's calculus) had recently come onto the scene, and a generation of Wallis's contemporaries put the new phrase to use. In 1748 Euler published a theorem showing that a particular kind of continued fraction is equivalent to a certain very general infinite series.[7] Euler's continued fraction formula is still the basis of many modern proofs of convergence of continued fractions. In 1761, Johann Heinrich Lambert gave the first proof that π is irrational, by using the following continued fraction for tan x:[8]

tan x = x/(1 − x²/(3 − x²/(5 − x²/(7 − ⋯)))).

Continued fractions can also be applied to problems in number theory, and are especially useful in the study of Diophantine equations. In the late eighteenth century Lagrange used continued fractions to construct the general solution of Pell's equation, thus answering a question that had fascinated mathematicians for more than a thousand years.[9] Lagrange's discovery implies that the canonical continued fraction expansion of the square root of every non-square integer is periodic and that, if the period is of length p > 1, it contains a palindromic string of length p − 1.
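Lambert's continued fraction tan x = x/(1 − x²/(3 − x²/(5 − ⋯))) lends itself to a quick numerical check by evaluating a truncation from the innermost denominator outward. A sketch, assuming double-precision floats and an arbitrarily chosen truncation depth:

```python
import math

def tan_cf(x, depth=12):
    """Evaluate Lambert's continued fraction
    tan(x) = x/(1 - x^2/(3 - x^2/(5 - ...))), truncated after
    `depth` partial denominators, from the bottom up."""
    t = 2 * depth + 1            # innermost denominator 2*depth + 1
    for k in range(depth, 0, -1):
        t = (2 * k - 1) - x * x / t
    return x / t

print(tan_cf(1.0), math.tan(1.0))  # the two values agree to many digits
```

Even a modest depth reproduces tan(1) to machine precision, which illustrates how rapidly this fraction converges compared with the Taylor series.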
In 1813 Gauss derived from complex-valued hypergeometric functions what is now called Gauss's continued fraction.[10] It can be used to express many elementary functions and some more advanced functions (such as the Bessel functions) as continued fractions that are rapidly convergent almost everywhere in the complex plane.

The long continued fraction expression displayed in the introduction is easy for an unfamiliar reader to interpret. However, it takes up a lot of space and can be difficult to typeset, so mathematicians have devised several alternative notations. One convenient way to express a generalized continued fraction sets each nested fraction on the same line, indicating the nesting by dangling plus signs in the denominators:

x = b_0 + a_1/b_1+ a_2/b_2+ a_3/b_3+ ⋯

Sometimes the plus signs are typeset to vertically align with the denominators but not under the fraction bars. Pringsheim wrote a generalized continued fraction this way:

x = b_0 + a_1|/|b_1 + a_2|/|b_2 + a_3|/|b_3 + ⋯

Carl Friedrich Gauss evoked the more familiar infinite product Π when he devised this notation:

x = b_0 + K_{i=1}^∞ (a_i/b_i).

Here the "K" stands for Kettenbruch, the German word for "continued fraction". This is probably the most compact and convenient way to express continued fractions; however, it is not widely used by English typesetters.

Here are some elementary results that are of fundamental importance in the further development of the analytic theory of continued fractions. If one of the partial numerators a_{n+1} is zero, the infinite continued fraction is really just a finite continued fraction with n fractional terms, and therefore a rational function of a_1 to a_n and b_0 to b_{n+1}. Such an object is of little interest from the point of view adopted in mathematical analysis, so it is usually assumed that all a_i ≠ 0. There is no need to place this restriction on the partial denominators b_i.

When the nth convergent of a continued fraction is expressed as a simple fraction x_n = A_n/B_n, we can use the determinant formula

A_n B_{n−1} − A_{n−1} B_n = (−1)^{n−1} a_1 a_2 ⋯ a_n

to relate the numerators and denominators of successive convergents x_n and x_{n−1} to one another.
The proof of this can be seen easily by induction.

Base case: for n = 1 we have A_1B_0 − A_0B_1 = (b_1b_0 + a_1)·1 − b_0·b_1 = a_1.

Inductive step: substituting the three-term recurrence for A_n and B_n gives A_nB_{n−1} − A_{n−1}B_n = −a_n(A_{n−1}B_{n−2} − A_{n−2}B_{n−1}), which reduces the case n to the case n − 1 at the cost of a factor −a_n.

If {c_i} = {c_1, c_2, c_3, ...} is any infinite sequence of non-zero complex numbers we can prove, by induction, that

b_0 + a_1/(b_1 + a_2/(b_2 + ⋯)) = b_0 + c_1a_1/(c_1b_1 + c_1c_2a_2/(c_2b_2 + c_2c_3a_3/(c_3b_3 + ⋯))),

where equality is understood as equivalence, which is to say that the successive convergents of the continued fraction on the left are exactly the same as the convergents of the fraction on the right.

The equivalence transformation is perfectly general, but two particular cases deserve special mention. First, if none of the a_i are zero, a sequence {c_i} can be chosen to make each partial numerator a 1, where c_1 = 1/a_1, c_2 = a_1/a_2, c_3 = a_2/(a_1a_3), and in general c_{n+1} = 1/(a_{n+1}c_n). Second, if none of the partial denominators b_i are zero, we can use a similar procedure to choose another sequence {d_i} to make each partial denominator a 1, where d_1 = 1/b_1 and otherwise d_{n+1} = 1/(b_nb_{n+1}).

These two special cases of the equivalence transformation are enormously useful when the general convergence problem is analyzed.

As mentioned in the introduction, the continued fraction converges if the sequence of convergents {x_n} tends to a finite limit. This notion of convergence is very natural, but it is sometimes too restrictive. It is therefore useful to introduce the notion of general convergence of a continued fraction. Roughly speaking, this consists in replacing the K_{i=n}^∞ (a_i/b_i) part of the fraction by w_n, instead of by 0, to compute the convergents. The convergents thus obtained are called modified convergents. We say that the continued fraction converges generally if there exists a sequence {w_n*} such that the sequence of modified convergents converges for all {w_n} sufficiently distinct from {w_n*}. The sequence {w_n*} is then called an exceptional sequence for the continued fraction.
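The equivalence transformation that rescales every partial numerator to 1 is easy to verify with exact rational arithmetic: the convergents of the transformed fraction must coincide, term by term, with those of the original. A small sketch (the particular a and b values below are arbitrary test data, not from the article):

```python
from fractions import Fraction

def convergents(b0, nums, dens):
    """Convergents A_n/B_n via the three-term recurrence."""
    A_prev, A = Fraction(1), Fraction(b0)
    B_prev, B = Fraction(0), Fraction(1)
    out = [A / B]
    for a_i, b_i in zip(nums, dens):
        A_prev, A = A, b_i * A + a_i * A_prev
        B_prev, B = B, b_i * B + a_i * B_prev
        out.append(A / B)
    return out

# An arbitrary finite continued fraction a1/(b1 + a2/(b2 + a3/b3)).
a = [Fraction(2), Fraction(3), Fraction(5)]
b = [Fraction(4), Fraction(6), Fraction(7)]

# Choose c_1 = 1/a_1 and c_{n+1} = 1/(a_{n+1} c_n): every partial numerator
# becomes 1 and each partial denominator b_n is rescaled to c_n b_n.
c = [1 / a[0]]
for a_next in a[1:]:
    c.append(1 / (a_next * c[-1]))
ones = [Fraction(1)] * len(a)
b_scaled = [c_i * b_i for c_i, b_i in zip(c, b)]

# Equivalence: the two fractions have identical convergents.
assert convergents(0, a, b) == convergents(0, ones, b_scaled)
```

The assertion passes because multiplying level n by c_n multiplies A_n and B_n by the same product c_1⋯c_n, leaving each ratio A_n/B_n unchanged.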
See Chapter 2 of Lorentzen & Waadeland (1992) for a rigorous definition.

There also exists a notion of absolute convergence for continued fractions, which is based on the notion of absolute convergence of a series: a continued fraction is said to be absolutely convergent when the series ∑ (f_n − f_{n−1}), where f_n = K_{i=1}^n (a_i/b_i) are the convergents of the continued fraction, converges absolutely.[11] The Śleszyński–Pringsheim theorem provides a sufficient condition for absolute convergence.

Finally, a continued fraction of one or more complex variables is uniformly convergent in an open neighborhood Ω when its convergents converge uniformly on Ω; that is, when for every ε > 0 there exists M such that for all n > M and for all z ∈ Ω the convergents satisfy |f_n(z) − f(z)| < ε.

It is sometimes necessary to separate a continued fraction into its even and odd parts. For example, if the continued fraction diverges by oscillation between two distinct limit points p and q, then the sequence {x_0, x_2, x_4, ...} must converge to one of these, and {x_1, x_3, x_5, ...} must converge to the other. In such a situation it may be convenient to express the original continued fraction as two different continued fractions, one of them converging to p, and the other converging to q.

The formulas for the even and odd parts of a continued fraction can be written most compactly if the fraction has already been transformed so that all its partial denominators are unity. Specifically, if the fraction is in that form, then the even part x_even and the odd part x_odd are given by continued fractions built from the partial numerators a_i.
More precisely, if the successive convergents of the continued fraction x are {x_1, x_2, x_3, ...}, then the successive convergents of x_even as written above are {x_2, x_4, x_6, ...}, and the successive convergents of x_odd are {x_1, x_3, x_5, ...}.[12]

If a_1, a_2, ... and b_1, b_2, ... are positive integers with a_k ≤ b_k for all sufficiently large k, then the generalized continued fraction

b_0 + K_{i=1}^∞ (a_i/b_i)

converges to an irrational limit.[13]

The partial numerators and denominators of the fraction's successive convergents are related by the fundamental recurrence formulas

A_{n+1} = b_{n+1}A_n + a_{n+1}A_{n−1},  B_{n+1} = b_{n+1}B_n + a_{n+1}B_{n−1}.

The continued fraction's successive convergents are then given by x_n = A_n/B_n. These recurrence relations are due to John Wallis (1616–1703) and Leonhard Euler (1707–1783).[14] They are simply a different notation for the relations obtained by Pietro Antonio Cataldi (1548–1626).

As an example, consider the regular continued fraction in canonical form that represents the golden ratio φ:

φ = 1 + 1/(1 + 1/(1 + 1/(1 + ⋯))).

Applying the fundamental recurrence formulas we find that the successive numerators A_n are {1, 2, 3, 5, 8, 13, ...} and the successive denominators B_n are {1, 1, 2, 3, 5, 8, ...}, the Fibonacci numbers. Since all the partial numerators in this example are equal to one, the determinant formula assures us that the absolute value of the difference between successive convergents approaches zero quite rapidly.

A linear fractional transformation (LFT) is a complex function of the form

w = f(z) = (az + b)/(cz + d),

where z is a complex variable, and a, b, c, d are arbitrary complex constants such that cz + d ≠ 0. An additional restriction, that ad ≠ bc, is customarily imposed, to rule out the cases in which w = f(z) is a constant. The linear fractional transformation, also known as a Möbius transformation, has many fascinating properties. Four of these are of primary importance in developing the analytic theory of continued fractions.

Consider a sequence of simple linear fractional transformations

τ_0(z) = b_0 + z,  τ_n(z) = a_n/(b_n + z)  (n = 1, 2, 3, ...).

Here we use τ to represent each simple LFT, and we adopt the conventional circle notation for composition of functions.
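The golden-ratio example and the determinant formula can be checked together: with every partial numerator equal to 1, the continuants of [1; 1, 1, ...] are consecutive Fibonacci numbers and A_nB_{n−1} − A_{n−1}B_n alternates between +1 and −1, so consecutive convergents differ by exactly 1/(B_nB_{n−1}). A short sketch:

```python
# Numerators and denominators of [1; 1, 1, ...] are consecutive Fibonacci numbers.
A = [1, 2]   # A_0 = 1, A_1 = 2
B = [1, 1]   # B_0 = 1, B_1 = 1
for _ in range(10):
    A.append(A[-1] + A[-2])   # three-term recurrence with a_n = b_n = 1
    B.append(B[-1] + B[-2])

# Determinant formula with all a_i = 1: A_n*B_{n-1} - A_{n-1}*B_n = (-1)^(n-1).
for n in range(1, len(A)):
    assert A[n] * B[n - 1] - A[n - 1] * B[n] == (-1) ** (n - 1)

print(A[-1], "/", B[-1], "=", A[-1] / B[-1])   # 233/144, close to the golden ratio
```

Since B_n grows like φⁿ, the gap 1/(B_nB_{n−1}) between successive convergents shrinks geometrically, which is the rapid convergence the text mentions.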
We also introduce a new symbol Τ_n to represent the composition of n + 1 transformations τ_i; that is,

Τ_1(z) = τ_0∘τ_1(z),  Τ_2(z) = τ_0∘τ_1∘τ_2(z),

and so forth. By direct substitution from the first set of expressions into the second we see that

Τ_1(z) = b_0 + a_1/(b_1 + z)

and, in general,

Τ_n(z) = b_0 + K_{i=1}^n (a_i/b_i),

where the last partial denominator in the finite continued fraction K is understood to be b_n + z. And, since b_n + 0 = b_n, the image of the point z = 0 under the iterated LFT Τ_n is indeed the value of the finite continued fraction with n partial numerators:

Τ_n(0) = A_n/B_n.

Defining a finite continued fraction as the image of a point under the iterated linear fractional transformation Τ_n(z) leads to an intuitively appealing geometric interpretation of infinite continued fractions. The relationship can be understood by rewriting Τ_n(z) and Τ_{n+1}(z) in terms of the fundamental recurrence formulas:

Τ_n(z) = (A_n + A_{n−1}z)/(B_n + B_{n−1}z),  Τ_{n+1}(z) = (A_{n+1} + A_nz)/(B_{n+1} + B_nz).

In the first of these equations the ratio tends toward A_n/B_n as z tends toward zero. In the second, the ratio tends toward A_n/B_n as z tends to infinity. This leads us to our first geometric interpretation. If the continued fraction converges, the successive convergents A_n/B_n are eventually arbitrarily close together. Since the linear fractional transformation Τ_n(z) is a continuous mapping, there must be a neighborhood of z = 0 that is mapped into an arbitrarily small neighborhood of Τ_n(0) = A_n/B_n. Similarly, there must be a neighborhood of the point at infinity which is mapped into an arbitrarily small neighborhood of Τ_n(∞) = A_{n−1}/B_{n−1}.

So if the continued fraction converges, the transformation Τ_n(z) maps both very small z and very large z into an arbitrarily small neighborhood of x, the value of the continued fraction, as n gets larger and larger. For intermediate values of z, since the successive convergents are getting closer together we must have

A_{n−1}/B_{n−1} ≈ A_n/B_n  ⇒  A_{n−1}/A_n ≈ B_{n−1}/B_n = k,

where k is a constant, introduced for convenience.
But then, by substituting in the expression for Τ_n(z), we obtain

Τ_n(z) = (A_n + A_{n−1}z)/(B_n + B_{n−1}z) ≈ (A_n/B_n)·(1 + kz)/(1 + kz) = A_n/B_n,

so that even the intermediate values of z (except when z ≈ −k⁻¹) are mapped into an arbitrarily small neighborhood of x, the value of the continued fraction, as n gets larger and larger. Intuitively, it is almost as if the convergent continued fraction maps the entire extended complex plane into a single point.[15]

Notice that the sequence {Τ_n} lies within the automorphism group of the extended complex plane, since each Τ_n is a linear fractional transformation for which ad ≠ bc. And every member of that automorphism group maps the extended complex plane into itself: not one of the Τ_n can possibly map the plane into a single point. Yet in the limit the sequence {Τ_n} defines an infinite continued fraction which (if it converges) represents a single point in the complex plane.

When an infinite continued fraction converges, the corresponding sequence {Τ_n} of LFTs "focuses" the plane in the direction of x, the value of the continued fraction. At each stage of the process a larger and larger region of the plane is mapped into a neighborhood of x, and the smaller and smaller region of the plane that's left over is stretched out ever more thinly to cover everything outside that neighborhood.[16]

For divergent continued fractions, we can distinguish three cases. Interesting examples of cases 1 and 3 can be constructed by studying the simple continued fraction

x = z/(1 + z/(1 + z/(1 + ⋯))),

where z is any real number such that z < −1/4.[18]

Euler proved the following identity:[7]

a_0 + a_0a_1 + a_0a_1a_2 + ⋯ + a_0a_1a_2⋯a_n = a_0/(1 − a_1/(1 + a_1 − a_2/(1 + a_2 − ⋯ − a_n/(1 + a_n)))).

From this many other results can be derived. Euler's formula connecting continued fractions and series is the motivation for the fundamental inequalities, and also the basis of elementary approaches to the convergence problem. Here are two continued fractions that can be built via Euler's identity.
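Euler's identity equates the partial sums a_0 + a_0a_1 + ⋯ + a_0a_1⋯a_n with a finite continued fraction, and exact rational arithmetic makes the equality easy to confirm for small n. A sketch (the test values a_k = k/(k + 1) are arbitrary, chosen only for illustration):

```python
from fractions import Fraction

def euler_cf(a):
    """Right-hand side of Euler's continued fraction formula,
    a0/(1 - a1/(1 + a1 - a2/(1 + a2 - ... - an/(1 + an)))),
    evaluated from the innermost term outward."""
    if len(a) == 1:
        return a[0]
    t = 1 + a[-1]
    for k in range(len(a) - 2, 0, -1):
        t = 1 + a[k] - a[k + 1] / t
    return a[0] / (1 - a[1] / t)

# Left-hand side: a0 + a0*a1 + a0*a1*a2 + ... (sums of partial products).
a = [Fraction(n, n + 1) for n in range(1, 7)]   # 1/2, 2/3, ..., 6/7
lhs, prod = Fraction(0), Fraction(1)
for a_k in a:
    prod *= a_k
    lhs += prod

assert lhs == euler_cf(a)   # exact equality over the rationals
```

With these values the partial products telescope to 1/2 + 1/3 + ⋯ + 1/7, so both sides equal 223/140 exactly.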
Here are additional generalized continued fractions; the last of them is based on an algorithm derived by Aleksei Nikolaevich Khovansky in the 1970s.[19]

Example: the natural logarithm of 2 (= [0; 1, 2, 3, 1, 5, 2/3, 7, 1/2, 9, 2/5, ..., 2k − 1, 2/k, ...] ≈ 0.693147...).[20]

Here are three of π's best-known generalized continued fractions, the first and third of which are derived from their respective arctangent formulas above by setting x = y = 1 and multiplying by 4. The Leibniz formula for π,

π = 4 (1 − 1/3 + 1/5 − 1/7 + ⋯),

converges too slowly, requiring roughly 3 × 10^n terms to achieve n correct decimal places. The series derived by Nilakantha Somayaji,

π = 3 + 4/(2·3·4) − 4/(4·5·6) + 4/(6·7·8) − ⋯,

is a much more obvious expression but still converges quite slowly, requiring nearly 50 terms for five decimals and nearly 120 for six. Both converge sublinearly to π. On the other hand,

π = 4/(1 + 1²/(3 + 2²/(5 + 3²/(7 + ⋯))))

converges linearly to π, adding at least three digits of precision per four terms, a pace slightly faster than the arcsine formula for π, which adds at least three decimal digits per five terms,[21]

with u = 5 and v = 239.

The nth root of any positive number z^m can be expressed by restating z = x^n + y, resulting in a continued fraction for z^{m/n} that can be simplified, by folding each pair of fractions into one fraction, to a more rapidly convergent form. The square root of z is a special case with m = 1 and n = 2, which, after simplifying coefficients such as 5/10 = 3/6 = 1/2, reduces to

√z = x + y/(2x + y/(2x + y/(2x + ⋯))).

The square root can also be expressed by a periodic continued fraction, but the above form converges more quickly with the proper x and y.

The cube root of two (2^{1/3} or ∛2 ≈ 1.259921...)
can be calculated in two ways: firstly, in the "standard notation", with x = 1, y = 1, and 2z − y = 3; secondly, as a rapid convergence with x = 5, y = 3 and 2z − y = 253.

Pogson's ratio (100^{1/5} or ⁵√100 ≈ 2.511886...) is obtained with x = 5, y = 75 and 2z − y = 6325. The twelfth root of two (2^{1/12} or ¹²√2 ≈ 1.059463...) uses the "standard notation". Equal temperament's perfect fifth (2^{7/12} or ¹²√2⁷ ≈ 1.498307...), with m = 7, can be computed either with the "standard notation" or as a rapid convergence with x = 3, y = −7153, and 2z − y = 2¹⁹ + 3¹². More details on this technique can be found in General Method for Extracting Roots using (Folded) Continued Fractions.

Another meaning for generalized continued fraction is a generalization to higher dimensions. For example, there is a close relationship between the simple continued fraction in canonical form for the irrational real number α and the way lattice points in two dimensions lie to either side of the line y = αx. Generalizing this idea, one might ask about something related to lattice points in three or more dimensions. One reason to study this area is to quantify the mathematical coincidence idea; for example, for monomials in several real numbers, take the logarithmic form and consider how small it can be. Another reason is to find a possible solution to Hermite's problem.

There have been numerous attempts to construct a generalized theory. Notable efforts in this direction were made by Felix Klein (the Klein polyhedron), Georges Poitou and George Szekeres.
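The square-root special case of the root-extraction idea above, √z = x + y/(2x + y/(2x + ⋯)) with z = x² + y, is straightforward to evaluate bottom-up. A sketch in floating point, choosing x as the integer square root (the function name and the truncation depth are my own choices):

```python
import math

def sqrt_cf(z, depth=30):
    """sqrt(z) via the periodic continued fraction x + y/(2x + y/(2x + ...)),
    where z = x^2 + y and x = floor(sqrt(z))."""
    x = math.isqrt(z)
    y = z - x * x
    if y == 0:
        return float(x)          # z is a perfect square
    t = 2.0 * x
    for _ in range(depth):
        t = 2 * x + y / t        # one more level of the periodic fraction
    return x + y / t

print(sqrt_cf(2), sqrt_cf(7))
```

Each iteration multiplies the error by roughly y/(x + √z)², so the closer z is to a perfect square (small y), the faster the fraction converges — the "proper x and y" remark in the text.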
https://en.wikipedia.org/wiki/Continued_fraction
In mathematics, convergence tests are methods of testing for the convergence, conditional convergence, absolute convergence, interval of convergence or divergence of an infinite series ∑_{n=1}^∞ a_n.

If the limit of the summand is undefined or nonzero, that is lim_{n→∞} a_n ≠ 0, then the series must diverge. In this sense, the partial sums are Cauchy only if this limit exists and is equal to zero. The test is inconclusive if the limit of the summand is zero. This is also known as the nth-term test, the test for divergence, or the divergence test.

The ratio test is also known as d'Alembert's criterion. The root test is also known as the nth root test or Cauchy's criterion. The root test is stronger than the ratio test: whenever the ratio test determines the convergence or divergence of an infinite series, the root test does too, but not conversely.[1]

The series can be compared to an integral to establish convergence or divergence. Let f : [1, ∞) → R₊ be a non-negative and monotonically decreasing function such that f(n) = a_n. If

∫_1^∞ f(x) dx = lim_{t→∞} ∫_1^t f(x) dx < ∞,

then the series converges. But if the integral diverges, then the series does so as well. In other words, the series ∑ a_n converges if and only if the integral converges.

A commonly used corollary of the integral test is the p-series test. Let k > 0. Then ∑_{n=k}^∞ 1/n^p converges if p > 1. The case p = 1, k = 1 yields the harmonic series, which diverges. The case p = 2, k = 1 is the Basel problem, and the series converges to π²/6.
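The integral test does more than decide convergence: for a decreasing f it traps the tail of the series between ∫_{N+1}^∞ f and ∫_N^∞ f. For the p-series with p = 2 those integrals are 1/(N + 1) and 1/N, which brackets the Basel value π²/6. A quick numerical sketch:

```python
import math

# Partial sum of the p-series with p = 2 (the Basel problem).
N = 1000
s = sum(1.0 / n ** 2 for n in range(1, N + 1))

# Integral-test bounds on the tail: 1/(N+1) <= pi^2/6 - s <= 1/N.
tail = math.pi ** 2 / 6 - s
assert 1.0 / (N + 1) <= tail <= 1.0 / N
print(s, tail)
```

Adding the midpoint of the two bounds to the partial sum gives π²/6 to about three extra digits, a cheap illustration of how the integral test also yields error estimates.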
In general, for p > 1 and k = 1, the series equals the Riemann zeta function applied to p, that is ζ(p).

If the series ∑_{n=1}^∞ b_n is an absolutely convergent series and |a_n| ≤ |b_n| for sufficiently large n, then the series ∑_{n=1}^∞ a_n converges absolutely.

If {a_n}, {b_n} > 0 (that is, each element of the two sequences is positive) and the limit lim_{n→∞} a_n/b_n exists, is finite and non-zero, then either both series converge or both series diverge.

Let {a_n} be a non-negative non-increasing sequence. Then the sum A = ∑_{n=1}^∞ a_n converges if and only if the sum A* = ∑_{n=0}^∞ 2^n a_{2^n} converges. Moreover, if they converge, then A ≤ A* ≤ 2A holds.

Suppose the following statements are true: ∑ b_n is a convergent series, and {a_n} is a monotone and bounded sequence. Then ∑ a_n b_n is also convergent.

Every absolutely convergent series converges.

Suppose the following statements are true: the a_n are positive, a_{n+1} ≤ a_n for all sufficiently large n, and a_n → 0. Then ∑_{n=1}^∞ (−1)^n a_n and ∑_{n=1}^∞ (−1)^{n+1} a_n are convergent series. This test is also known as the Leibniz criterion.

If {a_n} is a sequence of real numbers and {b_n} a sequence of complex numbers satisfying a_n ≥ a_{n+1}, lim_{n→∞} a_n = 0, and |b_1 + b_2 + ⋯ + b_N| ≤ M for every positive integer N, where M is some constant, then the series ∑ a_n b_n converges.

A series ∑_{i=0}^∞ a_i is convergent if and only if for every ε > 0 there is a natural number N such that |a_{n+1} + a_{n+2} + ⋯ + a_{n+p}| < ε holds for all n > N and all p ≥ 1.

Let (a_n)_{n≥1} and (b_n)_{n≥1} be two sequences of real numbers.
Assume that (b_n)_{n≥1} is a strictly monotone and divergent sequence and the following limit exists:

lim_{n→∞} (a_{n+1} − a_n)/(b_{n+1} − b_n) = l.

Then the limit

lim_{n→∞} a_n/b_n = l

also exists.

Suppose that (f_n) is a sequence of real- or complex-valued functions defined on a set A, and that there is a sequence of non-negative numbers (M_n) satisfying the conditions |f_n(x)| ≤ M_n for all n and all x in A, and ∑ M_n converges. Then the series ∑ f_n converges absolutely and uniformly on A.

The ratio test may be inconclusive when the limit of the ratio is 1. Extensions of the ratio test, however, sometimes allow one to deal with this case.

Let {a_n} be a sequence of positive numbers and define b_n = n(a_n/a_{n+1} − 1). If L = lim_{n→∞} b_n exists, there are three possibilities: if L > 1 the series converges, if L < 1 the series diverges, and if L = 1 the test is inconclusive.

An alternative formulation of this test is as follows. Let {a_n} be a series of real numbers. Then if b > 1 and K (a natural number) exist such that |a_{n+1}/a_n| ≤ 1 − b/n for all n > K, then the series {a_n} is convergent.

Let {a_n} be a sequence of positive numbers and define b_n = ln n·(n(a_n/a_{n+1} − 1) − 1). If L = lim_{n→∞} b_n exists, there are three possibilities:[2][3] if L > 1 the series converges, if L < 1 the series diverges, and if L = 1 the test is inconclusive.

Let {a_n} be a sequence of positive numbers. If a_n/a_{n+1} = 1 + α/n + O(1/n^β) for some β > 1, then ∑ a_n converges if α > 1 and diverges if α ≤ 1.[4]

Let {a_n} be a sequence of positive numbers. Then:[5][6][7]

(1) ∑ a_n converges if and only if there is a sequence b_n of positive numbers and a real number c > 0 such that b_k(a_k/a_{k+1}) − b_{k+1} ≥ c.

(2) ∑ a_n diverges if and only if there is a sequence b_n of positive numbers such that b_k(a_k/a_{k+1}) − b_{k+1} ≤ 0 and ∑ 1/b_n diverges.

Let ∑_{n=1}^∞ a_n be an infinite series with real terms and let f : R → R be any real function such that f(1/n) = a_n for all positive integers n and the second derivative f″ exists at x = 0.
Then ∑_{n=1}^∞ a_n converges absolutely if f(0) = f′(0) = 0 and diverges otherwise.[8]

Consider the series

(i) ∑_{n=1}^∞ 1/n^α.

The Cauchy condensation test implies that (i) is finitely convergent if

(ii) ∑_{n=1}^∞ 2^n (1/2^n)^α

is finitely convergent. Since (ii) is a geometric series with ratio 2^{1−α}, it is finitely convergent if its ratio is less than one (namely α > 1). Thus, (i) is finitely convergent if and only if α > 1.

While most of the tests deal with the convergence of infinite series, they can also be used to show the convergence or divergence of infinite products. This can be achieved using the following theorem: let {a_n}_{n=1}^∞ be a sequence of positive numbers. Then the infinite product ∏_{n=1}^∞ (1 + a_n) converges if and only if the series ∑_{n=1}^∞ a_n converges. Similarly, if 0 ≤ a_n < 1 holds, then ∏_{n=1}^∞ (1 − a_n) approaches a non-zero limit if and only if the series ∑_{n=1}^∞ a_n converges.

This can be proved by taking the logarithm of the product and using the limit comparison test.[9]
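The product–series correspondence can be seen numerically with a_n = 1/n²: the series converges, so ∏(1 + 1/n²) must converge as well; its classical closed form is sinh(π)/π. A sketch:

```python
import math

# Partial product of prod (1 + 1/n^2); since sum 1/n^2 converges,
# the product converges too (its closed form is sinh(pi)/pi ≈ 3.676078).
p = 1.0
for n in range(1, 100_001):
    p *= 1.0 + 1.0 / n ** 2
print(p)
```

Taking logarithms turns the product into ∑ ln(1 + 1/n²), whose terms are asymptotically 1/n²; that is exactly the limit-comparison argument the text describes.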
https://en.wikipedia.org/wiki/Convergence_tests
In mathematics, a series is the sum of the terms of an infinite sequence of numbers. More precisely, an infinite sequence (a_1, a_2, a_3, …) defines a series S that is denoted

S = a_1 + a_2 + a_3 + ⋯ = ∑_{k=1}^∞ a_k.

The nth partial sum S_n is the sum of the first n terms of the sequence; that is,

S_n = a_1 + a_2 + ⋯ + a_n = ∑_{k=1}^n a_k.

A series is convergent (or converges) if and only if the sequence (S_1, S_2, S_3, …) of its partial sums tends to a limit; that means that, when adding one a_k after the other in the order given by the indices, one gets partial sums that become closer and closer to a given number. More precisely, a series converges if and only if there exists a number ℓ such that for every arbitrarily small positive number ε, there is a (sufficiently large) integer N such that for all n ≥ N,

|S_n − ℓ| < ε.

If the series is convergent, the (necessarily unique) number ℓ is called the sum of the series. The same notation is used for the series and, if it is convergent, for its sum. This convention is similar to that used for addition: a + b denotes the operation of adding a and b as well as the result of this addition, which is called the sum of a and b. Any series that is not convergent is said to be divergent or to diverge.

There are a number of methods of determining whether a series converges or diverges.

Comparison test. The terms of the sequence {a_n} are compared to those of another sequence {b_n}. If, for all n, 0 ≤ a_n ≤ b_n, and ∑_{n=1}^∞ b_n converges, then so does ∑_{n=1}^∞ a_n. However, if, for all n, 0 ≤ b_n ≤ a_n, and ∑_{n=1}^∞ b_n diverges, then so does ∑_{n=1}^∞ a_n.

Ratio test.
Assume that for all n, a_n is not zero. Suppose that there exists r such that

lim_{n→∞} |a_{n+1}/a_n| = r.

If r < 1, then the series is absolutely convergent. If r > 1, then the series diverges. If r = 1, the ratio test is inconclusive, and the series may converge or diverge.

Root test or nth root test. Suppose that the terms of the sequence in question are non-negative. Define r as follows:

r = limsup_{n→∞} (a_n)^{1/n}.

If r < 1, then the series converges. If r > 1, then the series diverges. If r = 1, the root test is inconclusive, and the series may converge or diverge.

The ratio test and the root test are both based on comparison with a geometric series, and as such they work in similar situations. In fact, if the ratio test works (meaning that the limit exists and is not equal to 1) then so does the root test; the converse, however, is not true. The root test is therefore more generally applicable, but as a practical matter the limit is often difficult to compute for commonly seen types of series.

Integral test. The series can be compared to an integral to establish convergence or divergence. Let f(n) = a_n define a positive and monotonically decreasing function f. If

∫_1^∞ f(x) dx < ∞,

then the series converges. But if the integral diverges, then the series does so as well.

Limit comparison test. If {a_n}, {b_n} > 0, and the limit lim_{n→∞} a_n/b_n exists and is not zero, then ∑_{n=1}^∞ a_n converges if and only if ∑_{n=1}^∞ b_n converges.

Alternating series test. Also known as the Leibniz criterion, the alternating series test states that for an alternating series of the form ∑_{n=1}^∞ a_n(−1)^n, if {a_n} is monotonically decreasing and has a limit of 0 at infinity, then the series converges.

Cauchy condensation test.
If {a_n} is a positive monotone decreasing sequence, then Σ_{n=1}^∞ a_n converges if and only if Σ_{k=1}^∞ 2^k a_{2^k} converges. Dirichlet's test. Abel's test. If the series Σ_{n=1}^∞ |a_n| converges, then the series Σ_{n=1}^∞ a_n is said to be absolutely convergent. Every absolutely convergent series (real or complex) is also convergent, but the converse is not true. The Maclaurin series of the exponential function is absolutely convergent for every complex value of the variable. If the series Σ_{n=1}^∞ a_n converges but the series Σ_{n=1}^∞ |a_n| diverges, then the series Σ_{n=1}^∞ a_n is conditionally convergent. The Maclaurin series of the logarithm function ln(1 + x) is conditionally convergent for x = 1 (see the Mercator series). The Riemann series theorem states that if a series converges conditionally, it is possible to rearrange the terms of the series in such a way that the series converges to any value, or even diverges. Agnew's theorem characterizes rearrangements that preserve convergence for all series. Let {f_1, f_2, f_3, …} be a sequence of functions. The series Σ_{n=1}^∞ f_n is said to converge uniformly to f if the sequence {s_n} of partial sums defined by s_n(x) = Σ_{k=1}^n f_k(x) converges uniformly to f. There is an analogue of the comparison test for infinite series of functions called the Weierstrass M-test. The Cauchy convergence criterion states that a series converges if and only if the sequence of partial sums is a Cauchy sequence.
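The contrast between conditional and absolute convergence can be made concrete with the alternating harmonic series, mentioned above via the Mercator series (its sum at x = 1 is ln 2). A short numerical sketch, with illustrative helper names:

```python
import math

# The alternating harmonic series sum((-1)**(k+1)/k) converges (conditionally)
# to ln 2, while the series of absolute values (the harmonic series) diverges.
def alt_harmonic_partial(n):
    return sum((-1)**(k + 1) / k for k in range(1, n + 1))

def harmonic_partial(n):
    return sum(1.0 / k for k in range(1, n + 1))

# Conditional convergence: the signed partial sums approach ln 2.
assert abs(alt_harmonic_partial(10**5) - math.log(2)) < 1e-4

# Divergence of the absolute series: the partial sums keep growing
# (roughly like ln n, so H(10n) - H(n) stays near ln 10).
assert harmonic_partial(10**5) > harmonic_partial(10**4) + 2.0
```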
This means that for every ε > 0, there is a positive integer N such that for all n ≥ m ≥ N we have |Σ_{k=m}^n a_k| < ε. This is equivalent to lim_{m→∞} ( sup_{n>m} |Σ_{k=m}^n a_k| ) = 0.
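The Cauchy criterion says the tail blocks Σ_{k=m}^n a_k become uniformly small. A numerical sketch for Σ 1/k², whose tail starting at m is bounded by 1/(m − 1); names and the chosen ε are illustrative:

```python
# Cauchy-criterion sketch: for sum(1/k**2), the blocks |a_m + ... + a_n|
# are uniformly small once m is large enough.
def tail_block(m, n):
    return abs(sum(1.0 / k**2 for k in range(m, n + 1)))

# The whole tail from m is below 1/(m-1), so every block from m onward is too.
eps = 1e-3
m = 2000
assert all(tail_block(m, n) < eps for n in (m, m + 10, m + 1000, m + 10**5))
```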
https://en.wikipedia.org/wiki/Convergent_series
Les séries divergentes sont en général quelque chose de bien fatal et c'est une honte qu'on ose y fonder aucune démonstration. ("Divergent series are in general something fatal, and it is a disgrace to base any proof on them." Often translated as "Divergent series are an invention of the devil …") In mathematics, a divergent series is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit. If a series converges, the individual terms of the series must approach zero. Thus any series in which the individual terms do not approach zero diverges. However, convergence is a stronger condition: not all series whose terms approach zero converge. A counterexample is the harmonic series 1 + 1/2 + 1/3 + 1/4 + ⋯. The divergence of the harmonic series was proven by the medieval mathematician Nicole Oresme. In specialized mathematical contexts, values can be objectively assigned to certain series whose sequences of partial sums diverge, in order to make meaning of the divergence of the series. A summability method or summation method is a partial function from the set of series to values. For example, Cesàro summation assigns Grandi's divergent series 1 − 1 + 1 − 1 + ⋯ the value 1/2. Cesàro summation is an averaging method, in that it relies on the arithmetic mean of the sequence of partial sums. Other methods involve analytic continuations of related series. In physics, there are a wide variety of summability methods; these are discussed in greater detail in the article on regularization. ... but it is broadly true to say that mathematicians before Cauchy asked not 'How shall we define 1 − 1 + 1 − ⋯?' but 'What is 1 − 1 + 1 − ⋯?', and that this habit of mind led them into unnecessary perplexities and controversies which were often really verbal. Before the 19th century, divergent series were widely used by Leonhard Euler and others, but often led to confusing and contradictory results.
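Cesàro summation of Grandi's series can be computed directly: average the partial sums 1, 0, 1, 0, … and watch the averages tend to 1/2. A small sketch with illustrative names:

```python
# Cesaro summation sketch: average the partial sums of Grandi's series
# 1 - 1 + 1 - 1 + ...; the running averages tend to 1/2 even though the
# partial sums themselves oscillate and never converge.
def cesaro_means(terms):
    partial, total = 0.0, 0.0
    means = []
    for i, t in enumerate(terms, start=1):
        partial += t              # partial sum S_i
        total += partial
        means.append(total / i)   # arithmetic mean of S_1, ..., S_i
    return means

grandi = [(-1)**n for n in range(10**4)]   # 1, -1, 1, -1, ...
means = cesaro_means(grandi)
assert abs(means[-1] - 0.5) < 1e-3
```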
A major problem was Euler's idea that any divergent series should have a natural sum, without first defining what is meant by the sum of a divergent series. Augustin-Louis Cauchy eventually gave a rigorous definition of the sum of a (convergent) series, and for some time after this, divergent series were mostly excluded from mathematics. They reappeared in 1886 with Henri Poincaré's work on asymptotic series. In 1890, Ernesto Cesàro realized that one could give a rigorous definition of the sum of some divergent series, and defined Cesàro summation. (This was not the first use of Cesàro summation, which was used implicitly by Ferdinand Georg Frobenius in 1880; Cesàro's key contribution was not the discovery of this method, but his idea that one should give an explicit definition of the sum of a divergent series.) In the years after Cesàro's paper, several other mathematicians gave other definitions of the sum of a divergent series, although these are not always compatible: different definitions can give different answers for the sum of the same divergent series; so, when talking about the sum of a divergent series, it is necessary to specify which summation method one is using. A summability method M is regular if it agrees with the actual limit on all convergent series. Such a result is called an Abelian theorem for M, from the prototypical Abel's theorem. More subtle are partial converse results, called Tauberian theorems, from a prototype proved by Alfred Tauber. Here partial converse means that if M sums the series Σ, and some side-condition holds, then Σ was convergent in the first place; without any side-condition such a result would say that M only summed convergent series (making it useless as a summation method for divergent series). The function giving the sum of a convergent series is linear, and it follows from the Hahn–Banach theorem that it may be extended to a summation method summing any series with bounded partial sums. This is called the Banach limit.
This fact is not very useful in practice, since there are many such extensions, inconsistent with each other, and also since proving such operators exist requires invoking the axiom of choice or its equivalents, such as Zorn's lemma. They are therefore nonconstructive. The subject of divergent series, as a domain of mathematical analysis, is primarily concerned with explicit and natural techniques such as Abel summation, Cesàro summation and Borel summation, and their relationships. The advent of Wiener's tauberian theorem marked an epoch in the subject, introducing unexpected connections to Banach algebra methods in Fourier analysis. Summation of divergent series is also related to extrapolation methods and sequence transformations as numerical techniques. Examples of such techniques are Padé approximants, Levin-type sequence transformations, and order-dependent mappings related to renormalization techniques for large-order perturbation theory in quantum mechanics. Summation methods usually concentrate on the sequence of partial sums of the series. While this sequence does not converge, we may often find that when we take an average of larger and larger numbers of initial terms of the sequence, the average converges, and we can use this average instead of a limit to evaluate the sum of the series. A summation method can be seen as a function from a set of sequences of partial sums to values. If A is any summation method assigning values to a set of sequences, we may mechanically translate this to a series-summation method AΣ that assigns the same values to the corresponding series. There are certain properties it is desirable for these methods to possess if they are to arrive at values corresponding to limits and sums, respectively: regularity, linearity, and stability. The third condition, stability, is less important, and some significant methods, such as Borel summation, do not possess it.[3] One can also give a weaker alternative to the last condition.
A desirable property for two distinct summation methods A and B to share is consistency: A and B are consistent if for every sequence s to which both assign a value, A(s) = B(s). (Using this language, a summation method A is regular iff it is consistent with the standard sum Σ.) If two methods are consistent, and one sums more series than the other, the one summing more series is stronger. There are powerful numerical summation methods that are neither regular nor linear, for instance nonlinear sequence transformations like Levin-type sequence transformations and Padé approximants, as well as the order-dependent mappings of perturbative series based on renormalization techniques. Taking regularity, linearity and stability as axioms, it is possible to sum many divergent series by elementary algebraic manipulations. This partly explains why many different summation methods give the same answer for certain series. For instance, whenever r ≠ 1, the geometric series 1 + r + r² + r³ + ⋯ can be evaluated regardless of convergence. More rigorously, any summation method that possesses these properties and which assigns a finite value to the geometric series must assign this value. However, when r is a real number larger than 1, the partial sums increase without bound, and averaging methods assign a limit of infinity. The two classical summation methods for series, ordinary convergence and absolute convergence, define the sum as a limit of certain partial sums. These are included only for completeness; strictly speaking they are not true summation methods for divergent series since, by definition, a series is divergent only if these methods do not work. Most but not all summation methods for divergent series extend these methods to a larger class of sequences. Cauchy's classical definition of the sum of a series a_0 + a_1 + ⋯ defines the sum to be the limit of the sequence of partial sums a_0 + ⋯ + a_n. This is the default definition of convergence of a sequence.
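The elementary algebraic manipulation alluded to above can be written out explicitly for the geometric series; this is a sketch of the standard argument from the stability and linearity axioms.

```latex
% Let s denote the value assigned to the geometric series.
s = \sum_{k=0}^{\infty} r^k = 1 + r + r^2 + \cdots
% Stability: drop the first term and keep the assigned value of the shifted series;
% linearity: pull the common factor r out of the remaining series.
s \;\overset{\text{stability}}{=}\; 1 + r\,(1 + r + r^2 + \cdots)
  \;\overset{\text{linearity}}{=}\; 1 + r s
% Solving for s:
(1 - r)\,s = 1 \quad\Longrightarrow\quad s = \frac{1}{1-r} \qquad (r \neq 1).
```

This is exactly why every regular, linear, stable method that assigns the geometric series a finite value must assign 1/(1 − r).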
Absolute convergence defines the sum of a sequence (or set) of numbers to be the limit of the net of all partial sums a_{k_1} + ⋯ + a_{k_n}, if it exists. It does not depend on the order of the elements of the sequence, and a classical theorem says that a sequence is absolutely convergent if and only if the sequence of absolute values is convergent in the standard sense. Suppose p_n is a sequence of positive terms, starting from p_0. Suppose also that p_n / (p_0 + p_1 + ⋯ + p_n) → 0. If now we transform a sequence s by using p to give weighted means, setting t_n = (p_0 s_n + p_1 s_{n−1} + ⋯ + p_n s_0) / (p_0 + p_1 + ⋯ + p_n), then the limit of t_n as n goes to infinity is an average called the Nørlund mean N_p(s). The Nørlund mean is regular, linear, and stable. Moreover, any two Nørlund means are consistent. The most significant of the Nørlund means are the Cesàro sums. Here, if we define the sequence p^k by p_n^k = C(n + k − 1, k − 1), then the Cesàro sum C_k is defined by C_k(s) = N_{(p^k)}(s). Cesàro sums are Nørlund means if k ≥ 0, and hence are regular, linear, stable, and consistent. C_0 is ordinary summation, and C_1 is ordinary Cesàro summation. Cesàro sums have the property that if h > k, then C_h is stronger than C_k. Suppose λ = {λ_0, λ_1, λ_2, ...} is a strictly increasing sequence tending towards infinity, and that λ_0 ≥ 0. Suppose f(x) = Σ_{n=0}^∞ a_n e^{−λ_n x} converges for all real numbers x > 0. Then the Abelian mean A_λ is defined as A_λ(s) = lim_{x→0+} f(x). More generally, if the series for f only converges for large x but can be analytically continued to all positive real x, then one can still define the sum of the divergent series by the limit above. A series of this type is known as a generalized Dirichlet series; in applications to physics, this is known as the method of heat-kernel regularization. Abelian means are regular and linear, but not stable and not always consistent between different choices of λ. However, some special cases are very important summation methods. If λ_n = n, then we obtain the method of Abel summation. Here f(x) = Σ_{n=0}^∞ a_n e^{−nx} = Σ_{n=0}^∞ a_n z^n, where z = exp(−x).
Then the limit of f(x) as x approaches 0 through positive reals is the limit of the power series for f(z) as z approaches 1 from below through positive reals, and the Abel sum A(s) is defined as A(s) = lim_{z→1−} Σ_{n=0}^∞ a_n z^n. Abel summation is interesting in part because it is consistent with but more powerful than Cesàro summation: A(s) = C_k(s) whenever the latter is defined. The Abel sum is therefore regular, linear, stable, and consistent with Cesàro summation. If λ_n = n log(n), then (indexing from one) we have f(x) = Σ_{n=1}^∞ a_n n^{−nx}. Then L(s), the Lindelöf sum,[4] is the limit of f(x) as x goes to positive zero. The Lindelöf sum is a powerful method when applied to power series among other applications, summing power series in the Mittag-Leffler star. If g(z) is analytic in a disk around zero, and hence has a Maclaurin series G(z) with a positive radius of convergence, then L(G(z)) = g(z) in the Mittag-Leffler star. Moreover, convergence to g(z) is uniform on compact subsets of the star. Several summation methods involve taking the value of an analytic continuation of a function. If Σ a_n x^n converges for small complex x and can be analytically continued along some path from x = 0 to the point x = 1, then the sum of the series can be defined to be the value at x = 1. This value may depend on the choice of path. One of the first examples of potentially different sums for a divergent series, using analytic continuation, was given by Callet,[5] who observed that if 1 ≤ m < n then (1 − x^m)/(1 − x^n) = (1 + x + ⋯ + x^{m−1})/(1 + x + ⋯ + x^{n−1}) = 1 − x^m + x^n − x^{n+m} + x^{2n} − ⋯. Evaluating at x = 1, one gets 1 − 1 + 1 − 1 + ⋯ = m/n. However, the gaps in the series are key. For m = 1, n = 3, for example, we actually would get 1 − 1 + 0 + 1 − 1 + 0 + 1 − 1 + ⋯ = 1/3, so different sums correspond to different placements of the 0's.
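The Abel sum of Grandi's series can be checked numerically: the power series Σ (−1)^n z^n equals 1/(1 + z) for |z| < 1, and letting z → 1 from below gives 1/2. A sketch with illustrative names:

```python
# Abel summation sketch: A(s) = lim_{z -> 1-} sum a_n z**n.  For Grandi's
# series a_n = (-1)**n the power series is 1/(1+z), which tends to 1/2.
def abel_partial(z, n_terms=10**5):
    """Truncated power series sum((-1)**n * z**n); accurate for z < 1."""
    return sum((-1)**n * z**n for n in range(n_terms))

# The truncated sums match the closed form 1/(1+z) inside the unit interval.
for z in (0.9, 0.99, 0.999):
    assert abs(abel_partial(z) - 1.0 / (1.0 + z)) < 1e-9

# Approaching z = 1 from below, the values tend to the Abel sum 1/2.
assert abs(abel_partial(0.999) - 0.5) < 1e-3
```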
Another example of analytic continuation is the divergent alternating series Σ_{k≥0} (−1)^{k+1} (1/(2k−1)) C(2k, k) = 1 + 2 − 2 + 4 − 10 + 28 − 84 + 264 − 858 + 2860 − 9724 + ⋯, which is a sum over products of Γ-functions and Pochhammer symbols. Using the duplication formula of the Γ-function, it reduces to a generalized hypergeometric series: Σ_{k≥0} (−4)^k (−1/2)_k / k! = ₁F₀(−1/2; ; −4) = √5. Euler summation is essentially an explicit form of analytic continuation. If a power series converges for small complex z and can be analytically continued to the open disk with diameter from −1/(q + 1) to 1 and is continuous at 1, then its value at q is called the Euler or (E, q) sum of the series Σ a_n. Euler used it before analytic continuation was defined in general, and gave explicit formulas for the power series of the analytic continuation. The operation of Euler summation can be repeated several times, and this is essentially equivalent to taking an analytic continuation of a power series to the point z = 1. This method defines the sum of a series to be the value of the analytic continuation of the Dirichlet series f(s) = Σ_{n=1}^∞ a_n/n^s at s = 0, if this exists and is unique. This method is sometimes confused with zeta function regularization. If s = 0 is an isolated singularity, the sum is defined by the constant term of the Laurent series expansion. If the series f(s) = Σ_n a_n^{−s} (for positive values of the a_n) converges for large real s and can be analytically continued along the real line to s = −1, then its value at s = −1 is called the zeta regularized sum of the series a_1 + a_2 + ⋯. Zeta function regularization is nonlinear. In applications, the numbers a_i are sometimes the eigenvalues of a self-adjoint operator A with compact resolvent, and f(s) is then the trace of A^{−s}. For example, if A has eigenvalues 1, 2, 3, ...
then f(s) is the Riemann zeta function, ζ(s), whose value at s = −1 is −1/12, assigning a value to the divergent series 1 + 2 + 3 + 4 + ⋯. Other values of s can also be used to assign values for the divergent sums ζ(0) = 1 + 1 + 1 + ⋯ = −1/2, ζ(−2) = 1 + 4 + 9 + ⋯ = 0 and in general ζ(−k) = 1 + 2^k + 3^k + ⋯ = −B_{k+1}/(k + 1), where B_k is a Bernoulli number.[6] If J(x) = Σ p_n x^n is an integral function, then the J sum of the series a_0 + ⋯ is defined to be lim_{x→∞} (Σ_n p_n s_n x^n) / (Σ_n p_n x^n), if this limit exists, where s_n denotes the n-th partial sum. There is a variation of this method where the series for J has a finite radius of convergence r and diverges at x = r. In this case one defines the sum as above, except taking the limit as x tends to r rather than infinity. In the special case when J(x) = e^x this gives one (weak) form of Borel summation. Valiron's method is a generalization of Borel summation to certain more general integral functions J. Valiron showed that under certain conditions it is equivalent to defining the sum of a series in terms of H and c, where H is the second derivative of G and c(n) = e^{−G(n)}, and a_0 + ⋯ + a_h is to be interpreted as 0 when h < 0. Suppose that dμ is a measure on the real line such that all the moments μ_n = ∫ x^n dμ are finite. If a_0 + a_1 + ⋯ is a series such that a(x) = Σ a_n x^n / μ_n converges for all x in the support of μ, then the (dμ) sum of the series is defined to be the value of the integral ∫ a(x) dμ, if it is defined. (If the numbers μ_n increase too rapidly then they do not uniquely determine the measure μ.) For example, if dμ = e^{−x} dx for positive x and 0 for negative x then μ_n = n!, and this gives one version of Borel summation, where the value of a sum is given by ∫_0^∞ e^{−x} Σ_n (a_n x^n / n!) dx. There is a generalization of this depending on a variable α, called the (B′, α) sum, where the sum of a series a_0 + ⋯ is defined by a similar integral, if that integral exists. A further generalization is to replace the sum under the integral by its analytic continuation from small t. This summation method works by using an extension to the real numbers known as the hyperreal numbers.
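The value ζ(−1) = −1/12 can be approached numerically without any special-function library. This stdlib-only sketch assumes the standard relation ζ(s) = η(s)/(1 − 2^{1−s}) between the zeta and Dirichlet eta functions, and Abel-regularizes η(−1) = 1 − 2 + 3 − 4 + ⋯ via its power series x/(1 + x)²; all names are illustrative.

```python
# Indirect sketch of zeta(-1) = -1/12:
#   eta(-1) = 1 - 2 + 3 - 4 + ...  (Abel sum: lim_{x->1-} x/(1+x)**2 = 1/4)
#   zeta(s) = eta(s) / (1 - 2**(1 - s)),  so  zeta(-1) = (1/4) / (1 - 4) = -1/12.
def eta_minus1_abel(x, n_terms=10**5):
    """Abel regularization: sum (-1)**(n+1) * n * x**n, close to x/(1+x)**2."""
    return sum((-1)**(n + 1) * n * x**n for n in range(1, n_terms))

eta = eta_minus1_abel(0.999)            # close to 1/4
zeta_minus1 = eta / (1.0 - 2.0**2)      # divide by 1 - 2**(1-(-1)) = -3
assert abs(zeta_minus1 - (-1.0 / 12.0)) < 1e-3
```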
Since the hyperreal numbers include distinct infinite values, these numbers can be used to represent the values of divergent series. The key method is to designate a particular infinite value that is being summed, usually ω, which is used as a unit of infinity. Instead of summing to an arbitrary infinity (as is typically done with ∞), the BGN method sums to the specific hyperreal infinite value labeled ω. Therefore, the summations are of the form Σ_{n=1}^{ω} a_n. This allows the usage of standard formulas for finite series such as arithmetic progressions in an infinite context. For instance, using this method, the sum of the progression 1 + 2 + 3 + … is ω²/2 + ω/2, or, using just the most significant infinite hyperreal part, ω²/2.[7] Hardy (1949, chapter 11). In 1812 Hutton introduced a method of summing divergent series by starting with the sequence of partial sums, and repeatedly applying the operation of replacing a sequence s_0, s_1, ... by the sequence of averages (s_0 + s_1)/2, (s_1 + s_2)/2, ..., and then taking the limit.[8] The series a_1 + ⋯ is called Ingham summable to s if …. Albert Ingham showed that if δ is any positive number then (C, −δ) (Cesàro) summability implies Ingham summability, and Ingham summability implies (C, δ) summability.[9] The series a_1 + ⋯ is called Lambert summable to s if …. If a series is (C, k) (Cesàro) summable for any k then it is Lambert summable to the same value, and if a series is Lambert summable then it is Abel summable to the same value.[9] The series a_0 + ⋯ is called Le Roy summable to s if ….[10] The series a_0 + ⋯ is called Mittag-Leffler (M) summable to s if ….[10] Ramanujan summation is a method of assigning a value to divergent series used by Ramanujan and based on the Euler–Maclaurin summation formula. The Ramanujan sum of a series f(0) + f(1) + ⋯
depends not only on the values of f at integers, but also on values of the function f at non-integral points, so it is not really a summation method in the sense of this article. The series a_1 + ⋯ is called (R, k) (or Riemann) summable to s if ….[11] The series a_1 + ⋯ is called R_2 summable to s if …. If λ_n form an increasing sequence of real numbers and …, then the Riesz (R, λ, κ) sum of the series a_0 + ⋯ is defined to be …. The series a_1 + ⋯ is called VP (or Vallée-Poussin) summable to s if …, where Γ(x) is the gamma function.[11] The series is Zeldovich summable if ….
https://en.wikipedia.org/wiki/Divergent_series
In mathematics, an infinite expression is an expression in which some operators take an infinite number of arguments, or in which the nesting of the operators continues to an infinite depth.[1] A generic concept for infinite expression can lead to ill-defined or self-inconsistent constructions (much like a set of all sets), but there are several instances of infinite expressions that are well-defined. Examples of well-defined infinite expressions are ….[2] In infinitary logic, one can use infinite conjunctions and infinite disjunctions. Even for well-defined infinite expressions, the value of the infinite expression may be ambiguous or not well-defined; for instance, there are multiple summation rules available for assigning values to series, and the same series may have different values according to different summation rules if the series is not absolutely convergent.
https://en.wikipedia.org/wiki/Infinite_expression_(mathematics)
In mathematics, for a sequence of complex numbers a_1, a_2, a_3, ... the infinite product ∏_{n=1}^∞ a_n = a_1 a_2 a_3 ⋯ is defined to be the limit of the partial products a_1 a_2 ... a_n as n increases without bound. The product is said to converge when the limit exists and is not zero. Otherwise the product is said to diverge. A limit of zero is treated specially in order to obtain results analogous to those for infinite sums. Some sources allow convergence to 0 if there are only a finite number of zero factors and the product of the non-zero factors is non-zero, but for simplicity we will not allow that here. If the product converges, then the limit of the sequence a_n as n increases without bound must be 1, while the converse is in general not true. The best known examples of infinite products are probably some of the formulae for π, such as the following two products, respectively by Viète (Viète's formula, the first published infinite product in mathematics) and John Wallis (Wallis product). The product of positive real numbers a_n converges to a nonzero real number if and only if the sum Σ_{n=1}^∞ ln(a_n) converges. This allows the translation of convergence criteria for infinite sums into convergence criteria for infinite products. The same criterion applies to products of arbitrary complex numbers (including negative reals) if the logarithm is understood as a fixed branch of logarithm which satisfies ln(1) = 0, with the provision that the infinite product diverges when infinitely many a_n fall outside the domain of ln, whereas finitely many such a_n can be ignored in the sum. If we define a_n = 1 + p_n, the bounds show that the infinite product of a_n converges if the infinite sum of the p_n converges. This relies on the monotone convergence theorem. We can show the converse by observing that, if p_n → 0, then lim ln(1 + p_n)/p_n = 1, and by the limit comparison test it follows that the two series Σ ln(1 + p_n) and Σ p_n are equivalent, meaning that either they both converge or they both diverge.
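The Wallis product mentioned above, (2/1)(2/3)(4/3)(4/5)(6/5)(6/7)⋯ = π/2, is easy to check numerically with partial products. A small sketch with illustrative names:

```python
import math

# Partial products of the Wallis product (2/1)(2/3)(4/3)(4/5)... -> pi/2.
def wallis_partial(n_pairs):
    p = 1.0
    for k in range(1, n_pairs + 1):
        # The k-th pair of factors: (2k / (2k-1)) * (2k / (2k+1)).
        p *= (2.0 * k) / (2.0 * k - 1) * (2.0 * k) / (2.0 * k + 1)
    return p

assert abs(wallis_partial(10**5) - math.pi / 2) < 1e-4
```

Convergence is slow (the error shrinks roughly like 1/n), which is typical of classical infinite products for π.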
If the series Σ_{n=1}^∞ log(a_n) diverges to −∞, then the sequence of partial products of the a_n converges to zero. The infinite product is said to diverge to zero.[1] For the case where the p_n have arbitrary signs, the convergence of the sum Σ_{n=1}^∞ p_n does not guarantee the convergence of the product ∏_{n=1}^∞ (1 + p_n). For example, if p_n = (−1)^{n+1}/√n, then Σ p_n converges, but ∏ (1 + p_n) diverges to zero. However, if Σ |p_n| is convergent, then the product ∏ (1 + p_n) converges absolutely; that is, the factors may be rearranged in any order without altering either the convergence, or the limiting value, of the infinite product.[2] Also, if Σ |p_n|² is convergent, then the sum Σ p_n and the product ∏ (1 + p_n) are either both convergent, or both divergent.[3] One important result concerning infinite products is that every entire function f(z) (that is, every function that is holomorphic over the entire complex plane) can be factored into an infinite product of entire functions, each with at most a single root. In general, if f has a root of order m at the origin and has other complex roots at u_1, u_2, u_3, ... (listed with multiplicities equal to their orders), then f(z) = z^m e^{φ(z)} ∏_{n=1}^∞ (1 − z/u_n) exp(z/u_n + (1/2)(z/u_n)² + ⋯ + (1/λ_n)(z/u_n)^{λ_n}), where λ_n are non-negative integers that can be chosen to make the product converge, and φ(z) is some entire function (which means the term before the product will have no roots in the complex plane). The above factorization is not unique, since it depends on the choice of values for λ_n.
However, for most functions, there will be some minimum non-negative integer p such that λ_n = p gives a convergent product, called the canonical product representation. This p is called the rank of the canonical product. In the event that p = 0, this takes the form f(z) = z^m e^{φ(z)} ∏_{n=1}^∞ (1 − z/u_n). This can be regarded as a generalization of the fundamental theorem of algebra, since for polynomials, the product becomes finite and φ(z) is constant. In addition to these examples, the following representations are of special note: …. The last of these is not a product representation of the same sort discussed above, as ζ is not entire. Rather, the above product representation of ζ(z) converges precisely for Re(z) > 1, where it is an analytic function. By techniques of analytic continuation, this function can be extended uniquely to an analytic function (still denoted ζ(z)) on the whole complex plane except at the point z = 1, where it has a simple pole.
https://en.wikipedia.org/wiki/Infinite_product
This list of mathematical series contains formulae for finite and infinite sums. It can be used in conjunction with other tools for evaluating sums. See Faulhaber's formula. The first few values are: …. See zeta constants. The first few values are: …. Finite sums: …. Infinite sums, valid for |z| < 1 (see polylogarithm): …. The following is a useful property to calculate low-integer-order polylogarithms recursively in closed form: …, where T_n(z) is the n-th Touchard polynomial. (See harmonic numbers, themselves defined H_n = Σ_{j=1}^n 1/j, and H(x) generalized to the real numbers.) Sums of sines and cosines arise in Fourier series. These numeric series can be found by plugging in numbers from the series listed above. Where Te_n = Σ_{k=1}^n T_k.
https://en.wikipedia.org/wiki/List_of_mathematical_series
In mathematics, a sequence transformation is an operator acting on a given space of sequences (a sequence space). Sequence transformations include linear mappings such as discrete convolution with another sequence and resummation of a sequence and, more generally, nonlinear mappings. They are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods. Classical examples of sequence transformations include the binomial transform, Möbius transform, and Stirling transform. For a given sequence (s_n) and a sequence transformation T, the sequence resulting from transformation by T is (s'_n), where the elements of the transformed sequence are usually computed from some finite number of members of the original sequence, for instance s'_n = T_n(s_n, s_{n+1}, ..., s_{n+k_n}) for some natural number k_n for each n and a multivariate function T_n of k_n + 1 variables for each n. See for instance the binomial transform and Aitken's delta-squared process. In the simplest case the elements of the sequences, the s_n and s'_n, are real or complex numbers. More generally, they may be elements of some vector space or algebra. If the multivariate functions T_n are linear in each of their arguments for each value of n, for instance if s'_n = Σ_{m=0}^{k_n} c_{n,m} s_{n+m} for some constants k_n and c_{n,0}, …, c_{n,k_n} for each n, then the sequence transformation T is called a linear sequence transformation. Sequence transformations that are not linear are called nonlinear sequence transformations.
In the context of series acceleration, when the original sequence (s_n) and the transformed sequence (s'_n) share the same limit ℓ as n → ∞, the transformed sequence is said to have a faster rate of convergence than the original sequence if lim_{n→∞} (s'_n − ℓ)/(s_n − ℓ) = 0. If the original sequence is divergent, the sequence transformation may act as an extrapolation method to an antilimit ℓ. The simplest examples of sequence transformations include shifting all elements by an integer k that does not depend on n, s'_n = s_{n+k} if n + k ≥ 0 and 0 otherwise, and scalar multiplication of the sequence by some constant c that does not depend on n, s'_n = c s_n. These are both examples of linear sequence transformations. Less trivial examples include the discrete convolution of sequences with another reference sequence. A particularly basic example is the difference operator, which is convolution with the sequence (−1, 1, 0, …) and is a discrete analog of the derivative; technically the shift operator and scalar multiplication can also be written as trivial discrete convolutions. The binomial transform and the Stirling transform are two linear transformations of a more general type. An example of a nonlinear sequence transformation is Aitken's delta-squared process, used to improve the rate of convergence of a slowly convergent sequence. An extended form of this is the Shanks transformation. The Möbius transform is also a nonlinear transformation, only possible for integer sequences.
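Aitken's delta-squared process can be demonstrated on the slowly convergent partial sums of the alternating harmonic series (limit ln 2). This is a sketch, with illustrative names and tolerances; one application of the transform already brings the last few entries far closer to the limit than the raw partial sums.

```python
import math

# Aitken's delta-squared process: s'_n = s_n - (Δs_n)^2 / Δ²s_n.
def aitken(s):
    return [s[n] - (s[n + 1] - s[n])**2 / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
s, total = [], 0.0
for k in range(1, 21):
    total += (-1)**(k + 1) / k
    s.append(total)

# The raw 20th partial sum is still ~0.02 away from ln 2;
# the transformed tail is orders of magnitude closer.
assert abs(s[-1] - math.log(2)) > 1e-2
assert abs(aitken(s)[-1] - math.log(2)) < 5e-4
```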
https://en.wikipedia.org/wiki/Sequence_transformation
In mathematics, a series expansion is a technique that expresses a function as an infinite sum, or series, of simpler functions. It is a method for calculating a function that cannot be expressed by just elementary operators (addition, subtraction, multiplication and division).[1] The resulting so-called series often can be limited to a finite number of terms, thus yielding an approximation of the function. The fewer terms of the sequence are used, the simpler this approximation will be. Often, the resulting inaccuracy (i.e., the partial sum of the omitted terms) can be described by an equation involving Big O notation (see also asymptotic expansion). The series expansion on an open interval will also be an approximation for non-analytic functions.[2][verification needed] There are several kinds of series expansions, listed below. A Taylor series is a power series based on a function's derivatives at a single point.[3] More specifically, if a function f: U → R is infinitely differentiable around a point x_0, then the Taylor series of f around this point is given by Σ_{n=0}^∞ (f^(n)(x_0)/n!) (x − x_0)^n, under the convention 0^0 := 1.[3][4] The Maclaurin series of f is its Taylor series about x_0 = 0.[5][4] A Laurent series is a generalization of the Taylor series, allowing terms with negative exponents; it takes the form Σ_{k=−∞}^∞ c_k (z − a)^k and converges in an annulus.[6] In particular, a Laurent series can be used to examine the behavior of a complex function near a singularity by considering the series expansion on an annulus centered at the singularity.
A general Dirichlet series is a series of the form ∑_{n=1}^∞ a_n e^{−λ_n s}. One important special case of this is the ordinary Dirichlet series ∑_{n=1}^∞ a_n / n^s,[7] which is used in number theory.[citation needed]

A Fourier series is an expansion of periodic functions as a sum of many sine and cosine functions.[8] More specifically, the Fourier series of a function f(x) of period 2L is given by the expression

a₀ + ∑_{n=1}^∞ [ a_n cos(nπx/L) + b_n sin(nπx/L) ]

where the coefficients are given by the formulae[8][9]

a_n := (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx,  b_n := (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx.

The following is the Taylor series of e^x:

e^x = ∑_{n=0}^∞ x^n/n! = 1 + x + x²/2 + x³/6 + ⋯[11][12]

The Dirichlet series of the Riemann zeta function is

ζ(s) := ∑_{n=1}^∞ 1/n^s = 1/1^s + 1/2^s + ⋯[7]
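The Taylor series of e^x above converges rapidly, and a short sketch (mine, not from the article) shows how the truncated partial sums approximate the function ever more closely as terms are added:

```python
import math

def exp_taylor(x, terms):
    """Partial sum of the Taylor (Maclaurin) series e^x = sum_{n} x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

# More terms give a better approximation of math.exp(1) = e.
for terms in (2, 5, 10, 15):
    print(terms, abs(exp_taylor(1.0, terms) - math.e))
```

The error after N terms is bounded by the tail of the series, roughly of the size of the first omitted term, consistent with the Big O description of the truncation error mentioned above.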
https://en.wikipedia.org/wiki/Series_expansion
In mathematics, a Zorn ring is an alternative ring in which for every non-nilpotent x there exists an element y such that xy is a non-zero idempotent (Kaplansky 1968, pages 19, 25). Kaplansky (1951) named them after Max August Zorn, who studied a similar condition in (Zorn 1941). For associative rings, the definition of Zorn ring can be restated as follows: the Jacobson radical J(R) is a nil ideal and every right ideal of R which is not contained in J(R) contains a nonzero idempotent. Replacing "right ideal" with "left ideal" yields an equivalent definition. Left or right Artinian rings, left or right perfect rings, semiprimary rings and von Neumann regular rings are all examples of associative Zorn rings.
https://en.wikipedia.org/wiki/Zorn_ring
In mathematics, a Malcev algebra (or Maltsev algebra or Moufang–Lie algebra) over a field is a nonassociative algebra that is antisymmetric, so that

xy = −yx,

and satisfies the Malcev identity

(xy)(xz) = ((xy)z)x + ((yz)x)x + ((zx)x)y.

They were first defined by Anatoly Maltsev (1955). Malcev algebras play a role in the theory of Moufang loops that generalizes the role of Lie algebras in the theory of groups. Namely, just as the tangent space of the identity element of a Lie group forms a Lie algebra, the tangent space of the identity of a smooth Moufang loop forms a Malcev algebra. Moreover, just as a Lie group can be recovered from its Lie algebra under certain supplementary conditions, a smooth Moufang loop can be recovered from its Malcev algebra if certain supplementary conditions hold. For example, this is true for a connected, simply connected real-analytic Moufang loop.[1]

In the case of Malcev algebras, this construction can be simplified. Every Malcev algebra has a special neutral element (the zero vector in the case of vector spaces, the identity element in the case of commutative groups, and the zero element in the case of rings or modules). The characteristic feature of a Malcev algebra is that we can recover the entire equivalence relation ker f from the equivalence class of the neutral element. To be specific, let A and B be Malcev algebraic structures of a given type and let f be a homomorphism of that type from A to B. If e_B is the neutral element of B, then the kernel of f is the preimage of the singleton set {e_B}; that is, the subset of A consisting of all those elements of A that are mapped by f to the element e_B. The kernel is usually denoted ker f (or a variation). In symbols:

ker f = {a ∈ A : f(a) = e_B}.

Since a Malcev algebra homomorphism preserves neutral elements, the identity element e_A of A must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set {e_A}.
The notion of ideal generalises to any Malcev algebra (as linear subspace in the case of vector spaces, normal subgroup in the case of groups, two-sided ideal in the case of rings, and submodule in the case of modules). It turns out that ker f is not a subalgebra of A, but it is an ideal. Then it makes sense to speak of the quotient algebra A/(ker f). The first isomorphism theorem for Malcev algebras states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B). The connection between this and the congruence relation for more general types of algebras is as follows. First, the kernel-as-an-ideal is the equivalence class of the neutral element e_A under the kernel-as-a-congruence. For the converse direction, we need the notion of quotient in the Mal'cev algebra (which is division on either side for groups and subtraction for vector spaces, modules, and rings). Using this, elements a and b of A are equivalent under the kernel-as-a-congruence if and only if their quotient a/b is an element of the kernel-as-an-ideal.
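Since every Lie algebra is in particular a Malcev algebra, the antisymmetry and Malcev identities can be sanity-checked numerically on R³ with the cross product. This is a hypothetical check of my own, assuming the Malcev identity in the form (xy)(xz) = ((xy)z)x + ((yz)x)x + ((zx)x)y:

```python
import random

def cross(x, y):
    """Cross product on R^3, the product of a 3-dimensional Lie (hence Malcev) algebra."""
    return (x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0])

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

def neg(v):
    return tuple(-c for c in v)

random.seed(0)
x, y, z = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(3)]

# Antisymmetry: xy = -yx
anti_lhs, anti_rhs = cross(x, y), neg(cross(y, x))

# Malcev identity: (xy)(xz) = ((xy)z)x + ((yz)x)x + ((zx)x)y
lhs = cross(cross(x, y), cross(x, z))
rhs = add(cross(cross(cross(x, y), z), x),
          cross(cross(cross(y, z), x), x),
          cross(cross(cross(z, x), x), y))
```

Both sides agree to floating-point precision for random vectors, as expected for a Lie algebra.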
https://en.wikipedia.org/wiki/Maltsev_algebra
In mathematics and abstract algebra, a Bol loop is an algebraic structure generalizing the notion of group. Bol loops are named for the Dutch mathematician Gerrit Bol who introduced them in (Bol 1937). A loop, L, is said to be a left Bol loop if it satisfies the identity

a(b(ac)) = (a(ba))c,

while L is said to be a right Bol loop if it satisfies

((ca)b)a = c((ab)a).

These identities can be seen as weakened forms of associativity, or a strengthened form of (left or right) alternativity. A loop is both left Bol and right Bol if and only if it is a Moufang loop. Alternatively, a right or left Bol loop is Moufang if and only if it satisfies the flexible identity a(ba) = (ab)a. Different authors use the term "Bol loop" to refer to either a left Bol or a right Bol loop. The left (right) Bol identity directly implies the left (right) alternative property, as can be shown by setting b to the identity. It also implies the left (right) inverse property, as can be seen by setting b to the left (right) inverse of a, and using loop division to cancel the superfluous factor of a. As a result, Bol loops have two-sided inverses. Bol loops are also power-associative. A Bol loop where the aforementioned two-sided inverse satisfies the automorphic inverse property, (ab)⁻¹ = a⁻¹b⁻¹ for all a, b in L, is known as a (left or right) Bruck loop or K-loop (named for the American mathematician Richard Bruck). The example in the following section is a Bruck loop. Bruck loops have applications in special relativity; see Ungar (2002). Left Bruck loops are equivalent to Ungar's (2002) gyrocommutative gyrogroups, even though the two structures are defined differently. Let L denote the set of n × n positive definite, Hermitian matrices over the complex numbers. It is generally not true that the matrix product AB of matrices A, B in L is Hermitian, let alone positive definite. However, there exists a unique P in L and a unique unitary matrix U such that AB = PU; this is the polar decomposition of AB. Define a binary operation * on L by A*B = P. Then (L, *) is a left Bruck loop.
An explicit formula for * is given by A*B = (AB²A)^{1/2}, where the superscript 1/2 indicates the unique positive definite Hermitian square root. A (left) Bol algebra is a vector space equipped with a binary operation [a, b] satisfying [a, b] + [b, a] = 0 and a ternary operation {a, b, c} that satisfies a set of defining identities;[1] in particular, {·,·,·} acts as a Lie triple system. If A is a left or right alternative algebra then it has an associated Bol algebra A_b, where [a, b] = ab − ba is the commutator and {a, b, c} = ⟨b, c, a⟩ is the Jordan associator.
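The positive definite matrix loop can be experimented with numerically. The sketch below (my own; function names are mine) uses the special case of 2 × 2 real symmetric positive definite matrices and an eigendecomposition-based square root, and checks the identity element together with the left Bol identity a*(b*(a*c)) = (a*(b*a))*c:

```python
import numpy as np

def sqrtm_pd(M):
    """Unique positive definite square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def bol(A, B):
    """Bruck-loop operation A*B = (A B^2 A)^{1/2} on positive definite matrices."""
    return sqrtm_pd(A @ B @ B @ A)

def random_pd(rng, n=2):
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)   # symmetric positive definite by construction

rng = np.random.default_rng(1)
A, B, C = (random_pd(rng) for _ in range(3))
I = np.eye(2)

# Left Bol identity: a*(b*(a*c)) = (a*(b*a))*c
left = bol(A, bol(B, bol(A, C)))
right = bol(bol(A, bol(B, A)), C)
```

Note that bol(A, I) = (A·I·I·A)^{1/2} = A and bol(I, B) = B, so the identity matrix is the loop identity, and the two sides of the left Bol identity agree to numerical precision.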
https://en.wikipedia.org/wiki/Bol_loop
A gyrovector space is a mathematical concept proposed by Abraham A. Ungar for studying hyperbolic geometry in analogy to the way vector spaces are used in Euclidean geometry.[1] Ungar introduced the concept of gyrovectors, which have addition based on gyrogroups, instead of vectors, which have addition based on groups. Ungar developed his concept as a tool for the formulation of special relativity as an alternative to the use of Lorentz transformations to represent compositions of velocities (also called boosts; "boosts" are aspects of relative velocities, and should not be conflated with "translations"). This is achieved by introducing "gyro operators"; two 3d velocity vectors are used to construct an operator, which acts on another 3d velocity.

Gyrogroups are weakly associative group-like structures. Ungar proposed the term gyrogroup for what he called a gyrocommutative gyrogroup, with the term gyrogroup being reserved for the non-gyrocommutative case, in analogy with groups vs. abelian groups. Gyrogroups are a type of Bol loop. Gyrocommutative gyrogroups are equivalent to K-loops[2] although defined differently. The terms Bruck loop[3] and dyadic symset[4] are also in use.

A gyrogroup (G, ⊕) consists of an underlying set G and a binary operation ⊕ satisfying the following axioms:

(G1) There is an element 0 in G such that 0 ⊕ a = a for all a in G (identity).
(G2) For each a in G there is an element ⊖a in G with ⊖a ⊕ a = 0 (inverse).
(G3) For all a, b, c in G, a ⊕ (b ⊕ c) = (a ⊕ b) ⊕ gyr[a, b]c, where gyr[a, b]c = ⊖(a ⊕ b) ⊕ (a ⊕ (b ⊕ c)) (left gyroassociativity).
(G4) Each map gyr[a, b] is an automorphism of (G, ⊕) (gyroautomorphism).
(G5) gyr[a, b] = gyr[a ⊕ b, b] for all a, b in G (left loop property).

The first pair of axioms are like the group axioms. The last pair present the gyrator axioms and the middle axiom links the two pairs. Since a gyrogroup has inverses and an identity it qualifies as a quasigroup and a loop. Gyrogroups are a generalization of groups. Every group is an example of a gyrogroup with gyr[a, b] defined as the identity map for all a and b in G. An example of a finite gyrogroup is given in [5].
Some identities which hold in any gyrogroup (G, ⊕) are:

gyr[a, b]c = ⊖(a ⊕ b) ⊕ (a ⊕ (b ⊕ c))  (the gyrator identity),
gyr[0, a] = gyr[a, 0] = id,
gyr[a, a] = id.

Furthermore, one may prove the gyration inversion law, which is the motivation for the definition of gyrocommutativity below:

(gyr[a, b])⁻¹ = gyr[b, a].

Additional theorems are satisfied by the gyrations of any gyrogroup; more identities are given on page 50 of [6]. One particularly useful consequence of the above identities is that gyrogroups satisfy the left Bol property

a ⊕ (b ⊕ (a ⊕ c)) = (a ⊕ (b ⊕ a)) ⊕ c.

A gyrogroup (G, ⊕) is gyrocommutative if its binary operation obeys the gyrocommutative law: a ⊕ b = gyr[a, b](b ⊕ a). For relativistic velocity addition, this formula showing the role of rotation relating a + b and b + a was published in 1914 by Ludwik Silberstein.[7][8]

In every gyrogroup, a second operation can be defined called coaddition: a ⊞ b = a ⊕ gyr[a, ⊖b]b for all a, b ∈ G. Coaddition is commutative if the gyrogroup addition is gyrocommutative.

Relativistic velocities can be considered as points in the Beltrami–Klein model of hyperbolic geometry and so vector addition in the Beltrami–Klein model can be given by the velocity addition formula. In order for the formula to generalize to vector addition in hyperbolic space of dimensions greater than 3, the formula must be written in a form that avoids use of the cross product in favour of the dot product. In the general case, the Einstein velocity addition of two velocities u and v is given in coordinate-independent form as:

u ⊕ v = (1 / (1 + (u·v)/c²)) ( u + v/γ_u + (1/c²)(γ_u/(1 + γ_u))(u·v) u )

where γ_u is the gamma factor given by the equation γ_u = 1/√(1 − |u|²/c²). Using coordinates this becomes the same expression with u·v = u₁v₁ + u₂v₂ + u₃v₃ and γ_u = 1/√(1 − (u₁² + u₂² + u₃²)/c²).
Einstein velocity addition is commutative and associative only when u and v are parallel. In fact

u ⊕ v = gyr[u, v](v ⊕ u)  and  u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ gyr[u, v]w,

where "gyr" is the mathematical abstraction of Thomas precession into an operator called Thomas gyration, given by

gyr[u, v]w = ⊖(u ⊕ v) ⊕ (u ⊕ (v ⊕ w))

for all w. Thomas precession has an interpretation in hyperbolic geometry as the negative hyperbolic triangle defect.

If the 3 × 3 matrix form of the rotation applied to 3-coordinates is given by gyr[u, v], then the 4 × 4 matrix rotation applied to 4-coordinates, Gyr[u, v], is the block-diagonal matrix with 1 in the time-time entry and the 3 × 3 matrix gyr[u, v] in the spatial block. The composition of two Lorentz boosts B(u) and B(v) of velocities u and v is given by:[9][10]

B(u)B(v) = B(u ⊕ v)Gyr[u, v] = Gyr[u, v]B(v ⊕ u).

This fact that either B(u ⊕ v) or B(v ⊕ u) can be used, depending on whether you write the rotation before or after, explains the velocity composition paradox. The composition of two Lorentz transformations L(u, U) and L(v, V) which include rotations U and V is given by:[11]

L(u, U)L(v, V) = L(u ⊕ Uv, gyr[u, Uv]UV).

In the above, a boost can be represented as a 4 × 4 matrix. The boost matrix B(v) means the boost B that uses the components of v, i.e. v₁, v₂, v₃, in the entries of the matrix, or rather the components of v/c in the representation that is used in the section Lorentz transformation § Matrix forms. The matrix entries depend on the components of the 3-velocity v, and that is what the notation B(v) means. It could be argued that the entries depend on the components of the 4-velocity because 3 of the entries of the 4-velocity are the same as the entries of the 3-velocity, but the usefulness of parameterizing the boost by 3-velocity is that the resultant boost you get from the composition of two boosts uses the components of the 3-velocity composition u ⊕ v in the 4 × 4 matrix B(u ⊕ v). But the resultant boost also needs to be multiplied by a rotation matrix, because boost composition (i.e. the multiplication of two 4 × 4 matrices) results not in a pure boost but in a boost and a rotation, i.e.
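Einstein velocity addition can be checked numerically. The sketch below (my own, with c = 1) implements the standard coordinate-independent formula u ⊕ v = (1/(1 + u·v))(u + v/γ_u + (γ_u/(1 + γ_u))(u·v)u) and verifies that it reduces to the familiar one-dimensional formula (u + v)/(1 + uv) for parallel velocities while failing to commute in general:

```python
import numpy as np

def gamma(u):
    """Gamma factor for a velocity vector u with |u| < 1 (units where c = 1)."""
    return 1.0 / np.sqrt(1.0 - np.dot(u, u))

def einstein_add(u, v):
    """Einstein velocity addition u (+) v, coordinate-independent form, c = 1."""
    g = gamma(u)
    uv = np.dot(u, v)
    return (u + v / g + (g / (1.0 + g)) * uv * u) / (1.0 + uv)

u = np.array([0.5, 0.0, 0.0])
v = np.array([0.3, 0.0, 0.0])
w = np.array([0.0, 0.4, 0.0])

parallel = einstein_add(u, v)                      # expect (0.5 + 0.3)/(1 + 0.15) along x
ab, ba = einstein_add(u, w), einstein_add(w, u)    # non-parallel: differ by a gyration
```

Since gyr[u, w] is a rotation, u ⊕ w and w ⊕ u have equal magnitudes even though they point in different directions, which is exactly the gyrocommutative law at work.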
a 4 × 4 matrix that corresponds to the rotation Gyr[u, v], to get B(u)B(v) = B(u ⊕ v)Gyr[u, v] = Gyr[u, v]B(v ⊕ u).

Let s be any positive constant, let (V, +, ·) be any real inner product space and let V_s = {v ∈ V : |v| < s}. An Einstein gyrovector space (V_s, ⊕, ⊗) is an Einstein gyrogroup (V_s, ⊕) with scalar multiplication given by r ⊗ v = s tanh(r tanh⁻¹(|v|/s)) v/|v|, where r is any real number, v ∈ V_s, v ≠ 0, and r ⊗ 0 = 0, with the notation v ⊗ r = r ⊗ v. Einstein scalar multiplication does not distribute over Einstein addition except when the gyrovectors are colinear (monodistributivity), but it has other properties of vector spaces: for any positive integer n and for all real numbers r, r₁, r₂ and v ∈ V_s:

n ⊗ v = v ⊕ ⋯ ⊕ v  (n terms),
(r₁ + r₂) ⊗ v = r₁ ⊗ v ⊕ r₂ ⊗ v  (scalar distributive law),
(r₁r₂) ⊗ v = r₁ ⊗ (r₂ ⊗ v)  (scalar associative law).

The Möbius transformation of the open unit disc in the complex plane is given by the polar decomposition

z ↦ e^{iθ} (a + z)/(1 + a̅z).

To generalize this to higher dimensions the complex numbers are considered as vectors in the plane R², and Möbius addition is rewritten in vector form as:

u ⊕ v = ( (1 + (2/s²) u·v + (1/s²)|v|²) u + (1 − (1/s²)|u|²) v ) / ( 1 + (2/s²) u·v + (1/s⁴)|u|²|v|² ).

This gives the vector addition of points in the Poincaré ball model of hyperbolic geometry, where the radius s = 1 for the complex unit disc now becomes any s > 0.

Let s be any positive constant, let (V, +, ·) be any real inner product space and let V_s = {v ∈ V : |v| < s}. A Möbius gyrovector space (V_s, ⊕, ⊗) is a Möbius gyrogroup (V_s, ⊕) with scalar multiplication given by r ⊗ v = s tanh(r tanh⁻¹(|v|/s)) v/|v|, where r is any real number, v ∈ V_s, v ≠ 0, and r ⊗ 0 = 0, with the notation v ⊗ r = r ⊗ v. Möbius scalar multiplication coincides with Einstein scalar multiplication (see section above) and this stems from Möbius addition and Einstein addition coinciding for vectors that are parallel.
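The vector form of Möbius addition can be checked against the complex disc formula a ⊕ z = (a + z)/(1 + a̅z). The sketch below (my own, using the standard s = 1 vector formula from Ungar's work) identifies complex numbers with vectors in R² and confirms that the two expressions agree:

```python
import numpy as np

def mobius_add(u, v):
    """Mobius addition on the open unit ball of R^n (s = 1)."""
    uv, u2, v2 = np.dot(u, v), np.dot(u, u), np.dot(v, v)
    num = (1 + 2 * uv + v2) * u + (1 - u2) * v
    return num / (1 + 2 * uv + u2 * v2)

def mobius_add_complex(a, z):
    """Mobius addition on the complex unit disc: a (+) z = (a + z)/(1 + conj(a) z)."""
    return (a + z) / (1 + np.conj(a) * z)

a, z = 0.3 + 0.4j, -0.2 + 0.1j
vec = mobius_add(np.array([a.real, a.imag]), np.array([z.real, z.imag]))
cpx = mobius_add_complex(a, z)
```

The agreement of the two computations illustrates why the vector formula is the correct higher-dimensional generalization of the disc case.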
A proper velocity space model of hyperbolic geometry is given by proper velocities with vector addition given by the proper velocity addition formula:[6][12][13]

u ⊕ v = u + v + ( (β_u/(1 + β_u)) (u·v)/c² + (1 − β_v)/β_v ) u,

where β_w is the beta factor given by β_w = 1/√(1 + |w|²/c²). This formula provides a model that uses a whole space, compared to other models of hyperbolic geometry which use discs or half-planes. A proper velocity gyrovector space is a real inner product space V, with the proper velocity gyrogroup addition ⊕_U and with scalar multiplication defined by r ⊗ v = s sinh(r sinh⁻¹(|v|/s)) v/|v|, where r is any real number, v ∈ V, v ≠ 0, and r ⊗ 0 = 0, with the notation v ⊗ r = r ⊗ v.

A gyrovector space isomorphism preserves gyrogroup addition, scalar multiplication and the inner product. The three gyrovector spaces Möbius, Einstein and Proper Velocity are isomorphic. If M, E and U are Möbius, Einstein and Proper Velocity gyrovector spaces respectively, with elements v_m, v_e and v_u, then the isomorphisms between them can be tabulated. From this table the relation between ⊕_E and ⊕_M is given by the equations:

u ⊕_E v = 2 ⊗ (½ ⊗ u ⊕_M ½ ⊗ v)

u ⊕_M v = ½ ⊗ (2 ⊗ u ⊕_E 2 ⊗ v)

This is related to the connection between Möbius transformations and Lorentz transformations. Gyrotrigonometry is the use of gyroconcepts to study hyperbolic triangles.
Hyperbolic trigonometry as usually studied uses the hyperbolic functions cosh, sinh, etc., and this contrasts with spherical trigonometry, which uses the Euclidean trigonometric functions cos, sin, but with spherical triangle identities instead of ordinary plane triangle identities. Gyrotrigonometry takes the approach of using the ordinary trigonometric functions, but in conjunction with gyrotriangle identities.

The study of triangle centers traditionally is concerned with Euclidean geometry, but triangle centers can also be studied in hyperbolic geometry. Using gyrotrigonometry, expressions for trigonometric barycentric coordinates can be calculated that have the same form for both Euclidean and hyperbolic geometry. In order for the expressions to coincide, the expressions must not encapsulate the specification of the angle sum being 180 degrees.[14][15][16]

Using gyrotrigonometry, a gyrovector addition can be found which operates according to the gyroparallelogram law. This is the coaddition to the gyrogroup operation. Gyroparallelogram addition is commutative. The gyroparallelogram law is similar to the parallelogram law in that a gyroparallelogram is a hyperbolic quadrilateral the two gyrodiagonals of which intersect at their gyromidpoints, just as a parallelogram is a Euclidean quadrilateral the two diagonals of which intersect at their midpoints.[17]

Bloch vectors, which belong to the open unit ball of the Euclidean 3-space, can be studied with Einstein addition[18] or Möbius addition.[6]

A review of one of the earlier gyrovector books[19] says the following: "Over the years, there have been a handful of attempts to promote the non-Euclidean style for use in problem solving in relativity and electrodynamics, the failure of which to attract any substantial following, compounded by the absence of any positive results must give pause to anyone considering a similar undertaking. Until recently, no one was in a position to offer an improvement on the tools available since 1912.
In his new book, Ungar furnishes the crucial missing element from the panoply of the non-Euclidean style: an elegant nonassociative algebraic formalism that fully exploits the structure of Einstein's law of velocity composition."[20]
https://en.wikipedia.org/wiki/Gyrogroup
In geometry, a Moufang plane, named for Ruth Moufang, is a type of projective plane, more specifically a special type of translation plane. A translation plane is a projective plane that has a translation line, that is, a line with the property that the group of automorphisms that fixes every point of the line acts transitively on the points of the plane not on the line.[1] A translation plane is Moufang if every line of the plane is a translation line.[2] A Moufang plane can also be described as a projective plane in which the little Desargues theorem holds.[3] This theorem states that a restricted form of Desargues' theorem holds for every line in the plane.[4] For example, every Desarguesian plane is a Moufang plane.[5] In algebraic terms, a projective plane over any alternative division ring is a Moufang plane,[6] and this gives a 1:1 correspondence between isomorphism classes of alternative division rings and of Moufang planes. As a consequence of the algebraic Artin–Zorn theorem, that every finite alternative division ring is a field, every finite Moufang plane is Desarguesian, but some infinite Moufang planes are non-Desarguesian planes. In particular, the Cayley plane, an infinite Moufang projective plane over the octonions, is one of these because the octonions do not form an associative division ring.[7] The following conditions on a projective plane P are equivalent:[8] every line of P is a translation line; the little Desargues theorem holds in P; and P can be coordinatized by an alternative division ring.
https://en.wikipedia.org/wiki/Moufang_plane
In mathematics, Moufang polygons are a generalization by Jacques Tits of the Moufang planes studied by Ruth Moufang, and are irreducible buildings of rank two that admit the action of root groups. In a book on the topic, Tits and Richard Weiss[1] classify them all. An earlier theorem, proved independently by Tits and Weiss,[2][3] showed that a Moufang polygon must be a generalized 3-gon, 4-gon, 6-gon, or 8-gon, so the purpose of the aforementioned book was to analyze these four cases.

A Moufang 3-gon can be identified with the incidence graph of a Moufang projective plane. In this identification, the points and lines of the plane correspond to the vertices of the building. Real forms of Lie groups give rise to examples which are the three main types of Moufang 3-gons. There are four real division algebras: the real numbers, the complex numbers, the quaternions, and the octonions, of dimensions 1, 2, 4 and 8, respectively. The projective plane over such a division algebra then gives rise to a Moufang 3-gon. These projective planes correspond to the building attached to SL3(R), SL3(C), a real form of A5 and to a real form of E6, respectively. In the first diagram the circled nodes represent 1-spaces and 2-spaces in a three-dimensional vector space. In the second diagram the circled nodes represent 1-spaces and 2-spaces in a 3-dimensional vector space over the quaternions, which in turn represent certain 2-spaces and 4-spaces in a 6-dimensional complex vector space, as expressed by the circled nodes in the A5 diagram. The fourth case, a form of E6, is exceptional, and its analogue for Moufang 4-gons is a major feature of Weiss's book. Going from the real numbers to an arbitrary field, Moufang 3-gons can be divided into three cases as above. The split case in the first diagram exists over any field.
The second case extends to all associative, non-commutative division algebras; over the reals these are limited to the algebra of quaternions, which has degree 2 (and dimension 4), but some fields admit central division algebras of other degrees. The third case involves "alternative" division algebras (which satisfy a weakened form of the associative law), and a theorem of Richard Bruck and Erwin Kleinfeld[4] shows that these are Cayley–Dickson algebras.[5] This concludes the discussion of Moufang 3-gons.

Moufang 4-gons are also called Moufang quadrangles. The classification of Moufang 4-gons was the hardest of all, and when Tits and Weiss started to write it up, a hitherto unnoticed type came into being, arising from groups of type F4. Moufang 4-gons can be divided into three classes. There is some overlap here, in the sense that some classical groups arising from pseudo-quadratic spaces can be obtained from quadrangular algebras (which Weiss calls special), but there are other, non-special ones. The most important of these arise from algebraic groups of types E6, E7, and E8. They are k-forms of algebraic groups belonging to the diagrams E6, E7 and E8. The E6 one exists over the real numbers, though the E7 and E8 ones do not. Weiss calls the quadrangular algebras in all these cases regular, but not special. There is a further type that he calls defective, arising from groups of type F4. These are the most exotic of all; they involve purely inseparable field extensions in characteristic 2, and Weiss only discovered them during the joint work with Tits on the classification of Moufang 4-gons, by investigating a strange lacuna that should not have existed but did. The classification of Moufang 4-gons by Tits and Weiss is related to their intriguing monograph in two ways. One is that the use of quadrangular algebras short-cuts some of the methods known before.
The other is that the concept is an analogue of the octonion algebras and quadratic Jordan division algebras of degree 3 that give rise to Moufang 3-gons and 6-gons. In fact all the exceptional Moufang planes, quadrangles, and hexagons that do not arise from "mixed groups" (of characteristic 2 for quadrangles or characteristic 3 for hexagons) come from octonions, quadrangular algebras, or Jordan algebras.

Moufang 6-gons are also called Moufang hexagons. A classification of Moufang 6-gons was stated by Tits,[6] though the details remained unproven until the joint work with Weiss on Moufang Polygons. Moufang 8-gons are also called Moufang octagons. They were classified by Tits,[7] who showed that they all arise from Ree groups of type 2F4.

A potential use for quadrangular algebras is to analyze two open questions. One is the Kneser–Tits conjecture[8] that concerns the full group of linear transformations of a building (e.g. GLn) factored out by the subgroup generated by root groups (e.g. SLn). The conjecture is proved for all Moufang buildings except the 6-gons and 4-gons of type E8, in which case the group of linear transformations is conjectured to be equal to the subgroup generated by root groups. For the E8 hexagons this can be rephrased as a question on quadratic Jordan algebras, and for the E8 quadrangles it can now be rephrased in terms of quadrangular algebras. Another open question about the E8 quadrangle concerns fields that are complete with respect to a discrete valuation: is there, in such cases, an affine building that yields the quadrangle as its structure at infinity?
https://en.wikipedia.org/wiki/Moufang_polygon
Square root algorithms compute the non-negative square root √S of a positive real number S. Since all square roots of natural numbers, other than of perfect squares, are irrational,[1] square roots can usually only be computed to some finite precision: these algorithms typically construct a series of increasingly accurate approximations. Most square root computation methods are iterative: after choosing a suitable initial estimate of √S, an iterative refinement is performed until some termination criterion is met. One refinement scheme is Heron's method, a special case of Newton's method. If division is much more costly than multiplication, it may be preferable to compute the inverse square root instead. Other methods are available to compute the square root digit by digit, or using Taylor series. Rational approximations of square roots may be calculated using continued fraction expansions.

The method employed depends on the needed accuracy, and the available tools and computational power. The methods may be roughly classified as those suitable for mental calculation, those usually requiring at least paper and pencil, and those which are implemented as programs to be executed on a digital electronic computer or other computing device. Algorithms may take into account convergence (how many iterations are required to achieve a specified precision), computational complexity of individual operations (e.g. division) or iterations, and error propagation (the accuracy of the final result). A few methods, like paper-and-pencil synthetic division and series expansion, do not require a starting value. In some applications, an integer square root is required, which is the square root rounded or truncated to the nearest integer (a modified procedure may be employed in this case).
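Heron's method mentioned above can be sketched in a few lines (my own sketch; the tolerance handling is a choice, not part of the classical method). Each step replaces the estimate x with the average of x and S/x, which is Newton's method applied to f(x) = x² − S:

```python
def heron_sqrt(S, tolerance=1e-12):
    """Heron's method: repeatedly average the estimate x with S/x.

    Equivalent to Newton's method on f(x) = x^2 - S; converges quadratically.
    """
    x = S if S >= 1 else 1.0          # crude initial estimate
    while abs(x * x - S) > tolerance * S:
        x = 0.5 * (x + S / x)
    return x
```

Because convergence is quadratic, the number of correct digits roughly doubles with each iteration once the estimate is close, which is why a good seed (discussed below) matters mainly for the first few steps.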
Procedures for finding square roots (particularly the square root of 2) have been known since at least the period of ancient Babylon in the 17th century BCE. Babylonian mathematicians calculated the square root of 2 to three sexagesimal "digits" after the 1, but it is not known exactly how. They knew how to approximate a hypotenuse using

√(a² + b²) ≈ a + b²/(2a)

(giving for example 41/60 + 15/3600 for the diagonal of a gate whose height is 40/60 rods and whose width is 10/60 rods) and they may have used a similar approach for finding the approximation of √2.[2] Heron's method from first-century Egypt was the first ascertainable algorithm for computing the square root.[3] Modern analytic methods began to be developed after the introduction of the Arabic numeral system to western Europe in the early Renaissance.[citation needed] Today, nearly all computing devices have a fast and accurate square root function, either as a programming language construct, a compiler intrinsic or library function, or as a hardware operator, based on one of the described procedures.

Many iterative square root algorithms require an initial seed value. The seed must be a non-zero positive number; it should be between 1 and S, the number whose square root is desired, because the square root must be in that range. If the seed is far away from the root, the algorithm will require more iterations. If one initializes with x₀ = 1 (or S), then approximately (1/2)|log₂ S| iterations will be wasted just getting the order of magnitude of the root. It is therefore useful to have a rough estimate, which may have limited accuracy but is easy to calculate. In general, the better the initial estimate, the faster the convergence.
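The Babylonian gate example can be reproduced exactly with rational arithmetic. The sketch below (mine) checks that a + b²/(2a) for a height of 40/60 rods and a width of 10/60 rods gives precisely the quoted 41/60 + 15/3600:

```python
from fractions import Fraction

def babylonian_hypotenuse(a, b):
    """Approximate sqrt(a^2 + b^2) as a + b^2/(2a), accurate when b is small relative to a."""
    return a + b * b / (2 * a)

# The gate example: height 40/60 rods, width 10/60 rods.
height = Fraction(40, 60)
width = Fraction(10, 60)
approx = babylonian_hypotenuse(height, width)
exact = (float(height) ** 2 + float(width) ** 2) ** 0.5
```

The exact diagonal is √17/6 ≈ 0.68718 rods, while the Babylonian estimate is 2475/3600 = 0.6875 rods, an error of about 0.0003, impressive for a first-order approximation.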
For Newton's method, a seed somewhat larger than the root will converge slightly faster than a seed somewhat smaller than the root. In general, an estimate is made over an arbitrary interval known to contain the root (such as [x₀, S/x₀]). The estimate is a specific value of a functional approximation to f(x) = √x over the interval. Obtaining a better estimate involves either obtaining tighter bounds on the interval, or finding a better functional approximation to f(x). The latter usually means using a higher-order polynomial in the approximation, though not all approximations are polynomial. Common methods of estimating include scalar, linear, hyperbolic and logarithmic. A decimal base is usually used for mental or paper-and-pencil estimating. A binary base is more suitable for computer estimates. In estimating, the exponent and mantissa are usually treated separately, as if the number were expressed in scientific notation.

Typically the number S is expressed in scientific notation as a × 10^(2n), where 1 ≤ a < 100 and n is an integer, and the range of possible square roots is √a × 10^n, where 1 ≤ √a < 10.

Scalar methods divide the range into intervals, and the estimate in each interval is represented by a single scalar number. If the range is considered as a single interval, the arithmetic mean (5.5) or geometric mean (√10 ≈ 3.16) times 10^n are plausible estimates. The absolute and relative error for these will differ. In general, a single scalar will be very inaccurate. Better estimates divide the range into two or more intervals, but scalar estimates have inherently low accuracy.
For two intervals, divided geometrically, the square root √S = √a × 10^n can be estimated as[Note 1]

√S ≈ 2·10^n if a < 10,
√S ≈ 6·10^n if a ≥ 10.

This estimate has maximum absolute error of 4·10^n at a = 100, and maximum relative error of 100% at a = 1. For example, for S = 125348 factored as 12.5348 × 10^4, the estimate is √S ≈ 6·10² = 600. √125348 = 354.0, so the estimate has an absolute error of 246 and a relative error of almost 70%.

A better estimate, and the standard method used, is a linear approximation to the function y = x² over a small arc. If, as above, powers of the base are factored out of the number S and the interval reduced to [1,100], a secant line spanning the arc, or a tangent line somewhere along the arc, may be used as the approximation, but a least-squares regression line intersecting the arc will be more accurate. A least-squares regression line minimizes the average difference between the estimate and the value of the function. Its equation is y = 8.7x − 10. Reordering, x = 0.115y + 1.15. Rounding the coefficients for ease of computation,

√S ≈ (a/10 + 1.2)·10^n

That is the best estimate on average that can be achieved with a single-piece linear approximation of the function y = x² in the interval [1,100]. It has a maximum absolute error of 1.2 at a = 100, and maximum relative error of 30% at S = 1 and 10.[Note 2] To divide by 10, subtract one from the exponent of a, or figuratively move the decimal point one digit to the left.
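As a sketch (with an illustrative function name), the rounded linear estimate √S ≈ (a/10 + 1.2)·10^n can be automated; the range reduction repeatedly factors out powers of 100 so that 1 ≤ a < 100:

```python
def linear_estimate(S):
    """Rough square-root estimate sqrt(S) ~= (a/10 + 1.2) * 10**n,
    where S = a * 10**(2n) with 1 <= a < 100.  A sketch of the
    single-piece linear approximation described above; S > 0 assumed."""
    a, n = float(S), 0
    while a >= 100:          # factor out 10**2 until a is in [1, 100)
        a /= 100
        n += 1
    while a < 1:
        a *= 100
        n -= 1
    return (a / 10 + 1.2) * 10 ** n

print(linear_estimate(125348))  # ≈ 245.3, versus the true root 354.0
```

The large error on this example is expected: a = 12.5348 lies near a = 10, where the single-line approximation has its worst relative error of about 30%.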
For this formulation, any additive constant 1 plus a small increment will make a satisfactory estimate, so remembering the exact number is not a burden. The approximation (rounded or not) using a single line spanning the range [1,100] is less than one significant digit of precision; the relative error is greater than 1/2², so less than 2 bits of information are provided. The accuracy is severely limited because the range is two orders of magnitude, quite large for this kind of estimation.

A much better estimate can be obtained by a piece-wise linear approximation: multiple line segments, each approximating some subarc of the original. The more line segments used, the better the approximation. The most common way is to use tangent lines; the critical choices are how to divide the arc and where to place the tangent points. An efficacious way to divide the arc from y = 1 to y = 100 is geometrically: for two intervals, the bounds of the intervals are the square root of the bounds of the original interval [1,100], i.e. [1,√100] and [√100,100]. For three intervals, the bounds are the cube roots of 100: [1,100^(1/3)], [100^(1/3),100^(2/3)], and [100^(2/3),100], etc. For two intervals, √100 = 10, a very convenient number. Tangent lines are easy to derive, and are located at x = √(1·√10) and x = √(√10·10). Their equations are y = 3.56x − 3.16 and y = 11.2x − 31.6. Inverting, the square roots are x = 0.28y + 0.89 and x = 0.089y + 2.8. Thus for S = a·10^(2n):

√S ≈ (0.28a + 0.89)·10^n if a < 10,
√S ≈ (0.089a + 2.8)·10^n if a ≥ 10.

The maximum absolute errors occur at the high points of the intervals, at a = 10 and 100, and are 0.54 and 1.7 respectively.
The maximum relative errors are at the endpoints of the intervals, at a = 1, 10 and 100, and are 17% in each case. 17% or 0.17 is larger than 1/10, so the method yields less than a decimal digit of accuracy.

In some cases, hyperbolic estimates may be efficacious, because a hyperbola is also a convex curve and may lie along an arc of y = x² better than a line. Hyperbolic estimates are more computationally complex, because they necessarily require a floating division. A near-optimal hyperbolic approximation to x² on the interval [1,100] is y = 190/(10 − x) − 20. Transposing, the square root is x = 10 − 190/(y + 20). Thus for S = a·10^(2n):

√S ≈ (10 − 190/(a + 20))·10^n

The division need be accurate to only one decimal digit, because the estimate overall is only that accurate, and can be done mentally. This hyperbolic estimate is better on average than scalar or linear estimates. It has maximum absolute error of 1.58 at a = 100 and maximum relative error at a = 10, where the estimate of 3.67 is 16.0% higher than the root of 3.16. If instead one performed Newton–Raphson iterations beginning with an estimate of 10, it would take two iterations to get to 3.66, matching the hyperbolic estimate. For a more typical case like 75, the hyperbolic estimate of 8.00 is only 7.6% low, and 5 Newton–Raphson iterations starting at 75 would be required to obtain a more accurate result.

A method analogous to piece-wise linear approximation, but using only arithmetic instead of algebraic equations, uses the multiplication tables in reverse: the square root of a number between 1 and 100 is between 1 and 10, so if we know 25 is a perfect square (5 × 5), and 36 is a perfect square (6 × 6), then the square root of a number greater than or equal to 25 but less than 36 begins with a 5. Similarly for numbers between other squares.
This method will yield a correct first digit, but it is not accurate to one digit: the first digit of the square root of 35, for example, is 5, but the square root of 35 is almost 6. A better way is to divide the range into intervals halfway between the squares. So for any number between 25 and halfway to 36, which is 30.5, estimate 5; for any number greater than 30.5, up to 36, estimate 6.[Note 3] The procedure only requires a little arithmetic to find a boundary number in the middle of two products from the multiplication table. Here is a reference table of those boundaries:

a in [1, 2.5) → k = 1; a in [2.5, 6.5) → k = 2; a in [6.5, 12.5) → k = 3; a in [12.5, 20.5) → k = 4; a in [20.5, 30.5) → k = 5; a in [30.5, 42.5) → k = 6; a in [42.5, 56.5) → k = 7; a in [56.5, 72.5) → k = 8; a in [72.5, 90.5) → k = 9; a in [90.5, 100) → k = 10.

The final operation is to multiply the estimate k by the power of ten divided by 2, so for S = a·10^(2n), √S ≈ k·10^n.

The method implicitly yields one significant digit of accuracy, since it rounds to the best first digit. The method can be extended to 3 significant digits in most cases, by interpolating between the nearest squares bounding the operand. If k² ≤ a < (k+1)², then √a is approximately k plus a fraction, the difference between a and k² divided by the difference between the two squares:

√a ≈ k + R, where R = (a − k²) / ((k+1)² − k²)

The final operation, as above, is to multiply the result by the power of ten divided by 2: √S = √a·10^n ≈ (k + R)·10^n.

k is a decimal digit and R is a fraction that must be converted to decimal. It usually has only a single digit in the numerator, and one or two digits in the denominator, so the conversion to decimal can be done mentally.

Example: find the square root of 75. 75 = 75 × 10^(2·0), so a is 75 and n is 0. From the multiplication tables, the square root of the mantissa must be 8 point something, because a is between 8×8 = 64 and 9×9 = 81, so k is 8; something is the decimal representation of R.
The fraction R is (75 − k²) = 11, the numerator, over (81 − k²) = 17, the denominator. 11/17 is a little less than 12/18 = 2/3 = .67, so guess .66 (it's okay to guess here; the error is very small). The final estimate is 8 + .66 = 8.66. √75 to three significant digits is 8.66, so the estimate is good to 3 significant digits. Not all such estimates using this method will be so accurate, but they will be close.

When working in the binary numeral system (as computers do internally), by expressing S as a × 2^(2n) where 0.1₂ ≤ a < 10₂ (that is, 0.5 ≤ a < 2), the square root √S = √a × 2^n can be estimated as

√S ≈ (0.485 + 0.485·a)·2^n

which is the least-squares regression line with coefficients rounded to 3 significant digits. The estimate of √a has maximum absolute error of 0.0408 at a = 2, and maximum relative error of 3.0% at a = 1. A computationally convenient rounded estimate (because the coefficients are powers of 2) is

√S ≈ (0.5 + 0.5·a)·2^n

which has maximum absolute error of 0.086 at a = 2, and maximum relative error of 6.1% at a = 0.5 and a = 2.0. For S = 125348 = 1 1110 1001 1010 0100₂ = 1.1110 1001 1010 0100₂ × 2^16, the binary approximation gives √S ≈ (0.5 + 0.5·a)·2^8 = 1.0111 0100 1101 0010₂ · 1 0000 0000₂ = 1.456·256 = 372.8. √125348 = 354.0, so the estimate has an absolute error of 19 and a relative error of 5.3%. The relative error is a little less than 1/2⁴, so the estimate is good to 4+ bits.
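The binary estimate above can be sketched in a few lines. This illustration (function name is mine) uses math.frexp for the range reduction S = a·2^(2n) with 0.5 ≤ a < 2, and applies the power-of-two-rounded line (0.5 + 0.5a)·2^n:

```python
import math

def binary_estimate(S):
    """Rough binary estimate sqrt(S) ~= (0.5 + 0.5*a) * 2**n,
    for S = a * 2**(2n) with 0.5 <= a < 2.  S > 0 assumed."""
    m, e = math.frexp(S)            # S = m * 2**e with 0.5 <= m < 1
    if e % 2:                       # make the exponent even: S = a * 2**(2n)
        a, n = 2 * m, (e - 1) // 2  # a in [1, 2)
    else:
        a, n = m, e // 2            # a in [0.5, 1)
    return (0.5 + 0.5 * a) * 2 ** n

print(binary_estimate(125348))  # ≈ 372.8, versus the true root 354.0
```

This kind of shift-and-add estimate is cheap enough to seed an iterative method on hardware without a fast divider.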
An estimate for √a good to 8 bits can be obtained by table lookup on the high 8 bits of a, remembering that the high bit is implicit in most floating point representations, and the bottom bit of the 8 should be rounded. The table is 256 bytes of precomputed 8-bit square root values. For example, for the index 11101101₂ representing 1.8515625₁₀, the entry is 10101110₂ representing 1.359375₁₀, the square root of 1.8515625₁₀ to 8-bit precision (2+ decimal digits).

The first explicit algorithm for approximating √S is known as Heron's method, after the first-century Greek mathematician Hero of Alexandria, who described the method in his AD 60 work Metrica.[3] This method is also called the Babylonian method (not to be confused with the Babylonian method for approximating hypotenuses), although there is no evidence that the method was known to Babylonians. Given a positive real number S, let x₀ > 0 be any positive initial estimate. Heron's method consists in iteratively computing

x_{n+1} = (x_n + S/x_n) / 2

until the desired accuracy is achieved. The sequence (x₀, x₁, x₂, x₃, …) defined by this equation converges to √S. This is equivalent to using Newton's method to solve x² − S = 0. This algorithm is quadratically convergent: the number of correct digits of x_n roughly doubles with each iteration.
The basic idea is that if x is an overestimate to the square root of a non-negative real number S, then S/x will be an underestimate, and vice versa, so the average of these two numbers may reasonably be expected to provide a better approximation (though the formal proof of that assertion depends on the inequality of arithmetic and geometric means, which shows this average is always an overestimate of the square root, as noted in the article on square roots, thus assuring convergence). More precisely, if x is our initial guess of √S and ε is the error in our estimate such that S = (x + ε)², then we can expand the binomial as

(x + ε)² = x² + 2xε + ε²

and solve for the error term

ε = (S − x²) / (2x + ε) ≈ (S − x²) / (2x), since ε ≪ x.

Therefore, we can compensate for the error and update our old estimate as

x + ε ≈ x + (S − x²)/(2x) = (S + x²)/(2x) = (S/x + x)/2 ≡ x_revised.

Since the computed error was not exact, this is not the actual answer, but it becomes our new guess to use in the next round of correction. The process of updating is iterated until the desired accuracy is obtained. This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; one can, for example, construct a sequence of rational numbers by this method that converges to +3 in the reals, but to −3 in the 2-adics.
To calculateS{\displaystyle {\sqrt {S\,}}}forS=125348{\displaystyle S=125348}to seven significant figures, use the rough estimation method above to getx0=6⋅102=600x1=12(x0+Sx0)=12(600.1+125348600)=404.457≈400x2=12(x1+Sx1)=12(400.1+125348400)=356.685≈360x3=12(x2+Sx2)=12(360.1+125348360)=354.094≈354.1x4=12(x3+Sx3)=12(354.1+125348354.1)=354.045199{\displaystyle {\begin{alignedat}{5}x_{0}&=6\cdot 10^{2}&&&&=600\\[0.3em]x_{1}&={\frac {1}{2}}\left(x_{0}+{\frac {S}{x_{0}}}\right)&&={\frac {1}{2}}\left(600{\phantom {.1}}+{\frac {125348}{600}}\right)&&=404.457\approx 400\\[0.3em]x_{2}&={\frac {1}{2}}\left(x_{1}+{\frac {S}{x_{1}}}\right)&&={\frac {1}{2}}\left(400{\phantom {.1}}+{\frac {125348}{400}}\right)&&=356.685\approx 360\\[0.3em]x_{3}&={\frac {1}{2}}\left(x_{2}+{\frac {S}{x_{2}}}\right)&&={\frac {1}{2}}\left(360{\phantom {.1}}+{\frac {125348}{360}}\right)&&=354.094\approx 354.1\\[0.3em]x_{4}&={\frac {1}{2}}\left(x_{3}+{\frac {S}{x_{3}}}\right)&&={\frac {1}{2}}\left(354.1+{\frac {125348}{354.1}}\right)&&=354.045199\end{alignedat}}} Therefore125348≈354.0452{\displaystyle {\sqrt {\,125348\,}}\approx 354.0452}to seven significant figures. (The true value is 354.0451948551....) Notice that early iterations only needed to be computed to 1, 2 or 4 places to produce an accurate final answer. 
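The iteration in the example above is straightforward to code. The following is a minimal sketch, not a production routine (the function name and the relative-difference stopping rule are choices of this sketch):

```python
import math

def heron_sqrt(S, x0, rel_tol=1e-9):
    """Heron's (Babylonian) method: repeatedly replace x with the
    average of x and S/x until successive iterates agree to rel_tol."""
    x = x0
    while True:
        nxt = 0.5 * (x + S / x)
        if abs(nxt - x) <= rel_tol * nxt:
            return nxt
        x = nxt

print(heron_sqrt(125348, 600))  # ≈ 354.0451948551, as in the worked example
```

Because convergence is quadratic, the seed 600 from the rough scalar estimate reaches full double precision in about five or six iterations.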
Suppose thatx0>0andS>0.{\displaystyle \ x_{0}>0~~{\mathsf {and}}~~S>0~.}Then for any natural numbern:xn>0.{\displaystyle \ n:x_{n}>0~.}Let therelative errorinxn{\displaystyle \ x_{n}\ }be defined byεn=xnS−1>−1{\displaystyle \ \varepsilon _{n}={\frac {~x_{n}\ }{\ {\sqrt {S~}}\ }}-1>-1\ }and thusxn=S⋅(1+εn).{\displaystyle \ x_{n}={\sqrt {S~}}\cdot \left(1+\varepsilon _{n}\right)~.} Then it can be shown thatεn+1=εn22(1+εn)≥0.{\displaystyle \ \varepsilon _{n+1}={\frac {\varepsilon _{n}^{2}}{2(1+\varepsilon _{n})}}\geq 0~.} And thus thatεn+2≤min{εn+122,εn+12}{\displaystyle \ \varepsilon _{n+2}\leq \min \left\{\ {\frac {\ \varepsilon _{n+1}^{2}\ }{2}},{\frac {\ \varepsilon _{n+1}\ }{2}}\ \right\}\ }and consequently that convergence is assured, andquadratic. If using the rough estimate above with the Babylonian method, then the least accurate cases in ascending order are as follows:S=1;x0=2;x1=1.250;ε1=0.250.S=10;x0=2;x1=3.500;ε1<0.107.S=10;x0=6;x1=3.833;ε1<0.213.S=100;x0=6;x1=11.333;ε1<0.134.{\displaystyle {\begin{aligned}S&=\ 1\ ;&x_{0}&=\ 2\ ;&x_{1}&=\ 1.250\ ;&\varepsilon _{1}&=\ 0.250~.\\S&=\ 10\ ;&x_{0}&=\ 2\ ;&x_{1}&=\ 3.500\ ;&\varepsilon _{1}&<\ 0.107~.\\S&=\ 10\ ;&x_{0}&=\ 6\ ;&x_{1}&=\ 3.833\ ;&\varepsilon _{1}&<\ 0.213~.\\S&=\ 100\ ;&x_{0}&=\ 6\ ;&x_{1}&=\ 11.333\ ;&\varepsilon _{1}&<\ 0.134~.\end{aligned}}} Thus in any case,ε1≤2−2.ε2<2−5<10−1.ε3<2−11<10−3.ε4<2−23<10−6.ε5<2−47<10−14.ε6<2−95<10−28.ε7<2−191<10−57.ε8<2−383<10−115.{\displaystyle {\begin{aligned}\varepsilon _{1}&\leq 2^{-2}.\\\varepsilon _{2}&<2^{-5}<10^{-1}~.\\\varepsilon _{3}&<2^{-11}<10^{-3}~.\\\varepsilon _{4}&<2^{-23}<10^{-6}~.\\\varepsilon _{5}&<2^{-47}<10^{-14}~.\\\varepsilon _{6}&<2^{-95}<10^{-28}~.\\\varepsilon _{7}&<2^{-191}<10^{-57}~.\\\varepsilon _{8}&<2^{-383}<10^{-115}~.\end{aligned}}} Rounding errors will slow the convergence. 
It is recommended to keep at least one extra digit beyond the desired accuracy of thexn{\displaystyle \ x_{n}\ }being calculated, to avoid significant round-off error. This method for finding an approximation to a square root was described in an AncientIndianmanuscript, called theBakhshali manuscript. It is algebraically equivalent to two iterations of Heron's method and thus quartically convergent, meaning that the number of correct digits of the approximation roughly quadruples with each iteration.[4]The original presentation, using modern notation, is as follows: To calculateS{\displaystyle {\sqrt {S}}}, letx02{\displaystyle x_{0}^{2}}be the initial approximation toS{\displaystyle S}. Then, successively iterate as:an=S−xn22xn,xn+1=xn+an,xn+2=xn+1−an22xn+1.{\displaystyle {\begin{aligned}a_{n}&={\frac {S-x_{n}^{2}}{2x_{n}}},\\x_{n+1}&=x_{n}+a_{n},\\x_{n+2}&=x_{n+1}-{\frac {a_{n}^{2}}{2x_{n+1}}}.\end{aligned}}} The valuesxn+1{\displaystyle x_{n+1}}andxn+2{\displaystyle x_{n+2}}are exactly the same as those computed by Heron's method. To see this, the second Heron's method step would computexn+2=xn+12+S2xn+1=xn+1+S−xn+122xn+1{\displaystyle x_{n+2}={\frac {x_{n+1}^{2}+S}{2x_{n+1}}}=x_{n+1}+{\frac {S-x_{n+1}^{2}}{2x_{n+1}}}}and we can use the definitions ofxn+1{\displaystyle x_{n+1}}andan{\displaystyle a_{n}}to rearrange the numerator into:S−xn+12=S−(xn+an)2=S−xn2−2xnan−an2=S−xn2−(S−xn2)−an2=−an2.{\displaystyle {\begin{aligned}S-x_{n+1}^{2}&=S-(x_{n}+a_{n})^{2}\\&=S-x_{n}^{2}-2x_{n}a_{n}-a_{n}^{2}\\&=S-x_{n}^{2}-(S-x_{n}^{2})-a_{n}^{2}\\&=-a_{n}^{2}.\end{aligned}}} This can be used to construct a rational approximation to the square root by beginning with an integer. 
Ifx0=N{\displaystyle x_{0}=N}is an integer chosen soN2{\displaystyle N^{2}}is close toS{\displaystyle S}, andd=S−N2{\displaystyle d=S-N^{2}}is the difference whose absolute value is minimized, then the first iteration can be written as:S≈N+d2N−d28N3+4Nd=8N4+8N2d+d28N3+4Nd=N4+6N2S+S24N3+4NS=N2(N2+6S)+S24N(N2+S).{\displaystyle {\sqrt {S}}\approx N+{\frac {d}{2N}}-{\frac {d^{2}}{8N^{3}+4Nd}}={\frac {8N^{4}+8N^{2}d+d^{2}}{8N^{3}+4Nd}}={\frac {N^{4}+6N^{2}S+S^{2}}{4N^{3}+4NS}}={\frac {N^{2}(N^{2}+6S)+S^{2}}{4N(N^{2}+S)}}.} The Bakhshali method can be generalized to the computation of an arbitrary root, including fractional roots.[5] One might think the second half of the Bakhshali method could be used as a simpler form of Heron's iteration and used repeatedly, e.g.an+1=−an22xn+1,xn+2=xn+1+an+1,an+2=−an+122xn+2,xn+3=xn+2+an+2,etc.{\displaystyle {\begin{aligned}a_{n+1}&={\frac {-a_{n}^{2}}{2x_{n+1}}},&x_{n+2}&=x_{n+1}+a_{n+1},\\a_{n+2}&={\frac {-a_{n+1}^{2}}{2x_{n+2}}},&x_{n+3}&=x_{n+2}+a_{n+2},{\text{ etc.}}\end{aligned}}}however, this isnumerically unstable. Without any reference to the original input valueS{\displaystyle S}, the accuracy is limited by that of the original computation ofan{\displaystyle a_{n}}, and that rapidly becomes inadequate. 
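One Bakhshali step (algebraically equal to two Heron steps, as shown above) can be sketched as follows; the function name is illustrative, and floating-point arithmetic is assumed:

```python
def bakhshali_step(S, x):
    """One Bakhshali iteration:
    a = (S - x^2)/(2x);  x' = x + a;  return x' - a^2/(2x')."""
    a = (S - x * x) / (2.0 * x)
    xp = x + a
    return xp - a * a / (2.0 * xp)

x = 600.0                      # rough scalar estimate for S = 125348
x = bakhshali_step(125348, x)  # ≈ 357.19 (worth two Heron steps)
x = bakhshali_step(125348, x)
print(x)                       # ≈ 354.045195
```

Two calls from the seed 600 already agree with √125348 = 354.0451948551… to about seven significant figures, consistent with quartic convergence.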
Using the same example S = 125348 as in the Heron's method example, the first iteration gives

x₀ = 600
a₀ = (125348 − 600²)/(2×600) = −195.5433 ≈ −200
x₁ = 600 + (−200) = 400
x₂ = 400 − (−200)²/(2×400) = 350

Likewise the second iteration gives

a₂ = (125348 − 350²)/(2×350) = 4.06857
x₃ = 350 + 4.06857 = 354.06857
x₄ = 354.06857 − 4.06857²/(2×354.06857) = 354.045194

Unlike in Heron's method, x₃ must be computed to 8 digits because the formula for x₄ does not correct any error in x₃.

This is a method to find each digit of the square root in a sequence. This method is based on the binomial theorem, and is basically an inverse algorithm solving (x + y)² = x² + 2xy + y². It is slower than the Babylonian method, but it has several advantages: Disadvantages are: Napier's bones include an aid for the execution of this algorithm. The shifting nth root algorithm is a generalization of this method.

First, consider the case of finding the square root of a number S that is the square of a base-10 two-digit number XY, where X is the tens digit and Y is the units digit. Specifically: S = (10X + Y)² = 100X² + 20XY + Y². S will consist of 3 or 4 decimal digits. Now to start the digit-by-digit algorithm, we split the digits of S in two groups of two digits, starting from the right. This means that the first group will be of 1 or 2 digits.
Then we determine the value ofXas the largest digit such thatX2is less than or equal to the first group. We then compute the difference between the first group andX2and start the second iteration by concatenating the second group to it. This is equivalent to subtracting100X2{\displaystyle 100X^{2}}fromS, and we're left withS′=20XY+Y2{\displaystyle S'=20XY+Y^{2}}. We divideS'by 10, then divide it by2Xand keep the integer part to try and guessY. We concatenate2Xwith the tentativeYand multiply it byY. If our guess is correct, this is equivalent to computing:(10(2X)+Y)Y=20XY+Y2=S′,{\displaystyle (10(2X)+Y)Y=20XY+Y^{2}=S',}and so the remainder, that is the difference betweenS'and the result, is zero; if the result is higher thanS', we lower our guess by 1 and try again until the remainder is 0. Since this is a simple case where the answer is a perfect square rootXY, the algorithm stops here. The same idea can be extended to any arbitrary square root computation next. Suppose we are able to find the square root ofSby expressing it as a sum ofnpositive numbers such thatS=(a1+a2+a3+⋯+an)2.{\displaystyle S=\left(a_{1}+a_{2}+a_{3}+\dots +a_{n}\right)^{2}.} By repeatedly applying the basic identity(x+y)2=x2+2xy+y2,{\displaystyle (x+y)^{2}=x^{2}+2xy+y^{2},}the right-hand-side term can be expanded as(a1+a2+a3+⋯+an)2=a12+2a1a2+a22+2(a1+a2)a3+a32+⋯+an−12+2(∑i=1n−1ai)an+an2=a12+[2a1+a2]a2+[2(a1+a2)+a3]a3+⋯+[2(∑i=1n−1ai)+an]an.{\displaystyle {\begin{aligned}&(a_{1}+a_{2}+a_{3}+\dotsb +a_{n})^{2}\\=&\,a_{1}^{2}+2a_{1}a_{2}+a_{2}^{2}+2(a_{1}+a_{2})a_{3}+a_{3}^{2}+\dots +a_{n-1}^{2}+2\left(\sum _{i=1}^{n-1}a_{i}\right)a_{n}+a_{n}^{2}\\=&\,a_{1}^{2}+[2a_{1}+a_{2}]a_{2}+[2(a_{1}+a_{2})+a_{3}]a_{3}+\dots +\left[2\left(\sum _{i=1}^{n-1}a_{i}\right)+a_{n}\right]a_{n}.\end{aligned}}} This expression allows us to find the square root by sequentially guessing the values ofai{\displaystyle a_{i}}s. 
Suppose that the numbersa1,…,am−1{\displaystyle a_{1},\ldots ,a_{m-1}}have already been guessed, then them-th term of the right-hand-side of the above summation is given byYm=[2Pm−1+am]am,{\displaystyle Y_{m}=\left[2P_{m-1}+a_{m}\right]a_{m},}wherePm−1=∑i=1m−1ai{\textstyle P_{m-1}=\sum _{i=1}^{m-1}a_{i}}is the approximate square root found so far. Now each new guessam{\displaystyle a_{m}}should satisfy the recursionXm=Xm−1−Ym,{\displaystyle X_{m}=X_{m-1}-Y_{m},}whereXm{\displaystyle X_{m}}is the sum of all the terms afterYm{\displaystyle Y_{m}}, i.e. the remainder, such thatXm≥0{\displaystyle X_{m}\geq 0}for all1≤m≤n,{\displaystyle 1\leq m\leq n,}with initializationX0=S.{\displaystyle X_{0}=S.}WhenXn=0,{\displaystyle X_{n}=0,}the exact square root has been found; if not, then the sum of theai{\displaystyle a_{i}}s gives a suitable approximation of the square root, withXn{\displaystyle X_{n}}being the approximation error. For example, in the decimal number system we haveS=(a1⋅10n−1+a2⋅10n−2+⋯+an−1⋅10+an)2,{\displaystyle S=\left(a_{1}\cdot 10^{n-1}+a_{2}\cdot 10^{n-2}+\cdots +a_{n-1}\cdot 10+a_{n}\right)^{2},}where10n−i{\displaystyle 10^{n-i}}are place holders and the coefficientsai∈{0,1,2,…,9}{\displaystyle a_{i}\in \{0,1,2,\ldots ,9\}}. 
At any m-th stage of the square root calculation, the approximate root found so far,Pm−1{\displaystyle P_{m-1}}and the summation termYm{\displaystyle Y_{m}}are given byPm−1=∑i=1m−1ai⋅10n−i=10n−m+1∑i=1m−1ai⋅10m−i−1,{\displaystyle P_{m-1}=\sum _{i=1}^{m-1}a_{i}\cdot 10^{n-i}=10^{n-m+1}\sum _{i=1}^{m-1}a_{i}\cdot 10^{m-i-1},}Ym=[2Pm−1+am⋅10n−m]am⋅10n−m=[20∑i=1m−1ai⋅10m−i−1+am]am⋅102(n−m).{\displaystyle Y_{m}=\left[2P_{m-1}+a_{m}\cdot 10^{n-m}\right]a_{m}\cdot 10^{n-m}=\left[20\sum _{i=1}^{m-1}a_{i}\cdot 10^{m-i-1}+a_{m}\right]a_{m}\cdot 10^{2(n-m)}.} Here since the place value ofYm{\displaystyle Y_{m}}is an even power of 10, we only need to work with the pair of most significant digits of the remainderXm−1{\displaystyle X_{m-1}}, whose first term isYm{\displaystyle Y_{m}}, at any m-th stage. The section below codifies this procedure. It is obvious that a similar method can be used to compute the square root in number systems other than the decimal number system. For instance, finding the digit-by-digit square root in the binary number system is quite efficient since the value ofai{\displaystyle a_{i}}is searched from a smaller set of binary digits {0,1}. This makes the computation faster since at each stage the value ofYm{\displaystyle Y_{m}}is eitherYm=0{\displaystyle Y_{m}=0}foram=0{\displaystyle a_{m}=0}orYm=2Pm−1+1{\displaystyle Y_{m}=2P_{m-1}+1}foram=1{\displaystyle a_{m}=1}. The fact that we have only two possible options foram{\displaystyle a_{m}}also makes the process of deciding the value ofam{\displaystyle a_{m}}atm-th stage of calculation easier. This is because we only need to check ifYm≤Xm−1{\displaystyle Y_{m}\leq X_{m-1}}foram=1.{\displaystyle a_{m}=1.}If this condition is satisfied, then we takeam=1{\displaystyle a_{m}=1}; if not thenam=0.{\displaystyle a_{m}=0.}Also, the fact that multiplication by 2 is done by left bit-shifts helps in the computation. Write the original number in decimal form. 
The numbers are written similar to thelong divisionalgorithm, and, as in long division, the root will be written on the line above. Now separate the digits into pairs, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the square. One digit of the root will appear above each pair of digits of the square. Beginning with the left-most pair of digits, do the following procedure for each pair: Find the square root of 152.2756. This section uses the formalism fromthe digit-by-digit calculation section above, with the slight variation that we letN2=(an+⋯+a0)2{\displaystyle N^{2}=(a_{n}+\dotsb +a_{0})^{2}}, with eacham=2m{\displaystyle a_{m}=2^{m}}oram=0{\displaystyle a_{m}=0}.We iterate all2m{\displaystyle 2^{m}}, from2n{\displaystyle 2^{n}}down to20{\displaystyle 2^{0}}, and build up an approximate solutionPm=an+an−1+…+am{\displaystyle P_{m}=a_{n}+a_{n-1}+\ldots +a_{m}}, the sum of allai{\displaystyle a_{i}}for which we have determined the value.To determine ifam{\displaystyle a_{m}}equals2m{\displaystyle 2^{m}}or0{\displaystyle 0}, we letPm=Pm+1+2m{\displaystyle P_{m}=P_{m+1}+2^{m}}. IfPm2≤N2{\displaystyle P_{m}^{2}\leq N^{2}}(i.e. the square of our approximate solution including2m{\displaystyle 2^{m}}does not exceed the target square) thenam=2m{\displaystyle a_{m}=2^{m}}, otherwiseam=0{\displaystyle a_{m}=0}andPm=Pm+1{\displaystyle P_{m}=P_{m+1}}.To avoid squaringPm{\displaystyle P_{m}}in each step, we store the differenceXm=N2−Pm2{\displaystyle X_{m}=N^{2}-P_{m}^{2}}and incrementally update it by settingXm=Xm+1−Ym{\displaystyle X_{m}=X_{m+1}-Y_{m}}withYm=Pm2−Pm+12=2Pm+1am+am2{\displaystyle Y_{m}=P_{m}^{2}-P_{m+1}^{2}=2P_{m+1}a_{m}+a_{m}^{2}}.Initially, we setan=Pn=2n{\displaystyle a_{n}=P_{n}=2^{n}}for the largestn{\displaystyle n}with(2n)2=4n≤N2{\displaystyle (2^{n})^{2}=4^{n}\leq N^{2}}. 
As an extra optimization, we storePm+12m+1{\displaystyle P_{m+1}2^{m+1}}and(2m)2{\displaystyle (2^{m})^{2}}, the two terms ofYm{\displaystyle Y_{m}}in case thatam{\displaystyle a_{m}}is nonzero, in separate variablescm{\displaystyle c_{m}},dm{\displaystyle d_{m}}:cm=Pm+12m+1{\displaystyle c_{m}=P_{m+1}2^{m+1}}dm=(2m)2{\displaystyle d_{m}=(2^{m})^{2}}Ym={cm+dmifam=2m0ifam=0{\displaystyle Y_{m}={\begin{cases}c_{m}+d_{m}&{\text{if }}a_{m}=2^{m}\\0&{\text{if }}a_{m}=0\end{cases}}} cm{\displaystyle c_{m}}anddm{\displaystyle d_{m}}can be efficiently updated in each step:cm−1=Pm2m=(Pm+1+am)2m=Pm+12m+am2m={cm/2+dmifam=2mcm/2ifam=0{\displaystyle c_{m-1}=P_{m}2^{m}=(P_{m+1}+a_{m})2^{m}=P_{m+1}2^{m}+a_{m}2^{m}={\begin{cases}c_{m}/2+d_{m}&{\text{if }}a_{m}=2^{m}\\c_{m}/2&{\text{if }}a_{m}=0\end{cases}}}dm−1=dm4{\displaystyle d_{m-1}={\frac {d_{m}}{4}}} Note that:c−1=P020=P0=N,{\displaystyle c_{-1}=P_{0}2^{0}=P_{0}=N,}which is the final result returned in the function below. An implementation of this algorithm in C:[6] Faster algorithms, in binary and decimal or any other base, can be realized by using lookup tables—in effect tradingmore storage space for reduced run time.[7] Pocket calculatorstypically implement good routines to compute theexponential functionand thenatural logarithm, and then compute the square root ofSusing the identity found using the properties of logarithms (ln⁡xn=nln⁡x{\displaystyle \ln x^{n}=n\ln x}) and exponentials (eln⁡x=x{\displaystyle e^{\ln x}=x}):[citation needed]S=e12ln⁡S.{\displaystyle {\sqrt {S}}=e^{{\frac {1}{2}}\ln S}.}The denominator in the fraction corresponds to thenth root. In the case above the denominator is 2, hence the equation specifies that the square root is to be found. The same identity is used when computing square roots withlogarithm tablesorslide rules. This method is applicable for finding the square root of0<S<3{\displaystyle 0<S<3\,\!}and converges best forS≈1{\displaystyle S\approx 1}. 
This, however, is no real limitation for a computer-based calculation, as in base 2 floating-point and fixed-point representations, it is trivial to multiply {\displaystyle S} by an integer power of 4, and therefore {\displaystyle {\sqrt {S}}} by the corresponding power of 2, by changing the exponent or by shifting, respectively. Therefore, {\displaystyle S} can be moved to the range {\displaystyle {\tfrac {1}{2}}\leq S<2}. Moreover, the following method does not employ general divisions, but only additions, subtractions, multiplications, and divisions by powers of two, which are again trivial to implement. A disadvantage of the method is that, in contrast to single-variable iterative methods such as the Babylonian one, numerical errors accumulate.

The initialization step of this method is

{\displaystyle {\begin{aligned}a_{0}&=S\\c_{0}&=S-1\end{aligned}}}

while the iterative steps read

{\displaystyle {\begin{aligned}a_{n+1}&=a_{n}-a_{n}c_{n}/2\\c_{n+1}&=c_{n}^{2}(c_{n}-3)/4\end{aligned}}}

Then {\displaystyle a_{n}\to {\sqrt {S}}} (while {\displaystyle c_{n}\to 0}). The convergence of {\displaystyle c_{n}}, and therefore also of {\displaystyle a_{n}}, is quadratic.

The proof of the method is rather easy. First, rewrite the iterative definition of {\displaystyle c_{n}} as

{\displaystyle 1+c_{n+1}=(1+c_{n})(1-{\tfrac {1}{2}}c_{n})^{2}.}

Then it is straightforward to prove by induction that

{\displaystyle S(1+c_{n})=a_{n}^{2}}

and therefore the convergence of {\displaystyle a_{n}} to the desired result {\displaystyle {\sqrt {S}}} is ensured by the convergence of {\displaystyle c_{n}} to 0, which in turn follows from {\displaystyle -1<c_{0}<2}.

This method was developed around 1950 by M. V. Wilkes, D. J. Wheeler and S. Gill[8] for use on EDSAC, one of the first electronic computers.[9] The method was later generalized, allowing the computation of non-square roots.[10]

The following are iterative methods for finding the reciprocal square root of S, which is {\displaystyle 1/{\sqrt {S}}}. Once it has been found, find {\displaystyle {\sqrt {S}}} by simple multiplication: {\displaystyle {\sqrt {S}}=S\cdot (1/{\sqrt {S}})}. These iterations involve only multiplication, and not division. They are therefore faster than the Babylonian method. However, they are not stable. If the initial value is not close to the reciprocal square root, the iterations will diverge away from it rather than converge to it. It can therefore be advantageous to perform an iteration of the Babylonian method on a rough estimate before starting to apply these methods.

Goldschmidt's algorithm is an extension of Goldschmidt division, named after Robert Elliot Goldschmidt,[11][12] which can be used to calculate square roots. Some computers use Goldschmidt's algorithm to simultaneously calculate {\displaystyle {\sqrt {S}}} and {\displaystyle 1/{\sqrt {S}}}. Goldschmidt's algorithm finds {\displaystyle {\sqrt {S}}} faster than Newton–Raphson iteration on a computer with a fused multiply–add instruction and either a pipelined floating-point unit or two independent floating-point units.[13]

The first way of writing Goldschmidt's algorithm begins with {\displaystyle b_{0}=S}, {\displaystyle Y_{0}\approx 1/{\sqrt {S}}}, {\displaystyle x_{0}=SY_{0}}, {\displaystyle y_{0}=Y_{0}} and iterates

{\displaystyle {\begin{aligned}b_{n+1}&=b_{n}Y_{n}^{2}\\Y_{n+1}&={\tfrac {1}{2}}(3-b_{n+1})\\x_{n+1}&=x_{n}Y_{n+1}\\y_{n+1}&=y_{n}Y_{n+1}\end{aligned}}}

until {\displaystyle b_{i}} is sufficiently close to 1, or a fixed number of iterations.
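The first form of Goldschmidt's iteration above can be sketched in C as follows. The function name and the caller-supplied starting estimate `y0` for {\displaystyle 1/{\sqrt {S}}} are assumptions for illustration; any estimate close enough to the true reciprocal square root works, and a fixed iteration count stands in for the convergence test on {\displaystyle b_{i}}.

```c
#include <math.h>

/* Sketch of Goldschmidt's algorithm (first form above).
   s  : the number whose square root is sought
   y0 : rough estimate of 1/sqrt(s), supplied by the caller
   If rsqrt_out is non-null, 1/sqrt(s) is also stored there. */
double goldschmidt_sqrt(double s, double y0, double *rsqrt_out) {
    double b = s;        /* b_0 = S            */
    double Y = y0;       /* Y_0 ~ 1/sqrt(S)    */
    double x = s * y0;   /* x_0 = S * Y_0      */
    double y = y0;       /* y_0 = Y_0          */
    for (int i = 0; i < 6; i++) {   /* quadratic convergence */
        b = b * Y * Y;              /* b_{n+1} = b_n * Y_n^2     */
        Y = 0.5 * (3.0 - b);        /* Y_{n+1} = (3 - b_{n+1})/2 */
        x = x * Y;                  /* x_{n+1} -> sqrt(S)        */
        y = y * Y;                  /* y_{n+1} -> 1/sqrt(S)      */
    }
    if (rsqrt_out)
        *rsqrt_out = y;
    return x;
}
```

Because the convergence is quadratic, a starting estimate good to a few percent reaches full double precision in about five iterations.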
The iterations converge to

{\displaystyle \lim _{n\to \infty }x_{n}={\sqrt {S}},\qquad \lim _{n\to \infty }y_{n}=1/{\sqrt {S}}.}

Note that it is possible to omit either {\displaystyle x_{n}} or {\displaystyle y_{n}} from the computation, and if both are desired then {\displaystyle x_{n}=Sy_{n}} may be used at the end rather than computing it through in each iteration.

A second form, using fused multiply–add operations, begins with {\displaystyle y_{0}\approx 1/{\sqrt {S}}}, {\displaystyle x_{0}=Sy_{0}}, {\displaystyle h_{0}=y_{0}/2} and iterates

{\displaystyle {\begin{aligned}r_{n}&=0.5-x_{n}h_{n}\\x_{n+1}&=x_{n}+x_{n}r_{n}\\h_{n+1}&=h_{n}+h_{n}r_{n}\end{aligned}}}

until {\displaystyle r_{i}} is sufficiently close to 0, or a fixed number of iterations. This converges to

{\displaystyle \lim _{n\to \infty }x_{n}={\sqrt {S}},\qquad \lim _{n\to \infty }2h_{n}=1/{\sqrt {S}}.}

If N is an approximation to {\displaystyle {\sqrt {S}}}, a better approximation can be found by using the Taylor series of the square root function:

{\displaystyle {\sqrt {N^{2}+d}}=N\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{(1-2n)n!^{2}4^{n}}}{\frac {d^{n}}{N^{2n}}}=N\left(1+{\frac {d}{2N^{2}}}-{\frac {d^{2}}{8N^{4}}}+{\frac {d^{3}}{16N^{6}}}-{\frac {5d^{4}}{128N^{8}}}+\cdots \right)}

As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly.[citation needed] Therefore, this is not a particularly efficient way of calculation. To maximize the rate of convergence, choose N so that {\displaystyle {\frac {|d|}{N^{2}}}} is as small as possible.
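The truncated Taylor series above can be evaluated directly. The following sketch (the function name is ours) applies the first `terms` coefficients of the series to an approximation N of {\displaystyle {\sqrt {N^{2}+d}}}; with `terms = 2` it reproduces exactly one Babylonian step, as the text states.

```c
#include <math.h>

/* Refine N as an approximation of sqrt(N*N + d) using the first
   `terms` coefficients of the Taylor series above (at most 5 here). */
double taylor_sqrt_step(double N, double d, int terms) {
    /* coefficients of (d/N^2)^n for n = 0..4, read off the series */
    static const double coef[5] = {1.0, 1.0/2, -1.0/8, 1.0/16, -5.0/128};
    double u = d / (N * N);
    double sum = 0.0, pow_u = 1.0;
    for (int n = 0; n < terms && n < 5; n++) {
        sum += coef[n] * pow_u;   /* accumulate coef_n * u^n */
        pow_u *= u;
    }
    return N * sum;
}
```

For example, refining N = 4 toward {\displaystyle {\sqrt {17}}} (so d = 1) with two terms gives 4(1 + 1/32) = 4.125, the same value one Babylonian step produces.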
The continued fraction representation of a real number can be used instead of its decimal or binary expansion, and this representation has the property that the square root of any rational number (which is not already a perfect square) has a periodic, repeating expansion, similar to how rational numbers have repeating expansions in the decimal notation system.

Quadratic irrationals (numbers of the form {\displaystyle {\frac {a+{\sqrt {b}}}{c}}}, where a, b and c are integers), and in particular, square roots of integers, have periodic continued fractions. Sometimes what is desired is finding not the numerical value of a square root, but rather its continued fraction expansion, and hence its rational approximation. Let S be the positive number for which we are required to find the square root. Then, assuming a to be a number that serves as an initial guess and r to be the remainder term, we can write {\displaystyle S=a^{2}+r.} Since we have {\displaystyle S-a^{2}=({\sqrt {S}}+a)({\sqrt {S}}-a)=r}, we can express the square root of S as

{\displaystyle {\sqrt {S}}=a+{\frac {r}{a+{\sqrt {S}}}}.}

By applying this expression for {\displaystyle {\sqrt {S}}} to the denominator term of the fraction, we have

{\displaystyle {\sqrt {S}}=a+{\frac {r}{a+(a+{\frac {r}{a+{\sqrt {S}}}})}}=a+{\frac {r}{2a+{\frac {r}{a+{\sqrt {S}}}}}}.}

The numerator/denominator expansion for continued fractions is cumbersome to write as well as to embed in text formatting systems. So mathematicians have devised several alternative notations, like[14]

{\displaystyle {\sqrt {S}}=a+{\frac {r}{2a+}}\,{\frac {r}{2a+}}\,{\frac {r}{2a+}}\cdots }

When {\displaystyle r=1} throughout, an even more compact notation is:[15]

{\displaystyle [a;2a,2a,2a,\cdots ]}

For repeating continued fractions (which the square roots of all non-perfect squares have), the repetend is represented only once, with an overline to signify a non-terminating repetition of the overlined part:[16]

{\displaystyle [a;{\overline {2a}}]}

For {\displaystyle {\sqrt {2}}}, the value of {\displaystyle a} is 1, so its representation is {\displaystyle [1;{\overline {2}}]}.

Proceeding this way, we get a generalized continued fraction for the square root as

{\displaystyle {\sqrt {S}}=a+{\cfrac {r}{2a+{\cfrac {r}{2a+{\cfrac {r}{2a+\ddots }}}}}}}

The first step to evaluating such a fraction[17] to obtain a root is to do numerical substitutions for the root of the number desired, and the number of denominators selected. For example, in canonical form, {\displaystyle r} is 1 and for {\displaystyle {\sqrt {2}}}, {\displaystyle a} is 1, so the numerical continued fraction for 3 denominators is:

{\displaystyle {\sqrt {2}}\approx 1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2}}}}}}}

Step 2 is to reduce the continued fraction from the bottom up, one denominator at a time, to yield a rational fraction whose numerator and denominator are integers. The reduction proceeds thus (taking the first three denominators):

{\displaystyle {\begin{aligned}1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2}}}}}}&=1+{\cfrac {1}{2+{\cfrac {1}{\frac {5}{2}}}}}\\&=1+{\cfrac {1}{2+{\cfrac {2}{5}}}}=1+{\cfrac {1}{\frac {12}{5}}}\\&=1+{\cfrac {5}{12}}={\frac {17}{12}}\end{aligned}}}

Finally (step 3), divide the numerator by the denominator of the rational fraction to obtain the approximate value of the root: {\displaystyle 17\div 12=1.42} rounded to three digits of precision.
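The three-step evaluation above folds the fraction from the innermost denominator outward. A minimal C sketch of that bottom-up reduction (the function name and floating-point evaluation are our choices; the article's worked example uses exact rationals):

```c
/* Evaluate sqrt(S) ~ a + r/(2a + r/(2a + ...)) with S = a*a + r,
   folding `depth` denominators from the bottom up as in the text. */
double cf_sqrt(double a, double r, int depth) {
    double denom = 2.0 * a;            /* innermost denominator */
    for (int i = 1; i < depth; i++)
        denom = 2.0 * a + r / denom;   /* wrap one more level   */
    return a + r / denom;
}
```

With a = 1, r = 1 and three denominators this reproduces the 17/12 approximation of {\displaystyle {\sqrt {2}}} derived above; four denominators give 41/29.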
The actual value of {\displaystyle {\sqrt {2}}} is 1.41 to three significant digits. The relative error is 0.17%, so the rational fraction is good to almost three digits of precision. Taking more denominators gives successively better approximations: four denominators yields the fraction {\displaystyle {\frac {41}{29}}=1.4137}, good to almost 4 digits of precision, etc.

The following are examples of square roots, their simple continued fractions, and their first terms, called convergents, up to and including denominator 99:

In general, the larger the denominator of a rational fraction, the better the approximation. It can also be shown that truncating a continued fraction yields a rational fraction that is the best approximation to the root of any fraction with denominator less than or equal to the denominator of that fraction; e.g., no fraction with a denominator less than or equal to 70 is as good an approximation to {\displaystyle {\sqrt {2}}} as 99/70.

A number is represented in a floating point format as {\displaystyle m\times b^{p}}, which is also called scientific notation. Its square root is {\displaystyle {\sqrt {m}}\times b^{p/2}}, and similar formulae would apply for cube roots and logarithms. On the face of it, this is no improvement in simplicity, but suppose that only an approximation is required: then just {\displaystyle b^{p/2}} is good to an order of magnitude. Next, recognise that some powers, p, will be odd; thus for 3141.59 = 3.14159 × 10³, rather than deal with fractional powers of the base, multiply the mantissa by the base and subtract one from the power to make it even. The adjusted representation will become the equivalent of 31.4159 × 10², so that the square root will be √31.4159 × 10¹. If the integer part of the adjusted mantissa is taken, there can only be the values 1 to 99, and that could be used as an index into a table of 99 pre-computed square roots to complete the estimate.

A computer using base sixteen would require a larger table, but one using base two would require only three entries: the possible bits of the integer part of the adjusted mantissa are 01 (the power being even so there was no shift, remembering that a normalised floating point number always has a non-zero high-order digit) or, if the power was odd, 10 or 11, these being the first two bits of the original mantissa. Thus, 6.25 = 110.01 in binary, normalised to 1.1001 × 2², an even power, so the paired bits of the mantissa are 01, while .625 = 0.101 in binary normalises to 1.01 × 2⁻¹, an odd power, so the adjustment is to 10.1 × 2⁻², and the paired bits are 10. Notice that the low order bit of the power is echoed in the high order bit of the pairwise mantissa. An even power has its low-order bit zero and the adjusted mantissa will start with 0, whereas for an odd power that bit is one and the adjusted mantissa will start with 1. Thus, when the power is halved, it is as if its low order bit is shifted out to become the first bit of the pairwise mantissa.

A table with only three entries could be enlarged by incorporating additional bits of the mantissa. However, with computers, rather than calculate an interpolation into a table, it is often better to find some simpler calculation giving equivalent results. Everything now depends on the exact details of the format of the representation, plus what operations are available to access and manipulate the parts of the number. For example, Fortran offers an EXPONENT(x) function to obtain the power. Effort expended in devising a good initial approximation is to be recouped by thereby avoiding the additional iterations of the refinement process that would have been needed for a poor approximation. Since these are few (one iteration requires a divide, an add, and a halving), the constraint is severe.
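The exponent-halving estimate described above can be sketched in portable C with `frexp`/`ldexp`, which play the role of Fortran's EXPONENT(x) access to the parts of the number. The linear seed coefficients below are our own rough hand fit for {\displaystyle {\sqrt {m}}} on the reduced range, not a tuned constant from any particular implementation.

```c
#include <math.h>

/* Rough initial estimate of sqrt(a): force an even binary exponent,
   halve it, and seed the reduced mantissa with a cheap linear fit. */
double sqrt_estimate(double a) {
    int e;
    double m = frexp(a, &e);            /* a = m * 2^e, m in [0.5, 1) */
    if (e % 2 != 0) { m *= 2.0; e--; }  /* even power; now m in [0.5, 2) */
    /* 0.485*(1+m) approximates sqrt(m) on [0.5, 2) to within ~3% */
    return ldexp(0.485 * (1.0 + m), e / 2);
}
```

An estimate of this quality typically leaves only two or three Babylonian iterations to reach full double precision.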
Many computers follow the IEEE (or sufficiently similar) representation, and a very rapid approximation to the square root can be obtained for starting Newton's method. The technique that follows is based on the fact that the floating point format (in base two) approximates the base-2 logarithm. That is,

{\displaystyle \log _{2}(m\times 2^{p})=p+\log _{2}(m)}

So for a 32-bit single precision floating point number in IEEE format (where notably, the power has a bias of 127 added for the represented form), you can get the approximate logarithm by interpreting its binary representation as a 32-bit integer, scaling it by {\displaystyle 2^{-23}}, and removing a bias of 127, i.e.

{\displaystyle x_{\text{int}}\cdot 2^{-23}-127\approx \log _{2}(x).}

For example, 1.0 is represented by the hexadecimal number 0x3F800000, which would represent {\displaystyle 1065353216=127\cdot 2^{23}} if taken as an integer. Using the formula above you get {\displaystyle 1065353216\cdot 2^{-23}-127=0}, as expected from {\displaystyle \log _{2}(1.0)}. In a similar fashion you get 0.5 from 1.5 (0x3FC00000).

To get the square root, divide the logarithm by 2 and convert the value back. The following program demonstrates the idea. The exponent's lowest bit is intentionally allowed to propagate into the mantissa. One way to justify the steps in this program is to assume {\displaystyle b} is the exponent bias and {\displaystyle n} is the number of explicitly stored bits in the mantissa, and then show that

{\displaystyle \left(\left({\tfrac {1}{2}}\left(x_{\text{int}}/2^{n}-b\right)\right)+b\right)\cdot 2^{n}={\tfrac {1}{2}}\left(x_{\text{int}}-2^{n}\right)+\left({\tfrac {1}{2}}\left(b+1\right)\right)\cdot 2^{n}.}

The three mathematical operations forming the core of the above function can be expressed in a single line.
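The demonstration program referred to above was not preserved in this copy; the following is a minimal sketch consistent with the description (reinterpret the bits as an integer, halve the approximate logarithm, re-add half the bias, convert back). The function name is ours, and `memcpy` is used for the bit reinterpretation to avoid undefined behavior.

```c
#include <stdint.h>
#include <string.h>

/* Rough sqrt of a positive, normalised float via the bit-level
   logarithm trick: halve the integer interpretation and restore
   half the exponent bias. The exponent's low bit propagates into
   the mantissa, as described in the text. */
float approx_sqrt(float x) {
    uint32_t i;
    memcpy(&i, &x, sizeof i);       /* bit pattern as an integer   */
    i = (i >> 1) + (127u << 22);    /* halve log2, re-add bias/2   */
    float y;
    memcpy(&y, &i, sizeof y);       /* back to floating point      */
    return y;
}
```

The result is exact for even powers of two (e.g. 1.0 and 4.0) and slightly too big elsewhere, e.g. it yields 1.5 for an input of 2.0, matching the error behaviour discussed next.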
An additional adjustment can be added to reduce the maximum relative error. So, the three operations, not including the cast, can be rewritten with an additional constant a, where a is a bias for adjusting the approximation errors. For example, with a = 0 the results are accurate for even powers of 2 (e.g. 1.0), but for other numbers the results will be slightly too big (e.g. 1.5 for 2.0 instead of 1.414... with 6% error). With a = −0x4B0D2, the maximum relative error is minimized to ±3.5%.

If the approximation is to be used for an initial guess for Newton's method to the equation {\displaystyle (1/x^{2})-S=0}, then the reciprocal form shown in the following section is preferred.

A variant of the above routine, which can be used to compute the reciprocal of the square root, i.e., {\displaystyle x^{-1/2}} instead, was written by Greg Walsh. The integer-shift approximation produced a relative error of less than 4%, and the error dropped further to 0.15% with one iteration of Newton's method on the following line.[18] In computer graphics it is a very efficient way to normalize a vector. Some VLSI hardware implements inverse square root using a second degree polynomial estimation followed by a Goldschmidt iteration.[19]

If S < 0, then its principal square root is

{\displaystyle {\sqrt {S}}={\sqrt {\vert S\vert }}\,\,i\,.}

If S = a + bi where a and b are real and b ≠ 0, then its principal square root is

{\displaystyle {\sqrt {S}}={\sqrt {\frac {\vert S\vert +a}{2}}}\,+\,\operatorname {sgn}(b){\sqrt {\frac {\vert S\vert -a}{2}}}\,\,i\,.}

This can be verified by squaring the root.[20][21] Here

{\displaystyle \vert S\vert ={\sqrt {a^{2}+b^{2}}}}

is the modulus of S. The principal square root of a complex number is defined to be the root with the non-negative real part.
https://en.wikipedia.org/wiki/Methods_of_computing_square_roots
This is a list of polynomial topics, by Wikipedia page. See also trigonometric polynomial, list of algebraic geometry topics. Polynomial mapping
https://en.wikipedia.org/wiki/List_of_polynomial_topics
In mathematics, a square root of a number x is a number y such that {\displaystyle y^{2}=x}; in other words, a number y whose square (the result of multiplying the number by itself, or {\displaystyle y\cdot y}) is x.[1] For example, 4 and −4 are square roots of 16 because {\displaystyle 4^{2}=(-4)^{2}=16}.

Every nonnegative real number x has a unique nonnegative square root, called the principal square root or simply the square root (with a definite article, see below), which is denoted by {\displaystyle {\sqrt {x}},} where the symbol "{\displaystyle {\sqrt {~^{~}}}}" is called the radical sign[2] or radix. For example, to express the fact that the principal square root of 9 is 3, we write {\displaystyle {\sqrt {9}}=3}. The term (or number) whose square root is being considered is known as the radicand. The radicand is the number or expression underneath the radical sign, in this case, 9. For non-negative x, the principal square root can also be written in exponent notation, as {\displaystyle x^{1/2}}.

Every positive number x has two square roots: {\displaystyle {\sqrt {x}}} (which is positive) and {\displaystyle -{\sqrt {x}}} (which is negative). The two roots can be written more concisely using the ± sign as {\displaystyle \pm {\sqrt {x}}}. Although the principal square root of a positive number is only one of its two square roots, the designation "the square root" is often used to refer to the principal square root.[3][4]

Square roots of negative numbers can be discussed within the framework of complex numbers. More generally, square roots can be considered in any context in which a notion of the "square" of a mathematical object is defined. These include function spaces and square matrices, among other mathematical structures.

The Yale Babylonian Collection clay tablet YBC 7289 was created between 1800 BC and 1600 BC, showing {\displaystyle {\sqrt {2}}} and {\displaystyle {\frac {\sqrt {2}}{2}}={\frac {1}{\sqrt {2}}}} respectively as 1;24,51,10 and 0;42,25,35 base 60 numbers on a square crossed by two diagonals.[5] (1;24,51,10) base 60 corresponds to 1.41421296, which is correct to 5 decimal places (1.41421356...).

The Rhind Mathematical Papyrus is a copy from 1650 BC of an earlier Berlin Papyrus and other texts, possibly the Kahun Papyrus, that shows how the Egyptians extracted square roots by an inverse proportion method.[6]

In Ancient India, the knowledge of theoretical and applied aspects of square and square root was at least as old as the Sulba Sutras, dated around 800–500 BC (possibly much earlier).[7] A method for finding very good approximations to the square roots of 2 and 3 is given in the Baudhayana Sulba Sutra.[8] Apastamba, who was dated around 600 BCE, has given a strikingly accurate value for {\displaystyle {\sqrt {2}}}, correct up to five decimal places, as {\displaystyle 1+{\frac {1}{3}}+{\frac {1}{3\times 4}}-{\frac {1}{3\times 4\times 34}}}.[9][10][11] Aryabhata, in the Aryabhatiya (section 2.4), has given a method for finding the square root of numbers having many digits.

It was known to the ancient Greeks that square roots of positive integers that are not perfect squares are always irrational numbers: numbers not expressible as a ratio of two integers (that is, they cannot be written exactly as {\displaystyle {\frac {m}{n}}}, where m and n are integers).
This is the theorem Euclid X, 9, almost certainly due to Theaetetus, dating back to c. 380 BC.[12] The discovery of irrational numbers, including the particular case of the square root of 2, is widely associated with the Pythagorean school.[13][14] Although some accounts attribute the discovery to Hippasus, the specific contributor remains uncertain due to the scarcity of primary sources and the secretive nature of the brotherhood.[15][16] The square root of 2 is exactly the length of the diagonal of a square with side length 1.

In the Chinese mathematical work Writings on Reckoning, written between 202 BC and 186 BC during the early Han dynasty, the square root is approximated by using an "excess and deficiency" method, which says to "...combine the excess and deficiency as the divisor; (taking) the deficiency numerator multiplied by the excess denominator and the excess numerator times the deficiency denominator, combine them as the dividend."[17]

A symbol for square roots, written as an elaborate R, was invented by Regiomontanus (1436–1476). An R was also used for radix to indicate square roots in Gerolamo Cardano's Ars Magna.[18]

According to historian of mathematics D. E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo, in 1546.

According to Jeffrey A. Oaks, Arabs used the letter jīm/ĝīm (ج), the first letter of the word "جذر" (variously transliterated as jaḏr, jiḏr, ǧaḏr or ǧiḏr, "root"), placed in its initial form (ﺟ) over a number to indicate its square root. The letter jīm resembles the present square root shape. Its usage goes as far as the end of the twelfth century in the works of the Moroccan mathematician Ibn al-Yasamin.[19]

The symbol "√" for the square root was first used in print in 1525, in Christoph Rudolff's Coss.[20]

The principal square root function {\displaystyle f(x)={\sqrt {x}}} (usually just referred to as the "square root function") is a function that maps the set of nonnegative real numbers onto itself. In geometrical terms, the square root function maps the area of a square to its side length.

The square root of x is rational if and only if x is a rational number that can be represented as a ratio of two perfect squares. (See square root of 2 for proofs that this is an irrational number, and quadratic irrational for a proof for all non-square natural numbers.) The square root function maps rational numbers into algebraic numbers, the latter being a superset of the rational numbers.

For all real numbers x,

{\displaystyle {\sqrt {x^{2}}}=\left|x\right|={\begin{cases}x,&{\text{if }}x\geq 0\\-x,&{\text{if }}x<0.\end{cases}}}

(see absolute value).

For all nonnegative real numbers x and y,

{\displaystyle {\sqrt {xy}}={\sqrt {x}}{\sqrt {y}}}

and

{\displaystyle {\sqrt {x}}=x^{1/2}.}

The square root function is continuous for all nonnegative x, and differentiable for all positive x. If f denotes the square root function, its derivative is given by:

{\displaystyle f'(x)={\frac {1}{2{\sqrt {x}}}}.}

The Taylor series of {\displaystyle {\sqrt {1+x}}} about x = 0 converges for |x| ≤ 1, and is given by

{\displaystyle {\sqrt {1+x}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{(1-2n)(n!)^{2}(4^{n})}}x^{n}=1+{\frac {1}{2}}x-{\frac {1}{8}}x^{2}+{\frac {1}{16}}x^{3}-{\frac {5}{128}}x^{4}+\cdots .}

The square root of a nonnegative number is used in the definition of Euclidean norm (and distance), as well as in generalizations such as Hilbert spaces. It defines an important concept of standard deviation used in probability theory and statistics. It has a major use in the formula for solutions of a quadratic equation. Quadratic fields and rings of quadratic integers, which are based on square roots, are important in algebra and have uses in geometry. Square roots frequently appear in mathematical formulas elsewhere, as well as in many physical laws.
A positive number has two square roots, one positive and one negative, which are opposite to each other. When talking of the square root of a positive integer, it is usually the positive square root that is meant.

The square roots of an integer are algebraic integers, more specifically quadratic integers.

The square root of a positive integer is the product of the roots of its prime factors, because the square root of a product is the product of the square roots of the factors. Since {\displaystyle {\sqrt {p^{2k}}}=p^{k},} only roots of those primes having an odd power in the factorization are necessary. More precisely, the square root of a prime factorization is

{\displaystyle {\sqrt {p_{1}^{2e_{1}+1}\cdots p_{k}^{2e_{k}+1}p_{k+1}^{2e_{k+1}}\dots p_{n}^{2e_{n}}}}=p_{1}^{e_{1}}\dots p_{n}^{e_{n}}{\sqrt {p_{1}\dots p_{k}}}.}

The square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers. In all other cases, the square roots of positive integers are irrational numbers, and hence have non-repeating decimals in their decimal representations. Decimal approximations of the square roots of the first few natural numbers are given in the following table.

As before, the square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers. In all other cases, the square roots of positive integers are irrational numbers, and therefore have non-repeating digits in any standard positional notation system.

The square roots of small integers are used in both the SHA-1 and SHA-2 hash function designs to provide nothing up my sleeve numbers.

A result from the study of irrational numbers as simple continued fractions was obtained by Joseph Louis Lagrange c. 1780. Lagrange found that the representation of the square root of any non-square positive integer as a continued fraction is periodic. That is, a certain pattern of partial denominators repeats indefinitely in the continued fraction. In a sense these square roots are the very simplest irrational numbers, because they can be represented with a simple repeating pattern of integers.

The square bracket notation used above is a short form for a continued fraction. Written in the more suggestive algebraic form, the simple continued fraction for the square root of 11, [3; 3, 6, 3, 6, ...], looks like this:

{\displaystyle {\sqrt {11}}=3+{\cfrac {1}{3+{\cfrac {1}{6+{\cfrac {1}{3+{\cfrac {1}{6+{\cfrac {1}{3+\ddots }}}}}}}}}}}

where the two-digit pattern {3, 6} repeats over and over again in the partial denominators. Since 11 = 3² + 2, the above is also identical to the following generalized continued fractions:

{\displaystyle {\sqrt {11}}=3+{\cfrac {2}{6+{\cfrac {2}{6+{\cfrac {2}{6+{\cfrac {2}{6+{\cfrac {2}{6+\ddots }}}}}}}}}}=3+{\cfrac {6}{20-1-{\cfrac {1}{20-{\cfrac {1}{20-{\cfrac {1}{20-{\cfrac {1}{20-\ddots }}}}}}}}}}.}

Square roots of positive numbers are not in general rational numbers, and so cannot be written as a terminating or recurring decimal expression. Therefore, in general, any attempt to compute a square root expressed in decimal form can only yield an approximation, though a sequence of increasingly accurate approximations can be obtained.

Most pocket calculators have a square root key. Computer spreadsheets and other software are also frequently used to calculate square roots. Pocket calculators typically implement efficient routines, such as Newton's method (frequently with an initial guess of 1), to compute the square root of a positive real number.[21][22] When computing square roots with logarithm tables or slide rules, one can exploit the identities

{\displaystyle {\sqrt {a}}=e^{(\ln a)/2}=10^{(\log _{10}a)/2},}

where ln and log₁₀ are the natural and base-10 logarithms.
By trial-and-error,[23] one can square an estimate for {\displaystyle {\sqrt {a}}} and raise or lower the estimate until it agrees to sufficient accuracy. For this technique it is prudent to use the identity

{\displaystyle (x+c)^{2}=x^{2}+2xc+c^{2},}

as it allows one to adjust the estimate x by some amount c and measure the square of the adjustment in terms of the original estimate and its square.

The most common iterative method of square root calculation by hand is known as the "Babylonian method" or "Heron's method" after the first-century Greek mathematician Heron of Alexandria, who first described it.[24] The method uses the same iterative scheme as the Newton–Raphson method yields when applied to the function y = f(x) = x² − a, using the fact that its slope at any point is dy/dx = f′(x) = 2x, but it predates the latter by many centuries.[25] The algorithm is to repeat a simple calculation that results in a number closer to the actual square root each time it is repeated with its result as the new input. The motivation is that if x is an overestimate to the square root of a nonnegative real number a, then a/x will be an underestimate, and so the average of these two numbers is a better approximation than either of them. However, the inequality of arithmetic and geometric means shows this average is always an overestimate of the square root (as noted below), and so it can serve as a new overestimate with which to repeat the process, which converges as a consequence of the successive overestimates and underestimates being closer to each other after each iteration. To find x:

1. Start with an arbitrary positive start value x₀ (the closer to the actual square root of a, the better).
2. Replace x by the average (x + a/x)/2 of x and a/x.
3. Repeat from step 2.

That is, if an arbitrary guess for {\displaystyle {\sqrt {a}}} is x₀, and x_{n+1} = (x_n + a/x_n)/2, then each x_n is an approximation of {\displaystyle {\sqrt {a}}} which is better for large n than for small n. If a is positive, the convergence is quadratic, which means that in approaching the limit, the number of correct digits roughly doubles in each next iteration.
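The Babylonian iteration described above can be sketched in a few lines of C (the function name, tolerance, and iteration cap are our choices; any positive starting guess works):

```c
#include <math.h>

/* Babylonian (Heron's) method: repeatedly average the overestimate x
   with the underestimate a/x until successive values agree. */
double babylonian_sqrt(double a, double x0) {
    double x = x0;                      /* arbitrary positive guess */
    for (int i = 0; i < 60; i++) {
        double next = 0.5 * (x + a / x);
        if (fabs(next - x) < 1e-15 * x) /* relative convergence test */
            return next;
        x = next;
    }
    return x;
}
```

Even from a poor guess such as x₀ = 1 for a = 10⁶, the early iterations roughly halve the estimate each step and the quadratic phase then finishes in a handful of further steps.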
If a = 0, the convergence is only linear; however, {\displaystyle {\sqrt {0}}=0}, so in this case no iteration is needed.

Using the identity

{\displaystyle {\sqrt {a}}=2^{-n}{\sqrt {4^{n}a}},}

the computation of the square root of a positive number can be reduced to that of a number in the range [1, 4). This simplifies finding a start value for the iterative method that is close to the square root, for which a polynomial or piecewise-linear approximation can be used.

The time complexity for computing a square root with n digits of precision is equivalent to that of multiplying two n-digit numbers.

Another useful method for calculating the square root is the shifting nth root algorithm, applied for n = 2.

The name of the square root function varies from programming language to programming language, with sqrt[26] (often pronounced "squirt"[27]) being common, used in C and derived languages such as C++, JavaScript, PHP, and Python.

The square of any positive or negative number is positive, and the square of 0 is 0. Therefore, no negative number can have a real square root. However, it is possible to work with a more inclusive set of numbers, called the complex numbers, that does contain solutions to the square root of a negative number. This is done by introducing a new number, denoted by i (sometimes by j, especially in the context of electricity where i traditionally represents electric current) and called the imaginary unit, which is defined such that i² = −1. Using this notation, we can think of i as the square root of −1, but we also have (−i)² = i² = −1 and so −i is also a square root of −1.

By convention, the principal square root of −1 is i, or more generally, if x is any nonnegative number, then the principal square root of −x is

{\displaystyle {\sqrt {-x}}=i{\sqrt {x}}.}

The right side (as well as its negative) is indeed a square root of −x, since

{\displaystyle (i{\sqrt {x}})^{2}=i^{2}({\sqrt {x}})^{2}=(-1)x=-x.}

For every non-zero complex number z there exist precisely two numbers w such that w² = z: the principal square root of z (defined below), and its negative.

To find a definition for the square root that allows us to consistently choose a single value, called the principal value, we start by observing that any complex number {\displaystyle x+iy} can be viewed as a point in the plane, {\displaystyle (x,y),} expressed using Cartesian coordinates. The same point may be reinterpreted using polar coordinates as the pair {\displaystyle (r,\varphi ),} where {\displaystyle r\geq 0} is the distance of the point from the origin, and {\displaystyle \varphi } is the angle that the line from the origin to the point makes with the positive real ({\displaystyle x}) axis. In complex analysis, the location of this point is conventionally written {\displaystyle re^{i\varphi }.} If

{\displaystyle z=re^{i\varphi }{\text{ with }}-\pi <\varphi \leq \pi ,}

then the principal square root of {\displaystyle z} is defined to be the following:

{\displaystyle {\sqrt {z}}={\sqrt {r}}e^{i\varphi /2}.}

The principal square root function is thus defined using the non-positive real axis as a branch cut. If {\displaystyle z} is a non-negative real number (which happens if and only if {\displaystyle \varphi =0}) then the principal square root of {\displaystyle z} is

{\displaystyle {\sqrt {r}}e^{i(0)/2}={\sqrt {r}};}

in other words, the principal square root of a non-negative real number is just the usual non-negative square root.
It is important that {\displaystyle -\pi <\varphi \leq \pi } because if, for example, {\displaystyle z=-2i} (so {\displaystyle \varphi =-\pi /2}) then the principal square root is

{\displaystyle {\sqrt {-2i}}={\sqrt {2e^{i\varphi }}}={\sqrt {2}}e^{i\varphi /2}={\sqrt {2}}e^{i(-\pi /4)}=1-i}

but using {\displaystyle {\tilde {\varphi }}:=\varphi +2\pi =3\pi /2} would instead produce the other square root

{\displaystyle {\sqrt {2}}e^{i{\tilde {\varphi }}/2}={\sqrt {2}}e^{i(3\pi /4)}=-1+i=-{\sqrt {-2i}}.}

The principal square root function is holomorphic everywhere except on the set of non-positive real numbers (on strictly negative reals it is not even continuous). The above Taylor series for {\displaystyle {\sqrt {1+x}}} remains valid for complex numbers {\displaystyle x} with {\displaystyle |x|<1.}

The above can also be expressed in terms of trigonometric functions:

{\displaystyle {\sqrt {r\left(\cos \varphi +i\sin \varphi \right)}}={\sqrt {r}}\left(\cos {\frac {\varphi }{2}}+i\sin {\frac {\varphi }{2}}\right).}

When the number is expressed using its real and imaginary parts, the following formula can be used for the principal square root:[28][29]

{\displaystyle {\sqrt {x+iy}}={\sqrt {{\tfrac {1}{2}}{\bigl (}{\sqrt {\textstyle x^{2}+y^{2}}}+x{\bigr )}}}+i\operatorname {sgn}(y){\sqrt {{\tfrac {1}{2}}{\bigl (}{\sqrt {\textstyle x^{2}+y^{2}}}-x{\bigr )}}},}

where sgn(y) = 1 if y ≥ 0 and sgn(y) = −1 otherwise.[30] In particular, the imaginary parts of the original number and the principal value of its square root have the same sign. The real part of the principal value of the square root is always nonnegative.
For example, the principal square roots of±iare given by: i=1+i2,−i=1−i2.{\displaystyle {\sqrt {i}}={\frac {1+i}{\sqrt {2}}},\qquad {\sqrt {-i}}={\frac {1-i}{\sqrt {2}}}.} In the following, the complexzandwmay be expressed as: where−π<θz≤π{\displaystyle -\pi <\theta _{z}\leq \pi }and−π<θw≤π{\displaystyle -\pi <\theta _{w}\leq \pi }. Because of the discontinuous nature of the square root function in the complex plane, the following laws arenot truein general. A similar problem appears with other complex functions with branch cuts, e.g., thecomplex logarithmand the relationslogz+ logw= log(zw)orlog(z*) = log(z)*which are not true in general. Wrongly assuming one of these laws underlies several faulty "proofs", for instance the following one showing that−1 = 1:−1=i⋅i=−1⋅−1=(−1)⋅(−1)=1=1.{\displaystyle {\begin{aligned}-1&=i\cdot i\\&={\sqrt {-1}}\cdot {\sqrt {-1}}\\&={\sqrt {\left(-1\right)\cdot \left(-1\right)}}\\&={\sqrt {1}}\\&=1.\end{aligned}}} The third equality cannot be justified (seeinvalid proof).[31]: Chapter VI, Section I, Subsection 2The fallacy that +1 = −1It can be made to hold by changing the meaning of √ so that this no longer represents the principal square root (see above) but selects a branch for the square root that contains1⋅−1.{\displaystyle {\sqrt {1}}\cdot {\sqrt {-1}}.}The left-hand side becomes either−1⋅−1=i⋅i=−1{\displaystyle {\sqrt {-1}}\cdot {\sqrt {-1}}=i\cdot i=-1}if the branch includes+ior−1⋅−1=(−i)⋅(−i)=−1{\displaystyle {\sqrt {-1}}\cdot {\sqrt {-1}}=(-i)\cdot (-i)=-1}if the branch includes−i, while the right-hand side becomes(−1)⋅(−1)=1=−1,{\displaystyle {\sqrt {\left(-1\right)\cdot \left(-1\right)}}={\sqrt {1}}=-1,}where the last equality,1=−1,{\displaystyle {\sqrt {1}}=-1,}is a consequence of the choice of branch in the redefinition of√. The definition of a square root ofx{\displaystyle x}as a numbery{\displaystyle y}such thaty2=x{\displaystyle y^{2}=x}has been generalized in the following way. 
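The failure of √z·√w = √(zw) on the principal branch is easy to observe numerically; a small Python check using the standard cmath module:

```python
import cmath

z = w = -1 + 0j
lhs = cmath.sqrt(z) * cmath.sqrt(w)   # i * i = -1
rhs = cmath.sqrt(z * w)               # sqrt(1) = +1, the principal root
print(lhs, rhs)                        # the two sides differ by a sign
```

This is exactly the step that the faulty "−1 = 1" proof exploits: the product law holds for nonnegative reals but not across the branch cut.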
A cube root of x is a number y such that y³ = x; it is denoted ∛x. If n is an integer greater than two, an n-th root of x is a number y such that yⁿ = x; it is denoted ⁿ√x. Given any polynomial p, a root of p is a number y such that p(y) = 0. For example, the n-th roots of x are the roots of the polynomial (in y) yⁿ − x. The Abel–Ruffini theorem states that, in general, the roots of a polynomial of degree five or higher cannot be expressed in terms of n-th roots. If A is a positive-definite matrix or operator, then there exists precisely one positive definite matrix or operator B with B² = A; we then define A^(1/2) = B. In general, matrices may have multiple square roots or even an infinitude of them. For example, the 2 × 2 identity matrix has an infinity of square roots,[32] though only one of them is positive definite. Each element of an integral domain has no more than 2 square roots. The difference of two squares identity u² − v² = (u − v)(u + v) is proved using the commutativity of multiplication. If u and v are square roots of the same element, then u² − v² = 0. Because there are no zero divisors, this implies u = v or u + v = 0, where the latter means that the two roots are additive inverses of each other. In other words, if a square root u of an element a exists, then the only square roots of a are u and −u. The only square root of 0 in an integral domain is 0 itself. In a field of characteristic 2, an element either has one square root or does not have any at all, because each element is its own additive inverse, so that −u = u. If the field is finite of characteristic 2, then every element has a unique square root. In a field of any other characteristic, any non-zero element either has two square roots, as explained above, or does not have any. Given an odd prime number p, let q = pᵉ for some positive integer e. 
A non-zero element of the fieldFqwithqelements is aquadratic residueif it has a square root inFq. Otherwise, it is a quadratic non-residue. There are(q− 1)/2quadratic residues and(q− 1)/2quadratic non-residues; zero is not counted in either class. The quadratic residues form agroupunder multiplication. The properties of quadratic residues are widely used innumber theory. Unlike in an integral domain, a square root in an arbitrary (unital) ring need not be unique up to sign. For example, in the ringZ/8Z{\displaystyle \mathbb {Z} /8\mathbb {Z} }of integersmodulo 8(which is commutative, but has zero divisors), the element 1 has four distinct square roots: ±1 and ±3. Another example is provided by the ring ofquaternionsH,{\displaystyle \mathbb {H} ,}which has no zero divisors, but is not commutative. Here, the element −1 hasinfinitely many square roots, including±i,±j, and±k. In fact, the set of square roots of−1is exactly{ai+bj+ck∣a2+b2+c2=1}.{\displaystyle \{ai+bj+ck\mid a^{2}+b^{2}+c^{2}=1\}.} A square root of 0 is either 0 or a zero divisor. Thus in rings where zero divisors do not exist, it is uniquely 0. However, rings with zero divisors may have multiple square roots of 0. For example, inZ/n2Z,{\displaystyle \mathbb {Z} /n^{2}\mathbb {Z} ,}any multiple ofnis a square root of 0. The square root of a positive number is usually defined as the side length of asquarewith theareaequal to the given number. But the square shape is not necessary for it: if one of twosimilarplanar Euclideanobjects has the areaatimes greater than another, then the ratio of their linear sizes isa{\displaystyle {\sqrt {a}}}. A square root can be constructed with a compass and straightedge. In hisElements,Euclid(fl.300 BC) gave the construction of thegeometric meanof two quantities in two different places:Proposition II.14andProposition VI.13. Since the geometric mean ofaandbisab{\displaystyle {\sqrt {ab}}}, one can constructa{\displaystyle {\sqrt {a}}}simply by takingb= 1. 
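Both claims, the four square roots of 1 in ℤ/8ℤ and the (q − 1)/2 quadratic residues in a prime field, are easy to confirm by brute force. A small Python sketch (function name `square_roots_mod` is illustrative):

```python
def square_roots_mod(a, n):
    """All square roots of a in Z/nZ, found by exhaustive search."""
    return [y for y in range(n) if (y * y - a) % n == 0]

print(square_roots_mod(1, 8))   # [1, 3, 5, 7], i.e. the classes ±1 and ±3 mod 8

p = 11
residues = {y * y % p for y in range(1, p)}
print(sorted(residues), len(residues))   # 5 residues = (p - 1)/2 for p = 11
```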
The construction is also given byDescartesin hisLa Géométrie, see figure 2 onpage 2. However, Descartes made no claim to originality and his audience would have been quite familiar with Euclid. Euclid's second proof in Book VI depends on the theory ofsimilar triangles. Let AHB be a line segment of lengtha+bwithAH =aandHB =b. Construct the circle with AB as diameter and let C be one of the two intersections of the perpendicular chord at H with the circle and denote the length CH ash. Then, usingThales' theoremand, as in theproof of Pythagoras' theorem by similar triangles, triangle AHC is similar to triangle CHB (as indeed both are to triangle ACB, though we don't need that, but it is the essence of the proof of Pythagoras' theorem) so that AH:CH is as HC:HB, i.e.a/h=h/b, from which we conclude by cross-multiplication thath2=ab, and finally thath=ab{\displaystyle h={\sqrt {ab}}}. When marking the midpoint O of the line segment AB and drawing the radius OC of length(a+b)/2, then clearly OC > CH, i.e.a+b2≥ab{\textstyle {\frac {a+b}{2}}\geq {\sqrt {ab}}}(with equality if and only ifa=b), which is thearithmetic–geometric mean inequality for two variablesand, as notedabove, is the basis of theAncient Greekunderstanding of "Heron's method". Another method of geometric construction usesright trianglesandinduction:1{\displaystyle {\sqrt {1}}}can be constructed, and oncex{\displaystyle {\sqrt {x}}}has been constructed, the right triangle with legs 1 andx{\displaystyle {\sqrt {x}}}has ahypotenuseofx+1{\displaystyle {\sqrt {x+1}}}. Constructing successive square roots in this manner yields theSpiral of Theodorusdepicted above.
https://en.wikipedia.org/wiki/Square_root
In mathematics, the composition operator ∘ takes two functions, f and g, and returns a new function h(x) := (g ∘ f)(x) = g(f(x)). Thus, the function g is applied after applying f to x. (g ∘ f) is pronounced "the composition of g and f".[1] Reverse composition, sometimes denoted f ⨾ g, applies the operation in the opposite order, applying f first and g second. Intuitively, reverse composition is a chaining process in which the output of function f feeds the input of function g. The composition of functions is a special case of the composition of relations, sometimes also denoted by ∘. As a result, all properties of composition of relations are true of composition of functions,[2] such as associativity. The composition of functions is always associative, a property inherited from the composition of relations.[2] That is, if f, g, and h are composable, then f ∘ (g ∘ h) = (f ∘ g) ∘ h.[3] Since the parentheses do not change the result, they are generally omitted. In a strict sense, the composition g ∘ f is only meaningful if the codomain of f equals the domain of g; in a wider sense, it is sufficient that the former be a subset of the latter.[nb 1] Moreover, it is often convenient to tacitly restrict the domain of f, such that f produces only values in the domain of g. For example, the composition g ∘ f of the functions f : ℝ → (−∞, +9] defined by f(x) = 9 − x² and g : [0, +∞) → ℝ defined by g(x) = √x can be defined on the interval [−3, +3]. The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, |x| + 3 = |x + 3| only when x ≥ 0. The picture shows another example. The composition of one-to-one (injective) functions is always one-to-one. 
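The article's own example, f(x) = 9 − x² composed with the square root, can be written as a short Python sketch (the helper `compose` is an illustrative name):

```python
import math

def compose(g, f):
    """Return g ∘ f, the function x -> g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: 9 - x ** 2   # f : R -> (-inf, +9]
g = math.sqrt              # g : [0, +inf) -> R
h = compose(g, f)          # well-defined on the interval [-3, +3]
print(h(0))                # 3.0
print(h(3))                # 0.0
```

Calling h outside [−3, 3] raises a ValueError, which mirrors the domain restriction discussed above.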
Similarly, the composition ofonto(surjective) functions is always onto. It follows that the composition of twobijectionsis also a bijection. Theinverse functionof a composition (assumed invertible) has the property that(f∘g)−1=g−1∘f−1.[4] Derivativesof compositions involving differentiable functions can be found using thechain rule.Higher derivativesof such functions are given byFaà di Bruno's formula.[3] Composition of functions is sometimes described as a kind ofmultiplicationon a function space, but has very different properties frompointwisemultiplication of functions (e.g. composition is notcommutative).[5] Suppose one has two (or more) functionsf:X→X,g:X→Xhaving the same domain and codomain; these are often calledtransformations. Then one can form chains of transformations composed together, such asf∘f∘g∘f. Such chains have thealgebraic structureof amonoid, called atransformation monoidor (much more seldom) acomposition monoid. In general, transformation monoids can have remarkably complicated structure. One particular notable example is thede Rham curve. The set ofallfunctionsf:X→Xis called thefull transformation semigroup[6]orsymmetric semigroup[7]onX. (One can actually define two semigroups depending how one defines the semigroup operation as the left or right composition of functions.[8]) If the given transformations arebijective(and thus invertible), then the set of all possible combinations of these functions forms atransformation group(also known as apermutation group); and one says that the group isgeneratedby these functions. The set of all bijective functionsf:X→X(calledpermutations) forms a group with respect to function composition. This is thesymmetric group, also sometimes called thecomposition group. 
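The reversed-order identity (f ∘ g)⁻¹ = g⁻¹ ∘ f⁻¹ can be checked with two simple bijections on ℝ. A minimal Python sketch (the concrete functions are illustrative choices, not from the source):

```python
f = lambda x: 2 * x        # bijection on R, inverse x / 2
g = lambda x: x + 3        # bijection on R, inverse x - 3
f_inv = lambda x: x / 2
g_inv = lambda x: x - 3

fg = lambda x: f(g(x))                 # f ∘ g
fg_inv = lambda x: g_inv(f_inv(x))     # g⁻¹ ∘ f⁻¹, note the reversed order

for x in (-2, 0, 5):
    assert fg_inv(fg(x)) == x
print("inverse of composition verified")
```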
A fundamental result in group theory, Cayley's theorem, essentially says that any group is in fact just a subgroup of a symmetric group (up to isomorphism).[9] In the symmetric semigroup (of all transformations) one also finds a weaker, non-unique notion of inverse (called a pseudoinverse) because the symmetric semigroup is a regular semigroup.[10] If Y ⊆ X, then f : X → Y may compose with itself; this is sometimes denoted as f². That is, f²(x) = f(f(x)). More generally, for any natural number n ≥ 2, the n-th functional power can be defined inductively by f^n = f ∘ f^(n−1) = f^(n−1) ∘ f, a notation introduced by Hans Heinrich Bürmann[citation needed][11][12] and John Frederick William Herschel.[13][11][14][12] Repeated composition of such a function with itself is called function iteration. Note: If f takes its values in a ring (in particular for real or complex-valued f), there is a risk of confusion, as f^n could also stand for the n-fold product of f, e.g. f²(x) = f(x) · f(x).[12] For trigonometric functions, usually the latter is meant, at least for positive exponents.[12] For example, in trigonometry, this superscript notation represents standard exponentiation when used with trigonometric functions: sin²(x) = sin(x) · sin(x). However, for negative exponents (especially −1), it nevertheless usually refers to the inverse function, e.g., tan⁻¹ = arctan ≠ 1/tan. In some cases, when, for a given function f, the equation g ∘ g = f has a unique solution g, that function can be defined as the functional square root of f, then written as g = f^(1/2). More generally, when g^n = f has a unique solution for some natural number n > 0, then f^(m/n) can be defined as g^m. Under additional restrictions, this idea can be generalized so that the iteration count becomes a continuous parameter; in this case, such a system is called a flow, specified through solutions of Schröder's equation. Iterated functions and flows occur naturally in the study of fractals and dynamical systems. 
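The inductive definition of the n-th functional power translates into a few lines of Python (the helper name `iterate` is an assumption for illustration):

```python
def iterate(f, n):
    """Return the n-th functional power f ∘ f ∘ ... ∘ f (n >= 0; n = 0 is the identity)."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

double = lambda x: 2 * x
print(iterate(double, 5)(3))   # 96, i.e. 3 * 2**5
print(iterate(double, 0)(3))   # 3: the zeroth power is the identity
```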
To avoid ambiguity, some mathematicians[citation needed]choose to use∘to denote the compositional meaning, writingf∘n(x)for then-th iterate of the functionf(x), as in, for example,f∘3(x)meaningf(f(f(x))). For the same purpose,f[n](x)was used byBenjamin Peirce[15][12]whereasAlfred PringsheimandJules Molksuggestednf(x)instead.[16][12][nb 2] Many mathematicians, particularly ingroup theory, omit the composition symbol, writinggfforg∘f.[17] During the mid-20th century, some mathematicians adoptedpostfix notation, writingxfforf(x)and(xf)gforg(f(x)).[18]This can be more natural thanprefix notationin many cases, such as inlinear algebrawhenxis arow vectorandfandgdenotematricesand the composition is bymatrix multiplication. The order is important because function composition is not necessarily commutative. Having successive transformations applying and composing to the right agrees with the left-to-right reading sequence. Mathematicians who use postfix notation may write "fg", meaning first applyfand then applyg, in keeping with the order the symbols occur in postfix notation, thus making the notation "fg" ambiguous. Computer scientists may write "f;g" for this,[19]thereby disambiguating the order of composition. To distinguish the left composition operator from a text semicolon, in theZ notationthe ⨾ character is used for leftrelation composition.[20]Since all functions arebinary relations, it is correct to use the [fat] semicolon for function composition as well (see the article oncomposition of relationsfor further details on this notation). Given a functiong, thecomposition operatorCgis defined as thatoperatorwhich maps functions to functions asCgf=f∘g.{\displaystyle C_{g}f=f\circ g.}Composition operators are studied in the field ofoperator theory. Function composition appears in one form or another in numerousprogramming languages. Partial composition is possible formultivariate functions. 
The function resulting when some argumentxiof the functionfis replaced by the functiongis called a composition offandgin some computer engineering contexts, and is denotedf|xi=gf|xi=g=f(x1,…,xi−1,g(x1,x2,…,xn),xi+1,…,xn).{\displaystyle f|_{x_{i}=g}=f(x_{1},\ldots ,x_{i-1},g(x_{1},x_{2},\ldots ,x_{n}),x_{i+1},\ldots ,x_{n}).} Whengis a simple constantb, composition degenerates into a (partial) valuation, whose result is also known asrestrictionorco-factor.[21] f|xi=b=f(x1,…,xi−1,b,xi+1,…,xn).{\displaystyle f|_{x_{i}=b}=f(x_{1},\ldots ,x_{i-1},b,x_{i+1},\ldots ,x_{n}).} In general, the composition of multivariate functions may involve several other functions as arguments, as in the definition ofprimitive recursive function. Givenf, an-ary function, andnm-ary functionsg1, ...,gn, the composition offwithg1, ...,gn, is them-ary functionh(x1,…,xm)=f(g1(x1,…,xm),…,gn(x1,…,xm)).{\displaystyle h(x_{1},\ldots ,x_{m})=f(g_{1}(x_{1},\ldots ,x_{m}),\ldots ,g_{n}(x_{1},\ldots ,x_{m})).} This is sometimes called thegeneralized compositeorsuperpositionoffwithg1, ...,gn.[22]The partial composition in only one argument mentioned previously can be instantiated from this more general scheme by setting all argument functions except one to be suitably chosenprojection functions. Hereg1, ...,gncan be seen as a single vector/tuple-valued function in this generalized scheme, in which case this is precisely the standard definition of function composition.[23] A set of finitaryoperationson some base setXis called acloneif it contains all projections and is closed under generalized composition. 
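The generalized composite h(x1, …, xm) = f(g1(x1, …, xm), …, gn(x1, …, xm)) has a direct functional rendering in Python. A hedged sketch (the name `superpose` and the sample functions are illustrative):

```python
def superpose(f, *gs):
    """Generalized composite: feed the same argument tuple to every g,
    then pass the results to f."""
    return lambda *xs: f(*(g(*xs) for g in gs))

add = lambda a, b: a + b
mul = lambda a, b: a * b
sub = lambda a, b: a - b

h = superpose(add, mul, sub)   # h(a, b) = a*b + (a - b)
print(h(5, 3))                 # 15 + 2 = 17
```

Setting all but one of the inner functions to projections recovers the single-argument partial composition described above.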
A clone generally contains operations of variousarities.[22]The notion of commutation also finds an interesting generalization in the multivariate case; a functionfof aritynis said to commute with a functiongof aritymiffis ahomomorphismpreservingg, and vice versa, that is:[22]f(g(a11,…,a1m),…,g(an1,…,anm))=g(f(a11,…,an1),…,f(a1m,…,anm)).{\displaystyle f(g(a_{11},\ldots ,a_{1m}),\ldots ,g(a_{n1},\ldots ,a_{nm}))=g(f(a_{11},\ldots ,a_{n1}),\ldots ,f(a_{1m},\ldots ,a_{nm})).} A unary operation always commutes with itself, but this is not necessarily the case for a binary (or higher arity) operation. A binary (or higher arity) operation that commutes with itself is calledmedial or entropic.[22] Compositioncan be generalized to arbitrarybinary relations. IfR⊆X×YandS⊆Y×Zare two binary relations, then their composition amounts to R∘S={(x,z)∈X×Z:(∃y∈Y)((x,y)∈R∧(y,z)∈S)}{\displaystyle R\circ S=\{(x,z)\in X\times Z:(\exists y\in Y)((x,y)\in R\,\land \,(y,z)\in S)\}}. Considering a function as a special case of a binary relation (namelyfunctional relations), function composition satisfies the definition for relation composition. A small circleR∘Shas been used for theinfix notation of composition of relations, as well as functions. When used to represent composition of functions(g∘f)(x)=g(f(x)){\displaystyle (g\circ f)(x)\ =\ g(f(x))}however, the text sequence is reversed to illustrate the different operation sequences accordingly. The composition is defined in the same way forpartial functionsand Cayley's theorem has its analogue called theWagner–Preston theorem.[24] Thecategory of setswith functions asmorphismsis the prototypicalcategory. The axioms of a category are in fact inspired from the properties (and also the definition) of function composition.[25]The structures given by composition are axiomatized and generalized incategory theorywith the concept ofmorphismas the category-theoretical replacement of functions. 
The reversed order of composition in the formula(f∘g)−1= (g−1∘f−1)applies forcomposition of relationsusingconverse relations, and thus ingroup theory. These structures formdagger categories. The standard "foundation" for mathematics starts withsets and their elements. It is possible to start differently, by axiomatising not elements of sets but functions between sets. This can be done by using the language of categories and universal constructions. . . . the membership relation for sets can often be replaced by the composition operation for functions. This leads to an alternative foundation for Mathematics upon categories -- specifically, on the category of all functions. Now much of Mathematics is dynamic, in that it deals with morphisms of an object into another object of the same kind. Such morphisms(like functions)form categories, and so the approach via categories fits well with the objective of organizing and understanding Mathematics. That, in truth, should be the goal of a proper philosophy of Mathematics. -Saunders Mac Lane,Mathematics: Form and Function[26] The composition symbol∘is encoded asU+2218∘RING OPERATOR(&compfn;, &SmallCircle;); see theDegree symbolarticle for similar-appearing Unicode characters. InTeX, it is written\circ.
https://en.wikipedia.org/wiki/Function_composition
The Abel equation, named after Niels Henrik Abel, is a type of functional equation of the form f(h(x)) = h(x + 1) or α(f(x)) = α(x) + 1. The forms are equivalent when α is invertible. h or α control the iteration of f. The second equation can be written α(f(x)) − α(x) = 1. Taking x = α⁻¹(y), the equation can be written f(α⁻¹(y)) = α⁻¹(y + 1). For a known function f(x), a problem is to solve the functional equation for the function α⁻¹ ≡ h, possibly satisfying additional requirements, such as α⁻¹(0) = 1. The change of variables sα(x) = Ψ(x), for a real parameter s, brings Abel's equation into the celebrated Schröder's equation, Ψ(f(x)) = sΨ(x). The further change F(x) = exp(sα(x)) turns it into Böttcher's equation, F(f(x)) = F(x)^s. The Abel equation is a special case of (and easily generalizes to) the translation equation,[1] ω(ω(x, u), v) = ω(x, u + v), e.g., for ω(x, 1) = f(x). The Abel function α(x) further provides the canonical coordinate for Lie advective flows (one-parameter Lie groups). Initially, the equation in the more general form[2][3] was reported. Even in the case of a single variable, the equation is non-trivial, and admits special analysis.[4][5][6] In the case of a linear transfer function, the solution is expressible compactly.[7] The equation of tetration is a special case of Abel's equation, with f = exp. In the case of an integer argument, the equation encodes a recurrent procedure, e.g., α(f(f(x))) = α(x) + 2, and so on, α(fⁿ(x)) = α(x) + n. The Abel equation has at least one solution on E if and only if for all x ∈ E and all n ∈ ℕ, fⁿ(x) ≠ x, where fⁿ = f ∘ f ∘ ... ∘ f is the function f iterated n times.[8] We have the following existence and uniqueness theorem:[9] Theorem B. Let h : ℝ → ℝ be analytic, meaning it has a Taylor expansion. To find: real analytic solutions α : ℝ → ℂ of the Abel equation α ∘ h = α + 1. 
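A concrete Abel pair is easy to verify numerically: for the linear transfer function f(x) = 2x, the function α = log₂ satisfies α(f(x)) = α(x) + 1. A minimal Python sketch (these particular f and α are illustrative choices for the linear case mentioned above):

```python
import math

f = lambda x: 2 * x     # transfer function
alpha = math.log2       # candidate Abel function: log2(2x) = log2(x) + 1

for x in (0.5, 1.0, 7.0):
    assert abs(alpha(f(x)) - (alpha(x) + 1)) < 1e-12
print("Abel equation alpha(f(x)) = alpha(x) + 1 holds")
```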
A real analytic solutionα{\displaystyle \alpha }exists if and only if both of the following conditions hold: The solution is essentially unique in the sense that there exists a canonical solutionα0{\displaystyle \alpha _{0}}with the following properties: {α0+β∘α0|β:R→Ris analytic, with period 1}.{\displaystyle \{\alpha _{0}+\beta \circ \alpha _{0}|\beta :\mathbb {R} \to \mathbb {R} {\text{ is analytic, with period 1}}\}.} Analytic solutions (Fatou coordinates) can be approximated byasymptotic expansionof a function defined bypower seriesin the sectors around aparabolic fixed point.[10]The analytic solution is unique up to a constant.[11]
https://en.wikipedia.org/wiki/Abel_equation
Schröder's equation,[1][2][3] named after Ernst Schröder, is a functional equation with one independent variable: given the function h, find the function Ψ such that Ψ(h(x)) = sΨ(x) for all x. Schröder's equation is an eigenvalue equation for the composition operator C_h that sends a function f to f(h(·)). If a is a fixed point of h, meaning h(a) = a, then either Ψ(a) = 0 (or ∞) or s = 1. Thus, provided that Ψ(a) is finite and Ψ′(a) does not vanish or diverge, the eigenvalue s is given by s = h′(a). For a = 0, if h is analytic on the unit disk, fixes 0, and 0 < |h′(0)| < 1, then Gabriel Koenigs showed in 1884 that there is an analytic (non-trivial) Ψ satisfying Schröder's equation. This is one of the first steps in a long line of theorems fruitful for understanding composition operators on analytic function spaces, cf. Koenigs function. Equations such as Schröder's are suitable for encoding self-similarity, and have thus been extensively utilized in studies of nonlinear dynamics (often referred to colloquially as chaos theory). It is also used in studies of turbulence, as well as the renormalization group.[4][5] An equivalent transpose form of Schröder's equation for the inverse Φ = Ψ⁻¹ of Schröder's conjugacy function is h(Φ(y)) = Φ(sy). The change of variables α(x) = log(Ψ(x))/log(s) (the Abel function) further converts Schröder's equation to the older Abel equation, α(h(x)) = α(x) + 1. Similarly, the change of variables Ψ(x) = log(φ(x)) converts Schröder's equation to Böttcher's equation, φ(h(x)) = (φ(x))^s. Moreover, for the velocity,[5] β(x) = Ψ/Ψ′, Julia's equation, β(f(x)) = f′(x)β(x), holds. The n-th power of a solution of Schröder's equation provides a solution of Schröder's equation with eigenvalue sⁿ instead. In the same vein, for an invertible solution Ψ(x) of Schröder's equation, the (non-invertible) function Ψ(x)k(log Ψ(x)) is also a solution, for any periodic function k(x) with period log(s). All solutions of Schröder's equation are related in this manner. 
Schröder's equation was solved analytically ifais an attracting (but not superattracting) fixed point, that is0 < |h′(a)| < 1byGabriel Koenigs(1884).[6][7] In the case of a superattracting fixed point,|h′(a)| = 0, Schröder's equation is unwieldy, and had best be transformed toBöttcher's equation.[8] There are a good number of particular solutions dating back to Schröder's original 1870 paper.[1] The series expansion around a fixed point and the relevant convergence properties of the solution for the resulting orbit and its analyticity properties are cogently summarized bySzekeres.[9]Several of the solutions are furnished in terms ofasymptotic series, cf.Carleman matrix. It is used to analyse discrete dynamical systems by finding a new coordinate system in which the system (orbit) generated byh(x) looks simpler, a mere dilation. More specifically, a system for which a discrete unit time step amounts tox→h(x), can have its smoothorbit(orflow) reconstructed from the solution of the above Schröder's equation, itsconjugacy equation. That is,h(x) = Ψ−1(sΨ(x)) ≡h1(x). In general,all of its functional iterates(itsregular iterationgroup, seeiterated function) are provided by theorbit ht(x)=Ψ−1(stΨ(x)),{\displaystyle h_{t}(x)=\Psi ^{-1}{\big (}s^{t}\Psi (x){\big )},} fortreal — not necessarily positive or integer. (Thus a fullcontinuous group.) The set ofhn(x), i.e., of all positive integer iterates ofh(x)(semigroup) is called thesplinter(or Picard sequence) ofh(x). However,all iterates(fractional, infinitesimal, or negative) ofh(x)are likewise specified through the coordinate transformationΨ(x)determined to solve Schröder's equation: a holographic continuous interpolation of the initial discrete recursionx→h(x)has been constructed;[10]in effect, the entireorbit. For instance, thefunctional square rootish1/2(x) = Ψ−1(s1/2Ψ(x)), so thath1/2(h1/2(x)) =h(x), and so on. 
For example,[11] special cases of the logistic map such as the chaotic case h(x) = 4x(1 − x) were already worked out by Schröder in his original article[1] (p. 306): the conjugacy Ψ(x) = arcsin²(√x) yields the orbit hₜ(x) = sin²(2ᵗ arcsin √x). In fact, this solution is seen to result as motion dictated by a sequence of switchback potentials,[12] V(x) ∝ x(x − 1)(nπ + arcsin √x)², a generic feature of continuous iterates effected by Schröder's equation. A nonchaotic case he also illustrated with his method, h(x) = 2x(1 − x), yields hₜ(x) = ½ − ½(1 − 2x)^(2ᵗ). Likewise, for the Beverton–Holt model, h(x) = x/(2 − x), one readily finds[10] Ψ(x) = x/(1 − x), so that[13] hₜ(x) = x/(2ᵗ − (2ᵗ − 1)x).
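The Beverton–Holt case can be verified numerically: with h(x) = x/(2 − x) and the conjugacy Ψ(x) = x/(1 − x) stated above, the Schröder eigenvalue is s = h′(0) = 1/2, and the continuous iterate hₜ = Ψ⁻¹(sᵗ Ψ(x)) gives a functional square root at t = 1/2. A minimal Python sketch:

```python
h = lambda x: x / (2 - x)        # Beverton–Holt transfer function
psi = lambda x: x / (1 - x)      # Schröder conjugacy function, from the text
psi_inv = lambda y: y / (1 + y)
s = 0.5                          # eigenvalue s = h'(0)

# Schröder's equation: psi(h(x)) = s * psi(x)
for x in (0.1, 0.3, 0.7):
    assert abs(psi(h(x)) - s * psi(x)) < 1e-12

# Continuous iterate h_t(x) = psi_inv(s**t * psi(x)); t = 1/2 is a half-iterate
h_half = lambda x: psi_inv(s ** 0.5 * psi(x))
x = 0.4
assert abs(h_half(h_half(x)) - h(x)) < 1e-12
print("Schröder conjugacy and half-iterate verified")
```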
https://en.wikipedia.org/wiki/Schr%C3%B6der%27s_equation
Inmathematics,superfunctionis a nonstandard name for aniterated functionfor complexified continuous iteration index. Roughly, for somefunctionfand for some variablex, the superfunction could be defined by the expression Then,S(z;x) can be interpreted as the superfunction of the functionf(x). Such a definition is valid only for a positiveintegerindexz. The variablexis often omitted. Much study and many applications of superfunctions employ variousextensions of these superfunctions to complex and continuous indices; and the analysis of existence, uniqueness and their evaluation. TheAckermann functionsandtetrationcan be interpreted in terms of superfunctions. Analysis of superfunctions arose from applications of the evaluation offractional iterations of functions.Superfunctions and their inverses allow evaluation of not only the first negative power of a function (inverse function), but also of anyrealand evencomplexiterate of that function. Historically, an early function of this kind considered wasexp{\displaystyle {\sqrt {\exp }}}; the function!{\displaystyle {\sqrt {\,!\;}}}has then been used as the logo of the physics department of theMoscow State University.[1] At that time, these investigators did not have computational access for the evaluation of such functions, but the functionexp{\displaystyle {\sqrt {\exp }}}was luckier than!{\displaystyle {\sqrt {\,!\;}}}: at the very least, the existence of theholomorphic functionφ{\displaystyle \varphi }such thatφ(φ(u))=exp⁡(u){\displaystyle \varphi (\varphi (u))=\exp(u)}had been demonstrated in 1950 byHellmuth Kneser.[2] Relying on the elegant functional conjugacy theory ofSchröder's equation,[3]for his proof, Kneser had constructed the "superfunction" of theexponential mapthrough the correspondingAbel functionX{\displaystyle {\mathcal {X}}}, satisfying the relatedAbel equation so thatX(S(z;u))=X(u)+z{\displaystyle {\mathcal {X}}(S(z;u))={\mathcal {X}}(u)+z\ }. 
The inverse function Kneser found, is anentiresuper-exponential, although it is not real on the real axis; it cannot be interpreted astetrational, because the conditionS(0;x)=x{\displaystyle S(0;x)=x}cannot be realized for the entire super-exponential. Therealexp{\displaystyle {\sqrt {\exp }}}can be constructed with thetetrational(which is also a superexponential); while the real!{\displaystyle {\sqrt {\,!\;}}}can be constructed with thesuperfactorial. There is a book dedicated to superfunctions[4] The recurrence formula of the above preamble can be written as Instead of the last equation, one could write the identity function, and extend the range of definition of the superfunctionSto the non-negative integers. Then, one may posit and extend the range of validity to the integer values larger than −2. The following extension, for example, is not trivial, because the inverse function may happen to be not defined for some values ofx{\displaystyle x}. In particular,tetrationcan be interpreted as superfunction ofexponentiationfor some real baseb{\displaystyle b}; in this case, Then, atx= 1, but is not defined. For extension to non-integer values of the argument, the superfunction should be defined in a different way. For complex numbersa{\displaystyle a}andb{\displaystyle b}such thata{\displaystyle a}belongs to some connected domainD⊆C{\displaystyle D\subseteq \mathbb {C} }, the superfunction (froma{\displaystyle a}tob{\displaystyle b}) of aholomorphic functionfon the domainD{\displaystyle D}is a functionS{\displaystyle S},holomorphicon domainD{\displaystyle D}, such that In general, the superfunction is not unique. 
For a given base function f, from a given (a ↦ d) superfunction S, another (a ↦ d) superfunction G could be constructed as G(z; x) = S(z + μ(z); x), where μ is any 1-periodic function, holomorphic at least in some vicinity of the real axis, such that μ(a) = 0. The modified superfunction may have a narrower range of holomorphy. The variety of possible superfunctions is especially large in the limiting case, when the width of the range of holomorphy becomes zero; in this case, one deals with real-analytic superfunctions.[5] If the range of holomorphy required is large enough, then the superfunction is expected to be unique, at least for some specific base functions H. In particular, the (C, 0 ↦ 1) superfunction of exp_b, for b > 1, is called tetration and is believed to be unique, at least for C = {z ∈ ℂ : ℜ(z) > −2} in the case b > exp(1/e);[6] but up to 2009, the uniqueness was a conjecture and not a theorem with a formal mathematical proof. This short collection of elementary superfunctions is illustrated in.[7] Some superfunctions can be expressed through elementary functions; they are used without mention that they are superfunctions. For example, for the transfer function "++", which means unit increment, the superfunction is just addition of a constant. Choose a complex number c and define the function add_c by add_c(x) = c + x for all x ∈ ℂ. Further define the function mul_c by mul_c(x) = c·x for all x ∈ ℂ. 
Then, the function S(z;x)=x+mulc(z){\displaystyle S(z;x)=x+\mathrm {mul_{c}} (z)} is the superfunction (0 to c) of the function addc{\displaystyle \mathrm {add_{c}} } on C. Exponentiation expc{\displaystyle \exp _{c}} is the superfunction (from 1 to c{\displaystyle c}) of the function mulc{\displaystyle \mathrm {mul} _{c}}. The examples below, except the last one, are essentially from Schröder's pioneering 1870 paper.[3] Let f(x)=2x2−1{\displaystyle f(x)=2x^{2}-1}. Then S(z;x)=cos⁡(2zarccos⁡(x)){\displaystyle S(z;x)=\cos \left(2^{z}\arccos(x)\right)} is a (C,0→1){\displaystyle (\mathbb {C} ,~0\!\rightarrow \!1)} superfunction (iteration orbit) of f. Indeed, S(z+1;x)=cos⁡(2⋅2zarccos⁡(x))=2cos⁡(2zarccos⁡(x))2−1=f(S(z;x)){\displaystyle S(z+1;x)=\cos \left(2\cdot 2^{z}\arccos(x)\right)=2\cos \left(2^{z}\arccos(x)\right)^{2}-1=f(S(z;x))} and S(0;x)=x.{\displaystyle S(0;x)=x.} In this case, the superfunction S{\displaystyle S} is periodic, with period T=2πln⁡(2)i≈9.0647202836543876194i{\displaystyle T={\frac {2\pi }{\ln(2)}}i\approx 9.0647202836543876194\!~i}, and the superfunction approaches unity in the negative direction on the real axis. Similarly, another of Schröder's transfer functions has an iteration orbit of the same form. In general, the transfer (step) function f(x) need not be an entire function. An example involving a meromorphic function f reads f(u)=2u1−u2{\displaystyle f(u)={\frac {2u}{1-u^{2}}}}; its iteration orbit (superfunction) is S(z;x)=tan⁡(2zarctan⁡(x)){\displaystyle S(z;x)=\tan \left(2^{z}\arctan(x)\right)} on C, the set of complex numbers except for the singularities of the function S. To see this, recall the double angle trigonometric formula tan⁡(2a)=2tan⁡(a)1−tan⁡(a)2{\displaystyle \tan(2a)={\frac {2\tan(a)}{1-\tan(a)^{2}}}}. Let b>1{\displaystyle b>1}, f(u)=expb⁡(u){\displaystyle f(u)=\exp _{b}(u)}, C={z∈C:ℜ(z)>−2}{\displaystyle C=\{z\in \mathbb {C} :\Re (z)>-2\}}. The tetration tetb{\displaystyle \mathrm {tet} _{b}} is then a (C,0→1){\displaystyle (C,~0\!\rightarrow \!1)} superfunction of expb{\displaystyle \exp _{b}}. The inverse of a superfunction, for a suitable argument x, can be interpreted as the Abel function: the solution of the Abel equation A(f(x))=A(x)+1{\displaystyle A(f(x))=A(x)+1}, and hence f(x)=S(1+A(x)){\displaystyle f(x)=S(1+A(x))}. The inverse function, when defined, is A=S−1{\displaystyle A=S^{-1}} for suitable domains and ranges, when they exist. The recursive property of S is then self-evident. The figure at left shows an example of transition from exp1=exp{\displaystyle \exp ^{1}\!=\!\exp } to exp−1=ln{\displaystyle \exp ^{\!-1}\!=\!\ln }.
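A Schröder-type orbit can be checked numerically. The sketch below assumes, as in the examples above, the transfer function f(x) = 2x² − 1 with orbit S(z; x) = cos(2ᶻ arccos x), and also illustrates the non-uniqueness construction: shifting the argument by a 1-periodic μ with μ(0) = 0 gives another superfunction with the same recurrence and initial value.

```python
import cmath, math

# Schroeder's example: f(x) = 2x^2 - 1 with orbit S(z; x) = cos(2^z arccos x)
f = lambda x: 2 * x * x - 1
S = lambda z, x: cmath.cos(2 ** z * cmath.acos(x))

x = 0.3
for z in [0, 0.5, 1.0, 2.25]:
    assert abs(S(z + 1, x) - f(S(z, x))) < 1e-9   # S(z+1;x) = f(S(z;x))
assert abs(S(0, x) - x) < 1e-12                   # S(0;x) = x
assert abs(S(-20, x) - 1) < 1e-9                  # S -> 1 towards -infinity

# Non-uniqueness: a 1-periodic shift mu with mu(0) = 0 yields another
# superfunction G of the same transfer function.
mu = lambda z: 0.1 * math.sin(2 * math.pi * z)
G = lambda z, x: S(z + mu(z), x)
for z in [0.0, 0.3, 1.7]:
    assert abs(G(z + 1, x) - f(G(z, x))) < 1e-9   # same recurrence
assert abs(G(0, x) - x) < 1e-12                   # same initial value
```

The recurrence for G follows because μ(z + 1) = μ(z), so the shift is the same on both sides of the step.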
The iterated function expz{\displaystyle \exp ^{z}} versus real argument is plotted for z=2,1,0.9,0.5,0.1,−0.1,−0.5,−0.9,−1,−2{\displaystyle z=2,1,0.9,0.5,0.1,-0.1,-0.5,-0.9,-1,-2}. The tetrational and ArcTetrational were used as the superfunction S{\displaystyle S} and Abel function A{\displaystyle A} of the exponential. The figure at right shows these functions in the complex plane. At non-negative integer numbers of iteration, the iterated exponential is an entire function; at non-integer values, it has two branch points, which correspond to the fixed points L{\displaystyle L} and L∗{\displaystyle L^{*}} of the natural logarithm. At z≥0{\displaystyle z\!\geq \!0}, the function expz⁡(x){\displaystyle \exp ^{z}(x)} remains holomorphic at least in the strip |ℑ(z)|<ℑ(L)≈1.3{\displaystyle |\Im (z)|<\Im (L)\approx 1.3} along the real axis. Superfunctions, usually the super-exponentials, have been proposed as fast-growing functions for an upgrade of the floating point representation of numbers in computers. Such an upgrade would greatly extend the range of huge numbers which are still distinguishable from infinity. Other applications include the calculation of fractional iterates (or fractional powers) of a function. Any holomorphic function can be identified with a transfer function, and then its superfunctions and corresponding Abel functions can be considered. In the investigation of the nonlinear response of optical materials, the sample is supposed to be optically thin, so that the intensity of the light does not change much as it goes through. Then one can consider, for example, the absorption as a function of the intensity. However, for small variations of the intensity within the sample, the absorption cannot be measured precisely as a function of intensity. The reconstruction of the superfunction from the transfer function makes it possible to work with relatively thick samples, improving the precision of measurements.
In particular, the transfer function of a similar sample which is half as thick could be interpreted as the square root (i.e. half-iteration) of the transfer function of the initial sample. A similar example has been suggested for a nonlinear optical fiber.[6] It may make sense to characterize the nonlinearities in the attenuation of shock waves in a homogeneous tube. This could find an application in some advanced muffler, using nonlinear acoustic effects to withdraw the energy of the sound waves without disturbing the flux of the gas. Again, the analysis of the nonlinear response, i.e. the transfer function, may be boosted with the superfunction. In the analysis of condensation, the growth (or vaporization) of a small drop of liquid can be considered as it diffuses down through a tube with some uniform concentration of vapor. In the first approximation, at fixed concentration of the vapor, the mass of the drop at the output end can be interpreted as the transfer function of the input mass. The square root of this transfer function will characterize a tube of half the length. The mass of a snowball that rolls down a hill can be considered as a function of the path it has already passed. At fixed length of this path (which can be determined by the altitude of the hill), this mass can also be considered as a transfer function of the input mass. The mass of the snowball could be measured at the top of the hill and at the bottom, giving the transfer function; then, the mass of the snowball, as a function of the length it has passed, is a superfunction. If one needs to build up an operational element with some given transfer function H{\displaystyle H}, and wants to realize it as a sequential connection of a couple of identical operational elements, then each of these two elements should have transfer function h=H{\displaystyle h={\sqrt {H}}}. Such a function can be evaluated through the superfunction and the Abel function of the transfer function H{\displaystyle H}.
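The half-iterate h = √H can be written as h(x) = S(A(x) + 1/2), where S is a superfunction of H and A = S⁻¹ its Abel function. A sketch for H(x) = 2x² − 1, assuming the Schröder-type orbit S(z) = cos(2ᶻ arccos x) from the examples above (in which case h collapses to cos(√2 arccos x), valid on a real interval around 1 where √2·arccos(x) < π):

```python
import math

# Half-iterate of H(x) = 2x^2 - 1 built as h(x) = S(A(x) + 1/2),
# which collapses to cos(sqrt(2) * arccos(x)) for the Schroeder orbit.
H = lambda x: 2 * x * x - 1

def h(x):
    # valid for x with sqrt(2) * arccos(x) < pi, e.g. x in (cos(pi/sqrt(2)), 1]
    return math.cos(math.sqrt(2) * math.acos(x))

# Two applications of the half-iterate give one application of H:
for x in [0.3, 0.5, 0.9]:
    assert abs(h(h(x)) - H(x)) < 1e-12
```

This is exactly the "two identical operational elements in sequence" construction: each element applies h, and the pair applies H.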
The operational element may have any origin: it can be realized as an electronic microchip, a mechanical couple of curvilinear grains, an asymmetric U-tube filled with different liquids, and so on. This article incorporates material from the Citizendium article "Superfunction", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
https://en.wikipedia.org/wiki/Superfunction
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator D{\displaystyle D} Df(x)=ddxf(x),{\displaystyle Df(x)={\frac {d}{dx}}f(x)\,,} and of the integration operator J{\displaystyle J}[Note 1] Jf(x)=∫0xf(s)ds,{\displaystyle Jf(x)=\int _{0}^{x}f(s)\,ds\,,} and developing a calculus for such operators generalizing the classical one. In this context, the term powers refers to iterative application of a linear operator D{\displaystyle D} to a function f{\displaystyle f}, that is, repeatedly composing D{\displaystyle D} with itself, as in Dn(f)=(D∘D∘D∘⋯∘D⏟n)(f)=D(D(D(⋯D⏟n(f)⋯))).{\displaystyle {\begin{aligned}D^{n}(f)&=(\underbrace {D\circ D\circ D\circ \cdots \circ D} _{n})(f)\\&=\underbrace {D(D(D(\cdots D} _{n}(f)\cdots ))).\end{aligned}}} For example, one may ask for a meaningful interpretation of D=D12{\displaystyle {\sqrt {D}}=D^{\scriptstyle {\frac {1}{2}}}} as an analogue of the functional square root for the differentiation operator, that is, an expression for some linear operator that, when applied twice to any function, will have the same effect as differentiation. More generally, one can look at the question of defining a linear operator Da{\displaystyle D^{a}} for every real number a{\displaystyle a} in such a way that, when a{\displaystyle a} takes an integer value n∈Z{\displaystyle n\in \mathbb {Z} }, it coincides with the usual n{\displaystyle n}-fold differentiation D{\displaystyle D} if n>0{\displaystyle n>0}, and with the n{\displaystyle n}-th power of J{\displaystyle J} when n<0{\displaystyle n<0}.
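The integer powers Dⁿ as repeated composition can be made concrete on a finite-dimensional space. A small sketch representing d/dx on cubic polynomials by their coefficient lists (names illustrative):

```python
# Operator powers as repeated composition: D = d/dx acting on cubic
# polynomials given by coefficient lists [a0, a1, a2, a3].

def D(coeffs):
    """Differentiate a polynomial: coefficient k*a_k moves to degree k-1."""
    return [k * coeffs[k] for k in range(1, len(coeffs))] + [0.0]

def compose_n(op, n, v):
    """Apply the operator op to v, n times: op^n (v)."""
    for _ in range(n):
        v = op(v)
    return v

p = [1.0, 2.0, 3.0, 4.0]                              # 1 + 2x + 3x^2 + 4x^3
assert compose_n(D, 2, p) == [6.0, 24.0, 0.0, 0.0]    # (d/dx)^2 p = 6 + 24x
```

No polynomial-space matrix square root of this D exists (it is nilpotent), which is one reason fractional powers are instead built from the integral operator J, as below.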
One of the motivations behind the introduction and study of these sorts of extensions of the differentiation operator D{\displaystyle D} is that the sets of operator powers {Da∣a∈R}{\displaystyle \{D^{a}\mid a\in \mathbb {R} \}} defined in this way are continuous semigroups with parameter a{\displaystyle a}, of which the original discrete semigroup of {Dn∣n∈Z}{\displaystyle \{D^{n}\mid n\in \mathbb {Z} \}} for integer n{\displaystyle n} is a denumerable subgroup: since continuous semigroups have a well-developed mathematical theory, they can be applied to other branches of mathematics. Fractional differential equations, also known as extraordinary differential equations,[1] are a generalization of differential equations through the application of fractional calculus. In applied mathematics and mathematical analysis, a fractional derivative is a derivative of arbitrary order, real or complex. Its first appearance is in a letter written to Guillaume de l'Hôpital by Gottfried Wilhelm Leibniz in 1695.[2] Around the same time, Leibniz wrote to Johann Bernoulli about derivatives of "general order".[3] In the correspondence between Leibniz and John Wallis in 1697, Wallis's infinite product for π/2{\displaystyle \pi /2} is discussed, and Leibniz suggested using differential calculus to achieve this result.
Leibniz further used the notation d1/2y{\displaystyle {d}^{1/2}{y}} to denote the derivative of order ½.[3] Fractional calculus was introduced in one of Niels Henrik Abel's early papers,[4] where all the elements can be found: the idea of fractional-order integration and differentiation, the mutually inverse relationship between them, the understanding that fractional-order differentiation and integration can be considered as the same generalized operation, and the unified notation for differentiation and integration of arbitrary real order.[5] Independently, the foundations of the subject were laid by Liouville in a paper from 1832.[6][7][8] Oliver Heaviside introduced the practical use of fractional differential operators in electrical transmission line analysis circa 1890.[9] The theory and applications of fractional calculus expanded greatly over the 19th and 20th centuries, and numerous contributors have given different definitions for fractional derivatives and integrals.[10] Let f(x){\displaystyle f(x)} be a function defined for x>0{\displaystyle x>0}. Form the definite integral from 0 to x{\displaystyle x}. Call this (Jf)(x)=∫0xf(t)dt.{\displaystyle (Jf)(x)=\int _{0}^{x}f(t)\,dt\,.} Repeating this process gives (J2f)(x)=∫0x(Jf)(t)dt=∫0x(∫0tf(s)ds)dt,{\displaystyle {\begin{aligned}\left(J^{2}f\right)(x)&=\int _{0}^{x}(Jf)(t)\,dt\\&=\int _{0}^{x}\left(\int _{0}^{t}f(s)\,ds\right)dt\,,\end{aligned}}} and this can be extended arbitrarily.
The Cauchy formula for repeated integration, namely (Jnf)(x)=1(n−1)!∫0x(x−t)n−1f(t)dt,{\displaystyle \left(J^{n}f\right)(x)={\frac {1}{(n-1)!}}\int _{0}^{x}\left(x-t\right)^{n-1}f(t)\,dt\,,} leads in a straightforward way to a generalization for real n: using the gamma function to remove the discrete nature of the factorial function gives us a natural candidate for applications of the fractional integral operator as (Jαf)(x)=1Γ(α)∫0x(x−t)α−1f(t)dt.{\displaystyle \left(J^{\alpha }f\right)(x)={\frac {1}{\Gamma (\alpha )}}\int _{0}^{x}\left(x-t\right)^{\alpha -1}f(t)\,dt\,.} This is in fact a well-defined operator. It is straightforward to show that the J operator satisfies (Jα)(Jβf)(x)=(Jβ)(Jαf)(x)=(Jα+βf)(x)=1Γ(α+β)∫0x(x−t)α+β−1f(t)dt.{\displaystyle {\begin{aligned}\left(J^{\alpha }\right)\left(J^{\beta }f\right)(x)&=\left(J^{\beta }\right)\left(J^{\alpha }f\right)(x)\\&=\left(J^{\alpha +\beta }f\right)(x)\\&={\frac {1}{\Gamma (\alpha +\beta )}}\int _{0}^{x}\left(x-t\right)^{\alpha +\beta -1}f(t)\,dt\,.\end{aligned}}} To prove this, start from the definition: (Jα)(Jβf)(x)=1Γ(α)∫0x(x−t)α−1(Jβf)(t)dt=1Γ(α)Γ(β)∫0x∫0t(x−t)α−1(t−s)β−1f(s)dsdt=1Γ(α)Γ(β)∫0xf(s)(∫sx(x−t)α−1(t−s)β−1dt)ds{\displaystyle {\begin{aligned}\left(J^{\alpha }\right)\left(J^{\beta }f\right)(x)&={\frac {1}{\Gamma (\alpha )}}\int _{0}^{x}(x-t)^{\alpha -1}\left(J^{\beta }f\right)(t)\,dt\\&={\frac {1}{\Gamma (\alpha )\Gamma (\beta )}}\int _{0}^{x}\int _{0}^{t}\left(x-t\right)^{\alpha -1}\left(t-s\right)^{\beta -1}f(s)\,ds\,dt\\&={\frac {1}{\Gamma (\alpha )\Gamma (\beta )}}\int _{0}^{x}f(s)\left(\int _{s}^{x}\left(x-t\right)^{\alpha -1}\left(t-s\right)^{\beta -1}\,dt\right)\,ds\end{aligned}}} where in the last step we exchanged the order of integration and pulled out the f(s) factor from the t integration.
Changing variables to r defined by t = s + (x−s)r, (Jα)(Jβf)(x)=1Γ(α)Γ(β)∫0x(x−s)α+β−1f(s)(∫01(1−r)α−1rβ−1dr)ds{\displaystyle \left(J^{\alpha }\right)\left(J^{\beta }f\right)(x)={\frac {1}{\Gamma (\alpha )\Gamma (\beta )}}\int _{0}^{x}\left(x-s\right)^{\alpha +\beta -1}f(s)\left(\int _{0}^{1}\left(1-r\right)^{\alpha -1}r^{\beta -1}\,dr\right)\,ds} The inner integral is the beta function, which satisfies the following property: ∫01(1−r)α−1rβ−1dr=B(α,β)=Γ(α)Γ(β)Γ(α+β){\displaystyle \int _{0}^{1}\left(1-r\right)^{\alpha -1}r^{\beta -1}\,dr=B(\alpha ,\beta )={\frac {\Gamma (\alpha )\,\Gamma (\beta )}{\Gamma (\alpha +\beta )}}} Substituting back into the equation: (Jα)(Jβf)(x)=1Γ(α+β)∫0x(x−s)α+β−1f(s)ds=(Jα+βf)(x){\displaystyle {\begin{aligned}\left(J^{\alpha }\right)\left(J^{\beta }f\right)(x)&={\frac {1}{\Gamma (\alpha +\beta )}}\int _{0}^{x}\left(x-s\right)^{\alpha +\beta -1}f(s)\,ds\\&=\left(J^{\alpha +\beta }f\right)(x)\end{aligned}}} Interchanging α and β shows that the order in which the J operator is applied is irrelevant and completes the proof. This relationship is called the semigroup property of fractional differintegral operators. The classical form of fractional calculus is given by the Riemann–Liouville integral, which is essentially what has been described above. The theory of fractional integration for periodic functions (therefore including the "boundary condition" of repeating after a period) is given by the Weyl integral. It is defined on Fourier series, and requires the constant Fourier coefficient to vanish (thus, it applies to functions on the unit circle whose integrals evaluate to zero). The Riemann–Liouville integral exists in two forms, upper and lower.
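The semigroup property can also be checked on monomials, for which the fractional integral has the closed form Jᵅ xᵏ = Γ(k+1)/Γ(k+1+α) · x^(k+α). A minimal sketch (helper name illustrative):

```python
from math import gamma

def J(alpha, k):
    """Riemann-Liouville integral J^alpha applied to the monomial x^k,
    returned as (coefficient, new_power):
    J^alpha x^k = Gamma(k+1)/Gamma(k+1+alpha) * x^(k+alpha)."""
    return gamma(k + 1) / gamma(k + 1 + alpha), k + alpha

# Semigroup property J^alpha J^beta = J^(alpha+beta), checked on f(x) = x^2:
a, b, k = 0.5, 0.7, 2
c1, p1 = J(b, k)          # first J^beta
c2, p2 = J(a, p1)         # then J^alpha (linearity: coefficients multiply)
c3, p3 = J(a + b, k)      # single J^(alpha+beta)
assert abs(c1 * c2 - c3) < 1e-12 and abs(p2 - p3) < 1e-12

# Sanity check against classical integration: J^1 x^2 = x^3 / 3.
c, p = J(1, 2)
assert abs(c - 1 / 3) < 1e-12 and p == 3
```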
Considering the interval [a,b], the integrals are defined as DaDt−α⁡f(t)=IaItα⁡f(t)=1Γ(α)∫at(t−τ)α−1f(τ)dτDtDb−α⁡f(t)=ItIbα⁡f(t)=1Γ(α)∫tb(τ−t)α−1f(τ)dτ{\displaystyle {\begin{aligned}\sideset {_{a}}{_{t}^{-\alpha }}Df(t)&=\sideset {_{a}}{_{t}^{\alpha }}If(t)\\&={\frac {1}{\Gamma (\alpha )}}\int _{a}^{t}\left(t-\tau \right)^{\alpha -1}f(\tau )\,d\tau \\\sideset {_{t}}{_{b}^{-\alpha }}Df(t)&=\sideset {_{t}}{_{b}^{\alpha }}If(t)\\&={\frac {1}{\Gamma (\alpha )}}\int _{t}^{b}\left(\tau -t\right)^{\alpha -1}f(\tau )\,d\tau \end{aligned}}} where the former is valid for t > a and the latter is valid for t < b.[11] It has been suggested[12] that the integral on the positive real axis (i.e. a=0{\displaystyle a=0}) would be more appropriately named the Abel–Riemann integral, on the basis of the history of its discovery and use, and in the same vein that the integral over the entire real line be named the Liouville–Weyl integral. By contrast, the Grünwald–Letnikov derivative starts with the derivative instead of the integral. The Hadamard fractional integral was introduced by Jacques Hadamard[13] and is given by the following formula: DaDt−α⁡f(t)=1Γ(α)∫at(log⁡tτ)α−1f(τ)dττ,t>a.{\displaystyle \sideset {_{a}}{_{t}^{-\alpha }}{\mathbf {D} }f(t)={\frac {1}{\Gamma (\alpha )}}\int _{a}^{t}\left(\log {\frac {t}{\tau }}\right)^{\alpha -1}f(\tau ){\frac {d\tau }{\tau }},\qquad t>a\,.} The Atangana–Baleanu fractional integral of a continuous function is defined as: IAaABItα⁡f(t)=1−αAB⁡(α)f(t)+αAB⁡(α)Γ(α)∫at(t−τ)α−1f(τ)dτ{\displaystyle \sideset {_{{\hphantom {A}}a}^{\operatorname {AB} }}{_{t}^{\alpha }}If(t)={\frac {1-\alpha }{\operatorname {AB} (\alpha )}}f(t)+{\frac {\alpha }{\operatorname {AB} (\alpha )\Gamma (\alpha )}}\int _{a}^{t}\left(t-\tau \right)^{\alpha -1}f(\tau )\,d\tau } Unfortunately, the comparable process for the derivative operator D is significantly more complex; it can be shown that D is in general neither commutative nor additive.[14] Unlike classical Newtonian derivatives, fractional derivatives can be
defined in a variety of different ways that often do not all lead to the same result even for smooth functions. Some of these are defined via a fractional integral. Because of the incompatibility of definitions, it is frequently necessary to be explicit about which definition is used. The corresponding derivative is calculated using Lagrange's rule for differential operators. To find the αth order derivative, the nth order derivative of the integral of order (n−α) is computed, where n is the smallest integer greater than α (that is, n = ⌈α⌉). The Riemann–Liouville fractional derivative and integral have multiple applications, for example in solutions to equations arising in systems such as tokamaks, and in variable-order fractional modeling.[15][16] Similar to the definitions for the Riemann–Liouville integral, the derivative has upper and lower variants.[17] DaDtα⁡f(t)=dndtnDaDt−(n−α)⁡f(t)=dndtnIaItn−α⁡f(t)DtDbα⁡f(t)=dndtnDtDb−(n−α)⁡f(t)=dndtnItIbn−α⁡f(t){\displaystyle {\begin{aligned}\sideset {_{a}}{_{t}^{\alpha }}Df(t)&={\frac {d^{n}}{dt^{n}}}\sideset {_{a}}{_{t}^{-(n-\alpha )}}Df(t)\\&={\frac {d^{n}}{dt^{n}}}\sideset {_{a}}{_{t}^{n-\alpha }}If(t)\\\sideset {_{t}}{_{b}^{\alpha }}Df(t)&={\frac {d^{n}}{dt^{n}}}\sideset {_{t}}{_{b}^{-(n-\alpha )}}Df(t)\\&={\frac {d^{n}}{dt^{n}}}\sideset {_{t}}{_{b}^{n-\alpha }}If(t)\end{aligned}}} Another option for computing fractional derivatives is the Caputo fractional derivative, introduced by Michele Caputo in his 1967 paper.[18] In contrast to the Riemann–Liouville fractional derivative, when solving differential equations using Caputo's definition, it is not necessary to define the fractional-order initial conditions.
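The "derivative of the fractional integral" recipe has a closed form on monomials: Dᵅ xᵏ = Γ(k+1)/Γ(k+1−α) · x^(k−α). A sketch showing the classic half-derivative chain D^(1/2) D^(1/2) x = 1 (helper name illustrative):

```python
from math import gamma, pi, sqrt

def rl_derivative_monomial(alpha, k):
    """Riemann-Liouville derivative D^alpha of x^k (k > -1, 0 < alpha < 1):
    D^alpha x^k = Gamma(k+1)/Gamma(k+1-alpha) * x^(k-alpha),
    obtained as d/dx of J^(1-alpha) x^k.  Returns (coefficient, power)."""
    return gamma(k + 1) / gamma(k + 1 - alpha), k - alpha

# Half-derivative of x:  D^(1/2) x = 2 * sqrt(x / pi)
c, p = rl_derivative_monomial(0.5, 1)
assert abs(c - 2 / sqrt(pi)) < 1e-12 and abs(p - 0.5) < 1e-12

# Applying the half-derivative twice recovers the ordinary derivative:
# D^(1/2) (c * x^(1/2)) = c * Gamma(3/2) / Gamma(1) * x^0 = 1
c2, p2 = rl_derivative_monomial(0.5, 0.5)
assert abs(c * c2 - 1.0) < 1e-12 and abs(p2) < 1e-12
```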
Caputo's definition is illustrated as follows, where again n = ⌈α⌉: DCDtα⁡f(t)=1Γ(n−α)∫0tf(n)(τ)(t−τ)α+1−ndτ.{\displaystyle \sideset {^{C}}{_{t}^{\alpha }}Df(t)={\frac {1}{\Gamma (n-\alpha )}}\int _{0}^{t}{\frac {f^{(n)}(\tau )}{\left(t-\tau \right)^{\alpha +1-n}}}\,d\tau .} There is also the Caputo fractional derivative defined as: Dνf(t)=1Γ(n−ν)∫0t(t−u)(n−ν−1)f(n)(u)du(n−1)<ν<n{\displaystyle D^{\nu }f(t)={\frac {1}{\Gamma (n-\nu )}}\int _{0}^{t}(t-u)^{(n-\nu -1)}f^{(n)}(u)\,du\qquad (n-1)<\nu <n} which has the advantage that it is zero when f(t) is constant, and that its Laplace transform is expressed by means of the initial values of the function and its derivative. Moreover, there is the Caputo fractional derivative of distributed order, defined as DabDn⁡u⁡f(t)=∫abϕ(ν)[D(ν)f(t)]dν=∫ab[ϕ(ν)Γ(1−ν)∫0t(t−u)−νf′(u)du]dν{\displaystyle {\begin{aligned}\sideset {_{a}^{b}}{^{n}u}Df(t)&=\int _{a}^{b}\phi (\nu )\left[D^{(\nu )}f(t)\right]\,d\nu \\&=\int _{a}^{b}\left[{\frac {\phi (\nu )}{\Gamma (1-\nu )}}\int _{0}^{t}\left(t-u\right)^{-\nu }f'(u)\,du\right]\,d\nu \end{aligned}}} where ϕ(ν) is a weight function, used to represent mathematically the presence of multiple memory formalisms. In a paper of 2015, M. Caputo and M. Fabrizio presented a definition of fractional derivative with a non-singular kernel, for a function f(t) of class C1{\displaystyle C^{1}}, given by: DCaCFDtα⁡f(t)=11−α∫atf′(τ)e(−αt−τ1−α)dτ,{\displaystyle \sideset {_{{\hphantom {C}}a}^{\text{CF}}}{_{t}^{\alpha }}Df(t)={\frac {1}{1-\alpha }}\int _{a}^{t}f'(\tau )\ e^{\left(-\alpha {\frac {t-\tau }{1-\alpha }}\right)}\ d\tau ,} where a<0,α∈(0,1]{\displaystyle a<0,\alpha \in (0,1]}.[19] In 2016, Atangana and Baleanu suggested differential operators based on the generalized Mittag-Leffler function Eα{\displaystyle E_{\alpha }}. The aim was to introduce fractional differential operators with a non-singular nonlocal kernel.
Their fractional differential operators are given below in the Riemann–Liouville sense and the Caputo sense, respectively. For a function f(t){\displaystyle f(t)} of class C1{\displaystyle C^{1}}, the Caputo-sense operator is given by[20][21] DABaABCDtα⁡f(t)=AB⁡(α)1−α∫atf′(τ)Eα(−α(t−τ)α1−α)dτ,{\displaystyle \sideset {_{{\hphantom {AB}}a}^{\text{ABC}}}{_{t}^{\alpha }}Df(t)={\frac {\operatorname {AB} (\alpha )}{1-\alpha }}\int _{a}^{t}f'(\tau )E_{\alpha }\left(-\alpha {\frac {(t-\tau )^{\alpha }}{1-\alpha }}\right)d\tau ,} If the function is continuous, the Atangana–Baleanu derivative in the Riemann–Liouville sense is given by: DABaABRDtα⁡f(t)=AB⁡(α)1−αddt∫atf(τ)Eα(−α(t−τ)α1−α)dτ,{\displaystyle \sideset {_{{\hphantom {AB}}a}^{\text{ABR}}}{_{t}^{\alpha }}Df(t)={\frac {\operatorname {AB} (\alpha )}{1-\alpha }}{\frac {d}{dt}}\int _{a}^{t}f(\tau )E_{\alpha }\left(-\alpha {\frac {(t-\tau )^{\alpha }}{1-\alpha }}\right)d\tau ,} The kernel used in the Atangana–Baleanu fractional derivative has some properties of a cumulative distribution function. For example, for all α∈(0,1]{\displaystyle \alpha \in (0,1]}, the function Eα{\displaystyle E_{\alpha }} is increasing on the real line, converges to 0{\displaystyle 0} at −∞{\displaystyle -\infty }, and Eα(0)=1{\displaystyle E_{\alpha }(0)=1}. Therefore, the function x↦1−Eα(−xα){\displaystyle x\mapsto 1-E_{\alpha }(-x^{\alpha })} is the cumulative distribution function of a probability measure on the positive real numbers. The distribution is therefore defined, and any of its multiples is called a Mittag-Leffler distribution of order α{\displaystyle \alpha }. It is also well known that all these probability distributions are absolutely continuous. In particular, the Mittag-Leffler function has the particular case E1{\displaystyle E_{1}}, the exponential function; the Mittag-Leffler distribution of order 1{\displaystyle 1} is therefore an exponential distribution. However, for α∈(0,1){\displaystyle \alpha \in (0,1)}, the Mittag-Leffler distributions are heavy-tailed.
Their Laplace transform is given by: E(e−λXα)=11+λα,{\displaystyle \mathbb {E} (e^{-\lambda X_{\alpha }})={\frac {1}{1+\lambda ^{\alpha }}},} which directly implies that, for α∈(0,1){\displaystyle \alpha \in (0,1)}, the expectation is infinite. In addition, these distributions are geometric stable distributions. The Riesz derivative is defined as F{∂αu∂|x|α}(k)=−|k|αF{u}(k),{\displaystyle {\mathcal {F}}\left\{{\frac {\partial ^{\alpha }u}{\partial \left|x\right|^{\alpha }}}\right\}(k)=-\left|k\right|^{\alpha }{\mathcal {F}}\{u\}(k),} where F{\displaystyle {\mathcal {F}}} denotes the Fourier transform.[22][23] The conformable fractional derivative of a function f{\displaystyle f} of order α{\displaystyle \alpha } is given by Ta(f)(t)=limϵ→0f(t+ϵt1−α)−f(t)ϵ{\displaystyle T_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}{\frac {f\left(t+\epsilon t^{1-\alpha }\right)-f(t)}{\epsilon }}} Unlike other definitions of the fractional derivative, the conformable fractional derivative obeys the product and quotient rules and has analogs of Rolle's theorem and the mean value theorem.[24][25] However, this fractional derivative produces significantly different results compared to the Riemann–Liouville and Caputo fractional derivatives. In 2020, Feng Gao and Chunmei Chi defined the improved Caputo-type conformable fractional derivative, which more closely approximates the behavior of the Caputo fractional derivative:[25] aCT~a(f)(t)=limϵ→0[(1−α)(f(t)−f(a))+αf(t+ϵ(t−a)1−α)−f(t)ϵ]{\displaystyle _{a}^{C}{\widetilde {T}}_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}\left[(1-\alpha )(f(t)-f(a))+\alpha {\frac {f\left(t+\epsilon (t-a)^{1-\alpha }\right)-f(t)}{\epsilon }}\right]} where a{\displaystyle a} and t{\displaystyle t} are real numbers and a<t{\displaystyle a<t}.
They also defined the improved Riemann–Liouville-type conformable fractional derivative to similarly approximate the Riemann–Liouville fractional derivative:[25] aRLT~a(f)(t)=limϵ→0[(1−α)f(t)+αf(t+ϵ(t−a)1−α)−f(t)ϵ]{\displaystyle _{a}^{RL}{\widetilde {T}}_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}\left[(1-\alpha )f(t)+\alpha {\frac {f\left(t+\epsilon (t-a)^{1-\alpha }\right)-f(t)}{\epsilon }}\right]} where a{\displaystyle a} and t{\displaystyle t} are real numbers and a<t{\displaystyle a<t}. Both improved conformable fractional derivatives have analogs of Rolle's theorem and the interior extremum theorem.[26] Classical fractional derivatives include the Riemann–Liouville, Caputo, Grünwald–Letnikov, Hadamard, and Riesz derivatives; newer fractional derivatives include the Caputo–Fabrizio and Atangana–Baleanu derivatives. The Coimbra derivative is used for physical modeling.[35] A number of applications in both mechanics and optics can be found in the works by Coimbra and collaborators,[36][37][38][39][40][41][42] as well as additional applications to physical problems and numerical implementations studied in a number of works by other authors.[43][44][45][46] Here the lower limit a{\displaystyle a} can be taken as either 0−{\displaystyle 0^{-}} or −∞{\displaystyle -\infty }, as long as f(t){\displaystyle f(t)} is identically zero from −∞{\displaystyle -\infty } to 0−{\displaystyle 0^{-}}. Note that this operator returns the correct fractional derivatives for all values of t{\displaystyle t} and can be applied either to the dependent function itself f(t){\displaystyle f(t)} with a variable order of the form q(f(t)){\displaystyle q(f(t))} or to the independent variable with a variable order of the form q(t){\displaystyle q(t)}.[1] The Coimbra derivative can be generalized to any order,[47] leading to the Coimbra Generalized Order Differintegration Operator (GODO),[48] where m{\displaystyle m} is an integer larger than the largest value of q(t){\displaystyle q(t)} over all values of t{\displaystyle t}.
Note that the second (summation) term on the right side of the definition above can be rewritten so as to keep the denominator on the positive branch of the Gamma (Γ{\displaystyle \Gamma }) function and for ease of numerical calculation. The a{\displaystyle a}-th derivative of a function f{\displaystyle f} at a point x{\displaystyle x} is a local property only when a{\displaystyle a} is an integer; this is not the case for non-integer power derivatives. In other words, a non-integer fractional derivative of f{\displaystyle f} at x=c{\displaystyle x=c} depends on all values of f{\displaystyle f}, even those far away from c{\displaystyle c}. Therefore, it is expected that the fractional derivative operation involves some sort of boundary conditions, involving information on the function further out.[49] The fractional derivative of a function of order a{\displaystyle a} is nowadays often defined by means of the Fourier or Mellin integral transforms. The Erdélyi–Kober operator is an integral operator introduced by Arthur Erdélyi (1940)[50] and Hermann Kober (1940)[51] and is given by x−ν−α+1Γ(α)∫0x(t−x)α−1t−α−νf(t)dt,{\displaystyle {\frac {x^{-\nu -\alpha +1}}{\Gamma (\alpha )}}\int _{0}^{x}\left(t-x\right)^{\alpha -1}t^{-\alpha -\nu }f(t)\,dt\,,} which generalizes the Riemann–Liouville fractional integral and the Weyl integral. In the context of functional analysis, functions f(D) more general than powers are studied in the functional calculus of spectral theory. The theory of pseudo-differential operators also allows one to consider powers of D. The operators arising are examples of singular integral operators, and the generalisation of the classical theory to higher dimensions is called the theory of Riesz potentials. So there are a number of contemporary theories available within which fractional calculus can be discussed. See also the Erdélyi–Kober operator, important in special function theory (Kober 1940), (Erdélyi 1950–1951).
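The Caputo definition above (the n = 1 case, 0 < α < 1) can be evaluated by direct quadrature; the substitution τ = t − s² removes the kernel singularity at τ = t. A numeric sketch with illustrative function names, checked against the closed form for f(t) = t²:

```python
from math import gamma, sqrt

def caputo_halfderivative(f_prime, t, n=20000):
    """Numerically evaluate the Caputo derivative of order 1/2 at t:
        (1/Gamma(1/2)) * int_0^t f'(tau) (t - tau)^(-1/2) dtau.
    The substitution tau = t - s^2 gives (t - tau)^(-1/2) dtau = 2 ds,
    leaving a smooth integrand handled by the midpoint rule."""
    smax = sqrt(t)
    h = smax / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += f_prime(t - s * s) * 2.0 * h
    return total / gamma(0.5)

t = 2.0
# f(t) = t^2:  Caputo D^(1/2) t^2 = Gamma(3)/Gamma(5/2) * t^(3/2)
exact = gamma(3) / gamma(2.5) * t ** 1.5
assert abs(caputo_halfderivative(lambda u: 2 * u, t) - exact) < 1e-6

# f constant: f' = 0, so the Caputo half-derivative vanishes, unlike
# the Riemann-Liouville derivative of a constant.
assert caputo_halfderivative(lambda u: 0.0, t) == 0.0
```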
As described by Wheatcraft and Meerschaert (2008),[52] a fractional conservation of mass equation is needed to model fluid flow when the control volume is not large enough compared to the scale of heterogeneity and when the flux within the control volume is non-linear. In the referenced paper, the fractional conservation of mass equation for fluid flow is: −ρ(∇α⋅u→)=Γ(α+1)Δx1−αρ(βs+ϕβw)∂p∂t{\displaystyle -\rho \left(\nabla ^{\alpha }\cdot {\vec {u}}\right)=\Gamma (\alpha +1)\Delta x^{1-\alpha }\rho \left(\beta _{s}+\phi \beta _{w}\right){\frac {\partial p}{\partial t}}} When studying the redox behavior of a substrate in solution, a voltage is applied at an electrode surface to force electron transfer between electrode and substrate. The resulting electron transfer is measured as a current. The current depends upon the concentration of substrate at the electrode surface. As substrate is consumed, fresh substrate diffuses to the electrode as described by Fick's laws of diffusion. Taking the Laplace transform of Fick's second law yields an ordinary second-order differential equation (here in dimensionless form): d2dx2C(x,s)=sC(x,s){\displaystyle {\frac {d^{2}}{dx^{2}}}C(x,s)=sC(x,s)} whose solution C(x,s) contains a one-half power dependence on s. Taking the derivative of C(x,s) and then the inverse Laplace transform yields the following relationship: ddxC(x,t)=d12dt12C(x,t){\displaystyle {\frac {d}{dx}}C(x,t)={\frac {d^{\scriptstyle {\frac {1}{2}}}}{dt^{\scriptstyle {\frac {1}{2}}}}}C(x,t)} which relates the concentration of substrate at the electrode surface to the current.[53] This relationship is applied in electrochemical kinetics to elucidate mechanistic behavior. For example, it has been used to study the rate of dimerization of substrates upon electrochemical reduction.[54] In 2013–2014 Atangana et al.
described some groundwater flow problems using the concept of a derivative of fractional order.[55][56] In these works, the classical Darcy law is generalized by regarding the water flow as a function of a non-integer order derivative of the piezometric head. This generalized law and the law of conservation of mass are then used to derive a new equation for groundwater flow, which has been shown to be useful for modeling contaminant flow in heterogeneous porous media.[57][58][59] Atangana and Kilicman extended the fractional advection dispersion equation to a variable-order equation. In their work, the hydrodynamic dispersion equation was generalized using the concept of a variational-order derivative. The modified equation was numerically solved via the Crank–Nicolson method. The stability and convergence in numerical simulations showed that the modified equation is more reliable in predicting the movement of pollution in deformable aquifers than equations with constant fractional and integer derivatives.[60] Anomalous diffusion processes in complex media can be well characterized by using fractional-order diffusion equation models.[61][62] The time derivative term corresponds to long-time heavy-tail decay and the spatial derivative to diffusion nonlocality. The time-space fractional diffusion governing equation can be written as ∂αu∂tα=−K(−Δ)βu.{\displaystyle {\frac {\partial ^{\alpha }u}{\partial t^{\alpha }}}=-K(-\Delta )^{\beta }u.} A simple extension of the fractional derivative is the variable-order fractional derivative, in which α and β are changed into α(x,t) and β(x,t). Its applications in anomalous diffusion modeling can be found in the references.[60][63][64] Fractional derivatives are used to model viscoelastic damping in certain types of materials like polymers.[12] Generalizing PID controllers to use fractional orders can increase their degree of freedom.
The new equation relating the control variable u(t) in terms of a measured error value e(t) can be written as u(t)=Kpe(t)+KiDt−αe(t)+KdDtβe(t){\displaystyle u(t)=K_{\mathrm {p} }e(t)+K_{\mathrm {i} }D_{t}^{-\alpha }e(t)+K_{\mathrm {d} }D_{t}^{\beta }e(t)} where α and β are positive fractional orders and Kp, Ki, and Kd, all non-negative, denote the coefficients for the proportional, integral, and derivative terms, respectively (sometimes denoted P, I, and D).[65] The propagation of acoustical waves in complex media, such as in biological tissue, commonly implies attenuation obeying a frequency power-law. This kind of phenomenon may be described using a causal wave equation which incorporates fractional time derivatives: ∇2u−1c02∂2u∂t2+τσα∂α∂tα∇2u−τϵβc02∂β+2u∂tβ+2=0.{\displaystyle \nabla ^{2}u-{\dfrac {1}{c_{0}^{2}}}{\frac {\partial ^{2}u}{\partial t^{2}}}+\tau _{\sigma }^{\alpha }{\dfrac {\partial ^{\alpha }}{\partial t^{\alpha }}}\nabla ^{2}u-{\dfrac {\tau _{\epsilon }^{\beta }}{c_{0}^{2}}}{\dfrac {\partial ^{\beta +2}u}{\partial t^{\beta +2}}}=0\,.} See also Holm & Näsholm (2011)[66] and the references therein. Such models are linked to the commonly recognized hypothesis that multiple relaxation phenomena give rise to the attenuation measured in complex media. This link is further described in Näsholm & Holm (2011b)[67] and in the survey paper,[68] as well as the Acoustic attenuation article. See Holm & Nasholm (2013)[69] for a paper which compares fractional wave equations which model power-law attenuation.
This book on power-law attenuation also covers the topic in more detail.[70] Pandey and Holm gave a physical meaning to fractional differential equations by deriving them from physical principles and interpreting the fractional-order in terms of the parameters of the acoustical media, example in fluid-saturated granular unconsolidated marine sediments.[71]Interestingly, Pandey and Holm derivedLomnitz's lawinseismologyand Nutting's law innon-Newtonian rheologyusing the framework of fractional calculus.[72]Nutting's law was used to model the wave propagation in marine sediments using fractional derivatives.[71] Thefractional Schrödinger equation, a fundamental equation offractional quantum mechanics, has the following form:[73][74]iℏ∂ψ(r,t)∂t=Dα(−ℏ2Δ)α2ψ(r,t)+V(r,t)ψ(r,t).{\displaystyle i\hbar {\frac {\partial \psi (\mathbf {r} ,t)}{\partial t}}=D_{\alpha }\left(-\hbar ^{2}\Delta \right)^{\frac {\alpha }{2}}\psi (\mathbf {r} ,t)+V(\mathbf {r} ,t)\psi (\mathbf {r} ,t)\,.} where the solution of the equation is thewavefunctionψ(r,t)– the quantum mechanicalprobability amplitudefor the particle to have a givenposition vectorrat any given timet, andħis thereduced Planck constant. Thepotential energyfunctionV(r,t)depends on the system. Further,Δ=∂2∂r2{\textstyle \Delta ={\frac {\partial ^{2}}{\partial \mathbf {r} ^{2}}}}is theLaplace operator, andDαis a scale constant with physicaldimension[Dα] = J1 −α·mα·s−α= kg1 −α·m2 −α·sα− 2, (atα= 2,D2=12m{\textstyle D_{2}={\frac {1}{2m}}}for a particle of massm), and the operator(−ħ2Δ)α/2is the 3-dimensional fractional quantum Riesz derivative defined by(−ℏ2Δ)α2ψ(r,t)=1(2πℏ)3∫d3peiℏp⋅r|p|αφ(p,t).{\displaystyle (-\hbar ^{2}\Delta )^{\frac {\alpha }{2}}\psi (\mathbf {r} ,t)={\frac {1}{(2\pi \hbar )^{3}}}\int d^{3}pe^{{\frac {i}{\hbar }}\mathbf {p} \cdot \mathbf {r} }|\mathbf {p} |^{\alpha }\varphi (\mathbf {p} ,t)\,.} The indexαin the fractional Schrödinger equation is the Lévy index,1 <α≤ 2. 
As a natural generalization of thefractional Schrödinger equation, the variable-order fractional Schrödinger equation has been exploited to study fractional quantum phenomena:[75]iℏ∂ψα(r)(r,t)∂tα(r)=(−ℏ2Δ)β(t)2ψ(r,t)+V(r,t)ψ(r,t),{\displaystyle i\hbar {\frac {\partial \psi ^{\alpha (\mathbf {r} )}(\mathbf {r} ,t)}{\partial t^{\alpha (\mathbf {r} )}}}=\left(-\hbar ^{2}\Delta \right)^{\frac {\beta (t)}{2}}\psi (\mathbf {r} ,t)+V(\mathbf {r} ,t)\psi (\mathbf {r} ,t),} whereΔ=∂2∂r2{\textstyle \Delta ={\frac {\partial ^{2}}{\partial \mathbf {r} ^{2}}}}is theLaplace operatorand the operator(−ħ2Δ)β(t)/2is the variable-order fractional quantum Riesz derivative.
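The Riesz operator above acts in momentum space as multiplication by |p|^α. A minimal 1-D numerical sketch of that idea (dropping ħ, using a plain O(N²) DFT to stay dependency-free; the function name and grid are illustrative, not from the text):

```python
import cmath
import math

def riesz_fractional(u, dx, alpha):
    """Apply the 1-D analogue of (-Delta)^(alpha/2) to periodic samples u:
    transform, multiply by |k|^alpha, transform back."""
    N = len(u)
    # forward DFT
    U = [sum(u[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    # angular frequency of bin k on a periodic grid of spacing dx
    freqs = [(k if k <= N // 2 else k - N) * 2 * math.pi / (N * dx)
             for k in range(N)]
    V = [abs(f) ** alpha * Uk for f, Uk in zip(freqs, U)]
    # inverse DFT (result is real for real input)
    return [sum(V[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]
```

For alpha = 2 this reduces to −d²/dx², so applied to sin(2x) on [0, 2π) it returns 4·sin(2x).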
https://en.wikipedia.org/wiki/Fractional_calculus
Inmathematics, ahalf-exponential functionis afunctional square rootof anexponential function. That is, afunctionf{\displaystyle f}such thatf{\displaystyle f}composedwith itself results in an exponential function:[1][2]f(f(x))=abx,{\displaystyle f{\bigl (}f(x){\bigr )}=ab^{x},}for some constantsa{\displaystyle a}andb{\displaystyle b}. Hellmuth Kneserfirst proposed aholomorphicconstruction of the solution off(f(x))=ex{\displaystyle f{\bigl (}f(x){\bigr )}=e^{x}}in 1950. It is closely related to the problem of extendingtetrationto non-integer values; the value of12a{\displaystyle {}^{\frac {1}{2}}a}can be understood as the value off(1){\displaystyle f{\bigl (}1)}, wheref(x){\displaystyle f{\bigl (}x)}satisfiesf(f(x))=ax{\displaystyle f{\bigl (}f(x){\bigr )}=a^{x}}. Example values from Kneser's solution off(f(x))=ex{\displaystyle f{\bigl (}f(x){\bigr )}=e^{x}}includef(0)≈0.49856{\displaystyle f{\bigl (}0)\approx 0.49856}andf(1)≈1.64635{\displaystyle f{\bigl (}1)\approx 1.64635}. If a functionf{\displaystyle f}is defined using the standard arithmetic operations, exponentials,logarithms, andreal-valued constants, thenf(f(x)){\displaystyle f{\bigl (}f(x){\bigr )}}is either subexponential or superexponential.[3]Thus, aHardyL-functioncannot be half-exponential. Any exponential function can be written as the self-compositionf(f(x)){\displaystyle f(f(x))}for infinitely many possible choices off{\displaystyle f}. 
In particular, for everyA{\displaystyle A}in theopen interval(0,1){\displaystyle (0,1)}and for everycontinuousstrictly increasingfunctiong{\displaystyle g}from[0,A]{\displaystyle [0,A]}onto[A,1]{\displaystyle [A,1]}, there is an extension of this function to a continuous strictly increasing functionf{\displaystyle f}on the real numbers such thatf(f(x))=exp⁡x{\displaystyle f{\bigl (}f(x){\bigr )}=\exp x}.[4]The functionf{\displaystyle f}is the unique solution to thefunctional equationf(x)={g(x)ifx∈[0,A],exp⁡g−1(x)ifx∈(A,1],exp⁡f(ln⁡x)ifx∈(1,∞),ln⁡f(exp⁡x)ifx∈(−∞,0).{\displaystyle f(x)={\begin{cases}g(x)&{\mbox{if }}x\in [0,A],\\\exp g^{-1}(x)&{\mbox{if }}x\in (A,1],\\\exp f(\ln x)&{\mbox{if }}x\in (1,\infty ),\\\ln f(\exp x)&{\mbox{if }}x\in (-\infty ,0).\\\end{cases}}} A simple example, which leads tof{\displaystyle f}having a continuous first derivativef′{\displaystyle f'}everywhere, and also causesf″≥0{\displaystyle f''\geq 0}everywhere (i.e.f(x){\displaystyle f(x)}is concave-up, andf′(x){\displaystyle f'(x)}increasing, for all realx{\displaystyle x}), is to takeA=12{\displaystyle A={\tfrac {1}{2}}}andg(x)=x+12{\displaystyle g(x)=x+{\tfrac {1}{2}}}, givingf(x)={loge⁡(ex+12)ifx≤−loge⁡2,ex−12if−loge⁡2≤x≤0,x+12if0≤x≤12,ex−1/2if12≤x≤1,xeif1≤x≤e,ex/eife≤x≤e,xeife≤x≤ee,ex1/eifee≤x≤ee,…{\displaystyle f(x)={\begin{cases}\log _{e}\left(e^{x}+{\tfrac {1}{2}}\right)&{\mbox{if }}x\leq -\log _{e}2,\\e^{x}-{\tfrac {1}{2}}&{\mbox{if }}{-\log _{e}2}\leq x\leq 0,\\x+{\tfrac {1}{2}}&{\mbox{if }}0\leq x\leq {\tfrac {1}{2}},\\e^{x-1/2}&{\mbox{if }}{\tfrac {1}{2}}\leq x\leq 1,\\x{\sqrt {e}}&{\mbox{if }}1\leq x\leq {\sqrt {e}},\\e^{x/{\sqrt {e}}}&{\mbox{if }}{\sqrt {e}}\leq x\leq e,\\x^{\sqrt {e}}&{\mbox{if }}e\leq x\leq e^{\sqrt {e}},\\e^{x^{1/{\sqrt {e}}}}&{\mbox{if }}e^{\sqrt {e}}\leq x\leq e^{e},\ldots \\\end{cases}}}Crone and Neuendorffer claim that there is no semi-exponential function f(x) that is both (a) analytic and (b) always maps reals to reals. 
Thepiecewisesolution above achieves goal (b) but not (a). Achieving goal (a) is possible by writingex{\displaystyle e^{x}}as a Taylor series based at a fixpoint Q (there are an infinitude of such fixpoints, but they all are nonreal complex, for exampleQ=0.3181315+1.3372357i{\displaystyle Q=0.3181315+1.3372357i}), making Q also be a fixpoint of f, that isf(Q)=eQ=Q{\displaystyle f(Q)=e^{Q}=Q}, then computing theMaclaurin seriescoefficients off(x−Q){\displaystyle f(x-Q)}one by one. This results in Kneser's construction mentioned above. Half-exponential functions are used incomputational complexity theoryfor growth rates "intermediate" between polynomial and exponential.[2]A functionf{\displaystyle f}grows at least as quickly as some half-exponential function (its composition with itself grows exponentially) if it isnon-decreasingandf−1(xC)=o(log⁡x){\displaystyle f^{-1}(x^{C})=o(\log x)}, foreveryC>0{\displaystyle C>0}.[5]
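The piecewise solution described above can be coded directly from its functional equation. A sketch using the text's example choice A = 1/2 and g(x) = x + 1/2; note that this continuous solution has f(0) = 0.5, while Kneser's analytic solution has f(0) ≈ 0.49856 — they are different half-exponential functions:

```python
import math

def f(x):
    """Piecewise half-exponential: f(f(x)) = e^x, built from the
    functional equation with A = 1/2 and g(x) = x + 1/2."""
    if 0 <= x <= 0.5:
        return x + 0.5                    # g(x)
    if 0.5 < x <= 1:
        return math.exp(x - 0.5)          # exp(g^{-1}(x))
    if x > 1:
        return math.exp(f(math.log(x)))   # exp(f(ln x))
    return math.log(f(math.exp(x)))       # x < 0: ln(f(exp x))
```

The recursion terminates because iterated logarithms of x > 1 fall into (0, 1], and exp maps x < 0 into (0, 1).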
https://en.wikipedia.org/wiki/Half-exponential_function
Inmathematics,exponentiation, denotedbn, is anoperationinvolving two numbers: thebase,b, and theexponentorpower,n.[1]Whennis a positiveinteger, exponentiation corresponds to repeatedmultiplicationof the base: that is,bnis theproductof multiplyingnbases:[1]bn=b×b×⋯×b×b⏟ntimes.{\displaystyle b^{n}=\underbrace {b\times b\times \dots \times b\times b} _{n{\text{ times}}}.}In particular,b1=b{\displaystyle b^{1}=b}. The exponent is usually shown as asuperscriptto the right of the base asbnor in computer code asb^n. Thisbinary operationis often read as "bto the powern"; it may also be referred to as "braised to thenth power", "thenth power ofb",[2]or, most briefly, "bto then". The above definition ofbn{\displaystyle b^{n}}immediately implies several properties, in particular the multiplication rule:[nb 1] bn×bm=b×⋯×b⏟ntimes×b×⋯×b⏟mtimes=b×⋯×b⏟n+mtimes=bn+m.{\displaystyle {\begin{aligned}b^{n}\times b^{m}&=\underbrace {b\times \dots \times b} _{n{\text{ times}}}\times \underbrace {b\times \dots \times b} _{m{\text{ times}}}\\[1ex]&=\underbrace {b\times \dots \times b} _{n+m{\text{ times}}}\ =\ b^{n+m}.\end{aligned}}} That is, when multiplying a base raised to one power times the same base raised to another power, the powers add. Extending this rule to the power zero givesb0×bn=b0+n=bn{\displaystyle b^{0}\times b^{n}=b^{0+n}=b^{n}}, and, wherebis non-zero, dividing both sides bybn{\displaystyle b^{n}}givesb0=bn/bn=1{\displaystyle b^{0}=b^{n}/b^{n}=1}. That is the multiplication rule implies the definitionb0=1.{\displaystyle b^{0}=1.}A similar argument implies the definition for negative integer powers:b−n=1/bn.{\displaystyle b^{-n}=1/b^{n}.}That is, extending the multiplication rule givesb−n×bn=b−n+n=b0=1{\displaystyle b^{-n}\times b^{n}=b^{-n+n}=b^{0}=1}. Dividing both sides bybn{\displaystyle b^{n}}givesb−n=1/bn{\displaystyle b^{-n}=1/b^{n}}. 
This also implies the definition for fractional powers:bn/m=bnm.{\displaystyle b^{n/m}={\sqrt[{m}]{b^{n}}}.}For example,b1/2×b1/2=b1/2+1/2=b1=b{\displaystyle b^{1/2}\times b^{1/2}=b^{1/2\,+\,1/2}=b^{1}=b}, meaning(b1/2)2=b{\displaystyle (b^{1/2})^{2}=b}, which is the definition of square root:b1/2=b{\displaystyle b^{1/2}={\sqrt {b}}}. The definition of exponentiation can be extended in a natural way (preserving the multiplication rule) to definebx{\displaystyle b^{x}}for any positive real baseb{\displaystyle b}and any real number exponentx{\displaystyle x}. More involved definitions allowcomplexbase and exponent, as well as certain types ofmatricesas base or exponent. Exponentiation is used extensively in many fields, includingeconomics,biology,chemistry,physics, andcomputer science, with applications such ascompound interest,population growth,chemical reaction kinetics,wavebehavior, andpublic-key cryptography. The termexponentoriginates from theLatinexponentem, thepresent participleofexponere, meaning "to put forth".[3]The termpower(Latin:potentia, potestas, dignitas) is a mistranslation[4][5]of theancient Greekδύναμις (dúnamis, here: "amplification"[4]) used by theGreekmathematicianEuclidfor the square of a line,[6]followingHippocrates of Chios.[7] The wordexponentwas coined in 1544 by Michael Stifel.[8][9]In the 16th century,Robert Recordeused the terms "square", "cube", "zenzizenzic" (fourth power), "sursolid" (fifth), "zenzicube" (sixth), "second sursolid" (seventh), and "zenzizenzizenzic" (eighth).[10]"Biquadrate" has been used to refer to the fourth power as well. InThe Sand Reckoner,Archimedesproved the law of exponents,10a· 10b= 10a+b, necessary to manipulate powers of10.[11]He then used powers of10to estimate the number of grains of sand that can be contained in the universe. 
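These consequences of the multiplication rule are easy to check numerically; a small sketch:

```python
import math

b = 3.0
# multiplication rule: b^n * b^m = b^(n+m)
assert b**4 * b**2 == b**(4 + 2)
# extending the rule to the exponent 0 forces b^0 = 1
assert b**0 == 1.0
# ... and to negative exponents forces b^-n = 1/b^n
assert math.isclose(b**-2, 1 / b**2)
# fractional exponent: b^(1/2) * b^(1/2) = b, so b^(1/2) is the square root
assert math.isclose(b**0.5 * b**0.5, b)
```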
In the 9th century, the Persian mathematicianAl-Khwarizmiused the terms مَال (māl, "possessions", "property") for asquare—the Muslims, "like most mathematicians of those and earlier times, thought of a squared number as a depiction of an area, especially of land, hence property"[10]—and كَعْبَة (Kaʿbah, "cube") for acube, which laterIslamicmathematicians represented inmathematical notationas the lettersmīm(m) andkāf(k), respectively, by the 15th century, as seen in the work ofAbu'l-Hasan ibn Ali al-Qalasadi.[12]Nicolas Chuquetused a form of exponential notation in the 15th century, for example122to represent12x2.[13]This was later used byHenricus GrammateusandMichael Stifelin the 16th century. In the late 16th century,Jost Bürgiwould use Roman numerals for exponents in a way similar to that of Chuquet, for exampleiii4for4x3.[14] In 1636,James Humeused in essence modern notation, when inL'algèbre de Viètehe wroteAiiiforA3.[15]Early in the 17th century, the first form of our modern exponential notation was introduced byRené Descartesin his text titledLa Géométrie; there, the notation is introduced in Book I.[16] I designate ...aa, ora2in multiplyingaby itself; anda3in multiplying it once more again bya, and thus to infinity. Some mathematicians (such as Descartes) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would writepolynomials, for example, asax+bxx+cx3+d. Samuel Jeakeintroduced the termindicesin 1696.[6]The terminvolutionwas used synonymously with the termindices, but had declined in usage[17]and should not be confused withits more common meaning. In 1748,Leonhard Eulerintroduced variable exponents, and, implicitly, non-integer exponents by writing: Consider exponentials or powers in which the exponent itself is a variable. 
It is clear that quantities of this kind are notalgebraic functions, since in those the exponents must be constant.[18] As calculation was mechanized, notation was adapted to numerical capacity by conventions in exponential notation. For exampleKonrad Zuseintroducedfloating-point arithmeticin his 1938 computer Z1. Oneregistercontained representation of leading digits, and a second contained representation of the exponent of 10. EarlierLeonardo Torres QuevedocontributedEssays on Automation(1914) which had suggested the floating-point representation of numbers. The more flexibledecimal floating-pointrepresentation was introduced in 1946 with aBell Laboratoriescomputer. Eventually educators and engineers adoptedscientific notationof numbers, consistent with common reference toorder of magnitudein aratio scale.[19] For instance, in 1961 theSchool Mathematics Study Groupdeveloped the notation in connection with units used in themetric system.[20][21] Exponents also came to be used to describeunits of measurementandquantity dimensions. For instance, sinceforceis mass times acceleration, it is measured in kg m/sec2. Using M for mass, L for length, and T for time, the expression M L T–2is used indimensional analysisto describe force.[22][23] The expressionb2=b·bis called "thesquareofb" or "bsquared", because the area of a square with side-lengthbisb2. (It is true that it could also be called "bto the second power", but "the square ofb" and "bsquared" are more traditional) Similarly, the expressionb3=b·b·bis called "thecubeofb" or "bcubed", because the volume of a cube with side-lengthbisb3. When an exponent is apositive integer, that exponent indicates how many copies of the base are multiplied together. For example,35= 3 · 3 · 3 · 3 · 3 = 243. The base3appears5times in the multiplication, because the exponent is5. Here,243is the5th power of 3, or3 raised to the 5th power. 
The word "raised" is usually omitted, and sometimes "power" as well, so 3^5 can be simply read "3 to the 5th", or "3 to the 5". The exponentiation operation with integer exponents may be defined directly from elementary arithmetic operations. The definition of the exponentiation as an iterated multiplication can be formalized by using induction,[24] and this definition can be used as soon as one has an associative multiplication: The base case is b1=b{\displaystyle b^{1}=b} and the recurrence is bn+1=bn⋅b.{\displaystyle b^{n+1}=b^{n}\cdot b.} The associativity of multiplication implies that for any positive integers m and n, bm+n=bm⋅bn,{\displaystyle b^{m+n}=b^{m}\cdot b^{n},} and (bm)n=bmn.{\displaystyle \left(b^{m}\right)^{n}=b^{mn}.} As mentioned earlier, a (nonzero) number raised to the 0 power is 1:[25][1] b0=1.{\displaystyle b^{0}=1.} This value is also obtained by the empty product convention, which may be used in every algebraic structure with a multiplication that has an identity. This way the formula bm+n=bm⋅bn{\displaystyle b^{m+n}=b^{m}\cdot b^{n}} also holds for n=0{\displaystyle n=0}. The case of 0^0 is controversial. In contexts where only integer powers are considered, the value 1 is generally assigned to 0^0 but, otherwise, the choice of whether to assign it a value and what value to assign may depend on context. For more details, see Zero to the power of zero. Exponentiation with negative exponents is defined by the following identity, which holds for any integer n and nonzero b: b−n=1bn.{\displaystyle b^{-n}={\frac {1}{b^{n}}}.} Raising 0 to a negative exponent is undefined but, in some circumstances, it may be interpreted as infinity (∞{\displaystyle \infty }).[26] This definition of exponentiation with negative exponents is the only one that allows extending the identity bm+n=bm⋅bn{\displaystyle b^{m+n}=b^{m}\cdot b^{n}} to negative exponents (consider the case m=−n{\displaystyle m=-n}). The same definition applies to invertible elements in a multiplicative monoid, that is, an algebraic structure, with an associative multiplication and a multiplicative identity denoted 1 (for example, the square matrices of a given dimension). 
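The inductive definition of integer powers translates directly into code, and associativity also licenses the usual square-and-multiply shortcut; a sketch (function names illustrative):

```python
def power_iter(b, n):
    """b^n by the inductive definition: b^1 = b, b^(n+1) = b^n * b."""
    result = 1
    for _ in range(n):
        result *= b
    return result

def power_fast(b, n):
    """Square-and-multiply: relies on b^(2k) = (b^k)^2, valid by
    associativity of multiplication; O(log n) multiplications."""
    result = 1
    while n:
        if n & 1:
            result *= b
        b *= b
        n >>= 1
    return result
```

Both agree with the repeated-multiplication reading: power_iter(3, 5) multiplies five copies of 3.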
In particular, in such a structure, the inverse of an invertible element x is standardly denoted x−1.{\displaystyle x^{-1}.} The following identities, often called exponent rules, hold for all integer exponents, provided that the base is non-zero:[1] bm+n=bm⋅bn,(bm)n=bmn,(b⋅c)n=bn⋅cn.{\displaystyle b^{m+n}=b^{m}\cdot b^{n},\qquad \left(b^{m}\right)^{n}=b^{mn},\qquad (b\cdot c)^{n}=b^{n}\cdot c^{n}.} Unlike addition and multiplication, exponentiation is not commutative: for example, 23=8{\displaystyle 2^{3}=8}, but reversing the operands gives the different value 32=9{\displaystyle 3^{2}=9}. Also unlike addition and multiplication, exponentiation is not associative: for example, (2^3)^2 = 8^2 = 64, whereas 2^(3^2) = 2^9 = 512. Without parentheses, the conventional order of operations for serial exponentiation in superscript notation is top-down (or right-associative), not bottom-up[27][28][29] (or left-associative). That is, bpq=b(pq),{\displaystyle b^{p^{q}}=b^{\left(p^{q}\right)},} which, in general, is different from (bp)q=bpq.{\displaystyle \left(b^{p}\right)^{q}=b^{pq}.} The powers of a sum can normally be computed from the powers of the summands by the binomial formula (a+b)n=∑k=0n(nk)akbn−k.{\displaystyle (a+b)^{n}=\sum _{k=0}^{n}{\binom {n}{k}}a^{k}b^{n-k}.} However, this formula is true only if the summands commute (i.e. that ab = ba), which is implied if they belong to a structure that is commutative. Otherwise, if a and b are, say, square matrices of the same size, this formula cannot be used. It follows that in computer algebra, many algorithms involving integer exponents must be changed when the exponentiation bases do not commute. Some general purpose computer algebra systems use a different notation (sometimes ^^ instead of ^) for exponentiation with non-commuting bases, which is then called non-commutative exponentiation. For nonnegative integers n and m, the value of n^m is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m-tuples from an n-element set (or as m-letter words from an n-letter alphabet). Some examples for particular values of m and n are given in the following table: In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. 
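A short check of the right-associativity convention for serial exponentiation, and of the binomial formula for commuting operands:

```python
from math import comb

# serial exponentiation is right-associative: 2**3**2 means 2**(3**2)
assert 2**3**2 == 2**(3**2) == 512
assert (2**3)**2 == 64
# and exponentiation is not commutative
assert 2**3 != 3**2

# the binomial formula, valid here because integers commute
a, b, n = 2, 5, 6
assert (a + b)**n == sum(comb(n, k) * a**k * b**(n - k) for k in range(n + 1))
```

Python's ** operator follows the same top-down convention as superscript notation.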
For example,103=1000and10−4=0.0001. Exponentiation with base10is used inscientific notationto denote large or small numbers. For instance,299792458m/s(thespeed of lightin vacuum, inmetres per second) can be written as2.99792458×108m/sand thenapproximatedas2.998×108m/s. SI prefixesbased on powers of10are also used to describe small or large quantities. For example, the prefixkilomeans103=1000, so a kilometre is1000 m. The first negative powers of2have special names:2−1{\displaystyle 2^{-1}}is ahalf;2−2{\displaystyle 2^{-2}}is aquarter. Powers of2appear inset theory, since a set withnmembers has apower set, the set of all of itssubsets, which has2nmembers. Integer powers of2are important incomputer science. The positive integer powers2ngive the number of possible values for ann-bitintegerbinary number; for example, abytemay take28= 256different values. Thebinary number systemexpresses any number as a sum of powers of2, and denotes it as a sequence of0and1, separated by abinary point, where1indicates a power of2that appears in the sum; the exponent is determined by the place of this1: the nonnegative exponents are the rank of the1on the left of the point (starting from0), and the negative exponents are determined by the rank on the right of the point. Every power of one equals:1n= 1. For a positive exponentn> 0, thenth power of zero is zero:0n= 0. For a negative exponent,0−n=1/0n=1/0{\displaystyle 0^{-n}=1/0^{n}=1/0}is undefined. In some contexts (e.g.,combinatorics), the expression00is defined to be equal to1{\displaystyle 1}; in others (e.g.,analysis), it is often undefined. Since a negative number times another negative is positive, we have: (−1)n={1for evenn,−1for oddn.{\displaystyle (-1)^{n}=\left\{{\begin{array}{rl}1&{\text{for even }}n,\\-1&{\text{for odd }}n.\\\end{array}}\right.} Because of this, powers of−1are useful for expressing alternatingsequences. For a similar discussion of powers of the complex numberi, see§ nth roots of a complex number. 
The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound: limn→∞bn=+∞forb>1.{\displaystyle \lim _{n\to \infty }b^{n}=+\infty \quad {\text{for }}b>1.} This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one". Powers of a number with absolute value less than one tend to zero: limn→∞bn=0for|b|<1.{\displaystyle \lim _{n\to \infty }b^{n}=0\quad {\text{for }}|b|<1.} Any power of one is always one: 1n=1.{\displaystyle 1^{n}=1.} Powers of a negative number b≤−1{\displaystyle b\leq -1} alternate between positive and negative as n alternates between even and odd, and thus do not tend to any limit as n grows. If the exponentiated number varies while tending to 1 as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is limn→∞(1+1n)n=e.{\displaystyle \lim _{n\to \infty }\left(1+{\frac {1}{n}}\right)^{n}=e.} See § Exponential function below. Other limits, in particular those of expressions that take on an indeterminate form, are described in § Limits of powers below. Real functions of the form f(x)=cxn{\displaystyle f(x)=cx^{n}}, where c≠0{\displaystyle c\neq 0}, are sometimes called power functions.[30] When n{\displaystyle n} is an integer and n≥1{\displaystyle n\geq 1}, two primary families exist: for n{\displaystyle n} even, and for n{\displaystyle n} odd. In general for c>0{\displaystyle c>0}, when n{\displaystyle n} is even f(x)=cxn{\displaystyle f(x)=cx^{n}} will tend towards positive infinity with increasing x{\displaystyle x}, and also towards positive infinity with decreasing x{\displaystyle x}. All graphs from the family of even power functions have the general shape of y=cx2{\displaystyle y=cx^{2}}, flattening more in the middle as n{\displaystyle n} increases.[31] Functions with this kind of symmetry (f(−x)=f(x){\displaystyle f(-x)=f(x)}) are called even functions. When n{\displaystyle n} is odd, f(x){\displaystyle f(x)}'s asymptotic behavior reverses from positive x{\displaystyle x} to negative x{\displaystyle x}. For c>0{\displaystyle c>0}, f(x)=cxn{\displaystyle f(x)=cx^{n}} will also tend towards positive infinity with increasing x{\displaystyle x}, but towards negative infinity with decreasing x{\displaystyle x}. 
All graphs from the family of odd power functions have the general shape ofy=cx3{\displaystyle y=cx^{3}}, flattening more in the middle asn{\displaystyle n}increases and losing all flatness there in the straight line forn=1{\displaystyle n=1}. Functions with this kind of symmetry(f(−x)=−f(x){\displaystyle f(-x)=-f(x)})are calledodd functions. Forc<0{\displaystyle c<0}, the opposite asymptotic behavior is true in each case.[31] Ifxis a nonnegativereal number, andnis a positive integer,x1/n{\displaystyle x^{1/n}}orxn{\displaystyle {\sqrt[{n}]{x}}}denotes the unique nonnegative realnth rootofx, that is, the unique nonnegative real numberysuch thatyn=x.{\displaystyle y^{n}=x.} Ifxis a positive real number, andpq{\displaystyle {\frac {p}{q}}}is arational number, withpandq > 0integers, thenxp/q{\textstyle x^{p/q}}is defined as The equality on the right may be derived by settingy=x1q,{\displaystyle y=x^{\frac {1}{q}},}and writing(x1q)p=yp=((yp)q)1q=((yq)p)1q=(xp)1q.{\displaystyle (x^{\frac {1}{q}})^{p}=y^{p}=\left((y^{p})^{q}\right)^{\frac {1}{q}}=\left((y^{q})^{p}\right)^{\frac {1}{q}}=(x^{p})^{\frac {1}{q}}.} Ifris a positive rational number,0r= 0, by definition. All these definitions are required for extending the identity(xr)s=xrs{\displaystyle (x^{r})^{s}=x^{rs}}to rational exponents. On the other hand, there are problems with the extension of these definitions to bases that are not positive real numbers. For example, a negative real number has a realnth root, which is negative, ifnisodd, and no real root ifnis even. In the latter case, whichever complexnth root one chooses forx1n,{\displaystyle x^{\frac {1}{n}},}the identity(xa)b=xab{\displaystyle (x^{a})^{b}=x^{ab}}cannot be satisfied. For example, See§ Real exponentsand§ Non-integer powers of complex numbersfor details on the way these problems may be handled. 
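The rational-power identity for positive bases, and its failure for negative bases, can be seen numerically. Note that Python's ** returns the principal complex root for a negative base with a fractional exponent, which illustrates the problem described above:

```python
import math

# for positive x the two readings of x^(p/q) agree
x, p, q = 5.0, 3, 4
assert math.isclose(x**(p/q), (x**p)**(1/q))
assert math.isclose(x**(p/q), (x**(1/q))**p)

# for a negative base the real-root reading breaks down: Python gives
# the principal complex cube root of -8, which is 1 + sqrt(3)*i, not -2
z = (-8) ** (1/3)
assert abs(z - complex(1, math.sqrt(3))) < 1e-9
```

The real cube root −2 must be computed separately, e.g. as -(8 ** (1/3)).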
For positive real numbers, exponentiation to real powers can be defined in two equivalent ways, either by extending the rational powers to reals by continuity (§ Limits of rational exponents, below), or in terms of thelogarithmof the base and theexponential function(§ Powers via logarithms, below). The result is always a positive real number, and theidentities and propertiesshown above for integer exponents remain true with these definitions for real exponents. The second definition is more commonly used, since it generalizes straightforwardly tocomplexexponents. On the other hand, exponentiation to a real power of a negative real number is much more difficult to define consistently, as it may be non-real and have several values. One may choose one of these values, called theprincipal value, but there is no choice of the principal value for which the identity is true; see§ Failure of power and logarithm identities. Therefore, exponentiation with a basis that is not a positive real number is generally viewed as amultivalued function. Since anyirrational numbercan be expressed as thelimit of a sequenceof rational numbers, exponentiation of a positive real numberbwith an arbitrary real exponentxcan be defined bycontinuitywith the rule[32] where the limit is taken over rational values ofronly. This limit exists for every positiveband every realx. For example, ifx=π, thenon-terminating decimalrepresentationπ= 3.14159...and themonotonicityof the rational powers can be used to obtain intervals bounded by rational powers that are as small as desired, and must containbπ:{\displaystyle b^{\pi }:} So, the upper bounds and the lower bounds of the intervals form twosequencesthat have the same limit, denotedbπ.{\displaystyle b^{\pi }.} This definesbx{\displaystyle b^{x}}for every positiveband realxas acontinuous functionofbandx. 
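The limit definition can be illustrated by approximating b^π through rational exponents with growing denominators; a sketch (the choice of b = 2 and the denominator bounds are illustrative):

```python
import math
from fractions import Fraction

b = 2.0
# rational approximations of pi with denominators up to 10, 100, ..., 100000
rationals = [Fraction(math.pi).limit_denominator(10**k) for k in range(1, 6)]
# the corresponding rational powers of b
powers = [b ** float(r) for r in rationals]
```

The sequence of rational powers closes in on b^π as the approximations of π improve, mirroring the continuity argument in the text.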
See also Well-defined expression.[33] The exponential function may be defined as x↦ex,{\displaystyle x\mapsto e^{x},} where e≈2.718{\displaystyle e\approx 2.718} is Euler's number, but to avoid circular reasoning, this definition cannot be used here. Rather, we give an independent definition of the exponential function exp⁡(x),{\displaystyle \exp(x),} and of e=exp⁡(1){\displaystyle e=\exp(1)}, relying only on positive integer powers (repeated multiplication). Then we sketch the proof that this agrees with the previous definition: exp⁡(x)=ex.{\displaystyle \exp(x)=e^{x}.} There are many equivalent ways to define the exponential function, one of them being exp⁡(x)=limn→∞(1+xn)n.{\displaystyle \exp(x)=\lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}.} One has exp⁡(0)=1,{\displaystyle \exp(0)=1,} and the exponential identity (or multiplication rule) exp⁡(x)exp⁡(y)=exp⁡(x+y){\displaystyle \exp(x)\exp(y)=\exp(x+y)} holds as well, since (1+xn)n(1+yn)n=(1+x+yn+xyn2)n,{\displaystyle \left(1+{\frac {x}{n}}\right)^{n}\left(1+{\frac {y}{n}}\right)^{n}=\left(1+{\frac {x+y}{n}}+{\frac {xy}{n^{2}}}\right)^{n},} and the second-order term xyn2{\displaystyle {\frac {xy}{n^{2}}}} does not affect the limit, yielding exp⁡(x)exp⁡(y)=exp⁡(x+y){\displaystyle \exp(x)\exp(y)=\exp(x+y)}. Euler's number can be defined as e=exp⁡(1){\displaystyle e=\exp(1)}. It follows from the preceding equations that exp⁡(x)=ex{\displaystyle \exp(x)=e^{x}} when x is an integer (this results from the repeated-multiplication definition of the exponentiation). If x is real, exp⁡(x)=ex{\displaystyle \exp(x)=e^{x}} results from the definitions given in preceding sections, by using the exponential identity if x is rational, and the continuity of the exponential function otherwise. The limit that defines the exponential function converges for every complex value of x, and therefore it can be used to extend the definition of exp⁡(z){\displaystyle \exp(z)}, and thus ez,{\displaystyle e^{z},} from the real numbers to any complex argument z. This extended exponential function still satisfies the exponential identity, and is commonly used for defining exponentiation for complex base and exponent. 
The definition of e^x as the exponential function allows defining b^x for every positive real number b, in terms of exponential and logarithm function. Specifically, the fact that the natural logarithm ln(x) is the inverse of the exponential function e^x means that one has b=eln⁡b{\displaystyle b=e^{\ln b}} for every b > 0. For preserving the identity (ex)y=exy,{\displaystyle (e^{x})^{y}=e^{xy},} one must have bx=(eln⁡b)x=exln⁡b.{\displaystyle b^{x}=\left(e^{\ln b}\right)^{x}=e^{x\ln b}.} So, exln⁡b{\displaystyle e^{x\ln b}} can be used as an alternative definition of b^x for any positive real b. This agrees with the definition given above using rational exponents and continuity, with the advantage to extend straightforwardly to any complex exponent. If b is a positive real number, exponentiation with base b and complex exponent z is defined by means of the exponential function with complex argument (see the end of § Exponential function, above) as bz=ezln⁡b,{\displaystyle b^{z}=e^{z\ln b},} where ln⁡b{\displaystyle \ln b} denotes the natural logarithm of b. This satisfies the identity bz+t=bzbt.{\displaystyle b^{z+t}=b^{z}b^{t}.} In general, (bz)t{\textstyle \left(b^{z}\right)^{t}} is not defined, since b^z is not a real number. If a meaning is given to the exponentiation of a complex number (see § Non-integer powers of complex numbers, below), one has, in general, (bz)t≠bzt,{\displaystyle \left(b^{z}\right)^{t}\neq b^{zt},} unless z is real or t is an integer. Euler's formula, eiy=cos⁡y+isin⁡y,{\displaystyle e^{iy}=\cos y+i\sin y,} allows expressing the polar form of bz{\displaystyle b^{z}} in terms of the real and imaginary parts of z, namely bx+iy=bx(cos⁡(yln⁡b)+isin⁡(yln⁡b)),{\displaystyle b^{x+iy}=b^{x}{\bigl (}\cos(y\ln b)+i\sin(y\ln b){\bigr )},} where the absolute value of the trigonometric factor is one. This results from bx+iy=bxbiy=bxeiyln⁡b=bx(cos⁡(yln⁡b)+isin⁡(yln⁡b)).{\displaystyle b^{x+iy}=b^{x}b^{iy}=b^{x}e^{iy\ln b}=b^{x}{\bigl (}\cos(y\ln b)+i\sin(y\ln b){\bigr )}.} In the preceding sections, exponentiation with non-integer exponents has been defined for positive real bases only. For other bases, difficulties appear already with the apparently simple case of nth roots, that is, of exponents 1/n,{\displaystyle 1/n,} where n is a positive integer. Although the general theory of exponentiation with non-integer exponents applies to nth roots, this case deserves to be considered first, since it does not need to use complex logarithms, and is therefore easier to understand. 
Every nonzero complex numberzmay be written inpolar formas whereρ{\displaystyle \rho }is theabsolute valueofz, andθ{\displaystyle \theta }is itsargument. The argument is definedup toan integer multiple of2π; this means that, ifθ{\displaystyle \theta }is the argument of a complex number, thenθ+2kπ{\displaystyle \theta +2k\pi }is also an argument of the same complex number for every integerk{\displaystyle k}. The polar form of the product of two complex numbers is obtained by multiplying the absolute values and adding the arguments. It follows that the polar form of annth root of a complex number can be obtained by taking thenth root of the absolute value and dividing its argument byn: If2π{\displaystyle 2\pi }is added toθ{\displaystyle \theta }, the complex number is not changed, but this adds2iπ/n{\displaystyle 2i\pi /n}to the argument of thenth root, and provides a newnth root. This can be donentimes (k=0,1,...,n−1{\displaystyle k=0,1,...,n-1}), and provides thennth roots of the complex number: It is usual to choose one of thennth root as theprincipal root. The common choice is to choose thenth root for which−π<θ≤π,{\displaystyle -\pi <\theta \leq \pi ,}that is, thenth root that has the largest real part, and, if there are two, the one with positive imaginary part. This makes the principalnth root acontinuous functionin the whole complex plane, except for negative real values of theradicand. This function equals the usualnth root for positive real radicands. For negative real radicands, and odd exponents, the principalnth root is not real, although the usualnth root is real.Analytic continuationshows that the principalnth root is the uniquecomplex differentiablefunction that extends the usualnth root to the complex plane without the nonpositive real numbers. 
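The polar-form recipe for the n nth roots translates directly into code: take the nth root of the absolute value and split the argument into n branches (function name illustrative):

```python
import cmath
import math

def nth_roots(z, n):
    """All n complex nth roots of z, via the polar form:
    |z|^(1/n) * exp(i*(theta + 2*pi*k)/n) for k = 0, ..., n-1.
    k = 0 gives the principal root when theta is the principal argument."""
    r, theta = cmath.polar(z)
    return [r**(1/n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(-16, 4)
```

Here the principal fourth root of −16 is √2 + √2·i (argument π/4), not a real number, matching the discussion of principal roots of negative radicands above.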
If the complex number is moved around zero by increasing its argument, after an increment of2π,{\displaystyle 2\pi ,}the complex number comes back to its initial position, and itsnth roots arepermuted circularly(they are multiplied bye2iπ/ne^{2i\pi /n}). This shows that it is not possible to define anth root function that is continuous in the whole complex plane. Thenth roots of unity are thencomplex numbers such thatwn= 1, wherenis a positive integer. They arise in various areas of mathematics, such as indiscrete Fourier transformor algebraic solutions of algebraic equations (Lagrange resolvent). Thennth roots of unity are thenfirst powers ofω=e2πin{\displaystyle \omega =e^{\frac {2\pi i}{n}}}, that is1=ω0=ωn,ω=ω1,ω2,...,ωn−1.{\displaystyle 1=\omega ^{0}=\omega ^{n},\omega =\omega ^{1},\omega ^{2},...,\omega ^{n-1}.}Thenth roots of unity that have this generating property are calledprimitiventh roots of unity; they have the formωk=e2kπin,{\displaystyle \omega ^{k}=e^{\frac {2k\pi i}{n}},}withkcoprimewithn. The unique primitive square root of unity is−1;{\displaystyle -1;}the primitive fourth roots of unity arei{\displaystyle i}and−i.{\displaystyle -i.} Thenth roots of unity allow expressing allnth roots of a complex numberzas thenproducts of a givennth roots ofzwith anth root of unity. Geometrically, thenth roots of unity lie on theunit circleof thecomplex planeat the vertices of aregularn-gonwith one vertex on the real number 1. 
As the number e^(2πi/n) is the primitive nth root of unity with the smallest positive argument, it is called the principal primitive nth root of unity, sometimes shortened to principal nth root of unity, although this terminology can be confused with the principal value of 1^(1/n), which is 1.[34][35][36] Defining exponentiation with complex bases leads to difficulties similar to those described in the preceding section, except that there are, in general, infinitely many possible values for z^w. So, either a principal value is defined, which is not continuous for the values of z that are real and nonpositive, or z^w is defined as a multivalued function. In all cases, the complex logarithm is used to define complex exponentiation as z^w = e^(w log z), where log z is the variant of the complex logarithm that is used, which is a function or a multivalued function such that e^(log z) = z for every z in its domain of definition. The principal value of the complex logarithm is the unique continuous function, commonly denoted log, such that, for every nonzero complex number z, e^(log z) = z and the argument of z satisfies −π < Arg z ≤ π. The principal value of the complex logarithm is not defined for z = 0; it is discontinuous at negative real values of z, and it is holomorphic (that is, complex differentiable) elsewhere. If z is real and positive, the principal value of the complex logarithm is the natural logarithm: log z = ln z. The principal value of z^w is defined as z^w = e^(w log z), where log z is the principal value of the logarithm. The function (z, w) → z^w is holomorphic except in the neighbourhood of the points where z is real and nonpositive. If z is real and positive, the principal value of z^w equals its usual value defined above.
If w = 1/n, where n is an integer, this principal value is the same as the one defined above. In some contexts, there is a problem with the discontinuity of the principal values of log z and z^w at the negative real values of z. In this case, it is useful to consider these functions as multivalued functions. If log z denotes one of the values of the multivalued logarithm (typically its principal value), the other values are 2ikπ + log z, where k is any integer. Similarly, if z^w is one value of the exponentiation, then the other values are given by e^(2ikπw) z^w, where k is any integer. Different values of k give different values of z^w unless w is a rational number, that is, unless there is an integer d such that dw is an integer. This results from the periodicity of the exponential function; more specifically, e^a = e^b if and only if a − b is an integer multiple of 2πi. If w = m/n is a rational number with m and n coprime integers and n > 0, then z^w has exactly n values. In the case m = 1, these values are the same as those described in § nth roots of a complex number. If w is an integer, there is only one value, which agrees with that of § Integer exponents. The multivalued exponentiation is holomorphic for z ≠ 0, in the sense that its graph consists of several sheets that each define a holomorphic function in the neighborhood of every point. If z varies continuously along a circle around 0, then, after a turn, the value of z^w has changed sheet. The canonical form x + iy of z^w can be computed from the canonical forms of z and w. Although this can be described by a single formula, it is clearer to split the computation into several steps.
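A minimal numerical sketch of the multivalued picture (the function name `power_values` is ours): each integer k yields the value exp(w·(Log z + 2πik)), with k = 0 giving the principal value. For w = 1/2 the values collapse to exactly two, as stated above for rational w = m/n.

```python
import cmath

def power_values(z, w, ks=range(-2, 3)):
    """Some values of the multivalued z^w: exp(w*(Log z + 2*pi*i*k)) for integer k.

    Log is the principal logarithm (imaginary part in (-pi, pi]); k = 0
    yields the principal value of z^w."""
    log_z = cmath.log(z)
    return [cmath.exp(w * (log_z + 2j * cmath.pi * k)) for k in ks]

vals = power_values(4, 0.5)   # only two distinct values: 2 and -2
```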
In both examples, all values of z^w have the same argument. More generally, this is true if and only if the real part of w is an integer. Some identities for powers and logarithms of positive real numbers fail for complex numbers, no matter how complex powers and complex logarithms are defined as single-valued functions. For example: log((−i)^2) = log(−1) = iπ ≠ 2 log(−i) = 2 log(e^(−iπ/2)) = 2(−iπ/2) = −iπ. Regardless of which branch of the logarithm is used, a similar failure of the identity will exist. The best that can be said (if only using this result) is that log w^z ≡ z log w (mod 2πi). This identity does not hold even when considering log as a multivalued function. The possible values of log(w^z) contain those of z · log w as a proper subset. Using Log(w) for the principal value of log(w) and m, n as any integers, the possible values of both sides are: log(w^z) = z · Log(w) + 2πinz + 2πim, while z log(w) = z · Log(w) + 2πinz. If b is a positive real algebraic number and x is a rational number, then b^x is an algebraic number. This results from the theory of algebraic extensions. This remains true if b is any algebraic number, in which case all values of b^x (as a multivalued function) are algebraic. If x is irrational (that is, not rational), and both b and x are algebraic, the Gelfond–Schneider theorem asserts that all values of b^x are transcendental (that is, not algebraic), except if b equals 0 or 1. In other words, if x is irrational and b ∉ {0, 1}, then at least one of b, x and b^x is transcendental. The definition of exponentiation with positive integer exponents as repeated multiplication may apply to any associative operation denoted as a multiplication.[nb 2] The definition of x^0 further requires the existence of a multiplicative identity.[38] An algebraic structure consisting of a set together with an associative operation denoted multiplicatively and a multiplicative identity denoted by 1 is a monoid.
In such a monoid, exponentiation of an element x is defined inductively by x^0 = 1 and x^(n+1) = x · x^n for every nonnegative integer n. If n is a negative integer, x^n is defined only if x has a multiplicative inverse.[39] In this case, the inverse of x is denoted x^(−1), and x^n is defined as (x^(−1))^(−n). Exponentiation with integer exponents obeys the following laws, for x and y in the algebraic structure and m and n integers: x^0 = 1, x^(m+n) = x^m x^n, and (x^m)^n = x^(mn). These definitions are widely used in many areas of mathematics, notably for groups, rings, fields, and square matrices (which form a ring). They apply also to functions from a set to itself, which form a monoid under function composition. This includes, as specific instances, geometric transformations and endomorphisms of any mathematical structure. When there are several operations that may be repeated, it is common to indicate the repeated operation by placing its symbol in the superscript before the exponent. For example, if f is a real function whose values can be multiplied, f^n denotes the exponentiation with respect to multiplication, and f^(∘n) may denote exponentiation with respect to function composition. That is, (f^n)(x) = f(x) · f(x) ⋯ f(x), and (f^(∘n))(x) = f(f(⋯f(x)⋯)). Commonly, (f^n)(x) is denoted f(x)^n, while (f^(∘n))(x) is denoted f^n(x). A multiplicative group is a set with an associative operation denoted as multiplication, which has an identity element and in which every element has an inverse. So, if G is a group, x^n is defined for every x ∈ G and every integer n. The set of all powers of an element of a group forms a subgroup. A group (or subgroup) that consists of all powers of a specific element x is the cyclic group generated by x. If all the powers of x are distinct, the group is isomorphic to the additive group ℤ of the integers.
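The inductive definition works for any monoid, which a short sketch makes concrete (the helper `monoid_pow` is ours): the same loop computes ordinary powers and iterated function composition, depending only on which multiplication and identity are supplied.

```python
def monoid_pow(x, n, mul, one):
    """x^n in a monoid, defined inductively: x^0 = one, x^(n+1) = x * x^n."""
    result = one
    for _ in range(n):
        result = mul(x, result)
    return result

# Numbers under multiplication form a monoid with identity 1 ...
eighty_one = monoid_pow(3, 4, lambda a, b: a * b, 1)

# ... and so do functions under composition, with the identity function as 1:
compose = lambda f, g: (lambda x: f(g(x)))
identity = lambda x: x
double = lambda x: 2 * x
times8 = monoid_pow(double, 3, compose, identity)   # double o double o double
```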
Otherwise, the cyclic group is finite (it has a finite number of elements), and its number of elements is the order of x. If the order of x is n, then x^n = x^0 = 1, and the cyclic group generated by x consists of the n first powers of x (starting indifferently from the exponent 0 or 1). The order of elements plays a fundamental role in group theory. For example, the order of an element in a finite group is always a divisor of the number of elements of the group (the order of the group). The possible orders of group elements are important in the study of the structure of a group (see Sylow theorems) and in the classification of finite simple groups. Superscript notation is also used for conjugation; that is, g^h = h^(−1)gh, where g and h are elements of a group. This notation cannot be confused with exponentiation, since the superscript is not an integer. The motivation of this notation is that conjugation obeys some of the laws of exponentiation, namely (g^h)^k = g^(hk) and (gh)^k = g^k h^k. In a ring, it may occur that some nonzero elements satisfy x^n = 0 for some integer n. Such an element is said to be nilpotent. In a commutative ring, the nilpotent elements form an ideal, called the nilradical of the ring. If the nilradical is reduced to the zero ideal (that is, if x ≠ 0 implies x^n ≠ 0 for every positive integer n), the commutative ring is said to be reduced. Reduced rings are important in algebraic geometry, since the coordinate ring of an affine algebraic set is always a reduced ring. More generally, given an ideal I in a commutative ring R, the set of the elements of R that have a power in I is an ideal, called the radical of I. The nilradical is the radical of the zero ideal. A radical ideal is an ideal that equals its own radical.
In a polynomial ring k[x_1, ..., x_n] over a field k, an ideal is radical if and only if it is the set of all polynomials that are zero on an affine algebraic set (this is a consequence of Hilbert's Nullstellensatz). If A is a square matrix, then the product of A with itself n times is called the matrix power. Also, A^0 is defined to be the identity matrix,[40] and if A is invertible, then A^(−n) = (A^(−1))^n. Matrix powers appear often in the context of discrete dynamical systems, where the matrix A expresses a transition from a state vector x of some system to the next state Ax of the system.[41] This is the standard interpretation of a Markov chain, for example. Then A^2 x is the state of the system after two time steps, and so forth: A^n x is the state of the system after n time steps. The matrix power A^n is the transition matrix between the state now and the state at a time n steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors. Apart from matrices, more general linear operators can also be exponentiated. An example is the derivative operator of calculus, d/dx, which is a linear operator acting on functions f(x) to give a new function (d/dx)f(x) = f′(x). The nth power of the differentiation operator is the nth derivative: (d/dx)^n f(x) = f^(n)(x). These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents.
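The dynamical-system reading of matrix powers can be sketched with 2×2 matrices in plain Python (the helpers `mat_mul` and `mat_pow` are ours): iterating the state update (F(k+1), F(k)) → (F(k+2), F(k+1)) is the same as applying a power of the step matrix [[1, 1], [1, 0]].

```python
def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, n):
    """A^n by repeated multiplication; A^0 is the identity matrix."""
    result = [[1, 0], [0, 1]]
    for _ in range(n):
        result = mat_mul(result, A)
    return result

# n steps of the Fibonacci recurrence as one matrix power:
F10 = mat_pow([[1, 1], [1, 0]], 10)
```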
This is the starting point of the mathematical theory of semigroups.[42] Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations involving a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative, which, together with the fractional integral, is one of the basic operations of fractional calculus. A field is an algebraic structure in which multiplication, addition, subtraction, and division are defined and satisfy the properties that multiplication is associative and every nonzero element has a multiplicative inverse. This implies that exponentiation with integer exponents is well defined, except for nonpositive powers of 0. Common examples are the fields of complex numbers, real numbers and rational numbers, considered earlier in this article, which are all infinite. A finite field is a field with a finite number of elements. This number of elements is either a prime number or a prime power; that is, it has the form q = p^k, where p is a prime number and k is a positive integer. For every such q, there are fields with q elements.
The fields with q elements are all isomorphic, which allows, in general, working as if there were only one field with q elements, denoted F_q. One has x^q = x for every x ∈ F_q. A primitive element in F_q is an element g such that the set of the q − 1 first powers of g (that is, {g^1 = g, g^2, ..., g^(q−1) = g^0 = 1}) equals the set of the nonzero elements of F_q. There are φ(q − 1) primitive elements in F_q, where φ is Euler's totient function. In F_q, the freshman's dream identity (x + y)^p = x^p + y^p is true for the exponent p. As x^p = x in F_p, it follows that the map F: x → x^p is linear over F_p and is a field automorphism, called the Frobenius automorphism. If q = p^k, the field F_q has k automorphisms, which are the k first powers (under composition) of F. In other words, the Galois group of F_q is cyclic of order k, generated by the Frobenius automorphism. The Diffie–Hellman key exchange is an application of exponentiation in finite fields that is widely used for secure communications. It uses the fact that exponentiation is computationally inexpensive, whereas the inverse operation, the discrete logarithm, is computationally expensive. More precisely, if g is a primitive element in F_q, then g^e can be efficiently computed with exponentiation by squaring for any e, even if q is large, while there is no known computationally practical algorithm that allows retrieving e from g^e if q is sufficiently large.
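A toy sketch of Diffie–Hellman over a prime field, using Python's built-in three-argument pow for modular exponentiation by squaring. The parameters here (p = 2087, g = 5, with g assumed to behave as a generator) are illustrative only; real deployments use standardized primes that are thousands of bits long.

```python
import random

p = 2087   # a small prime; the nonzero elements of F_p form a cyclic group
g = 5      # assumed generator for this sketch (not a vetted parameter)

a = random.randrange(1, p - 1)   # Alice's secret exponent
b = random.randrange(1, p - 1)   # Bob's secret exponent
A = pow(g, a, p)                 # sent in the clear
B = pow(g, b, p)                 # sent in the clear

# Both sides compute g^(ab) mod p without revealing a or b:
shared_alice = pow(B, a, p)      # (g^b)^a
shared_bob = pow(A, b, p)        # (g^a)^b
```

Recovering a from A here is easy by brute force because p is tiny; the scheme's security rests entirely on the discrete logarithm being infeasible for large p.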
The Cartesian product of two sets S and T is the set of the ordered pairs (x, y) such that x ∈ S and y ∈ T. This operation is not properly commutative nor associative, but it has these properties up to canonical isomorphisms, which allow identifying, for example, (x, (y, z)), ((x, y), z), and (x, y, z). This allows defining the nth power S^n of a set S as the set of all n-tuples (x_1, ..., x_n) of elements of S. When S is endowed with some structure, it is frequent that S^n is naturally endowed with a similar structure. In this case, the term "direct product" is generally used instead of "Cartesian product", and exponentiation denotes product structure. For example, ℝ^n (where ℝ denotes the real numbers) denotes the Cartesian product of n copies of ℝ, as well as their direct product as vector spaces, topological spaces, rings, etc. An n-tuple (x_1, ..., x_n) of elements of S can be considered as a function from {1, ..., n} to S. This generalizes to the following notation. Given two sets S and T, the set of all functions from T to S is denoted S^T. This exponential notation is justified by the canonical isomorphisms (S^T)^U ≅ S^(T×U) (see currying) and S^(T⊔U) ≅ S^T × S^U, where × denotes the Cartesian product and ⊔ the disjoint union. One can use sets as exponents for other operations on sets, typically for direct sums of abelian groups, vector spaces, or modules. To distinguish direct sums from direct products, the exponent of a direct sum is placed between parentheses.
For example, ℝ^ℕ denotes the vector space of the infinite sequences of real numbers, and ℝ^(ℕ) the vector space of those sequences that have a finite number of nonzero elements. The latter has a basis consisting of the sequences with exactly one nonzero element that equals 1, while the Hamel bases of the former cannot be explicitly described (because their existence involves Zorn's lemma). In this context, 2 can represent the set {0, 1}. So, 2^S denotes the power set of S, that is, the set of the functions from S to {0, 1}, which can be identified with the set of the subsets of S by mapping each function to the inverse image of 1. This fits in with the exponentiation of cardinal numbers, in the sense that |S^T| = |S|^|T|, where |X| is the cardinality of X. In the category of sets, the morphisms between sets X and Y are the functions from X to Y. It results that the set of the functions from X to Y that is denoted Y^X in the preceding section can also be denoted hom(X, Y). The isomorphism (S^T)^U ≅ S^(T×U) can be rewritten hom(U, S^T) ≅ hom(T × U, S). This means that the functor "exponentiation to the power T" is a right adjoint to the functor "direct product with T". This generalizes to the definition of exponentiation in a category in which finite direct products exist: in such a category, the functor X → X^T is, if it exists, a right adjoint to the functor Y → T × Y. A category is called a Cartesian closed category if direct products exist and the functor Y → X × Y has a right adjoint for every X. Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration.
Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which grows faster than addition, tetration grows faster than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7625597484987 (= 3^27 = 3^(3^3)) respectively. Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 0^0. The limits in these examples exist but have different values, showing that the two-variable function x^y has no limit at the point (0, 0). One may consider at what points this function does have a limit. More precisely, consider the function f(x, y) = x^y defined on D = {(x, y) ∈ R^2 : x > 0}. Then D can be viewed as a subset of R̄^2 (that is, the set of all pairs (x, y) with x, y belonging to the extended real number line R̄ = [−∞, +∞], endowed with the product topology), which will contain the points at which the function f has a limit. In fact, f has a limit at all accumulation points of D, except for (0, 0), (+∞, 0), (1, +∞) and (1, −∞).[43] Accordingly, this allows one to define the powers x^y by continuity whenever 0 ≤ x ≤ +∞, −∞ ≤ y ≤ +∞, except for 0^0, (+∞)^0, 1^(+∞) and 1^(−∞), which remain indeterminate forms. Under this definition by continuity, we obtain: x^(+∞) = +∞ and x^(−∞) = 0, when 1 < x ≤ +∞; x^(+∞) = 0 and x^(−∞) = +∞, when 0 ≤ x < 1; 0^y = 0 and (+∞)^y = +∞, when 0 < y ≤ +∞; 0^y = +∞ and (+∞)^y = 0, when −∞ ≤ y < 0. These powers are obtained by taking limits of x^y for positive values of x. This method does not permit a definition of x^y when x < 0, since pairs (x, y) with x < 0 are not accumulation points of D. On the other hand, when n is an integer, the power x^n is already meaningful for all values of x, including negative ones. This may make the definition 0^n = +∞ obtained above for negative n problematic when n is odd, since in this case x^n → +∞ as x tends to 0 through positive values, but not negative ones.
Computing b^n using iterated multiplication requires n − 1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2^100, apply Horner's rule to the exponent 100 written in binary: 100 = 2^2 + 2^5 + 2^6 = 2^2(1 + 2^3(1 + 2)). Then compute the corresponding terms in order, reading Horner's rule from right to left. This series of steps only requires 8 multiplications instead of 99. In general, the number of multiplication operations required to compute b^n can be reduced to ♯n + ⌊log_2 n⌋ − 1 by using exponentiation by squaring, where ♯n denotes the number of 1s in the binary representation of n. For some exponents (100 is not among them), the number of multiplications can be further reduced by computing and using the minimal addition-chain exponentiation. Finding the minimal sequence of multiplications (the minimal-length addition chain for the exponent) for b^n is a difficult problem, for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available.[44] However, in practical computations, exponentiation by squaring is efficient enough and much easier to implement. Function composition is a binary operation defined on functions such that the codomain of the function written on the right is included in the domain of the function written on the left. It is denoted g ∘ f and defined as (g ∘ f)(x) = g(f(x)) for every x in the domain of f. If the domain of a function f equals its codomain, one may compose the function with itself an arbitrary number of times, and this defines the nth power of the function under composition, commonly called the nth iterate of the function.
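Exponentiation by squaring can be sketched with an explicit multiplication counter (the function `pow_by_squaring` is ours). For n = 100 the binary representation 1100100 has ♯n = 3 ones and ⌊log_2 100⌋ = 6, so the count comes out to 3 + 6 − 1 = 8 multiplications, matching the worked example above.

```python
def pow_by_squaring(b, n):
    """Compute b^n for n >= 1 by binary exponentiation, counting multiplications."""
    result, square, mults = 1, b, 0
    first = True
    while n:
        if n & 1:                      # this binary digit of n is 1
            if first:
                result, first = square, False   # the first factor costs nothing
            else:
                result *= square
                mults += 1
        n >>= 1
        if n:                          # square only while more digits remain
            square *= square
            mults += 1
    return result, mults
```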
Thus f^n generally denotes the nth iterate of f; for example, f^3(x) means f(f(f(x))).[45] When a multiplication is defined on the codomain of the function, this defines a multiplication on functions, the pointwise multiplication, which induces another exponentiation. When using functional notation, the two kinds of exponentiation are generally distinguished by placing the exponent of the functional iteration before the parentheses enclosing the arguments of the function, and placing the exponent of pointwise multiplication after the parentheses. Thus f^2(x) = f(f(x)), and f(x)^2 = f(x) · f(x). When functional notation is not used, disambiguation is often done by placing the composition symbol before the exponent; for example f^(∘3) = f ∘ f ∘ f, and f^3 = f · f · f. For historical reasons, the exponent of a repeated multiplication is placed before the argument for some specific functions, typically the trigonometric functions. So, sin^2 x and sin^2(x) both mean sin(x) · sin(x) and not sin(sin(x)), which, in any case, is rarely considered. Historically, several variants of these notations were used by different authors.[46][47][48] In this context, the exponent −1 always denotes the inverse function, if it exists. So sin^(−1) x = sin^(−1)(x) = arcsin x. For the multiplicative inverse, fractions are generally used, as in 1/sin(x). Programming languages generally express exponentiation either as an infix operator or as a function application, as they do not support superscripts. The most common operator symbol for exponentiation is the caret (^).
The original version of ASCII included an up-arrow symbol (↑), intended for exponentiation, but this was replaced by the caret in 1967, so the caret became usual in programming languages.[49] In most programming languages with an infix exponentiation operator, it is right-associative, that is, a^b^c is interpreted as a^(b^c).[55] This is because (a^b)^c is equal to a^(b·c) and thus not as useful. In some languages, it is left-associative, notably in Algol, MATLAB, and the Microsoft Excel formula language. Other programming languages use functional notation, and still others only provide exponentiation as part of standard libraries. In some statically typed languages that prioritize type safety, such as Rust, exponentiation is performed via a multitude of methods.
https://en.wikipedia.org/wiki/Exponentiation
In mathematics, a sum of radicals is defined as a finite linear combination of nth roots: a sum of terms k_i · (x_i)^(1/r_i) for i = 1, ..., n, where n and the r_i are natural numbers and the k_i and x_i are real numbers. A particular special case arising in computational complexity theory is the square-root sum problem, asking whether it is possible to determine the sign of a sum of square roots, with integer coefficients, in polynomial time. This is of importance for many problems in computational geometry, since the computation of the Euclidean distance between two points in the general case involves the computation of a square root, and therefore the perimeter of a polygon or the length of a polygonal chain takes the form of a sum of radicals.[1] In 1991, Blömer proposed a polynomial-time Monte Carlo algorithm for determining whether a sum of radicals is zero, or more generally whether it represents a rational number.[2] Blömer's result applies more generally than the square-root sum problem, to sums of radicals that are not necessarily square roots. However, his algorithm does not solve the problem, because it does not determine the sign of a non-zero sum of radicals.[2]
https://en.wikipedia.org/wiki/Sum_of_radicals
In mathematics, the geometric mean is a mean or average which indicates a central tendency of a finite collection of positive real numbers by using the product of their values (as opposed to the arithmetic mean, which uses their sum). The geometric mean of n numbers is the nth root of their product; that is, for a collection of numbers a_1, a_2, ..., a_n, the geometric mean is defined as (a_1 a_2 ⋯ a_n)^(1/n). When the collection of numbers and their geometric mean are plotted in logarithmic scale, the geometric mean is transformed into an arithmetic mean, so the geometric mean can equivalently be calculated by taking the natural logarithm ln of each number, finding the arithmetic mean of the logarithms, and then returning the result to linear scale using the exponential function exp: the geometric mean equals exp((ln a_1 + ln a_2 + ⋯ + ln a_n)/n). The geometric mean of two numbers is the square root of their product; for example, with the numbers 2 and 8 the geometric mean is √(2 · 8) = √16 = 4. The geometric mean of three numbers is the cube root of their product; for example, with the numbers 1, 12, and 18, the geometric mean is ∛(1 · 12 · 18) = ∛216 = 6. The geometric mean is useful whenever the quantities to be averaged combine multiplicatively, such as population growth rates or interest rates of a financial investment. Suppose for example a person invests $1000 and achieves annual returns of +10%, −12%, +90%, −30% and +25%, giving a final value of $1609. The average percentage growth is the geometric mean of the annual growth ratios (1.10, 0.88, 1.90, 0.70, 1.25), namely 1.0998, an annual average growth of 9.98%. The arithmetic mean of these annual returns is 16.6% per annum, which is not a meaningful average because growth rates do not combine additively.
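The log-based formulation can be sketched directly (the function name `geometric_mean` is ours); computing via logarithms also avoids overflow or underflow of the raw product, a point discussed further below.

```python
import math

def geometric_mean(values):
    """nth root of the product of positive values, computed as the
    exponential of the arithmetic mean of their natural logarithms."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

gm_pair = geometric_mean([2, 8])                              # 4
gm_triple = geometric_mean([1, 12, 18])                       # 6
gm_growth = geometric_mean([1.10, 0.88, 1.90, 0.70, 1.25])    # about 1.0998
```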
The geometric mean can be understood in terms of geometry. The geometric mean of two numbers, a and b, is the length of one side of a square whose area is equal to the area of a rectangle with sides of lengths a and b. Similarly, the geometric mean of three numbers, a, b, and c, is the length of one edge of a cube whose volume is the same as that of a cuboid with sides whose lengths are equal to the three given numbers. The geometric mean is one of the three classical Pythagorean means, together with the arithmetic mean and the harmonic mean. For all positive data sets containing at least one pair of unequal values, the harmonic mean is always the least of the three means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between (see Inequality of arithmetic and geometric means). The geometric mean of a data set {a_1, a_2, ..., a_n} is given by (a_1 a_2 ⋯ a_n)^(1/n), that is, the nth root of the product of the elements. For example, for 1, 2, 3, 4, the product 1 · 2 · 3 · 4 is 24, and the geometric mean is the fourth root of 24, approximately 2.213. The geometric mean can also be expressed as the exponential of the arithmetic mean of logarithms.[4] By using logarithmic identities to transform the formula, the multiplications can be expressed as a sum and the power as a multiplication: when a_1, a_2, ..., a_n > 0, ln (a_1 a_2 ⋯ a_n)^(1/n) = (1/n) ln(a_1 a_2 ⋯ a_n) = (1/n)(ln a_1 + ln a_2 + ⋯ + ln a_n). This is sometimes called the log-average (not to be confused with the logarithmic average).
It is simply the arithmetic mean of the logarithm-transformed values of a_i (that is, the arithmetic mean on the log scale), using exponentiation to return to the original scale; in other words, it is the generalized f-mean with f(x) = log x. A logarithm of any base can be used in place of the natural logarithm; for example, the geometric mean of 1, 2, 8, and 16 can be calculated using logarithms base 2. Related to the above, it can be seen that for a given sample of points a_1, ..., a_n, the geometric mean is the minimizer of the sum of squared differences of logarithms, ∑ (log a_i − log x)^2, whereas the arithmetic mean is the minimizer of ∑ (a_i − x)^2. Thus, the geometric mean provides a summary of the samples whose exponent best matches the exponents of the samples (in the least squares sense). In computer implementations, naïvely multiplying many numbers together can cause arithmetic overflow or underflow. Calculating the geometric mean using logarithms is one way to avoid this problem. The geometric mean of a data set is less than the data set's arithmetic mean unless all members of the data set are equal, in which case the geometric and arithmetic means are equal. This allows the definition of the arithmetic–geometric mean, an intersection of the two which always lies in between. The geometric mean is also the arithmetic–harmonic mean in the sense that if two sequences (a_n) and (h_n) are defined with a_{n+1} the arithmetic mean and h_{n+1} the harmonic mean of the previous values of the two sequences, then a_n and h_n will converge to the geometric mean of x and y. The sequences converge to a common limit, and the geometric mean is preserved. Replacing the arithmetic and harmonic mean by a pair of generalized means of opposite, finite exponents yields the same result.
The geometric mean of a non-empty data set of positive numbers is always at most their arithmetic mean. Equality is only obtained when all numbers in the data set are equal; otherwise, the geometric mean is smaller. For example, the geometric mean of 2 and 3 is √6 ≈ 2.45, while their arithmetic mean is 2.5. In particular, this means that when a set of non-identical numbers is subjected to a mean-preserving spread — that is, the elements of the set are "spread apart" more from each other while leaving the arithmetic mean unchanged — their geometric mean decreases.[5] If f : [a, b] → (0, ∞) is a positive continuous real-valued function, its geometric mean over this interval is exp((1/(b − a)) ∫_a^b ln f(x) dx). For instance, taking the identity function f(x) = x over the unit interval shows that the geometric mean of the positive numbers between 0 and 1 is equal to 1/e. The geometric mean is more appropriate than the arithmetic mean for describing proportional growth, both exponential growth (constant proportional growth) and varying growth; in business the geometric mean of growth rates is known as the compound annual growth rate (CAGR). The geometric mean of growth over periods yields the equivalent constant growth rate that would yield the same final amount. As an example, suppose an orange tree yields 100 oranges one year and then 180, 210 and 300 the following years, for growth rates of 80%, 16.7% and 42.9% respectively. Using the arithmetic mean calculates a (linear) average growth of 46.5% (calculated by (80% + 16.7% + 42.9%) ÷ 3). However, when applied to the 100-orange starting yield, 46.5% annual growth results in 314 oranges after three years of growth, rather than the observed 300. The linear average overstates the rate of growth.
Instead, using the geometric mean, the average yearly growth is approximately 44.2% (calculated as (1.80 × 1.167 × 1.429)^(1/3)). Starting from a 100-orange yield with annual growth of 44.2% gives the expected 300-orange yield after three years. In order to determine the average growth rate, it is not necessary to take the product of the measured growth rates at every step. Let the quantity be given as the sequence a_0, a_1, ..., a_n, where n is the number of steps from the initial to final state. The growth rate between successive measurements a_k and a_{k+1} is a_{k+1}/a_k. The geometric mean of these growth rates is then just (a_n/a_0)^(1/n), since the intermediate terms cancel in the telescoping product. The fundamental property of the geometric mean, which does not hold for any other mean, is that for two sequences X and Y of equal length, the geometric mean of the termwise ratios X_i/Y_i equals the ratio of the geometric means of X and Y. This makes the geometric mean the only correct mean when averaging normalized results, that is, results that are presented as ratios to reference values.[6] This is the case when presenting computer performance with respect to a reference computer, or when computing a single average index from several heterogeneous sources (for example, life expectancy, education years, and infant mortality). In this scenario, using the arithmetic or harmonic mean would change the ranking of the results depending on what is used as a reference. For example, take the following comparison of execution time of computer programs (Table 1). The arithmetic and geometric means "agree" that computer C is the fastest. However, by presenting appropriately normalized values and using the arithmetic mean, we can show either of the other two computers to be the fastest.
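The orange-tree computation can be checked numerically; this is a sketch with variable names of our own choosing.

```python
# Yearly yields from the orange-tree example in the text.
yields = [100, 180, 210, 300]
n = len(yields) - 1  # number of growth steps

# Growth factors between successive years: 1.80, 1.1666..., 1.42857...
factors = [b / a for a, b in zip(yields, yields[1:])]

# Geometric mean of the growth factors; the intermediate terms cancel
# in the telescoping product, so this equals (final/initial)**(1/n).
gm = (yields[-1] / yields[0]) ** (1 / n)

# Arithmetic mean of the growth rates (the misleading "46.5%" figure).
am = sum(f - 1 for f in factors) / n

print(f"geometric mean growth:  {gm - 1:.1%}")  # about 44.2%
print(f"arithmetic mean growth: {am:.1%}")      # about 46.5%

# Only the geometric rate reproduces the observed final yield:
print(100 * gm ** 3)  # 300.0 up to floating-point error
```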
Normalizing by A's result gives A as the fastest computer according to the arithmetic mean (Table 2), while normalizing by B's result gives B as the fastest according to the arithmetic mean but A as the fastest according to the harmonic mean (Table 3), and normalizing by C's result gives C as the fastest according to the arithmetic mean but A as the fastest according to the harmonic mean (Table 4). In all cases, the ranking given by the geometric mean stays the same as the one obtained with unnormalized values. However, this reasoning has been questioned.[7] Giving consistent results is not always equal to giving the correct results. In general, it is more rigorous to assign weights to each of the programs, calculate the average weighted execution time (using the arithmetic mean), and then normalize that result to one of the computers. The three tables above just give a different weight to each of the programs, explaining the inconsistent results of the arithmetic and harmonic means (Table 4 gives equal weight to both programs, Table 2 gives a weight of 1/1000 to the second program, and Table 3 gives a weight of 1/100 to the second program and 1/10 to the first one). The use of the geometric mean for aggregating performance numbers should be avoided if possible, because multiplying execution times has no physical meaning, in contrast to adding times as in the arithmetic mean. Metrics that are inversely proportional to time (speedup, IPC) should be averaged using the harmonic mean. The geometric mean can be derived from the generalized mean as its limit as p goes to zero; similarly, this is possible for the weighted geometric mean. The geometric mean has from time to time been used to calculate financial indices (the averaging is over the components of the index).
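The normalization-invariance property can be demonstrated with a small experiment. The execution times below are purely illustrative (the article's actual table values are not reproduced here), and all helper names are ours.

```python
import math

def geomean(xs):
    return math.exp(sum(map(math.log, xs)) / len(xs))

def arith(xs):
    return sum(xs) / len(xs)

# Hypothetical execution times (seconds) for two programs on three
# computers; illustrative values only.
times = {"A": [1.0, 1000.0], "B": [100.0, 100.0], "C": [20.0, 20.0]}

def ranking(mean, reference=None):
    """Rank computers (fastest first) by the mean of their times,
    optionally normalized by a reference computer's times."""
    ref = times[reference] if reference else [1.0, 1.0]
    scores = {c: mean([t / r for t, r in zip(ts, ref)])
              for c, ts in times.items()}
    return sorted(scores, key=scores.get)

# The geometric-mean ranking is the same whatever we normalize by...
assert ranking(geomean) == ranking(geomean, "A") == ranking(geomean, "B")
# ...while the arithmetic-mean ranking depends on the reference machine.
print(ranking(arith), ranking(arith, "A"), ranking(arith, "B"))
```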
For example, in the past the FT 30 index used a geometric mean.[8] It is also used in the CPI calculation[9] and the recently introduced "RPIJ" measure of inflation in the United Kingdom and in the European Union. This has the effect of understating movements in the index compared to using the arithmetic mean.[8] Although the geometric mean has been relatively rare in computing social statistics, starting from 2010 the United Nations Human Development Index did switch to this mode of calculation, on the grounds that it better reflected the non-substitutable nature of the statistics being compiled and compared. Not all values used to compute the HDI (Human Development Index) are normalized; some of them instead have the form (X − X_min)/(X_norm − X_min). This makes the choice of the geometric mean less obvious than one would expect from the "Properties" section above. The equally distributed welfare equivalent income associated with an Atkinson index with an inequality aversion parameter of 1.0 is simply the geometric mean of incomes. For values other than one, the equivalent value is an Lp norm divided by the number of elements, with p equal to one minus the inequality aversion parameter. In the case of a right triangle, its altitude is the length of a line extending perpendicularly from the hypotenuse to its 90° vertex. Imagining that this line splits the hypotenuse into two segments, the geometric mean of these segment lengths is the length of the altitude; this property is known as the geometric mean theorem. In an ellipse, the semi-minor axis is the geometric mean of the maximum and minimum distances of the ellipse from a focus; it is also the geometric mean of the semi-major axis and the semi-latus rectum. The semi-major axis of an ellipse is the geometric mean of the distance from the center to either focus and the distance from the center to either directrix.
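The geometric mean theorem can be verified on a concrete example, the familiar 3-4-5 right triangle:

```python
import math

# Geometric mean theorem on the 3-4-5 right triangle.
# The altitude from the right angle splits the hypotenuse (length 5)
# into segments p = 3**2/5 and q = 4**2/5 (the projections of the
# legs), and the altitude itself is leg1 * leg2 / hypotenuse = 12/5.
p, q = 9 / 5, 16 / 5
altitude = 3 * 4 / 5

# The altitude equals the geometric mean of the hypotenuse segments:
assert math.isclose(altitude, math.sqrt(p * q))
print(altitude)  # 2.4
```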
Another way to think about it is as follows: consider a circle with radius r. Now take two diametrically opposite points on the circle and apply pressure from both ends to deform it into an ellipse with semi-major and semi-minor axes of lengths a and b. Since the area of the circle and the ellipse stays the same, we have πr² = πab, so r = √(ab): the radius of the circle is the geometric mean of the semi-major and the semi-minor axes of the ellipse formed by deforming the circle. The distance to the horizon of a sphere (ignoring the effect of atmospheric refraction when an atmosphere is present) is equal to the geometric mean of the distance to the closest point of the sphere and the distance to the farthest point of the sphere. The geometric mean is used both in the approximation of squaring the circle by S. A. Ramanujan[11] and in the construction of the heptadecagon with "mean proportionals".[12] The geometric mean has been used in choosing a compromise aspect ratio in film and video: given two aspect ratios, the geometric mean of them provides a compromise between them, distorting or cropping both in some sense equally. Concretely, two equal-area rectangles (with the same center and parallel sides) of different aspect ratios intersect in a rectangle whose aspect ratio is the geometric mean, and their hull (the smallest rectangle which contains both of them) likewise has the aspect ratio of their geometric mean. In the choice of the 16:9 aspect ratio by the SMPTE, balancing 2.35 and 4:3, the geometric mean is √(2.35 × 4/3) ≈ 1.7701, and thus 16:9 = 1.777... was chosen. This was discovered empirically by Kerns Powers, who cut out rectangles with equal areas and shaped them to match each of the popular aspect ratios.
When overlapped with their center points aligned, he found that all of those aspect-ratio rectangles fit within an outer rectangle with an aspect ratio of 1.77:1, and all of them also covered a smaller common inner rectangle with the same aspect ratio 1.77:1.[13] The value found by Powers is exactly the geometric mean of the extreme aspect ratios, 4:3 (1.33:1) and CinemaScope (2.35:1), which is coincidentally close to 16:9 (1.777...:1). The intermediate ratios have no effect on the result, only the two extreme ratios. Applying the same geometric mean technique to 16:9 and 4:3 approximately yields the 14:9 (1.555...) aspect ratio, which is likewise used as a compromise between these ratios.[14] In this case 14:9 is exactly the arithmetic mean of 16:9 and 4:3 = 12:9, since 14 is the average of 16 and 12, while the precise geometric mean is √((16/9) × (4/3)) ≈ 1.5396 ≈ 13.8:9; the two different means, arithmetic and geometric, are approximately equal because both numbers are sufficiently close to each other (a difference of less than 2%). The geometric mean is also used to calculate B and C series paper formats. The B_n format has an area which is the geometric mean of the areas of A_n and A_{n−1}. For example, the area of a B1 paper is (√2)/2 m², because it is the geometric mean of the areas of an A0 (1 m²) and an A1 (1/2 m²) paper: √(1 m² · (1/2) m²) = √(1/2) m² = (√2)/2 m².
The same principle applies with the C series, whose area is the geometric mean of the A and B series. For example, the C4 format has an area which is the geometric mean of the areas of A4 and B4. An advantage that comes from this relationship is that an A4 paper fits inside a C4 envelope, and both fit inside a B4 envelope.
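The aspect-ratio and paper-size computations above are easy to replay numerically:

```python
import math

# SMPTE compromise aspect ratio: the geometric mean of the extreme
# ratios, 4:3 and CinemaScope 2.35:1.
smpte = math.sqrt(2.35 * 4 / 3)
print(round(smpte, 4))  # 1.7701, close to 16/9 = 1.777...

# Compromise between 16:9 and 4:3: about 13.8:9, close to 14:9.
mid = math.sqrt((16 / 9) * (4 / 3))
print(round(mid, 4))    # 1.5396

# B-series paper: the B1 area is the geometric mean of the A0 (1 m^2)
# and A1 (0.5 m^2) areas.
b1_area = math.sqrt(1 * 0.5)
assert math.isclose(b1_area, math.sqrt(2) / 2)
```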
https://en.wikipedia.org/wiki/Geometric_mean
The twelfth root of two, 2^(1/12), is an algebraic irrational number, approximately equal to 1.0594631. It is most important in Western music theory, where it represents the frequency ratio (musical interval) of a semitone in twelve-tone equal temperament. This number was proposed for the first time in relationship to musical tuning in the sixteenth and seventeenth centuries. It allows measurement and comparison of different intervals (frequency ratios) as consisting of different numbers of a single interval, the equal-tempered semitone (for example, a minor third is 3 semitones, a major third is 4 semitones, and a perfect fifth is 7 semitones).[a] A semitone itself is divided into 100 cents (1 cent = 2^(1/1200)). The twelfth root of two to 20 significant figures is 1.0594630943592952646.[2] Fraction approximations in increasing order of accuracy include 18/17, 89/84, 196/185, 1657/1564, and 18904/17843. A musical interval is a ratio of frequencies, and the equal-tempered chromatic scale divides the octave (which has a ratio of 2:1) into twelve equal parts. Each note has a frequency that is 2^(1/12) times that of the one below it.[3] Applying this value successively to the tones of a chromatic scale, starting from A above middle C (known as A4) with a frequency of 440 Hz, produces a sequence of pitches whose final A (A5: 880 Hz) is exactly twice the frequency of the lower A (A4: 440 Hz), that is, one octave higher. Other tuning scales use slightly different interval ratios. Since the frequency ratio of a semitone is close to 106% (100 × 2^(1/12) ≈ 105.946), increasing or decreasing the playback speed of a recording by 6% will shift the pitch up or down by about one semitone, or "half-step".
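The chromatic-scale construction from A4 = 440 Hz can be sketched directly:

```python
# Equal-tempered chromatic scale starting at A4 = 440 Hz: each note's
# frequency is 2**(1/12) times the previous one.
semitone = 2 ** (1 / 12)

freqs = [440 * semitone ** i for i in range(13)]
for i, f in enumerate(freqs):
    print(i, round(f, 2))

# Twelve semitones make exactly one octave (a factor of 2): A5 = 880 Hz.
assert abs(freqs[12] - 880) < 1e-6

# A semitone is close to a 6% speed change (about 105.946%).
print(round(100 * semitone, 3))  # 105.946
```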
Upscale reel-to-reel magnetic tape recorders typically have pitch adjustments of up to ±6%, generally used to match the playback or recording pitch to other music sources having slightly different tunings (or possibly recorded on equipment that was not running at quite the right speed). Modern recording studios utilize digital pitch shifting to achieve similar results, ranging from cents up to several half-steps. Reel-to-reel adjustments also affect the tempo of the recorded sound, while digital shifting does not. Historically this number was proposed for the first time in relationship to musical tuning in 1580 (drafted, rewritten 1610) by Simon Stevin.[4] In 1581 the Italian musician Vincenzo Galilei may have been the first European to suggest twelve-tone equal temperament.[1] The twelfth root of two was first calculated in 1584 by the Chinese mathematician and musician Zhu Zaiyu, using an abacus to reach twenty-four decimal places accurately;[1] it was calculated circa 1605 by the Flemish mathematician Simon Stevin,[1] in 1636 by the French mathematician Marin Mersenne, and in 1691 by the German musician Andreas Werckmeister.[5]
https://en.wikipedia.org/wiki/Twelfth_root_of_two
In mathematics, an nth-order Argand system (named after the French mathematician Jean-Robert Argand) is a coordinate system constructed around the nth roots of unity. From the origin, n axes extend such that the angle between each axis and the axes immediately before and after it is 360/n degrees. For example, the number line is the 2nd-order Argand system, because the two axes extending from the origin represent 1 and −1, the 2nd roots of unity. The complex plane (sometimes called the Argand plane, also named after Argand) is the 4th-order Argand system, because the 4 axes extending from the origin represent 1, i, −1, and −i, the 4th roots of unity.
https://en.wikipedia.org/wiki/Argand_system
In algebraic number theory, a cyclotomic field is a number field obtained by adjoining a complex root of unity to Q, the field of rational numbers.[1] Cyclotomic fields played a crucial role in the development of modern algebra and number theory because of their relation with Fermat's Last Theorem. It was in the process of his deep investigations of the arithmetic of these fields (for prime n), and more precisely because of the failure of unique factorization in their rings of integers, that Ernst Kummer first introduced the concept of an ideal number and proved his celebrated congruences. For n ≥ 1, let ζ_n = e^(2πi/n). This is a primitive nth root of unity. Then the nth cyclotomic field is the field extension Q(ζ_n) of Q generated by ζ_n. Gauss made early inroads in the theory of cyclotomic fields, in connection with the problem of constructing a regular n-gon with a compass and straightedge. His surprising result that had escaped his predecessors was that a regular 17-gon could be so constructed. More generally, for any integer n ≥ 3, the regular n-gon can be constructed with compass and straightedge if and only if φ(n) is a power of 2. A natural approach to proving Fermat's Last Theorem is to factor the binomial x^n + y^n, where n is an odd prime, appearing in one side of Fermat's equation, as x^n + y^n = (x + y)(x + ζ_n y) ⋯ (x + ζ_n^(n−1) y). Here x and y are ordinary integers, whereas the factors are algebraic integers in the cyclotomic field Q(ζ_n). If unique factorization holds in the cyclotomic integers Z[ζ_n], then it can be used to rule out the existence of nontrivial solutions to Fermat's equation. Several attempts to tackle Fermat's Last Theorem proceeded along these lines, and both Fermat's proof for n = 4 and Euler's proof for n = 3 can be recast in these terms. The complete list of n for which Z[ζ_n] has unique factorization is finite and known.[3] Kummer found a way to deal with the failure of unique factorization.
He introduced a replacement for the prime numbers in the cyclotomic integers Z[ζ_n], measured the failure of unique factorization via the class number h_n, and proved that if h_p is not divisible by a prime p (such p are called regular primes) then Fermat's theorem is true for the exponent n = p. Furthermore, he gave a criterion to determine which primes are regular, and established Fermat's theorem for all prime exponents p less than 100, except for the irregular primes 37, 59, and 67. Kummer's work on the congruences for the class numbers of cyclotomic fields was generalized in the twentieth century by Iwasawa in Iwasawa theory, and by Kubota and Leopoldt in their theory of p-adic zeta functions. The class numbers are given by sequence A061653 in the OEIS, or OEIS: A055513 or OEIS: A000927 for the h-part (for prime n).
https://en.wikipedia.org/wiki/Cyclotomic_field
In mathematics and group theory, the term multiplicative group refers to one of several related concepts. The group scheme of nth roots of unity is by definition the kernel of the n-power map on the multiplicative group GL(1), considered as a group scheme. That is, for any integer n > 1 we can consider the morphism on the multiplicative group that takes nth powers, and take an appropriate fiber product of schemes, with the morphism e that serves as the identity. The resulting group scheme is written μ_n.[2] It gives rise to a reduced scheme, when we take it over a field K, if and only if the characteristic of K does not divide n. This makes it a source of some key examples of non-reduced schemes (schemes with nilpotent elements in their structure sheaves); for example μ_p over a finite field with p elements, for any prime number p. This phenomenon is not easily expressed in the classical language of algebraic geometry. For example, it turns out to be of major importance in expressing the duality theory of abelian varieties in characteristic p (theory of Pierre Cartier). The Galois cohomology of this group scheme is a way of expressing Kummer theory.
https://en.wikipedia.org/wiki/Group_scheme_of_roots_of_unity
In number theory, Ramanujan's sum, usually denoted c_q(n), is a function of two positive integer variables q and n defined by the formula c_q(n) = Σ_{1 ≤ a ≤ q, (a,q)=1} e^(2πi a n / q), where (a, q) = 1 means that a only takes on values coprime to q. Srinivasa Ramanujan mentioned the sums in a 1918 paper.[1] In addition to the expansions discussed in this article, Ramanujan's sums are used in the proof of Vinogradov's theorem that every sufficiently large odd number is the sum of three primes.[2] For integers a and b, a ∣ b is read "a divides b" and means that there is an integer c such that b/a = c. Similarly, a ∤ b is read "a does not divide b". The summation symbol Σ_{d∣m} means that d goes through all the positive divisors of m. (a, b) is the greatest common divisor, φ(n) is Euler's totient function, μ(n) is the Möbius function, and ζ(s) is the Riemann zeta function. Explicit values for small q come from the definition, Euler's formula e^(ix) = cos x + i sin x, and elementary trigonometric identities; the resulting sequences for q = 1, 2, 3, ... are tabulated in the OEIS (A000012, A033999, A099837, A176742, ..., A100051, ...). c_q(n) is always an integer. Let ζ_q = e^(2πi/q). Then ζ_q is a root of the equation x^q − 1 = 0. Each of its powers is also a root. Therefore, since there are q of them, they are all of the roots. The numbers ζ_q^n where 1 ≤ n ≤ q are called the qth roots of unity. ζ_q is called a primitive qth root of unity because the smallest value of n that makes ζ_q^n = 1 is q. The other primitive qth roots of unity are the numbers ζ_q^a where (a, q) = 1. Therefore, there are φ(q) primitive qth roots of unity. Thus, the Ramanujan sum c_q(n) is the sum of the nth powers of the primitive qth roots of unity. It is a fact[3] that the powers of ζ_q are precisely the primitive roots for all the divisors of q. Example: let q = 12.
Then ζ_12, ζ_12^5, ζ_12^7, and ζ_12^11 are the primitive 12th roots of unity, while the remaining powers of ζ_12 are primitive roots for the divisors 1, 2, 3, 4, and 6 of 12. Therefore, if η_q(n) = Σ_{k=1}^{q} ζ_q^(kn) is the sum of the nth powers of all the roots, primitive and imprimitive, then η_q(n) = Σ_{d∣q} c_d(n), and by Möbius inversion, c_q(n) = Σ_{d∣q} μ(q/d) η_d(n). It follows from the identity x^q − 1 = (x − 1)(x^(q−1) + x^(q−2) + ... + x + 1) that η_q(n) = q when q ∣ n and 0 otherwise, and this leads to the formula c_q(n) = Σ_{d ∣ (q,n)} μ(q/d) d, published by Kluyver in 1906.[4] This shows that c_q(n) is always an integer. Compare it with the totient formula φ(q) = Σ_{d∣q} μ(q/d) d. It is easily shown from the definition that c_q(n) is multiplicative when considered as a function of q for a fixed value of n:[5] i.e., if (q, r) = 1 then c_q(n) c_r(n) = c_{qr}(n). From the definition (or Kluyver's formula) it is straightforward to prove that, if p is a prime number, then c_p(n) = p − 1 when p ∣ n and −1 otherwise; and if p^k is a prime power with k > 1, then c_{p^k}(n) equals p^k − p^(k−1) when p^k ∣ n, equals −p^(k−1) when p^(k−1) ∣ n but p^k ∤ n, and equals 0 otherwise. This result and the multiplicative property can be used to prove c_q(n) = μ(q/(q,n)) φ(q) / φ(q/(q,n)). This is called von Sterneck's arithmetic function.[6] The equivalence of it and Ramanujan's sum is due to Hölder.[7][8] For all positive integers q, c_1(q) = 1, c_q(1) = μ(q), and c_q(q) = φ(q). For a fixed value of q the absolute value of the sequence {c_q(1), c_q(2), …} is bounded by φ(q), and for a fixed value of n the absolute value of the sequence {c_1(n), c_2(n), …} is bounded by n. If q > 1, the sum of c_q(n) over a full period of n vanishes: Σ_{n=1}^{q} c_q(n) = 0. Let m_1, m_2 > 0, m = lcm(m_1, m_2). Then[9] Ramanujan's sums satisfy an orthogonality property: (1/m) Σ_{k=1}^{m} c_{m_1}(k) c_{m_2}(k) equals φ(m_1) when m_1 = m_2 and 0 otherwise. Let n, k > 0; then[10] there is a related evaluation known as the Brauer–Rademacher identity. If n > 0 and a is any integer, a further identity is due to Cohen.[11] If f(n) is an arithmetic function (i.e. a complex-valued function of the integers or natural numbers), then a convergent infinite series of the form f(n) = Σ_q a_q c_q(n), or of the form f(q) = Σ_n a_n c_q(n), where the a_k ∈ C, is called a Ramanujan expansion[12] of f(n). Ramanujan found expansions of some of the well-known functions of number theory. All of these results are proved in an "elementary" manner (i.e.
only using formal manipulations of series and the simplest results about convergence).[13][14][15] The expansion of the zero function depends on a result from the analytic theory of prime numbers, namely that the series Σ μ(n)/n converges to 0, and the results for r(n) and r′(n) depend on theorems in an earlier paper.[16] All the formulas in this section are from Ramanujan's 1918 paper. The generating functions of the Ramanujan sums are Dirichlet series: Σ_{n=1}^∞ c_q(n)/n^s is a generating function for the sequence c_q(1), c_q(2), ... where q is kept constant, and Σ_{q=1}^∞ c_q(n)/q^s is a generating function for the sequence c_1(n), c_2(n), ... where n is kept constant. There is also a double Dirichlet series in both q and n. The polynomial with Ramanujan sums as coefficients can be expressed with the cyclotomic polynomial.[17] σ_k(n) is the divisor function (i.e. the sum of the kth powers of the divisors of n, including 1 and n). σ_0(n), the number of divisors of n, is usually written d(n), and σ_1(n), the sum of the divisors of n, is usually written σ(n). If s > 0, σ_s(n) = n^s ζ(s+1) Σ_{q=1}^∞ c_q(n)/q^(s+1). Setting s = 1 gives σ(n) = (π²/6) n (c_1(n) + c_2(n)/4 + c_3(n)/9 + ...). If the Riemann hypothesis is true, and −1/2 < s < 1/2, an analogous expansion holds. d(n) = σ_0(n) is the number of divisors of n, including 1 and n itself; its Ramanujan expansion involves the Euler–Mascheroni constant γ = 0.5772.... Euler's totient function φ(n) is the number of positive integers less than n and coprime to n. Ramanujan defines a generalization of it: if n = p_1^(a_1) p_2^(a_2) ⋯ is the prime factorization of n, and s is a complex number, let φ_s(n) = n^s (1 − p_1^(−s))(1 − p_2^(−s)) ⋯, so that φ_1(n) = φ(n) is Euler's function.[18] He proves a product identity for φ_s and uses it to derive a Ramanujan expansion of φ_s(n). Letting s = 1, note that the constant in the resulting expansion is the inverse[19] of the one in the formula for σ(n). Von Mangoldt's function Λ(n) = 0 unless n = p^k is a power of a prime number, in which case it is the natural logarithm log p. Its Ramanujan expansion, valid for all n > 0, is equivalent to the prime number theorem.[20][21] r_{2s}(n) is the number of ways of representing n as the sum of 2s squares, counting different orders and signs as different (e.g., r_2(13) = 8, as 13 = (±2)² + (±3)² = (±3)² + (±2)²).
Ramanujan defines a function δ_{2s}(n) and references a paper[22] in which he proved that r_{2s}(n) = δ_{2s}(n) for s = 1, 2, 3, and 4. For s > 4 he shows that δ_{2s}(n) is a good approximation to r_{2s}(n). The case s = 1 has a special formula; in the formulas for the other cases the signs repeat with a period of 4. r′_{2s}(n) is the number of ways n can be represented as the sum of 2s triangular numbers (i.e. the numbers 1, 3 = 1 + 2, 6 = 1 + 2 + 3, 10 = 1 + 2 + 3 + 4, 15, ...; the nth triangular number is given by the formula n(n + 1)/2). The analysis here is similar to that for squares. Ramanujan refers to the same paper as he did for the squares, where he showed that there is a function δ′_{2s}(n) such that r′_{2s}(n) = δ′_{2s}(n) for s = 1, 2, 3, and 4, and that for s > 4, δ′_{2s}(n) is a good approximation to r′_{2s}(n). Again, s = 1 requires a special formula, and if s is a multiple of 4 the expressions simplify. Of the sums treated in the closing section of the paper, which he expands for s > 1, Ramanujan writes: "These sums are obviously of great interest, and a few of their properties have been discussed already. But, so far as I know, they have never been considered from the point of view which I adopt in this paper; and I believe that all the results which it contains are new. The majority of my formulae are 'elementary' in the technical sense of the word — they can (that is to say) be proved by a combination of processes involving only finite algebra and simple general theorems concerning infinite series."
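Kluyver's formula can be checked numerically against the definition of c_q(n); all helper names below are our own.

```python
import cmath
from math import gcd

def c_direct(q, n):
    """Ramanujan's sum from the definition: sum of e^(2*pi*i*a*n/q)
    over 1 <= a <= q with gcd(a, q) = 1. The result is a (real)
    integer, so we round away the floating-point noise."""
    s = sum(cmath.exp(2j * cmath.pi * a * n / q)
            for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(s.real)

def mobius(n):
    """Mobius function by trial factorization (fine for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def c_kluyver(q, n):
    """Kluyver's formula: c_q(n) = sum over d | gcd(q, n) of mu(q/d)*d."""
    g = gcd(q, n)
    return sum(mobius(q // d) * d for d in range(1, g + 1) if g % d == 0)

# The two computations agree, and c_q(n) is always an integer:
for q in range(1, 13):
    for n in range(1, 13):
        assert c_direct(q, n) == c_kluyver(q, n)

print(c_kluyver(12, 12))  # phi(12) = 4
```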
https://en.wikipedia.org/wiki/Ramanujan%27s_sum
In number theory, the Teichmüller character ω (at a prime p) is a character of (Z/qZ)^×, where q = p if p is odd and q = 4 if p = 2, taking values in the roots of unity of the p-adic integers. It was introduced by Oswald Teichmüller. Identifying the roots of unity in the p-adic integers with the corresponding ones in the complex numbers, ω can be considered as a usual Dirichlet character of conductor q. More generally, given a complete discrete valuation ring O whose residue field k is perfect of characteristic p, there is a unique multiplicative section ω: k → O of the natural surjection O → k. The image of an element under this map is called its Teichmüller representative. The restriction of ω to k^× is called the Teichmüller character. If x is a p-adic integer, then ω(x) is the unique solution of ω(x)^p = ω(x) that is congruent to x mod p. It can also be defined as the p-adic limit of x^(p^n) as n → ∞. The multiplicative group of p-adic units is a product of the finite group of roots of unity and a group isomorphic to the p-adic integers. The finite group is cyclic of order p − 1 or 2, as p is odd or even, respectively, and so it is isomorphic to (Z/qZ)^×.[citation needed] The Teichmüller character gives a canonical isomorphism between these two groups.
A detailed exposition of the construction of Teichmüller representatives for the p-adic integers, by means of Hensel lifting, is given in the article on Witt vectors, where they play an important role in providing a ring structure.
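The limit definition suggests a simple computational sketch: working modulo p^m, the iterated p-th powers x, x^p, x^(p²), ... stabilize, and x^(p^(m−1)) already equals ω(x) mod p^m. The function name below is our own.

```python
def teichmuller(x, p, m):
    """Teichmuller representative of x modulo p**m, computed as
    x**(p**(m-1)) mod p**m; the sequence of iterated p-th powers
    converges p-adically to omega(x)."""
    return pow(x, p ** (m - 1), p ** m)

p, m = 5, 4
t = teichmuller(2, p, m)
# omega(2) is the unique solution of t**p == t (mod p**m)
# that is congruent to 2 (mod p):
assert pow(t, p, p ** m) == t
assert t % p == 2
print(t)
```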
https://en.wikipedia.org/wiki/Teichm%C3%BCller_character
In mathematics, the Lucas sequences U_n(P, Q) and V_n(P, Q) are certain constant-recursive integer sequences that satisfy the recurrence relation x_n = P·x_{n−1} − Q·x_{n−2}, where P and Q are fixed integers. Any sequence satisfying this recurrence relation can be represented as a linear combination of the Lucas sequences U_n(P, Q) and V_n(P, Q). More generally, Lucas sequences U_n(P, Q) and V_n(P, Q) represent sequences of polynomials in P and Q with integer coefficients. Famous examples of Lucas sequences include the Fibonacci numbers, Mersenne numbers, Pell numbers, Lucas numbers, Jacobsthal numbers, and a superset of Fermat numbers (see below). Lucas sequences are named after the French mathematician Édouard Lucas. Given two integer parameters P and Q, the Lucas sequences of the first kind U_n(P, Q) and of the second kind V_n(P, Q) are defined by the recurrence relations U_0 = 0, U_1 = 1, U_n = P·U_{n−1} − Q·U_{n−2} for n > 1, and V_0 = 2, V_1 = P, V_n = P·V_{n−1} − Q·V_{n−2} for n > 1. It is not hard to show that for n > 0, V_n = 2·U_{n+1} − P·U_n and U_n = (2·V_{n+1} − P·V_n)/D. The above relations can be stated in matrix form: (U_{n+1}, U_n)ᵀ = M·(U_n, U_{n−1})ᵀ and (V_{n+1}, V_n)ᵀ = M·(V_n, V_{n−1})ᵀ, where M is the matrix with rows (P, −Q) and (1, 0). Initial terms of the Lucas sequences are: U_n(P, Q) = 0, 1, P, P² − Q, P³ − 2PQ, ... and V_n(P, Q) = 2, P, P² − 2Q, P³ − 3PQ, P⁴ − 4P²Q + 2Q², ... for n = 0, 1, 2, 3, 4. The characteristic equation of the recurrence relation for Lucas sequences U_n(P, Q) and V_n(P, Q) is x² − Px + Q = 0. It has the discriminant D = P² − 4Q and the roots a = (P + √D)/2 and b = (P − √D)/2, so that a + b = P, ab = Q, and a − b = √D. Note that the sequence a^n and the sequence b^n also satisfy the recurrence relation. However, these might not be integer sequences.
When D ≠ 0, a and b are distinct and one quickly verifies that a^n = (V_n + U_n√D)/2 and b^n = (V_n − U_n√D)/2. It follows that the terms of Lucas sequences can be expressed in terms of a and b as follows: U_n = (a^n − b^n)/(a − b) and V_n = a^n + b^n. The case D = 0 occurs exactly when P = 2S and Q = S² for some integer S, so that a = b = S. In this case one easily finds that U_n = n·S^(n−1) and V_n = 2·S^n. The ordinary generating functions are Σ U_n z^n = z/(1 − Pz + Qz²) and Σ V_n z^n = (2 − Pz)/(1 − Pz + Qz²). When Q = ±1, the Lucas sequences U_n(P, Q) and V_n(P, Q) satisfy certain Pell equations. The terms of Lucas sequences satisfy relations that are generalizations of those between Fibonacci numbers F_n = U_n(1, −1) and Lucas numbers L_n = V_n(1, −1); for example, V_n² − D·U_n² = 4Q^n. Among the consequences is that U_{km}(P, Q) is a multiple of U_m(P, Q), i.e., the sequence (U_m(P, Q))_{m≥1} is a divisibility sequence. This implies, in particular, that U_n(P, Q) can be prime only when n is prime. Another consequence is an analog of exponentiation by squaring that allows fast computation of U_n(P, Q) for large values of n. Moreover, if gcd(P, Q) = 1, then (U_m(P, Q))_{m≥1} is a strong divisibility sequence. Other divisibility properties are as follows.[1] The last fact generalizes Fermat's little theorem. These facts are used in the Lucas–Lehmer primality test. Like Fermat's little theorem, the converse of the last fact holds often, but not always; there exist composite numbers n relatively prime to D and dividing U_l, where l = n − (D/n). Such composite numbers are called Lucas pseudoprimes.
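The defining recurrences are easy to implement directly, and the familiar specializations give a quick sanity check (function names are ours):

```python
def lucas_U(P, Q, n):
    """First-kind Lucas sequence: U_0 = 0, U_1 = 1,
    U_k = P*U_{k-1} - Q*U_{k-2}."""
    u0, u1 = 0, 1
    for _ in range(n):
        u0, u1 = u1, P * u1 - Q * u0
    return u0

def lucas_V(P, Q, n):
    """Second-kind Lucas sequence: V_0 = 2, V_1 = P, same recurrence."""
    v0, v1 = 2, P
    for _ in range(n):
        v0, v1 = v1, P * v1 - Q * v0
    return v0

# Familiar specializations:
assert [lucas_U(1, -1, n) for n in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]  # Fibonacci
assert [lucas_V(1, -1, n) for n in range(6)] == [2, 1, 3, 4, 7, 11]        # Lucas numbers
assert [lucas_U(3, 2, n) for n in range(6)] == [0, 1, 3, 7, 15, 31]        # Mersenne 2^n - 1

# The identity V_n^2 - D*U_n^2 = 4*Q^n, with D = P^2 - 4*Q:
P, Q = 3, -1
D = P * P - 4 * Q
for n in range(10):
    assert lucas_V(P, Q, n) ** 2 - D * lucas_U(P, Q, n) ** 2 == 4 * Q ** n
```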
A prime factor of a term in a Lucas sequence that does not divide any earlier term in the sequence is called primitive. Carmichael's theorem states that all but finitely many of the terms in a Lucas sequence have a primitive prime factor.[2] Indeed, Carmichael (1913) showed that if D is positive and n is not 1, 2 or 6, then U_n has a primitive prime factor. In the case D is negative, a deep result of Bilu, Hanrot, Voutier and Mignotte[3] shows that if n > 30, then U_n has a primitive prime factor, and determines all cases in which U_n has no primitive prime factor. The Lucas sequences for some values of P and Q have specific names, and some Lucas sequences have entries in the On-Line Encyclopedia of Integer Sequences. Sagemath implements U_n and V_n as lucas_number1() and lucas_number2(), respectively.[7]
https://en.wikipedia.org/wiki/Lucas_sequence
Pell's equation, also called the Pell–Fermat equation, is any Diophantine equation of the form x² − ny² = 1, where n is a given positive nonsquare integer, and integer solutions are sought for x and y. In Cartesian coordinates, the equation is represented by a hyperbola; solutions occur wherever the curve passes through a point whose x and y coordinates are both integers, such as the trivial solution with x = 1 and y = 0. Joseph Louis Lagrange proved that, as long as n is not a perfect square, Pell's equation has infinitely many distinct integer solutions. These solutions may be used to accurately approximate the square root of n by rational numbers of the form x/y. This equation was first studied extensively in India starting with Brahmagupta,[1] who found an integer solution to 92x² + 1 = y² in his Brāhmasphuṭasiddhānta circa 628.[2] Bhaskara II in the 12th century and Narayana Pandit in the 14th century both found general solutions to Pell's equation and other quadratic indeterminate equations. Bhaskara II is generally credited with developing the chakravala method, building on the work of Jayadeva and Brahmagupta. Solutions to specific examples of Pell's equation, such as the Pell numbers arising from the equation with n = 2, had been known for much longer, since the time of Pythagoras in Greece and a similar date in India. William Brouncker was the first European to solve Pell's equation. The name of Pell's equation arose from Leonhard Euler mistakenly attributing Brouncker's solution of the equation to John Pell.[3][4][note 1] As early as 400 BC in India and Greece, mathematicians studied the numbers arising from the n = 2 case of Pell's equation, x² − 2y² = 1, and from the closely related equation x² − 2y² = −1, because of the connection of these equations to the square root of 2.[5] Indeed, if x and y are positive integers satisfying this equation, then x/y is an approximation of √2.
The numbers x and y appearing in these approximations, called side and diameter numbers, were known to the Pythagoreans, and Proclus observed that in the opposite direction these numbers obeyed one of these two equations.[5] Similarly, Baudhayana discovered that x = 17, y = 12 and x = 577, y = 408 are two solutions to the Pell equation, and that 17/12 and 577/408 are very close approximations to the square root of 2.[6] Later, Archimedes approximated the square root of 3 by the rational number 1351/780. Although he did not explain his methods, this approximation may be obtained in the same way, as a solution to Pell's equation.[5] Likewise, Archimedes's cattle problem (an ancient word problem about finding the number of cattle belonging to the sun god Helios) can be solved by reformulating it as a Pell's equation. The manuscript containing the problem states that it was devised by Archimedes and recorded in a letter to Eratosthenes,[7] and the attribution to Archimedes is generally accepted today.[8][9] Around AD 250, Diophantus considered the equation a^2 x^2 + c = y^2, where a and c are fixed numbers and x and y are the variables to be solved for. This equation is different in form from Pell's equation but equivalent to it. Diophantus solved the equation for (a, c) equal to (1, 1), (1, −1), (1, 12), and (3, 9). Al-Karaji, a 10th-century Persian mathematician, worked on problems similar to those of Diophantus.[10] In Indian mathematics, Brahmagupta discovered that (x_1^2 − N y_1^2)(x_2^2 − N y_2^2) = (x_1 x_2 + N y_1 y_2)^2 − N (x_1 y_2 + x_2 y_1)^2, a form of what is now known as Brahmagupta's identity.
Using this, he was able to "compose" triples (x_1, y_1, k_1) and (x_2, y_2, k_2) that were solutions of x^2 − Ny^2 = k, to generate new triples. Not only did this give a way to generate infinitely many solutions to x^2 − Ny^2 = 1 starting from one solution, but also, by dividing such a composition by k_1 k_2, integer or "nearly integer" solutions could often be obtained. For instance, for N = 92, Brahmagupta composed the triple (10, 1, 8) (since 10^2 − 92·1^2 = 8) with itself to get the new triple (192, 20, 64). Dividing throughout by 64 ("8" for x and y) gave the triple (24, 5/2, 1), which when composed with itself gave the desired integer solution (1151, 120, 1). Brahmagupta solved many Pell's equations with this method, proving that it gives solutions starting from an integer solution of x^2 − Ny^2 = k for k = ±1, ±2, or ±4.[11] The first general method for solving Pell's equation (for all N) was given by Bhāskara II in 1150, extending the methods of Brahmagupta. Called the chakravala (cyclic) method, it starts by choosing two relatively prime integers a and b, then composing the triple (a, b, k) (that is, one which satisfies a^2 − Nb^2 = k) with the trivial triple (m, 1, m^2 − N) to get the triple (am + Nb, a + bm, k(m^2 − N)), which can be scaled down to ((am + Nb)/k, (a + bm)/k, (m^2 − N)/k). When m is chosen so that (a + bm)/k is an integer, so are the other two numbers in the triple.
Among such m, the method chooses one that minimizes the magnitude of (m^2 − N)/k and repeats the process. This method always terminates with a solution. Bhaskara used it to give the solution x = 1766319049, y = 226153980 to the N = 61 case.[11] Several European mathematicians rediscovered how to solve Pell's equation in the 17th century. Pierre de Fermat found how to solve the equation and in a 1657 letter issued it as a challenge to English mathematicians.[12] In a letter to Kenelm Digby, Bernard Frénicle de Bessy said that Fermat found the smallest solution for N up to 150 and challenged John Wallis to solve the cases N = 151 or 313. Both Wallis and William Brouncker gave solutions to these problems, though Wallis suggests in a letter that the solution was due to Brouncker.[13] John Pell's connection with the equation is that he revised Thomas Branker's translation[14] of Johann Rahn's 1659 book Teutsche Algebra[note 2] into English, with a discussion of Brouncker's solution of the equation. Leonhard Euler mistakenly thought that this solution was due to Pell, as a result of which he named the equation after Pell.[4] The general theory of Pell's equation, based on continued fractions and algebraic manipulations with numbers of the form P + Q√a, was developed by Lagrange in 1766–1769.[15] In particular, Lagrange gave a proof that the Brouncker–Wallis algorithm always terminates. Let h_i/k_i denote the unique sequence of convergents of the regular continued fraction for √n. Then the pair of positive integers (x_1, y_1) solving Pell's equation and minimizing x satisfies x_1 = h_i and y_1 = k_i for some i. This pair is called the fundamental solution. The sequence of integers [a_0; a_1, a_2, …] in the regular continued fraction of √n is always eventually periodic.
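The chakravala steps above can be sketched compactly. This is a minimal illustration under the selection rule just described (choose m with |k| dividing a + bm, minimizing |m^2 − N|); the function name and the Brahmagupta shortcut used when k = −1 are implementation choices, not from the article:

```python
from math import isqrt

def chakravala(N):
    """Solve x^2 - N*y^2 = 1 for nonsquare N > 0 by the chakravala method."""
    a = isqrt(N) + 1            # start with a near sqrt(N), b = 1
    b, k = 1, a * a - N         # so a^2 - N*b^2 = k
    while k != 1:
        if k == -1:             # Brahmagupta's shortcut: compose the triple with itself
            return a * a + N * b * b, 2 * a * b
        # pick m with |k| dividing a + b*m, minimizing |m^2 - N|
        m = min((m for m in range(1, 2 * isqrt(N) + abs(k) + 2)
                 if (a + b * m) % abs(k) == 0),
                key=lambda m: abs(m * m - N))
        a, b, k = ((a * m + N * b) // abs(k),
                   (a + b * m) // abs(k),
                   (m * m - N) // k)
    return a, b
```

For N = 61 this reproduces Bhaskara's solution (1766319049, 226153980).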
It can be written in the form [⌊√n⌋; a_1, a_2, …, a_{r−1}, 2⌊√n⌋], where ⌊·⌋ denotes the integer floor and the block a_1, a_2, …, a_{r−1}, 2⌊√n⌋ repeats indefinitely. Moreover, the tuple (a_1, a_2, …, a_{r−1}) is palindromic: it reads the same left-to-right as right-to-left.[16] The fundamental solution is (x_1, y_1) = (h_{r−1}, k_{r−1}) if r is even, and (h_{2r−1}, k_{2r−1}) if r is odd. The computation time for finding the fundamental solution using the continued fraction method, with the aid of the Schönhage–Strassen algorithm for fast integer multiplication, is within a logarithmic factor of the solution size, the number of digits in the pair (x_1, y_1). However, this is not a polynomial-time algorithm, because the number of digits in the solution may be as large as √n, far larger than a polynomial in the number of digits in the input value n.[17] Once the fundamental solution is found, all remaining solutions may be calculated algebraically from[17] x_k + y_k√n = (x_1 + y_1√n)^k, by expanding the right side, equating coefficients of √n on both sides, and equating the other terms on both sides.
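The continued-fraction procedure can be sketched as follows, using the standard recurrences for the partial quotients of √n and the convergents h_i/k_i; the function name is illustrative:

```python
from math import isqrt

def pell_fundamental(n):
    """Fundamental solution of x^2 - n*y^2 = 1 via the continued fraction of sqrt(n)."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    h_prev, h = 1, a0          # convergent numerators h_{-1}, h_0
    k_prev, k = 0, 1           # convergent denominators k_{-1}, k_0
    while h * h - n * k * k != 1:
        # next partial quotient of sqrt(n)
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        # extend the convergents h_i/k_i
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k
```

Stopping at the first convergent that satisfies the equation yields exactly the fundamental solution described above, for either parity of the period length r.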
This yields the recurrence relations x_{k+1} = x_1 x_k + n y_1 y_k and y_{k+1} = x_1 y_k + y_1 x_k. Although writing out the fundamental solution (x_1, y_1) as a pair of binary numbers may require a large number of bits, it may in many cases be represented more compactly in the form x_1 + y_1√n = ∏_{i=1}^{t} (a_i + b_i√n)^{c_i}, using much smaller integers a_i, b_i, and c_i. For instance, Archimedes' cattle problem is equivalent to the Pell equation x^2 − 410286423278424 y^2 = 1, whose fundamental solution has 206545 digits if written out explicitly. However, the solution is also equal to x_1 + y_1√n = u^2329, where u = x'_1 + y'_1 √4729494 = (300426607914281713365 √609 + 84129507677858393258 √7766)^2, and x'_1 and y'_1 have only 45 and 41 decimal digits, respectively.[17] Methods related to the quadratic sieve approach for integer factorization may be used to collect relations between prime numbers in the number field generated by √n and to combine these relations to find a product representation of this type. The resulting algorithm for solving Pell's equation is more efficient than the continued fraction method, though it still takes more than polynomial time.
Under the assumption of the generalized Riemann hypothesis, it can be shown to take time exp O(√(log N · log log N)), where N = log n is the input size, similarly to the quadratic sieve.[17] Hallgren showed that a quantum computer can find a product representation, as described above, for the solution to Pell's equation in polynomial time.[18] Hallgren's algorithm, which can be interpreted as an algorithm for finding the group of units of a real quadratic number field, was extended to more general fields by Schmidt and Völlmer.[19] As an example, consider the instance of Pell's equation for n = 7; that is, x^2 − 7y^2 = 1. The continued fraction of √7 has the form [2; 1, 1, 1, 4 (repeating)]. Since the period has length 4, an even number, the convergent producing the fundamental solution is obtained by truncating the continued fraction right before the end of the first occurrence of the period: [2; 1, 1, 1] = 8/3. The first convergents of √7 are 2, 3, 5/2, and 8/3. Applying the recurrence formula to the solution (8, 3) generates an infinite sequence of solutions. For the Pell's equation x^2 − 13y^2 = 1, the continued fraction √13 = [3; 1, 1, 1, 1, 6 (repeating)] has a period of odd length. In this case the fundamental solution is obtained by truncating the continued fraction right before the second occurrence of the period: [3; 1, 1, 1, 1, 6, 1, 1, 1, 1] = 649/180. Thus, the fundamental solution is (x_1, y_1) = (649, 180). The smallest solution can be very large.
For example, the smallest solution to x^2 − 313y^2 = 1 is (32188120829134849, 1819380158564160), and this is the equation that Frenicle challenged Wallis to solve.[20] Values of n for which the smallest solution of x^2 − ny^2 = 1 is greater than the smallest solution for any smaller value of n are recorded in OEIS: A033315 for x and OEIS: A033319 for y. The fundamental solutions of x^2 − ny^2 = 1 for n ≤ 128 have been tabulated; when n is a perfect square, there is no solution except the trivial solution (1, 0). The values of x are sequence A002350 and those of y are sequence A002349 in the OEIS. Pell's equation has connections to several other important subjects in mathematics. Pell's equation is closely related to the theory of algebraic numbers, as the formula x^2 − ny^2 = (x + y√n)(x − y√n) is the norm for the ring Z[√n] and for the closely related quadratic field Q(√n). Thus, a pair of integers (x, y) solves Pell's equation if and only if x + y√n is a unit with norm 1 in Z[√n].[21] Dirichlet's unit theorem, which states that all units of Z[√n] can be expressed as powers of a single fundamental unit (up to multiplication by a sign), is an algebraic restatement of the fact that all solutions to Pell's equation can be generated from the fundamental solution.[22] The fundamental unit can in general be found by solving a Pell-like equation, but it does not always correspond directly to the fundamental solution of Pell's equation itself, because the fundamental unit may have norm −1 rather than 1, and its coefficients may be half-integers rather than integers.
Demeyer mentions a connection between Pell's equation and the Chebyshev polynomials: if T_i(x) and U_i(x) are the Chebyshev polynomials of the first and second kind respectively, then these polynomials satisfy a form of Pell's equation in any polynomial ring R[x], with n = x^2 − 1:[23] T_i^2 − (x^2 − 1) U_{i−1}^2 = 1. Thus, these polynomials can be generated by the standard technique for Pell's equations of taking powers of a fundamental solution: T_i + U_{i−1} √(x^2 − 1) = (x + √(x^2 − 1))^i. It may further be observed that if (x_i, y_i) are the solutions to any integer Pell's equation, then x_i = T_i(x_1) and y_i = y_1 U_{i−1}(x_1).[24] A general development of solutions of Pell's equation x^2 − ny^2 = 1 in terms of continued fractions of √n can be presented, as the solutions x and y are approximations to the square root of n and thus are a special case of continued fraction approximations for quadratic irrationals.[16] The relationship to the continued fractions implies that the solutions to Pell's equation form a semigroup subset of the modular group. Thus, for example, if p and q satisfy Pell's equation, then the matrix with rows (p, q) and (nq, p) has unit determinant. Products of such matrices take exactly the same form, and thus all such products yield solutions to Pell's equation. This can be understood in part to arise from the fact that successive convergents of a continued fraction share the same property: if p_{k−1}/q_{k−1} and p_k/q_k are two successive convergents of a continued fraction, then the matrix with rows (p_{k−1}, p_k) and (q_{k−1}, q_k) has determinant (−1)^k.
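The Chebyshev identity can be checked at integer points using the usual three-term recurrences; a minimal sketch:

```python
def chebyshev_T(i, x):
    """T_i by the recurrence T_0 = 1, T_1 = x, T_k = 2x*T_{k-1} - T_{k-2}."""
    t0, t1 = 1, x
    for _ in range(i):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0

def chebyshev_U(i, x):
    """U_i by the recurrence U_0 = 1, U_1 = 2x, U_k = 2x*U_{k-1} - U_{k-2}."""
    u0, u1 = 1, 2 * x
    for _ in range(i):
        u0, u1 = u1, 2 * x * u1 - u0
    return u0

# Check T_i(x)^2 - (x^2 - 1) * U_{i-1}(x)^2 == 1 at integer points
for x in range(2, 6):
    for i in range(1, 8):
        assert chebyshev_T(i, x) ** 2 - (x * x - 1) * chebyshev_U(i - 1, x) ** 2 == 1
```

For example, at x = 2 the pairs (T_i(2), U_{i−1}(2)) = (2, 1), (7, 4), … are precisely the solutions of x^2 − 3y^2 = 1.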
Størmer's theorem applies Pell equations to find pairs of consecutive smooth numbers, positive integers whose prime factors are all smaller than a given value.[25][26] As part of this theory, Størmer also investigated divisibility relations among solutions to Pell's equation; in particular, he showed that each solution other than the fundamental solution has a prime factor that does not divide n.[25] The negative Pell's equation is given by x^2 − ny^2 = −1 and has also been extensively studied. It can be solved by the same method of continued fractions, and has solutions if and only if the period of the continued fraction has odd length. A necessary (but not sufficient) condition for solvability is that n is not divisible by 4 or by a prime of the form 4k + 3.[note 3] Thus, for example, x^2 − 3y^2 = −1 is never solvable, but x^2 − 5y^2 = −1 may be.[27] The first number n for which x^2 − ny^2 = −1 is solvable is 1 (with only one trivial solution); the other solvable values of n admit infinitely many solutions, and the solutions of the negative Pell's equation for 1 ≤ n ≤ 298 have been tabulated. Let α = ∏_{j odd} (1 − 2^{−j}). The proportion of square-free n divisible by k primes of the form 4m + 1 for which the negative Pell's equation is solvable is at least α.[28] When the number of prime divisors is not fixed, the proportion is given by 1 − α.[29][30] If the negative Pell's equation does have a solution for a particular n, its fundamental solution leads to the fundamental one for the positive case by squaring both sides of the defining equation: (x^2 − ny^2)^2 = (−1)^2 implies (x^2 + ny^2)^2 − n(2xy)^2 = 1. As stated above, if the negative Pell's equation is solvable, a solution can be found using the method of continued fractions, as in the positive Pell's equation. The recursion relation works slightly differently, however.
Since (x + y√n)(x − y√n) = −1, the next solution is determined in terms of i(x_k + y_k√n) = (i(x + y√n))^k whenever there is a match, that is, whenever k is odd. The resulting recursion relation is (modulo a minus sign, which is immaterial due to the quadratic nature of the equation) x_k = x_{k−2} x_1^2 + n x_{k−2} y_1^2 + 2n y_{k−2} y_1 x_1 and y_k = y_{k−2} x_1^2 + n y_{k−2} y_1^2 + 2 x_{k−2} y_1 x_1, which gives an infinite tower of solutions to the negative Pell's equation (except for n = 1). The equation x^2 − ny^2 = N is called the generalized[31][32] (or general[16]) Pell's equation. The equation u^2 − nv^2 = 1 is the corresponding Pell's resolvent.[16] A recursive algorithm was given by Lagrange in 1768 for solving the equation, reducing the problem to the case |N| < √n.[33][34] Such solutions can be derived using the continued-fractions method as outlined above. If (x_0, y_0) is a solution to x^2 − ny^2 = N, and (u_k, v_k) is a solution to u^2 − nv^2 = 1, then (x_k, y_k) such that x_k + y_k√n = (x_0 + y_0√n)(u_k + v_k√n) is a solution to x^2 − ny^2 = N, a principle named the multiplicative principle.[16] The solution (x_k, y_k) is called a Pell multiple of the solution (x_0, y_0). There exists a finite set of solutions to x^2 − ny^2 = N such that every solution is a Pell multiple of a solution from that set.
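The multiplicative principle amounts to multiplying out (x_0 + y_0√n)(u + v√n). A sketch, with an illustrative example for n = 7, N = 2 (the values here are assumptions chosen for the demonstration, not from the article):

```python
def pell_multiple(x0, y0, u, v, n):
    """Compose a solution (x0, y0) of x^2 - n*y^2 = N with a solution (u, v)
    of the resolvent u^2 - n*v^2 = 1, yielding another solution of the former."""
    return x0 * u + n * y0 * v, x0 * v + y0 * u

# Example: n = 7, N = 2 has the solution (3, 1), since 9 - 7 = 2;
# the resolvent's fundamental solution for n = 7 is (8, 3).
x, y = 3, 1
for _ in range(3):
    x, y = pell_multiple(x, y, 8, 3, 7)
    assert x * x - 7 * y * y == 2
```

The first composition gives (45, 17): indeed 45^2 − 7·17^2 = 2.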
In particular, if (u, v) is the fundamental solution to u^2 − nv^2 = 1, then each solution to the equation is a Pell multiple of a solution (x, y) with |x| ≤ (1/2)·√|N|·(√|U| + 1) and |y| ≤ (1/(2√n))·√|N|·(√|U| + 1), where U = u + v√n.[35] If x and y are positive integer solutions to the Pell's equation with |N| < √n, then x/y is a convergent to the continued fraction of √n.[35] Solutions to the generalized Pell's equation are used for solving certain Diophantine equations and for finding units of certain rings,[36][37] and they arise in the study of SIC-POVMs in quantum information theory.[38] The equation x^2 − ny^2 = 4 is similar to the resolvent x^2 − ny^2 = 1 in that if a minimal solution to x^2 − ny^2 = 4 can be found, then all solutions of the equation can be generated in a similar manner to the case N = 1. For certain n, solutions to x^2 − ny^2 = 1 can be generated from those of x^2 − ny^2 = 4: if n ≡ 5 (mod 8), then every third solution to x^2 − ny^2 = 4 has x, y even, generating a solution to x^2 − ny^2 = 1.[16]
https://en.wikipedia.org/wiki/Pell%27s_equation
Adiabatic quantum computation (AQC) is a form of quantum computing which relies on the adiabatic theorem to perform calculations[1] and is closely related to quantum annealing.[2][3][4][5] First, a (potentially complicated) Hamiltonian is found whose ground state describes the solution to the problem of interest. Next, a system with a simple Hamiltonian is prepared and initialized to its ground state. Finally, the simple Hamiltonian is adiabatically evolved to the desired complicated Hamiltonian. By the adiabatic theorem, the system remains in the ground state, so at the end the state of the system describes the solution to the problem. Adiabatic quantum computing has been shown to be polynomially equivalent to conventional quantum computing in the circuit model.[6] The time complexity of an adiabatic algorithm is the time taken to complete the adiabatic evolution, which depends on the gap between the energy eigenvalues (the spectral gap) of the Hamiltonian. Specifically, if the system is to be kept in the ground state, the energy gap between the ground state and the first excited state of H(t) provides an upper bound on the rate at which the Hamiltonian can be evolved at time t.[7] When the spectral gap is small, the Hamiltonian has to be evolved slowly. The runtime of the entire algorithm can be bounded by T = O(1/g_min^2), where g_min is the minimum spectral gap of H(t). AQC is a possible method to get around the problem of energy relaxation. Since the quantum system is in the ground state, interference from the outside world cannot make it move to a lower state. If the energy of the outside world (that is, the "temperature of the bath") is kept lower than the energy gap between the ground state and the next higher energy state, the system has a proportionally lower probability of moving to a higher energy state.
Thus the system can stay in a single eigenstate as long as needed. Universality results in the adiabatic model are tied to quantum complexity and QMA-hard problems. The k-local Hamiltonian problem is QMA-complete for k ≥ 2.[8] QMA-hardness results are known for physically realistic lattice models of qubits such as[9] H = ∑_i h_i Z_i + ∑_{i<j} J^{ij} Z_i Z_j + ∑_{i<j} K^{ij} X_i X_j, where Z, X represent the Pauli matrices σ_z, σ_x. Such models are used for universal adiabatic quantum computation. The Hamiltonians for the QMA-complete problem can also be restricted to act on a two-dimensional grid of qubits[10] or a line of quantum particles with 12 states per particle.[11] If such models were found to be physically realizable, they too could be used to form the building blocks of a universal adiabatic quantum computer. In practice, there are problems during a computation. As the Hamiltonian is gradually changed, the interesting parts (quantum behavior, as opposed to classical) occur when multiple qubits are close to a tipping point. It is exactly at this point that the ground state (one set of qubit orientations) gets very close to the first excited state (a different arrangement of orientations). Adding a slight amount of energy (from the external bath, or as a result of slowly changing the Hamiltonian) could take the system out of the ground state and ruin the calculation. Trying to perform the calculation more quickly increases the external energy; scaling up the number of qubits makes the energy gap at the tipping points smaller. Adiabatic quantum computation can solve satisfiability problems and other combinatorial search problems. Specifically, such problems seek a state that satisfies C_1 ∧ C_2 ∧ ⋯ ∧ C_M.
This expression encodes the satisfiability of M clauses, each clause C_i having the value True or False and possibly involving n bits. Each bit is a variable x_j ∈ {0, 1}, so that C_i is a Boolean-valued function of x_1, x_2, …, x_n. The quantum adiabatic algorithm (QAA) solves this kind of problem using quantum adiabatic evolution. It starts with an initial Hamiltonian H_B: H_B = H_{B_1} + H_{B_2} + ⋯ + H_{B_M}, where H_{B_i} denotes the Hamiltonian corresponding to the clause C_i. Usually, the choice of H_{B_i} does not depend on the particular clause, so only the total number of times each bit is involved in all clauses matters. Next, the system goes through an adiabatic evolution, ending in the problem Hamiltonian H_P: H_P = ∑_C H_{P,C}, where H_{P,C} is the satisfying Hamiltonian of clause C. It has eigenvalues h_C(z_{1C}, z_{2C}, …, z_{nC}) = 0 if clause C is satisfied, and 1 if clause C is violated. For a simple path of adiabatic evolution with run time T, consider H(t) = (1 − t/T) H_B + (t/T) H_P and let s = t/T. This gives H̃(s) = (1 − s) H_B + s H_P, the adiabatic-evolution Hamiltonian of the algorithm. In accordance with the adiabatic theorem, one starts from the ground state of the Hamiltonian H_B at the beginning, proceeds through an adiabatic process, and ends in the ground state of the problem Hamiltonian H_P. One then measures the z-component of each of the n spins in the final state.
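As a toy illustration of this interpolation (not from the article), one can numerically integrate the Schrödinger equation for a single qubit, taking H_B = (I − X)/2 (ground state |+⟩) and H_P = diag(1, 0), which penalizes the unsatisfying assignment z = 0. All parameter values below (T = 50, 5000 steps) are assumptions chosen so the evolution is comfortably adiabatic:

```python
import cmath, math

def evolve_step(psi, a, b, c, dt):
    """Apply exp(-i*H*dt) for real symmetric H = [[a, b], [b, c]] to psi."""
    mean = (a + c) / 2
    w = math.sqrt(((a - c) / 2) ** 2 + b ** 2)      # half the spectral gap
    phase = cmath.exp(-1j * mean * dt)
    if w == 0:
        return [phase * psi[0], phase * psi[1]]
    cosw, sinw = math.cos(w * dt), math.sin(w * dt)
    # traceless part M = H - mean*I satisfies M^2 = w^2 * I, so
    # exp(-i*H*dt) = phase * (cos(w*dt)*I - i*sin(w*dt)*M/w)
    d = (a - c) / 2
    new0 = phase * (cosw * psi[0] - 1j * sinw * (d * psi[0] + b * psi[1]) / w)
    new1 = phase * (cosw * psi[1] - 1j * sinw * (b * psi[0] - d * psi[1]) / w)
    return [new0, new1]

# H(s) = (1-s)*HB + s*HP with HB = (I - X)/2 and HP = diag(1, 0)
T, steps = 50.0, 5000
dt = T / steps
psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # ground state |+> of HB
for k in range(steps):
    s = (k + 0.5) / steps
    a = 0.5 * (1 - s) + s      # <0|H(s)|0>
    c = 0.5 * (1 - s)          # <1|H(s)|1>
    b = -0.5 * (1 - s)         # off-diagonal element
    psi = evolve_step(psi, a, b, c, dt)

p1 = abs(psi[1]) ** 2
print(f"P(measuring z = 1) = {p1:.4f}")   # near 1 for slow (adiabatic) evolution
```

Here the gap never closes (it stays above 1/√2), so a modest T suffices; for hard instances the minimum gap shrinks and T must grow as 1/g_min^2.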
This will produce a string z_1, z_2, …, z_n which is highly likely to be the result of the satisfiability problem. The run time T must be sufficiently long to ensure the correctness of the result. According to the adiabatic theorem, T is about ε/g_min^2, where g_min = min_{0≤s≤1}(E_1(s) − E_0(s)) is the minimum energy gap between the ground state and the first excited state.[12] Adiabatic quantum computing is equivalent in power to standard gate-based quantum computing that implements arbitrary unitary operations. However, the mapping challenge on gate-based quantum devices differs substantially from quantum annealers, as logical variables are mapped only to single qubits and not to chains.[13] The D-Wave One is a device made by the Canadian company D-Wave Systems, which claims that it uses quantum annealing to solve optimization problems.[14][15] On 25 May 2011, Lockheed Martin purchased a D-Wave One for about US$10 million.[15] In May 2013, Google purchased a 512-qubit D-Wave Two.[16] The question of whether the D-Wave processors offer a speedup over a classical processor is still unanswered. Tests performed by researchers at the Quantum Artificial Intelligence Lab (NASA), USC, ETH Zurich, and Google show that, as of 2015, there is no evidence of a quantum advantage.[17][18][19]
https://en.wikipedia.org/wiki/Adiabatic_quantum_computation
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances.[1] It is the quantum analogue of the complexity class BPP. A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3. BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits.[1] A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {Q_n : n ∈ N} that accepts L with bounded error. Alternatively, one can define BQP in terms of quantum Turing machines: a language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances.[2] As with other "bounded error" probabilistic classes, the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − n^(−c) on the one hand, or by requiring error as small as 2^(−n^c) on the other hand, where c is any positive constant and n is the length of the input.[3] BQP is defined for quantum computers; the corresponding complexity class for classical computers (or, more formally, for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means BQP^BQP = BQP.[2] Informally, this is true because polynomial-time algorithms are closed under composition.
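The majority-vote amplification can be made concrete by computing the success probability directly from the binomial distribution; a minimal sketch (the function name is illustrative):

```python
from math import comb

def majority_success(p_single, runs):
    """Probability that a strict majority of independent runs is correct,
    given per-run success probability p_single (runs should be odd)."""
    return sum(comb(runs, k) * p_single**k * (1 - p_single)**(runs - k)
               for k in range((runs // 2) + 1, runs + 1))

# Amplifying a 2/3-correct BQP algorithm by repetition:
for r in (1, 11, 51):
    print(f"{r:3d} runs -> success probability {majority_success(2/3, r):.6f}")
```

The error decays exponentially in the number of repetitions, which is the content of the Chernoff bound mentioned above.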
If a polynomial-time algorithm calls polynomial-time algorithms as subroutines, the resulting algorithm is still polynomial-time. BQP contains P and BPP and is contained in AWPP,[4] PP[5] and PSPACE.[2] In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the possible difference in power between these similar classes. Since the problem of whether P = PSPACE has not yet been solved, a proof of inequality between BQP and the classes mentioned above is expected to be difficult.[2] The relation between BQP and NP is not known. In May 2018, computer scientists Ran Raz of Princeton University and Avishay Tal of Stanford University published a paper[6] which showed that, relative to an oracle, BQP is not contained in PH. It can be proven that there exists an oracle A such that BQP^A ⊄ PH^A.[7] In an extremely informal sense, this can be thought of as giving PH and BQP an identical but additional capability and verifying that BQP with the oracle (BQP^A) can do things PH^A cannot. While an oracle separation has been proven, the fact that BQP is not contained in PH has not been proven. An oracle separation does not establish whether or not the complexity classes are the same; it gives intuition that BQP may not be contained in PH. It has been suspected for many years that Fourier Sampling is a problem that exists within BQP but not within the polynomial hierarchy. Recent conjectures have provided evidence that a similar problem, Fourier Checking, also exists in the class BQP without being contained in the polynomial hierarchy. This conjecture is especially notable because it suggests that problems existing in BQP could be classified as harder than NP-complete problems.
Paired with the fact that many practical BQP problems are suspected to exist outside of P (suspected but not verified, because there is no proof that P ≠ NP), this illustrates the potential power of quantum computing in relation to classical computing.[7] Adding postselection to BQP results in the complexity class PostBQP, which is equal to PP.[8][9] Promise-BQP is the class of promise problems that can be solved by a uniform family of quantum circuits (i.e., within BQP).[10] Completeness proofs focus on this version of BQP. Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and to which every other problem in Promise-BQP reduces in polynomial time. The APPROX-QCIRCUIT-PROB problem is complete for efficient quantum computation, and the version presented below is complete for the Promise-BQP complexity class (and not for the total BQP complexity class, for which no complete problems are known). APPROX-QCIRCUIT-PROB's completeness makes it useful for proofs showing the relationships between other complexity classes and BQP. Given a description of a quantum circuit C acting on n qubits with m gates, where m is a polynomial in n and each gate acts on one or two qubits, and two numbers α, β ∈ [0, 1] with α > β, distinguish between the following two cases: the probability that measuring the first qubit of the circuit's output yields 1 is at least α, or that probability is at most β. Here, there is a promise on the inputs, as the problem does not specify the behavior if an instance is not covered by these two cases. Claim. Any BQP problem reduces to APPROX-QCIRCUIT-PROB. Proof. Suppose we have an algorithm A that solves APPROX-QCIRCUIT-PROB, i.e., given a quantum circuit C acting on n qubits and two numbers α, β ∈ [0, 1] with α > β, A distinguishes between the above two cases. We can solve any problem in BQP with this oracle by setting α = 2/3, β = 1/3.
For any L ∈ BQP, there exists a family of quantum circuits {Q_n : n ∈ ℕ} such that for all n ∈ ℕ and every state |x⟩ of n qubits: if x ∈ L, then Pr(Q_n(|x⟩) = 1) ≥ 2/3; and if x ∉ L, then Pr(Q_n(|x⟩) = 0) ≥ 2/3. Fix an input |x⟩ of n qubits and the corresponding quantum circuit Q_n. We can first construct a circuit C_x such that C_x|0⟩^⊗n = |x⟩. This can be done easily by hardwiring x and applying an X (NOT) gate to each qubit where the corresponding bit of x is 1. Then we can compose the two circuits to get C′ = Q_n C_x, so that C′|0⟩^⊗n = Q_n|x⟩. Finally, the result of Q_n is obtained by measuring several qubits and applying some (classical) logic gates to them. We can always defer the measurement[11][12] and reroute the circuit so that by measuring the first qubit of C′|0⟩^⊗n = Q_n|x⟩, we get the output. This will be our circuit C, and we decide the membership of x in L by running A(C) with α = 2/3, β = 1/3. By the definition of BQP, we will fall into either the first case (acceptance) or the second case (rejection), so L reduces to APPROX-QCIRCUIT-PROB. We begin with an easier containment. To show that BQP ⊆ EXP, it suffices to show that APPROX-QCIRCUIT-PROB is in EXP, since APPROX-QCIRCUIT-PROB is BQP-complete. Claim. APPROX-QCIRCUIT-PROB ∈ EXP. The idea is simple.
Since we have exponential power, given a quantum circuit C, we can use a classical computer to simulate each gate in C and obtain the final state. More formally, let C be a polynomial-sized quantum circuit on n qubits with m gates, where m is polynomial in n. Let |ψ_0⟩ = |0⟩^⊗n and let |ψ_i⟩ be the state after the i-th gate in the circuit is applied to |ψ_{i−1}⟩. Each state |ψ_i⟩ can be represented in a classical computer as a unit vector in ℂ^{2^n}. Furthermore, each gate can be represented by a matrix in ℂ^{2^n × 2^n}. Hence, the final state |ψ_m⟩ can be computed in O(m · 2^{2n}) time, and therefore altogether we have a 2^{O(n)}-time algorithm for calculating the final state, and thus the probability that the first qubit is measured to be 1. This implies that APPROX-QCIRCUIT-PROB ∈ EXP. Note that this algorithm also requires 2^{O(n)} space to store the vectors and the matrices. We will show in the following section that we can improve upon the space complexity. Sum of histories is a technique introduced by physicist Richard Feynman for the path integral formulation. APPROX-QCIRCUIT-PROB can be formulated in the sum-of-histories technique to show that BQP ⊆ PSPACE.[13] Consider a quantum circuit C that consists of m gates, g_1, g_2, ⋯, g_m, where each g_j comes from a universal gate set and acts on at most two qubits. To understand what the sum of histories is, we visualize the evolution of a quantum state given a quantum circuit as a tree.
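The brute-force simulation above can be sketched in a few lines. The sketch below is a simplification that assumes only single-qubit gates with real entries; the `(matrix, qubit)` gate encoding and the function names are illustrative, not standard. It keeps the full 2^n-entry state vector and applies each gate by updating every amplitude, then reads off the probability that the first qubit measures to 1.

```python
import math

S = 1 / math.sqrt(2)
H = [[S, S], [S, -S]]                     # Hadamard gate

def apply_gate(state, gate, qubit, n):
    """Apply a 1-qubit gate; touches all 2^n amplitudes (exponential cost)."""
    new = [0.0] * len(state)
    shift = n - 1 - qubit                 # qubit 0 = leftmost bit of the index
    for i, amp in enumerate(state):
        if amp == 0.0:
            continue
        b = (i >> shift) & 1              # current value of the target qubit
        for b2 in (0, 1):                 # its value after the gate
            j = (i & ~(1 << shift)) | (b2 << shift)
            new[j] += gate[b2][b] * amp
    return new

n = 3
state = [0.0] * 2 ** n
state[0] = 1.0                            # |psi_0> = |000>
for q in range(n):                        # apply H to every qubit
    state = apply_gate(state, H, q, n)

# Probability that measuring the first qubit yields 1: sum |amplitude|^2
# over basis states whose first bit is 1.
p1 = sum(a * a for i, a in enumerate(state) if (i >> (n - 1)) & 1)
```

Each `apply_gate` call costs O(2^n) here, and storing `state` costs 2^n numbers, which is exactly the exponential space that the sum-over-histories approach below avoids.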
The root is the input |0⟩^⊗n, and each node in the tree has 2^n children, each representing a computational basis state of n qubits. The weight on a tree edge from a node in the j-th level representing a state |x⟩ to a node in the (j+1)-th level representing a state |y⟩ is ⟨y|g_{j+1}|x⟩, the amplitude of |y⟩ after applying g_{j+1} to |x⟩. The transition amplitude of a root-to-leaf path is the product of all the weights on the edges along the path. To get the amplitude of the final state being |ψ⟩, we sum up the transition amplitudes of all root-to-leaf paths that end at a node representing |ψ⟩. More formally, for the quantum circuit C, its sum-over-histories tree is a tree of depth m, with one level for each gate g_i in addition to the root, and with branching factor 2^n. Definition. A history is a path in the sum-over-histories tree. We will denote a history by a sequence (u_0 = |0⟩^⊗n → u_1 → ⋯ → u_{m−1} → u_m = x) for some final state x. Definition. Let u, v ∈ {0,1}^n. Let the amplitude of the edge (|u⟩, |v⟩) in the j-th level of the sum-over-histories tree be α_j(u → v) = ⟨v|g_j|u⟩. For any history h = (u_0 → u_1 → ⋯ → u_{m−1} → u_m), the transition amplitude of the history is the product α_h = α_1(|0⟩^⊗n → u_1) α_2(u_1 → u_2) ⋯ α_m(u_{m−1} → x).
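The definitions above translate directly into a procedure that takes exponential time but stores only one history at a time: enumerate every history, multiply its edge amplitudes, and sum. A minimal sketch, restricted for simplicity to single-qubit gates with real entries (the helper names and the `(matrix, qubit)` encoding are illustrative):

```python
import math
from itertools import product

S = 1 / math.sqrt(2)
H = [[S, S], [S, -S]]                       # Hadamard matrix

def edge_amplitude(gate, q, n, u, v):
    """<v|g|u> for a 1-qubit gate g on qubit q (qubit 0 = leftmost bit)."""
    shift = n - 1 - q
    if (u & ~(1 << shift)) != (v & ~(1 << shift)):
        return 0.0                          # all other qubits must agree
    return gate[(v >> shift) & 1][(u >> shift) & 1]

def amplitude(gates, n, x):
    """alpha_x = sum over histories u_0 = 0 -> u_1 -> ... -> u_m = x."""
    m = len(gates)
    total = 0.0
    # Enumerate every sequence of intermediate basis states: exponential
    # time, but only the current history is held in memory at any moment.
    for mids in product(range(2 ** n), repeat=m - 1):
        hist = (0,) + mids + (x,)
        a = 1.0
        for j, (gate, q) in enumerate(gates):
            a *= edge_amplitude(gate, q, n, hist[j], hist[j + 1])
            if a == 0.0:
                break                       # dead branch of the tree
        total += a
    return total
```

For example, two Hadamards on one qubit cancel: the two histories through u_1 = 0 and u_1 = 1 contribute +1/2 and +1/2 to α_0, but +1/2 and −1/2 to α_1.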
Claim. For a history (u_0 → ⋯ → u_m), the transition amplitude of the history is computable in polynomial time. Each gate g_j can be decomposed into g_j = I ⊗ g̃_j for some unitary operator g̃_j acting on two qubits, which without loss of generality can be taken to be the first two. Hence ⟨v|g_j|u⟩ = ⟨v_1, v_2|g̃_j|u_1, u_2⟩⟨v_3, ⋯, v_n|u_3, ⋯, u_n⟩, which can be computed in time polynomial in n. Since m is polynomial in n, the transition amplitude of the history can be computed in polynomial time. Claim. Let C|0⟩^⊗n = Σ_{x ∈ {0,1}^n} α_x |x⟩ be the final state of the quantum circuit. For any x ∈ {0,1}^n, the amplitude α_x can be computed as α_x = Σ_{h = (|0⟩^⊗n → u_1 → ⋯ → u_{m−1} → |x⟩)} α_h. We have α_x = ⟨x|C|0⟩^⊗n = ⟨x|g_m g_{m−1} ⋯ g_1|0⟩^⊗n. The result comes directly by inserting the identity I = Σ_{u ∈ {0,1}^n} |u⟩⟨u| between g_1 and g_2, between g_2 and g_3, and so on, and then expanding out the equation.
Then each term corresponds to an α_h, where h = (|0⟩^⊗n → u_1 → ⋯ → u_{m−1} → |x⟩). Claim. APPROX-QCIRCUIT-PROB ∈ PSPACE. Notice that in the sum-over-histories algorithm to compute some amplitude α_x, only one history is stored at any point in the computation. Hence, the sum-over-histories algorithm uses O(nm) space to compute α_x for any x, since O(nm) bits are needed to store the current history in addition to some workspace variables. Therefore, in polynomial space, we may compute Σ_x |α_x|^2 over all x with the first qubit being 1, which is the probability that the first qubit is measured to be 1 by the end of the circuit. Notice that compared with the simulation given for the proof that BQP ⊆ EXP, our algorithm here takes far less space but far more time: it takes O(m · 2^{mn}) time to calculate a single amplitude. A similar sum-over-histories argument can be used to show that BQP ⊆ PP.[14] We know P ⊆ BQP, since every classical circuit can be simulated by a quantum circuit.[15] It is conjectured that BQP solves hard problems outside of P, specifically problems in NP. The claim is indefinite because we don't know whether P = NP, so we don't know whether those problems are actually in P. Below is some evidence for the conjecture:
https://en.wikipedia.org/wiki/BQP
A cellular automaton (pl. cellular automata, abbrev. CA) is a discrete model of computation studied in automata theory. Cellular automata are also called cellular spaces, tessellation automata, homogeneous structures, cellular structures, tessellation structures, and iterative arrays.[2] Cellular automata have found application in various areas, including physics, theoretical biology and microstructure modeling. A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off (in contrast to a coupled map lattice). The grid can be in any finite number of dimensions. For each cell, a set of cells called its neighborhood is defined relative to the specified cell. An initial state (time t = 0) is selected by assigning a state for each cell. A new generation is created (advancing t by 1), according to some fixed rule (generally, a mathematical function)[3] that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood. Typically, the rule for updating the state of cells is the same for each cell and does not change over time, and is applied to the whole grid simultaneously,[4] though exceptions are known, such as the stochastic cellular automaton and asynchronous cellular automaton. The concept was originally discovered in the 1940s by Stanislaw Ulam and John von Neumann while they were contemporaries at Los Alamos National Laboratory. While studied by some throughout the 1950s and 1960s, it was not until the 1970s and Conway's Game of Life, a two-dimensional cellular automaton, that interest in the subject expanded beyond academia. In the 1980s, Stephen Wolfram engaged in a systematic study of one-dimensional cellular automata, or what he calls elementary cellular automata; his research assistant Matthew Cook showed that one of these rules is Turing-complete. The primary classifications of cellular automata, as outlined by Wolfram, are numbered one to four.
They are, in order: automata in which patterns generally stabilize into homogeneity; automata in which patterns evolve into mostly stable or oscillating structures; automata in which patterns evolve in a seemingly chaotic fashion; and automata in which patterns become extremely complex and may last for a long time, with stable local structures. This last class is thought to be computationally universal, or capable of simulating a Turing machine. Special types of cellular automata are reversible, where only a single configuration leads directly to a subsequent one, and totalistic, in which the future value of individual cells only depends on the total value of a group of neighboring cells. Cellular automata can simulate a variety of real-world systems, including biological and chemical ones. One way to simulate a two-dimensional cellular automaton is with an infinite sheet of graph paper along with a set of rules for the cells to follow. Each square is called a "cell" and each cell has two possible states, black and white. The neighborhood of a cell is the nearby, usually adjacent, cells. The two most common types of neighborhoods are the von Neumann neighborhood and the Moore neighborhood.[5] The former, named after the founding cellular automaton theorist, consists of the four orthogonally adjacent cells.[5] The latter includes the von Neumann neighborhood as well as the four diagonally adjacent cells.[5] For such a cell and its Moore neighborhood, there are 512 (= 2^9) possible patterns. For each of the 512 possible patterns, the rule table would state whether the center cell will be black or white on the next time interval. Conway's Game of Life is a popular version of this model.
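As a concrete sketch of this model, the following code applies the Game of Life rule over the Moore neighborhood, representing a configuration as the set of live cells on an unbounded grid (a common implementation trick, not part of the formal definition; the function names are illustrative):

```python
# The eight Moore-neighborhood offsets: orthogonal and diagonal neighbors.
MOORE = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def life_step(live):
    """One synchronous Game of Life update on an unbounded grid.

    `live` is the set of (row, col) coordinates of live cells; every other
    cell is dead. A cell is live next generation iff it has exactly three
    live neighbors, or it is currently live and has exactly two.
    """
    counts = {}
    for (r, c) in live:
        for dr, dc in MOORE:
            key = (r + dr, c + dc)
            counts[key] = counts.get(key, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, -1), (0, 0), (0, 1)}
after = life_step(blinker)
```

Because only neighbors of live cells are counted, the sketch never materializes the infinite grid, yet implements the same synchronous rule as the graph-paper description above.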
Another common neighborhood type is the extended von Neumann neighborhood, which includes the two closest cells in each orthogonal direction, for a total of eight.[5] The general equation for the total number of automata possible is k^(k^s), where k is the number of possible states for a cell, and s is the number of neighboring cells (including the cell to be calculated itself) used to determine the cell's next state.[6] Thus, in the two-dimensional system with a Moore neighborhood, the total number of automata possible would be 2^(2^9), or about 1.34 × 10^154. It is usually assumed that every cell in the universe starts in the same state, except for a finite number of cells in other states; the assignment of state values is called a configuration.[7] More generally, it is sometimes assumed that the universe starts out covered with a periodic pattern, and only a finite number of cells violate that pattern. The latter assumption is common in one-dimensional cellular automata. Cellular automata are often simulated on a finite grid rather than an infinite one. In two dimensions, the universe would be a rectangle instead of an infinite plane. The obvious problem with finite grids is how to handle the cells on the edges. How they are handled will affect the values of all the cells in the grid. One possible method is to allow the values in those cells to remain constant. Another method is to define neighborhoods differently for these cells. One could say that they have fewer neighbors, but then one would also have to define new rules for the cells located on the edges. These cells are usually handled with periodic boundary conditions, resulting in a toroidal arrangement: when one goes off the top, one comes in at the corresponding position on the bottom, and when one goes off the left, one comes in on the right. (This essentially simulates an infinite periodic tiling, and in the field of partial differential equations is sometimes referred to as periodic boundary conditions.)
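These periodic boundary conditions are straightforward to program with modular arithmetic: indices that run off one edge wrap around to the opposite edge. A small sketch (the function name is illustrative):

```python
def torus_neighbors(r, c, rows, cols):
    """Von Neumann neighborhood of cell (r, c) on a rows x cols torus.

    Taking each index modulo the grid size wraps top<->bottom and
    left<->right, so every cell has exactly four neighbors.
    """
    return [((r - 1) % rows, c),   # up (wraps to the bottom row)
            ((r + 1) % rows, c),   # down
            (r, (c - 1) % cols),   # left (wraps to the last column)
            (r, (c + 1) % cols)]   # right

# On a 3x3 torus, the top-left cell's "up" neighbor is on the bottom row.
nbrs = torus_neighbors(0, 0, 3, 3)
```

The same `% rows` / `% cols` trick generalizes to a Moore neighborhood or to higher-dimensional toroidal universes.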
This can be visualized as taping the left and right edges of the rectangle to form a tube, then taping the top and bottom edges of the tube to form a torus (doughnut shape). Universes of other dimensions are handled similarly. This solves boundary problems with neighborhoods, but another advantage is that it is easily programmable using modular arithmetic functions. For example, in a 1-dimensional cellular automaton like the examples below, the neighborhood of a cell x_i^t is {x_{i−1}^{t−1}, x_i^{t−1}, x_{i+1}^{t−1}}, where t is the time step (vertical), and i is the index (horizontal) in one generation. Stanislaw Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model.[8] At the same time, John von Neumann, Ulam's colleague at Los Alamos, was working on the problem of self-replicating systems.[9] Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model.[10][11] As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and the great cost of providing the robot with a "sea of parts" from which to build its replicant. Von Neumann wrote a paper entitled "The general and logical theory of automata" for the Hixon Symposium in 1948.[9] Ulam was the one who suggested using a discrete system for creating a reductionist model of self-replication.[12][13] Nils Aall Barricelli performed many of the earliest explorations of these models of artificial life. Ulam and von Neumann created a method for calculating liquid motion in the late 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbors' behaviors.[14] Thus was born the first system of cellular automata. Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically.
The result was a universal copier and constructor working within a cellular automaton with a small neighborhood (only those cells that touch are neighbors; for von Neumann's cellular automata, only orthogonal cells), and with 29 states per cell.[15] Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe by designing a 200,000-cell configuration that could do so.[15] This design is known as the tessellation model, and is called a von Neumann universal constructor.[16] Also in the 1940s, Norbert Wiener and Arturo Rosenblueth developed a model of excitable media with some of the characteristics of a cellular automaton.[17] Their specific motivation was the mathematical description of impulse conduction in cardiac systems. However, their model is not a cellular automaton because the medium in which signals propagate is continuous, and wave fronts are curves.[17][18] A true cellular automaton model of excitable media was developed and studied by J. M. Greenberg and S. P. Hastings in 1978; see Greenberg–Hastings cellular automaton. The original work of Wiener and Rosenblueth contains many insights and continues to be cited in modern research publications on cardiac arrhythmia and excitable systems.[19] In the 1960s, cellular automata were studied as a particular type of dynamical system, and the connection with the mathematical field of symbolic dynamics was established for the first time. In 1969, Gustav A. Hedlund compiled many results following this point of view[20] in what is still considered a seminal paper for the mathematical study of cellular automata. The most fundamental result is the characterization in the Curtis–Hedlund–Lyndon theorem of the set of global rules of cellular automata as the set of continuous endomorphisms of shift spaces.
In 1969, German computer pioneer Konrad Zuse published his book Calculating Space, proposing that the physical laws of the universe are discrete by nature, and that the entire universe is the output of a deterministic computation on a single cellular automaton; "Zuse's theory" became the foundation of the field of study called digital physics.[21] Also in 1969, computer scientist Alvy Ray Smith completed a Stanford PhD dissertation on Cellular Automata Theory, the first mathematical treatment of CA as a general class of computers. Many papers came from this dissertation: he showed the equivalence of neighborhoods of various shapes, how to reduce a Moore to a von Neumann neighborhood, and how to reduce any neighborhood to a von Neumann neighborhood.[22] He proved that two-dimensional CA are computation universal, introduced one-dimensional CA, and showed that they too are computation universal, even with simple neighborhoods.[23] He showed how to subsume the complex von Neumann proof of construction universality (and hence self-reproducing machines) into a consequence of computation universality in a one-dimensional CA.[24] Intended as the introduction to the German edition of von Neumann's book on CA, he wrote a survey of the field with dozens of references to papers, by many authors in many countries over a decade or so of work, often overlooked by modern CA researchers.[25] In the 1970s a two-state, two-dimensional cellular automaton named Game of Life became widely known, particularly among the early computing community. Invented by John Conway and popularized by Martin Gardner in a Scientific American article,[26] its rules are as follows: a live cell with two or three live neighbors survives to the next generation, a dead cell with exactly three live neighbors becomes live, and every other cell dies or remains dead. Despite its simplicity, the system achieves an impressive diversity of behavior, fluctuating between apparent randomness and order. One of the most apparent features of the Game of Life is the frequent occurrence of gliders, arrangements of cells that essentially move themselves across the grid.
It is possible to arrange the automaton so that the gliders interact to perform computations, and after much effort it has been shown that the Game of Life can emulate a universal Turing machine.[27] It was viewed as a largely recreational topic, and little follow-up work was done outside of investigating the particularities of the Game of Life and a few related rules in the early 1970s.[28] Stephen Wolfram independently began working on cellular automata in mid-1981 after considering how complex patterns seemed to form in nature in apparent violation of the second law of thermodynamics.[29] His investigations were initially spurred by a desire to model systems such as the neural networks found in brains.[29] He published his first paper in Reviews of Modern Physics investigating elementary cellular automata (Rule 30 in particular) in June 1983.[2][29] The unexpected complexity of the behavior of these simple rules led Wolfram to suspect that complexity in nature may be due to similar mechanisms.[29] His investigations, however, led him to realize that cellular automata were poor at modelling neural networks.[29] Additionally, during this period Wolfram formulated the concepts of intrinsic randomness and computational irreducibility,[30] and suggested that rule 110 may be universal, a fact proved later by Wolfram's research assistant Matthew Cook in the 1990s.[31] Wolfram, in A New Kind of Science and several papers dating from the mid-1980s, defined four classes into which cellular automata and several other simple computational models can be divided depending on their behavior. While earlier studies in cellular automata tended to try to identify types of patterns for specific rules, Wolfram's classification was the first attempt to classify the rules themselves. In order of complexity the classes are: class 1, in which nearly all patterns quickly evolve into a stable, homogeneous state; class 2, in which patterns evolve into stable or oscillating structures; class 3, in which patterns evolve in a seemingly chaotic fashion; and class 4, in which patterns become extremely complex, with localized structures that interact in complicated ways. These definitions are qualitative in nature and there is some room for interpretation.
According to Wolfram, "...with almost any general classification scheme there are inevitably cases which get assigned to one class by one definition and another class by another definition. And so it is with cellular automata: there are occasionally rules...that show some features of one class and some of another."[34] Wolfram's classification has been empirically matched to a clustering of the compressed lengths of the outputs of cellular automata.[35] There have been several attempts to classify cellular automata in formally rigorous classes, inspired by Wolfram's classification. For instance, Culik and Yu proposed three well-defined classes (and a fourth one for the automata not matching any of these), which are sometimes called Culik–Yu classes; membership in these was proved undecidable.[36][37][38] Wolfram's class 2 can be partitioned into two subgroups of stable (fixed-point) and oscillating (periodic) rules.[39] The idea that there are four classes of dynamical system came originally from Nobel Prize-winning chemist Ilya Prigogine, who identified these four classes of thermodynamical systems: (1) systems in thermodynamic equilibrium, (2) spatially/temporally uniform systems, (3) chaotic systems, and (4) complex far-from-equilibrium systems with dissipative structures (see figure 1 in the 1974 paper of Nicolis, Prigogine's student).[40] A cellular automaton is reversible if, for every current configuration of the cellular automaton, there is exactly one past configuration (preimage).[41] If one thinks of a cellular automaton as a function mapping configurations to configurations, reversibility implies that this function is bijective.[41] If a cellular automaton is reversible, its time-reversed behavior can also be described as a cellular automaton; this fact is a consequence of the Curtis–Hedlund–Lyndon theorem, a topological characterization of cellular automata.[42][43] For cellular automata in which not every configuration has a preimage, the configurations without preimages are called Garden of Eden patterns.[44] For one-dimensional cellular automata there are known algorithms for deciding whether a rule is reversible or irreversible.[45][46] However, for cellular automata of two or more dimensions reversibility is undecidable; that is, there is no algorithm that takes as input an automaton rule and is guaranteed to determine correctly whether the automaton is reversible. The proof by Jarkko Kari is related to the tiling problem by Wang tiles.[47] Reversible cellular automata are often used to simulate such physical phenomena as gas and fluid dynamics, since they obey the laws of thermodynamics. Such cellular automata have rules specially constructed to be reversible. Such systems have been studied by Tommaso Toffoli, Norman Margolus and others. Several techniques can be used to explicitly construct reversible cellular automata with known inverses. Two common ones are the second-order cellular automaton and the block cellular automaton, both of which involve modifying the definition of a cellular automaton in some way. Although such automata do not strictly satisfy the definition given above, it can be shown that they can be emulated by conventional cellular automata with sufficiently large neighborhoods and numbers of states, and can therefore be considered a subset of conventional cellular automata. Conversely, it has been shown that every reversible cellular automaton can be emulated by a block cellular automaton.[48][49] A special class of cellular automata are totalistic cellular automata.
The state of each cell in a totalistic cellular automaton is represented by a number (usually an integer value drawn from a finite set), and the value of a cell at time t depends only on the sum of the values of the cells in its neighborhood (possibly including the cell itself) at time t − 1.[50][51] If the state of the cell at time t depends on both its own state and the total of its neighbors at time t − 1, then the cellular automaton is properly called outer totalistic.[51] Conway's Game of Life is an example of an outer totalistic cellular automaton with cell values 0 and 1; outer totalistic cellular automata with the same Moore neighborhood structure as Life are sometimes called life-like cellular automata.[52][53] There are many possible generalizations of the cellular automaton concept. One way is by using something other than a rectangular (cubic, etc.) grid. For example, if a plane is tiled with regular hexagons, those hexagons could be used as cells. In many cases the resulting cellular automata are equivalent to those with rectangular grids with specially designed neighborhoods and rules. Another variation would be to make the grid itself irregular, such as with Penrose tiles.[54] Also, rules can be probabilistic rather than deterministic. Such cellular automata are called probabilistic cellular automata. A probabilistic rule gives, for each pattern at time t, the probabilities that the central cell will transition to each possible state at time t + 1. Sometimes a simpler rule is used; for example: "The rule is the Game of Life, but on each time step there is a 0.001% probability that each cell will transition to the opposite color." The neighborhood or rules could change over time or space. For example, initially the new state of a cell could be determined by the horizontally adjacent cells, but for the next generation the vertical cells would be used. In cellular automata, the new state of a cell is not affected by the new state of other cells.
This could be changed so that, for instance, a 2 by 2 block of cells can be determined by itself and the cells adjacent to itself. There are continuous automata. These are like totalistic cellular automata, but instead of the rule and states being discrete (e.g. a table, using states {0, 1, 2}), continuous functions are used, and the states become continuous (usually values in [0, 1]). The state of a location is a finite number of real numbers. Certain cellular automata can yield diffusion in liquid patterns in this way. Continuous spatial automata have a continuum of locations. The state of a location is a finite number of real numbers. Time is also continuous, and the state evolves according to differential equations. One important example is reaction–diffusion textures, differential equations proposed by Alan Turing to explain how chemical reactions could create the stripes on zebras and spots on leopards.[55] When these are approximated by cellular automata, they often yield similar patterns. MacLennan[1] considers continuous spatial automata as a model of computation. There are known examples of continuous spatial automata which exhibit propagating phenomena analogous to gliders in the Game of Life.[56] Graph rewriting automata are extensions of cellular automata based on graph rewriting systems.[57] The simplest nontrivial cellular automaton would be one-dimensional, with two possible states per cell, and a cell's neighbors defined as the adjacent cells on either side of it. A cell and its two neighbors form a neighborhood of 3 cells, so there are 2^3 = 8 possible patterns for a neighborhood. A rule consists of deciding, for each pattern, whether the cell will be a 1 or a 0 in the next generation. There are then 2^8 = 256 possible rules.[6] These 256 cellular automata are generally referred to by their Wolfram code, a standard naming convention invented by Wolfram that gives each rule a number from 0 to 255.
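The Wolfram code makes an elementary rule trivial to interpret: bit p of the 8-bit rule number is the next state of a cell whose 3-cell neighborhood, read as a binary number, equals p. A minimal sketch on a finite row padded with 0s outside (the function name is illustrative):

```python
def eca_step(rule, cells):
    """One step of an elementary CA given by its Wolfram code (0-255).

    The row is a list of 0/1 cells; cells outside the row are treated as 0.
    """
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        # Read the (left, center, right) neighborhood as a 3-bit number p,
        # then look up bit p of the rule number.
        p = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        out.append((rule >> p) & 1)
    return out

row = [0, 0, 1, 0, 0]        # a single 1 surrounded by 0s, as described above
row = eca_step(30, row)      # second row of the rule 30 picture
```

For example, rule 30 is binary 00011110, so neighborhood 100 (p = 4) maps to 1 while 101 (p = 5) maps to 0, matching the rule tables usually drawn for these automata.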
A number of papers have analyzed and compared the distinct cases among the 256 cellular automata (many are trivially isomorphic). The rule 30, rule 90, rule 110, and rule 184 cellular automata are particularly interesting. The images below show the history of rules 30 and 110 when the starting configuration consists of a 1 (at the top of each image) surrounded by 0s. Each row of pixels represents a generation in the history of the automaton, with t = 0 being the top row. Each pixel is colored white for 0 and black for 1. Rule 30 exhibits class 3 behavior, meaning even simple input patterns such as that shown lead to chaotic, seemingly random histories. Rule 110, like the Game of Life, exhibits what Wolfram calls class 4 behavior, which is neither completely random nor completely repetitive. Localized structures appear and interact in various complicated-looking ways. In the course of the development of A New Kind of Science, as a research assistant to Wolfram in 1994, Matthew Cook proved that some of these structures were rich enough to support universality. This result is interesting because rule 110 is an extremely simple one-dimensional system, and difficult to engineer to perform specific behavior. This result therefore provides significant support for Wolfram's view that class 4 systems are inherently likely to be universal. Cook presented his proof at a Santa Fe Institute conference on cellular automata in 1998, but Wolfram blocked the proof from being included in the conference proceedings, as Wolfram did not want the proof announced before the publication of A New Kind of Science.[58] In 2004, Cook's proof was finally published in Wolfram's journal Complex Systems (Vol. 15, No. 1), over ten years after Cook came up with it.
Rule 110 has been the basis for some of the smallest universal Turing machines.[59] An elementary cellular automaton rule is specified by 8 bits, and all elementary cellular automaton rules can be considered to sit on the vertices of the 8-dimensional unit hypercube. This unit hypercube is the cellular automaton rule space. For next-nearest-neighbor cellular automata, a rule is specified by 2^5 = 32 bits, and the cellular automaton rule space is a 32-dimensional unit hypercube. A distance between two rules can be defined as the number of steps required to move from the vertex representing the first rule to the vertex representing the second rule along the edges of the hypercube. This rule-to-rule distance is also called the Hamming distance. Cellular automaton rule space allows us to ask whether rules with similar dynamical behavior are "close" to each other. Graphically drawing a high-dimensional hypercube on the 2-dimensional plane remains a difficult task, and one crude locator of a rule in the hypercube is the number of 1 bits in the 8-bit string for elementary rules (or the 32-bit string for next-nearest-neighbor rules). Drawing the rules in different Wolfram classes in these slices of the rule space shows that class 1 rules tend to have a lower number of 1 bits, and are thus located in one region of the space, whereas class 3 rules tend to have a higher proportion (50%) of 1 bits.[39] For larger cellular automaton rule spaces, it is shown that class 4 rules are located between the class 1 and class 3 rules.[60] This observation is the foundation for the phrase edge of chaos, and is reminiscent of the phase transition in thermodynamics. Several biological processes occur in, or can be simulated by, cellular automata.
Some examples of biological phenomena modeled by cellular automata with a simple state space are: Additionally, biological phenomena which require explicit modeling of the agents' velocities (for example, those involved in collective cell migration) may be modeled by cellular automata with a more complex state space and rules, such as biological lattice-gas cellular automata. These include phenomena of great medical importance, such as: The Belousov–Zhabotinsky reaction is a spatio-temporal chemical oscillator that can be simulated by means of a cellular automaton. In the 1950s A. M. Zhabotinsky (extending the work of B. P. Belousov) discovered that when a thin, homogeneous layer of a mixture of malonic acid, acidified bromate, and a ceric salt was left undisturbed, fascinating geometric patterns such as concentric circles and spirals propagate across the medium. In the "Computer Recreations" section of the August 1988 issue of Scientific American,[69] A. K. Dewdney discussed a cellular automaton[70] developed by Martin Gerhardt and Heike Schuster of the University of Bielefeld (Germany). This automaton produces wave patterns that resemble those in the Belousov–Zhabotinsky reaction. Combining the attachment of single particles to the growing aggregate, following the seminal model of Witten and Sander[71] for diffusion-limited growth, with attachment at kink positions as proposed by Kossel and Stranski in the 1920s (see[72] for the kinetics-limited version of attachment), Goranova et al.[73] proposed a model for electrochemical co-deposition of two metal cations. Probabilistic cellular automata are used in statistical and condensed matter physics to study phenomena like fluid dynamics and phase transitions. The Ising model is a prototypical example, in which each cell can be in either of two states called "up" and "down", making an idealized representation of a magnet.
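The Ising dynamics just described can be sketched with a Metropolis Monte Carlo update, one common way to realize the model as a probabilistic cellular automaton. This is an illustrative sketch in units with J = kB = 1 (names and parameters invented here), not code from the source:

```python
import math
import random

# Metropolis sketch of the 2D Ising model on a periodic N x N grid:
# each cell holds spin +1 ("up") or -1 ("down"); a flip changing the
# energy by dE is accepted with probability min(1, exp(-dE / T)).
def sweep(spins, T, rng):
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        neighbors = (spins[(i - 1) % n][j] + spins[(i + 1) % n][j]
                     + spins[i][(j - 1) % n] + spins[i][(j + 1) % n])
        dE = 2 * spins[i][j] * neighbors
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

def magnetization(spins):
    n = len(spins)
    return sum(map(sum, spins)) / (n * n)

rng = random.Random(1)
n = 16
spins = [[1] * n for _ in range(n)]   # start fully magnetized
for _ in range(200):
    sweep(spins, T=1.5, rng=rng)      # below Tc ~ 2.27, order persists
print(abs(magnetization(spins)))
```

Raising T above the critical temperature in this sketch drives the average magnetization toward zero, the demagnetization transition discussed next.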
By adjusting the parameters of the model, the proportion of cells being in the same state can be varied, in ways that help explicate how ferromagnets become demagnetized when heated. Moreover, results from studying the demagnetization phase transition can be transferred to other phase transitions, like the evaporation of a liquid into a gas; this convenient cross-applicability is known as universality.[74][75] The phase transition in the two-dimensional Ising model and other systems in its universality class has been of particular interest, as it requires conformal field theory to understand in depth.[76] Other cellular automata that have been of significance in physics include lattice gas automata, which simulate fluid flows. In a series of works[77][78][79][80] the so-called vicinal cellular automaton (vicCA) was proposed and further developed to model the possibly unstable growth and sublimation of vicinal crystal surfaces in 1+1D. Besides the attachment/detachment events being encoded in the rules of the CA, the adatoms on top of the vicinal surface form a thin layer, and their thermal motion is modeled by a Monte Carlo module.[77][79] A decisive step further was the transition of the model to 2+1D,[81] where a number of different structures were obtained, referred to by the authors as "vicinal creatures": step bunches, step meanders, nano-pillars, nanowires, etc.[81] The vicCA model was extensively used by Alexey Redkov[82] to develop a machine learning algorithm on top of it, significantly speeding up calculations by a factor of 10⁵ while enabling systematic classification of the observed phenomena. Cellular automaton processors are physical implementations of CA concepts, which can process information computationally. Processing elements are arranged in a regular grid of identical cells. The grid is usually a square tiling, or tessellation, of two or three dimensions; other tilings are possible, but not yet used.
Cell states are determined only by interactions with adjacent neighbor cells. No means exists to communicate directly with cells farther away.[83] One such cellular automaton processor array configuration is the systolic array. Cell interaction can be via electric charge, magnetism, vibration (phonons at quantum scales), or any other physically useful means. This can be done in several ways so that no wires are needed between any elements. This is very unlike the processors used in most computers today (von Neumann designs), which are divided into sections with elements that can communicate with distant elements over wires. Rule 30 was originally suggested as a possible block cipher for use in cryptography. Two-dimensional cellular automata can be used for constructing a pseudorandom number generator.[84] Cellular automata have been proposed for public-key cryptography. The one-way function is the evolution of a finite CA whose inverse is believed to be hard to find. Given the rule, anyone can easily calculate future states, but it appears to be very difficult to calculate previous states. Cellular automata have also been applied to design error correction codes.[85] Other problems that can be solved with cellular automata include: Cellular automata have been used in generative music[86] and evolutionary music composition,[87] and in procedural terrain generation in video games.[88] Certain types of cellular automata can be used to generate mazes.[89] Two well-known such cellular automata, Maze and Mazectric, have rulestrings B3/S12345 and B3/S1234.[89] In the former, this means that cells survive from one generation to the next if they have at least one and at most five neighbours. In the latter, this means that cells survive if they have one to four neighbours. If a cell has exactly three neighbours, it is born.
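The Maze rulestring B3/S12345 defined above can be sketched as a life-like cellular automaton on a periodic grid. A generic illustration (grid size and seed are arbitrary), not code from the source:

```python
import random

# "Maze" life-like rule B3/S12345: a dead cell is born with exactly 3 live
# Moore neighbors; a live cell survives with 1-5. Run from a random seed.
BIRTH, SURVIVE = {3}, {1, 2, 3, 4, 5}

def step(grid):
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            live = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            new[i][j] = 1 if (live in SURVIVE if grid[i][j] else live in BIRTH) else 0
    return new

rng = random.Random(0)
n = 24
grid = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
for _ in range(30):
    grid = step(grid)
for row in grid:
    print("".join("#" if c else " " for c in row))
```

Changing SURVIVE to {1, 2, 3, 4} gives the Mazectric variant with its longer, straighter corridors.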
It is similar to Conway's Game of Life in that patterns that do not have a living cell adjacent to 1, 4, or 5 other living cells in any generation will behave identically to it.[89] However, for large patterns, it behaves very differently from Life.[89] For a random starting pattern, these maze-generating cellular automata will evolve into complex mazes with well-defined walls outlining corridors. Mazectric, which has the rule B3/S1234, has a tendency to generate longer and straighter corridors compared with Maze, with the rule B3/S12345.[89] Since these cellular automaton rules are deterministic, each maze generated is uniquely determined by its random starting pattern. This is a significant drawback since the mazes tend to be relatively predictable. Specific cellular automata rules include:
https://en.wikipedia.org/wiki/Cellular_automaton
Cloud-based quantum computing is the invocation of quantum emulators, simulators, or processors through the cloud. Increasingly, cloud services are being looked on as the method for providing access to quantum processing. Quantum computers achieve their massive computing power by harnessing quantum physics for processing, and when users are allowed access to these quantum-powered computers through the internet, it is known as quantum computing within the cloud. In 2016, IBM connected a small quantum computer to the cloud, allowing simple programs to be built and executed on it.[1] In early 2017, researchers from Rigetti Computing demonstrated the first programmable cloud access using the pyQuil Python library.[2] Many people, from academic researchers and professors to schoolkids, have already built programs that run many different quantum algorithms using these tools. Some consumers hoped to use the fast computing to model financial markets or to build more advanced AI systems. These methods allow people outside a professional lab or institution to experience and learn more about this emerging technology.[3] Cloud-based quantum computing is used in several contexts:
https://en.wikipedia.org/wiki/Cloud-based_quantum_computing
In quantum mechanics, counterfactual definiteness (CFD) is the ability to speak "meaningfully" of the definiteness of the results of measurements that have not been performed (i.e., the ability to assume the existence of objects, and properties of objects, even when they have not been measured).[1][2] The term "counterfactual definiteness" is used in discussions of physics calculations, especially those related to the phenomenon called quantum entanglement and those related to the Bell inequalities.[3] In such discussions "meaningfully" means the ability to treat these unmeasured results on an equal footing with measured results in statistical calculations. It is this (sometimes assumed but unstated) aspect of counterfactual definiteness that is of direct relevance to physics and mathematical models of physical systems, and not philosophical concerns regarding the meaning of unmeasured results. The subject of counterfactual definiteness receives attention in the study of quantum mechanics because it is argued that, when challenged by the findings of quantum mechanics, classical physics must give up its claim to one of three assumptions: locality (no "spooky action at a distance"), no-conspiracy (also called "asymmetry of time"),[4][5] or counterfactual definiteness (or "non-contextuality"). If physics gives up the claim to locality, it brings into question our ordinary ideas about causality and suggests that events may transpire at faster-than-light speeds.[6] If physics gives up the "no conspiracy" condition, it becomes possible for "nature to force experimenters to measure what she wants, and when she wants, hiding whatever she does not like physicists to see."[7] If physics rejects the possibility that, in all cases, there can be "counterfactual definiteness," then it rejects some features that humans are very much accustomed to regarding as enduring features of the universe.
"The elements of reality the EPR paper is talking about are nothing but what the property interpretation calls properties existing independently of the measurements. In each run of the experiment, there exist some elements of reality, the system has particular properties <#ai> which unambiguously determine the measurement outcome <ai>, given that the corresponding measurement a is performed."[8] As a noun, "counterfactual" may refer to an inferred effect or consequence of an unobserved macroscopic event. An example is counterfactual quantum computation.[9] An interpretation of quantum mechanics can be said to involve the use of counterfactual definiteness if it includes in the mathematical modelling outcomes of measurements that are counterfactual; in particular, those that are excluded according to quantum mechanics by the fact that quantum mechanics does not contain a description of simultaneous measurement of conjugate pairs of properties.[10] For example, the uncertainty principle states that one cannot simultaneously know, with arbitrarily high precision, both the position and momentum of a particle.[11] Suppose one measures the position of a particle. This act destroys any information about its momentum. Is it then possible to talk about the outcome that one would have obtained if one had measured its momentum instead of its position? In terms of mathematical formalism, is such a counterfactual momentum measurement to be included, together with the factual position measurement, in the statistical population of possible outcomes describing the particle?
If the position were found to be r0, then in an interpretation that permits counterfactual definiteness, the statistical population describing position and momentum would contain all pairs (r0, p) for every possible momentum value p, whereas an interpretation that rejects counterfactual values completely would only have the pair (r0, ⊥), where ⊥ (called "up tack" or "eet") denotes an undefined value.[12] To use a macroscopic analogy, an interpretation which rejects counterfactual definiteness views measuring the position as akin to asking where in a room a person is located, while measuring the momentum is akin to asking whether the person's lap is empty or has something on it. If the person's position has changed by making him or her stand rather than sit, then that person has no lap and neither the statement "the person's lap is empty" nor "there is something on the person's lap" is true. Any statistical calculation based on values where the person is standing at some place in the room and simultaneously has a lap as if sitting would be meaningless.[13] The dependability of counterfactually definite values is a basic assumption which, together with "time asymmetry" and "local causality", led to the Bell inequalities. Bell showed that the results of experiments intended to test the idea of hidden variables would be predicted to fall within certain limits based on all three of these assumptions, which are considered principles fundamental to classical physics, but that results found within those limits would be inconsistent with the predictions of quantum mechanical theory. Experiments have shown that quantum mechanical results predictably exceed those classical limits.
Calculating expectations based on Bell's work implies that for quantum physics the assumption of "local realism" must be abandoned.[14] Bell's theorem proves that every type of quantum theory must necessarily violate locality or reject the possibility of extending the mathematical description with outcomes of measurements which were not actually made.[15][16] Counterfactual definiteness is present in any interpretation of quantum mechanics that allows quantum mechanical measurement outcomes to be seen as deterministic functions of a system's state or of the state of the combined system and measurement apparatus. Cramer's (1986) transactional interpretation does not make that interpretation.[16] The traditional Copenhagen interpretation of quantum mechanics rejects counterfactual definiteness, as it does not ascribe any value at all to a measurement that was not performed. When measurements are performed, values result, but these are not considered to be revelations of pre-existing values. In the words of Asher Peres, "unperformed experiments have no results".[17] The many-worlds interpretation rejects counterfactual definiteness in a different sense; instead of not assigning a value to measurements that were not performed, it ascribes many values. When measurements are performed, each of these values gets realized as the resulting value in a different world of a branching reality. As Prof. Guy Blaylock of the University of Massachusetts Amherst puts it, "The many-worlds interpretation is not only counterfactually indefinite, it is factually indefinite as well."[18] The consistent histories approach rejects counterfactual definiteness in yet another manner; it ascribes single but hidden values to unperformed measurements and disallows combining values of incompatible measurements (counterfactual or factual), as such combinations do not produce results that would match any obtained purely from performed compatible measurements.
When a measurement is performed, the hidden value is nevertheless realized as the resulting value. Robert Griffiths likens these to "slips of paper" placed in "opaque envelopes".[19] Thus consistent histories does not reject counterfactual results per se; it rejects them only when they are being combined with incompatible results.[20] Whereas in the Copenhagen interpretation or the many-worlds interpretation the algebraic operations used to derive Bell's inequality cannot proceed, due to having no value or many values where a single value is required, in consistent histories they can be performed, but the resulting correlation coefficients cannot be equated with those that would be obtained by actual measurements (which are instead given by the rules of the quantum mechanical formalism). The derivation combines incompatible results, only some of which could be factual for a given experiment, the rest being counterfactual.
https://en.wikipedia.org/wiki/Counterfactual_definiteness
Counterfactual quantum computation is a method of inferring the result of a computation without actually running a quantum computer otherwise capable of actively performing that computation. Physicists Graeme Mitchison and Richard Jozsa introduced the notion of counterfactual computing[1] as an application of quantum computing, founded on the concepts of counterfactual definiteness, on a re-interpretation of the Elitzur–Vaidman bomb tester thought experiment, and making theoretical use of the phenomenon of interaction-free measurement. After seeing a talk on counterfactual computation by Jozsa at the Isaac Newton Institute, Keith Bowden of the Theoretical Physics Research Unit at Birkbeck College, University of London published a paper[2] in 1997 describing a digital computer that could be counterfactually interrogated to calculate whether a light beam would fail to pass through a maze[3] as an example of this idea. More recently, the idea of counterfactual quantum communication has been proposed and demonstrated.[4] The quantum computer may be physically implemented in arbitrary ways,[5] but, to date, the common apparatus considered features a Mach–Zehnder interferometer. The quantum computer is set in a superposition of "not running" and "running" states by means such as the quantum Zeno effect. Those state histories are quantum-interfered. After many repetitions of very rapid projective measurements, the "not running" state evolves to a final value imprinted into the properties of the quantum computer. Measuring that value allows for learning the result of some types of computations,[6] such as Grover's algorithm, even though the result was derived from the non-running state of the quantum computer.
The original formulation[1] of counterfactual quantum computation stated that a set m of measurement outcomes is a counterfactual outcome if there is only one history associated with m, that history contains only "off" (non-running) states, and there is only a single possible computational output associated with m. A refined definition[7] of counterfactual computation, expressed in procedures and conditions, is: (i) identify and label all histories (quantum paths), with as many labels as needed, which lead to the same set m of measurement outcomes, and (ii) coherently superpose all possible histories; (iii) after cancelling the terms (if any) whose complex amplitudes together add to zero, the set m of measurement outcomes is a counterfactual outcome if (iv) there are no terms left with the computer-running label in their history labels, and (v) there is only a single possible computer output associated with m. In 1997, after discussions with Abner Shimony and Richard Jozsa, and inspired by the idea of the (1993) Elitzur–Vaidman bomb tester, Keith Bowden (Birkbeck College) published a paper[2] describing a digital computer that could be counterfactually interrogated to calculate whether a photon would fail to pass through a maze of mirrors.[3] This so-called mirror array replaces the tentative bomb in Elitzur and Vaidman's device (actually a Mach–Zehnder interferometer). One time in four, a photon will exit the device in such a way as to indicate that the maze is not navigable, even though the photon never passed through the mirror array. The mirror array itself is set up in such a way that it is defined by an n-by-n matrix of bits. The output (fail or otherwise) is itself defined by a single bit. Thus the mirror array itself is an n²-bit-in, 1-bit-out digital computer which calculates mazes and can be run counterfactually. Although the overall device is clearly a quantum computer, the part which is counterfactually tested is semiclassical.
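Classically, the property the mirror-maze device interrogates, whether any path exists through the maze, is a simple reachability check. The sketch below (entrance at the top-left corner, exit at the bottom-right, 1 = open cell) is purely illustrative; the actual optical encoding in Bowden's device differs:

```python
from collections import deque

# Breadth-first search over an n x n grid of bits: returns True when a
# 4-connected path of open cells joins the entrance to the exit.
def navigable(maze):
    n = len(maze)
    if not maze[0][0] or not maze[n - 1][n - 1]:
        return False
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        i, j = queue.popleft()
        if (i, j) == (n - 1, n - 1):
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and maze[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return False

open_maze = [[1, 1, 0], [0, 1, 0], [0, 1, 1]]
blocked   = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(navigable(open_maze), navigable(blocked))
```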
In 2015, counterfactual quantum computation was demonstrated in the experimental context of "spins of a negatively charged nitrogen-vacancy color center in a diamond".[8] Previously suspected limits of efficiency were exceeded, achieving a counterfactual computational efficiency of 85%, with higher efficiency foreseen in principle.[9]
https://en.wikipedia.org/wiki/Counterfactual_quantum_computation
Landauer's principle is a physical principle pertaining to a lower theoretical limit of the energy consumption of computation. It holds that an irreversible change in information stored in a computer, such as merging two computational paths, dissipates a minimum amount of heat to its surroundings.[1] It is hypothesized that energy consumption below this lower bound would require the development of reversible computing. The principle was first proposed by Rolf Landauer in 1961. Landauer's principle states that the minimum energy needed to erase one bit of information is proportional to the temperature at which the system is operating. Specifically, the energy needed for this computational task is given by E = kB T ln 2, where kB is the Boltzmann constant and T is the temperature in kelvins.[2] At room temperature, the Landauer limit represents an energy of approximately 0.018 eV (2.9×10⁻²¹ J). As of 2012, modern computers use about a billion times as much energy per operation.[3][4] Rolf Landauer first proposed the principle in 1961 while working at IBM.[5] He justified and stated important limits to an earlier conjecture by John von Neumann. This refinement is sometimes called the Landauer bound, or Landauer limit. In 2008 and 2009, researchers showed that Landauer's principle can be derived from the second law of thermodynamics and the entropy change associated with information gain, developing the thermodynamics of quantum and classical feedback-controlled systems.[6][7] In 2011, the principle was generalized to show that while information erasure requires an increase in entropy, this increase could theoretically occur at no energy cost.[8] Instead, the cost can be taken in another conserved quantity, such as angular momentum.
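The bound E = kB T ln 2 can be evaluated numerically; a small sketch using CODATA constant values, reproducing the quoted room-temperature figure of about 0.018 eV (300 K taken as "room temperature"):

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
K_B = 1.380649e-23            # Boltzmann constant, J/K (CODATA exact value)
E_CHARGE = 1.602176634e-19    # joules per electronvolt

def landauer_limit(T):
    """E = kB * T * ln 2, in joules, for T in kelvins."""
    return K_B * T * math.log(2)

E = landauer_limit(300.0)
print(f"{E:.3e} J = {E / E_CHARGE:.4f} eV")
```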
In a 2012 article published in Nature, a team of physicists from the École normale supérieure de Lyon, the University of Augsburg, and the University of Kaiserslautern described how, for the first time, they had measured the tiny amount of heat released when an individual bit of data is erased.[9] In 2014, physical experiments tested Landauer's principle and confirmed its predictions.[10] In 2016, researchers used a laser probe to measure the amount of energy dissipation that resulted when a nanomagnetic bit flipped from off to on. Flipping the bit required about 0.026 eV (4.2×10⁻²¹ J) at 300 K, which is just 44% above the Landauer minimum.[11] A 2018 article published in Nature Physics features a Landauer erasure performed at cryogenic temperatures (T = 1 K) on an array of high-spin (S = 10) quantum molecular magnets. The array is made to act as a spin register where each nanomagnet encodes a single bit of information.[12] The experiment has laid the foundations for extending the validity of the Landauer principle to the quantum realm. Owing to the fast dynamics and low "inertia" of the single spins used in the experiment, the researchers also showed how an erasure operation can be carried out at the lowest possible thermodynamic cost (that imposed by the Landauer principle) and at high speed.[12][1] The principle is widely accepted as physical law, but it has been challenged for using circular reasoning and faulty assumptions.[13][14][15][16] Others[1][17][18] have defended the principle, and Sagawa and Ueda (2008)[6] and Cao and Feito (2009)[7] have shown that Landauer's principle is a consequence of the second law of thermodynamics and the entropy reduction associated with information gain. On the other hand, recent advances in non-equilibrium statistical physics have established that there is no a priori relationship between logical and thermodynamic reversibility.[19] It is possible that a physical process is logically reversible but thermodynamically irreversible.
It is also possible that a physical process is logically irreversible but thermodynamically reversible. At best, the benefits of implementing a computation with a logically reversible system are nuanced.[20] In 2016, researchers at the University of Perugia claimed to have demonstrated a violation of Landauer's principle,[21] though their conclusions were disputed.[22]
https://en.wikipedia.org/wiki/Landauer%27s_principle
In logic, a logical connective (also called a logical operator, sentential connective, or sentential operator) is a logical constant. Connectives can be used to connect logical formulas. For instance, in the syntax of propositional logic, the binary connective ∨ can be used to join the two atomic formulas P and Q, rendering the complex formula P ∨ Q. Common connectives include negation, disjunction, conjunction, implication, and equivalence. In standard systems of classical logic, these connectives are interpreted as truth functions, though they receive a variety of alternative interpretations in nonclassical logics. Their classical interpretations are similar to the meanings of natural language expressions such as English "not", "or", "and", and "if", but not identical. Discrepancies between natural language connectives and those of classical logic have motivated nonclassical approaches to natural language meaning as well as approaches which pair a classical compositional semantics with a robust pragmatics. In formal languages, truth functions are represented by unambiguous symbols. This allows logical statements to not be understood in an ambiguous way. These symbols are called logical connectives, logical operators, propositional operators, or, in classical logic, truth-functional connectives. For the rules which allow new well-formed formulas to be constructed by joining other well-formed formulas using truth-functional connectives, see well-formed formula. Logical connectives can be used to link zero or more statements, so one can speak about n-ary logical connectives. The Boolean constants True and False can be thought of as nullary operators. Negation is a unary connective, and so on.
Commonly used logical connectives include the following ones.[1] For example, the meaning of the statements it is raining (denoted by p) and I am indoors (denoted by q) is transformed when the two are combined with logical connectives. It is also common to consider the always true formula and the always false formula to be connectives (in which case they are nullary). This table summarizes the terminology. Some authors have used letters for connectives: u. for conjunction (German "und", "and") and o. for disjunction (German "oder", "or") in early works by Hilbert (1904);[16] Np for negation, Kpq for conjunction, Dpq for alternative denial, Apq for disjunction, Cpq for implication, and Epq for biconditional by Łukasiewicz in 1929. A logical connective such as converse implication "←" is actually the same as the material conditional with swapped arguments; thus, the symbol for converse implication is redundant. In some logical calculi (notably, in classical logic), certain essentially different compound statements are logically equivalent. A less trivial example of a redundancy is the classical equivalence between ¬p ∨ q and p → q. Therefore, a classical-based logical system does not need the conditional operator "→" if "¬" (not) and "∨" (or) are already in use, or may use the "→" only as syntactic sugar for a compound having one negation and one disjunction. There are sixteen Boolean functions associating the input truth values p and q with four-digit binary outputs.[17] These correspond to possible choices of binary logical connectives for classical logic.
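The classical equivalence between ¬p ∨ q and p → q, and the count of sixteen binary connectives, can both be checked mechanically by truth table; a small sketch (names invented here for illustration):

```python
from itertools import product

# The material conditional p -> q, given by its truth table.
IMPLIES = {(False, False): True, (False, True): True,
           (True, False): False, (True, True): True}

# Classical equivalence: "not p or q" agrees with p -> q on all four rows.
for p, q in product((False, True), repeat=2):
    assert ((not p) or q) == IMPLIES[(p, q)]

# Each binary connective is one of the 2**4 = 16 possible output columns
# over the four input rows (p, q).
tables = {tuple(bool((i >> (2 * p + q)) & 1)
                for p, q in product((0, 1), repeat=2))
          for i in range(16)}
print(len(tables))
```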
Different implementations of classical logic can choose different functionally complete subsets of connectives. One approach is to choose a minimal set, and define other connectives by some logical form, as in the example with the material conditional above. The following are the minimal functionally complete sets of operators in classical logic whose arities do not exceed 2: Another approach is to use, with equal rights, connectives of a certain convenient and functionally complete, but not minimal, set. This approach requires more propositional axioms, and each equivalence between logical forms must be either an axiom or provable as a theorem. The situation, however, is more complicated in intuitionistic logic. Of its five connectives, {∧, ∨, →, ¬, ⊥}, only negation "¬" can be reduced to other connectives (see False (logic) § False, negation and contradiction for more). Neither conjunction, disjunction, nor material conditional has an equivalent form constructed from the other four logical connectives. The standard logical connectives of classical logic have rough equivalents in the grammars of natural languages. In English, as in many languages, such expressions are typically grammatical conjunctions. However, they can also take the form of complementizers, verb suffixes, and particles. The denotations of natural language connectives are a major topic of research in formal semantics, a field that studies the logical structure of natural languages. The meanings of natural language connectives are not precisely identical to their nearest equivalents in classical logic. In particular, disjunction can receive an exclusive interpretation in many languages. Some researchers have taken this fact as evidence that natural language semantics is nonclassical. However, others maintain classical semantics by positing pragmatic accounts of exclusivity which create the illusion of nonclassicality. In such accounts, exclusivity is typically treated as a scalar implicature.
Related puzzles involving disjunction include free choice inferences, Hurford's Constraint, and the contribution of disjunction in alternative questions. Other apparent discrepancies between natural language and classical logic include the paradoxes of material implication, donkey anaphora, and the problem of counterfactual conditionals. These phenomena have been taken as motivation for identifying the denotations of natural language conditionals with logical operators including the strict conditional, the variably strict conditional, as well as various dynamic operators. The following table shows the standard classically definable approximations for the English connectives. Some logical connectives possess properties that may be expressed in theorems containing the connective. Some of those properties that a logical connective may have are: For classical and intuitionistic logic, the "=" symbol means that corresponding implications "...→..." and "...←..." for logical compounds can both be proved as theorems, and the "≤" symbol means that "...→..." for logical compounds is a consequence of the corresponding "...→..." connectives for propositional variables. Some many-valued logics may have incompatible definitions of equivalence and order (entailment). Both conjunction and disjunction are associative, commutative, and idempotent in classical logic, most varieties of many-valued logic, and intuitionistic logic. The same is true about the distributivity of conjunction over disjunction and disjunction over conjunction, as well as for the absorption law. In classical logic and some varieties of many-valued logic, conjunction and disjunction are dual, and negation is self-dual; negation is also self-dual in intuitionistic logic. As a way of reducing the number of necessary parentheses, one may introduce precedence rules: ¬ has higher precedence than ∧, ∧ higher than ∨, and ∨ higher than →.
So, for example, P ∨ Q ∧ ¬R → S is short for (P ∨ (Q ∧ (¬R))) → S. Here is a table that shows a commonly used precedence of logical operators.[18][19] However, not all compilers use the same order; for instance, an ordering in which disjunction is lower precedence than implication or bi-implication has also been used.[20] Sometimes precedence between conjunction and disjunction is left unspecified, requiring it to be provided explicitly in a given formula with parentheses. The order of precedence determines which connective is the "main connective" when interpreting a non-atomic formula. The 16 logical connectives can be partially ordered to produce a Hasse diagram. The partial order is defined by declaring that x ≤ y if and only if whenever x holds then so does y. Logical connectives are used in computer science and in set theory. A truth-functional approach to logical operators is implemented as logic gates in digital circuits. Practically all digital circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates; see more details in Truth function in computer science. Logical operators over bit vectors (corresponding to finite Boolean algebras) are bitwise operations. But not every usage of a logical connective in computer programming has a Boolean semantic. For example, lazy evaluation is sometimes implemented for P ∧ Q and P ∨ Q, so these connectives are not commutative if either or both of the expressions P, Q have side effects. Also, a conditional, which in some sense corresponds to the material conditional connective, is essentially non-Boolean because for if (P) then Q;, the consequent Q is not executed if the antecedent P is false (although a compound as a whole is successful ≈ "true" in such a case). This is closer to intuitionist and constructivist views on the material conditional than to classical logic's views.
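The non-commutativity of short-circuit connectives in the presence of side effects, noted above, is easy to demonstrate; an illustrative sketch (Python's `and` is short-circuiting):

```python
# With side effects, "p and q" differs from "q and p": the right operand
# of a short-circuit "and" is never evaluated when the left is false.
calls = []

def logged(name, value):
    calls.append(name)     # side effect: record that this operand ran
    return value

calls.clear()
_ = logged("p", False) and logged("q", True)
first = list(calls)        # only "p": the right operand was skipped

calls.clear()
_ = logged("q", True) and logged("p", False)
second = list(calls)       # both operands evaluated

print(first, second)
```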
Logical connectives are used to define the fundamental operations of set theory,[21] as follows: This definition of set equality is equivalent to the axiom of extensionality.
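The correspondence between connectives and set operations can be sketched in plain Python (the universe `U` and the sets `A`, `B` are illustrative choices, not from the article): x ∈ A ∪ B iff (x ∈ A) ∨ (x ∈ B), x ∈ A ∩ B iff (x ∈ A) ∧ (x ∈ B), and x ∈ A ∖ B iff (x ∈ A) ∧ ¬(x ∈ B).

```python
U = set(range(10))  # a small universe of discourse
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

# Each set operation is defined by the corresponding connective.
union        = {x for x in U if x in A or x in B}        # disjunction
intersection = {x for x in U if x in A and x in B}       # conjunction
difference   = {x for x in U if x in A and not x in B}   # conjunction + negation

assert union == A | B
assert intersection == A & B
assert difference == A - B
```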
https://en.wikipedia.org/wiki/Logical_connective
The one-way quantum computer, also known as measurement-based quantum computer (MBQC), is a method of quantum computing that first prepares an entangled resource state, usually a cluster state or graph state, then performs single-qubit measurements on it. It is "one-way" because the resource state is destroyed by the measurements. The outcome of each individual measurement is random, but they are related in such a way that the computation always succeeds. In general, the choices of basis for later measurements need to depend on the results of earlier measurements, and hence the measurements cannot all be performed at the same time. The implementation of MBQC is mainly considered for photonic devices,[1] due to the difficulty of entangling photons without measurements, and the simplicity of creating and measuring them. However, MBQC is also possible with matter-based qubits.[2] The process of entanglement and measurement can be described with the help of graph tools and group theory, in particular by the elements from the stabilizer group. Quantum computing aims to build an information theory with the features of quantum mechanics: whereas a classical binary unit of information (bit) can be set to either 0 or 1, a quantum binary unit of information (qubit) can be 0 and 1 at the same time, thanks to the phenomenon called superposition.[3][4][5] Another key feature of quantum computing is the entanglement between the qubits.[6][7][8] In the quantum logic gate model, a set of qubits, called a register, is prepared at the beginning of the computation, and then a set of logic operations over the qubits, carried out by unitary operators, is implemented.[9][10] A quantum circuit is formed by a register of qubits to which unitary transformations are applied.
In measurement-based quantum computation, instead of implementing a logic operation via unitary transformations, the same operation is executed by entangling a number k of input qubits with a cluster of a ancillary qubits, forming an overall source state of a + k = n qubits, and then measuring a number m of them.[11][12] The remaining k = n − m output qubits will be affected by the measurements because of the entanglement with the measured qubits. The one-way computer has been proved to be a universal quantum computer, which means it can reproduce any unitary operation over an arbitrary number of qubits.[9][13][14][15] The standard process of measurement-based quantum computing consists of three steps:[16][17] entangle the qubits, measure the ancillae (auxiliary qubits) and correct the outputs. In the first step, the qubits are entangled in order to prepare the source state. In the second step, the ancillae are measured, affecting the state of the output qubits. However, the measurement outcomes are non-deterministic, due to the probabilistic nature of quantum mechanics:[17] in order to carry on the computation in a deterministic way, some correction operators, called byproducts, are introduced. At the beginning of the computation, the qubits can be distinguished into two categories: the input and the ancillary qubits. The inputs represent the qubits set in a generic |ψ⟩ = α|0⟩ + β|1⟩ state, on which the unitary transformations are to act. In order to prepare the source state, all the ancillary qubits must be prepared in the |+⟩ state:[11][18] where |0⟩ and |1⟩ are the quantum encodings for the classical 0 and 1 bits: A register with n qubits will therefore be set as |+⟩^⊗n.
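The preparation of the ancilla register |+⟩^⊗n can be sketched with NumPy (an illustrative simulation, not from the article's references), building the state as an n-fold Kronecker product:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)

def plus_register(n):
    """Return the 2^n amplitude vector for the n-qubit |+> register."""
    state = np.array([1.0])
    for _ in range(n):
        state = np.kron(state, plus)
    return state

# Every computational-basis amplitude of the 3-qubit register is 1/sqrt(8).
reg = plus_register(3)
assert np.allclose(reg, np.full(8, 1 / np.sqrt(8)))
```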
Thereafter, the entanglement between two qubits can be performed by applying a controlled-Z (CZ) gate operation.[19] The matrix representation of this two-qubit operator is given by The action of a CZ gate over two qubits can be described by the following system: When a CZ gate is applied to two ancillae in the |+⟩ state, the overall state becomes an entangled pair of qubits. When entangling two ancillae, it does not matter which qubit is the control and which the target, as the outcome is the same. Similarly, as the CZ gates are represented in a diagonal form, they all commute with each other, and it does not matter which qubits are entangled first. Photons are the most common qubit system used in the context of one-way quantum computing.[20][21][22] However, deterministic CZ gates between photons are difficult to realize. Therefore, probabilistic entangling gates such as Bell state measurements are typically considered.[23] Furthermore, quantum emitters such as atoms[24] or quantum dots[25] can be used to create deterministic entanglement between photonic qubits.[26] The process of measurement over a single-particle state can be described by projecting the state onto an eigenvector of an observable. Consider an observable O with two possible eigenvectors, say |o1⟩ and |o2⟩, and suppose we deal with a multi-particle quantum system |Ψ⟩.
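The CZ entangling step can be checked numerically. The sketch below (NumPy, illustrative only) applies the diagonal CZ matrix to |+⟩|+⟩ and verifies that the result is entangled and symmetric under swapping control and target:

```python
import numpy as np

CZ = np.diag([1.0, 1.0, 1.0, -1.0])          # diagonal two-qubit CZ gate
plus = np.array([1.0, 1.0]) / np.sqrt(2)     # |+> state

state = CZ @ np.kron(plus, plus)
assert np.allclose(state, np.array([1, 1, 1, -1]) / 2)

# Entanglement check: the 2x2 amplitude matrix has Schmidt rank 2,
# so the state is not a tensor product of single-qubit states.
assert np.linalg.matrix_rank(state.reshape(2, 2)) == 2

# Control/target symmetry: conjugating CZ by SWAP leaves it unchanged.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
assert np.allclose(SWAP @ CZ @ SWAP, CZ)
```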
Measuring the i-th qubit with the O observable means projecting the |Ψ⟩ state onto the eigenvectors of O:[18] The actual state of the i-th qubit is now |oi⟩, which may be |o1⟩ or |o2⟩, depending on the outcome of the measurement (which is probabilistic in quantum mechanics). The measurement projection can be performed over the eigenstates of the M(θ) = cos(θ)X + sin(θ)Y observable: where X and Y belong to the Pauli matrices. The eigenvectors of M(θ) are |θ±⟩ = |0⟩ ± e^(iθ)|1⟩. Measuring a qubit on the X-Y plane, i.e. with the M(θ) observable, means projecting it onto |θ+⟩ or |θ−⟩. In one-way quantum computing, once a qubit has been measured, there is no way to recycle it in the flow of computation. Therefore, instead of using the |oi⟩⟨oi| notation, it is common to find ⟨oi| to indicate a projective measurement over the i-th qubit. After all the measurements have been performed, the system has been reduced to a smaller number of qubits, which form the output state of the system. Due to the probabilistic outcome of measurements, the system is not set in a deterministic way: after a measurement on the X-Y plane, the output differs according to whether the outcome was |θ+⟩ or |θ−⟩. In order to perform a deterministic computation, some corrections must be introduced.
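The observable M(θ) = cos(θ)X + sin(θ)Y and its eigenvectors can likewise be checked numerically; the following NumPy sketch (illustrative only) uses the normalized form of |θ±⟩ and an arbitrary angle:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def M(theta):
    # M(theta) = cos(theta) X + sin(theta) Y
    return np.cos(theta) * X + np.sin(theta) * Y

def theta_eigvec(theta, sign):
    # Normalized |theta_±> = (|0> ± e^{i theta}|1>)/sqrt(2)
    return np.array([1.0, sign * np.exp(1j * theta)]) / np.sqrt(2)

theta = 0.7  # arbitrary illustrative angle
for sign in (+1, -1):
    v = theta_eigvec(theta, sign)
    # M(theta)|theta_±> = ±|theta_±>
    assert np.allclose(M(theta) @ v, sign * v)
```

At θ = 0 the observable reduces to X and the eigenvectors to |±⟩, matching the special case used later in the measurement patterns.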
The correction operators, or byproduct operators, are applied to the output qubits after all the measurements have been performed.[18][27] The byproduct operators which can be implemented are X and Z.[28] Depending on the outcome of the measurement, a byproduct operator may or may not be applied to the output state: an X correction over the j-th qubit, depending on the outcome of the measurement performed over the i-th qubit via the M(θ) observable, can be described as X_j^(s_i), where s_i is set to 0 if the outcome of the measurement was |θ+⟩, and to 1 if it was |θ−⟩. In the first case, no correction occurs; in the latter, an X operator is applied to the j-th qubit. Thus, even though the outcome of a measurement is not deterministic in quantum mechanics, the measurement results can be used to perform corrections and carry on a deterministic computation. The operations of entanglement, measurement and correction can be performed in order to implement unitary gates. Such operations can be performed gate by gate for each logic gate in the circuit, or rather in a pattern which places all the entanglement operations at the beginning, the measurements in the middle and the corrections at the end of the circuit. Such a pattern of computation is referred to as the CME standard pattern.[16][17] In the CME formalism, the operation of entanglement between the i and j qubits is referred to as E_ij. The measurement on the i qubit, in the X-Y plane, with respect to a θ angle, is defined as M_i^θ.
Finally, the X byproduct on the i-th qubit, conditioned on the measurement of the j-th qubit, is written X_i^(s_j), where s_j is set to 0 if the outcome is the |θ+⟩ state, and to 1 when the outcome is |θ−⟩. The same notation holds for the Z byproducts. When performing a computation following the CME pattern, it may happen that two measurements M_i^(θ1) and M_j^(θ2) on the X-Y plane depend on each other's outcomes. For example, the sign in front of the angle of measurement on the j-th qubit may be flipped with respect to the measurement on the i-th qubit: in such a case, the notation is written as [M_j^(θ2)]^(s_i) M_i^(θ1), and the two measurement operations therefore no longer commute with each other. If s_i is set to 0, no flip of the θ2 sign occurs; otherwise (when s_i = 1) the θ2 angle is flipped to −θ2. The notation [M_j^(θ2)]^(s_i) can therefore be rewritten as M_j^((−)^(s_i) θ2). As an illustrative example, consider the Euler rotation in the XZX basis: such an operation, in the gate model of quantum computation, is described as[29] where φ, θ, λ are the angles for the rotation, while γ defines a global phase which is irrelevant for the computation.
To perform this operation in the one-way computing framework, it is possible to implement the following CME pattern:[27][30] where the input state |ψ⟩ = α|0⟩ + β|1⟩ is qubit 1, and all the other qubits are auxiliary ancillae, which therefore have to be prepared in the |+⟩ state. In the first step, the input state |ψ⟩ must be entangled with the second qubit; in turn, the second qubit must be entangled with the third one, and so on. The entangling operations E_ij between the qubits can be performed by the CZ gates. Next, the first and second qubits must be measured with the M(θ) observable, which means they must be projected onto the eigenstates |θ⟩ of that observable. When θ is zero, the |θ±⟩ states reduce to the |±⟩ states, i.e. the eigenvectors of the X Pauli operator. The first measurement M_1^0 is performed on qubit 1 with a θ = 0 angle, which means it has to be projected onto the ⟨±| states. The second measurement [M_2^(−λ)]^(s_1) is performed with respect to the −λ angle, i.e. the second qubit has to be projected onto the ⟨0| ± e^(iλ)⟨1| state. However, if the outcome of the previous measurement was ⟨−|, the sign of the λ angle has to be flipped, and the second qubit will be projected onto the ⟨0| + e^(−iλ)⟨1| state; if the outcome of the first measurement was ⟨+|, no flip needs to be performed.
The same operations have to be repeated for the third [M_3^θ]^(s_2) and fourth [M_4^φ]^(s_1+s_3) measurements, according to the respective angles and sign flips. The sign over the φ angle is set to (−)^(s_1+s_3). Finally, the fifth qubit (the only one not measured) constitutes the output state. Lastly, the corrections Z_5^(s_1+s_3) X_5^(s_2+s_4) over the output state have to be performed via the byproduct operators. For instance, if the measurements over the second and fourth qubits turned out to be ⟨φ+| and ⟨λ+|, no correction is conducted by the X_5 operator, as s_2 = s_4 = 0. The same result holds for a ⟨φ−|⟨λ−| outcome, as s_2 = s_4 = 1 and thus the squared Pauli operator X² returns the identity. As seen in this example, in the measurement-based computation model, the physical input qubit (the first) and output qubit (the fifth) may differ from each other. The one-way quantum computer allows the implementation of a circuit of unitary transformations through the operations of entanglement and measurement. At the same time, any quantum circuit can in turn be converted into a CME pattern: a technique to translate quantum circuits into an MBQC pattern of measurements has been formulated by V. Danos et al.[16][17][31] Such a conversion can be carried out using a universal set of logic gates composed of the CZ and J(θ) operators: therefore, any circuit can be decomposed into a set of CZ and J(θ) gates.
The J(θ) single-qubit operator is defined as follows: J(θ) can be converted into a CME pattern as follows, with qubit 1 being the input and qubit 2 being the output: which means that, to implement a J(θ) operator, the input qubit |ψ⟩ must be entangled with an ancilla qubit |+⟩, the input must then be measured on the X-Y plane, and thereafter the output qubit is corrected by the X_2 byproduct. Once every J(θ) gate has been decomposed into the CME pattern, the operations in the overall computation will consist of E_ij entanglements, M_i^(−θ_i) measurements and X_j corrections. In order to bring the whole flow of computation into a CME pattern, some rules are provided. In order to move all the E_ij entanglements to the beginning of the process, some commutation rules must be pointed out: The entanglement operator E_ij commutes with the Z Pauli operators and with any other operator A_k acting on a qubit k ≠ i,j, but not with the X Pauli operators acting on the i-th or j-th qubits. The measurement operations M_i^θ commute with the corrections in the following manner: where [M_i^θ]^s = M_i^((−)^s θ). This means that, when shifting the X corrections to the end of the pattern, some dependencies between the measurements may occur. The S_i^t operator is called signal shifting, and its action will be explained in the next paragraph.
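The J(θ) pattern can be verified by a direct state-vector simulation. The NumPy sketch below (an illustrative check, not from the cited references) runs the pattern X_2^(s_1) M_1^(−θ) E_12 on a random input and confirms that, after the byproduct correction, both measurement branches reproduce J(θ)|ψ⟩:

```python
import numpy as np

theta = 1.1  # arbitrary illustrative angle
# J(theta) = (1/sqrt(2)) [[1, e^{i theta}], [1, -e^{i theta}]]
J = np.array([[1,  np.exp(1j * theta)],
              [1, -np.exp(1j * theta)]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
plus = np.array([1, 1]) / np.sqrt(2)

# Random normalized input |psi> = alpha|0> + beta|1>
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# E_12: entangle the input with the |+> ancilla
state = CZ @ np.kron(psi, plus)

for s, sign in ((0, +1), (1, -1)):
    # M_1^{-theta}: project qubit 1 on <theta_±| with angle -theta;
    # the bra of (|0> ± e^{-i theta}|1>)/sqrt(2) is (<0| ± e^{+i theta}<1|)/sqrt(2)
    bra = np.array([1, sign * np.exp(1j * theta)]) / np.sqrt(2)
    out = bra @ state.reshape(2, 2)            # unnormalized state of qubit 2
    out = out / np.linalg.norm(out)
    out = np.linalg.matrix_power(X, s) @ out   # byproduct correction X^s
    assert np.allclose(out, J @ psi)           # both branches give J(theta)|psi>
```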
For particular θ angles, some simplifications, called Pauli simplifications, can be introduced: The action of the signal shifting operator S_i^t can be explained through its commutation rules: The s[(t+s_i)/s_i] operation needs explanation: suppose there is a sequence of signals s, consisting of s_1 + s_2 + ... + s_i + ...; the operation s[(t+s_i)/s_i] means substituting s_i with s_i + t in the sequence s, which becomes s_1 + s_2 + ... + s_i + t + .... If no s_i appears in the sequence s, no substitution occurs. To obtain a correct CME pattern, every signal shifting operator S_i^t must be moved to the end of the pattern. When preparing the source state of entangled qubits, a graph representation can be given by the stabilizer group. The stabilizer group S_n is an abelian subgroup of the Pauli group P_n, which can be described by its generators {±1, ±i} × {I, X, Y, Z}^⊗n.[32][33] A stabilizer state is an n-qubit state |Ψ⟩ which is a unique eigenstate of the generators S_i of the S_n stabilizer group:[19] Of course, S_i ∈ S_n for all i. It is therefore possible to define an n-qubit graph state |G⟩ as a quantum state associated with a graph, i.e. a set G = (V, E) whose vertices V correspond to the qubits, while the edges E represent the entanglements between the qubits themselves.
The vertices can be labelled by an index i, and the edges, linking the i-th vertex to the j-th one, by two-index labels, such as (i, j).[34] In the stabilizer formalism, such a graph structure can be encoded by the K_i generators of S_n, defined as[15][35][36] where j ∈ (i, j) stands for all the qubits j neighboring the i-th one, i.e. the vertices j linked by an (i, j) edge to the vertex i. Each K_i generator commutes with all the others. A graph composed of n vertices can be described by n generators from the stabilizer group: While the number of X_i is fixed for each K_i generator, the number of Z_j may differ, according to the connections implemented by the edges in the graph. The Clifford group C_n is composed of the elements which map the Pauli group P_n to itself under conjugation:[19][33][37] The Clifford group requires three generators, which can be chosen as the Hadamard gate H and the phase rotation S for the single-qubit gates, and one two-qubit gate chosen from the CNOT (controlled-NOT gate) or the CZ (controlled-phase gate): Consider a state |G⟩ which is stabilized by a set of stabilizers S_i. Acting on such a state with an element U from the Clifford group, the following equalities hold:[33][38] Therefore, the U operations map the |G⟩ state to U|G⟩ and its S_i stabilizers to U S_i U†.
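For the smallest nontrivial example, the two-vertex graph with a single edge (1, 2), the graph state is |G⟩ = CZ|+⟩|+⟩ and the generators are K_1 = X⊗Z and K_2 = Z⊗X. A NumPy sketch (illustrative only) confirming that both generators stabilize |G⟩:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.diag([1.0, -1.0])
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Graph state for the single-edge graph: |G> = CZ |+>|+>
G = CZ @ np.kron(plus, plus)

K1 = np.kron(X, Z)   # X on vertex 1, Z on its neighbor 2
K2 = np.kron(Z, X)   # X on vertex 2, Z on its neighbor 1

assert np.allclose(K1 @ G, G)         # K_1 |G> = |G>
assert np.allclose(K2 @ G, G)         # K_2 |G> = |G>
assert np.allclose(K1 @ K2, K2 @ K1)  # the generators commute
```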
Such operations may give rise to different representations for the K_i generators of the stabilizer group. The Gottesman–Knill theorem states that, given a set of logic gates from the Clifford group, followed by Z measurements, such a computation can be efficiently simulated on a classical computer in the strong sense, i.e. as a computation which produces in polynomial time the probability P(x) of a given output x from the circuit.[19][33][39][40][41] Measurement-based computation on a periodic 3D lattice cluster state can be used to implement topological quantum error correction.[42] Topological cluster state computation is closely related to Kitaev's toric code, as the 3D topological cluster state can be constructed and measured over time by a repeated sequence of gates on a 2D array.[43] One-way quantum computation has been demonstrated by running the 2-qubit Grover's algorithm on a 2x2 cluster state of photons.[44][45] A linear optics quantum computer based on one-way computation has been proposed.[46] Cluster states have also been created in optical lattices,[47] but were not used for computation, as the atom qubits were too close together to measure individually. It has been shown that the (spin 3/2) AKLT state on a 2D honeycomb lattice can be used as a resource for MBQC.[48][49] More recently it has been shown that a spin-mixture AKLT state can be used as a resource.[50]
https://en.wikipedia.org/wiki/One-way_quantum_computer
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation.[1][2] A classical (or non-quantum) algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer,[3]: 126 the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement. Problems that are undecidable using classical computers remain undecidable using quantum computers.[4]: 127 What makes quantum algorithms interesting is that they might be able to solve some problems faster than classical algorithms because the quantum superposition and quantum entanglement that quantum algorithms exploit generally cannot be efficiently simulated on classical computers (see Quantum supremacy). The best-known algorithms are Shor's algorithm for factoring and Grover's algorithm for searching an unstructured database or an unordered list. Shor's algorithm runs much (almost exponentially) faster than the most efficient known classical algorithm for factoring, the general number field sieve.[5] Grover's algorithm runs quadratically faster than the best possible classical algorithm for the same task,[6] a linear search. Quantum algorithms are usually described, in the commonly used circuit model of quantum computation, by a quantum circuit that acts on some input qubits and terminates with a measurement. A quantum circuit consists of simple quantum gates, each of which acts on some finite number of qubits.
Quantum algorithms may also be stated in other models of quantum computation, such as the Hamiltonian oracle model.[7] Quantum algorithms can be categorized by the main techniques involved in the algorithm. Some commonly used techniques/ideas in quantum algorithms include phase kick-back, phase estimation, the quantum Fourier transform, quantum walks, amplitude amplification and topological quantum field theory. Quantum algorithms may also be grouped by the type of problem solved; see, e.g., the survey on quantum algorithms for algebraic problems.[8] The quantum Fourier transform is the quantum analogue of the discrete Fourier transform, and is used in several quantum algorithms. The Hadamard transform is also an example of a quantum Fourier transform over an n-dimensional vector space over the field F2. The quantum Fourier transform can be efficiently implemented on a quantum computer using only a polynomial number of quantum gates.[citation needed] The Deutsch–Jozsa algorithm solves a black-box problem that requires exponentially many queries to the black box for any deterministic classical computer, but can be done with a single query by a quantum computer. However, when comparing bounded-error classical and quantum algorithms, there is no speedup, since a classical probabilistic algorithm can solve the problem with a constant number of queries with small probability of error. The algorithm determines whether a function f is either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half). The Bernstein–Vazirani algorithm is the first quantum algorithm that solves a problem more efficiently than the best known classical algorithm. It was designed to create an oracle separation between BQP and BPP. Simon's algorithm solves a black-box problem exponentially faster than any classical algorithm, including bounded-error probabilistic algorithms.
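The Deutsch–Jozsa test can be illustrated with a small state-vector simulation (NumPy, phase-oracle form; the choice n = 3 and the sample functions are illustrative): after H^⊗n, the phase oracle (−1)^f(x), and H^⊗n again, the amplitude of |0…0⟩ has magnitude 1 if f is constant and 0 if f is balanced.

```python
import numpy as np

n = 3
N = 2 ** n
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)   # n-qubit Hadamard transform

def deutsch_jozsa(f_values):
    """f_values[x] in {0,1}; return |amplitude of |0..0>| after the circuit."""
    state = np.zeros(N)
    state[0] = 1.0                                # start in |0..0>
    state = Hn @ state                            # uniform superposition
    state = (-1.0) ** np.array(f_values) * state  # phase oracle (-1)^f(x)
    state = Hn @ state
    return abs(state[0])

constant = [1] * N
balanced = [0, 1] * (N // 2)
assert np.isclose(deutsch_jozsa(constant), 1.0)  # constant: certainly |0..0>
assert np.isclose(deutsch_jozsa(balanced), 0.0)  # balanced: never |0..0>
```

A single evaluation of the oracle thus suffices in the simulation, mirroring the single-query quantum behavior described above.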
This algorithm, which achieves an exponential speedup over all classical algorithms that we consider efficient, was the motivation for Shor's algorithm for factoring. The quantum phase estimation algorithm is used to determine the eigenphase of an eigenvector of a unitary gate, given a quantum state proportional to the eigenvector and access to the gate. The algorithm is frequently used as a subroutine in other algorithms. Shor's algorithm solves the discrete logarithm problem and the integer factorization problem in polynomial time,[9] whereas the best known classical algorithms take super-polynomial time. It is unknown whether these problems are in P or NP-complete. It is also one of the few quantum algorithms that solves a non-black-box problem in polynomial time, where the best known classical algorithms run in super-polynomial time. The abelian hidden subgroup problem is a generalization of many problems that can be solved by a quantum computer, such as Simon's problem, solving Pell's equation, testing the principal ideal of a ring R, and factoring. There are efficient quantum algorithms known for the abelian hidden subgroup problem.[10] The more general hidden subgroup problem, where the group is not necessarily abelian, is a generalization of the previously mentioned problems, as well as graph isomorphism and certain lattice problems. Efficient quantum algorithms are known for certain non-abelian groups. However, no efficient algorithms are known for the symmetric group, which would give an efficient algorithm for graph isomorphism,[11] or the dihedral group, which would solve certain lattice problems.[12] A Gauss sum is a type of exponential sum. The best known classical algorithm for estimating these sums takes exponential time. Since the discrete logarithm problem reduces to Gauss sum estimation, an efficient classical algorithm for estimating Gauss sums would imply an efficient classical algorithm for computing discrete logarithms, which is considered unlikely.
However, quantum computers can estimate Gauss sums to polynomial precision in polynomial time.[13] Consider an oracle consisting of n random Boolean functions mapping n-bit strings to a Boolean value, with the goal of finding n n-bit strings z1, ..., zn such that, for the Hadamard–Fourier transform, at least 3/4 of the strings satisfy and at least 1/4 satisfy This can be done in bounded-error quantum polynomial time (BQP).[14] Amplitude amplification is a technique that allows the amplification of a chosen subspace of a quantum state. Applications of amplitude amplification usually lead to quadratic speedups over the corresponding classical algorithms. It can be considered a generalization of Grover's algorithm.[citation needed] Grover's algorithm searches an unstructured database (or an unordered list) with N entries for a marked entry, using only O(√N) queries instead of the O(N) queries required classically.[15] Classically, O(N) queries are required even allowing bounded-error probabilistic algorithms. Theorists have considered a hypothetical generalization of a standard quantum computer that could access the histories of the hidden variables in Bohmian mechanics. (Such a computer is completely hypothetical and would not be a standard quantum computer, or even possible under the standard theory of quantum mechanics.) Such a hypothetical computer could implement a search of an N-item database in at most O(∛N) steps. This is slightly faster than the O(√N) steps taken by Grover's algorithm. However, neither search method would allow either model of quantum computer to solve NP-complete problems in polynomial time.[16] Quantum counting solves a generalization of the search problem. It solves the problem of counting the number of marked entries in an unordered list, instead of just detecting whether one exists.
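Grover's quadratic speedup can be seen in a small state-vector simulation (NumPy; the list size N = 16 and the marked index are illustrative choices): after about (π/4)√N iterations of oracle plus diffusion, nearly all amplitude sits on the marked entry.

```python
import numpy as np

N = 16
marked = 11  # illustrative marked index

oracle = np.eye(N)
oracle[marked, marked] = -1                # flip the sign of the marked amplitude

s = np.full(N, 1 / np.sqrt(N))             # uniform superposition
diffusion = 2 * np.outer(s, s) - np.eye(N) # inversion about the mean

state = s.copy()
for _ in range(round(np.pi / 4 * np.sqrt(N))):  # 3 iterations for N = 16
    state = diffusion @ (oracle @ state)

assert np.argmax(np.abs(state)) == marked
assert np.abs(state[marked]) ** 2 > 0.9    # success probability ~0.96
```

Three queries suffice here, versus an expected N/2 = 8 classical queries for a list of this size.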
Specifically, it counts the number of marked entries in an N-element list, with an error of at most ε, by making only Θ(ε⁻¹√(N/k)) queries, where k is the number of marked elements in the list.[17][18] More precisely, the algorithm outputs an estimate k′ for k, the number of marked entries, with accuracy |k − k′| ≤ εk. A quantum walk is the quantum analogue of a classical random walk. A classical random walk can be described by a probability distribution over some states, while a quantum walk can be described by a quantum superposition over states. Quantum walks are known to give exponential speedups for some black-box problems.[19][20] They also provide polynomial speedups for many problems. A framework for the creation of quantum walk algorithms exists and is a versatile tool.[21] The boson sampling problem in an experimental configuration assumes[22] an input of bosons (e.g., photons) of moderate number that are randomly scattered into a large number of output modes, constrained by a defined unitarity. When individual photons are used, the problem is isomorphic to a multi-photon quantum walk.[23] The problem is then to produce a fair sample of the probability distribution of the output that depends on the input arrangement of bosons and the unitarity.[24] Solving this problem with a classical computer algorithm requires computing the permanent of the unitary transform matrix, which may take a prohibitively long time or be outright impossible. In 2014, it was proposed[25] that existing technology and standard probabilistic methods of generating single-photon states could be used as an input into a suitable quantum computable linear optical network and that sampling of the output probability distribution would be demonstrably superior using quantum algorithms.
In 2015, an investigation predicted[26] that the sampling problem has similar complexity for inputs other than Fock-state photons, and identified a transition in computational complexity from classically simulable to just as hard as the Boson Sampling Problem, depending on the size of the coherent amplitude inputs. The element distinctness problem is the problem of determining whether all the elements of a list are distinct. Classically, Ω(N) queries are required for a list of size N; however, it can be solved in Θ(N^(2/3)) queries on a quantum computer. The optimal algorithm was put forth by Andris Ambainis,[27] and Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large.[28] Ambainis[29] and Kutin[30] independently (and via different proofs) extended that work to obtain the lower bound for all functions. The triangle-finding problem is the problem of determining whether a given graph contains a triangle (a clique of size 3). The best-known lower bound for quantum algorithms is Ω(N), but the best algorithm known requires O(N^1.297) queries,[31] an improvement over the previous best of O(N^1.3) queries.[21][32] A formula is a tree with a gate at each internal node and an input bit at each leaf node. The problem is to evaluate the formula, which is the output of the root node, given oracle access to the input. A well-studied formula is the balanced binary tree with only NAND gates.[33] This type of formula requires Θ(N^c) queries using randomness,[34] where c = log₂(1 + √33)/4 ≈ 0.754. With a quantum algorithm, however, it can be solved in Θ(N^(1/2)) queries.
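The randomized classical algorithm behind the Θ(N^0.754) bound evaluates a random subtree first and short-circuits: since NAND(0, x) = 1, a subtree that evaluates to 0 lets the algorithm skip its sibling entirely. A minimal sketch of this idea (list indices as the leaf oracle; not an exact query-counting implementation):

```python
import random

def eval_nand_tree(leaves):
    """Randomized short-circuit evaluation of a balanced binary NAND tree.

    Evaluate a uniformly random child first; if it returns 0, the NAND is 1
    without reading the other subtree. This pruning is the idea behind the
    Theta(N^0.754) classical query bound for NAND formulas."""
    if len(leaves) == 1:
        return leaves[0]
    mid = len(leaves) // 2
    halves = [leaves[:mid], leaves[mid:]]
    random.shuffle(halves)                  # pick which subtree to read first
    if eval_nand_tree(halves[0]) == 0:
        return 1                            # NAND(0, x) = 1: sibling never queried
    return 1 - eval_nand_tree(halves[1])    # NAND(1, x) = NOT x

print(eval_nand_tree([1, 0, 1, 1]))  # NAND(NAND(1,0), NAND(1,1)) = NAND(1,0) = 1
```

The quantum Θ(√N) algorithm achieves its speedup very differently, via a scattering/quantum-walk analysis rather than pruning.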
No better quantum algorithm for this case was known until one was found for the unconventional Hamiltonian oracle model.[7] The same result for the standard setting soon followed.[35] Fast quantum algorithms for more complicated formulas are also known.[36] Another problem is to determine whether a black-box group, given by k generators, is commutative. A black-box group is a group with an oracle function, which must be used to perform the group operations (multiplication, inversion, and comparison with identity). The interest in this context lies in the query complexity, i.e. the number of oracle calls needed to solve the problem. The deterministic and randomized query complexities are Θ(k²) and Θ(k), respectively.[37] A quantum algorithm requires Ω(k^(2/3)) queries, and the best-known quantum algorithm uses O(k^(2/3) log k) queries.[38] The complexity class BQP (bounded-error quantum polynomial time) is the set of decision problems solvable by a quantum computer in polynomial time with error probability of at most 1/3 for all instances.[39] It is the quantum analogue of the classical complexity class BPP. A problem is BQP-complete if it is in BQP and any problem in BQP can be reduced to it in polynomial time. Informally, the BQP-complete problems are those that are as hard as the hardest problems in BQP and are themselves efficiently solvable by a quantum computer (with bounded error). Witten showed that the Chern-Simons topological quantum field theory (TQFT) can be solved in terms of Jones polynomials.
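The deterministic Θ(k²) upper bound for group commutativity is just the pairwise check: test g·h = h·g for every pair of generators. A small sketch, using permutations as a stand-in black-box group (the group representation and helper names are illustrative; the black-box model only exposes multiplication and an equality test):

```python
import itertools

def mult(p, q):
    """Oracle stand-in: compose two permutations given as tuples (p after q)."""
    return tuple(p[i] for i in q)

def is_commutative(generators):
    """Deterministic pairwise test: Theta(k^2) oracle calls for k generators.

    A group is abelian iff all its generators pairwise commute."""
    return all(mult(a, b) == mult(b, a)
               for a, b in itertools.combinations(generators, 2))

s3_gens = [(1, 0, 2), (0, 2, 1)]   # two transpositions generating S3 (non-abelian)
z3_gens = [(1, 2, 0), (2, 0, 1)]   # rotations generating Z3 (abelian)
print(is_commutative(s3_gens), is_commutative(z3_gens))  # False True
```

The quantum O(k^(2/3) log k) algorithm improves on this by a quantum-walk search over pairs rather than examining all of them.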
A quantum computer can simulate a TQFT, and thereby approximate the Jones polynomial,[40] which, as far as we know, is hard to compute classically in the worst case.[citation needed] The idea that quantum computers might be more powerful than classical computers originated in Richard Feynman's observation that classical computers seem to require exponential time to simulate many-particle quantum systems, yet quantum many-body systems are able to "solve themselves."[41] Since then, the idea that quantum computers can simulate quantum physical processes exponentially faster than classical computers has been greatly fleshed out and elaborated. Efficient (i.e., polynomial-time) quantum algorithms have been developed for simulating both bosonic and fermionic systems,[42] as well as for the simulation of chemical reactions beyond the capabilities of current classical supercomputers, using only a few hundred qubits.[43] Quantum computers can also efficiently simulate topological quantum field theories.[44] In addition to its intrinsic interest, this result has led to efficient quantum algorithms for estimating quantum topological invariants such as the Jones[45] and HOMFLY polynomials,[46] and the Turaev-Viro invariant of three-dimensional manifolds.[47] In 2009, Aram Harrow, Avinatan Hassidim, and Seth Lloyd formulated a quantum algorithm for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector of a given linear system of equations.[48] Provided that the linear system is sparse and has a low condition number κ, and that the user is interested in the result of a scalar measurement on the solution vector (instead of the values of the solution vector itself), the algorithm has a runtime of O(log(N) κ²), where N is the number of variables in the linear system.
This offers an exponential speedup over the fastest classical algorithm, which runs in O(Nκ) (or O(N√κ) for positive semidefinite matrices). Hybrid quantum/classical algorithms combine quantum state preparation and measurement with classical optimization.[49] These algorithms generally aim to determine the ground-state eigenvector and eigenvalue of a Hermitian operator. The quantum approximate optimization algorithm takes inspiration from quantum annealing, performing a discretized approximation of quantum annealing using a quantum circuit. It can be used to solve problems in graph theory.[50] The algorithm makes use of classical optimization of quantum operations to maximize an "objective function." The variational quantum eigensolver (VQE) algorithm applies classical optimization to minimize the energy expectation value of an ansatz state in order to find the ground state of a Hermitian operator, such as a molecule's Hamiltonian.[51] It can also be extended to find the excited energies of molecular Hamiltonians.[52] The contracted quantum eigensolver (CQE) algorithm minimizes the residual of a contraction (or projection) of the Schrödinger equation onto the space of two (or more) electrons to find the ground- or excited-state energy and the two-electron reduced density matrix of a molecule.[53] It is based on classical methods for solving energies and two-electron reduced density matrices directly from the anti-Hermitian contracted Schrödinger equation.[54]
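The VQE loop can be sketched classically by simulating the expectation value ⟨ψ(θ)|H|ψ(θ)⟩ directly. The toy single-qubit Hamiltonian, the one-parameter Ry ansatz, and the crude grid search standing in for a real optimizer are all illustrative assumptions, not any specific published VQE instance:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H = Z + 0.5 * X                      # toy single-qubit "molecular" Hamiltonian

def ansatz(theta):
    """One-parameter ansatz |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta):
    """Energy expectation value <psi|H|psi> -- the quantity VQE minimizes."""
    psi = ansatz(theta)
    return np.real(psi.conj() @ H @ psi)

# Classical outer loop: a grid search stands in for the classical optimizer
thetas = np.linspace(0, 2 * np.pi, 1001)
best = min(thetas, key=energy)
exact = np.linalg.eigvalsh(H)[0]     # true ground-state energy, for comparison
print(energy(best), exact)
```

On hardware, `energy(theta)` would instead be estimated from repeated measurements of the prepared state; only the outer minimization runs classically.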
https://en.wikipedia.org/wiki/Quantum_algorithm
A quantum cellular automaton (QCA) is an abstract model of quantum computation, devised in analogy to the conventional models of cellular automata introduced by John von Neumann. The same name may also refer to quantum dot cellular automata, a proposed physical implementation of "classical" cellular automata exploiting quantum mechanical phenomena. QCA have attracted a lot of attention as a result of their extremely small feature size (at the molecular or even atomic scale) and their ultra-low power consumption, making them one candidate for replacing CMOS technology. In the context of models of computation or of physical systems, quantum cellular automaton refers to the merger of elements of both (1) the study of cellular automata in conventional computer science and (2) the study of quantum information processing. In particular, the following are features of models of quantum cellular automata: Another feature that is often considered important for a model of quantum cellular automata is that it should be universal for quantum computation (i.e., that it can efficiently simulate quantum Turing machines,[1][2] some arbitrary quantum circuit,[3] or simply all other quantum cellular automata[4][5]). Models which have been proposed recently impose further conditions, e.g.
that quantum cellular automata should be reversible and/or locally unitary, and have an easily determined global transition function derived from the rule for updating individual cells.[2] Recent results show that these properties can be derived axiomatically from the symmetries of the global evolution.[6][7][8] In 1982, Richard Feynman suggested an initial approach to quantizing a model of cellular automata.[9] In 1985, David Deutsch presented a formal development of the subject.[10] Later, Gerhard Grössing and Anton Zeilinger introduced the term "quantum cellular automata" to refer to a model they defined in 1988,[11] although their model had very little in common with the concepts developed by Deutsch and so has not been developed significantly as a model of computation. The first formal model of quantum cellular automata to be researched in depth was that introduced by John Watrous.[1] This model was developed further by Wim van Dam,[12] as well as by Christoph Dürr, Huong LêThanh, and Miklos Santha,[13][14] Jozef Gruska,[15] and Pablo Arrighi.[16] However, it was later realised that this definition was too loose, in the sense that some instances of it allow superluminal signalling.[6][7] A second wave of models includes those of Susanne Richter and Reinhard Werner,[17] of Benjamin Schumacher and Reinhard Werner,[6] of Carlos Pérez-Delgado and Donny Cheung,[2] and of Pablo Arrighi, Vincent Nesme and Reinhard Werner.[7][8] These are all closely related and do not suffer from any such locality issue. In the end, one can say that they all agree to picture quantum cellular automata as just one large quantum circuit, infinitely repeating across time and space.
Recent reviews of the topic are available.[18][19] Models of quantum cellular automata have been proposed by David Meyer,[20][21] Bruce Boghosian and Washington Taylor,[22] and Peter Love and Bruce Boghosian[23] as a means of simulating quantum lattice gases, motivated by the use of "classical" cellular automata to model classical physical phenomena such as gas dispersion.[24] Criteria determining when a quantum cellular automaton (QCA) can be described as a quantum lattice gas automaton (QLGA) were given by Asif Shakeel and Peter Love.[25] A proposal for implementing classical cellular automata with systems designed using quantum dots has been put forward under the name "quantum cellular automata" by Doug Tougaw and Craig Lent,[26] as a replacement for classical computation using CMOS technology. In order to better differentiate between this proposal and models of cellular automata which perform quantum computation, many authors working on this subject now refer to it as a quantum dot cellular automaton.
https://en.wikipedia.org/wiki/Quantum_cellular_automaton
In quantum information theory, a quantum channel is a communication channel that can transmit quantum information, as well as classical information. An example of quantum information is the general dynamics of a qubit. An example of classical information is a text document transmitted over the Internet. Terminologically, quantum channels are completely positive (CP) trace-preserving maps between spaces of operators. In other words, a quantum channel is just a quantum operation viewed not merely as the reduced dynamics of a system, but as a pipeline intended to carry quantum information. (Some authors use the term "quantum operation" to include trace-decreasing maps while reserving "quantum channel" for strictly trace-preserving maps.[1]) We will assume for the moment that all state spaces of the systems considered, classical or quantum, are finite-dimensional. The word memoryless in the section title carries the same meaning as in classical information theory: the output of a channel at a given time depends only upon the corresponding input and not on any previous ones. Consider quantum channels that transmit only quantum information. This is precisely a quantum operation, whose properties we now summarize. Let H_A and H_B be the state spaces (finite-dimensional Hilbert spaces) of the sending and receiving ends, respectively, of a channel. L(H_A) will denote the family of operators on H_A. In the Schrödinger picture, a purely quantum channel is a map Φ between density matrices acting on H_A and H_B with the following properties.[2] The adjectives completely positive and trace-preserving used to describe a map are sometimes abbreviated CPTP. In the literature, the fourth property is sometimes weakened so that Φ is only required to be non-trace-increasing. In this article, it will be assumed that all channels are CPTP.
Density matrices acting on H_A constitute only a proper subset of the operators on H_A, and the same can be said for system B. However, once a linear map Φ between the density matrices is specified, a standard linearity argument, together with the finite-dimensional assumption, allows us to extend Φ uniquely to the full space of operators. This leads to the adjoint map Φ*, which describes the action of Φ in the Heisenberg picture.[3] The spaces of operators L(H_A) and L(H_B) are Hilbert spaces with the Hilbert–Schmidt inner product. Therefore, viewing Φ : L(H_A) → L(H_B) as a map between Hilbert spaces, we obtain its adjoint Φ*. While Φ takes states on A to those on B, Φ* maps observables on system B to observables on A. This relationship is the same as that between the Schrödinger and Heisenberg descriptions of dynamics: the measurement statistics remain unchanged whether the observables are considered fixed while the states undergo the operation, or vice versa. It can be directly checked that if Φ is assumed to be trace-preserving, Φ* is unital, that is, Φ*(I) = I. Physically speaking, this means that, in the Heisenberg picture, the trivial observable remains trivial after applying the channel. So far we have only defined a quantum channel that transmits only quantum information. As stated in the introduction, the input and output of a channel can include classical information as well. To describe this, the formulation given so far needs to be generalized somewhat. A purely quantum channel, in the Heisenberg picture, is a linear map Ψ between spaces of operators that is unital and completely positive (CP). The operator spaces can be viewed as finite-dimensional C*-algebras.
Therefore, we can say a channel is a unital CP map between C*-algebras. Classical information can then be included in this formulation. The observables of a classical system can be assumed to form a commutative C*-algebra, i.e., the space of continuous functions C(X) on some set X. We assume X is finite, so C(X) can be identified with the n-dimensional Euclidean space Rⁿ with entry-wise multiplication. Therefore, in the Heisenberg picture, if the classical information is part of, say, the input, we would define B to include the relevant classical observables. An example of this would be a channel whose output algebra is L(H_B) ⊗ C(X); notice that L(H_B) ⊗ C(X) is still a C*-algebra. An element a of a C*-algebra A is called positive if a = x*x for some x. Positivity of a map is defined accordingly. This characterization is not universally accepted; the quantum instrument is sometimes given as the generalized mathematical framework for conveying both quantum and classical information. In axiomatizations of quantum mechanics, the classical information is carried in a Frobenius algebra or Frobenius category. For a purely quantum system, the time evolution at a certain time t is given by ρ ↦ UρU*, where U = e^(−iHt/ℏ), H is the Hamiltonian, and t is the time. This gives a CPTP map in the Schrödinger picture and is therefore a channel.[4] The dual map in the Heisenberg picture is A ↦ U*AU. Consider a composite quantum system with state space H_A ⊗ H_B. For a state ρ, the reduced state of ρ on system A, ρ_A, is obtained by taking the partial trace of ρ with respect to the B system: ρ_A = Tr_B(ρ). The partial trace operation is a CPTP map, and therefore a quantum channel in the Schrödinger picture.[5] In the Heisenberg picture, the dual map of this channel is A ↦ A ⊗ I_B, where A is an observable of system A.
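The partial-trace channel is easy to make concrete numerically. A minimal NumPy sketch (the helper name and the Bell-state example are illustrative choices): tracing system B out of a maximally entangled two-qubit state leaves the maximally mixed state on A, and the channel is trace-preserving as required of a CPTP map.

```python
import numpy as np

def partial_trace_B(rho, dim_a, dim_b):
    """Trace out system B from a density matrix on H_A (tensor) H_B."""
    rho = rho.reshape(dim_a, dim_b, dim_a, dim_b)   # indices (a, b, a', b')
    return np.trace(rho, axis1=1, axis2=3)          # sum over b = b'

# Bell state (|00> + |11>)/sqrt(2) on two qubits
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(bell, bell.conj())

rho_a = partial_trace_B(rho, 2, 2)
print(rho_a)        # maximally mixed state I/2; trace is preserved
```

That the reduced state of a pure entangled state is mixed is exactly why the channel picture (rather than unitary evolution alone) is needed for subsystems.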
An observable associates a numerical value f_i ∈ C to a quantum mechanical effect F_i. The F_i are assumed to be positive operators acting on the appropriate state space, with Σ_i F_i = I. (Such a collection is called a POVM.[6][7]) In the Heisenberg picture, the corresponding observable map Ψ maps a classical observable f to the quantum mechanical one Ψ(f) = Σ_i f(i) F_i. In other words, one integrates f against the POVM to obtain the quantum mechanical observable. It can be easily checked that Ψ is CP and unital. The corresponding Schrödinger map Ψ* takes density matrices to classical states,[8] where the inner product is the Hilbert–Schmidt inner product. Furthermore, viewing states as normalized functionals and invoking the Riesz representation theorem, we can put Ψ*(ρ)(i) = Tr(ρ F_i). The observable map, in the Schrödinger picture, has a purely classical output algebra and therefore only describes measurement statistics. To take the state change into account as well, we define what is called a quantum instrument. Let {F_1, ..., F_n} be the effects (POVM) associated to an observable. In the Schrödinger picture, an instrument is a map Φ with pure quantum input ρ ∈ L(H) and with output space C(X) ⊗ L(H). The dual map in the Heisenberg picture is Ψ(f ⊗ A) = Σ_i f(i) Ψ_i(A), where Ψ_i is defined in the following way: factor F_i = M_i² (this can always be done, since the elements of a POVM are positive); then Ψ_i(A) = M_i A M_i. We see that Ψ is CP and unital. Notice that Ψ(f ⊗ I) gives precisely the observable map. The map ρ ↦ Σ_i M_i ρ M_i describes the overall state change.
Suppose two parties A and B wish to communicate in the following manner: A performs the measurement of an observable and communicates the measurement outcome to B classically. According to the message he receives, B prepares his (quantum) system in a specific state. In the Schrödinger picture, the first part of the channel, Φ₁, simply consists of A making a measurement, i.e., it is the observable map. If, in the event of the i-th measurement outcome, B prepares his system in state R_i, the second part of the channel, Φ₂, takes the above classical state to the density matrix Σ_i p_i R_i, where p_i is the probability of the i-th outcome. The total operation is the composition Φ = Φ₂ ∘ Φ₁. Channels of this form are called measure-and-prepare or entanglement-breaking.[9][10][11][12] In the Heisenberg picture, the dual map is Φ* = Φ₁* ∘ Φ₂*. A measure-and-prepare channel cannot be the identity map. This is precisely the statement of the no-teleportation theorem, which says that classical teleportation (not to be confused with entanglement-assisted teleportation) is impossible. In other words, a quantum state cannot be measured reliably. In the channel-state duality, a channel is measure-and-prepare if and only if the corresponding state is separable. In fact, all the states that result from the partial action of a measure-and-prepare channel are separable, which is why measure-and-prepare channels are also known as entanglement-breaking channels. Consider the case of a purely quantum channel Ψ in the Heisenberg picture. With the assumption that everything is finite-dimensional, Ψ is a unital CP map between spaces of matrices. By Choi's theorem on completely positive maps, Ψ must take the form Ψ(A) = Σ_{i=1..N} K_i* A K_i, where N ≤ nm. The matrices K_i are called Kraus operators of Ψ (after the German physicist Karl Kraus, who introduced them).[13][14][15] The minimum number of Kraus operators is called the Kraus rank of Ψ.
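A standard concrete example of a Kraus decomposition is the qubit depolarizing channel, which has Kraus rank 4 for nonzero noise. The following NumPy sketch (the noise value p = 0.3 is an arbitrary illustration) applies the channel in the Schrödinger picture as ρ ↦ Σ K_i ρ K_i† and checks the completeness relation Σ K_i† K_i = I, which is the Kraus-form statement of trace preservation.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_kraus(p):
    """Kraus operators of the qubit depolarizing channel (Kraus rank 4 for p > 0)."""
    return [np.sqrt(1 - 3 * p / 4) * I2,
            np.sqrt(p / 4) * X, np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]

def apply_channel(kraus, rho):
    """Schrödinger-picture action: rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

ks = depolarizing_kraus(0.3)
completeness = sum(K.conj().T @ K for K in ks)    # should equal the identity
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # input state |0><0|
out = apply_channel(ks, rho)
print(np.allclose(completeness, I2), np.trace(out))
```

A unitary evolution corresponds to the special case of a single Kraus operator K₁ = U, i.e., Kraus rank 1.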
A channel with Kraus rank 1 is called pure. The time evolution is one example of a pure channel. This terminology again comes from the channel-state duality: a channel is pure if and only if its dual state is a pure state. In quantum teleportation, a sender wishes to transmit an arbitrary quantum state of a particle to a possibly distant receiver. Consequently, the teleportation process is a quantum channel. The apparatus for the process itself requires a quantum channel for the transmission of one particle of an entangled state to the receiver. Teleportation occurs by a joint measurement of the sent particle and the remaining entangled particle. This measurement results in classical information that must be sent to the receiver to complete the teleportation. Importantly, the classical information can be sent after the quantum channel has ceased to exist. Experimentally, a simple implementation of a quantum channel is fiber-optic (or free-space, for that matter) transmission of single photons. Single photons can be transmitted up to 100 km in standard fiber optics before losses dominate.[citation needed] The photon's time of arrival (time-bin entanglement) or polarization is used as a basis to encode quantum information for purposes such as quantum cryptography. The channel is capable of transmitting not only basis states (e.g., |0⟩, |1⟩) but also superpositions of them (e.g., |0⟩ + |1⟩). The coherence of the state is maintained during transmission through the channel. Contrast this with the transmission of electrical pulses through wires (a classical channel), where only classical information (e.g., 0s and 1s) can be sent. Before giving the definition of channel capacity, the preliminary notion of the norm of complete boundedness, or cb-norm, of a channel needs to be discussed.
When considering the capacity of a channel Φ, we need to compare it with an "ideal channel" Λ. For instance, when the input and output algebras are identical, we can choose Λ to be the identity map. Such a comparison requires a metric between channels. Since a channel can be viewed as a linear operator, it is tempting to use the natural operator norm; in other words, the closeness of Φ to the ideal channel Λ can be defined by ‖Φ − Λ‖. However, the operator norm may increase when we tensor Φ with the identity map on some ancilla. To make matters worse, the quantity ‖Φ ⊗ Iₙ − Λ ⊗ Iₙ‖ may increase without bound as n → ∞. The solution is to introduce, for any linear map Φ between C*-algebras, the cb-norm ‖Φ‖_cb = supₙ ‖Φ ⊗ Iₙ‖. The mathematical model of a channel used here is the same as the classical one. Let Ψ : B₁ → A₁ be a channel in the Heisenberg picture and Ψ_id : B₂ → A₂ be a chosen ideal channel. To make the comparison possible, one needs to encode and decode Ψ via appropriate devices, i.e., we consider the composition Ψ̂ of an encoder E, the channel Ψ, and a decoder D. In this context, E and D are unital CP maps with appropriate domains. The quantity of interest is the best-case scenario Δ(Ψ, Ψ_id) = inf ‖Ψ̂ − Ψ_id‖_cb, with the infimum being taken over all possible encoders and decoders. To transmit words of length n, the ideal channel is to be applied n times, so we consider the tensor power Ψ_id^⊗n. The ⊗ operation describes n inputs undergoing the operation Ψ_id independently, and is the quantum mechanical counterpart of concatenation. Similarly, m invocations of the channel correspond to Ψ̂^⊗m.
The quantity Δ(Ψ̂^⊗m, Ψ_id^⊗n) is therefore a measure of the ability of the channel to transmit words of length n faithfully by being invoked m times. This leads to the definition of an achievable rate r. A sequence {n_α} can be viewed as representing a message consisting of a possibly infinite number of words. The limit-supremum condition in the definition says that, in the limit, faithful transmission can be achieved by invoking the channel no more than r times the length of a word. One can also say that r is the number of letters per invocation of the channel that can be sent without error. The channel capacity of Ψ with respect to Ψ_id, denoted by C(Ψ, Ψ_id), is the supremum of all achievable rates. From the definition, it is vacuously true that 0 is an achievable rate for any channel. As stated before, for a system with observable algebra B, the ideal channel Ψ_id is by definition the identity map I_B. Thus for a purely n-dimensional quantum system, the ideal channel is the identity map on the space of n × n matrices, C^(n×n). As a slight abuse of notation, this ideal quantum channel will also be denoted by C^(n×n). Similarly, a classical system with output algebra C^m will have an ideal channel denoted by the same symbol. We can now state some fundamental channel capacities. The channel capacity of the classical ideal channel C^m with respect to a quantum ideal channel C^(n×n) is 0. This is equivalent to the no-teleportation theorem: it is impossible to transmit quantum information via a classical channel.
Moreover, further equalities hold, which say, for instance, that an ideal quantum channel is no more efficient at transmitting classical information than an ideal classical channel: when n = m, the best one can achieve is one bit per qubit. It is relevant to note here that both of the above bounds on capacities can be broken with the aid of entanglement. The entanglement-assisted teleportation scheme allows one to transmit quantum information using a classical channel, and superdense coding achieves two bits per qubit. These results indicate the significant role played by entanglement in quantum communication. Using the same notation as in the previous subsection, the classical capacity of a channel Ψ is C(Ψ, C²); that is, it is the capacity of Ψ with respect to the ideal channel on the classical one-bit system C². Similarly, the quantum capacity of Ψ is C(Ψ, C^(2×2)), where the reference system is now the one-qubit system C^(2×2). Another measure of how well a quantum channel preserves information is called channel fidelity, and it arises from the fidelity of quantum states.
Given two pure states |ψ⟩ and |ϕ⟩, their fidelity is the probability that one of them passes a test designed to identify the other: F(|ψ⟩, |ϕ⟩) = |⟨ψ|ϕ⟩|². This can be generalized to the case where the two states being compared are given by density matrices:[16][17] F(ρ, σ) = (tr √(√ρ σ √ρ))². The channel fidelity for a given channel is found by sending one half of a maximally entangled pair of systems through that channel, and calculating the fidelity between the resulting state and the original input.[18] A bistochastic quantum channel is a quantum channel Φ(ρ) that is unital,[19] i.e., Φ(I) = I. These channels include unitary evolutions, convex combinations of unitaries, and (in dimensions larger than 2) other possibilities as well.[20]
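The density-matrix fidelity formula above can be evaluated directly with a Hermitian matrix square root. A minimal NumPy sketch (the helper names are illustrative; the matrix square root uses an eigendecomposition, valid because density matrices are positive semidefinite):

```python
import numpy as np

def psd_sqrt(m):
    """Matrix square root of a positive semidefinite Hermitian matrix."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = (tr sqrt(sqrt(rho) sigma sqrt(rho)))**2."""
    s = psd_sqrt(rho)
    return np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
plus = np.full((2, 2), 0.5, dtype=complex)        # |+><+|
print(fidelity(rho, rho), fidelity(rho, plus))    # ~1.0 and ~0.5
```

For pure states the formula reduces to |⟨ψ|ϕ⟩|², which is why F(|0⟩⟨0|, |+⟩⟨+|) comes out as 1/2.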
https://en.wikipedia.org/wiki/Quantum_channel
In the mathematical study of logic and the physical analysis of quantum foundations, quantum logic is a set of rules for manipulation of propositions inspired by the structure of quantum theory. The formal system takes as its starting point an observation of Garrett Birkhoff and John von Neumann: the structure of experimental tests in classical mechanics forms a Boolean algebra, but the structure of experimental tests in quantum mechanics forms a much more complicated structure. A number of other logics have also been proposed to analyze quantum-mechanical phenomena, unfortunately also under the name of "quantum logic(s)". They are not the subject of this article. For discussion of the similarities and differences between quantum logic and some of these competitors, see § Relationship to other logics. Quantum logic has been proposed as the correct logic for propositional inference generally, most notably by the philosopher Hilary Putnam, at least at one point in his career. This thesis was an important ingredient in Putnam's 1968 paper "Is Logic Empirical?", in which he analysed the epistemological status of the rules of propositional logic. Modern philosophers reject quantum logic as a basis for reasoning, because it lacks a material conditional; a common alternative is the system of linear logic, of which quantum logic is a fragment. Mathematically, quantum logic is formulated by weakening the distributive law for a Boolean algebra, resulting in an orthocomplemented lattice. Quantum-mechanical observables and states can be defined in terms of functions on or to the lattice, giving an alternate formalism for quantum computations. The most notable difference between quantum logic and classical logic is the failure of the propositional distributive law p ∧ (q ∨ r) = (p ∧ q) ∨ (p ∧ r),[1] where the symbols p, q and r are propositional variables.
To illustrate why the distributive law fails, consider a particle moving on a line and (using some system of units where the reduced Planck constant is 1) let[Note 1] p = "the particle has momentum in the interval [0, +1/6]", q = "the particle is in the interval [−1, 1]", and r = "the particle is in the interval [1, 3]". We might observe that p ∧ (q ∨ r) is true; in other words, the state of the particle is a weighted superposition of momenta between 0 and +1/6 and positions between −1 and +3. On the other hand, the propositions "p and q" and "p and r" each assert tighter restrictions on simultaneous values of position and momentum than are allowed by the uncertainty principle (they each have uncertainty 1/3, which is less than the allowed minimum of 1/2). So there are no states that can support either proposition, and (p ∧ q) ∨ (p ∧ r) is false. In his classic 1932 treatise Mathematical Foundations of Quantum Mechanics, John von Neumann noted that projections on a Hilbert space can be viewed as propositions about physical observables; that is, as potential yes-or-no questions an observer might ask about the state of a physical system, questions that could be settled by some measurement.[2] Principles for manipulating these quantum propositions were then called quantum logic by von Neumann and Birkhoff in a 1936 paper.[3] George Mackey, in his 1963 book (also called Mathematical Foundations of Quantum Mechanics), attempted to axiomatize quantum logic as the structure of an orthocomplemented lattice, and recognized that a physical observable could be defined in terms of quantum propositions.
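The same failure of distributivity appears already in a two-dimensional toy model, where propositions are subspaces, join is the span of the union, and meet is the intersection (computed below via orthocomplements). This sketch uses an arbitrary choice of three one-dimensional subspaces of C², not the position/momentum propositions above:

```python
import numpy as np

TOL = 1e-10

def col_space(m):
    """Orthonormal basis for the column space of m (the subspace a proposition spans)."""
    if m.shape[1] == 0:
        return m
    u, s, _ = np.linalg.svd(m, full_matrices=False)
    return u[:, s > TOL]

def join(a, b):
    """Lattice join: the span of the union of two subspaces."""
    return col_space(np.hstack([a, b]))

def complement(a, dim):
    """Orthocomplement of a subspace of C^dim."""
    if a.shape[1] == 0:
        return np.eye(dim)
    u, s, _ = np.linalg.svd(a, full_matrices=True)
    return u[:, int((s > TOL).sum()):]

def meet(a, b, dim):
    """Lattice meet (intersection), as the orthocomplement of the join of complements."""
    return complement(join(complement(a, dim), complement(b, dim)), dim)

p = np.array([[1.0], [0.0]])                    # span{|0>}
q = np.array([[0.0], [1.0]])                    # span{|1>}
r = np.array([[1.0], [1.0]]) / np.sqrt(2)       # span{|0> + |1>}

lhs = meet(p, join(q, r), 2)                    # p ^ (q v r): 1-dimensional (= p)
rhs = join(meet(p, q, 2), meet(p, r, 2))        # (p ^ q) v (p ^ r): 0-dimensional
print(lhs.shape[1], rhs.shape[1])               # 1 0
```

Here q ∨ r spans all of C², so the left side equals p, while p intersects q and r only trivially, so the right side is the zero subspace: distributivity fails.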
Although Mackey's presentation still assumed that the orthocomplemented lattice is the lattice of closed linear subspaces of a separable Hilbert space,[4] Constantin Piron, Günther Ludwig and others later developed axiomatizations that do not assume an underlying Hilbert space.[5] Inspired by Hans Reichenbach's then-recent defence of general relativity, the philosopher Hilary Putnam popularized Mackey's work in two papers in 1968 and 1975,[6] in which he attributed the idea that anomalies associated to quantum measurements originate with a failure of logic itself to his coauthor, the physicist David Finkelstein.[7] Putnam hoped to develop a possible alternative to hidden variables or wavefunction collapse in the problem of quantum measurement, but Gleason's theorem presents severe difficulties for this goal.[6][8] Later, Putnam retracted his views, albeit with much less fanfare,[6] but the damage had been done. While Birkhoff and von Neumann's original work only attempted to organize the calculations associated with the Copenhagen interpretation of quantum mechanics, a school of researchers had now sprung up, either hoping that quantum logic would provide a viable hidden-variable theory, or obviate the need for one.[9] Their work proved fruitless, and now lies in poor repute.[10] Most philosophers would agree that quantum logic is not a competitor to classical logic.
It is far from evident (albeit true[11]) that quantum logic is a logic, in the sense of describing a process of reasoning, as opposed to being a particularly convenient language for summarizing the measurements performed by quantum apparatuses.[12][13] In particular, some modern philosophers of science argue that quantum logic attempts to substitute metaphysical difficulties for unsolved problems in physics, rather than properly solving the physics problems.[14] Tim Maudlin writes that quantum "logic 'solves' the [measurement] problem by making the problem impossible to state."[15] Quantum logic remains in use among logicians,[16] and interest has expanded through the recent development of quantum computing, which has engendered a proliferation of new logics for the formal analysis of quantum protocols and algorithms (see also § Relationship to other logics).[17] The logic may also find application in (computational) linguistics. Quantum logic can be axiomatized as the theory of propositions modulo the following identities:[18] ("¬" is the traditional notation for "not", "∨" the notation for "or", and "∧" the notation for "and".) Some authors restrict to orthomodular lattices, which additionally satisfy the orthomodular law:[19] ("⊤" is the traditional notation for truth and "⊥" the traditional notation for falsity.) Alternative formulations include propositions derivable via a natural deduction,[16] sequent calculus[20][21] or tableaux system.[22] Despite the relatively developed proof theory, quantum logic is not known to be decidable.[18] The remainder of this article assumes the reader is familiar with the spectral theory of self-adjoint operators on a Hilbert space, although the main ideas can be understood in the finite-dimensional case. The Hamiltonian formulations of classical mechanics have three ingredients: states, observables and dynamics. In the simplest case of a single particle moving in R3, the state space is the position–momentum space R6.
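The lattice identities above can be checked concretely in finite dimensions, where closed subspaces are represented by projection matrices. The following NumPy sketch (the helper names `join`, `meet` and `neg` are illustrative, not standard API) verifies the orthomodular law for a nested pair of subspaces and exhibits the failure of distributivity for three lines in the plane:

```python
import numpy as np

def proj_line(theta):
    """Rank-1 projection onto the line at angle theta in R^2."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def join(P, Q):
    """Projection onto the span of ran(P) + ran(Q), via an SVD of the stacked columns."""
    U, s, _ = np.linalg.svd(np.hstack([P, Q]))
    r = int((s > 1e-10).sum())
    B = U[:, :r]
    return B @ B.T

def neg(P):
    """Orthocomplement: projection onto ran(P)-perp."""
    return np.eye(P.shape[0]) - P

def meet(P, Q):
    """De Morgan: P ∧ Q = ¬(¬P ∨ ¬Q)."""
    return neg(join(neg(P), neg(Q)))

ZERO = np.zeros((2, 2))
a = proj_line(0.0)            # the x-axis
b = proj_line(np.pi / 2)      # the y-axis
c = proj_line(np.pi / 4)      # the diagonal

# Distributivity fails: a ∧ (b ∨ c) = a, but (a ∧ b) ∨ (a ∧ c) = 0.
assert np.allclose(meet(a, join(b, c)), a)
assert np.allclose(join(meet(a, b), meet(a, c)), ZERO)

# The orthomodular law holds: if a ≤ b then b = a ∨ (¬a ∧ b).
a3 = np.diag([1.0, 0.0, 0.0])   # span{e1}
b3 = np.diag([1.0, 1.0, 0.0])   # span{e1, e2}, containing span{e1}
assert np.allclose(join(a3, meet(neg(a3), b3)), b3)
```

The `join` here computes the (automatically closed, in finite dimensions) span of two ranges; in an infinite-dimensional Hilbert space the closure is essential.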
An observable is some real-valued function f on the state space. Examples of observables are the position, momentum or energy of a particle. For classical systems, the value f(x), that is, the value of f for some particular system state x, is obtained by a process of measurement of f. The propositions concerning a classical system are generated from basic statements of the form through the conventional arithmetic operations and pointwise limits. It follows easily from this characterization of propositions in classical systems that the corresponding logic is identical to the Boolean algebra of Borel subsets of the state space. They thus obey the laws of classical propositional logic (such as de Morgan's laws), with the set operations of union and intersection corresponding to the Boolean conjunctives and subset inclusion corresponding to material implication. In fact, a stronger claim is true: they must obey the infinitary logic Lω1,ω. We summarize these remarks as follows: the proposition system of a classical system is a lattice with a distinguished orthocomplementation operation. The lattice operations of meet and join are respectively set intersection and set union; the orthocomplementation operation is set complement. Moreover, this lattice is sequentially complete, in the sense that any sequence {Ei}i∈N of elements of the lattice has a least upper bound, specifically the set-theoretic union: {\displaystyle \operatorname {LUB} (\{E_{i}\})=\bigcup _{i=1}^{\infty }E_{i}.} In the Hilbert space formulation of quantum mechanics as presented by von Neumann, a physical observable is represented by some (possibly unbounded) densely defined self-adjoint operator A on a Hilbert space H. A has a spectral decomposition, which is a projection-valued measure E defined on the Borel subsets of R.
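The Boolean behaviour of classical propositions described above is easy to exhibit on a toy finite state space. A small sketch in plain Python (the six-state space and the observable `f` are illustrative choices of my own) checks distributivity, complementation and the union-as-least-upper-bound property:

```python
# Toy classical state space: six discrete states of a system.
states = frozenset(range(6))

f = {x: x * 1.5 for x in states}               # an illustrative observable f
prop = lambda pred: frozenset(x for x in states if pred(x))

p = prop(lambda x: f[x] >= 3)                  # the proposition "f(x) >= 3"
q = prop(lambda x: x % 2 == 0)
r = prop(lambda x: x < 4)

# Classical propositions form a Boolean algebra: distributivity holds.
assert p & (q | r) == (p & q) | (p & r)

# Orthocomplementation is set complement, and the complement is unique.
not_p = states - p
assert (p | not_p == states) and (p & not_p == frozenset())

# Sequential completeness: the least upper bound of a family is its union.
family = [prop(lambda x, k=k: x == k) for k in range(6)]
assert frozenset().union(*family) == states
```

Contrast this with the projection-lattice example in the quantum case, where the first assertion fails.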
In particular, for any bounded Borel function f on R, the following extension of f to operators can be made: {\displaystyle f(A)=\int _{\mathbb {R} }f(\lambda )\,d\operatorname {E} (\lambda ).} In case f is the indicator function of an interval [a, b], the operator f(A) is a self-adjoint projection onto the subspace of generalized eigenvectors of A with eigenvalue in [a, b]. That subspace can be interpreted as the quantum analogue of the classical proposition This suggests the following quantum-mechanical replacement for the orthocomplemented lattice of propositions in classical mechanics, essentially Mackey's Axiom VII: The space Q of quantum propositions is also sequentially complete: any pairwise-disjoint sequence {Vi}i of elements of Q has a least upper bound. Here disjointness of W1 and W2 means that W2 is a subspace of W1⊥. The least upper bound of {Vi}i is the closed internal direct sum. The standard semantics of quantum logic is that quantum logic is the logic of projection operators in a separable Hilbert or pre-Hilbert space, where an observable p is associated with the set of quantum states for which p (when measured) has eigenvalue 1. From there, This semantics has the nice property that the pre-Hilbert space is complete (i.e., Hilbert) if and only if the propositions satisfy the orthomodular law, a result known as the Solèr theorem.[23] Although much of the development of quantum logic has been motivated by the standard semantics, the logic is not characterized by the latter; there are additional properties satisfied by that lattice that need not hold in quantum logic.[16] The structure of Q immediately points to a difference from the partial order structure of a classical proposition system. In the classical case, given a proposition p, the equations have exactly one solution, namely the set-theoretic complement of p.
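In finite dimensions the projection-valued measure is just a sum over eigenvectors, so the functional calculus above can be checked directly. A minimal NumPy sketch, using an arbitrary random symmetric matrix as the observable (the helper name `spectral_projection` is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = (M + M.T) / 2                      # a self-adjoint "observable"

evals, evecs = np.linalg.eigh(A)       # finite-dimensional spectral theorem

def spectral_projection(lo, hi):
    """E([lo, hi]): projection onto the eigenspaces of A with eigenvalue in [lo, hi]."""
    mask = (evals >= lo) & (evals <= hi)
    V = evecs[:, mask]
    return V @ V.T

lo, hi = evals[0], evals[1]            # an interval containing the two lowest eigenvalues
E = spectral_projection(lo, hi)
assert np.allclose(E @ E, E) and np.allclose(E, E.T)   # E is a projection...
assert np.allclose(E @ A, A @ E)                       # ...commuting with A

# The functional calculus f(A) = sum_i f(lambda_i) |v_i><v_i|, with f the
# indicator function of [lo, hi], reproduces exactly this projection:
f = lambda lam: float(lo <= lam <= hi)
fA = sum(f(l) * np.outer(v, v) for l, v in zip(evals, evecs.T))
assert np.allclose(fA, E)
```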
In the case of the lattice of projections there are infinitely many solutions to the above equations (any closed algebraic complement of p solves them; it need not be the orthocomplement). More generally, propositional valuation has unusual properties in quantum logic. An orthocomplemented lattice admitting a total lattice homomorphism to {⊥, ⊤} must be Boolean. A standard workaround is to study maximal partial homomorphisms q with a filtering property: Expressions in quantum logic describe observables using a syntax that resembles classical logic. However, unlike classical logic, the distributive law a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) fails when dealing with noncommuting observables, such as position and momentum. This occurs because measurement affects the system, and measurement of whether a disjunction holds does not measure which of the disjuncts is true. For example, consider a simple one-dimensional particle with position denoted by x and momentum by p, and define observables: Now, position and momentum are Fourier transforms of each other, and the Fourier transform of a square-integrable nonzero function with compact support is entire and hence does not have non-isolated zeroes. Therefore, there is no wave function that is both normalizable in momentum space and vanishes on precisely x ≥ 0. Thus, a ∧ b and similarly a ∧ c are false, so (a ∧ b) ∨ (a ∧ c) is false. However, a ∧ (b ∨ c) equals a, which is certainly not false (there are states for which it is a viable measurement outcome). Moreover, if the relevant Hilbert space for the particle's dynamics only admits momenta no greater than 1, then a is true. To understand more, let p1 and p2 be the momentum functions (Fourier transforms) of the projections of the particle wave function to x ≤ 0 and x ≥ 0 respectively. Let |pi|↾≥1 be the restriction of pi to momenta that are (in absolute value) ≥ 1.
(a ∧ b) ∨ (a ∧ c) corresponds to states with |p1|↾≥1 = |p2|↾≥1 = 0 (this holds even if we defined p differently so as to make such states possible; also, a ∧ b corresponds to |p1|↾≥1 = 0 and p2 = 0). Meanwhile, a corresponds to states with |p|↾≥1 = 0. As an operator, p = p1 + p2, and nonzero |p1|↾≥1 and |p2|↾≥1 might interfere to produce zero |p|↾≥1. Such interference is key to the richness of quantum logic and quantum mechanics. Given an orthocomplemented lattice Q, a Mackey observable φ is a countably additive homomorphism from the orthocomplemented lattice of Borel subsets of R to Q. In symbols, this means that for any sequence {Si}i of pairwise-disjoint Borel subsets of R, {φ(Si)}i are pairwise-orthogonal propositions (elements of Q) and Equivalently, a Mackey observable is a projection-valued measure on R. Theorem (Spectral theorem). If Q is the lattice of closed subspaces of a Hilbert space H, then there is a bijective correspondence between Mackey observables and densely defined self-adjoint operators on H. A quantum probability measure is a function P defined on Q with values in [0, 1] such that P(⊥) = 0, P(⊤) = 1 and, if {Ei}i is a sequence of pairwise-orthogonal elements of Q, then Every quantum probability measure on the closed subspaces of a Hilbert space is induced by a density matrix, a nonnegative operator of trace 1. Formally, Quantum logic embeds into linear logic[25] and the modal logic B.[16] Indeed, modern logics for the analysis of quantum computation often begin with quantum logic and attempt to graft desirable features of an extension of classical logic onto it; the results then necessarily embed quantum logic.[26][27] The orthocomplemented lattice of any set of quantum propositions can be embedded into a Boolean algebra, which is then amenable to classical logic.[28] Although many treatments of quantum logic assume that the underlying lattice must be orthomodular, such logics cannot handle multiple interacting quantum systems.
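The trace formula behind the density-matrix result above, P(E) = tr(ρE), can likewise be verified in finite dimensions. A sketch in NumPy, with a randomly generated mixed state standing in for a physical one:

```python
import numpy as np

rng = np.random.default_rng(1)
# A density matrix: nonnegative, self-adjoint, trace 1 (a random mixed state here).
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = B @ B.conj().T
rho = rho / np.trace(rho).real

def P(proj):
    """Quantum probability measure induced by rho: P(E) = tr(rho E)."""
    return float(np.trace(rho @ proj).real)

basis = np.eye(4)
E = [np.outer(basis[i], basis[i]) for i in range(4)]  # pairwise-orthogonal projections

assert abs(P(np.zeros((4, 4)))) < 1e-12               # P(bottom) = 0
assert abs(P(np.eye(4)) - 1.0) < 1e-12                # P(top) = 1
assert all(0.0 <= P(e) <= 1.0 for e in E)
# Additivity on pairwise-orthogonal propositions (their sum is again a projection):
assert abs(P(E[0] + E[1]) - (P(E[0]) + P(E[1]))) < 1e-12
```

Gleason's theorem, mentioned earlier, is the converse direction: on a Hilbert space of dimension at least 3, every such measure arises from some density matrix.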
In an example due to Foulis and Randall, there are orthomodular propositions with finite-dimensional Hilbert models whose pairing admits no orthomodular model.[8] Likewise, quantum logic with the orthomodular law falsifies the deduction theorem.[29] Quantum logic admits no reasonable material conditional; any connective that is monotone in a certain technical sense reduces the class of propositions to a Boolean algebra.[30] Consequently, quantum logic struggles to represent the passage of time.[25] One possible workaround is the theory of quantum filtrations developed in the late 1970s and 1980s by Belavkin.[31][32] It is known, however, that System BV, a deep inference fragment of linear logic that is very close to quantum logic, can handle arbitrary discrete spacetimes.[33]
https://en.wikipedia.org/wiki/Quantum_logic
In quantum computing, quantum memory is the quantum-mechanical version of ordinary computer memory. Whereas ordinary memory stores information as binary states (represented by "1"s and "0"s), quantum memory stores a quantum state for later retrieval. These states hold useful computational information known as qubits. Unlike the classical memory of everyday computers, the states stored in quantum memory can be in a quantum superposition, giving much more practical flexibility in quantum algorithms than classical information storage. Quantum memory is essential for the development of many devices in quantum information processing, including a synchronization tool that can match the various processes in a quantum computer, a quantum gate that maintains the identity of any state, and a mechanism for converting predetermined photons into on-demand photons. Quantum memory can be used in many areas, such as quantum computing and quantum communication. Continuous research and experiments have enabled quantum memory to realize the storage of qubits.[1] The interaction of quantum radiation with multiple particles has sparked scientific interest over the past decade.[needs context] Quantum memory is one such field, mapping the quantum state of light onto a group of atoms and then restoring it to its original shape. Quantum memory is a key element in information processing, such as optical quantum computing and quantum communication, while opening a new way for the foundation of light–atom interaction. However, restoring the quantum state of light is no easy task. While impressive progress has been made, researchers are still working to make it happen.[2] Quantum memory based on quantum exchange to store photon qubits has been demonstrated to be possible. Kessel and Moiseev[3] discussed quantum storage in the single-photon state in 1993. The experiment was analyzed in 1998 and demonstrated in 2003.
In summary, the study of quantum storage in the single-photon state can be regarded as the product of the classical optical data storage technology proposed in 1979 and 1982, an idea inspired by the high density of data storage in the mid-1970s.[citation needed] Optical data storage can be achieved by using absorbers to absorb different frequencies of light, which are then directed to beam space points and stored. Normal, classical optical signals are transmitted by varying the amplitude of light; in this case, a piece of paper or a computer hard disk can be used to store information about the light.[clarification needed] In the quantum information scenario, however, the information may be encoded in both the amplitude and the phase of the light. For some signals, one cannot measure both the amplitude and the phase of the light without disturbing the signal. To store quantum information, the light itself needs to be stored without being measured. An atomic-gas quantum memory records the state of light in an atomic cloud. When the light's information is stored by the atoms, the relative amplitude and phase of the light are mapped onto the atoms and can be retrieved on demand.[4] In classical computing, memory is a trivial resource that can be replicated in long-lived memory hardware and retrieved later for further processing. In quantum computing, this is forbidden: according to the no-cloning theorem, a quantum state cannot be copied perfectly. Therefore, in the absence of quantum error correction, the storage of qubits is limited by the internal coherence time of the physical qubits holding the information. "Quantum memory" beyond the given physical-qubit storage limits would transfer quantum information to "storage qubits" that are not easily affected by environmental noise and other factors.
The information would later be transferred back to the preferred "process qubits" to allow rapid operations or reads.[5] Optical quantum memory is usually used to detect and store single-photon quantum states. However, producing efficient memory of this kind is still a huge challenge for current science. A single photon is so low in energy that it can be lost in a complex light background. These problems have long kept quantum storage efficiencies below 50%. A team led by professor Du Shengwang of the department of physics at the Hong Kong University of Science and Technology[6] and the William Mong Institute of Nano Science and Technology at HKUST[7] has found a way to increase the efficiency of optical quantum memory to more than 85 percent. The discovery also brings the practicality of quantum computers closer to reality. At the same time, such quantum memory can also be used as a repeater in a quantum network, which lays the foundation for a quantum internet. Quantum memory is an important component of quantum information processing applications such as quantum networks, quantum repeaters, linear optical quantum computation and long-distance quantum communication.[8] Optical data storage has been an important research topic for many years. Its most interesting function is the use of the laws of quantum physics to protect data from theft: through quantum computing and quantum cryptography, communication security is unconditionally guaranteed.[9] Quantum mechanics allows particles to be in a superposition state, which means they can represent multiple combinations at the same time. These particles are called quantum bits, or qubits. From a cybersecurity perspective, the magic of qubits is that if a hacker tries to observe them in transit, their fragile quantum states shatter. This means it is impossible for hackers to tamper with network data without leaving a trace. Now, many companies are taking advantage of this feature to create networks that transmit highly sensitive data.
In theory, these networks are secure.[10] The nitrogen-vacancy center in diamond has attracted a lot of research in the past decade due to its excellent performance in optical nanophotonic devices. In a recent experiment, electromagnetically induced transparency was implemented on a multi-pass diamond chip to achieve full photoelectric magnetic-field sensing. Despite these closely related experiments, optical storage has yet to be implemented in practice. The energy-level structure of the existing nitrogen-vacancy centers (the negatively charged and the neutral nitrogen-vacancy center) makes optical storage in the diamond nitrogen-vacancy center possible. The coupling between the nitrogen-vacancy spin ensemble and superconducting qubits provides the possibility of microwave storage for superconducting qubits. Optical storage combines this with the coupling of the electron spin state to light, which enables the nitrogen-vacancy center in diamond to serve, in a hybrid quantum system, as an interface for the mutual conversion of coherent light and microwaves.[11] A large resonant optical depth is the prerequisite for constructing efficient quantum-optical memory. Alkali-metal vapor isotopes have a large optical depth at near-infrared wavelengths because of their relatively narrow spectral lines and their high number density at warm temperatures of 50–100 °C. Alkali vapors have been used in some of the most important memory developments, from early research to the latest results discussed here, due to their high optical depth, long coherence times and easily addressed near-infrared optical transitions. Because of its high information-transmission capacity, structured light has attracted growing interest for applications in the field of quantum information. Structured light can carry orbital angular momentum, which must be stored in the memory to faithfully reproduce the stored structured photons.
An atomic-vapor quantum memory is ideal for storing such beams because the orbital angular momentum of photons can be mapped onto the phase and amplitude of the distributed ensemble excitation. Diffusion is a major limitation of this technique because the motion of hot atoms destroys the spatial coherence of the stored excitation. Early successes included storing weakly coherent pulses of spatial structure in warm and ultracold atomic ensembles. In one experiment, the same group of scientists was able to store and retrieve vector beams at the single-photon level in a caesium magneto-optical trap.[12] The memory preserves the rotational invariance of the vector beam, making it possible to use it in conjunction with qubits encoded for misalignment-immune quantum communication. The first storage of a structured, genuinely single photon was achieved with electromagnetically induced transparency in a rubidium magneto-optical trap. A heralded single photon generated by spontaneous four-wave mixing in one magneto-optical trap is given a unit of orbital angular momentum using spiral phase plates, stored in a second magneto-optical trap and then recovered. The dual-orbit setup also demonstrates coherence in a multimode memory, in which a heralded single photon stores an orbital-angular-momentum superposition state for 100 nanoseconds.[11] Gradient echo memory (GEM) is a protocol for storing optical information, and it can be applied to both atomic-gas and solid-state memories. The idea was first demonstrated by researchers at ANU.
The experiment in a three-level system based on hot atomic vapor resulted in a demonstration of coherent storage with efficiency up to 87%.[13] Electromagnetically induced transparency (EIT) was first introduced by Harris and his colleagues at Stanford University in 1990.[14] The work showed that when a laser beam causes quantum interference between excitation paths, the optical response of the atomic medium is modified so as to eliminate absorption and refraction at the resonant frequencies of atomic transitions. Slow light, optical storage and quantum memories can be achieved on the basis of EIT. In contrast to other approaches, EIT offers long storage times and is a relatively easy and inexpensive solution to implement. For example, electromagnetically induced transparency requires neither the very high-power control beams usually needed for Raman quantum memories nor the use of liquid-helium temperatures. In addition, a photon echo can read out the EIT memory while the spin coherence survives, owing to the time delay of the readout pulse caused by spin recovery in inhomogeneously broadened media.
Although there are some limitations on operating wavelength, bandwidth and mode capacity, techniques have been developed to make EIT-based quantum memories a valuable tool in the development of quantum telecommunication systems.[11] In 2018, a highly efficient EIT-based optical memory in cold atoms demonstrated a 92% storage-and-retrieval efficiency in the classical regime with coherent beams,[15] and a 70% storage-and-retrieval efficiency was demonstrated for polarization qubits encoded in weak coherent states, beating any classical benchmark.[16] Following these demonstrations, single-photon polarization qubits were stored via EIT in a 85Rb cold atomic ensemble and retrieved with 85% efficiency,[17] and entanglement between two cesium-based quantum memories was achieved with an overall transfer efficiency close to 90%.[18] The mutual transformation of quantum information between light and matter is a focus of quantum informatics. The interaction between a single photon and a cooled crystal doped with rare-earth ions has been investigated. Crystals doped with rare-earth ions have broad application prospects in the field of quantum storage because they provide a unique application system.[19] Li Chengfeng of the quantum information laboratory of the Chinese Academy of Sciences developed a solid-state quantum memory and demonstrated photonic computing functions using time and frequency degrees of freedom. Based on this research, a large-scale quantum network based on quantum repeaters can be constructed by exploiting the storage and coherence of quantum states in the material system. Researchers have demonstrated this for the first time in rare-earth ion-doped crystals. By combining three-dimensional space with two-dimensional time and two-dimensional spectrum, a kind of memory different from the usual one is created; it has multimode capacity and can also be used as a high-fidelity quantum converter.
Experimental results show that in all these operations, the fidelity of the three-dimensional quantum state carried by the photon can be maintained at around 89%.[20] Diamond has a very high Raman gain in its 40 THz optical phonon mode and a wide transient window in the visible and near-infrared band, which makes it suitable as an optical memory with a very wide bandwidth. After the Raman storage interaction, the optical phonon decays into a pair of photons, with a decay lifetime of 3.5 ps, which makes diamond memory unsuitable for communication protocols. Nevertheless, diamond memory has allowed some revealing studies of the interactions between light and matter at the quantum level: optical phonons in diamond can be used to demonstrate emissive quantum memory, macroscopic entanglement, heralded single-photon storage and single-photon frequency manipulation.[11] For quantum memory, quantum communication and cryptography are the future research directions. However, there are many challenges to building a global quantum network. One of the most important is to create memories that can store the quantum information carried by light. Researchers at the University of Geneva in Switzerland, working with France's CNRS, have discovered a new material in which the element ytterbium can store and protect quantum information, even at high frequencies. This makes ytterbium an ideal candidate for future quantum networks. Because signals cannot be replicated, scientists are now studying how quantum memories can extend transmission farther and farther by capturing photons in order to synchronize them. To do this, it becomes important to find the right materials for making quantum memories. Ytterbium is a good insulator and works at high frequencies, so that photons can be stored and quickly retrieved.
https://en.wikipedia.org/wiki/Quantum_memory
Quantum networks form an important element of quantum computing and quantum communication systems. Quantum networks facilitate the transmission of information in the form of quantum bits, also called qubits, between physically separated quantum processors. A quantum processor is a machine able to perform quantum circuits on a certain number of qubits. Quantum networks work in a similar way to classical networks. The main difference is that quantum networking, like quantum computing, is better at solving certain problems, such as modeling quantum systems. Networked quantum computing, or distributed quantum computing,[1][2] works by linking multiple quantum processors through a quantum network by sending qubits between them. Doing this creates a quantum computing cluster and therefore creates more computing potential. Less powerful computers can be linked in this way to create one more powerful processor. This is analogous to connecting several classical computers to form a computer cluster in classical computing. Like classical computing, this system is scalable by adding more and more quantum computers to the network. Currently quantum processors are only separated by short distances. In the realm of quantum communication, one wants to send qubits from one quantum processor to another over long distances.[3] This way, local quantum networks can be interconnected into a quantum internet. A quantum internet[1] supports many applications, which derive their power from the fact that by creating quantum entangled qubits, information can be transmitted between the remote quantum processors. Most applications of a quantum internet require only very modest quantum processors. For most quantum internet protocols, such as quantum key distribution in quantum cryptography, it is sufficient if these processors are capable of preparing and measuring only a single qubit at a time.
This is in contrast to quantum computing, where interesting applications can be realized only if the (combined) quantum processors can easily simulate more qubits than a classical computer can (around 60[4]). Quantum internet applications require only small quantum processors, often just a single qubit, because quantum entanglement can already be realized between just two qubits. A simulation of an entangled quantum system on a classical computer cannot simultaneously provide the same security and speed. The basic structure of a quantum network, and more generally of a quantum internet, is analogous to that of a classical network. First, we have end nodes on which applications are ultimately run. These end nodes are quantum processors of at least one qubit. Some applications of a quantum internet require quantum processors of several qubits as well as a quantum memory at the end nodes. Second, to transport qubits from one node to another, we need communication lines. For the purpose of quantum communication, standard telecom fibers can be used. For networked quantum computing, in which quantum processors are linked at short distances, different wavelengths are chosen depending on the exact hardware platform of the quantum processor. Third, to make maximum use of communication infrastructure, one requires optical switches capable of delivering qubits to the intended quantum processor. These switches need to preserve quantum coherence, which makes them more challenging to realize than standard optical switches. Finally, one requires a quantum repeater to transport qubits over long distances. Repeaters appear between end nodes.[5] Since qubits cannot be copied (no-cloning theorem), classical signal amplification is not possible. By necessity, a quantum repeater works in a fundamentally different way from a classical repeater.
End nodes can both receive and emit information.[5] Telecommunication lasers and parametric down-conversion combined with photodetectors can be used for quantum key distribution. In this case, the end nodes can in many cases be very simple devices consisting only of beamsplitters and photodetectors. However, for many protocols more sophisticated end nodes are desirable. These systems provide advanced processing capabilities and can also be used as quantum repeaters. Their chief advantage is that they can store and retransmit quantum information without disrupting the underlying quantum state. The quantum state being stored can either be the relative spin of an electron in a magnetic field or the energy state of an electron.[5] They can also perform quantum logic gates. One way of realizing such end nodes is by using color centers in diamond, such as the nitrogen-vacancy center. This system forms a small quantum processor featuring several qubits. NV centers can be utilized at room temperature.[5] Small-scale quantum algorithms and quantum error correction[6] have already been demonstrated in this system, as well as the ability to entangle two[7] and three[8] quantum processors and to perform deterministic quantum teleportation.[9] Another possible platform is quantum processors based on ion traps, which utilize radio-frequency magnetic fields and lasers.[5] In a multispecies trapped-ion node network, photons entangled with a parent atom are used to entangle different nodes.[10] Cavity quantum electrodynamics (cavity QED) is another possible method of doing this. In cavity QED, photonic quantum states can be transferred to and from atomic quantum states stored in single atoms contained in optical cavities.
This allows for the transfer of quantum states between single atoms using optical fiber, in addition to the creation of remote entanglement between distant atoms.[5][11][12] Over long distances, the primary method of operating quantum networks is to use optical networks and photon-based qubits. This is due to optical networks having a reduced chance of decoherence. Optical networks have the advantage of being able to re-use existing optical fiber. Alternatively, free-space networks can be implemented that transmit quantum information through the atmosphere or through a vacuum.[13] Optical networks using existing telecommunication fiber can be implemented using hardware similar to existing telecommunication equipment. This fiber can be either single-mode or multi-mode, with single-mode allowing for more precise communication.[5] At the sender, a single-photon source can be created by heavily attenuating a standard telecommunication laser such that the mean number of photons per pulse is less than 1. For receiving, an avalanche photodetector can be used. Various methods of phase or polarization control can be used, such as interferometers and beam splitters. In the case of entanglement-based protocols, entangled photons can be generated through spontaneous parametric down-conversion. In both cases, the telecom fiber can be multiplexed to send non-quantum timing and control signals. In 2020, a team of researchers affiliated with several institutions in China succeeded in entangling two quantum memories over a 50-kilometer coiled fiber cable.[14] Free-space quantum networks operate similarly to fiber-optic networks but rely on line of sight between the communicating parties instead of a fiber-optic connection.
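The photon statistics of the attenuated-laser source mentioned above follow a Poisson distribution, so the trade-off between empty, single-photon and multi-photon pulses can be computed directly. A small sketch (the mean photon number 0.1 is an illustrative choice, not a mandated value):

```python
from math import exp, factorial

mu = 0.1   # assumed mean photon number per attenuated laser pulse (kept < 1)

def p_n(n):
    """Poisson photon-number statistics of a coherent (laser) pulse with mean mu."""
    return exp(-mu) * mu ** n / factorial(n)

p_vacuum = p_n(0)                      # most pulses carry no photon at all
p_single = p_n(1)                      # the useful single-photon events
p_multi = 1.0 - p_vacuum - p_single    # pulses with two or more photons

# Multi-photon pulses leak information to an eavesdropper in BB84-style QKD,
# which is why the mean photon number is kept well below 1 (and why
# decoy-state protocols were developed).
assert p_multi < p_single < p_vacuum
```

With mu = 0.1, roughly 90% of pulses are empty and only about 0.5% contain more than one photon, at the cost of a low raw key rate.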
Free-space networks can typically support higher transmission rates than fiber-optic networks and do not have to account for the polarization scrambling caused by optical fiber.[15] However, over long distances, free-space communication is subject to an increased chance of environmental disturbance of the photons.[5] Free-space communication is also possible from a satellite to the ground. A quantum satellite capable of entanglement distribution over a distance of 1,203 km has been demonstrated.[16] The experimental exchange of single photons from a global navigation satellite system at a slant distance of 20,000 km has also been reported.[17] These satellites can play an important role in linking smaller ground-based networks over larger distances. In free-space networks, atmospheric conditions such as turbulence, scattering and absorption present challenges that affect the fidelity of transmitted quantum states. To mitigate these effects, researchers employ adaptive optics, advanced modulation schemes and error-correction techniques.[18] The resilience of QKD protocols against eavesdropping plays a crucial role in ensuring the security of the transmitted data. Specifically, protocols like BB84 and decoy-state schemes have been adapted for free-space environments to improve robustness against potential security vulnerabilities. Long-distance communication is hindered by the effects of signal loss and decoherence inherent to most transport media, such as optical fiber. In classical communication, amplifiers can be used to boost the signal during transmission, but in a quantum network amplifiers cannot be used, since qubits cannot be copied (the no-cloning theorem). That is, to implement an amplifier, the complete state of the flying qubit would need to be determined, something which is both unwanted and impossible. An intermediary step which allows the testing of communication infrastructure is trusted repeaters.
Importantly, a trusted repeater cannot be used to transmit qubits over long distances. Instead, a trusted repeater can only be used to perform quantum key distribution, with the additional assumption that the repeater is trusted. Consider two end nodes A and B, and a trusted repeater R in the middle. A and R now perform quantum key distribution to generate a key $k_{AR}$. Similarly, R and B run quantum key distribution to generate a key $k_{RB}$. A and B can now obtain a key $k_{AB}$ between themselves as follows: A sends $k_{AB}$ to R encrypted with the key $k_{AR}$. R decrypts to obtain $k_{AB}$. R then re-encrypts $k_{AB}$ using the key $k_{RB}$ and sends it to B. B decrypts to obtain $k_{AB}$. A and B now share the key $k_{AB}$. This key is secure from an outside eavesdropper, but clearly the repeater R also knows $k_{AB}$. This means that any subsequent communication between A and B does not provide end-to-end security, but is only secure as long as A and B trust the repeater R. A true quantum repeater allows the end-to-end generation of quantum entanglement, and thus, by using quantum teleportation, the end-to-end transmission of qubits. In quantum key distribution protocols one can test for such entanglement. This means that when making encryption keys, the sender and receiver are secure even if they do not trust the quantum repeater. Any other application of a quantum internet also requires the end-to-end transmission of qubits, and thus a quantum repeater. Quantum repeaters allow entanglement to be established between distant nodes without physically sending an entangled qubit the entire distance.[19] In this case, the quantum network consists of many short-distance links of perhaps tens or hundreds of kilometers.
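The key-relay procedure above can be sketched in a few lines. The QKD links themselves are abstracted away as pre-shared random byte strings, and one-time-pad XOR stands in for the encryption (function names are illustrative):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """One-time-pad encrypt/decrypt: XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def relay_key(k_ab: bytes, k_ar: bytes, k_rb: bytes) -> bytes:
    """A sends k_AB to R under k_AR; R re-encrypts under k_RB; B decrypts."""
    ciphertext_ar = xor_bytes(k_ab, k_ar)           # A -> R
    at_repeater = xor_bytes(ciphertext_ar, k_ar)    # R decrypts: R learns k_AB!
    ciphertext_rb = xor_bytes(at_repeater, k_rb)    # R -> B
    return xor_bytes(ciphertext_rb, k_rb)           # B decrypts

k_ar = secrets.token_bytes(16)   # key from QKD between A and R
k_rb = secrets.token_bytes(16)   # key from QKD between R and B
k_ab = secrets.token_bytes(16)   # key A wants to share with B
shared = relay_key(k_ab, k_ar, k_rb)   # B now holds the same k_AB as A (and R)
```

The intermediate variable `at_repeater` makes the trust assumption visible in code: the relay necessarily sees the plaintext key.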
In the simplest case of a single repeater, two pairs of entangled qubits are established: $|A\rangle$ and $|R_a\rangle$ located at the sender and the repeater, and a second pair $|R_b\rangle$ and $|B\rangle$ located at the repeater and the receiver. These initial entangled qubits can be easily created, for example through parametric down-conversion, with one qubit physically transmitted to an adjacent node. At this point, the repeater can perform a Bell measurement on the qubits $|R_a\rangle$ and $|R_b\rangle$, thus teleporting the quantum state of $|R_a\rangle$ onto $|B\rangle$. This has the effect of "swapping" the entanglement such that $|A\rangle$ and $|B\rangle$ are now entangled at a distance twice that of the initial entangled pairs. A network of such repeaters can thus be used linearly or in a hierarchical fashion to establish entanglement over great distances.[20][21] Hardware platforms suitable as end nodes above can also function as quantum repeaters. However, there are also hardware platforms specific only[22] to the task of acting as a repeater, without the capability of performing quantum gates. Error correction can be used in quantum repeaters. Due to technological limitations, however, its applicability is limited to very short distances, as quantum error-correction schemes capable of protecting qubits over long distances would require an extremely large number of qubits and hence extremely large quantum computers. Errors in communication can be broadly classified into two types: loss errors (due to optical fiber or the environment) and operation errors (such as depolarization, dephasing, etc.). While redundancy can be used to detect and correct classical errors, redundant qubits cannot be created due to the no-cloning theorem.
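The entanglement-swapping step can be checked numerically with a small statevector calculation. This sketch, using NumPy, models the Bell measurement as a projection onto the $|\Phi^{+}\rangle$ outcome (which occurs with probability 1/4; the other three outcomes give a Bell pair up to a local correction):

```python
import numpy as np

bell = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)   # |Phi+> = (|00> + |11>)/sqrt(2)

# Two independent Bell pairs, qubit ordering (A, Ra, Rb, B).
state = np.kron(bell, bell).reshape(2, 2, 2, 2)

# Bell measurement at the repeater: project qubits Ra, Rb onto |Phi+>.
# (bell is real, so complex conjugation is omitted.)
proj = bell.reshape(2, 2)
post = np.einsum('arsb,rs->ab', state, proj)     # unnormalized state of (A, B)

prob = np.sum(np.abs(post) ** 2)                 # probability of this outcome: 1/4
post = post / np.sqrt(prob)                      # renormalize

# A and B are left in |Phi+> although they never interacted directly.
```

The contraction removes the repeater's two qubits and leaves the two-qubit state of the end nodes, which comes out exactly maximally entangled.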
As a result, other types of error correction must be introduced, such as the Shor code or one of a number of more general and efficient codes. All of these codes work by distributing the quantum information across multiple entangled qubits so that both operation errors and loss errors can be corrected.[23] In addition to quantum error correction, classical error correction can be employed by quantum networks in special cases such as quantum key distribution. In these cases, the goal of the quantum communication is to securely transmit a string of classical bits. Traditional error-correction codes such as Hamming codes can be applied to the bit string before encoding and transmission on the quantum network. Quantum decoherence can occur when one qubit of a maximally entangled Bell state is transmitted across a quantum network. Entanglement purification allows for the creation of nearly maximally entangled qubits from a large number of arbitrary weakly entangled qubits, and thus provides additional protection against errors. Entanglement purification (also known as entanglement distillation) has already been demonstrated in nitrogen-vacancy centers in diamond.[24] A quantum internet supports numerous applications, enabled by quantum entanglement. In general, quantum entanglement is well suited for tasks that require coordination, synchronization or privacy. Examples of such applications include quantum key distribution,[25][26] clock stabilization,[27] protocols for distributed-system problems such as leader election or Byzantine agreement,[5] extending the baseline of telescopes,[28][29] as well as position verification,[30][31] secure identification and two-party cryptography in the noisy-storage model. A quantum internet also enables secure access to a quantum computer[32] in the cloud.
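The classical error-correction option mentioned above (applying a traditional code such as a Hamming code to the bit string) can be illustrated with the [7,4] Hamming code, which corrects any single-bit error. A sketch; the matrices follow the standard systematic construction over GF(2):

```python
import numpy as np

# Systematic [7,4] Hamming code: G = [I4 | P], H = [P^T | I3].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(bits4):
    """4 message bits -> 7-bit codeword."""
    return (np.array(bits4) @ G) % 2

def correct(word7):
    """Fix at most one flipped bit using the syndrome H @ w."""
    syndrome = (H @ word7) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        idx = np.where((H.T == syndrome).all(axis=1))[0][0]
        word7 = word7.copy()
        word7[idx] ^= 1
    return word7

def decode(word7):
    """Correct, then read off the first 4 (systematic) bits."""
    return correct(word7)[:4]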
Specifically, a quantum internet enables very simple quantum devices to connect to a remote quantum computer in such a way that computations can be performed there without the quantum computer finding out what this computation actually is (the input and output quantum states cannot be measured without destroying the computation, but the circuit composition used for the calculation will be known). When it comes to communicating in any form, the largest issue has always been keeping these communications private.[33] Quantum networks would allow for information to be created, stored and transmitted, potentially achieving "a level of privacy, security and computational clout that is impossible to achieve with today's Internet."[34] By applying a quantum operator that the user selects to a system of information, the information can then be sent to the receiver without any chance of an eavesdropper accurately recording the sent information without either the sender or receiver knowing. Unlike classical information, which is transmitted in bits and assigned either a 0 or 1 value, the quantum information used in quantum networks uses quantum bits (qubits), which can have the values 0 and 1 at the same time, being in a state of superposition.[34][35] This works because if a listener tries to listen in, they will change the information in an unintended way by listening, thereby tipping their hand to the people they are attacking. Secondly, without the proper quantum operator to decode the information, they will corrupt the sent information without being able to use it themselves. Furthermore, qubits can be encoded in a variety of materials, including in the polarization of photons or the spin states of electrons.[34] One example of a prototype quantum communication network is the eight-user city-scale quantum network described in a paper published in September 2020.
The network, located in Bristol, used already-deployed fibre infrastructure and worked without active switching or trusted nodes.[36][37] In 2022, researchers at the University of Science and Technology of China and the Jinan Institute of Quantum Technology demonstrated quantum entanglement between two memory devices located 12.5 km apart within an urban environment.[38] In the same year, physicists at the Delft University of Technology in the Netherlands took a significant step toward the network of the future by using quantum teleportation to send data between three physical locations, which was previously only possible between two.[39] In 2024, researchers in the U.K. and Germany achieved a first by producing, storing, and retrieving quantum information. This milestone involved interfacing a quantum-dot light source and a quantum memory system, paving the way for practical applications despite challenges like quantum information loss over long distances.[40] In February 2025, researchers from Oxford University experimentally demonstrated the distribution of quantum computations between two photonically interconnected trapped-ion modules. Each module contained dedicated network and circuit qubits, and the modules were separated by approximately two meters. The team achieved deterministic teleportation of a controlled-Z gate between two circuit qubits located in separate modules, attaining an 86% fidelity. This experiment also marked the first implementation of a distributed quantum algorithm comprising multiple non-local two-qubit gates, specifically Grover's search algorithm, which was executed with a 71% success rate.
These advancements represented significant progress toward scalable quantum computing and the development of a quantum internet.[41] In 2021, researchers at the Max Planck Institute of Quantum Optics in Germany reported a first prototype of quantum logic gates for distributed quantum computers.[42][43] A research team at the Max Planck Institute of Quantum Optics in Garching, Germany has found success in transporting quantum data between flying and stationary qubits via infrared spectrum matching. This requires a sophisticated, super-cooled yttrium silicate crystal to sandwich erbium in a mirrored environment, achieving resonance matching of the infrared wavelengths found in fiber-optic networks. The team successfully demonstrated that the device works without data loss.[44] In 2021, researchers in China reported the successful transmission of entangled photons between drones, used as nodes for the development of mobile quantum networks or flexible network extensions. This may be the first work in which entangled particles were sent between two moving devices.[45][46] The application of quantum communications to improving 6G mobile networks has also been researched, for joint detection and data transfer with quantum entanglement,[47][48] with possible advantages such as security and energy efficiency.[49] Several test networks have been deployed that are tailored to the task of quantum key distribution, either at short distances (but connecting many users) or over larger distances by relying on trusted repeaters. These networks do not yet allow for the end-to-end transmission of qubits or the end-to-end creation of entanglement between far-away nodes.
https://en.wikipedia.org/wiki/Quantum_network
In quantum mechanics, frequent measurements cause the quantum Zeno effect, a reduction in transitions away from a system's initial state, slowing the system's time evolution.[1]: 5 Sometimes this effect is interpreted as "a system cannot change while you are watching it".[2] One can "freeze" the evolution of the system by measuring it frequently enough in its known initial state. The meaning of the term has since expanded, leading to a more technical definition, in which time evolution can be suppressed not only by measurement: the quantum Zeno effect is the suppression of unitary time evolution in quantum systems provided by a variety of sources: measurement, interactions with the environment, stochastic fields, among other factors.[3] As an outgrowth of the study of the quantum Zeno effect, it has become clear that applying a series of sufficiently strong and fast pulses with appropriate symmetry can also decouple a system from its decohering environment.[4] The comparison with Zeno's paradox is due to a 1977 article by Baidyanath Misra and E. C. George Sudarshan. The name comes by analogy to Zeno's arrow paradox, which states that because an arrow in flight is not seen to move during any single instant, it cannot possibly be moving at all. In the quantum Zeno effect an unstable state seems frozen (it does not 'move') due to a constant series of observations. According to the reduction postulate, each measurement causes the wavefunction to collapse to an eigenstate of the measurement basis. In the context of this effect, an observation can simply be the absorption of a particle, without the need of an observer in any conventional sense. However, there is controversy over the interpretation of the effect, sometimes referred to as the "measurement problem", in traversing the interface between microscopic and macroscopic objects.[5][6] Another crucial problem related to the effect is strictly connected to the time–energy indeterminacy relation (part of the indeterminacy principle).
If one wants to make the measurement process more and more frequent, one has to correspondingly decrease the time duration of the measurement itself. But the requirement that the measurement last only a very short time implies that the energy spread of the state in which reduction occurs becomes increasingly large. However, the deviations from the exponential decay law at small times are crucially related to the inverse of the energy spread, so that the region in which the deviations are appreciable shrinks as one makes the measurement process duration shorter and shorter. An explicit evaluation of these two competing requirements shows that it is inappropriate, without taking this basic fact into account, to deal with the actual occurrence and emergence of the Zeno effect.[7] Closely related (and sometimes not distinguished from the quantum Zeno effect) is the watchdog effect, in which the time evolution of a system is affected by its continuous coupling to the environment.[8][9][10][11] Unstable quantum systems are predicted to exhibit a short-time deviation from the exponential decay law.[12][13] This universal phenomenon has led to the prediction that frequent measurements during this nonexponential period could inhibit decay of the system, one form of the quantum Zeno effect. Subsequently, it was predicted that measurements applied more slowly could also enhance decay rates, a phenomenon known as the quantum anti-Zeno effect.[14] In quantum mechanics, the interaction mentioned is called "measurement" because its result can be interpreted in terms of classical mechanics. Frequent measurement prohibits the transition. It can be a transition of a particle from one half-space to another (which could be used for an atomic mirror in an atomic nanoscope[15]) as in the time-of-arrival problem,[16][17] a transition of a photon in a waveguide from one mode to another, or a transition of an atom from one quantum state to another.
It can also be a transition from the subspace without decoherent loss of a qubit to a state with a qubit lost in a quantum computer.[18][19] In this sense, for qubit correction, it is sufficient to determine whether the decoherence has already occurred or not. All of these can be considered applications of the Zeno effect.[20] By its nature, the effect appears only in systems with distinguishable quantum states, and hence is inapplicable to classical phenomena and macroscopic bodies. The idea is implicit in the early work of John von Neumann on the mathematical foundations of quantum mechanics, and in particular the rule sometimes called the reduction postulate.[21] It was later shown that the quantum Zeno effect of a single system is equivalent to the indetermination of the quantum state of a single system.[22][23][24] The unusual nature of the short-time evolution of quantum systems and its consequences for measurement were noted by John von Neumann in his Mathematical Foundations of Quantum Mechanics, published in 1932. This aspect of quantum mechanics lay unexplored until 1967, when Beskow and Nilsson[25] suggested that the mathematics indicated that an unstable particle in a bubble chamber would not decay. In 1977, Baidyanath Misra and E. C. George Sudarshan presented[26] a mathematical analysis of this quantum effect and proposed its association with Zeno's arrow paradox. This paradox of Zeno of Elea imagines seeing a flying arrow at any fixed instant: it is immobile, frozen in the space it occupies.[1] Despite continued theoretical work, experimental confirmation did not appear[1] until 1990, when Itano et al.[27] applied an idea proposed by Cook[28] to study oscillating systems rather than unstable ones. Itano drove a transition between two levels in trapped 9Be+ ions while simultaneously measuring the absorption of laser pulses, proportional to the population of the lower level. The treatment of the Zeno effect as a paradox is not limited to the processes of quantum decay.
In general, the term Zeno effect is applied to various transitions, and sometimes these transitions may be very different from a mere "decay" (whether exponential or non-exponential). One realization refers to the observation of an object (Zeno's arrow, or any quantum particle) as it leaves some region of space. In the 20th century, the trapping (confinement) of a particle in some region by its observation outside the region was considered nonsensical, indicating some incompleteness of quantum mechanics.[29] Even as late as 2001, confinement by absorption was considered a paradox.[30] Later, similar effects of the suppression of Raman scattering were considered an expected effect,[31][32][33] not a paradox at all. The absorption of a photon at some wavelength, the release of a photon (for example one that has escaped from some mode of a fiber), or even the relaxation of a particle as it enters some region, are all processes that can be interpreted as measurement. Such a measurement suppresses the transition, and is called the Zeno effect in the scientific literature. In order to cover all of these phenomena (including the original effect of suppression of quantum decay), the Zeno effect can be defined as a class of phenomena in which some transition is suppressed by an interaction – one that allows the interpretation of the resulting state in the terms 'the transition has not yet happened' and 'the transition has already occurred' – or as the proposition that the evolution of a quantum system is halted if the state of the system is continuously measured by a macroscopic device to check whether the system is still in its initial state.[34] Consider a system in a state $A$, which is an eigenstate of some measurement operator. Say that under free time evolution the system will decay with a certain probability into state $B$.
If measurements are made periodically, with some finite interval between them, then at each measurement the wave function collapses to an eigenstate of the measurement operator. Between measurements, the system evolves away from this eigenstate into a superposition of the states $A$ and $B$. When the superposition state is measured, it again collapses, either back into state $A$ as in the first measurement, or away into state $B$. However, its probability of collapsing into state $B$ after a very short time $t$ is proportional to $t^{2}$, since probabilities are proportional to squared amplitudes, and amplitudes behave linearly. Thus, in the limit of a large number of short intervals, with a measurement at the end of every interval, the probability of making the transition to $B$ goes to zero. According to decoherence theory, measurement of a system is not a one-way "collapse" but an interaction with its surrounding environment, which in particular includes the measurement apparatus.[citation needed] A measurement is equivalent to correlating or coupling the quantum state to the apparatus state in such a way as to register the measured information. If this leaves the system still able to decohere further to a different state, perhaps due to a noisy thermal environment, that state may last only for a brief period of time; the probability of decaying increases with time. Frequent measurement then reestablishes or strengthens the coupling, and with it the measured state, provided the measurements are frequent enough for the probability to remain low. The expected decay time is related to the expected decoherence time of the system when coupled to the environment. The stronger the coupling and the shorter the decoherence time, the faster the system will decay.
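The $t^{2}$ argument above can be made quantitative for a driven two-level system, where the probability of still being found in state $A$ after a short interval $t$ is $\cos^{2}(\Omega t/2)$, so the departure probability is $\approx (\Omega t/2)^{2}$. A minimal numeric sketch ($\Omega$ and $T$ are illustrative choices):

```python
import numpy as np

Omega = 1.0       # Rabi frequency of the A <-> B drive
T = np.pi         # total time chosen so that free evolution fully transfers A -> B

def survival(n):
    """Probability the system is still found in state A after n projective
    measurements spaced T/n apart (idealized two-level Rabi model)."""
    p_stay_once = np.cos(Omega * (T / n) / 2) ** 2   # 1 - (Omega*t/2)^2 + ...
    return p_stay_once ** n

# With a single final measurement the state has flipped completely:
# survival(1) = cos^2(pi/2), essentially 0.  Frequent measurement freezes it:
for n in (1, 10, 100, 1000):
    print(n, survival(n))
```

The printed values climb toward 1 as the number of measurements grows, which is the Zeno freezing described in the text.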
So in the decoherence picture, an "ideal" quantum Zeno effect corresponds to the mathematical limit where a quantum system is continuously coupled to the environment, that coupling is infinitely strong, and the "environment" is an infinitely large source of thermal randomness. Experimentally, strong suppression of the evolution of a quantum system due to environmental coupling has been observed in a number of microscopic systems. In 1989, David J. Wineland and his group at NIST[35] observed the quantum Zeno effect for a two-level atomic system that was interrogated during its evolution. Approximately 5,000 9Be+ ions were stored in a cylindrical Penning trap and laser-cooled to below 250 mK. A resonant RF pulse was applied which, if applied alone, would cause the entire ground-state population to migrate into an excited state. After the pulse was applied, the ions were monitored for photons emitted due to relaxation. The ion trap was then regularly "measured" by applying a sequence of ultraviolet pulses during the RF pulse. As expected, the ultraviolet pulses suppressed the evolution of the system into the excited state. The results were in good agreement with theoretical models. In 2001, Mark G. Raizen and his group at the University of Texas at Austin observed the quantum Zeno effect for an unstable quantum system,[36] as originally proposed by Sudarshan and Misra.[26] They also observed an anti-Zeno effect. Ultracold sodium atoms were trapped in an accelerating optical lattice, and the loss due to tunneling was measured. The evolution was interrupted by reducing the acceleration, thereby stopping quantum tunneling. The group observed suppression or enhancement of the decay rate, depending on the regime of measurement.
In 2015, Mukund Vengalattore and his group at Cornell University demonstrated a quantum Zeno effect as the modulation of the rate of quantum tunnelling in an ultracold lattice gas by the intensity of the light used to image the atoms.[37] In 2024, Björn Annby-Andersson and colleagues from Lund University, in an experiment with a system of two quantum dots with one electron, came to the conclusion that "As the measurement strength is further increased, the Zeno effect prohibits interdot tunneling. A Zeno-like effect is also observed for weak measurements, where measurement errors lead to fluctuations in the on-site energies, dephasing the system." (https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.6.043216) The quantum Zeno effect is used in commercial atomic magnetometers and has been proposed to be part of birds' magnetic compass sensory mechanism (magnetoreception).[38] It is still an open question how closely one can approach the limit of an infinite number of interrogations, due to the Heisenberg uncertainty involved in shorter measurement times. It has been shown, however, that measurements performed at a finite frequency can yield arbitrarily strong Zeno effects.[39] In 2006, Streed et al. at MIT observed the dependence of the Zeno effect on measurement pulse characteristics.[40] The interpretation of experiments in terms of the "Zeno effect" helps describe the origin of a phenomenon. Nevertheless, such an interpretation does not bring any fundamentally new features not described by the Schrödinger equation of the quantum system.[41][42] Moreover, detailed descriptions of experiments with the "Zeno effect", especially in the limit of a high frequency of measurements (high efficiency of suppression of transition, or high reflectivity of a ridged mirror), usually do not behave as expected for an idealized measurement.[15] It has been shown that the quantum Zeno effect persists in the many-worlds and relative-states interpretations of quantum mechanics.[43]
https://en.wikipedia.org/wiki/Quantum_Zeno_effect
Reversible computing is any model of computation where every step of the process is time-reversible. This means that, given the output of a computation, it is possible to perfectly reconstruct the input. In systems that progress deterministically from one state to another, a key requirement for reversibility is a one-to-one correspondence between each state and its successor. Reversible computing is considered an unconventional approach to computation and is closely linked to quantum computing, where the principles of quantum mechanics inherently ensure reversibility (as long as quantum states are not measured or "collapsed").[1] There are two major, closely related types of reversibility that are of particular interest for this purpose: physical reversibility and logical reversibility.[2] A process is said to be physically reversible if it results in no increase in physical entropy; it is isentropic. There is a style of circuit design ideally exhibiting this property that is referred to as charge recovery logic, adiabatic circuits, or adiabatic computing (see Adiabatic process). Although in practice no nonstationary physical process can be exactly physically reversible or isentropic, there is no known limit to the closeness with which we can approach perfect reversibility, in systems that are sufficiently well isolated from interactions with unknown external environments, when the laws of physics describing the system's evolution are precisely known. A motivation for the study of technologies aimed at implementing reversible computing is that they offer what is predicted to be the only potential way to improve the computational energy efficiency (i.e., useful operations performed per unit of energy dissipated) of computers beyond the fundamental von Neumann–Landauer limit[3][4] of kT ln(2) energy dissipated per irreversible bit operation.
Although the Landauer limit was millions of times below the energy consumption of computers in the 2000s and thousands of times less in the 2010s,[5] proponents of reversible computing argue that this can be attributed largely to architectural overheads which effectively magnify the impact of Landauer's limit in practical circuit designs, so that it may prove difficult for practical technology to progress very far beyond current levels of energy efficiency if reversible computing principles are not used.[6] As was first argued by Rolf Landauer while working at IBM,[7] in order for a computational process to be physically reversible, it must also be logically reversible. Landauer's principle is the observation that the oblivious erasure of n bits of known information must always incur a cost of nkT ln(2) in thermodynamic entropy. A discrete, deterministic computational process is said to be logically reversible if the transition function that maps old computational states to new ones is a one-to-one function; i.e., the output logical states uniquely determine the input logical states of the computational operation. For computational processes that are nondeterministic (in the sense of being probabilistic or random), the relation between old and new states is not a single-valued function, and the requirement needed to obtain physical reversibility becomes a slightly weaker condition, namely that the size of a given ensemble of possible initial computational states does not decrease, on average, as the computation proceeds forward.
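The Landauer bound quoted above is easy to evaluate numerically; at room temperature it comes to roughly 3 zeptojoules per erased bit (a sketch using the exact SI value of the Boltzmann constant):

```python
import math

k_B = 1.380649e-23    # Boltzmann constant in J/K (exact by SI definition)
T = 300.0             # room temperature in kelvin

# Minimum heat dissipated per irreversibly erased bit: kT ln(2).
landauer_limit = k_B * T * math.log(2)
print(f"{landauer_limit:.3e} J per bit at {T:.0f} K")
```

Comparing this number with the switching energy of a practical logic gate makes the "millions of times" gap of the 2000s concrete.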
Landauer's principle (and indeed, the second law of thermodynamics) can also be understood to be a direct logical consequence of the underlying reversibility of physics, as reflected in the general Hamiltonian formulation of mechanics, and in the unitary time-evolution operator of quantum mechanics more specifically.[8] The implementation of reversible computing thus amounts to learning how to characterize and control the physical dynamics of mechanisms that carry out desired computational operations so precisely that a negligible total amount of uncertainty accumulates regarding the complete physical state of the mechanism for each logic operation performed. In other words, one must precisely track the state of the active energy involved in carrying out computational operations within the machine, and design the machine so that the majority of this energy is recovered in an organized form that can be reused for subsequent operations, rather than being permitted to dissipate into heat. Although achieving this goal presents a significant challenge for the design, manufacturing, and characterization of ultra-precise new physical mechanisms for computing, there is at present no fundamental reason to think that this goal cannot eventually be accomplished, someday allowing the construction of computers that generate much less than 1 bit's worth of physical entropy (and dissipate much less than kT ln 2 energy to heat) for each useful logical operation that they carry out internally. Today, the field has a substantial body of academic literature. A wide variety of reversible device concepts, logic gates, electronic circuits, processor architectures, programming languages, and application algorithms have been designed and analyzed by physicists, electrical engineers, and computer scientists.
This field of research awaits the detailed development of a high-quality, cost-effective, nearly reversible logic device technology, one that includes highly energy-efficient clocking and synchronization mechanisms, or avoids the need for these through asynchronous design. This sort of solid engineering progress will be needed before the large body of theoretical research on reversible computing can find practical application in enabling real computer technology to circumvent the various near-term barriers to its energy efficiency, including the von Neumann–Landauer bound. This bound may only be circumvented by the use of logically reversible computing, due to the second law of thermodynamics.[9] For a computational operation to be logically reversible means that the output (or final state) of the operation can be computed from the input (or initial state), and vice versa. Reversible functions are bijective. This means that reversible gates (and circuits, i.e. compositions of multiple gates) generally have the same number of input bits as output bits (assuming that all input bits are consumed by the operation, and that all input/output states are possible). An inverter (NOT) gate is logically reversible because it can be undone. The NOT gate may, however, not be physically reversible, depending on its implementation. The exclusive-or (XOR) gate is irreversible because its two inputs cannot be unambiguously reconstructed from its single output, or alternatively, because information erasure is not reversible. However, a reversible version of the XOR gate, the controlled-NOT gate (CNOT), can be defined by preserving one of the inputs as a second output. The three-input variant of the CNOT gate is called the Toffoli gate. It preserves two of its inputs $a, b$ and replaces the third, $c$, by $c \oplus (a \cdot b)$. With $c = 0$, this gives the AND function, and with $a \cdot b = 1$ it gives the NOT function.
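The reversibility claims above are easy to verify exhaustively for such small gates, since a gate is logically reversible exactly when its truth table is a bijection (a short sketch):

```python
from itertools import product

def cnot(a, b):
    """CNOT: a reversible XOR that keeps the control bit as an extra output."""
    return a, b ^ a

def toffoli(a, b, c):
    """Toffoli (CCNOT): flips c iff a AND b are both 1."""
    return a, b, c ^ (a & b)

# Bijectivity check: 8 distinct outputs for the 8 possible 3-bit inputs.
outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
assert len(outputs) == 8

# Toffoli computes AND when the target is initialized to 0:
#   toffoli(a, b, 0) -> (a, b, a AND b)
# and NOT when both controls are 1:
#   toffoli(1, 1, c) -> (1, 1, NOT c)
```

The same enumeration shows the Toffoli gate is its own inverse: applying it twice returns every input unchanged.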
Because AND and NOT together form a functionally complete set, the Toffoli gate is universal and can implement any Boolean function (if given enough initialized ancilla bits). Surveys of reversible circuits, their construction and optimization, as well as recent research challenges, are available.[10][11][12][13][14] The reversible Turing machine (RTM) is a foundational model in reversible computing. An RTM is defined as a Turing machine whose transition function is invertible, ensuring that each machine configuration (state and tape content) has at most one predecessor configuration. This guarantees backward determinism, allowing the computation history to be traced uniquely.[15] Formal definitions of RTMs have evolved over recent decades. While early definitions focused on invertible transition functions, more general formulations allow for bounded head movement and cell modification per step. This generalization ensures that the set of RTMs is closed under composition (executing one RTM after another results in another RTM) and inversion (the inverse of an RTM is also an RTM), forming a group structure for reversible computations.[16] This contrasts with some classical TM definitions, where composition might not yield a machine of the same class.[17] The dynamics of an RTM can be described by a global transition function that maps configurations based on a local rule.[18] Yves Lecerf proposed a reversible Turing machine in a 1963 paper,[19] but, apparently unaware of Landauer's principle, did not pursue the subject further, devoting most of the rest of his career to ethnolinguistics. A landmark result by Charles H. Bennett in 1973 demonstrated that any standard Turing machine can be simulated by a reversible one.[20] Bennett's construction involves augmenting the TM with an auxiliary "history tape".
The simulation proceeds in three stages:[21] first, the machine runs the original computation forward, recording each step on the history tape; second, it copies the output reversibly onto a separate blank tape; third, it runs the first stage in reverse, erasing the history tape and restoring the original input. This construction proves that RTMs are computationally equivalent to standard TMs in terms of the functions they can compute, establishing that reversibility does not limit computational power in this regard.[22] However, this standard simulation technique comes at a cost. The history tape can grow linearly with the computation time, leading to a potentially large space overhead, often expressed as S′(n) = O(S(n)·T(n)), where S and T are the space and time of the original computation.[23] Furthermore, history-based approaches face challenges with local compositionality; combining two independently reversibilized computations using this method is not straightforward.[24] This indicates that while theoretically powerful, Bennett's original construction is not necessarily the most practical or efficient way to achieve reversible computation, motivating the search for methods that avoid accumulating large amounts of "garbage" history.[25] RTMs compute precisely the set of injective (one-to-one) computable functions.[26] They are not strictly universal in the classical sense because they cannot directly compute non-injective functions (which inherently lose information). However, they possess a form of universality termed "RTM-universality" and are capable of self-interpretation.[27] London-based Vaire Computing is prototyping a chip in 2025, for release in 2027.[28]
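Bennett's compute–copy–uncompute idea can be sketched in Python. The `step`/`is_final` interface below is a hypothetical stand-in for a machine's transition function, and a Python list plays the role of the history tape:

```python
def bennett_simulate(step, is_final, x):
    """Sketch of Bennett's compute-copy-uncompute simulation.

    `step` (a one-step transition) and `is_final` (halting test) are
    hypothetical stand-ins for a Turing machine's transition function.
    """
    # Stage 1: compute forward, recording each configuration on the history
    # tape (which grows with the running time T of the computation).
    history = []
    state = x
    while not is_final(state):
        history.append(state)
        state = step(state)
    # Stage 2: copy the output reversibly (a CNOT-style copy onto blank tape).
    output = state
    # Stage 3: run stage 1 backwards, consuming the history tape and
    # restoring the original input, leaving no garbage besides the output.
    while history:
        state = history.pop()
    return state, output

# A toy "machine" that counts up to 5: the input is restored, and the
# output is retained, illustrating why the history tape scales with T.
assert bennett_simulate(lambda n: n + 1, lambda n: n >= 5, 0) == (0, 5)
```

The list's peak length equals the running time, which mirrors the O(S(n)·T(n)) space overhead noted above.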
https://en.wikipedia.org/wiki/Reversible_computing
In quantum mechanics, the Schrödinger equation describes how a system changes with time. It does this by relating changes in the state of the system to the energy in the system (given by an operator called the Hamiltonian). Therefore, once the Hamiltonian is known, the time dynamics are in principle known. All that remains is to plug the Hamiltonian into the Schrödinger equation and solve for the system state as a function of time.[1][2] Often, however, the Schrödinger equation is difficult to solve (even with a computer). Therefore, physicists have developed mathematical techniques to simplify these problems and clarify what is happening physically. One such technique is to apply a unitary transformation to the Hamiltonian. Doing so can result in a simplified version of the Schrödinger equation which nonetheless has the same solution as the original. A unitary transformation (or frame change) can be expressed in terms of a time-dependent Hamiltonian H(t) and unitary operator U(t). Under this change, the Hamiltonian transforms as H → H̆ = UHU† + iℏU̇U†. The Schrödinger equation applies to the new Hamiltonian. Solutions to the untransformed and transformed equations are also related by U. Specifically, if the wave function ψ(t) satisfies the original equation, then Uψ(t) will satisfy the new equation.[3] Recall that by the definition of a unitary matrix, U†U = 1. Beginning with the Schrödinger equation, we can therefore insert the identity U†U = 1 at will.
In particular, inserting it after H/ℏ and also premultiplying both sides by U, we get iℏU∂ψ/∂t = UHU†Uψ (1). Next, note that by the product rule, ∂(Uψ)/∂t = U̇ψ + Uψ̇ (2). Inserting another U†U and rearranging gives iℏU∂ψ/∂t = iℏ∂(Uψ)/∂t − iℏU̇U†Uψ. Finally, combining (1) and (2) above results in the desired transformation: iℏ∂(Uψ)/∂t = (UHU† + iℏU̇U†)Uψ (3). If we adopt the notation ψ̆ := Uψ to describe the transformed wave function, the equations can be written in a clearer form. For instance, (3) can be rewritten as iℏ∂ψ̆/∂t = H̆ψ̆, which has the form of the original Schrödinger equation with H̆ = UHU† + iℏU̇U†. The original wave function can be recovered as ψ = U†ψ̆. Unitary transformations can be seen as a generalization of the interaction (Dirac) picture. In the latter approach, a Hamiltonian is broken into a time-independent part and a time-dependent part, H(t) = H0 + V(t). In this case, the Schrödinger equation becomes ψ̇I = −(i/ℏ)UVU†ψI, with ψI := Uψ. The correspondence to a unitary transformation can be shown by choosing U(t) = exp[+iH0t/ℏ]. As a result, U†(t) = exp[−iH0t/ℏ]. Using the notation from (a) above, our transformed Hamiltonian becomes H̆ = UH0U† + UV(t)U† + iℏU̇U†. First note that since U is a function of H0, the two must commute. Then UH0U† = H0, which takes care of the first term in the transformation in (b), i.e. H̆ = H0 + UV(t)U† + iℏU̇U†. Next use the chain rule to calculate iℏU̇U† = iℏ(iH0/ℏ)UU† = −H0, which cancels with the other H0. Evidently we are left with H̆ = UVU†, yielding ψ̇I = −(i/ℏ)UVU†ψI as shown above.
When applying a general unitary transformation, however, it is not necessary that H(t) be broken into parts, or even that U(t) be a function of any part of the Hamiltonian. Consider an atom with two states, ground |g⟩ and excited |e⟩. The atom has a Hamiltonian H = ℏω|e⟩⟨e|, where ω is the frequency of light associated with the ground-to-excited transition. Now suppose we illuminate the atom with a drive at frequency ωd which couples the two states, described by a time-dependent driven Hamiltonian with some complex drive strength Ω. Because of the competing frequency scales (ω, ωd, and Ω), it is difficult to anticipate the effect of the drive (see driven harmonic motion). Without a drive, the phase of |e⟩ would oscillate relative to |g⟩. In the Bloch sphere representation of a two-state system, this corresponds to rotation around the z-axis. Conceptually, we can remove this component of the dynamics by entering a rotating frame of reference defined by the unitary transformation U = e^{iωt|e⟩⟨e|}. Under this transformation, the bare atomic term is removed and only the drive term remains. If the driving frequency is equal to the g-e transition frequency, ωd = ω, resonance will occur, and the transformed Hamiltonian reduces to a time-independent coupling between the two states. From this it is apparent, even without getting into details, that the dynamics will involve an oscillation between the ground and excited states at frequency Ω.[4] As another limiting case, suppose the drive is far off-resonant, |ωd − ω| ≫ 0. We can figure out the dynamics in that case without solving the Schrödinger equation directly.
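The claim that the rotating frame removes the bare atomic term can be checked numerically. The sketch below (Python with NumPy, working in units where ℏ = 1) evaluates H̆ = UH0U† + iℏU̇U† for U = e^{iωt|e⟩⟨e|} and the undriven Hamiltonian H0 = ℏω|e⟩⟨e|:

```python
import numpy as np

hbar, omega = 1.0, 2.0 * np.pi          # work in units where hbar = 1
e_proj = np.diag([0.0, 1.0])            # |e><e| in the (|g>, |e>) basis
H0 = hbar * omega * e_proj

t = 0.37                                # an arbitrary time
# U = exp(i*omega*t*|e><e|) is diagonal because |e><e| is a projector.
U = np.diag([1.0, np.exp(1j * omega * t)])
Udot = np.diag([0.0, 1j * omega * np.exp(1j * omega * t)])   # dU/dt, analytic

# Transformed Hamiltonian: H' = U H0 U^dag + i*hbar*(dU/dt)*U^dag
H_rot = U @ H0 @ U.conj().T + 1j * hbar * Udot @ U.conj().T
assert np.allclose(H_rot, 0)   # the bare atomic term vanishes in the rotating frame
```

With a drive term added to H0, the same transformation leaves only the (frame-rotated) drive, which is the simplification exploited in the text.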
Suppose the system starts in the ground state |g⟩. Initially, the Hamiltonian will populate some component of |e⟩. A small time later, however, it will populate roughly the same amount of |e⟩ but with a completely different phase. Thus the effect of an off-resonant drive will tend to cancel itself out. This can also be expressed by saying that an off-resonant drive is rapidly rotating in the frame of the atom. These concepts are illustrated in the table below, where the sphere represents the Bloch sphere, the arrow represents the state of the atom, and the hand represents the drive. The example above could also have been analyzed in the interaction picture. The following example, however, is more difficult to analyze without the general formulation of unitary transformations. Consider two harmonic oscillators, between which we would like to engineer a beam splitter interaction. This was achieved experimentally with two microwave cavity resonators serving as a and b.[5] Below, we sketch the analysis of a simplified version of this experiment. In addition to the microwave cavities, the experiment also involved a transmon qubit, c, coupled to both modes. The qubit is driven simultaneously at two frequencies, ω1 and ω2, for which ω1 − ω2 = ωa − ωb. In addition, there are many fourth-order terms coupling the modes, but most of them can be neglected. In this experiment, two such terms will become important. (H.c. is shorthand for the Hermitian conjugate.) We can apply a displacement transformation, U = D(−ξ1e^{−iω1t} − ξ2e^{−iω2t}), to mode c.
For carefully chosen amplitudes, this transformation will cancel Hdrive while also displacing the ladder operator, c → c + ξ1e^{−iω1t} + ξ2e^{−iω2t}. Expanding the resulting expression and dropping the rapidly rotating terms, we are left with the desired beam splitter Hamiltonian. It is common for the operators involved in unitary transformations to be written as exponentials of operators, U = e^X, as seen above. Further, the operators in the exponentials commonly obey the relation X† = −X, so that the transform of an operator Y is UYU† = e^X Y e^{−X}. By introducing the iterated commutator, we can use a special result of the Baker–Campbell–Hausdorff formula to write this transformation compactly as a sum over nested commutators, or, in long form for completeness, e^X Y e^{−X} = Y + [X, Y] + (1/2!)[X, [X, Y]] + (1/3!)[X, [X, [X, Y]]] + ⋯.
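The nested-commutator series for e^X Y e^{−X} can be verified numerically against a direct matrix exponential. The sketch below uses NumPy only, with a small random anti-Hermitian X so that the series converges quickly (the matrix exponential is computed by a plain Taylor series, adequate for these small matrices):

```python
import numpy as np

def ad(X, Y):
    """Commutator [X, Y]."""
    return X @ Y - Y @ X

def mat_exp(M, terms=40):
    """Taylor-series matrix exponential; fine for the small matrices used here."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
X = 0.1 * (A - A.T)     # real antisymmetric, so X^T = -X (anti-Hermitian)
Y = rng.normal(size=(3, 3))

# BCH/Hadamard lemma: e^X Y e^{-X} = Y + [X,Y] + [X,[X,Y]]/2! + ...
series = Y.copy()
term = Y.copy()
for n in range(1, 25):
    term = ad(X, term) / n
    series = series + term

direct = mat_exp(X) @ Y @ mat_exp(-X)
assert np.allclose(series, direct)
```

Truncating the series after one or two commutators is the usual way such transformed Hamiltonians are approximated in practice.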
https://en.wikipedia.org/wiki/Unitary_transformation_(quantum_mechanics)
Exponential growth occurs when a quantity grows as an exponential function of time. The quantity grows at a rate directly proportional to its present size. For example, when it is 3 times as big as it is now, it will be growing 3 times as fast as it is now. In more technical language, the instantaneous rate of change (that is, the derivative) of a quantity with respect to an independent variable is proportional to the quantity itself. Often the independent variable is time. Described as a function, a quantity undergoing exponential growth is an exponential function of time; that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). Exponential growth is the inverse of logarithmic growth. Not all cases of growth at an always increasing rate are instances of exponential growth. For example, the function f(x) = x³ grows at an ever increasing rate, but is much slower than growing exponentially: when x = 1, it grows at 3 times its size, but when x = 10 it grows at only 30% of its size. If an exponentially growing function grows at a rate that is 3 times its present size, then it always grows at a rate that is 3 times its present size; when it is 10 times as big as it is now, it will grow 10 times as fast. If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoing exponential decay instead. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay, since the function values form a geometric progression. The formula for exponential growth of a variable x at the growth rate r, as time t goes on in discrete intervals (that is, at integer times 0, 1, 2, 3, ...), is xt = x0(1 + r)^t, where x0 is the value of x at time 0. The growth of a bacterial colony is often used to illustrate it.
One bacterium splits itself into two, each of which splits itself resulting in four, then eight, 16, 32, and so on. The amount of increase keeps increasing because it is proportional to the ever-increasing number of bacteria. Growth like this is observed in real-life activity or phenomena, such as the spread of virus infection, the growth of debt due to compound interest, and the spread of viral videos. In real cases, initial exponential growth often does not last forever, instead slowing down eventually due to upper limits caused by external factors and turning into logistic growth. Terms like "exponential growth" are sometimes incorrectly interpreted as "rapid growth". Indeed, something that grows exponentially can in fact be growing slowly at first.[1][2] A quantity x depends exponentially on time t if x(t) = a·b^(t/τ), where the constant a is the initial value of x, x(0) = a, the constant b is a positive growth factor, and τ is the time constant, the time required for x to increase by one factor of b: x(t + τ) = a·b^((t+τ)/τ) = a·b^(t/τ)·b^(τ/τ) = x(t)·b. If τ > 0 and b > 1, then x has exponential growth. If τ < 0 and b > 1, or τ > 0 and 0 < b < 1, then x has exponential decay. Example: If a species of bacteria doubles every ten minutes, starting out with only one bacterium, how many bacteria would be present after one hour? The question implies a = 1, b = 2 and τ = 10 min. x(t) = a·b^(t/τ) = 1·2^(t/(10 min)), so x(1 hr) = 1·2^((60 min)/(10 min)) = 1·2⁶ = 64. After one hour, or six ten-minute intervals, there would be sixty-four bacteria.
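The bacteria example can be reproduced directly from x(t) = a·b^(t/τ):

```python
def exponential(a, b, t, tau):
    """x(t) = a * b**(t/tau): initial value a, growth factor b per time constant tau."""
    return a * b ** (t / tau)

# One bacterium doubling every 10 minutes, observed for one hour (60 minutes):
assert exponential(1, 2, t=60, tau=10) == 64
```

The same function covers exponential decay simply by taking 0 < b < 1 (or a negative time constant), matching the sign conditions stated above.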
Many pairs (b, τ) of a dimensionless non-negative number b and an amount of time τ (a physical quantity which can be expressed as the product of a number of units and a unit of time) represent the same growth rate, with τ proportional to log b. For any fixed b not equal to 1 (e.g. e or 2), the growth rate is given by the non-zero time τ. For any non-zero time τ, the growth rate is given by the dimensionless positive number b. Thus the law of exponential growth can be written in different but mathematically equivalent forms, by using a different base. The most common forms are the following: x(t) = x0·e^(kt) = x0·e^(t/τ) = x0·2^(t/T) = x0·(1 + r/100)^(t/p), where x0 expresses the initial quantity x(0). The parameters are the growth constant k, the e-folding time τ, the doubling time T, and the percentage growth rate r per period p (all negative in the case of exponential decay). The quantities k, τ, and T, and for a given p also r, have a one-to-one connection given by the following equation (which can be derived by taking the natural logarithm of the above): k = 1/τ = (ln 2)/T = ln(1 + r/100)/p, where k = 0 corresponds to r = 0 and to τ and T being infinite. If p is the unit of time, the quotient t/p is simply the number of units of time. Using the notation t for the (dimensionless) number of units of time rather than the time itself, t/p can be replaced by t, but for uniformity this has been avoided here. In this case the division by p in the last formula is not a numerical division either, but converts a dimensionless number to the correct quantity including unit. A popular approximate method for calculating the doubling time from the growth rate is the rule of 70, that is, T ≃ 70/r.
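The relation k = ln(1 + r/100)/p and the rule-of-70 approximation T ≃ 70/r can be checked for, say, 3% growth per period:

```python
import math

r, p = 3.0, 1.0                      # 3% growth per unit time
k = math.log(1 + r / 100) / p        # equivalent continuous growth constant
T = math.log(2) / k                  # exact doubling time (about 23.45 periods)

# The rule of 70 gives 70/3 = 23.33..., within a percent of the exact value.
assert abs(T - 70 / r) / T < 0.02
```

For small r the approximation works because ln(1 + r/100) ≈ r/100 and 100·ln 2 ≈ 69.3, which is rounded up to 70 for mental arithmetic.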
If a variable x exhibits exponential growth according to x(t) = x0(1 + r)^t, then the log (to any base) of x grows linearly over time, as can be seen by taking logarithms of both sides of the exponential growth equation: log x(t) = log x0 + t·log(1 + r). This allows an exponentially growing variable to be modeled with a log-linear model. For example, if one wishes to empirically estimate the growth rate from intertemporal data on x, one can linearly regress log x on t. The exponential function x(t) = x0·e^(kt) satisfies the linear differential equation dx/dt = kx, saying that the change per instant of time of x at time t is proportional to the value of x(t), and x(t) has the initial value x(0) = x0. The differential equation is solved by direct integration: from dx/dt = kx we get dx/x = k dt; integrating from x0 to x(t) on the left and from 0 to t on the right gives ln(x(t)/x0) = kt, so that x(t) = x0·e^(kt). In the above differential equation, if k < 0, then the quantity experiences exponential decay. For a nonlinear variation of this growth model see logistic function. In the long run, exponential growth of any kind will overtake linear growth of any kind (that is the basis of the Malthusian catastrophe) as well as any polynomial growth, that is, for all α: lim(t→∞) t^α/(a·e^t) = 0. There is a whole hierarchy of conceivable growth rates that are slower than exponential and faster than linear (in the long run). See Degree of a polynomial § Computed from the function values. Growth rates may also be faster than exponential. In the most extreme case, when growth increases without bound in finite time, it is called hyperbolic growth.
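The log-linear estimation procedure mentioned above can be sketched with NumPy: generate exact exponential data, regress log x on t, and recover r and x0 (a noiseless illustration of the idea, not a statistical treatment):

```python
import numpy as np

r, x0 = 0.05, 100.0
t = np.arange(20)
x = x0 * (1 + r) ** t                 # exact exponential data, no noise

# log x(t) = log x0 + t*log(1+r): fitting a line recovers both parameters.
slope, intercept = np.polyfit(t, np.log(x), 1)
assert np.isclose(np.exp(slope) - 1, r)      # estimated growth rate
assert np.isclose(np.exp(intercept), x0)     # estimated initial value
```

With real (noisy) data the same regression yields estimates rather than exact values, which is precisely why the log-linear model is the standard empirical tool here.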
In between exponential and hyperbolic growth lie more classes of growth behavior, like the hyperoperations beginning at tetration, and A(n, n), the diagonal of the Ackermann function. In reality, initial exponential growth is often not sustained forever. After some period, it will be slowed by external or environmental factors. For example, population growth may reach an upper limit due to resource limitations.[9] In 1845, the Belgian mathematician Pierre François Verhulst first proposed a mathematical model of growth like this, called "logistic growth".[10] Exponential growth models of physical phenomena only apply within limited regions, as unbounded growth is not physically realistic. Although growth may initially be exponential, the modelled phenomena will eventually enter a region in which previously ignored negative feedback factors become significant (leading to a logistic growth model) or other underlying assumptions of the exponential growth model, such as continuity or instantaneous feedback, break down. Studies show that human beings have difficulty understanding exponential growth. Exponential growth bias is the tendency to underestimate compound growth processes. This bias can have financial implications as well.[11] According to legend, vizier Sissa Ben Dahir presented an Indian King Sharim with a beautiful handmade chessboard. The king asked what he would like in return for his gift, and the courtier surprised the king by asking for one grain of rice on the first square, two grains on the second, four grains on the third, and so on. The king readily agreed and asked for the rice to be brought. All went well at first, but the requirement for 2^(n−1) grains on the nth square demanded over a million grains on the 21st square, more than a million million (a.k.a. trillion) on the 41st, and there simply was not enough rice in the whole world for the final squares.
(From Swirski, 2006)[12] The "second half of the chessboard" refers to the time when an exponentially growing influence is having a significant economic impact on an organization's overall business strategy. French children are offered a riddle which illustrates an aspect of exponential growth: "the apparent suddenness with which an exponentially growing quantity approaches a fixed limit". The riddle imagines a water lily plant growing in a pond. The plant doubles in size every day and, if left alone, it would smother the pond in 30 days, killing all the other living things in the water. Day after day, the plant's growth is small, so it is decided that it won't be a concern until it covers half of the pond. Which day will that be? The 29th day, leaving only one day to save the pond.[13][12]
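Both anecdotes reduce to one-line computations:

```python
# Wheat-and-chessboard: 2**(n-1) grains on square n.
assert 2 ** (21 - 1) > 1_000_000             # over a million on the 21st square
assert 2 ** (41 - 1) > 1_000_000_000_000     # over a trillion on the 41st
total = sum(2 ** (n - 1) for n in range(1, 65))
assert total == 2 ** 64 - 1                  # all 64 squares: about 1.8e19 grains

# Lily pond doubling daily and covering the pond on day 30:
# coverage(d) = 2 ** (d - 30), so it is half-covered exactly one day before.
assert 2.0 ** (29 - 30) == 0.5
```

The "suddenness" in both stories is the same fact viewed backwards: each doubling step contributes as much as all previous steps combined.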
https://en.wikipedia.org/wiki/Exponential_growth
Radioactive decay (also known as nuclear decay, radioactivity, radioactive disintegration, or nuclear disintegration) is the process by which an unstable atomic nucleus loses energy by radiation. A material containing unstable nuclei is considered radioactive. Three of the most common types of decay are alpha, beta, and gamma decay. The weak force is the mechanism that is responsible for beta decay, while the other two are governed by the electromagnetic and nuclear forces.[1] Radioactive decay is a random process at the level of single atoms. According to quantum theory, it is impossible to predict when a particular atom will decay, regardless of how long the atom has existed.[2][3][4] However, for a significant number of identical atoms, the overall decay rate can be expressed as a decay constant or as a half-life. The half-lives of radioactive atoms have a huge range: from nearly instantaneous to far longer than the age of the universe. The decaying nucleus is called the parent radionuclide (or parent radioisotope), and the process produces at least one daughter nuclide. Except for gamma decay or internal conversion from a nuclear excited state, the decay is a nuclear transmutation resulting in a daughter containing a different number of protons or neutrons (or both). When the number of protons changes, an atom of a different chemical element is created. There are 28 naturally occurring chemical elements on Earth that are radioactive, consisting of 35 radionuclides (seven elements have two different radionuclides each) that date before the time of formation of the Solar System. These 35 are known as primordial radionuclides. Well-known examples are uranium and thorium, but also included are naturally occurring long-lived radioisotopes, such as potassium-40. Each of the heavy primordial radionuclides participates in one of the four decay chains.
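The decay-constant and half-life descriptions are two parametrizations of the same law, related by λ = ln 2 / t½, giving the survival fraction N(t)/N0 = e^(−λt); a minimal check in Python:

```python
import math

def remaining_fraction(t, half_life):
    """N(t)/N0 = exp(-lambda*t) with decay constant lambda = ln(2)/half_life."""
    lam = math.log(2) / half_life
    return math.exp(-lam * t)

# After one half-life, half remains; after two half-lives, a quarter.
assert math.isclose(remaining_fraction(1, 1), 0.5)
assert math.isclose(remaining_fraction(2, 1), 0.25)
```

The exponential form only describes the expected behavior of a large ensemble; for a single atom, the decay time remains random, as the text notes.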
Henri Poincaré laid the seeds for the discovery of radioactivity through his interest in and studies of X-rays, which significantly influenced physicist Henri Becquerel.[5] Radioactivity was discovered in 1896 by Becquerel and independently by Marie Curie, while working with phosphorescent materials.[6][7][8][9][10] These materials glow in the dark after exposure to light, and Becquerel suspected that the glow produced in cathode-ray tubes by X-rays might be associated with phosphorescence. He wrapped a photographic plate in black paper and placed various phosphorescent salts on it. All results were negative until he used uranium salts. The uranium salts caused a blackening of the plate in spite of the plate being wrapped in black paper. These radiations were given the name "Becquerel Rays". It soon became clear that the blackening of the plate had nothing to do with phosphorescence, as the blackening was also produced by non-phosphorescent salts of uranium and by metallic uranium. It became clear from these experiments that there was a form of invisible radiation that could pass through paper and was causing the plate to react as if exposed to light. At first, it seemed as though the new radiation was similar to the then recently discovered X-rays. Further research by Becquerel, Ernest Rutherford, Paul Villard, Pierre Curie, Marie Curie, and others showed that this form of radioactivity was significantly more complicated. Rutherford was the first to realize that all such elements decay in accordance with the same mathematical exponential formula. Rutherford and his student Frederick Soddy were the first to realize that many decay processes resulted in the transmutation of one element to another. Subsequently, the radioactive displacement law of Fajans and Soddy was formulated to describe the products of alpha and beta decay.[11][12] The early researchers also discovered that many other chemical elements, besides uranium, have radioactive isotopes.
A systematic search for the total radioactivity in uranium ores also guided Pierre and Marie Curie to isolate two new elements: polonium and radium. Except for the radioactivity of radium, the chemical similarity of radium to barium made these two elements difficult to distinguish. Marie and Pierre Curie's study of radioactivity is an important factor in science and medicine. After their research on Becquerel's rays led them to the discovery of both radium and polonium, they coined the term "radioactivity"[13] to define the emission of ionizing radiation by some heavy elements.[14] (Later the term was generalized to all elements.) Their research on the penetrating rays in uranium and the discovery of radium launched an era of using radium for the treatment of cancer. Their exploration of radium could be seen as the first peaceful use of nuclear energy and the start of modern nuclear medicine.[13] The dangers of ionizing radiation due to radioactivity and X-rays were not immediately recognized. The discovery of X-rays by Wilhelm Röntgen in 1895 led to widespread experimentation by scientists, physicians, and inventors. Many people began recounting stories of burns, hair loss and worse in technical journals as early as 1896. In February of that year, Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving X-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, of his suffering severe hand and chest burns in an X-ray demonstration, was the first of many other reports in Electrical Review.[15] Other experimenters, including Elihu Thomson and Nikola Tesla, also reported burns.
Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering.[16] Other effects, including ultraviolet rays and ozone, were sometimes blamed for the damage,[17] and many physicians still claimed that there were no effects from X-ray exposure at all.[16] Despite this, there were some early systematic hazard investigations, and as early as 1902 William Herbert Rollins wrote almost despairingly that his warnings about the dangers involved in the careless use of X-rays were not being heeded, either by industry or by his colleagues. By this time, Rollins had proved that X-rays could kill experimental animals, could cause a pregnant guinea pig to abort, and that they could kill a foetus. He also stressed that "animals vary in susceptibility to the external action of X-light" and warned that these differences be considered when patients were treated by means of X-rays. However, the biological effects of radiation due to radioactive substances were less easy to gauge. This gave the opportunity for many physicians and corporations to market radioactive substances as patent medicines. Examples were radium enema treatments, and radium-containing waters to be drunk as tonics. Marie Curie protested against this sort of treatment, warning that "radium is dangerous in untrained hands".[18] Curie later died from aplastic anaemia, likely caused by exposure to ionizing radiation. By the 1930s, after a number of cases of bone necrosis and death of radium treatment enthusiasts, radium-containing medicinal products had been largely removed from the market (radioactive quackery). Only a year after Röntgen's discovery of X-rays, the American engineer Wolfram Fuchs (1896) gave what is probably the first protection advice, but it was not until 1925 that the first International Congress of Radiology (ICR) was held and considered establishing international protection standards.
The effects of radiation on genes, including the effect of cancer risk, were recognized much later. In 1927, Hermann Joseph Muller published research showing genetic effects and, in 1946, was awarded the Nobel Prize in Physiology or Medicine for his findings. The second ICR was held in Stockholm in 1928 and proposed the adoption of the röntgen unit, and the International X-ray and Radium Protection Committee (IXRPC) was formed. Rolf Sievert was named chairman, but a driving force was George Kaye of the British National Physical Laboratory. The committee met in 1931, 1934, and 1937. After World War II, the increased range and quantity of radioactive substances being handled as a result of military and civil nuclear programs led to large groups of occupational workers and the public being potentially exposed to harmful levels of ionising radiation. This was considered at the first post-war ICR, convened in London in 1950, when the present International Commission on Radiological Protection (ICRP) was born.[19] Since then the ICRP has developed the present international system of radiation protection, covering all aspects of radiation hazards. In 2020, Hauptmann and another 15 international researchers from eight nations (among them institutes of biostatistics and registry research, centers of cancer epidemiology and radiation epidemiology, and also the U.S. National Cancer Institute (NCI), the International Agency for Research on Cancer (IARC), and the Radiation Effects Research Foundation of Hiroshima) studied definitively, through meta-analysis, the damage resulting from the "low doses" that have afflicted survivors of the atomic bombings of Hiroshima and Nagasaki and also in numerous accidents at nuclear plants that have occurred.
These scientists reported, in JNCI Monographs: Epidemiological Studies of Low Dose Ionizing Radiation and Cancer Risk, that the new epidemiological studies directly support excess cancer risks from low-dose ionizing radiation.[20] In 2021, Italian researcher Sebastiano Venturi reported the first correlations between radio-caesium and pancreatic cancer, with the role of caesium in biology, in pancreatitis, and in diabetes of pancreatic origin.[21] The International System of Units (SI) unit of radioactive activity is the becquerel (Bq), named in honor of the scientist Henri Becquerel. One Bq is defined as one transformation (or decay or disintegration) per second. An older unit of radioactivity is the curie, Ci, which was originally defined as "the quantity or mass of radium emanation in equilibrium with one gram of radium (element)".[22] Today, the curie is defined as 3.7×10¹⁰ disintegrations per second, so that 1 curie (Ci) = 3.7×10¹⁰ Bq. For radiological protection purposes, although the United States Nuclear Regulatory Commission permits the use of the unit curie alongside SI units,[23] the European Union European units of measurement directives required that its use for "public health ... purposes" be phased out by 31 December 1985.[24] The effects of ionizing radiation are often measured in units of gray for mechanical effects or sievert for damage to tissue. Radioactive decay results in a reduction of summed rest mass, once the released energy (the disintegration energy) has escaped in some way. Although decay energy is sometimes defined as associated with the difference between the mass of the parent nuclide and the mass of the decay products, this is true only of rest mass measurements, where some energy has been removed from the product system. This is true because the decay energy must always carry mass with it, wherever it appears (see mass in special relativity), according to the formula E = mc².
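The curie-becquerel relationship is a fixed conversion factor; a trivial helper (the function name is ours, purely illustrative) makes the arithmetic explicit:

```python
CI_TO_BQ = 3.7e10   # 1 curie = 3.7e10 disintegrations per second, by definition

def curies_to_becquerels(ci):
    """Convert an activity in curies to becquerels (hypothetical helper)."""
    return ci * CI_TO_BQ

assert curies_to_becquerels(1) == 3.7e10
# A microcurie, a common scale for laboratory sources, is 37 kBq:
assert abs(curies_to_becquerels(1e-6) - 3.7e4) < 1e-6
```

Note that Bq and Ci measure activity (decays per second), not dose; dose quantities use the gray and sievert mentioned above.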
The decay energy is initially released as the energy of emitted photons plus the kinetic energy of massive emitted particles (that is, particles that have rest mass). If these particles come to thermal equilibrium with their surroundings and photons are absorbed, then the decay energy is transformed to thermal energy, which retains its mass. Decay energy, therefore, remains associated with a certain measure of the mass of the decay system, called invariant mass, which does not change during the decay, even though the energy of decay is distributed among decay particles. The energy of photons, the kinetic energy of emitted particles, and, later, the thermal energy of the surrounding matter, all contribute to the invariant mass of the system. Thus, while the sum of the rest masses of the particles is not conserved in radioactive decay, the system mass and system invariant mass (and also the system total energy) are conserved throughout any decay process. This is a restatement of the equivalent laws of conservation of energy and conservation of mass. Early researchers found that an electric or magnetic field could split radioactive emissions into three types of beams. The rays were given the names alpha, beta, and gamma, in increasing order of their ability to penetrate matter. Alpha decay is observed only in heavier elements of atomic number 52 (tellurium) and greater, with the exception of beryllium-8 (which decays to two alpha particles). The other two types of decay are observed in all the elements. Lead, atomic number 82, is the heaviest element to have any isotopes stable (to the limit of measurement) to radioactive decay. Radioactive decay is seen in all isotopes of all elements of atomic number 83 (bismuth) or greater. Bismuth-209, however, is only very slightly radioactive, with a half-life greater than the age of the universe; radioisotopes with extremely long half-lives are considered effectively stable for practical purposes.
In analyzing the nature of the decay products, it was obvious from the direction of the electromagnetic forces applied to the radiations by external magnetic and electric fields that alpha particles carried a positive charge, beta particles carried a negative charge, and gamma rays were neutral. From the magnitude of deflection, it was clear that alpha particles were much more massive than beta particles. Passing alpha particles through a very thin glass window and trapping them in a discharge tube allowed researchers to study the emission spectrum of the captured particles, and ultimately proved that alpha particles are helium nuclei. Other experiments showed that beta radiation, resulting from decay, and cathode rays were high-speed electrons. Likewise, gamma radiation and X-rays were found to be high-energy electromagnetic radiation. The relationship between the types of decay also began to be examined: for example, gamma decay was almost always found to be associated with other types of decay, and occurred at about the same time or afterwards. Gamma decay as a separate phenomenon, with its own half-life (now termed isomeric transition), was found in natural radioactivity to be a result of the gamma decay of excited metastable nuclear isomers, which were in turn created from other types of decay. Although alpha, beta, and gamma radiations were most commonly found, other types of emission were eventually discovered. Shortly after the discovery of the positron in cosmic ray products, it was realized that the same process that operates in classical beta decay can also produce positrons (positron emission), along with neutrinos (classical beta decay produces antineutrinos).
In electron capture, some proton-rich nuclides were found to capture their own atomic electrons instead of emitting positrons; subsequently, these nuclides emit only a neutrino and a gamma ray from the excited nucleus (and often also Auger electrons and characteristic X-rays, as a result of the re-ordering of electrons to fill the place of the missing captured electron). These types of decay involve the nuclear capture of electrons or the emission of electrons or positrons, and thus act to move a nucleus toward the ratio of neutrons to protons that has the least energy for a given total number of nucleons. This consequently produces a more stable (lower energy) nucleus. A hypothetical process of positron capture, analogous to electron capture, is theoretically possible in antimatter atoms, but has not been observed, as complex antimatter atoms beyond antihelium are not experimentally available.[25] Such a decay would require antimatter atoms at least as complex as beryllium-7, which is the lightest known isotope of normal matter to undergo decay by electron capture.[26] Shortly after the discovery of the neutron in 1932, Enrico Fermi realized that certain rare beta-decay reactions immediately yield neutrons as an additional decay particle, so-called beta-delayed neutron emission. Neutron emission usually happens from nuclei that are in an excited state, such as the excited 17O* produced from the beta decay of 17N. The neutron emission process itself is controlled by the nuclear force and therefore is extremely fast, sometimes referred to as "nearly instantaneous". Isolated proton emission was eventually observed in some elements. It was also found that some heavy elements may undergo spontaneous fission into products that vary in composition. In a phenomenon called cluster decay, specific combinations of neutrons and protons other than alpha particles (helium nuclei) were found to be spontaneously emitted from atoms.
Other types of radioactive decay were found to emit previously seen particles but via different mechanisms. An example is internal conversion, which results in an initial electron emission, and then often further characteristic X-ray and Auger electron emissions, although the internal conversion process involves neither beta nor gamma decay. A neutrino is not emitted, and none of the electron(s) and photon(s) emitted originate in the nucleus, even though the energy to emit all of them does originate there. Internal conversion decay, like isomeric transition gamma decay and neutron emission, involves the release of energy by an excited nuclide, without the transmutation of one element into another. Rare events that involve a combination of two beta-decay-type events happening simultaneously are known (see below). Any decay process that does not violate the conservation of energy or momentum laws (and perhaps other particle conservation laws) is permitted to happen, although not all have been detected. An interesting example, discussed in a final section, is bound state beta decay of rhenium-187. In this process, the beta electron decay of the parent nuclide is not accompanied by beta electron emission, because the beta particle has been captured into the K-shell of the emitting atom. An antineutrino is emitted, as in all negative beta decays. If energy circumstances are favorable, a given radionuclide may undergo many competing types of decay, with some atoms decaying by one route, and others decaying by another. An example is copper-64, which has 29 protons and 35 neutrons, and which decays with a half-life of 12.7004(13) hours.[27] This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay to the other particle, which has opposite isospin. This particular nuclide (though not all nuclides in this situation) is more likely to decay through beta plus decay (61.52(26)%[27]) than through electron capture (38.48(26)%[27]).
The excited energy states resulting from these decays which fail to end in a ground energy state also produce later internal conversion and gamma decay in almost 0.5% of the time. The daughter nuclide of a decay event may also be unstable (radioactive). In this case, it too will decay, producing radiation. The resulting second daughter nuclide may also be radioactive. This can lead to a sequence of several decay events called a decay chain (see this article for specific details of important natural decay chains). Eventually, a stable nuclide is produced. Any decay daughters that are the result of an alpha decay will also result in helium atoms being created. Some radionuclides may have several different paths of decay. For example, 35.94(6)%[27] of bismuth-212 decays, through alpha emission, to thallium-208, while 64.06(6)%[27] of bismuth-212 decays, through beta emission, to polonium-212. Both thallium-208 and polonium-212 are radioactive daughter products of bismuth-212, and both decay directly to stable lead-208. According to the Big Bang theory, stable isotopes of the lightest three elements (H, He, and traces of Li) were produced very shortly after the emergence of the universe, in a process called Big Bang nucleosynthesis. These lightest stable nuclides (including deuterium) survive to today, but any radioactive isotopes of the light elements produced in the Big Bang (such as tritium) have long since decayed. Isotopes of elements heavier than boron were not produced at all in the Big Bang, and these first five elements do not have any long-lived radioisotopes. Thus, all radioactive nuclei are relatively young with respect to the birth of the universe, having formed later in various other types of nucleosynthesis in stars (in particular, supernovae), and also during ongoing interactions between stable isotopes and energetic particles.
For example, carbon-14, a radioactive nuclide with a half-life of only 5700(30) years,[27] is constantly produced in Earth's upper atmosphere due to interactions between cosmic rays and nitrogen. Nuclides that are produced by radioactive decay are called radiogenic nuclides, whether they themselves are stable or not. There exist stable radiogenic nuclides that were formed from short-lived extinct radionuclides in the early Solar System.[28][29] The extra presence of these stable radiogenic nuclides (such as xenon-129 from extinct iodine-129) against the background of primordial stable nuclides can be inferred by various means. Radioactive decay has been put to use in the technique of radioisotopic labeling, which is used to track the passage of a chemical substance through a complex system (such as a living organism). A sample of the substance is synthesized with a high concentration of unstable atoms. The presence of the substance in one or another part of the system is determined by detecting the locations of decay events. On the premise that radioactive decay is truly random (rather than merely chaotic), it has been used in hardware random-number generators. Because the process is not thought to vary significantly in mechanism over time, it is also a valuable tool in estimating the absolute ages of certain materials. For geological materials, the radioisotopes and some of their decay products become trapped when a rock solidifies, and can then later be used (subject to many well-known qualifications) to estimate the date of the solidification. These include checking the results of several simultaneous processes and their products against each other, within the same sample.
In a similar fashion, and also subject to qualification, the date of formation of organic matter within a certain period related to the isotope's half-life may be estimated from the rate of formation of carbon-14 in various eras, because the carbon-14 becomes trapped when the organic matter grows and incorporates the new carbon-14 from the air. Thereafter, the amount of carbon-14 in organic matter decreases according to decay processes that may also be independently cross-checked by other means (such as checking the carbon-14 in individual tree rings, for example). The Szilard–Chalmers effect is the breaking of a chemical bond as a result of kinetic energy imparted by radioactive decay. It operates by the absorption of neutrons by an atom and the subsequent emission of gamma rays, often with significant amounts of kinetic energy. This kinetic energy, by Newton's third law, pushes back on the decaying atom, which causes it to move with enough speed to break a chemical bond.[30] This effect can be used to separate isotopes by chemical means. The Szilard–Chalmers effect was discovered in 1934 by Leó Szilárd and Thomas A. Chalmers.[31] They observed that after bombardment by neutrons, the breaking of a bond in liquid ethyl iodide allowed radioactive iodine to be removed.[32] Radioactive primordial nuclides found in the Earth are residues from ancient supernova explosions that occurred before the formation of the Solar System. They are the fraction of radionuclides that survived from that time, through the formation of the primordial solar nebula, through planet accretion, and up to the present time. The naturally occurring short-lived radiogenic radionuclides found in today's rocks are the daughters of those radioactive primordial nuclides. Another minor source of naturally occurring radioactive nuclides are cosmogenic nuclides, formed by cosmic ray bombardment of material in the Earth's atmosphere or crust.
The decay of the radionuclides in rocks of the Earth's mantle and crust contributes significantly to Earth's internal heat budget. While the underlying process of radioactive decay is subatomic, historically and in most practical cases it is encountered in bulk materials with very large numbers of atoms. This section discusses models that connect events at the atomic level to observations in aggregate. The decay rate, or activity, of a radioactive substance is characterized by the following time-independent parameters: Although these are constants, they are associated with the statistical behavior of populations of atoms. In consequence, predictions using these constants are less accurate for minuscule samples of atoms. In principle a half-life, a third-life, or even a (1/√2)-life could be used in exactly the same way as half-life; but the mean life and half-life t1/2 have been adopted as standard times associated with exponential decay. Those parameters can be related to the following time-dependent parameters: These are related as follows: where N0 is the initial amount of active substance, i.e. substance that has the same percentage of unstable particles as when the substance was formed. The mathematics of radioactive decay depend on a key assumption that a nucleus of a radionuclide has no "memory" or way of translating its history into its present behavior. A nucleus does not "age" with the passage of time. Thus, the probability of its breaking down does not increase with time but stays constant, no matter how long the nucleus has existed. This constant probability may differ greatly between one type of nucleus and another, leading to the many different observed decay rates. However, whatever the probability is, it does not change over time. This is in marked contrast to complex objects that do show aging, such as automobiles and humans. These aging systems do have a chance of breakdown per unit of time that increases from the moment they begin their existence.
Aggregate processes, like the radioactive decay of a lump of atoms, for which the single-event probability of realization is very small but in which the number of time-slices is so large that there is nevertheless a reasonable rate of events, are modelled by the Poisson distribution, which is discrete. Radioactive decay and nuclear particle reactions are two examples of such aggregate processes.[33] The mathematics of Poisson processes reduce to the law of exponential decay, which describes the statistical behaviour of a large number of nuclei, rather than one individual nucleus. In the following formalism, the number of nuclei, or the nuclei population N, is of course a discrete variable (a natural number), but for any physical sample N is so large that it can be treated as a continuous variable. Differential calculus is used to model the behaviour of nuclear decay. Consider the case of a nuclide A that decays into another B by some process A → B (emission of other particles, like electron neutrinos νe and electrons e− as in beta decay, is irrelevant in what follows). The decay of an unstable nucleus is entirely random in time, so it is impossible to predict when a particular atom will decay. However, it is equally likely to decay at any instant in time. Therefore, given a sample of a particular radioisotope, the number of decay events −dN expected to occur in a small interval of time dt is proportional to the number of atoms present N, that is[34] Particular radionuclides decay at different rates, so each has its own decay constant λ. The expected decay −dN/N is proportional to an increment of time, dt:

−dN/N = λ dt

The negative sign indicates that N decreases as time increases, as the decay events follow one after another.
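The "no memory" assumption above can be checked numerically: a constant decay probability per unit time means each nucleus has an exponentially distributed lifetime, and a large sample then follows the exponential decay law. A minimal Monte Carlo sketch, with a hypothetical decay constant:

```python
import math
import random

random.seed(42)

lam = 0.1        # hypothetical decay constant, per year
N0 = 100_000     # initial number of nuclei
t = 10.0         # observation time, years

# A constant per-unit-time decay probability means each nucleus has an
# exponentially distributed lifetime with rate lam ("no memory").
survivors = sum(1 for _ in range(N0) if random.expovariate(lam) > t)

expected = N0 * math.exp(-lam * t)   # exponential decay law: N(t) = N0 e^(-lam t)
print(survivors, round(expected))    # the simulated count is close to the law's prediction
```

The agreement improves as N0 grows, which is why the continuous treatment is accurate for any macroscopic sample.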
The solution to this first-order differential equation is the function

N = N0 e^(−λt)

where N0 is the value of N at time t = 0, with the decay constant expressed as λ.[34] We have for all time t: NA + NB = Ntotal = NA0, where Ntotal is the constant number of particles throughout the decay process, which is equal to the initial number of A nuclides since this is the initial substance. If the number of non-decayed A nuclei is NA = NA0 e^(−λt), then the number of nuclei of B (i.e. the number of decayed A nuclei) is NB = NA0 − NA = NA0 (1 − e^(−λt)). The number of decays observed over a given interval obeys Poisson statistics. If the average number of decays is ⟨N⟩, the probability of a given number of decays N is[34] P(N) = (⟨N⟩^N / N!) e^(−⟨N⟩). Now consider the case of a chain of two decays: one nuclide A decaying into another B by one process, then B decaying into another C by a second process, i.e. A → B → C. The previous equation cannot be applied to the decay chain, but can be generalized as follows. Since A decays into B, then B decays into C, the activity of A adds to the total number of B nuclides in the present sample, before those B nuclides decay and reduce the number of nuclides leading to the later sample. In other words, the number of second-generation nuclei B increases as a result of the decay of first-generation nuclei A, and decreases as a result of its own decay into third-generation nuclei C.[35] The sum of these two terms gives the law for a decay chain for two nuclides: the rate of change of NB, that is dNB/dt, is related to the changes in the amounts of A and B; NB can increase as B is produced from A and decrease as B produces C. Re-writing using the previous results:

dNB/dt = −λB NB + λA NA0 e^(−λA t)

The subscripts simply refer to the respective nuclides, i.e. NA is the number of nuclides of type A; NA0 is the initial number of nuclides of type A; λA is the decay constant for A, and similarly for nuclide B.
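The two-nuclide rate equation above can be integrated numerically and compared with its closed-form solution. This is an illustrative sketch with hypothetical decay constants; the Euler step is the simplest possible integrator, not a recommendation:

```python
import math

lam_A, lam_B = 0.5, 0.2     # hypothetical decay constants for A and B
NA0 = 1000.0                # initial number of A nuclei; B starts at zero
t_end, dt = 5.0, 1e-4

# Euler integration of dNB/dt = -lam_B*NB + lam_A*NA0*exp(-lam_A*t)
NB, t = 0.0, 0.0
while t < t_end:
    NA = NA0 * math.exp(-lam_A * t)      # A follows the one-decay law
    NB += (lam_A * NA - lam_B * NB) * dt # production from A minus B's own decay
    t += dt

# Closed-form solution with NB(0) = 0:
analytic = NA0 * lam_A / (lam_B - lam_A) * (math.exp(-lam_A * t_end) - math.exp(-lam_B * t_end))
print(round(NB, 2), round(analytic, 2))  # the two values closely agree
```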
Solving this equation for NB gives

NB = NA0 (λA / (λB − λA)) (e^(−λA t) − e^(−λB t))

In the case where B is a stable nuclide (λB = 0), this equation reduces to the previous solution, as shown above for one decay. The solution can be found by the integrating factor method, where the integrating factor is e^(λB t). This case is perhaps the most useful since it can derive both the one-decay equation (above) and the equation for multi-decay chains (below) more directly. For the general case of any number of consecutive decays in a decay chain, i.e. A1 → A2 ··· → Ai ··· → AD, where D is the number of decays and i is a dummy index (i = 1, 2, 3, ..., D), each nuclide population can be found in terms of the previous population. In this case it is assumed that only the parent is initially present: N2 = 0, N3 = 0, ..., ND = 0. Using the above result in a recursive form, the general solution to the recursive problem is given by Bateman's equations:[36]

ND = (N1(0)/λD) Σ_{i=1}^{D} λi ci e^(−λi t),  where  ci = Π_{j=1, j≠i}^{D} λj / (λj − λi)

In all of the above examples, the initial nuclide decays into just one product.[37] Consider the case of one initial nuclide that can decay into either of two products, that is A → B and A → C in parallel. For example, in a sample of potassium-40, 89.3% of the nuclei decay to calcium-40 and 10.7% to argon-40. We have for all time t: NA + NB + NC = Ntotal = NA0, which is constant, since the total number of nuclides remains constant. Differentiating with respect to time and defining the total decay constant λ in terms of the sum of partial decay constants λB and λC, λ = λB + λC, and solving this equation for NA gives NA = NA0 e^(−λt), where NA0 is the initial number of nuclide A. When measuring the production of one nuclide, one can only observe the total decay constant λ. The decay constants λB and λC determine the probability for the decay to result in products B or C as follows: the fraction λB/λ of nuclei decay into B while the fraction λC/λ of nuclei decay into C.
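Bateman's equations, stated above, can be implemented directly and checked against the two-member chain solution. A sketch with hypothetical decay constants (the formula assumes all constants are distinct):

```python
import math

def bateman(N1_0, lambdas, t):
    """Population N_D(t) of the last nuclide in a chain A1 -> ... -> AD,
    with only A1 present at t = 0 and all decay constants distinct."""
    D = len(lambdas)
    s = 0.0
    for i in range(D):
        c_i = 1.0
        for j in range(D):
            if j != i:
                c_i *= lambdas[j] / (lambdas[j] - lambdas[i])
        s += lambdas[i] * c_i * math.exp(-lambdas[i] * t)
    return N1_0 / lambdas[-1] * s

# For a two-member chain, Bateman's formula reduces to
# NB = NA0 * l1/(l2 - l1) * (e^(-l1 t) - e^(-l2 t)):
l1, l2, t = 0.5, 0.2, 5.0
direct = 1000.0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
print(round(bateman(1000.0, [l1, l2], t), 3), round(direct, 3))  # identical values
```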
The above equations can also be written using quantities related to the number of nuclide particles N in a sample, where NA = 6.02214076×10^23 mol^−1[38] is the Avogadro constant, M is the molar mass of the substance in kg/mol, and the amount of the substance n is in moles. For the one-decay solution A → B, the equation indicates that the decay constant λ has units of t^−1, and can thus also be represented as 1/τ, where τ is a characteristic time of the process called the time constant. In a radioactive decay process, this time constant is also the mean lifetime for decaying atoms. Each atom "lives" for a finite amount of time before it decays, and it may be shown that this mean lifetime is the arithmetic mean of all the atoms' lifetimes, and that it is τ, which again is related to the decay constant as follows: τ = 1/λ. This form is also true for two-decay processes simultaneously A → B + C; inserting the equivalent values of decay constants (as given above) into the decay solution leads to 1/τ = λ = λB + λC. A more commonly used parameter is the half-life T1/2. Given a sample of a particular radionuclide, the half-life is the time taken for half the radionuclide's atoms to decay. For the case of one-decay nuclear reactions, the half-life is related to the decay constant as follows: set N = N0/2 and t = T1/2 to obtain T1/2 = ln(2)/λ = τ ln(2). This relationship between the half-life and the decay constant shows that highly radioactive substances are quickly spent, while those that radiate weakly endure longer. Half-lives of known radionuclides vary by almost 54 orders of magnitude, from more than 2.25(9)×10^24 years (6.9×10^31 s) for the very nearly stable nuclide tellurium-128, to 8.6(6)×10^−23 seconds for the highly unstable nuclide hydrogen-5.[27] The factor of ln(2) in the above relations results from the fact that the concept of "half-life" is merely a way of selecting a different base other than the natural base e for the lifetime expression. The time constant τ is the e^−1-life, the time until only 1/e remains, about 36.8%, rather than the 50% in the half-life of a radionuclide.
Thus, τ is longer than t1/2. The following equation can be shown to be valid: N(t) = N0 e^(−t/τ) = N0 2^(−t/T1/2). Since radioactive decay is exponential with a constant probability, each process could as easily be described with a different constant time period that (for example) gave its "(1/3)-life" (how long until only 1/3 is left) or "(1/10)-life" (a time period until only 10% is left), and so on. Thus, the choice of τ and t1/2 for marker times is only for convenience, and from convention. They reflect a fundamental principle only in so much as they show that the same proportion of a given radioactive substance will decay during any time period that one chooses. Mathematically, the nth life for the above situation would be found in the same way as above, by setting N = N0/n and t = T1/n and substituting into the decay solution to obtain T1/n = ln(n)/λ = τ ln(n). Carbon-14 has a half-life of 5700(30) years[27] and a decay rate of 14 disintegrations per minute (dpm) per gram of natural carbon. If an artifact is found to have radioactivity of 4 dpm per gram of its present carbon, we can find the approximate age of the object using the above equation, t = τ ln(N0/N) ≈ 10,300 years. The radioactive decay modes of electron capture and internal conversion are known to be slightly sensitive to chemical and environmental effects that change the electronic structure of the atom, which in turn affects the presence of 1s and 2s electrons that participate in the decay process. A small number of nuclides are affected.[39] For example, chemical bonds can affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. In 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments.[40] This relatively large effect is because beryllium is a small atom whose valence electrons are in 2s atomic orbitals, which are subject to electron capture in 7Be because (like all s atomic orbitals in all atoms) they naturally penetrate into the nucleus. In 1992, Jung et al.
of the Darmstadt Heavy-Ion Research group observed an accelerated β− decay of 163Dy66+. Although neutral 163Dy is a stable isotope, the fully ionized 163Dy66+ undergoes β− decay into the K and L shells to 163Ho66+ with a half-life of 47 days.[41] Rhenium-187 is another spectacular example. 187Re normally undergoes beta decay to 187Os with a half-life of 41.6×10^9 years,[42] but studies using fully ionised 187Re atoms (bare nuclei) have found that this can decrease to only 32.9 years.[43] This is attributed to "bound-state β− decay" of the fully ionised atom: the electron is emitted into the "K-shell" (1s atomic orbital), which cannot occur for neutral atoms in which all low-lying bound states are occupied.[44] A number of experiments have found that decay rates of other modes of artificial and naturally occurring radioisotopes are, to a high degree of precision, unaffected by external conditions such as temperature, pressure, the chemical environment, and electric, magnetic, or gravitational fields.[45] Comparison of laboratory experiments over the last century, studies of the Oklo natural nuclear reactor (which exemplified the effects of thermal neutrons on nuclear decay), and astrophysical observations of the luminosity decays of distant supernovae (which occurred far away, so the light has taken a great deal of time to reach us), for example, strongly indicate that unperturbed decay rates have been constant (at least to within the limitations of small experimental errors) as a function of time as well.[citation needed] Recent results suggest the possibility that decay rates might have a weak dependence on environmental factors.
It has been suggested that measurements of decay rates of silicon-32, manganese-54, and radium-226 exhibit small seasonal variations (of the order of 0.1%).[46][47][48] However, such measurements are highly susceptible to systematic errors, and a subsequent paper[49] has found no evidence for such correlations in seven other isotopes (22Na, 44Ti, 108Ag, 121Sn, 133Ba, 241Am, 238Pu), and sets upper limits on the size of any such effects. The decay of radon-222 was once reported to exhibit large 4% peak-to-peak seasonal variations,[50] which were proposed to be related to either solar flare activity or the distance from the Sun, but detailed analysis of the experiment's design flaws, along with comparisons to other, much more stringent and systematically controlled experiments, refutes this claim.[51] An unexpected series of experimental results for the rate of decay of heavy highly charged radioactive ions circulating in a storage ring has provoked theoretical activity in an effort to find a convincing explanation. The rates of weak decay of two radioactive species with half-lives of about 40 s and 200 s were found to have a significant oscillatory modulation, with a period of about 7 s.[52] The observed phenomenon is known as the GSI anomaly, as the storage ring is a facility at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. As the decay process produces an electron neutrino, some of the proposed explanations for the observed rate oscillation invoke neutrino properties. Initial ideas related to flavour oscillation met with skepticism.[53] A more recent proposal involves mass differences between neutrino mass eigenstates.[54] A nuclide is considered to "exist" if it has a half-life greater than 2×10^−14 s. This is an arbitrary boundary; shorter half-lives are considered resonances, such as a system undergoing a nuclear reaction. This time scale is characteristic of the strong interaction, which creates the nuclear force.
Only nuclides are considered to decay and produce radioactivity.[55]: 568 Nuclides can be stable or unstable. Unstable nuclides decay, possibly in several steps, until they become stable. There are 251 known stable nuclides. The number of unstable nuclides discovered has grown, with about 3000 known in 2006.[55] The most common, and consequently historically the most important, forms of natural radioactive decay involve the emission of alpha particles, beta particles, and gamma rays. Each of these corresponds to a fundamental interaction predominantly responsible for the radioactivity:[56]: 142 In alpha decay, a particle containing two protons and two neutrons, equivalent to a He nucleus, breaks out of the parent nucleus. The process represents a competition between the electromagnetic repulsion between the protons in the nucleus and the attractive nuclear force, a residual of the strong interaction. The alpha particle is an especially strongly bound nucleus, helping it win the competition more often.[57]: 872 However, some nuclei break up or fission into larger particles, and artificial nuclei decay with the emission of single protons, double protons, and other combinations.[55] Beta decay transforms a neutron into a proton or vice versa. When a neutron inside a parent nuclide decays to a proton, an electron, an antineutrino, and a nuclide with a higher atomic number result. When a proton in a parent nuclide transforms to a neutron, a positron, a neutrino, and a nuclide with a lower atomic number result. These changes are a direct manifestation of the weak interaction.[57]: 874 Gamma decay resembles other kinds of electromagnetic emission: it corresponds to transitions between an excited quantum state and a lower energy state. Any of the particle decay mechanisms often leave the daughter in an excited state, which then decays via gamma emission.[57]: 876 Other forms of decay include neutron emission, electron capture, internal conversion, and cluster decay.[58]
https://en.wikipedia.org/wiki/Radioactive_decay
In mathematics, specifically in elementary arithmetic and elementary algebra, given an equation between two fractions or rational expressions, one can cross-multiply to simplify the equation or determine the value of a variable. The method is also occasionally known as the "cross your heart" method because lines resembling a heart outline can be drawn to remember which things to multiply together. Given an equation like a/b = c/d, where b and d are not zero, one can cross-multiply to get ad = bc. In Euclidean geometry the same calculation can be achieved by considering the ratios as those of similar triangles. In practice, the method of cross-multiplying means that we multiply the numerator of each (or one) side by the denominator of the other side, effectively crossing the terms over. The mathematical justification for the method is from the following longer mathematical procedure. If we start with the basic equation a/b = c/d, we can multiply the terms on each side by the same number, and the terms will remain equal. Therefore, if we multiply the fraction on each side by the product of the denominators of both sides, bd, we get (a/b)·bd = (c/d)·bd. We can reduce the fractions to lowest terms by noting that the two occurrences of b on the left-hand side cancel, as do the two occurrences of d on the right-hand side, leaving ad = bc, and we can divide both sides of the equation by any of the elements. In this case we will use d, getting a = bc/d. Another justification of cross-multiplication is as follows. Starting with the given equation a/b = c/d, multiply by d/d = 1 on the left and by b/b = 1 on the right, getting ad/bd = cb/db. Cancel the common denominator bd = db, leaving ad = cb. Each step in these procedures is based on a single, fundamental property of equations. Cross-multiplication is a shortcut, an easily understandable procedure that can be taught to students. This is a common procedure in mathematics, used to reduce fractions or calculate a value for a given variable in a fraction.
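The cross-multiplication identity ad = bc doubles as an equality test for fractions, which avoids division entirely. A small sketch (the function name is illustrative):

```python
from fractions import Fraction

def cross_multiply(a, b, c, d):
    """Check a/b = c/d by comparing the cross products a*d and b*c."""
    return a * d == b * c

# 6/8 and 9/12 name the same rational number:
print(cross_multiply(6, 8, 9, 12))   # True

# Solving a/b = c/d for d when the other three terms are known: d = b*c/a
a, b, c = 3, 4, 9
d = Fraction(b * c, a)
print(d)   # 12
```

Working with exact `Fraction` values rather than floats keeps the cross products integer-valued, so the comparison is exact.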
If we have an equation x/b = c/d, where x is a variable we are interested in solving for, we can use cross-multiplication to determine that x = bc/d. For example, suppose we want to know how far a car will travel in 7 hours, if we know that its speed is constant and that it already travelled 90 miles in the last 3 hours. Converting the word problem into ratios, we get x/7 hours = 90 miles/3 hours. Cross-multiplying yields 3x = 630, and so x = 210 miles. Alternate solution: 90 miles/3 hours = 30 mph, so 30 mph × 7 hours = 210 miles. Note that even simple equations like a = x/d are solved using cross-multiplication, since the missing b term is implicitly equal to 1: a/1 = x/d. Any equation containing fractions or rational expressions can be simplified by multiplying both sides by the least common denominator. This step is called clearing fractions. The rule of three[1] was a historical shorthand version for a particular form of cross-multiplication that could be taught to students by rote. It was considered the height of Colonial maths education[2] and still figures in the French national curriculum for secondary education,[3] and in the primary education curriculum of Spain.[4] For an equation of the form a/b = c/x, where the variable to be evaluated is in the right-hand denominator, the rule of three states that x = bc/a. In this context, a is referred to as the extreme of the proportion, and b and c are called the means. This rule was already known to Chinese mathematicians prior to the 2nd century CE,[5] though it was not used in Europe until much later. Cocker's Arithmetick, the premier textbook in the 17th century, introduces its discussion of the rule of three[6] with the problem "If 4 yards of cloth cost 12 shillings, what will 6 yards cost at that rate?"
The rule of three gives the answer to this problem directly, whereas in modern arithmetic we would solve it by introducing a variable x to stand for the cost of 6 yards of cloth, writing down the equation 4/12 = 6/x, and then using cross-multiplication to calculate x: 4x = 72, so x = 18 shillings. An anonymous manuscript dated 1570[7] said: "Multiplication is vexation, / Division is as bad; / The Rule of three doth puzzle me, / And Practice drives me mad." Charles Darwin refers to his use of the rule of three in estimating the number of species in a newly discerned genus.[8] In a letter to William Darwin Fox in 1855, Charles Darwin declared "I have no faith in anything short of actual measurement and the Rule of Three."[9] Karl Pearson adopted this declaration as the motto of his newly founded journal Biometrika.[10] An extension to the rule of three was the double rule of three, which involved finding an unknown value where five rather than three other values are known. An example of such a problem might be "If 6 builders can build 8 houses in 100 days, how many days would it take 10 builders to build 20 houses at the same rate?", and this can be set up as a double proportion which, with cross-multiplication twice, gives x = 100 × 6 × 20 / (10 × 8) = 150 days. Lewis Carroll's "The Mad Gardener's Song" includes the lines "He thought he saw a Garden-Door / That opened with a key: / He looked again, and found it was / A double Rule of Three".[11]
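Both rules above reduce to a couple of exact multiplications and one division. A sketch of each, using the cloth and builders examples from the text (the function names are illustrative):

```python
from fractions import Fraction

def rule_of_three(a, b, c):
    """For a proportion a/b = c/x, the rule of three gives x = b*c/a."""
    return Fraction(b * c, a)

# Cocker's problem: 4 yards cost 12 shillings, so 6 yards cost
print(rule_of_three(4, 12, 6))   # 18 (shillings)

def double_rule_of_three(days, builders1, houses1, builders2, houses2):
    """Days scale inversely with the number of builders and directly
    with the number of houses, at a constant building rate."""
    return Fraction(days * builders1 * houses2, builders2 * houses1)

# 6 builders, 8 houses, 100 days -> 10 builders, 20 houses:
print(double_rule_of_three(100, 6, 8, 10, 20))   # 150 (days)
```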
https://en.wikipedia.org/wiki/Cross_multiplication
In mathematics, a multiple is the product of any quantity and an integer.[1] In other words, for the quantities a and b, it can be said that b is a multiple of a if b = na for some integer n, which is called the multiplier. If a is not zero, this is equivalent to saying that b/a is an integer. When a and b are both integers, and b is a multiple of a, then a is called a divisor of b. One says also that a divides b. If a and b are not integers, mathematicians generally prefer to use integer multiple instead of multiple, for clarification. In fact, multiple is used for other kinds of product; for example, a polynomial p is a multiple of another polynomial q if there exists a third polynomial r such that p = qr.

14, 49, −21 and 0 are multiples of 7, whereas 3 and −6 are not. This is because there are integers that 7 may be multiplied by to reach the values of 14, 49, 0 and −21, while there are no such integers for 3 and −6. Each of the products listed below, and in particular, the products for 3 and −6, is the only way that the relevant number can be written as a product of 7 and another real number:

In some texts[which?], "a is a submultiple of b" has the meaning of "a being a unit fraction of b" (a = b/n) or, equivalently, "b being an integer multiple n of a" (b = na). This terminology is also used with units of measurement (for example by the BIPM[2] and NIST[3]), where a unit submultiple is obtained by prefixing the main unit, defined as the quotient of the main unit by an integer, mostly a power of 10^3. For example, a millimetre is the 1000-fold submultiple of a metre.[2][3] As another example, one inch may be considered as a 12-fold submultiple of a foot, or a 36-fold submultiple of a yard.
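For integers, the definition b = na translates directly into a divisibility test. A minimal sketch (the helper `is_multiple` is our own name):

```python
def is_multiple(b, a):
    """Return True if b is an integer multiple of a, i.e. b = n*a for some integer n."""
    if a == 0:
        return b == 0  # the only multiple of 0 is 0
    return b % a == 0

# The examples from the text: multiples of 7 among a few integers.
print([n for n in (14, 49, -21, 0, 3, -6) if is_multiple(n, 7)])  # [14, 49, -21, 0]
```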
https://en.wikipedia.org/wiki/Multiple_(mathematics)
FRACTRAN is a Turing-complete esoteric programming language invented by the mathematician John Conway. A FRACTRAN program is an ordered list of positive fractions together with an initial positive integer input n. The program is run by updating the integer n as follows: find the first fraction f in the list for which nf is an integer, and replace n by nf; repeat this rule until no fraction in the list produces an integer when multiplied by n, at which point the program halts.

Conway 1987 gives the following FRACTRAN program, called PRIMEGAME, which finds successive prime numbers: (17/91, 78/85, 19/51, 23/38, 29/33, 77/29, 95/23, 77/19, 1/17, 11/13, 13/11, 15/2, 1/7, 55/1). Starting with n = 2, this FRACTRAN program generates the sequence of integers 2, 15, 825, 725, 1925, 2275, 425, … After 2, this sequence contains the following powers of 2: 2^2 = 4, 2^3 = 8, 2^5 = 32, 2^7 = 128, 2^11 = 2048, 2^13 = 8192, 2^17 = 131072, 2^19 = 524288, … (sequence A034785 in the OEIS). The exponents of these powers of two are the successive primes: 2, 3, 5, etc.

A FRACTRAN program can be seen as a type of register machine where the registers are stored in prime exponents in the argument n. Using Gödel numbering, a positive integer n can encode an arbitrary number of arbitrarily large positive integer variables.[note 1] The value of each variable is encoded as the exponent of a prime number in the prime factorization of the integer. For example, the integer 60 = 2^2 × 3^1 × 5^1 represents a register state in which one variable (which we will call v2) holds the value 2 and two other variables (v3 and v5) hold the value 1. All other variables hold the value 0. A FRACTRAN program is an ordered list of positive fractions.
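The run rule above fits in a few lines of code. This is a minimal interpreter sketch (the function name `fractran` and the step cap are our own choices, not part of Conway's formulation); running PRIMEGAME with it and keeping only the pure powers of 2 recovers the primes:

```python
from fractions import Fraction

def fractran(program, n, max_steps):
    """Run a FRACTRAN program: repeatedly replace n by n*f for the first
    fraction f that makes n*f an integer; yield each new value of n."""
    for _ in range(max_steps):
        for f in program:
            if (n * f).denominator == 1:
                n = int(n * f)
                yield n
                break
        else:
            return  # no fraction applies: halt

PRIMEGAME = [Fraction(a, b) for a, b in [
    (17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29), (95, 23),
    (77, 19), (1, 17), (11, 13), (13, 11), (15, 2), (1, 7), (55, 1)]]

# Exponents of the pure powers of 2 that appear; per the text these are primes
# (4 appears after 19 steps, 8 after 69, 32 after 281).
primes = [v.bit_length() - 1
          for v in fractran(PRIMEGAME, 2, 500)
          if v & (v - 1) == 0]
print(primes)  # [2, 3, 5]
```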
Each fraction represents an instruction that tests one or more variables, represented by the prime factors of its denominator. For example: f1 = 21/20 = (3 × 7)/(2^2 × 5^1) tests v2 and v5. If v2 ≥ 2 and v5 ≥ 1, then it subtracts 2 from v2 and 1 from v5, and adds 1 to v3 and 1 to v7. For example: 60 · f1 = 2^2 × 3^1 × 5^1 · (3 × 7)/(2^2 × 5^1) = 3^2 × 7^1. Since the FRACTRAN program is just a list of fractions, these test-decrement-increment instructions are the only allowed instructions in the FRACTRAN language. In addition the following restrictions apply:

The simplest FRACTRAN program is a single instruction such as (3/2). This program can be represented as a (very simple) algorithm as follows: Given an initial input of the form 2^a 3^b, this program will compute the sequence 2^(a−1) 3^(b+1), 2^(a−2) 3^(b+2), etc., until eventually, after a steps, no factors of 2 remain and the product with 3/2 no longer yields an integer; the machine then stops with a final output of 3^(a+b). It therefore adds two integers together.

We can create a "multiplier" by "looping" through the "adder". In order to do this we need to introduce states into our algorithm. This algorithm will take a number 2^a 3^b and produce 5^(ab): State B is a loop that adds v3 to v5 and also moves v3 to v7, and state A is an outer control loop that repeats the loop in state B v2 times.
State A also restores the value of v3 from v7 after the loop in state B has completed. We can implement states using new variables as state indicators. The state indicators for state B will be v11 and v13. Note that we require two state control indicators for one loop; a primary flag (v11) and a secondary flag (v13). Because each indicator is consumed whenever it is tested, we need a secondary indicator to say "continue in the current state"; this secondary indicator is swapped back to the primary indicator in the next instruction, and the loop continues.

Adding FRACTRAN state indicators and instructions to the multiplication algorithm table, we have: When we write out the FRACTRAN instructions, we must put the state A instructions last, because state A has no state indicators - it is the default state if no state indicators are set. So as a FRACTRAN program, the multiplier becomes: (455/33, 11/13, 1/11, 3/7, 11/2, 1/3). With input 2^a 3^b this program produces output 5^(ab).[note 2]

In a similar way, we can create a FRACTRAN "subtractor", and repeated subtractions allow us to create a "quotient and remainder" algorithm as follows: Writing out the FRACTRAN program, we have: (91/66, 11/13, 1/33, 85/11, 57/119, 17/19, 11/17, 1/3), and input 2^n 3^d 11 produces output 5^q 7^r, where n = qd + r and 0 ≤ r < d.

Conway's prime generating algorithm above is essentially a quotient and remainder algorithm within two loops. Given input of the form 2^n 7^m where 0 ≤ m < n, the algorithm tries to divide n+1 by each number from n down to 1, until it finds the largest number k that is a divisor of n+1.
It then returns 2^(n+1) 7^(k−1) and repeats. The only times that the sequence of state numbers generated by the algorithm produces a power of 2 is when k is 1 (so that the exponent of 7 is 0), which only occurs if the exponent of 2 is a prime. A step-by-step explanation of Conway's algorithm can be found in Havil (2007). For this program, reaching the prime numbers 2, 3, 5, 7, ... requires respectively 19, 69, 281, 710, ... steps (sequence A007547 in the OEIS).

A variant of Conway's program also exists,[1] which differs from the above version by two fractions: (17/91, 78/85, 19/51, 23/38, 29/33, 77/29, 95/23, 77/19, 1/17, 11/13, 13/11, 15/14, 15/2, 55/1). This variant is a little faster: reaching 2, 3, 5, 7, ... takes it 19, 69, 280, 707, ... steps (sequence A007546 in the OEIS). A single iteration of this program, checking a particular number N for primeness, takes the following number of steps: N − 1 + (6N + 2)(N − b) + 2 ∑_{d=b}^{N−1} ⌊N/d⌋, where b < N is the largest integer divisor of N and ⌊x⌋ is the floor function.[2]

In 1999, Devin Kilminster demonstrated a shorter, ten-instruction program:[3] (7/3, 99/98, 13/49, 39/35, 36/91, 10/143, 49/13, 7/11, 1/2, 91/1). For the initial input n = 10, successive primes are generated by subsequent powers of 10.
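The multiplier program given earlier, (455/33, 11/13, 1/11, 3/7, 11/2, 1/3), can be checked directly. This is a small sketch (the helper `fractran_run` is our own name for a run-to-halt interpreter):

```python
from fractions import Fraction

def fractran_run(program, n):
    """Run a FRACTRAN program until no fraction applies; return the final n."""
    while True:
        for f in program:
            if (n * f).denominator == 1:
                n = int(n * f)
                break
        else:
            return n

MULTIPLIER = [Fraction(a, b) for a, b in
              [(455, 33), (11, 13), (1, 11), (3, 7), (11, 2), (1, 3)]]

# Input 2^a 3^b should produce 5^(a*b).
a, b = 2, 3
print(fractran_run(MULTIPLIER, 2**a * 3**b))  # 15625, i.e. 5**6
```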
The following FRACTRAN program: (3·11/(2^2·5), 5/11, 13/(2·5), 1/5, 2/3, 2·5/7, 7/2), i.e. (33/20, 5/11, 13/10, 1/5, 2/3, 10/7, 7/2), calculates the Hamming weight H(a) of the binary expansion of a, i.e. the number of 1s in the binary expansion of a.[4] Given input 2^a, its output is 13^H(a). The program works by repeatedly halving the exponent a, incrementing the exponent of 13 each time a is odd.
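The Hamming-weight program can be verified against Python's own bit counting. A sketch, reusing a run-to-halt interpreter under our own helper name `fractran_run`:

```python
from fractions import Fraction

def fractran_run(program, n):
    """Run a FRACTRAN program until no fraction applies; return the final n."""
    while True:
        for f in program:
            if (n * f).denominator == 1:
                n = int(n * f)
                break
        else:
            return n

HAMMING = [Fraction(a, b) for a, b in
           [(33, 20), (5, 11), (13, 10), (1, 5), (2, 3), (10, 7), (7, 2)]]

# Input 2^a should yield 13^H(a), where H(a) is the number of 1 bits in a.
for a in (1, 2, 3, 5):
    assert fractran_run(HAMMING, 2**a) == 13 ** bin(a).count("1")
print("ok")
```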
https://en.wikipedia.org/wiki/FRACTRAN
A circle is a shape consisting of all points in a plane that are at a given distance from a given point, the centre. The distance between any point of the circle and the centre is called the radius. The length of a line segment connecting two points on the circle and passing through the centre is called the diameter. A circle bounds a region of the plane called a disc.

The circle has been known since before the beginning of recorded history. Natural circles are common, such as the full moon or a slice of round fruit. The circle is the basis for the wheel, which, with related inventions such as gears, makes much of modern machinery possible. In mathematics, the study of the circle has helped inspire the development of geometry, astronomy and calculus. All of the specified regions may be considered as open, that is, not containing their boundaries, or as closed, including their respective boundaries.

The word circle derives from the Greek κίρκος/κύκλος (kirkos/kuklos), itself a metathesis of the Homeric Greek κρίκος (krikos), meaning "hoop" or "ring".[1] The origins of the words circus and circuit are closely related.

Prehistoric people made stone circles and timber circles, and circular elements are common in petroglyphs and cave paintings.[2] Disc-shaped prehistoric artifacts include the Nebra sky disc and jade discs called Bi. The Egyptian Rhind papyrus, dated to 1700 BCE, gives a method to find the area of a circle. The result corresponds to 256/81 (3.16049...) as an approximate value of π.[3] Book 3 of Euclid's Elements deals with the properties of circles. Euclid's definition of a circle is: A circle is a plane figure bounded by one curved line, and such that all straight lines drawn from a certain point within it to the bounding line, are equal. The bounding line is called its circumference and the point, its centre. In Plato's Seventh Letter there is a detailed definition and explanation of the circle.
Plato explains the perfect circle, and how it is different from any drawing, words, definition or explanation. Early science, particularly geometry and astrology and astronomy, was connected to the divine for most medieval scholars, and many believed that there was something intrinsically "divine" or "perfect" that could be found in circles.[5][6] In 1880 CE, Ferdinand von Lindemann proved that π is transcendental, proving that the millennia-old problem of squaring the circle cannot be performed with straightedge and compass.[7] With the advent of abstract art in the early 20th century, geometric objects became an artistic subject in their own right. Wassily Kandinsky in particular often used circles as an element of his compositions.[8][9]

From the time of the earliest known civilisations – such as the Assyrians and ancient Egyptians, those in the Indus Valley and along the Yellow River in China, and the Western civilisations of ancient Greece and Rome during classical Antiquity – the circle has been used directly or indirectly in visual art to convey the artist's message and to express certain ideas. However, differences in worldview (beliefs and culture) had a great impact on artists' perceptions. While some emphasised the circle's perimeter to demonstrate their democratic manifestation, others focused on its centre to symbolise the concept of cosmic unity. In mystical doctrines, the circle mainly symbolises the infinite and cyclical nature of existence, but in religious traditions it represents heavenly bodies and divine spirits. The circle signifies many sacred and spiritual concepts, including unity, infinity, wholeness, the universe, divinity, balance, stability and perfection, among others.
Such concepts have been conveyed in cultures worldwide through the use of symbols, for example, a compass, a halo, the vesica piscis and its derivatives (fish, eye, aureole, mandorla, etc.), the ouroboros, the Dharma wheel, a rainbow, mandalas, rose windows and so forth.[10] Magic circles are part of some traditions of Western esotericism.

The ratio of a circle's circumference to its diameter is π (pi), an irrational constant approximately equal to 3.141592654. The ratio of a circle's circumference to its radius is 2π.[a] Thus the circumference C is related to the radius r and diameter d by: C = 2πr = πd.

As proved by Archimedes, in his Measurement of a Circle, the area enclosed by a circle is equal to that of a triangle whose base has the length of the circle's circumference and whose height equals the circle's radius,[11] which comes to π multiplied by the radius squared: Area = πr^2. Equivalently, denoting diameter by d, Area = πd^2/4 ≈ 0.7854 d^2, that is, approximately 79% of the circumscribing square (whose side is of length d). The circle is the plane curve enclosing the maximum area for a given arc length. This relates the circle to a problem in the calculus of variations, namely the isoperimetric inequality.

If a circle of radius r is centred at the vertex of an angle, and that angle intercepts an arc of the circle with an arc length of s, then the radian measure θ of the angle is the ratio of the arc length to the radius: θ = s/r. The circular arc is said to subtend the angle, known as the central angle, at the centre of the circle. One radian is the measure of the central angle subtended by a circular arc whose length is equal to its radius. The angle subtended by a complete circle at its centre is a complete angle, which measures 2π radians, 360 degrees, or one turn.
Using radians, the formula for the arc length s of a circular arc of radius r subtending a central angle of measure θ is s = θr, and the formula for the area A of a circular sector of radius r with central angle of measure θ is A = (1/2) θ r^2. In the special case θ = 2π, these formulae yield the circumference of a complete circle and the area of a complete disc, respectively.

In an x–y Cartesian coordinate system, the circle with centre coordinates (a, b) and radius r is the set of all points (x, y) such that (x − a)^2 + (y − b)^2 = r^2. This equation, known as the equation of the circle, follows from the Pythagorean theorem applied to any point on the circle: as shown in the adjacent diagram, the radius is the hypotenuse of a right-angled triangle whose other sides are of length |x − a| and |y − b|. If the circle is centred at the origin (0, 0), then the equation simplifies to x^2 + y^2 = r^2.

The circle of radius r with centre at (x0, y0) in the x–y plane can be broken into two semicircles, each of which is the graph of a function, y+(x) and y−(x), respectively: y±(x) = y0 ± √(r^2 − (x − x0)^2), for values of x ranging from x0 − r to x0 + r.

The equation can be written in parametric form using the trigonometric functions sine and cosine as x = a + r cos t, y = b + r sin t, where t is a parametric variable in the range 0 to 2π, interpreted geometrically as the angle that the ray from (a, b) to (x, y) makes with the positive x axis.
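The formulas above can be checked numerically. A minimal sketch (the specific radius, centre, and angle values are arbitrary choices for illustration):

```python
import math

r, theta = 2.0, math.pi / 3       # radius and a 60-degree central angle

circumference = 2 * math.pi * r
area = math.pi * r**2
arc_length = theta * r            # s = θr
sector_area = 0.5 * theta * r**2  # A = ½θr²

# A point from the parametric form satisfies the implicit circle equation.
a, b, t = 1.0, -2.0, 0.7
x, y = a + r * math.cos(t), b + r * math.sin(t)
assert math.isclose((x - a)**2 + (y - b)**2, r**2)

# With θ = 2π the arc and sector formulas recover the full circle values.
assert math.isclose((2 * math.pi) * r, circumference)
assert math.isclose(0.5 * (2 * math.pi) * r**2, area)
print(arc_length, sector_area)
```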
An alternative parametrisation of the circle is x = a + r (1 − t^2)/(1 + t^2), y = b + r · 2t/(1 + t^2). In this parameterisation, the ratio of t to r can be interpreted geometrically as the stereographic projection of the line passing through the centre parallel to the x axis (see Tangent half-angle substitution). However, this parameterisation works only if t is made to range not only through all reals but also to a point at infinity; otherwise, the leftmost point of the circle would be omitted.

The equation of the circle determined by three points (x1, y1), (x2, y2), (x3, y3) not on a line is obtained by a conversion of the 3-point form of a circle equation: ((x − x1)(x − x2) + (y − y1)(y − y2)) / ((y − y1)(x − x2) − (y − y2)(x − x1)) = ((x3 − x1)(x3 − x2) + (y3 − y1)(y3 − y2)) / ((y3 − y1)(x3 − x2) − (y3 − y2)(x3 − x1)).

In homogeneous coordinates, each conic section with the equation of a circle has the form x^2 + y^2 − 2axz − 2byz + cz^2 = 0. It can be proven that a conic section is a circle exactly when it contains (when extended to the complex projective plane) the points I(1 : i : 0) and J(1 : −i : 0). These points are called the circular points at infinity.
In polar coordinates, the equation of a circle is r^2 − 2 r r0 cos(θ − φ) + r0^2 = a^2, where a is the radius of the circle, (r, θ) are the polar coordinates of a generic point on the circle, and (r0, φ) are the polar coordinates of the centre of the circle (i.e., r0 is the distance from the origin to the centre of the circle, and φ is the anticlockwise angle from the positive x axis to the line connecting the origin to the centre of the circle). For a circle centred on the origin, i.e. r0 = 0, this reduces to r = a. When r0 = a, or when the origin lies on the circle, the equation becomes r = 2a cos(θ − φ). In the general case, the equation can be solved for r, giving r = r0 cos(θ − φ) ± √(a^2 − r0^2 sin^2(θ − φ)). Without the ± sign, the equation would in some cases describe only half a circle.

In the complex plane, a circle with a centre at c and radius r has the equation |z − c| = r. In parametric form, this can be written as z = r e^{it} + c. The slightly generalised equation p z z̄ + g z + g̅ z̄ = q, for real p, q and complex g, is sometimes called a generalised circle. This becomes the above equation for a circle with p = 1, g = −c̄, q = r^2 − |c|^2, since |z − c|^2 = z z̄ − c̄ z − c z̄ + c c̄. Not all generalised circles are actually circles: a generalised circle is either a (true) circle or a line.

The tangent line through a point P on the circle is perpendicular to the diameter passing through P. If P = (x1, y1) and the circle has centre (a, b) and radius r, then the tangent line is perpendicular to the line from (a, b) to (x1, y1), so it has the form (x1 − a)x + (y1 − b)y = c.
Evaluating at (x1, y1) determines the value of c, and the result is that the equation of the tangent is (x1 − a)x + (y1 − b)y = (x1 − a)x1 + (y1 − b)y1, or (x1 − a)(x − a) + (y1 − b)(y − b) = r^2. If y1 ≠ b, then the slope of this line is dy/dx = −(x1 − a)/(y1 − b). This can also be found using implicit differentiation. When the centre of the circle is at the origin, then the equation of the tangent line becomes x1 x + y1 y = r^2, and its slope is dy/dx = −x1/y1.

An inscribed angle (examples are the blue and green angles in the figure) is exactly half the corresponding central angle (red). Hence, all inscribed angles that subtend the same arc (pink) are equal. Angles inscribed on the arc (brown) are supplementary. In particular, every inscribed angle that subtends a diameter is a right angle (since the central angle is 180°).

The sagitta (also known as the versine) is a line segment drawn perpendicular to a chord, between the midpoint of that chord and the arc of the circle. Given the length y of a chord and the length x of the sagitta, the Pythagorean theorem can be used to calculate the radius of the unique circle that will fit around the two lines: r = y^2/(8x) + x/2. Another proof of this result, which relies only on two chord properties given above, is as follows. Given a chord of length y and with sagitta of length x, since the sagitta intersects the midpoint of the chord, we know that it is a part of a diameter of the circle. Since the diameter is twice the radius, the "missing" part of the diameter is (2r − x) in length. Using the fact that one part of one chord times the other part is equal to the same product taken along a chord intersecting the first chord, we find that (2r − x)x = (y/2)^2.
Solving for r, we find the required result.

There are many compass-and-straightedge constructions resulting in circles. The simplest and most basic is the construction given the centre of the circle and a point on the circle. Place the fixed leg of the compass on the centre point, the movable leg on the point on the circle and rotate the compass.

Apollonius of Perga showed that a circle may also be defined as the set of points in a plane having a constant ratio (other than 1) of distances to two fixed foci, A and B.[16][17] (The set of points where the distances are equal is the perpendicular bisector of segment AB, a line.) That circle is sometimes said to be drawn about two points. The proof is in two parts. First, one must prove that, given two foci A and B and a ratio of distances, any point P satisfying the ratio of distances must fall on a particular circle. Let C be another point, also satisfying the ratio and lying on segment AB. By the angle bisector theorem the line segment PC will bisect the interior angle APB, since the segments are similar: AP/BP = AC/BC. Analogously, a line segment PD through some point D on AB extended bisects the corresponding exterior angle BPQ, where Q is on AP extended. Since the interior and exterior angles sum to 180 degrees, the angle CPD is exactly 90 degrees; that is, a right angle. The set of points P such that angle CPD is a right angle forms a circle, of which CD is a diameter. Second, see[18]: 15 for a proof that every point on the indicated circle satisfies the given ratio. A closely related property of circles involves the geometry of the cross-ratio of points in the complex plane.
If A, B, and C are as above, then the circle of Apollonius for these three points is the collection of points P for which the absolute value of the cross-ratio is equal to one: |[A, B; C, P]| = 1. Stated another way, P is a point on the circle of Apollonius if and only if the cross-ratio [A, B; C, P] is on the unit circle in the complex plane. If C is the midpoint of the segment AB, then the collection of points P satisfying the Apollonius condition |AP|/|BP| = |AC|/|BC| is not a circle, but rather a line. Thus, if A, B, and C are given distinct points in the plane, then the locus of points P satisfying the above equation is called a "generalised circle". It may either be a true circle or a line. In this sense a line is a generalised circle of infinite radius.

In every triangle a unique circle, called the incircle, can be inscribed such that it is tangent to each of the three sides of the triangle.[19] About every triangle a unique circle, called the circumcircle, can be circumscribed such that it goes through each of the triangle's three vertices.[20] A tangential polygon, such as a tangential quadrilateral, is any convex polygon within which a circle can be inscribed that is tangent to each side of the polygon.[21] Every regular polygon and every triangle is a tangential polygon. A cyclic polygon is any convex polygon about which a circle can be circumscribed, passing through each vertex. A well-studied example is the cyclic quadrilateral. Every regular polygon and every triangle is a cyclic polygon. A polygon that is both cyclic and tangential is called a bicentric polygon. A hypocycloid is a curve that is inscribed in a given circle by tracing a fixed point on a smaller circle that rolls within and tangent to the given circle. The circle can be viewed as a limiting case of various other figures:

Consider a finite set of n points in the plane.
The locus of points such that the sum of the squares of the distances to the given points is constant is a circle, whose centre is at the centroid of the given points.[22] A generalisation for higher powers of distances is obtained if, instead of n points, the vertices of the regular polygon Pn are taken.[23] The locus of points such that the sum of the 2m-th powers of the distances di to the vertices of a given regular polygon with circumradius R is constant is a circle, if ∑_{i=1}^{n} di^{2m} > nR^{2m}, where m = 1, 2, …, n − 1, whose centre is the centroid of the Pn. In the case of the equilateral triangle, the loci of the constant sums of the second and fourth powers are circles, whereas for the square, the loci are circles for the constant sums of the second, fourth, and sixth powers. For the regular pentagon the constant sum of the eighth powers of the distances will be added, and so forth.

Squaring the circle is the problem, proposed by ancient geometers, of constructing a square with the same area as a given circle by using only a finite number of steps with compass and straightedge. In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi (π) is a transcendental number, rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients. Despite the impossibility, this topic continues to be of interest for pseudomath enthusiasts.

Defining a circle as the set of points with a fixed distance from a point, different shapes can be considered circles under different definitions of distance.
In p-norm, distance is determined by ‖x‖p = (|x1|^p + |x2|^p + ⋯ + |xn|^p)^{1/p}. In Euclidean geometry, p = 2, giving the familiar ‖x‖2 = √(|x1|^2 + |x2|^2 + ⋯ + |xn|^2).

In taxicab geometry, p = 1. Taxicab circles are squares with sides oriented at a 45° angle to the coordinate axes. While each side would have length √2 r using a Euclidean metric, where r is the circle's radius, its length in taxicab geometry is 2r, so a circle's circumference is 8r, and the value of a geometric analog to π is 4 in this geometry. The formula for the unit circle in taxicab geometry is |x| + |y| = 1 in Cartesian coordinates and r = 1/(|sin θ| + |cos θ|) in polar coordinates. A circle of radius 1 (using this distance) is the von Neumann neighborhood of its centre. A circle of radius r for the Chebyshev distance (L∞ metric) on a plane is also a square, with side length 2r parallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between L1 and L∞ metrics does not generalise to higher dimensions.

The circle is the one-dimensional hypersphere (the 1-sphere). In topology, a circle is not limited to the geometric concept, but to all of its homeomorphisms. Two topological circles are equivalent if one can be transformed into the other via a deformation of R^3 upon itself (known as an ambient isotopy).[24]
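The taxicab claims above can be checked numerically: the taxicab circle |x| + |y| = r is a square with vertices on the axes, and measuring its perimeter with the taxicab metric gives 8r, hence a "π" of 4. A minimal sketch (the helper name `taxicab` is our own):

```python
def taxicab(p, q):
    """L1 (taxicab) distance between two points in the plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

r = 3.0
# The taxicab circle of radius r is the square with these vertices.
vertices = [(r, 0), (0, r), (-r, 0), (0, -r), (r, 0)]
perimeter = sum(taxicab(vertices[i], vertices[i + 1]) for i in range(4))
print(perimeter / (2 * r))  # 4.0 — the taxicab analogue of pi
```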
https://en.wikipedia.org/wiki/Circle
In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity e, a number ranging from e = 0 (the limiting case of a circle) to e = 1 (the limiting case of infinite elongation, no longer an ellipse but a parabola). An ellipse has a simple algebraic solution for its area, but for its perimeter (also known as circumference), integration is required to obtain an exact solution.

The largest and smallest diameters of an ellipse, also known as its width and height, are typically denoted 2a and 2b. An ellipse has four extreme points: two vertices at the endpoints of the major axis and two co-vertices at the endpoints of the minor axis. Analytically, the equation of a standard ellipse centered at the origin is: x^2/a^2 + y^2/b^2 = 1. Assuming a ≥ b, the foci are (±c, 0), where c = √(a^2 − b^2), called the linear eccentricity, is the distance from the center to a focus. The standard parametric equation is: (x, y) = (a cos(t), b sin(t)) for 0 ≤ t ≤ 2π.

Ellipses are the closed type of conic section: a plane curve tracing the intersection of a cone with a plane (see figure). Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. An angled cross section of a right circular cylinder is also an ellipse.
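The defining property — that the sum of distances to the two foci is the constant 2a — can be verified for points generated from the parametric form. A minimal sketch (the axis lengths are arbitrary illustration values):

```python
import math

a, b = 5.0, 3.0             # semi-major and semi-minor axes
c = math.sqrt(a**2 - b**2)  # linear eccentricity; foci at (±c, 0)

# For any parameter t, the point (a cos t, b sin t) lies on the ellipse,
# and its distances to the two foci sum to 2a.
for t in (0.0, 0.5, 1.7, 3.0, 5.5):
    x, y = a * math.cos(t), b * math.sin(t)
    d1 = math.hypot(x - c, y)
    d2 = math.hypot(x + c, y)
    assert math.isclose(d1 + d2, 2 * a)
print("ok")
```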
An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant, called the eccentricity: e = c/a = √(1 − b^2/a^2).

Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in the Solar System is approximately an ellipse with the Sun at one focus point (more precisely, the focus is the barycenter of the Sun–planet pair). The same is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. A circle viewed from a side angle looks like an ellipse: that is, the ellipse is the image of a circle under parallel or perspective projection. The ellipse is also the simplest Lissajous figure, formed when the horizontal and vertical motions are sinusoids with the same frequency: a similar effect leads to elliptical polarization of light in optics. The name, ἔλλειψις (élleipsis, "omission"), was given by Apollonius of Perga in his Conics.

An ellipse can be defined geometrically as a set or locus of points in the Euclidean plane: given two fixed points F1, F2 (the foci) and a distance 2a greater than the distance between the foci, the ellipse is the set of points P such that |PF1| + |PF2| = 2a. The midpoint C of the line segment joining the foci is called the center of the ellipse. The line through the foci is called the major axis, and the line perpendicular to it through the center is the minor axis. The major axis intersects the ellipse at two vertices V1, V2, which have distance a to the center. The distance c of the foci to the center is called the focal distance or linear eccentricity. The quotient e = c/a is defined as the eccentricity. The case F1 = F2 yields a circle and is included as a special type of ellipse.
The equation|PF2|+|PF1|=2a{\displaystyle \left|PF_{2}\right|+\left|PF_{1}\right|=2a}can be viewed in a different way (see figure): c2{\displaystyle c_{2}}is called thecircular directrix(related to focusF2{\displaystyle F_{2}})of the ellipse.[1][2]This property should not be confused with the definition of an ellipse using a directrix line below. UsingDandelin spheres, one can prove that any section of a cone with a plane is an ellipse, assuming the plane does not contain the apex and has slope less than that of the lines on the cone. The standard form of an ellipse in Cartesian coordinates assumes that the origin is the center of the ellipse, thex-axis is the major axis, and: For an arbitrary point(x,y){\displaystyle (x,y)}the distance to the focus(c,0){\displaystyle (c,0)}is(x−c)2+y2{\textstyle {\sqrt {(x-c)^{2}+y^{2}}}}and to the other focus(x+c)2+y2{\textstyle {\sqrt {(x+c)^{2}+y^{2}}}}. Hence the point(x,y){\displaystyle (x,\,y)}is on the ellipse whenever:(x−c)2+y2+(x+c)2+y2=2a.{\displaystyle {\sqrt {(x-c)^{2}+y^{2}}}+{\sqrt {(x+c)^{2}+y^{2}}}=2a\ .} Removing theradicalsby suitable squarings and usingb2=a2−c2{\displaystyle b^{2}=a^{2}-c^{2}}(see diagram) produces the standard equation of the ellipse:[3]x2a2+y2b2=1,{\displaystyle {\frac {x^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}=1,}or, solved fory:y=±baa2−x2=±(a2−x2)(1−e2).{\displaystyle y=\pm {\frac {b}{a}}{\sqrt {a^{2}-x^{2}}}=\pm {\sqrt {\left(a^{2}-x^{2}\right)\left(1-e^{2}\right)}}.} The width and height parametersa,b{\displaystyle a,\;b}are called thesemi-major and semi-minor axes. The top and bottom pointsV3=(0,b),V4=(0,−b){\displaystyle V_{3}=(0,\,b),\;V_{4}=(0,\,-b)}are theco-vertices. The distances from a point(x,y){\displaystyle (x,\,y)}on the ellipse to the left and right foci area+ex{\displaystyle a+ex}anda−ex{\displaystyle a-ex}. It follows from the equation that the ellipse issymmetricwith respect to the coordinate axes and hence with respect to the origin. 
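The stated focal radii a + ex and a − ex can be checked against direct distance computation. A minimal Python sketch (names are my own) recovers y from the solved form of the ellipse equation and compares both ways of computing the distances to the foci:

```python
import math

def focal_radii(a, b, x):
    """Distances from the upper ellipse point with abscissa x to the left and
    right foci, computed directly and via the formulas a + e*x, a - e*x."""
    e = math.sqrt(1 - (b * b) / (a * a))
    c = a * e
    y = (b / a) * math.sqrt(a * a - x * x)   # solved form of the ellipse equation
    direct = (math.hypot(x + c, y), math.hypot(x - c, y))
    formula = (a + e * x, a - e * x)
    return direct, formula
```

The two radii also sum to 2a, consistent with the defining property.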
Throughout this article, thesemi-major and semi-minor axesare denoteda{\displaystyle a}andb{\displaystyle b}, respectively, i.e.a≥b>0.{\displaystyle a\geq b>0\ .} In principle, the canonical ellipse equationx2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}may havea<b{\displaystyle a<b}(and hence the ellipse would be taller than it is wide). This form can be converted to the standard form by transposing the variable namesx{\displaystyle x}andy{\displaystyle y}and the parameter namesa{\displaystyle a}andb.{\displaystyle b.} This is the distance from the center to a focus:c=a2−b2{\displaystyle c={\sqrt {a^{2}-b^{2}}}}. The eccentricity can be expressed as:e=ca=1−(ba)2,{\displaystyle e={\frac {c}{a}}={\sqrt {1-\left({\frac {b}{a}}\right)^{2}}},} assuminga>b.{\displaystyle a>b.}An ellipse with equal axes (a=b{\displaystyle a=b}) has zero eccentricity, and is a circle. The length of the chord through one focus, perpendicular to the major axis, is called thelatus rectum. One half of it is thesemi-latus rectumℓ{\displaystyle \ell }. A calculation shows:[4]ℓ=b2a=a(1−e2).{\displaystyle \ell ={\frac {b^{2}}{a}}=a\left(1-e^{2}\right).} The semi-latus rectumℓ{\displaystyle \ell }is equal to theradius of curvatureat the vertices (see sectioncurvature). An arbitrary lineg{\displaystyle g}intersects an ellipse at 0, 1, or 2 points, respectively called anexterior line,tangentandsecant. Through any point of an ellipse there is a unique tangent. 
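The semi-latus rectum formula can be verified by reading the half-chord through a focus directly off the ellipse equation. A short Python sketch (function names mine):

```python
import math

def semi_latus_rectum(a, b):
    """ell = b^2/a = a*(1 - e^2)."""
    return b * b / a

def half_chord_at_focus(a, b):
    """Half the chord through the focus (c, 0) perpendicular to the major
    axis, obtained by evaluating the ellipse equation at x = c."""
    c = math.sqrt(a * a - b * b)
    return (b / a) * math.sqrt(a * a - c * c)
```

For a = 5, b = 3 both expressions give 9/5 = 1.8.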
The tangent at a point (x_1, y_1) of the ellipse x^2/a^2 + y^2/b^2 = 1 has the coordinate equation: (x_1/a^2) x + (y_1/b^2) y = 1. A vector parametric equation of the tangent is: x = (x_1, y_1) + s (−y_1 a^2, x_1 b^2), s ∈ R. Proof: Let (x_1, y_1) be a point on an ellipse and x = (x_1, y_1) + s (u, v) be the equation of any line g containing (x_1, y_1). Inserting the line's equation into the ellipse equation and using x_1^2/a^2 + y_1^2/b^2 = 1 yields: 2s (x_1 u/a^2 + y_1 v/b^2) + s^2 (u^2/a^2 + v^2/b^2) = 0. There are then two cases: (1) x_1 u/a^2 + y_1 v/b^2 = 0. Then s = 0 is a double solution, so line g has only the point (x_1, y_1) in common with the ellipse and g is a tangent. (2) x_1 u/a^2 + y_1 v/b^2 ≠ 0. Then the equation has a second solution s ≠ 0, so line g has a second point in common with the ellipse and is a secant. Using (1) one finds that (−y_1 a^2, x_1 b^2) is a tangent vector at point (x_1, y_1), which proves the vector equation. If (x_1, y_1) and (u, v) are two points of the ellipse such that x_1 u/a^2 + y_1 v/b^2 = 0, then the points lie on two conjugate diameters (see below). (If a = b, the ellipse is a circle and "conjugate" means "orthogonal".)
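The tangency claim can be tested numerically: along the line through an ellipse point in the tangent direction, the quadratic form x^2/a^2 + y^2/b^2 equals 1 only at that point and exceeds 1 elsewhere. A minimal Python sketch (function name mine):

```python
import math

def tangent_value(a, b, t, s):
    """Evaluate x^2/a^2 + y^2/b^2 along the tangent through the ellipse point
    at parameter t, offset s along the tangent direction (-y1*a^2, x1*b^2).
    The value is 1 at s = 0 and exceeds 1 for every other s, so the tangent
    meets the ellipse in a single point."""
    x1, y1 = a * math.cos(t), b * math.sin(t)
    u, v = -y1 * a * a, x1 * b * b      # tangent direction from the text
    x, y = x1 + s * u, y1 + s * v
    return x * x / (a * a) + y * y / (b * b)
```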
If the standard ellipse is shifted to have center(x∘,y∘){\displaystyle \left(x_{\circ },\,y_{\circ }\right)}, its equation is(x−x∘)2a2+(y−y∘)2b2=1.{\displaystyle {\frac {\left(x-x_{\circ }\right)^{2}}{a^{2}}}+{\frac {\left(y-y_{\circ }\right)^{2}}{b^{2}}}=1\ .} The axes are still parallel to thex- andy-axes. Inanalytic geometry, the ellipse is defined as aquadric: the set of points(x,y){\displaystyle (x,\,y)}of theCartesian planethat, in non-degenerate cases, satisfy theimplicitequation[5][6]Ax2+Bxy+Cy2+Dx+Ey+F=0{\displaystyle Ax^{2}+Bxy+Cy^{2}+Dx+Ey+F=0}providedB2−4AC<0.{\displaystyle B^{2}-4AC<0.} To distinguish thedegenerate casesfrom the non-degenerate case, let∆be thedeterminantΔ=|A12B12D12BC12E12D12EF|=ACF+14BDE−14(AE2+CD2+FB2).{\displaystyle \Delta ={\begin{vmatrix}A&{\frac {1}{2}}B&{\frac {1}{2}}D\\{\frac {1}{2}}B&C&{\frac {1}{2}}E\\{\frac {1}{2}}D&{\frac {1}{2}}E&F\end{vmatrix}}=ACF+{\tfrac {1}{4}}BDE-{\tfrac {1}{4}}(AE^{2}+CD^{2}+FB^{2}).} Then the ellipse is a non-degenerate real ellipse if and only ifC∆< 0. 
IfC∆> 0, we have an imaginary ellipse, and if∆= 0, we have a point ellipse.[7]: 63 The general equation's coefficients can be obtained from known semi-major axisa{\displaystyle a}, semi-minor axisb{\displaystyle b}, center coordinates(x∘,y∘){\displaystyle \left(x_{\circ },\,y_{\circ }\right)}, and rotation angleθ{\displaystyle \theta }(the angle from the positive horizontal axis to the ellipse's major axis) using the formulae:A=a2sin2⁡θ+b2cos2⁡θB=2(b2−a2)sin⁡θcos⁡θC=a2cos2⁡θ+b2sin2⁡θD=−2Ax∘−By∘E=−Bx∘−2Cy∘F=Ax∘2+Bx∘y∘+Cy∘2−a2b2.{\displaystyle {\begin{aligned}A&=a^{2}\sin ^{2}\theta +b^{2}\cos ^{2}\theta &B&=2\left(b^{2}-a^{2}\right)\sin \theta \cos \theta \\[1ex]C&=a^{2}\cos ^{2}\theta +b^{2}\sin ^{2}\theta &D&=-2Ax_{\circ }-By_{\circ }\\[1ex]E&=-Bx_{\circ }-2Cy_{\circ }&F&=Ax_{\circ }^{2}+Bx_{\circ }y_{\circ }+Cy_{\circ }^{2}-a^{2}b^{2}.\end{aligned}}} These expressions can be derived from the canonical equationX2a2+Y2b2=1{\displaystyle {\frac {X^{2}}{a^{2}}}+{\frac {Y^{2}}{b^{2}}}=1}by a Euclidean transformation of the coordinates(X,Y){\displaystyle (X,\,Y)}:X=(x−x∘)cos⁡θ+(y−y∘)sin⁡θ,Y=−(x−x∘)sin⁡θ+(y−y∘)cos⁡θ.{\displaystyle {\begin{aligned}X&=\left(x-x_{\circ }\right)\cos \theta +\left(y-y_{\circ }\right)\sin \theta ,\\Y&=-\left(x-x_{\circ }\right)\sin \theta +\left(y-y_{\circ }\right)\cos \theta .\end{aligned}}} Conversely, the canonical form parameters can be obtained from the general-form coefficients by the equations:[3] a,b=−2(AE2+CD2−BDE+(B2−4AC)F)((A+C)±(A−C)2+B2)B2−4AC,x∘=2CD−BEB2−4AC,y∘=2AE−BDB2−4AC,θ=12atan2⁡(−B,C−A),{\displaystyle {\begin{aligned}a,b&={\frac {-{\sqrt {2{\big (}AE^{2}+CD^{2}-BDE+(B^{2}-4AC)F{\big )}{\big (}(A+C)\pm {\sqrt {(A-C)^{2}+B^{2}}}{\big )}}}}{B^{2}-4AC}},\\x_{\circ }&={\frac {2CD-BE}{B^{2}-4AC}},\\[5mu]y_{\circ }&={\frac {2AE-BD}{B^{2}-4AC}},\\[5mu]\theta &={\tfrac {1}{2}}\operatorname {atan2} (-B,\,C-A),\end{aligned}}} whereatan2is the 2-argument arctangent function. 
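The forward and inverse coefficient formulas can be round-tripped numerically. The following Python sketch (function names are mine) builds the general-form coefficients from (a, b, center, θ), then recovers the center and rotation angle with the inverse formulas:

```python
import math

def conic_coefficients(a, b, x0, y0, theta):
    """General-form coefficients A..F for semi-axes a, b, center (x0, y0),
    rotation theta, using the formulas from the text."""
    s, c = math.sin(theta), math.cos(theta)
    A = a * a * s * s + b * b * c * c
    B = 2 * (b * b - a * a) * s * c
    C = a * a * c * c + b * b * s * s
    D = -2 * A * x0 - B * y0
    E = -B * x0 - 2 * C * y0
    F = A * x0 * x0 + B * x0 * y0 + C * y0 * y0 - a * a * b * b
    return A, B, C, D, E, F

def center_from_coefficients(A, B, C, D, E, F):
    """Inverse formulas for the center coordinates."""
    den = B * B - 4 * A * C
    return (2 * C * D - B * E) / den, (2 * A * E - B * D) / den

def angle_from_coefficients(A, B, C):
    """Rotation angle via the 2-argument arctangent, as in the text."""
    return 0.5 * math.atan2(-B, C - A)
```

The ellipse condition B^2 − 4AC < 0 holds for every such coefficient set.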
Usingtrigonometric functions, a parametric representation of the standard ellipsex2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}is:(x,y)=(acos⁡t,bsin⁡t),0≤t<2π.{\displaystyle (x,\,y)=(a\cos t,\,b\sin t),\ 0\leq t<2\pi \,.} The parametert(called theeccentric anomalyin astronomy) is not the angle of(x(t),y(t)){\displaystyle (x(t),y(t))}with thex-axis, but has a geometric meaning due toPhilippe de La Hire(see§ Drawing ellipsesbelow).[8] With the substitutionu=tan⁡(t2){\textstyle u=\tan \left({\frac {t}{2}}\right)}and trigonometric formulae one obtainscos⁡t=1−u21+u2,sin⁡t=2u1+u2{\displaystyle \cos t={\frac {1-u^{2}}{1+u^{2}}}\ ,\quad \sin t={\frac {2u}{1+u^{2}}}} and therationalparametric equation of an ellipse{x(u)=a1−u21+u2y(u)=b2u1+u2−∞<u<∞{\displaystyle {\begin{cases}x(u)=a\,{\dfrac {1-u^{2}}{1+u^{2}}}\\[10mu]y(u)=b\,{\dfrac {2u}{1+u^{2}}}\\[10mu]-\infty <u<\infty \end{cases}}} which covers any point of the ellipsex2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}except the left vertex(−a,0){\displaystyle (-a,\,0)}. Foru∈[0,1],{\displaystyle u\in [0,\,1],}this formula represents the right upper quarter of the ellipse moving counter-clockwise with increasingu.{\displaystyle u.}The left vertex is the limitlimu→±∞(x(u),y(u))=(−a,0).{\textstyle \lim _{u\to \pm \infty }(x(u),\,y(u))=(-a,\,0)\;.} Alternately, if the parameter[u:v]{\displaystyle [u:v]}is considered to be a point on thereal projective lineP(R){\textstyle \mathbf {P} (\mathbf {R} )}, then the corresponding rational parametrization is[u:v]↦(av2−u2v2+u2,b2uvv2+u2).{\displaystyle [u:v]\mapsto \left(a{\frac {v^{2}-u^{2}}{v^{2}+u^{2}}},b{\frac {2uv}{v^{2}+u^{2}}}\right).} Then[1:0]↦(−a,0).{\textstyle [1:0]\mapsto (-a,\,0).} Rational representations of conic sections are commonly used incomputer-aided design(seeBézier curve). 
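The rational parametrization can be exercised directly: every parameter value must land on the ellipse, with u = 0 giving the right vertex and the left vertex only reached in the limit. A short Python sketch (names mine):

```python
def rational_point(a, b, u):
    """Rational parametrization of the ellipse; covers every point except (-a, 0)."""
    d = 1 + u * u
    return a * (1 - u * u) / d, b * 2 * u / d

def ellipse_residual(a, b, p):
    """x^2/a^2 + y^2/b^2 - 1; zero exactly when p lies on the ellipse."""
    x, y = p
    return x * x / (a * a) + y * y / (b * b) - 1
```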
A parametric representation, which uses the slopem{\displaystyle m}of the tangent at a point of the ellipse can be obtained from the derivative of the standard representationx→(t)=(acos⁡t,bsin⁡t)T{\displaystyle {\vec {x}}(t)=(a\cos t,\,b\sin t)^{\mathsf {T}}}:x→′(t)=(−asin⁡t,bcos⁡t)T→m=−bacot⁡t→cot⁡t=−mab.{\displaystyle {\vec {x}}'(t)=(-a\sin t,\,b\cos t)^{\mathsf {T}}\quad \rightarrow \quad m=-{\frac {b}{a}}\cot t\quad \rightarrow \quad \cot t=-{\frac {ma}{b}}.} With help oftrigonometric formulaeone obtains:cos⁡t=cot⁡t±1+cot2⁡t=−ma±m2a2+b2,sin⁡t=1±1+cot2⁡t=b±m2a2+b2.{\displaystyle \cos t={\frac {\cot t}{\pm {\sqrt {1+\cot ^{2}t}}}}={\frac {-ma}{\pm {\sqrt {m^{2}a^{2}+b^{2}}}}}\ ,\quad \quad \sin t={\frac {1}{\pm {\sqrt {1+\cot ^{2}t}}}}={\frac {b}{\pm {\sqrt {m^{2}a^{2}+b^{2}}}}}.} Replacingcos⁡t{\displaystyle \cos t}andsin⁡t{\displaystyle \sin t}of the standard representation yields:c→±(m)=(−ma2±m2a2+b2,b2±m2a2+b2),m∈R.{\displaystyle {\vec {c}}_{\pm }(m)=\left(-{\frac {ma^{2}}{\pm {\sqrt {m^{2}a^{2}+b^{2}}}}},\;{\frac {b^{2}}{\pm {\sqrt {m^{2}a^{2}+b^{2}}}}}\right),\,m\in \mathbb {R} .} Herem{\displaystyle m}is the slope of the tangent at the corresponding ellipse point,c→+{\displaystyle {\vec {c}}_{+}}is the upper andc→−{\displaystyle {\vec {c}}_{-}}the lower half of the ellipse. The vertices(±a,0){\displaystyle (\pm a,\,0)}, having vertical tangents, are not covered by the representation. The equation of the tangent at pointc→±(m){\displaystyle {\vec {c}}_{\pm }(m)}has the formy=mx+n{\displaystyle y=mx+n}. The still unknownn{\displaystyle n}can be determined by inserting the coordinates of the corresponding ellipse pointc→±(m){\displaystyle {\vec {c}}_{\pm }(m)}:y=mx±m2a2+b2.{\displaystyle y=mx\pm {\sqrt {m^{2}a^{2}+b^{2}}}\,.} This description of the tangents of an ellipse is an essential tool for the determination of theorthopticof an ellipse. The orthoptic article contains another proof, without differential calculus and trigonometric formulae. 
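The slope parametrization and the resulting tangent equation fit together numerically: the point c_±(m) lies on the ellipse, and it satisfies y = mx + n with the intercept given above. A minimal Python sketch (function names mine):

```python
import math

def slope_point(a, b, m, upper=True):
    """Ellipse point whose tangent has slope m (c_+ for the upper half,
    c_- for the lower half, as in the text)."""
    s = math.sqrt(m * m * a * a + b * b) * (1 if upper else -1)
    return -m * a * a / s, b * b / s

def tangent_intercept(a, b, m, upper=True):
    """n in the tangent equation y = m*x + n."""
    return math.sqrt(m * m * a * a + b * b) * (1 if upper else -1)
```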
Another definition of an ellipse usesaffine transformations: An affine transformation of the Euclidean plane has the formx→↦f→0+Ax→{\displaystyle {\vec {x}}\mapsto {\vec {f}}\!_{0}+A{\vec {x}}}, whereA{\displaystyle A}is a regularmatrix(with non-zerodeterminant) andf→0{\displaystyle {\vec {f}}\!_{0}}is an arbitrary vector. Iff→1,f→2{\displaystyle {\vec {f}}\!_{1},{\vec {f}}\!_{2}}are the column vectors of the matrixA{\displaystyle A}, the unit circle(cos⁡(t),sin⁡(t)){\displaystyle (\cos(t),\sin(t))},0≤t≤2π{\displaystyle 0\leq t\leq 2\pi }, is mapped onto the ellipse:x→=p→(t)=f→0+f→1cos⁡t+f→2sin⁡t.{\displaystyle {\vec {x}}={\vec {p}}(t)={\vec {f}}\!_{0}+{\vec {f}}\!_{1}\cos t+{\vec {f}}\!_{2}\sin t\,.} Heref→0{\displaystyle {\vec {f}}\!_{0}}is the center andf→1,f→2{\displaystyle {\vec {f}}\!_{1},\;{\vec {f}}\!_{2}}are the directions of twoconjugate diameters, in general not perpendicular. The four vertices of the ellipse arep→(t0),p→(t0±π2),p→(t0+π){\displaystyle {\vec {p}}(t_{0}),\;{\vec {p}}\left(t_{0}\pm {\tfrac {\pi }{2}}\right),\;{\vec {p}}\left(t_{0}+\pi \right)}, for a parametert=t0{\displaystyle t=t_{0}}defined by:cot⁡(2t0)=f→12−f→222f→1⋅f→2.{\displaystyle \cot(2t_{0})={\frac {{\vec {f}}\!_{1}^{\,2}-{\vec {f}}\!_{2}^{\,2}}{2{\vec {f}}\!_{1}\cdot {\vec {f}}\!_{2}}}.} (Iff→1⋅f→2=0{\displaystyle {\vec {f}}\!_{1}\cdot {\vec {f}}\!_{2}=0}, thent0=0{\displaystyle t_{0}=0}.) This is derived as follows. 
The tangent vector at pointp→(t){\displaystyle {\vec {p}}(t)}is:p→′(t)=−f→1sin⁡t+f→2cos⁡t.{\displaystyle {\vec {p}}\,'(t)=-{\vec {f}}\!_{1}\sin t+{\vec {f}}\!_{2}\cos t\ .} At a vertex parametert=t0{\displaystyle t=t_{0}}, the tangent is perpendicular to the major/minor axes, so:0=p→′(t)⋅(p→(t)−f→0)=(−f→1sin⁡t+f→2cos⁡t)⋅(f→1cos⁡t+f→2sin⁡t).{\displaystyle 0={\vec {p}}'(t)\cdot \left({\vec {p}}(t)-{\vec {f}}\!_{0}\right)=\left(-{\vec {f}}\!_{1}\sin t+{\vec {f}}\!_{2}\cos t\right)\cdot \left({\vec {f}}\!_{1}\cos t+{\vec {f}}\!_{2}\sin t\right).} Expanding and applying the identitiescos2⁡t−sin2⁡t=cos⁡2t,2sin⁡tcos⁡t=sin⁡2t{\displaystyle \;\cos ^{2}t-\sin ^{2}t=\cos 2t,\ \ 2\sin t\cos t=\sin 2t\;}gives the equation fort=t0.{\displaystyle t=t_{0}\;.} From Apollonios theorem (see below) one obtains:The area of an ellipsex→=f→0+f→1cos⁡t+f→2sin⁡t{\displaystyle \;{\vec {x}}={\vec {f}}_{0}+{\vec {f}}_{1}\cos t+{\vec {f}}_{2}\sin t\;}isA=π|det(f→1,f→2)|.{\displaystyle A=\pi \left|\det({\vec {f}}_{1},{\vec {f}}_{2})\right|.} With the abbreviationsM=f→12+f→22,N=|det(f→1,f→2)|{\displaystyle \;M={\vec {f}}_{1}^{2}+{\vec {f}}_{2}^{2},\ N=\left|\det({\vec {f}}_{1},{\vec {f}}_{2})\right|}the statements of Apollonios's theorem can be written as:a2+b2=M,ab=N.{\displaystyle a^{2}+b^{2}=M,\quad ab=N\ .}Solving this nonlinear system fora,b{\displaystyle a,b}yields the semiaxes:a=12(M+2N+M−2N)b=12(M+2N−M−2N).{\displaystyle {\begin{aligned}a&={\frac {1}{2}}({\sqrt {M+2N}}+{\sqrt {M-2N}})\\[1ex]b&={\frac {1}{2}}({\sqrt {M+2N}}-{\sqrt {M-2N}})\,.\end{aligned}}} Solving the parametric representation forcos⁡t,sin⁡t{\displaystyle \;\cos t,\sin t\;}byCramer's ruleand usingcos2⁡t+sin2⁡t−1=0{\displaystyle \;\cos ^{2}t+\sin ^{2}t-1=0\;}, one obtains the implicit representationdet(x→−f→0,f→2)2+det(f→1,x→−f→0)2−det(f→1,f→2)2=0.{\displaystyle \det {\left({\vec {x}}\!-\!{\vec {f}}\!_{0},{\vec {f}}\!_{2}\right)^{2}}+\det {\left({\vec {f}}\!_{1},{\vec {x}}\!-\!{\vec {f}}\!_{0}\right)^{2}}-\det {\left({\vec 
{f}}\!_{1},{\vec {f}}\!_{2}\right)^{2}}=0.} Conversely: If theequation of an ellipse centered at the origin is given, then the two vectorsf→1=(e0),f→2=ed2−c2(−c1){\displaystyle {\vec {f}}_{1}={e \choose 0},\quad {\vec {f}}_{2}={\frac {e}{\sqrt {d^{2}-c^{2}}}}{-c \choose 1}}point to two conjugate points and the tools developed above are applicable. Example: For the ellipse with equationx2+2xy+3y2−1=0{\displaystyle \;x^{2}+2xy+3y^{2}-1=0\;}the vectors aref→1=(10),f→2=12(−11).{\displaystyle {\vec {f}}_{1}={1 \choose 0},\quad {\vec {f}}_{2}={\frac {1}{\sqrt {2}}}{-1 \choose 1}.} Forf→0=(00),f→1=a(cos⁡θsin⁡θ),f→2=b(−sin⁡θcos⁡θ){\displaystyle {\vec {f}}_{0}={0 \choose 0},\;{\vec {f}}_{1}=a{\cos \theta \choose \sin \theta },\;{\vec {f}}_{2}=b{-\sin \theta \choose \;\cos \theta }}one obtains a parametric representation of the standard ellipserotatedby angleθ{\displaystyle \theta }:x=xθ(t)=acos⁡θcos⁡t−bsin⁡θsin⁡t,y=yθ(t)=asin⁡θcos⁡t+bcos⁡θsin⁡t.{\displaystyle {\begin{aligned}x&=x_{\theta }(t)=a\cos \theta \cos t-b\sin \theta \sin t\,,\\y&=y_{\theta }(t)=a\sin \theta \cos t+b\cos \theta \sin t\,.\end{aligned}}} The definition of an ellipse in this section gives a parametric representation of an arbitrary ellipse, even in space, if one allowsf→0,f→1,f→2{\displaystyle {\vec {f}}\!_{0},{\vec {f}}\!_{1},{\vec {f}}\!_{2}}to be vectors in space. Inpolar coordinates, with the origin at the center of the ellipse and with the angular coordinateθ{\displaystyle \theta }measured from the major axis, the ellipse's equation is[7]: 75r(θ)=ab(bcos⁡θ)2+(asin⁡θ)2=b1−(ecos⁡θ)2{\displaystyle r(\theta )={\frac {ab}{\sqrt {(b\cos \theta )^{2}+(a\sin \theta )^{2}}}}={\frac {b}{\sqrt {1-(e\cos \theta )^{2}}}}}wheree{\displaystyle e}is the eccentricity (notEuler's number). 
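Both forms of the center-based polar equation can be checked against the implicit equation. A short Python sketch (names mine) confirms that the traced point lies on the ellipse and that the two expressions for r(θ) agree:

```python
import math

def polar_center(a, b, theta):
    """r(theta) about the center: a*b / sqrt((b cos)^2 + (a sin)^2)."""
    return a * b / math.sqrt((b * math.cos(theta)) ** 2 + (a * math.sin(theta)) ** 2)

def polar_center_alt(a, b, theta):
    """Equivalent form b / sqrt(1 - (e cos theta)^2)."""
    e = math.sqrt(1 - (b * b) / (a * a))
    return b / math.sqrt(1 - (e * math.cos(theta)) ** 2)

def polar_center_residual(a, b, theta):
    """Check that (r cos theta, r sin theta) satisfies the ellipse equation."""
    r = polar_center(a, b, theta)
    x, y = r * math.cos(theta), r * math.sin(theta)
    return x * x / (a * a) + y * y / (b * b) - 1
```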
If instead we use polar coordinates with the origin at one focus, with the angular coordinateθ=0{\displaystyle \theta =0}still measured from the major axis, the ellipse's equation isr(θ)=a(1−e2)1±ecos⁡θ{\displaystyle r(\theta )={\frac {a(1-e^{2})}{1\pm e\cos \theta }}} where the sign in the denominator is negative if the reference directionθ=0{\displaystyle \theta =0}points towards the center (as illustrated on the right), and positive if that direction points away from the center. The angleθ{\displaystyle \theta }is called thetrue anomalyof the point. The numeratorℓ=a(1−e2){\displaystyle \ell =a(1-e^{2})}is thesemi-latus rectum. Each of the two lines parallel to the minor axis, and at a distance ofd=a2c=ae{\textstyle d={\frac {a^{2}}{c}}={\frac {a}{e}}}from it, is called adirectrixof the ellipse (see diagram). The proof for the pairF1,l1{\displaystyle F_{1},l_{1}}follows from the fact that|PF1|2=(x−c)2+y2,|Pl1|2=(x−a2c)2{\textstyle \left|PF_{1}\right|^{2}=(x-c)^{2}+y^{2},\ \left|Pl_{1}\right|^{2}=\left(x-{\tfrac {a^{2}}{c}}\right)^{2}}andy2=b2−b2a2x2{\displaystyle y^{2}=b^{2}-{\tfrac {b^{2}}{a^{2}}}x^{2}}satisfy the equation|PF1|2−c2a2|Pl1|2=0.{\displaystyle \left|PF_{1}\right|^{2}-{\frac {c^{2}}{a^{2}}}\left|Pl_{1}\right|^{2}=0\,.} The second case is proven analogously. The converse is also true and can be used to define an ellipse (in a manner similar to the definition of a parabola): The extension toe=0{\displaystyle e=0}, which is the eccentricity of a circle, is not allowed in this context in the Euclidean plane. However, one may consider the directrix of a circle to be theline at infinityin theprojective plane. (The choicee=1{\displaystyle e=1}yields a parabola, and ife>1{\displaystyle e>1}, a hyperbola.) LetF=(f,0),e>0{\displaystyle F=(f,\,0),\ e>0}, and assume(0,0){\displaystyle (0,\,0)}is a point on the curve. The directrixl{\displaystyle l}has equationx=−fe{\displaystyle x=-{\tfrac {f}{e}}}. 
WithP=(x,y){\displaystyle P=(x,\,y)}, the relation|PF|2=e2|Pl|2{\displaystyle |PF|^{2}=e^{2}|Pl|^{2}}produces the equations The substitutionp=f(1+e){\displaystyle p=f(1+e)}yieldsx2(e2−1)+2px−y2=0.{\displaystyle x^{2}\left(e^{2}-1\right)+2px-y^{2}=0.} This is the equation of anellipse(e<1{\displaystyle e<1}), or aparabola(e=1{\displaystyle e=1}), or ahyperbola(e>1{\displaystyle e>1}). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram). Ife<1{\displaystyle e<1}, introduce new parametersa,b{\displaystyle a,\,b}so that1−e2=b2a2,andp=b2a{\displaystyle 1-e^{2}={\tfrac {b^{2}}{a^{2}}},{\text{ and }}\ p={\tfrac {b^{2}}{a}}}, and then the equation above becomes(x−a)2a2+y2b2=1,{\displaystyle {\frac {(x-a)^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}=1\,,} which is the equation of an ellipse with center(a,0){\displaystyle (a,\,0)}, thex-axis as major axis, and the major/minor semi axisa,b{\displaystyle a,\,b}. Because ofc⋅a2c=a2{\displaystyle c\cdot {\tfrac {a^{2}}{c}}=a^{2}}pointL1{\displaystyle L_{1}}of directrixl1{\displaystyle l_{1}}(see diagram) and focusF1{\displaystyle F_{1}}are inverse with respect to thecircle inversionat circlex2+y2=a2{\displaystyle x^{2}+y^{2}=a^{2}}(in diagram green). HenceL1{\displaystyle L_{1}}can be constructed as shown in the diagram. Directrixl1{\displaystyle l_{1}}is the perpendicular to the main axis at pointL1{\displaystyle L_{1}}. If the focus isF=(f1,f2){\displaystyle F=\left(f_{1},\,f_{2}\right)}and the directrixux+vy+w=0{\displaystyle ux+vy+w=0}, one obtains the equation(x−f1)2+(y−f2)2=e2(ux+vy+w)2u2+v2.{\displaystyle \left(x-f_{1}\right)^{2}+\left(y-f_{2}\right)^{2}=e^{2}{\frac {\left(ux+vy+w\right)^{2}}{u^{2}+v^{2}}}\ .} (The right side of the equation uses theHesse normal formof a line to calculate the distance|Pl|{\displaystyle |Pl|}.) 
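The focus-based polar form r(θ) = a(1 − e²)/(1 ± e cos θ) given earlier can also be verified numerically. In this Python sketch (names mine) θ is measured from the direction pointing away from the center, so the right focus sits at (c, 0) and θ = 0 reaches the near vertex at distance a − c:

```python
import math

def polar_focus(a, b, theta):
    """r(theta) about the right focus, theta measured from the direction
    pointing away from the center (the '+' sign in the denominator)."""
    e = math.sqrt(1 - (b * b) / (a * a))
    return a * (1 - e * e) / (1 + e * math.cos(theta))

def focus_polar_residual(a, b, theta):
    """Check that the traced point lies on the ellipse."""
    c = math.sqrt(a * a - b * b)
    r = polar_focus(a, b, theta)
    x, y = c + r * math.cos(theta), r * math.sin(theta)
    return x * x / (a * a) + y * y / (b * b) - 1
```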
An ellipse possesses the following property: Because the tangent line is perpendicular to the normal, an equivalent statement is that the tangent is the external angle bisector of the lines to the foci (see diagram). LetL{\displaystyle L}be the point on the linePF2¯{\displaystyle {\overline {PF_{2}}}}with distance2a{\displaystyle 2a}to the focusF2{\displaystyle F_{2}}, wherea{\displaystyle a}is the semi-major axis of the ellipse. Let linew{\displaystyle w}be the external angle bisector of the linesPF1¯{\displaystyle {\overline {PF_{1}}}}andPF2¯.{\displaystyle {\overline {PF_{2}}}.}Take any other pointQ{\displaystyle Q}onw.{\displaystyle w.}By thetriangle inequalityand theangle bisector theorem,2a=|LF2|<{\displaystyle 2a=\left|LF_{2}\right|<{}}|QF2|+|QL|={\displaystyle \left|QF_{2}\right|+\left|QL\right|={}}|QF2|+|QF1|,{\displaystyle \left|QF_{2}\right|+\left|QF_{1}\right|,}soQ{\displaystyle Q}must be outside the ellipse. As this is true for every choice ofQ,{\displaystyle Q,}w{\displaystyle w}only intersects the ellipse at the single pointP{\displaystyle P}so must be the tangent line. The rays from one focus are reflected by the ellipse to the second focus. This property has optical and acoustic applications similar to the reflective property of a parabola (seewhispering gallery). Additionally, because of the focus-to-focus reflection property of ellipses, if the rays are allowed to continue propagating, reflected rays will eventually align closely with the major axis. A circle has the following property: An affine transformation preserves parallelism and midpoints of line segments, so this property is true for any ellipse. (Note that the parallel chords and the diameter are no longer orthogonal.) 
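The focus-to-focus reflection property says the normal at an ellipse point makes equal angles with the lines to the two foci. A small Python sketch (function name mine) checks this by comparing the angles between the gradient direction (which is normal to the curve) and the two focal directions:

```python
import math

def reflection_angles(a, b, t):
    """Angles between the normal at the ellipse point (a cos t, b sin t) and
    the directions to the two foci; the reflective property says they are equal."""
    c = math.sqrt(a * a - b * b)
    x, y = a * math.cos(t), b * math.sin(t)
    nx, ny = x / (a * a), y / (b * b)   # gradient of x^2/a^2 + y^2/b^2: normal direction

    def angle(dx, dy):
        dot = nx * dx + ny * dy
        return math.acos(dot / (math.hypot(nx, ny) * math.hypot(dx, dy)))

    return angle(c - x, -y), angle(-c - x, -y)
```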
Two diametersd1,d2{\displaystyle d_{1},\,d_{2}}of an ellipse areconjugateif the midpoints of chords parallel tod1{\displaystyle d_{1}}lie ond2.{\displaystyle d_{2}\ .} From the diagram one finds: Conjugate diameters in an ellipse generalize orthogonal diameters in a circle. In the parametric equation for a general ellipse given above,x→=p→(t)=f→0+f→1cos⁡t+f→2sin⁡t,{\displaystyle {\vec {x}}={\vec {p}}(t)={\vec {f}}\!_{0}+{\vec {f}}\!_{1}\cos t+{\vec {f}}\!_{2}\sin t,} any pair of pointsp→(t),p→(t+π){\displaystyle {\vec {p}}(t),\ {\vec {p}}(t+\pi )}belong to a diameter, and the pairp→(t+π2),p→(t−π2){\displaystyle {\vec {p}}\left(t+{\tfrac {\pi }{2}}\right),\ {\vec {p}}\left(t-{\tfrac {\pi }{2}}\right)}belong to its conjugate diameter. For the common parametric representation(acos⁡t,bsin⁡t){\displaystyle (a\cos t,b\sin t)}of the ellipse with equationx2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}one gets: The points In case of a circle the last equation collapses tox1x2+y1y2=0.{\displaystyle x_{1}x_{2}+y_{1}y_{2}=0\ .} For an ellipse with semi-axesa,b{\displaystyle a,\,b}the following is true:[9][10] Let the ellipse be in the canonical form with parametric equationp→(t)=(acos⁡t,bsin⁡t).{\displaystyle {\vec {p}}(t)=(a\cos t,\,b\sin t).} The two pointsc→1=p→(t),c→2=p→(t+π2){\textstyle {\vec {c}}_{1}={\vec {p}}(t),\ {\vec {c}}_{2}={\vec {p}}\left(t+{\frac {\pi }{2}}\right)}are on conjugate diameters (see previous section). 
From trigonometric formulae one obtainsc→2=(−asin⁡t,bcos⁡t)T{\displaystyle {\vec {c}}_{2}=(-a\sin t,\,b\cos t)^{\mathsf {T}}}and|c→1|2+|c→2|2=⋯=a2+b2.{\displaystyle \left|{\vec {c}}_{1}\right|^{2}+\left|{\vec {c}}_{2}\right|^{2}=\cdots =a^{2}+b^{2}\,.} The area of the triangle generated byc→1,c→2{\displaystyle {\vec {c}}_{1},\,{\vec {c}}_{2}}isAΔ=12det(c→1,c→2)=⋯=12ab{\displaystyle A_{\Delta }={\tfrac {1}{2}}\det \left({\vec {c}}_{1},\,{\vec {c}}_{2}\right)=\cdots ={\tfrac {1}{2}}ab} and from the diagram it can be seen that the area of the parallelogram is 8 times that ofAΔ{\displaystyle A_{\Delta }}. HenceArea12=4ab.{\displaystyle {\text{Area}}_{12}=4ab\,.} For the ellipsex2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}the intersection points oforthogonaltangents lie on the circlex2+y2=a2+b2{\displaystyle x^{2}+y^{2}=a^{2}+b^{2}}. This circle is calledorthopticordirector circleof the ellipse (not to be confused with the circular directrix defined above). Ellipses appear indescriptive geometryas images (parallel or central projection) of circles. There exist various tools to draw an ellipse. Computers provide the fastest and most accurate method for drawing an ellipse. However, technical tools (ellipsographs) to draw an ellipse without a computer exist. The principle was known to the 5th century mathematicianProclus, and the tool now known as anelliptical trammelwas invented byLeonardo da Vinci.[11] If there is no ellipsograph available, one can draw an ellipse using anapproximation by the four osculating circles at the vertices. For any method described below, knowledge of the axes and the semi-axes is necessary (or equivalently: the foci and the semi-major axis). If this presumption is not fulfilled one has to know at least two conjugate diameters. With help ofRytz's constructionthe axes and semi-axes can be retrieved. 
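The conjugate-diameter relations derived above (the Apollonios identities |c₁|² + |c₂|² = a² + b² and the constant triangle area ab/2) can be confirmed for arbitrary parameters. A minimal Python sketch (names mine):

```python
import math

def conjugate_pair(a, b, t):
    """Endpoints of two conjugate semi-diameters of the standard ellipse."""
    c1 = (a * math.cos(t), b * math.sin(t))
    c2 = (-a * math.sin(t), b * math.cos(t))
    return c1, c2

def apollonios_sum(a, b, t):
    """|c1|^2 + |c2|^2; equals a^2 + b^2 for every t."""
    (x1, y1), (x2, y2) = conjugate_pair(a, b, t)
    return x1 * x1 + y1 * y1 + x2 * x2 + y2 * y2

def triangle_area(a, b, t):
    """Area of the triangle spanned by c1, c2; equals a*b/2 for every t."""
    (x1, y1), (x2, y2) = conjugate_pair(a, b, t)
    return 0.5 * abs(x1 * y2 - x2 * y1)
```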
The following construction of single points of an ellipse is due tode La Hire.[12]It is based on thestandard parametric representation(acos⁡t,bsin⁡t){\displaystyle (a\cos t,\,b\sin t)}of an ellipse: The characterization of an ellipse as the locus of points so that sum of the distances to the foci is constant leads to a method of drawing one using twodrawing pins, a length of string, and a pencil. In this method, pins are pushed into the paper at two points, which become the ellipse's foci. A string is tied at each end to the two pins; its length after tying is2a{\displaystyle 2a}. The tip of the pencil then traces an ellipse if it is moved while keeping the string taut. Using two pegs and a rope, gardeners use this procedure to outline an elliptical flower bed—thus it is called thegardener's ellipse. The Byzantine architectAnthemius of Tralles(c.600) described how this method could be used to construct an elliptical reflector,[13]and it was elaborated in a now-lost 9th-century treatise byAl-Ḥasan ibn Mūsā.[14] A similar method for drawingconfocal ellipseswith aclosedstring is due to the Irish bishopCharles Graves. The two following methods rely on the parametric representation (see§ Standard parametric representation, above):(acos⁡t,bsin⁡t){\displaystyle (a\cos t,\,b\sin t)} This representation can be modeled technically by two simple methods. In both cases center, the axes and semi axesa,b{\displaystyle a,\,b}have to be known. The first method starts with The point, where the semi axes meet is marked byP{\displaystyle P}. If the strip slides with both ends on the axes of the desired ellipse, then pointP{\displaystyle P}traces the ellipse. For the proof one shows that pointP{\displaystyle P}has the parametric representation(acos⁡t,bsin⁡t){\displaystyle (a\cos t,\,b\sin t)}, where parametert{\displaystyle t}is the angle of the slope of the paper strip. A technical realization of the motion of the paper strip can be achieved by aTusi couple(see animation). 
The device is able to draw any ellipse with afixedsuma+b{\displaystyle a+b}, which is the radius of the large circle. This restriction may be a disadvantage in real life. More flexible is the second paper strip method. A variation of the paper strip method 1 uses the observation that the midpointN{\displaystyle N}of the paper strip is moving on the circle with centerM{\displaystyle M}(of the ellipse) and radiusa+b2{\displaystyle {\tfrac {a+b}{2}}}. Hence, the paperstrip can be cut at pointN{\displaystyle N}into halves, connected again by a joint atN{\displaystyle N}and the sliding endK{\displaystyle K}fixed at the centerM{\displaystyle M}(see diagram). After this operation the movement of the unchanged half of the paperstrip is unchanged.[15]This variation requires only one sliding shoe. The second method starts with One marks the point, which divides the strip into two substrips of lengthb{\displaystyle b}anda−b{\displaystyle a-b}. The strip is positioned onto the axes as described in the diagram. Then the free end of the strip traces an ellipse, while the strip is moved. For the proof, one recognizes that the tracing point can be described parametrically by(acos⁡t,bsin⁡t){\displaystyle (a\cos t,\,b\sin t)}, where parametert{\displaystyle t}is the angle of slope of the paper strip. This method is the base for severalellipsographs(see section below). Similar to the variation of the paper strip method 1 avariation of the paper strip method 2can be established (see diagram) by cutting the part between the axes into halves. Most ellipsographdraftinginstruments are based on the second paperstrip method. FromMetric propertiesbelow, one obtains: The diagram shows an easy way to find the centers of curvatureC1=(a−b2a,0),C3=(0,b−a2b){\displaystyle C_{1}=\left(a-{\tfrac {b^{2}}{a}},0\right),\,C_{3}=\left(0,b-{\tfrac {a^{2}}{b}}\right)}at vertexV1{\displaystyle V_{1}}and co-vertexV3{\displaystyle V_{3}}, respectively: (proof: simple calculation.) 
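The stated centers of curvature agree with the radius of curvature of the parametric curve. A short Python sketch (function names mine) evaluates R(t) = (a² sin²t + b² cos²t)^{3/2}/(ab) and compares it with the distance from the vertex to C₁:

```python
import math

def radius_of_curvature(a, b, t):
    """Radius of curvature of (a cos t, b sin t):
    R(t) = (a^2 sin^2 t + b^2 cos^2 t)^(3/2) / (a*b)."""
    return (a * a * math.sin(t) ** 2 + b * b * math.cos(t) ** 2) ** 1.5 / (a * b)

def vertex_curvature_centers(a, b):
    """Centers C1, C3 of the osculating circles at vertex (a, 0) and
    co-vertex (0, b), as stated in the text."""
    return (a - b * b / a, 0.0), (0.0, b - a * a / b)
```

At the vertex the radius of curvature equals the semi-latus rectum b²/a, at the co-vertex a²/b.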
The centers for the remaining vertices are found by symmetry. With help of a French curve one draws a curve that has smooth contact with the osculating circles. The following method to construct single points of an ellipse relies on the Steiner generation of a conic section: For the generation of points of the ellipse x^2/a^2 + y^2/b^2 = 1 one uses the pencils at the vertices V_1, V_2. Let P = (0, b) be an upper co-vertex of the ellipse and A = (−a, 2b), B = (a, 2b). P is the center of the rectangle V_1, V_2, B, A. The side AB of the rectangle is divided into n equally spaced line segments, and this division is projected parallel to the diagonal AV_2 onto the line segment V_1B, with the division assigned as shown in the diagram. The parallel projection, together with the reversal of orientation, is part of the projective mapping needed between the pencils at V_1 and V_2. The intersection points of any two related lines V_1B_i and V_2A_i are points of the uniquely defined ellipse. With help of the points C_1, … the points of the second quarter of the ellipse can be determined. Analogously one obtains the points of the lower half of the ellipse. Steiner generation can also be defined for hyperbolas and parabolas. It is sometimes called a parallelogram method because one can use points other than the vertices, starting with a parallelogram instead of a rectangle. The ellipse is a special case of the hypotrochoid when R = 2r, as shown in the adjacent image.
The special case of a moving circle with radiusr{\displaystyle r}inside a circle with radiusR=2r{\displaystyle R=2r}is called aTusi couple. A circle with equation(x−x∘)2+(y−y∘)2=r2{\displaystyle \left(x-x_{\circ }\right)^{2}+\left(y-y_{\circ }\right)^{2}=r^{2}}is uniquely determined by three points(x1,y1),(x2,y2),(x3,y3){\displaystyle \left(x_{1},y_{1}\right),\;\left(x_{2},\,y_{2}\right),\;\left(x_{3},\,y_{3}\right)}not on a line. A simple way to determine the parametersx∘,y∘,r{\displaystyle x_{\circ },y_{\circ },r}uses theinscribed angle theoremfor circles: Usually one measures inscribed angles by a degree or radianθ, but here the following measurement is more convenient: For four pointsPi=(xi,yi),i=1,2,3,4,{\displaystyle P_{i}=\left(x_{i},\,y_{i}\right),\ i=1,\,2,\,3,\,4,\,}no three of them on a line, we have the following (see diagram): At first the measure is available only for chords not parallel to the y-axis, but the final formula works for any chord. For example, forP1=(2,0),P2=(0,1),P3=(0,0){\displaystyle P_{1}=(2,\,0),\;P_{2}=(0,\,1),\;P_{3}=(0,\,0)}the three-point equation is: Using vectors,dot productsanddeterminantsthis formula can be arranged more clearly, lettingx→=(x,y){\displaystyle {\vec {x}}=(x,\,y)}:(x→−x→1)⋅(x→−x→2)det(x→−x→1,x→−x→2)=(x→3−x→1)⋅(x→3−x→2)det(x→3−x→1,x→3−x→2).{\displaystyle {\frac {\left({\color {red}{\vec {x}}}-{\vec {x}}_{1}\right)\cdot \left({\color {red}{\vec {x}}}-{\vec {x}}_{2}\right)}{\det \left({\color {red}{\vec {x}}}-{\vec {x}}_{1},{\color {red}{\vec {x}}}-{\vec {x}}_{2}\right)}}={\frac {\left({\vec {x}}_{3}-{\vec {x}}_{1}\right)\cdot \left({\vec {x}}_{3}-{\vec {x}}_{2}\right)}{\det \left({\vec {x}}_{3}-{\vec {x}}_{1},{\vec {x}}_{3}-{\vec {x}}_{2}\right)}}.} The center of the circle(x∘,y∘){\displaystyle \left(x_{\circ },\,y_{\circ }\right)}satisfies:[1y1−y2x1−x2x1−x3y1−y31][x∘y∘]=[x12−x22+y12−y222(x1−x2)y12−y32+x12−x322(y1−y3)].{\displaystyle {\begin{bmatrix}1&{\dfrac {y_{1}-y_{2}}{x_{1}-x_{2}}}\\[2ex]{\dfrac 
{x_{1}-x_{3}}{y_{1}-y_{3}}}&1\end{bmatrix}}{\begin{bmatrix}x_{\circ }\\[1ex]y_{\circ }\end{bmatrix}}={\begin{bmatrix}{\dfrac {x_{1}^{2}-x_{2}^{2}+y_{1}^{2}-y_{2}^{2}}{2(x_{1}-x_{2})}}\\[2ex]{\dfrac {y_{1}^{2}-y_{3}^{2}+x_{1}^{2}-x_{3}^{2}}{2(y_{1}-y_{3})}}\end{bmatrix}}.} The radius is the distance between any of the three points and the center.r=(x1−x∘)2+(y1−y∘)2=(x2−x∘)2+(y2−y∘)2=(x3−x∘)2+(y3−y∘)2.{\displaystyle r={\sqrt {\left(x_{1}-x_{\circ }\right)^{2}+\left(y_{1}-y_{\circ }\right)^{2}}}={\sqrt {\left(x_{2}-x_{\circ }\right)^{2}+\left(y_{2}-y_{\circ }\right)^{2}}}={\sqrt {\left(x_{3}-x_{\circ }\right)^{2}+\left(y_{3}-y_{\circ }\right)^{2}}}.} This section considers the family of ellipses defined by equations(x−x∘)2a2+(y−y∘)2b2=1{\displaystyle {\tfrac {\left(x-x_{\circ }\right)^{2}}{a^{2}}}+{\tfrac {\left(y-y_{\circ }\right)^{2}}{b^{2}}}=1}with afixedeccentricitye{\displaystyle e}. It is convenient to use the parameter:q=a2b2=11−e2,{\displaystyle {\color {blue}q}={\frac {a^{2}}{b^{2}}}={\frac {1}{1-e^{2}}},} and to write the ellipse equation as:(x−x∘)2+q(y−y∘)2=a2,{\displaystyle \left(x-x_{\circ }\right)^{2}+{\color {blue}q}\,\left(y-y_{\circ }\right)^{2}=a^{2},} whereqis fixed andx∘,y∘,a{\displaystyle x_{\circ },\,y_{\circ },\,a}vary over the real numbers. (Such ellipses have their axes parallel to the coordinate axes: ifq<1{\displaystyle q<1}, the major axis is parallel to thex-axis; ifq>1{\displaystyle q>1}, it is parallel to they-axis.) Like a circle, such an ellipse is determined by three points not on a line. For this family of ellipses, one introduces the followingq-analogangle measure, which isnota function of the usual angle measureθ:[16][17] At first the measure is available only for chords which are not parallel to the y-axis. But the final formula works for any chord. The proof follows from a straightforward calculation. For the direction of proof given that the points are on an ellipse, one can assume that the center of the ellipse is the origin. 
For example, for P1=(2,0),P2=(0,1),P3=(0,0){\displaystyle P_{1}=(2,\,0),\;P_{2}=(0,\,1),\;P_{3}=(0,\,0)} and q=4{\displaystyle q=4} one obtains the three-point form x(x−2)+4y(y−1)=0{\displaystyle x(x-2)+4y(y-1)=0}, that is, the ellipse x2+4y2−2x−4y=0{\displaystyle x^{2}+4y^{2}-2x-4y=0}. Analogously to the circle case, the equation can be written more clearly using vectors:(x→−x→1)∗(x→−x→2)det(x→−x→1,x→−x→2)=(x→3−x→1)∗(x→3−x→2)det(x→3−x→1,x→3−x→2),{\displaystyle {\frac {\left({\color {red}{\vec {x}}}-{\vec {x}}_{1}\right)*\left({\color {red}{\vec {x}}}-{\vec {x}}_{2}\right)}{\det \left({\color {red}{\vec {x}}}-{\vec {x}}_{1},{\color {red}{\vec {x}}}-{\vec {x}}_{2}\right)}}={\frac {\left({\vec {x}}_{3}-{\vec {x}}_{1}\right)*\left({\vec {x}}_{3}-{\vec {x}}_{2}\right)}{\det \left({\vec {x}}_{3}-{\vec {x}}_{1},{\vec {x}}_{3}-{\vec {x}}_{2}\right)}},} where ∗{\displaystyle *} is the modified dot product u→∗v→=uxvx+quyvy.{\displaystyle {\vec {u}}*{\vec {v}}=u_{x}v_{x}+{\color {blue}q}\,u_{y}v_{y}.} Any ellipse can be described in a suitable coordinate system by an equation x2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}. The equation of the tangent at a point P1=(x1,y1){\displaystyle P_{1}=\left(x_{1},\,y_{1}\right)} of the ellipse is x1xa2+y1yb2=1.{\displaystyle {\tfrac {x_{1}x}{a^{2}}}+{\tfrac {y_{1}y}{b^{2}}}=1.} If one allows point P1=(x1,y1){\displaystyle P_{1}=\left(x_{1},\,y_{1}\right)} to be an arbitrary point different from the origin, then it is mapped onto the line x1xa2+y1yb2=1{\displaystyle {\tfrac {x_{1}x}{a^{2}}}+{\tfrac {y_{1}y}{b^{2}}}=1}, which does not pass through the center of the ellipse. This relation between points and lines is a bijection; the inverse function maps each such line back to its defining point. Such a relation between points and lines generated by a conic is called a pole-polar relation or polarity. The pole is the point; the polar is the line. By calculation one can confirm the properties of the pole-polar relation of the ellipse; for example, the polar of a point of the ellipse is the tangent at that point. Pole-polar relations exist for hyperbolas and parabolas as well. All metric properties given below refer to an ellipse with the equation x2a2+y2b2=1{\displaystyle {\tfrac {x^{2}}{a^{2}}}+{\tfrac {y^{2}}{b^{2}}}=1}, except for the section on the area enclosed by a tilted ellipse, where a generalized form of this equation will be given.
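The q-analog measure behind the three-point form can be verified numerically: sample four points of one ellipse of the family (x−x0)² + q(y−y0)² = a² and check that two of them "see" the chord joining the other two under the same measure. A sketch with sample values of our own choosing (function names are ours):

```python
import math

# For the family (x - x0)^2 + q*(y - y0)^2 = a^2 with fixed q, the
# three-point form uses the modified dot product u*v = ux*vx + q*uy*vy.

def mdot(u, v, q):
    return u[0] * v[0] + q * u[1] * v[1]

def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def measure(p, p1, p2, q):
    """q-analog 'angle measure' under which p sees the chord p1-p2."""
    u = (p[0] - p1[0], p[1] - p1[1])
    v = (p[0] - p2[0], p[1] - p2[1])
    return mdot(u, v, q) / det2(u, v)

# four points of the ellipse (x-1)^2 + 4*(y-2)^2 = 9, i.e. q=4, a=3:
q, x0, y0, a = 4.0, 1.0, 2.0, 3.0
pts = [(x0 + a * math.cos(t), y0 + a / math.sqrt(q) * math.sin(t))
       for t in (0.3, 1.1, 2.0, 4.5)]
p1, p2, p3, p4 = pts
# p3 and p4 see the chord p1-p2 under the same q-measure:
assert abs(measure(p3, p1, p2, q) - measure(p4, p1, p2, q)) < 1e-9
```

For q = 1 this reduces to the inscribed angle theorem for circles (the measure is the cotangent of the inscribed angle, which is π-periodic and hence the same on both arcs).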
TheareaAellipse{\displaystyle A_{\text{ellipse}}}enclosed by an ellipse is: wherea{\displaystyle a}andb{\displaystyle b}are the lengths of the semi-major and semi-minor axes, respectively. The area formulaπab{\displaystyle \pi ab}is intuitive: start with a circle of radiusb{\displaystyle b}(so its area isπb2{\displaystyle \pi b^{2}}) and stretch it by a factora/b{\displaystyle a/b}to make an ellipse. This scales the area by the same factor:πb2(a/b)=πab.{\displaystyle \pi b^{2}(a/b)=\pi ab.}[18]However, using the same approach for the circumference would be fallacious – compare theintegrals∫f(x)dx{\textstyle \int f(x)\,dx}and∫1+f′2(x)dx{\textstyle \int {\sqrt {1+f'^{2}(x)}}\,dx}. It is also easy to rigorously prove the area formula using integration as follows. Equation (1) can be rewritten asy(x)=b1−x2/a2.{\textstyle y(x)=b{\sqrt {1-x^{2}/a^{2}}}.}Forx∈[−a,a],{\displaystyle x\in [-a,a],}this curve is the top half of the ellipse. So twice the integral ofy(x){\displaystyle y(x)}over the interval[−a,a]{\displaystyle [-a,a]}will be the area of the ellipse:Aellipse=∫−aa2b1−x2a2dx=ba∫−aa2a2−x2dx.{\displaystyle {\begin{aligned}A_{\text{ellipse}}&=\int _{-a}^{a}2b{\sqrt {1-{\frac {x^{2}}{a^{2}}}}}\,dx\\&={\frac {b}{a}}\int _{-a}^{a}2{\sqrt {a^{2}-x^{2}}}\,dx.\end{aligned}}} The second integral is the area of a circle of radiusa,{\displaystyle a,}that is,πa2.{\displaystyle \pi a^{2}.}SoAellipse=baπa2=πab.{\displaystyle A_{\text{ellipse}}={\frac {b}{a}}\pi a^{2}=\pi ab.} An ellipse defined implicitly byAx2+Bxy+Cy2=1{\displaystyle Ax^{2}+Bxy+Cy^{2}=1}has area2π/4AC−B2.{\displaystyle 2\pi /{\sqrt {4AC-B^{2}}}.} The area can also be expressed in terms of eccentricity and the length of the semi-major axis asa2π1−e2{\displaystyle a^{2}\pi {\sqrt {1-e^{2}}}}(obtained by solving forflattening, then computing the semi-minor axis). So far we have dealt witherectellipses, whose major and minor axes are parallel to thex{\displaystyle x}andy{\displaystyle y}axes. 
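The integral derivation above is easy to confirm numerically: integrating twice the top half y = b√(1 − x²/a²) over [−a, a] reproduces πab. A small sketch using a midpoint rule (sample values a = 3, b = 1.5 are ours):

```python
import math

# Numerical check of A = pi*a*b by integrating the top half
# y(x) = b*sqrt(1 - x^2/a^2) over [-a, a] with a midpoint rule.

def ellipse_area_numeric(a, b, n=50000):
    h = 2 * a / n
    total = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * h
        total += 2 * b * math.sqrt(max(0.0, 1.0 - x * x / (a * a)))
    return total * h

a, b = 3.0, 1.5
assert abs(ellipse_area_numeric(a, b) - math.pi * a * b) < 1e-3
```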
However, some applications requiretiltedellipses. In charged-particle beam optics, for instance, the enclosed area of an erect or tilted ellipse is an important property of the beam, itsemittance. In this case a simple formula still applies, namely whereyint{\displaystyle y_{\text{int}}},xint{\displaystyle x_{\text{int}}}are intercepts andxmax{\displaystyle x_{\text{max}}},ymax{\displaystyle y_{\text{max}}}are maximum values. It follows directly fromApollonios's theorem. The circumferenceC{\displaystyle C}of an ellipse is:C=4a∫0π/21−e2sin2⁡θdθ=4aE(e){\displaystyle C\,=\,4a\int _{0}^{\pi /2}{\sqrt {1-e^{2}\sin ^{2}\theta }}\ d\theta \,=\,4a\,E(e)} where againa{\displaystyle a}is the length of the semi-major axis,e=1−b2/a2{\textstyle e={\sqrt {1-b^{2}/a^{2}}}}is the eccentricity, and the functionE{\displaystyle E}is thecomplete elliptic integral of the second kind,E(e)=∫0π/21−e2sin2⁡θdθ{\displaystyle E(e)\,=\,\int _{0}^{\pi /2}{\sqrt {1-e^{2}\sin ^{2}\theta }}\ d\theta }which is in general not anelementary function. The circumference of the ellipse may be evaluated in terms ofE(e){\displaystyle E(e)}usingGauss's arithmetic-geometric mean;[19]this is a quadratically converging iterative method (seeherefor details). The exactinfinite seriesis:C2πa=1−(12)2e2−(1⋅32⋅4)2e43−(1⋅3⋅52⋅4⋅6)2e65−⋯=1−∑n=1∞((2n−1)!!(2n)!!)2e2n2n−1=−∑n=0∞((2n−1)!!(2n)!!)2e2n2n−1,{\displaystyle {\begin{aligned}{\frac {C}{2\pi a}}&=1-\left({\frac {1}{2}}\right)^{2}e^{2}-\left({\frac {1\cdot 3}{2\cdot 4}}\right)^{2}{\frac {e^{4}}{3}}-\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right)^{2}{\frac {e^{6}}{5}}-\cdots \\&=1-\sum _{n=1}^{\infty }\left({\frac {(2n-1)!!}{(2n)!!}}\right)^{2}{\frac {e^{2n}}{2n-1}}\\&=-\sum _{n=0}^{\infty }\left({\frac {(2n-1)!!}{(2n)!!}}\right)^{2}{\frac {e^{2n}}{2n-1}},\end{aligned}}}wheren!!{\displaystyle n!!}is thedouble factorial(extended to negative odd integers in the usual way, giving(−1)!!=1{\displaystyle (-1)!!=1}and(−3)!!=−1{\displaystyle (-3)!!=-1}). 
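The arithmetic-geometric mean evaluation mentioned above can be sketched as follows. Using the standard AGM identities for the complete elliptic integrals (K(e) = π/(2M(1, b/a)) and E(e) = K(e)(1 − Σ 2ⁿ⁻¹cₙ²) with c₀ = e, cₙ₊₁ = (aₙ − bₙ)/2), one gets C = 2πa(1 − Σ 2ⁿ⁻¹cₙ²)/M(1, b/a); the function name and tolerances are ours:

```python
import math

def ellipse_circumference(a, b):
    """C = 4*a*E(e), evaluated with Gauss's AGM iteration (a >= b > 0)."""
    x, y = 1.0, b / a                 # AGM sequence a_0, b_0
    s = 0.5 * (1.0 - y * y)           # 2^(-1) * c_0^2, with c_0 = e
    pow2 = 0.5                        # 2^(n-1) for the current term
    while x - y > 1e-16:
        c = (x - y) / 2               # c_{n+1} = (a_n - b_n)/2
        x, y = (x + y) / 2, math.sqrt(x * y)
        pow2 *= 2
        s += pow2 * c * c
    return 2 * math.pi * a * (1.0 - s) / x   # x is now M(1, b/a)

# circle: reduces to 2*pi*a
assert abs(ellipse_circumference(1.0, 1.0) - 2 * math.pi) < 1e-12
```

The iteration converges quadratically, so only a handful of steps are needed even near machine precision.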
This series converges, but by expanding in terms ofh=(a−b)2/(a+b)2,{\displaystyle h=(a-b)^{2}/(a+b)^{2},}James Ivory,[20]Bessel[21]andKummer[22]derived a series that converges much more rapidly. It is most concisely written in terms of thebinomial coefficient withn=1/2{\displaystyle n=1/2}:Cπ(a+b)=∑n=0∞(12n)2hn=∑n=0∞((2n−3)!!(2n)!!)2hn=∑n=0∞((2n−3)!!2nn!)2hn=∑n=0∞(1(2n−1)4n(2nn))2hn=1+h4+h264+h3256+25h416384+49h565536+441h6220+1089h7222+⋯.{\displaystyle {\begin{aligned}{\frac {C}{\pi (a+b)}}&=\sum _{n=0}^{\infty }{{\frac {1}{2}} \choose n}^{2}h^{n}\\&=\sum _{n=0}^{\infty }\left({\frac {(2n-3)!!}{(2n)!!}}\right)^{2}h^{n}\\&=\sum _{n=0}^{\infty }\left({\frac {(2n-3)!!}{2^{n}n!}}\right)^{2}h^{n}\\&=\sum _{n=0}^{\infty }\left({\frac {1}{(2n-1)4^{n}}}{\binom {2n}{n}}\right)^{2}h^{n}\\&=1+{\frac {h}{4}}+{\frac {h^{2}}{64}}+{\frac {h^{3}}{256}}+{\frac {25\,h^{4}}{16384}}+{\frac {49\,h^{5}}{65536}}+{\frac {441\,h^{6}}{2^{20}}}+{\frac {1089\,h^{7}}{2^{22}}}+\cdots .\end{aligned}}}The coefficients are slightly smaller (by a factor of2n−1{\displaystyle 2n-1}), but alsoe4/16≤h≤e4{\displaystyle e^{4}/16\leq h\leq e^{4}}is numerically much smaller thane{\displaystyle e}except ath=e=0{\displaystyle h=e=0}andh=e=1{\displaystyle h=e=1}. For eccentricities less than 0.5(h<0.005{\displaystyle h<0.005}),the error is at the limits ofdouble-precision floating-pointafter theh4{\displaystyle h^{4}}term.[23] Srinivasa Ramanujangave two closeapproximationsfor the circumference in §16 of "Modular Equations and Approximations toπ{\displaystyle \pi }";[24]they areCπ≈3(a+b)−(3a+b)(a+3b)=3(a+b)−3(a+b)2+4ab{\displaystyle {\frac {C}{\pi }}\approx 3(a+b)-{\sqrt {(3a+b)(a+3b)}}=3(a+b)-{\sqrt {3(a+b)^{2}+4ab}}}andCπ(a+b)≈1+3h10+4−3h,{\displaystyle {\frac {C}{\pi (a+b)}}\approx 1+{\frac {3h}{10+{\sqrt {4-3h}}}},}whereh{\displaystyle h}takes on the same meaning as above. 
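Both Ramanujan approximations are easy to compare against a direct numerical evaluation of C = 4aE(e); as expected from the error orders, the second is markedly closer. A sketch (function names, sample values and the midpoint-rule reference are ours):

```python
import math

def ramanujan1(a, b):
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

def ramanujan2(a, b):
    h = (a - b) ** 2 / (a + b) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def circumference_quad(a, b, n=20000):
    """Reference value: midpoint-rule evaluation of C = 4a*E(e)."""
    e2 = 1 - (b / a) ** 2
    h = (math.pi / 2) / n
    return 4 * a * h * sum(
        math.sqrt(1 - e2 * math.sin((i + 0.5) * h) ** 2) for i in range(n))

a, b = 2.0, 1.0
exact = circumference_quad(a, b)
# order h^5 beats order h^3:
assert abs(ramanujan2(a, b) - exact) < abs(ramanujan1(a, b) - exact)
```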
The errors in these approximations, which were obtained empirically, are of orderh3{\displaystyle h^{3}}andh5,{\displaystyle h^{5},}respectively.[25][26]This is because the second formula's infinite series expansion matches Ivory's formula up to theh4{\displaystyle h^{4}}term.[25]: 3 More generally, thearc lengthof a portion of the circumference, as a function of the angle subtended (orxcoordinatesof any two points on the upper half of the ellipse), is given by an incompleteelliptic integral. The upper half of an ellipse is parameterized byy=b1−x2a2.{\displaystyle y=b\ {\sqrt {1-{\frac {x^{2}}{a^{2}}}\ }}~.} Then the arc lengths{\displaystyle s}fromx1{\displaystyle \ x_{1}\ }tox2{\displaystyle \ x_{2}\ }is:s=−b∫arccos⁡x1aarccos⁡x2a1+(a2b2−1)sin2⁡zdz.{\displaystyle s=-b\int _{\arccos {\frac {x_{1}}{a}}}^{\arccos {\frac {x_{2}}{a}}}{\sqrt {\ 1+\left({\tfrac {a^{2}}{b^{2}}}-1\right)\ \sin ^{2}z~}}\;dz~.} This is equivalent tos=b[E(z|1−a2b2)]z=arccos⁡x2aarccos⁡x1a{\displaystyle s=b\ \left[\;E\left(z\;{\Biggl |}\;1-{\frac {a^{2}}{b^{2}}}\right)\;\right]_{z\ =\ \arccos {\frac {x_{2}}{a}}}^{\arccos {\frac {x_{1}}{a}}}} whereE(z∣m){\displaystyle E(z\mid m)}is the incomplete elliptic integral of the second kind with parameterm=k2.{\displaystyle m=k^{2}.} Some lower and upper bounds on the circumference of the canonical ellipsex2/a2+y2/b2=1{\displaystyle \ x^{2}/a^{2}+y^{2}/b^{2}=1\ }witha≥b{\displaystyle \ a\geq b\ }are[27]2πb≤C≤2πa,π(a+b)≤C≤4(a+b),4a2+b2≤C≤2πa2+b2.{\displaystyle {\begin{aligned}2\pi b&\leq C\leq 2\pi a\ ,\\\pi (a+b)&\leq C\leq 4(a+b)\ ,\\4{\sqrt {a^{2}+b^{2}\ }}&\leq C\leq {\sqrt {2\ }}\pi {\sqrt {a^{2}+b^{2}\ }}~.\end{aligned}}} Here the upper bound2πa{\displaystyle \ 2\pi a\ }is the circumference of acircumscribedconcentric circlepassing through the endpoints of the ellipse's major axis, and the lower bound4a2+b2{\displaystyle 4{\sqrt {a^{2}+b^{2}}}}is the perimeter of aninscribedrhombuswithverticesat the endpoints of the major and the minor axes. 
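The stated bounds can be spot-checked numerically for a few shapes, from the circle to a very flat ellipse (the sample shapes and small tolerances for floating-point equality at the circle are ours):

```python
import math

def circumference(a, b, n=20000):
    """Midpoint-rule evaluation of C = 4a*E(e), a >= b > 0."""
    e2 = 1 - (b / a) ** 2
    h = (math.pi / 2) / n
    return 4 * a * h * sum(
        math.sqrt(1 - e2 * math.sin((i + 0.5) * h) ** 2) for i in range(n))

for a, b in [(1.0, 1.0), (2.0, 1.0), (5.0, 0.5)]:
    C = circumference(a, b)
    d = math.sqrt(a * a + b * b)
    assert 2 * math.pi * b <= C + 1e-9 <= 2 * math.pi * a + 2e-9
    assert math.pi * (a + b) <= C + 1e-9 and C <= 4 * (a + b)
    assert 4 * d <= C + 1e-9 and C <= math.sqrt(2) * math.pi * d + 1e-9
```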
Given an ellipse whose axes are drawn, we can construct the endpoints of a particular elliptic arc whose length is one eighth of the ellipse's circumference using only straightedge and compass in a finite number of steps; for some specific shapes of ellipses, such as when the axes have a length ratio of 2:1{\displaystyle {\sqrt {2}}:1}, it is additionally possible to construct the endpoints of a particular arc whose length is one twelfth of the circumference.[28] (The vertices and co-vertices are already endpoints of arcs whose length is one half or one quarter of the ellipse's circumference.) However, the general theory of straightedge-and-compass elliptic division appears to be unknown, unlike in the case of the circle and the lemniscate. The division in special cases has been investigated by Legendre in his classical treatise.[29] The curvature is given by: κ=1a2b2(x2a4+y2b4)−32,{\displaystyle \kappa ={\frac {1}{a^{2}b^{2}}}\left({\frac {x^{2}}{a^{4}}}+{\frac {y^{2}}{b^{4}}}\right)^{-{\frac {3}{2}}}\ ,} and the radius of curvature, ρ = 1/κ, at point (x,y){\displaystyle (x,y)}: ρ=a2b2(x2a4+y2b4)32=1a4b4(a4y2+b4x2)3.{\displaystyle \rho =a^{2}b^{2}\left({\frac {x^{2}}{a^{4}}}+{\frac {y^{2}}{b^{4}}}\right)^{\frac {3}{2}}={\frac {1}{a^{4}b^{4}}}{\sqrt {\left(a^{4}y^{2}+b^{4}x^{2}\right)^{3}}}\ .} The radius of curvature of an ellipse, as a function of the angle θ from the center, is: R(θ)=a2b(1−e2(2−e2)(cos⁡θ)21−e2(cos⁡θ)2)3/2,{\displaystyle R(\theta )={\frac {a^{2}}{b}}{\biggl (}{\frac {1-e^{2}(2-e^{2})(\cos \theta )^{2}}{1-e^{2}(\cos \theta )^{2}}}{\biggr )}^{3/2}\,,} where e is the eccentricity.
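A quick numerical check of the curvature formulas: at the vertices (±a, 0) the radius of curvature ρ = 1/κ reduces to b²/a, and at the co-vertices (0, ±b) to a²/b (the function names are ours):

```python
import math

# rho = 1/kappa = a^2*b^2 * (x^2/a^4 + y^2/b^4)^(3/2) on the ellipse.
def rho(a, b, x, y):
    return (a * b) ** 2 * (x * x / a**4 + y * y / b**4) ** 1.5

# radius of curvature as a function of the angle theta from the center
def R(a, b, theta):
    e2 = 1 - (b / a) ** 2
    c2 = math.cos(theta) ** 2
    return (a * a / b) * ((1 - e2 * (2 - e2) * c2) / (1 - e2 * c2)) ** 1.5

a, b = 3.0, 2.0
assert abs(rho(a, b, a, 0.0) - b * b / a) < 1e-12   # vertex: b^2/a
assert abs(rho(a, b, 0.0, b) - a * a / b) < 1e-12   # co-vertex: a^2/b
assert abs(R(a, b, 0.0) - b * b / a) < 1e-12
assert abs(R(a, b, math.pi / 2) - a * a / b) < 1e-12
```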
Radius of curvature at the twovertices(±a,0){\displaystyle (\pm a,0)}and the centers of curvature:ρ0=b2a=p,(±c2a|0).{\displaystyle \rho _{0}={\frac {b^{2}}{a}}=p\ ,\qquad \left(\pm {\frac {c^{2}}{a}}\,{\bigg |}\,0\right)\ .} Radius of curvature at the twoco-vertices(0,±b){\displaystyle (0,\pm b)}and the centers of curvature:ρ1=a2b,(0|±c2b).{\displaystyle \rho _{1}={\frac {a^{2}}{b}}\ ,\qquad \left(0\,{\bigg |}\,\pm {\frac {c^{2}}{b}}\right)\ .}The locus of all the centers of curvature is called anevolute. In the case of an ellipse, the evolute is anastroid. Ellipses appear in triangle geometry as Ellipses appear as plane sections of the followingquadrics: If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, afterreflectingoff the walls, converge simultaneously to a single point: thesecond focus. This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci. Similarly, if a light source is placed at one focus of an ellipticmirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. (In the special case of a circle with a source at its center all light would be reflected back to the center.) If the ellipse is rotated along its major axis to produce an ellipsoidal mirror (specifically, aprolate spheroid), this property holds for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linearfluorescent lampalong a line of the paper; such mirrors are used in somedocument scanners. Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under avaulted roofshaped as a section of a prolate spheroid. 
Such a room is called awhisper chamber. The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are theNational Statuary Hallat theUnited States Capitol(whereJohn Quincy Adamsis said to have used this property for eavesdropping on political matters); theMormon TabernacleatTemple SquareinSalt Lake City,Utah; at an exhibit on sound at theMuseum of Science and IndustryinChicago; in front of theUniversity of Illinois at Urbana–ChampaignFoellinger Auditorium; and also at a side chamber of the Palace of Charles V, in theAlhambra. In the 17th century,Johannes Keplerdiscovered that the orbits along which the planets travel around the Sun are ellipses with the Sun [approximately] at one focus, in hisfirst law of planetary motion. Later,Isaac Newtonexplained this as a corollary of hislaw of universal gravitation. More generally, in the gravitationaltwo-body problem, if the two bodies are bound to each other (that is, the total energy is negative), their orbits aresimilarellipses with the commonbarycenterbeing one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. The orbit of either body in the reference frame of the other is also an ellipse, with the other body at the same focus. Keplerian elliptical orbits are the result of any radially directed attraction force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due toelectromagnetic radiationandquantum effects, which become significant when the particles are moving at high speed.) 
Forelliptical orbits, useful relations involving the eccentricitye{\displaystyle e}are:e=ra−rpra+rp=ra−rp2ara=(1+e)arp=(1−e)a{\displaystyle {\begin{aligned}e&={\frac {r_{a}-r_{p}}{r_{a}+r_{p}}}={\frac {r_{a}-r_{p}}{2a}}\\r_{a}&=(1+e)a\\r_{p}&=(1-e)a\end{aligned}}} where Also, in terms ofra{\displaystyle r_{a}}andrp{\displaystyle r_{p}}, the semi-major axisa{\displaystyle a}is theirarithmetic mean, the semi-minor axisb{\displaystyle b}is theirgeometric mean, and thesemi-latus rectumℓ{\displaystyle \ell }is theirharmonic mean. In other words,a=ra+rp2b=rarpℓ=21ra+1rp=2rarpra+rp.{\displaystyle {\begin{aligned}a&={\frac {r_{a}+r_{p}}{2}}\\[2pt]b&={\sqrt {r_{a}r_{p}}}\\[2pt]\ell &={\frac {2}{{\frac {1}{r_{a}}}+{\frac {1}{r_{p}}}}}={\frac {2r_{a}r_{p}}{r_{a}+r_{p}}}.\end{aligned}}} The general solution for aharmonic oscillatorin two or moredimensionsis also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions; of a mass attached to a fixed point by a perfectly elasticspring; or of any object that moves under influence of an attractive force that is directly proportional to its distance from a fixed attractor. Unlike Keplerian orbits, however, these "harmonic orbits" have the center of attraction at the geometric center of the ellipse, and have fairly simple equations of motion. Inelectronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of anoscilloscope. If theLissajous figuredisplay is an ellipse, rather than a straight line, the two signals are out of phase. Twonon-circular gearswith the same elliptical outline, each pivoting around one focus and positioned at the proper angle, turn smoothly while maintaining contact at all times. Alternatively, they can be connected by alink chainortiming belt, or in the case of a bicycle the mainchainringmay be elliptical, or anovoidsimilar to an ellipse in form. 
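The mean relations above translate directly into code: from apoapsis rₐ and periapsis rₚ one recovers a, b, the semi-latus rectum ℓ, and e, and can confirm that ℓ = b²/a. A sketch with sample values of our own (function name is ours):

```python
import math

# Orbital elements from apoapsis r_a and periapsis r_p:
# a is their arithmetic mean, b their geometric mean,
# l (semi-latus rectum) their harmonic mean.
def orbit_elements(ra, rp):
    a = (ra + rp) / 2
    b = math.sqrt(ra * rp)
    ell = 2 * ra * rp / (ra + rp)
    e = (ra - rp) / (ra + rp)
    return a, b, ell, e

ra, rp = 4.0, 1.0
a, b, ell, e = orbit_elements(ra, rp)
assert abs(ra - (1 + e) * a) < 1e-12 and abs(rp - (1 - e) * a) < 1e-12
assert abs(ell - b * b / a) < 1e-12   # harmonic mean equals b^2/a
```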
Such elliptical gears may be used in mechanical equipment to produce variable angular speed or torque from a constant rotation of the driving axle, or in the case of a bicycle to allow a varying crank rotation speed with inversely varying mechanical advantage. Elliptical bicycle gears make it easier for the chain to slide off the cog when changing gears.[30] An example gear application would be a device that winds thread onto a conical bobbin on a spinning machine. The bobbin would need to wind faster when the thread is near the apex than when it is near the base.[31] In statistics, a bivariate random vector (X,Y){\displaystyle (X,Y)} is jointly elliptically distributed if its iso-density contours—loci of equal values of the density function—are ellipses. The concept extends to an arbitrary number of elements of the random vector, in which case in general the iso-density contours are ellipsoids. A special case is the multivariate normal distribution. The elliptical distributions are important in the financial field because if rates of return on assets are jointly elliptically distributed then all portfolios can be characterized completely by their mean and variance—that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return.[34][35] Drawing an ellipse as a graphics primitive is common in standard display libraries, such as the Macintosh QuickDraw API and Direct2D on Windows. Jack Bresenham at IBM is most famous for the invention of 2D drawing primitives, including line and circle drawing, using only fast integer operations such as addition and branch on carry bit. M. L. V. Pitteway extended Bresenham's algorithm for lines to conics in 1967.[36] Another efficient generalization to draw ellipses was invented in 1984 by Jerry Van Aken.[37] In 1970 Danny Cohen presented, at the "Computer Graphics 1970" conference in England, a linear algorithm for drawing ellipses and circles. In 1971, L. B.
Smith published similar algorithms for all conic sections and proved them to have good properties.[38]These algorithms need only a few multiplications and additions to calculate each vector. It is beneficial to use a parametric formulation in computer graphics because the density of points is greatest where there is the most curvature. Thus, the change in slope between each successive point is small, reducing the apparent "jaggedness" of the approximation. Composite Bézier curvesmay also be used to draw an ellipse to sufficient accuracy, since any ellipse may be construed as anaffine transformationof a circle. The spline methods used to draw a circle may be used to draw an ellipse, since the constituentBézier curvesbehave appropriately under such transformations. It is sometimes useful to find the minimum bounding ellipse on a set of points. Theellipsoid methodis quite useful for solving this problem.
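The parametric formulation mentioned above can be sketched directly: stepping the parameter t uniformly in (cx + a cos t, cy + b sin t) yields points that are densest near the ends of the major axis, where the curvature is greatest. A minimal sketch (the function name is ours, and this is the naive parametric approach, not Bresenham's or Van Aken's integer algorithm):

```python
import math

def ellipse_points(cx, cy, a, b, n):
    """n points of the ellipse, parameter t stepped uniformly."""
    return [(cx + a * math.cos(2 * math.pi * i / n),
             cy + b * math.sin(2 * math.pi * i / n)) for i in range(n)]

pts = ellipse_points(0.0, 0.0, 4.0, 1.0, 256)
for x, y in pts:
    assert abs(x * x / 16.0 + y * y - 1.0) < 1e-12

# uniform steps in t give the densest spacing at the high-curvature
# ends of the major axis (here near (4, 0)):
d_major = math.dist(pts[0], pts[1])     # near (a, 0)
d_minor = math.dist(pts[63], pts[64])   # near (0, b)
assert d_major < d_minor
```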
https://en.wikipedia.org/wiki/Ellipse
Inmathematics, aparabolais aplane curvewhich ismirror-symmetricaland is approximately U-shaped. It fits several superficially differentmathematicaldescriptions, which can all be proved to define exactly the same curves. One description of a parabola involves apoint(thefocus) and aline(thedirectrix). The focus does not lie on the directrix. The parabola is thelocus of pointsin that plane that areequidistantfrom the directrix and the focus. Another description of a parabola is as aconic section, created from the intersection of a right circularconical surfaceand aplaneparallelto another plane that istangentialto the conical surface.[a] Thegraphof aquadratic functiony=ax2+bx+c{\displaystyle y=ax^{2}+bx+c}(witha≠0{\displaystyle a\neq 0}) is a parabola with its axis parallel to they-axis. Conversely, every such parabola is the graph of a quadratic function. The line perpendicular to the directrix and passing through the focus (that is, the line that splits the parabola through the middle) is called the "axis of symmetry". The point where the parabola intersects its axis of symmetry is called the "vertex" and is the point where the parabola is most sharply curved. The distance between the vertex and the focus, measured along the axis of symmetry, is the "focal length". The "latus rectum" is thechordof the parabola that is parallel to the directrix and passes through the focus. Parabolas can open up, down, left, right, or in some other arbitrary direction. Any parabola can be repositioned and rescaled to fit exactly on any other parabola—that is, all parabolas are geometricallysimilar. Parabolas have the property that, if they are made of material thatreflectslight, then light that travels parallel to the axis of symmetry of a parabola and strikes its concave side is reflected to its focus, regardless of where on the parabola the reflection occurs. 
Conversely, light that originates from a point source at the focus is reflected into a parallel ("collimated") beam, leaving the parabola parallel to the axis of symmetry. The same effects occur with sound and other waves. This reflective property is the basis of many practical uses of parabolas. The parabola has many important applications, from a parabolic antenna or parabolic microphone to automobile headlight reflectors and the design of ballistic missiles. It is frequently used in physics, engineering, and many other areas. The earliest known work on conic sections was by Menaechmus in the 4th century BC. He discovered a way to solve the problem of doubling the cube using parabolas. (The solution, however, does not meet the requirements of compass-and-straightedge construction.) The area enclosed by a parabola and a line segment, the so-called "parabola segment", was computed by Archimedes by the method of exhaustion in the 3rd century BC, in his The Quadrature of the Parabola. The name "parabola" is due to Apollonius, who discovered many properties of conic sections. It means "application", referring to the "application of areas" concept that, as Apollonius proved, has a connection with this curve.[1] The focus–directrix property of the parabola and other conic sections was mentioned in the works of Pappus. Galileo showed that the path of a projectile follows a parabola, a consequence of uniform acceleration due to gravity. The idea that a parabolic reflector could produce an image was already well known before the invention of the reflecting telescope.[2] Designs were proposed in the early to mid-17th century by many mathematicians, including René Descartes, Marin Mersenne,[3] and James Gregory.[4] When Isaac Newton built the first reflecting telescope in 1668, he avoided using a parabolic mirror because of the difficulty of fabrication, opting for a spherical mirror.
Parabolic mirrors are used in most modern reflecting telescopes and insatellite dishesandradarreceivers.[5] A parabola can be defined geometrically as a set of points (locus of points) in the Euclidean plane: The midpointV{\displaystyle V}of the perpendicular from the focusF{\displaystyle F}onto the directrixl{\displaystyle l}is called thevertex, and the lineFV{\displaystyle FV}is theaxis of symmetryof the parabola. If one introducesCartesian coordinates, such thatF=(0,f),f>0,{\displaystyle F=(0,f),\ f>0,}and the directrix has the equationy=−f{\displaystyle y=-f}, one obtains for a pointP=(x,y){\displaystyle P=(x,y)}from|PF|2=|Pl|2{\displaystyle |PF|^{2}=|Pl|^{2}}the equationx2+(y−f)2=(y+f)2{\displaystyle x^{2}+(y-f)^{2}=(y+f)^{2}}. Solving fory{\displaystyle y}yieldsy=14fx2.{\displaystyle y={\frac {1}{4f}}x^{2}.} This parabola is U-shaped (opening to the top). The horizontal chord through the focus (see picture in opening section) is called thelatus rectum; one half of it is thesemi-latus rectum. The latus rectum is parallel to the directrix. The semi-latus rectum is designated by the letterp{\displaystyle p}. From the picture one obtainsp=2f.{\displaystyle p=2f.} The latus rectum is defined similarly for the other two conics – the ellipse and the hyperbola. The latus rectum is the line drawn through a focus of a conic section parallel to the directrix and terminated both ways by the curve. For any case,p{\displaystyle p}is the radius of theosculating circleat the vertex. For a parabola, the semi-latus rectum,p{\displaystyle p}, is the distance of the focus from the directrix. 
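The defining property and the derived equation y = x²/(4f) fit together numerically: every point of that graph is equidistant from the focus (0, f) and the directrix y = −f. A small check (the sample focal length is ours):

```python
import math

# For y = x^2/(4f), focus F = (0, f) and directrix y = -f, every point
# of the parabola satisfies |PF| = dist(P, directrix).
f = 0.75
for x in (-3.0, -0.5, 0.0, 1.0, 2.5):
    y = x * x / (4 * f)
    to_focus = math.hypot(x, y - f)
    to_directrix = y + f          # vertical distance to the line y = -f
    assert abs(to_focus - to_directrix) < 1e-12
```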
Using the parameterp{\displaystyle p}, the equation of the parabola can be rewritten asx2=2py.{\displaystyle x^{2}=2py.} More generally, if the vertex isV=(v1,v2){\displaystyle V=(v_{1},v_{2})}, the focusF=(v1,v2+f){\displaystyle F=(v_{1},v_{2}+f)}, and the directrixy=v2−f{\displaystyle y=v_{2}-f}, one obtains the equationy=14f(x−v1)2+v2=14fx2−v12fx+v124f+v2.{\displaystyle y={\frac {1}{4f}}(x-v_{1})^{2}+v_{2}={\frac {1}{4f}}x^{2}-{\frac {v_{1}}{2f}}x+{\frac {v_{1}^{2}}{4f}}+v_{2}.} Remarks: If the focus isF=(f1,f2){\displaystyle F=(f_{1},f_{2})}, and the directrixax+by+c=0{\displaystyle ax+by+c=0}, then one obtains the equation(ax+by+c)2a2+b2=(x−f1)2+(y−f2)2{\displaystyle {\frac {(ax+by+c)^{2}}{a^{2}+b^{2}}}=(x-f_{1})^{2}+(y-f_{2})^{2}} (the left side of the equation uses theHesse normal formof a line to calculate the distance|Pl|{\displaystyle |Pl|}). For aparametric equationof a parabola in general position see§ As the affine image of the unit parabola. Theimplicit equationof a parabola is defined by anirreducible polynomialof degree two:ax2+bxy+cy2+dx+ey+f=0,{\displaystyle ax^{2}+bxy+cy^{2}+dx+ey+f=0,}such thatb2−4ac=0,{\displaystyle b^{2}-4ac=0,}or, equivalently, such thatax2+bxy+cy2{\displaystyle ax^{2}+bxy+cy^{2}}is the square of alinear polynomial. The previous section shows that any parabola with the origin as vertex and theyaxis as axis of symmetry can be considered as the graph of a functionf(x)=ax2witha≠0.{\displaystyle f(x)=ax^{2}{\text{ with }}a\neq 0.} Fora>0{\displaystyle a>0}the parabolas are opening to the top, and fora<0{\displaystyle a<0}are opening to the bottom (see picture). From the section above one obtains: Fora=1{\displaystyle a=1}the parabola is theunit parabolawith equationy=x2{\displaystyle y=x^{2}}. Its focus is(0,14){\displaystyle \left(0,{\tfrac {1}{4}}\right)}, the semi-latus rectump=12{\displaystyle p={\tfrac {1}{2}}}, and the directrix has the equationy=−14{\displaystyle y=-{\tfrac {1}{4}}}. 
The general function of degree 2 isf(x)=ax2+bx+cwitha,b,c∈R,a≠0.{\displaystyle f(x)=ax^{2}+bx+c~~{\text{ with }}~~a,b,c\in \mathbb {R} ,\ a\neq 0.}Completing the squareyieldsf(x)=a(x+b2a)2+4ac−b24a,{\displaystyle f(x)=a\left(x+{\frac {b}{2a}}\right)^{2}+{\frac {4ac-b^{2}}{4a}},}which is the equation of a parabola with Two objects in the Euclidean plane aresimilarif one can be transformed to the other by asimilarity, that is, an arbitrarycompositionof rigid motions (translationsandrotations) anduniform scalings. A parabolaP{\displaystyle {\mathcal {P}}}with vertexV=(v1,v2){\displaystyle V=(v_{1},v_{2})}can be transformed by the translation(x,y)→(x−v1,y−v2){\displaystyle (x,y)\to (x-v_{1},y-v_{2})}to one with the origin as vertex. A suitable rotation around the origin can then transform the parabola to one that has theyaxis as axis of symmetry. Hence the parabolaP{\displaystyle {\mathcal {P}}}can be transformed by a rigid motion to a parabola with an equationy=ax2,a≠0{\displaystyle y=ax^{2},\ a\neq 0}. Such a parabola can then be transformed by theuniform scaling(x,y)→(ax,ay){\displaystyle (x,y)\to (ax,ay)}into the unit parabola with equationy=x2{\displaystyle y=x^{2}}. Thus, any parabola can be mapped to the unit parabola by a similarity.[6] Asyntheticapproach, using similar triangles, can also be used to establish this result.[7] The general result is that two conic sections (necessarily of the same type) are similar if and only if they have the same eccentricity.[6]Therefore, only circles (all having eccentricity 0) share this property with parabolas (all having eccentricity 1), while general ellipses and hyperbolas do not. There are other simple affine transformations that map the parabolay=ax2{\displaystyle y=ax^{2}}onto the unit parabola, such as(x,y)→(x,ya){\displaystyle (x,y)\to \left(x,{\tfrac {y}{a}}\right)}. But this mapping is not a similarity, and only shows that all parabolas are affinely equivalent (see§ As the affine image of the unit parabola). 
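Completing the square as above can be packaged into a small routine that recovers the vertex, focus, and directrix of y = ax² + bx + c (sketched for a > 0; the focal length is f = 1/(4a), and the function name and sample coefficients are ours):

```python
import math

def parabola_data(a, b, c):
    """Vertex, focus and directrix of y = a*x^2 + b*x + c (a > 0)."""
    vx = -b / (2 * a)
    vy = (4 * a * c - b * b) / (4 * a)
    f = 1 / (4 * a)                  # focal length from y = x^2/(4f)
    return (vx, vy), (vx, vy + f), vy - f

(vx, vy), focus, directrix_y = parabola_data(2.0, -4.0, 1.0)
assert (vx, vy) == (1.0, -1.0)
# any point of the parabola is equidistant from focus and directrix:
for x in (-1.0, 0.0, 2.0, 3.5):
    y = 2.0 * x * x - 4.0 * x + 1.0
    assert abs(math.hypot(x - focus[0], y - focus[1]) - (y - directrix_y)) < 1e-12
```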
Thepencilofconic sectionswith thexaxis as axis of symmetry, one vertex at the origin (0, 0) and the same semi-latus rectump{\displaystyle p}can be represented by the equationy2=2px+(e2−1)x2,e≥0,{\displaystyle y^{2}=2px+(e^{2}-1)x^{2},\quad e\geq 0,}withe{\displaystyle e}theeccentricity. Ifp> 0, the parabola with equationy2=2px{\displaystyle y^{2}=2px}(opening to the right) has thepolarrepresentationr=2pcos⁡φsin2⁡φ,φ∈[−π2,π2]∖{0}{\displaystyle r=2p{\frac {\cos \varphi }{\sin ^{2}\varphi }},\quad \varphi \in \left[-{\tfrac {\pi }{2}},{\tfrac {\pi }{2}}\right]\setminus \{0\}}wherer2=x2+y2,x=rcos⁡φ{\displaystyle r^{2}=x^{2}+y^{2},\ x=r\cos \varphi }. Its vertex isV=(0,0){\displaystyle V=(0,0)}, and its focus isF=(p2,0){\displaystyle F=\left({\tfrac {p}{2}},0\right)}. If one shifts the origin into the focus, that is,F=(0,0){\displaystyle F=(0,0)}, one obtains the equationr=p1−cos⁡φ,φ≠2πk.{\displaystyle r={\frac {p}{1-\cos \varphi }},\quad \varphi \neq 2\pi k.} Remark 1:Inverting this polar form shows that a parabola is theinverseof acardioid. Remark 2:The second polar form is a special case of a pencil of conics with focusF=(0,0){\displaystyle F=(0,0)}(see picture):r=p1−ecos⁡φ{\displaystyle r={\frac {p}{1-e\cos \varphi }}}(e{\displaystyle e}is the eccentricity). The diagram represents aconewith its axisAV. The point A is itsapex. An inclinedcross-sectionof the cone, shown in pink, is inclined from the axis by the same angleθ, as the side of the cone. According to the definition of a parabola as a conic section, the boundary of this pink cross-section EPD is a parabola. A cross-section perpendicular to the axis of the cone passes through the vertex P of the parabola. This cross-section is circular, but appearsellipticalwhen viewed obliquely, as is shown in the diagram. Its centre is V, andPKis a diameter. We will call its radiusr. Another perpendicular to the axis, circular cross-section of the cone is farther from the apex A than the one just described. 
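The focus-centered polar form r = p/(1 − cos φ) can be verified against the focus–directrix definition: with the focus at the origin and the parabola opening to the right, the directrix is the line x = −p (since p is the focus–directrix distance), and the distance to the focus (which is r) equals the distance to the directrix (which is x + p). A numeric sketch (the sample value of p is ours):

```python
import math

# Polar form with the focus at the origin: r = p/(1 - cos(phi)),
# directrix x = -p. For every point, r equals x + p.
p = 1.5
for phi in (0.5, 1.0, 2.0, 3.0, 5.0):
    r = p / (1 - math.cos(phi))
    x = r * math.cos(phi)
    assert abs(r - (x + p)) < 1e-9
```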
It has a chord DE, which joins the points where the parabola intersects the circle. Another chord BC is the perpendicular bisector of DE and is consequently a diameter of the circle. These two chords and the parabola's axis of symmetry PM all intersect at the point M. All the labelled points, except D and E, are coplanar. They are in the plane of symmetry of the whole figure. This includes the point F, which is not mentioned above. It is defined and discussed below, in § Position of the focus. Let us call the length of DM and of EM x, and the length of PM y. The lengths of BM and CM are 2y sin θ (triangle BPM is isosceles, since PB = PM = y) and 2r (PMCK is a parallelogram), respectively. Using the intersecting chords theorem on the chords BC and DE, we get BM¯⋅CM¯=DM¯⋅EM¯.{\displaystyle {\overline {\mathrm {BM} }}\cdot {\overline {\mathrm {CM} }}={\overline {\mathrm {DM} }}\cdot {\overline {\mathrm {EM} }}.} Substituting: 4rysin⁡θ=x2.{\displaystyle 4ry\sin \theta =x^{2}.} Rearranging: y=x24rsin⁡θ.{\displaystyle y={\frac {x^{2}}{4r\sin \theta }}.} For any given cone and parabola, r and θ are constants, but x and y are variables that depend on the arbitrary height at which the horizontal cross-section BECD is made. This last equation shows the relationship between these variables. They can be interpreted as Cartesian coordinates of the points D and E, in a system in the pink plane with P as its origin. Since x is squared in the equation, the fact that D and E are on opposite sides of the y axis is unimportant. If the horizontal cross-section moves up or down, toward or away from the apex of the cone, D and E move along the parabola, always maintaining the relationship between x and y shown in the equation. The parabolic curve is therefore the locus of points where the equation is satisfied, which makes it a Cartesian graph of the quadratic function in the equation.
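This cross-section relation can be verified directly in 3D coordinates. The parametrization below is my own (apex at the origin, depth measured downward, θ the angle between the axis and the side of the cone); it confirms the factor sin θ, consistent with the focal length r sin θ found below and with the limit θ → 0, where the parabola degenerates and the focal length must vanish:

```python
import math

def halfchord_sq(theta, h0, y):
    """Squared half-chord DM^2 = DM * EM for the cross-section at PM = y."""
    r0 = h0 * math.tan(theta)          # radius r of the circle through P
    xc = r0 - y * math.sin(theta)      # symmetry-plane coordinate of M
    # radius R of the horizontal circle through M, at depth h0 + y*cos(theta)
    R = (h0 + y * math.cos(theta)) * math.tan(theta)
    return R * R - xc * xc             # = BM * CM by the intersecting chords

theta, h0 = 0.6, 2.0                   # arbitrary cone half-angle and depth of P
r0 = h0 * math.tan(theta)
for y in (0.1, 0.5, 1.3):
    # x^2 should equal 4 * r * y * sin(theta)
    assert abs(halfchord_sq(theta, h0, y) - 4 * r0 * y * math.sin(theta)) < 1e-9
```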
It is proved in apreceding sectionthat if a parabola has its vertex at the origin, and if it opens in the positiveydirection, then its equation isy=⁠x2/4f⁠, wherefis its focal length.[b]Comparing this with the last equation above shows that the focal length of the parabola in the cone isrsinθ. In the diagram above, the point V is thefoot of the perpendicularfrom the vertex of the parabola to the axis of the cone.The point F is the foot of the perpendicular from the point V to the plane of the parabola.[c]By symmetry, F is on the axis of symmetry of the parabola. Angle VPF iscomplementarytoθ, and angle PVF is complementary to angle VPF, therefore angle PVF isθ. Since the length ofPVisr, the distance of F from the vertex of the parabola isrsinθ. It is shown above that this distance equals the focal length of the parabola, which is the distance from the vertex to the focus. The focus and the point F are therefore equally distant from the vertex, along the same line, which implies that they are the same point. Therefore,the point F, defined above, is the focus of the parabola. This discussion started from the definition of a parabola as a conic section, but it has now led to a description as a graph of a quadratic function. This shows that these two descriptions are equivalent. They both define curves of exactly the same shape. An alternative proof can be done usingDandelin spheres. It works without calculation and uses elementary geometric considerations only (see the derivation below). The intersection of an upright cone by a planeπ{\displaystyle \pi }, whose inclination from vertical is the same as ageneratrix(a.k.a. generator line, a line containing the apex and a point on the cone surface)m0{\displaystyle m_{0}}of the cone, is a parabola (red curve in the diagram). This generatrixm0{\displaystyle m_{0}}is the only generatrix of the cone that is parallel to planeπ{\displaystyle \pi }. 
Otherwise, if there are two generatrices parallel to the intersecting plane, the intersection curve will be a hyperbola (or a degenerate hyperbola, if the two generatrices are in the intersecting plane). If there is no generatrix parallel to the intersecting plane, the intersection curve will be an ellipse or a circle (or a point). Let plane σ{\displaystyle \sigma } be the plane that contains the vertical axis of the cone and line m0{\displaystyle m_{0}}. Since plane π{\displaystyle \pi } has the same inclination from vertical as line m0{\displaystyle m_{0}}, it follows that, viewing from the side (note that plane π{\displaystyle \pi } is perpendicular to plane σ{\displaystyle \sigma }), m0∥π{\displaystyle m_{0}\parallel \pi }. In order to prove the directrix property of a parabola (see § Definition as a locus of points above), one uses a Dandelin sphere d{\displaystyle d}, which is a sphere that touches the cone along a circle c{\displaystyle c} and plane π{\displaystyle \pi } at point F{\displaystyle F}. The plane containing the circle c{\displaystyle c} intersects plane π{\displaystyle \pi } at line l{\displaystyle l}. There is a mirror symmetry in the system consisting of plane π{\displaystyle \pi }, Dandelin sphere d{\displaystyle d} and the cone (the plane of symmetry is σ{\displaystyle \sigma }). Since the plane containing the circle c{\displaystyle c} is perpendicular to plane σ{\displaystyle \sigma }, and π⊥σ{\displaystyle \pi \perp \sigma }, their intersection line l{\displaystyle l} must also be perpendicular to plane σ{\displaystyle \sigma }. Since line m0{\displaystyle m_{0}} is in plane σ{\displaystyle \sigma }, l⊥m0{\displaystyle l\perp m_{0}}. It turns out that F{\displaystyle F} is the focus of the parabola, and l{\displaystyle l} is the directrix of the parabola. The reflective property states that if a parabola can reflect light, then light that enters it travelling parallel to the axis of symmetry is reflected toward the focus.
This is derived from geometrical optics, based on the assumption that light travels in rays. Consider the parabola y=x2. Since all parabolas are similar, this simple case represents all others. The point E is an arbitrary point on the parabola. The focus is F, the vertex is A (the origin), and the line FA is the axis of symmetry. The line EC is parallel to the axis of symmetry, intersects the x axis at D and intersects the directrix at C. The point B is the midpoint of the line segment FC. The vertex A is equidistant from the focus F and from the directrix. Since C is on the directrix, the y coordinates of F and C are equal in absolute value and opposite in sign. B is the midpoint of FC, so its y coordinate is zero and its x coordinate is half that of D, that is, x/2. The slope of the line BE is the quotient of the lengths of ED and BD, which is x2/(x/2) = 2x. But 2x is also the slope (first derivative) of the parabola at E. Therefore, the line BE is the tangent to the parabola at E. The distances EF and EC are equal because E is on the parabola, F is the focus and C is on the directrix. Therefore, since B is the midpoint of FC, triangles △FEB and △CEB are congruent (three sides), which implies that the angles marked α are congruent. (The angle above E is vertically opposite angle ∠BEC.) This means that a ray of light that enters the parabola and arrives at E travelling parallel to the axis of symmetry will be reflected by the line BE so it travels along the line EF, as shown in red in the diagram (assuming that the lines can somehow reflect light). Since BE is the tangent to the parabola at E, the same reflection will be done by an infinitesimal arc of the parabola at E. Therefore, light that enters the parabola and arrives at E travelling parallel to the axis of symmetry of the parabola is reflected by the parabola toward its focus. This conclusion about reflected light applies to all points on the parabola, as is shown on the left side of the diagram. This is the reflective property.
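The reflection argument can be mirrored computationally: reflect a downward ray across the tangent at E = (x0, x0²) and test that it heads through the focus (0, 1/4). A minimal sketch of my own:

```python
# Reflect a vertical ray off the tangent of y = x^2 at E = (x0, x0^2) and
# check that the reflected ray passes through the focus F = (0, 1/4).
def reflected_hits_focus(x0):
    tx, ty = 1.0, 2 * x0            # direction of the tangent at E
    dx, dy = 0.0, -1.0              # incoming ray, parallel to the axis
    n2 = tx * tx + ty * ty
    dot = dx * tx + dy * ty
    rx = 2 * dot * tx / n2 - dx     # reflection of (dx, dy) across the tangent
    ry = 2 * dot * ty / n2 - dy
    ex, ey = x0, x0 * x0
    # collinearity of the reflected direction with the vector E -> F
    cross = rx * (0.25 - ey) - ry * (0.0 - ex)
    return abs(cross) < 1e-12

assert all(reflected_hits_focus(x0) for x0 in (-2.0, -0.3, 0.7, 1.5))
```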
There are other theorems that can be deduced simply from the above argument. The above proof and the accompanying diagram show that the tangentBEbisects the angle ∠FEC. In other words, the tangent to the parabola at any point bisects the angle between the lines joining the point to the focus and perpendicularly to the directrix. Since triangles △FBE and △CBE are congruent,FBis perpendicular to the tangentBE. Since B is on thexaxis, which is the tangent to the parabola at its vertex, it follows that the point of intersection between any tangent to a parabola and the perpendicular from the focus to that tangent lies on the line that is tangential to the parabola at its vertex. See animated diagram[8]andpedal curve. If light travels along the lineCE, it moves parallel to the axis of symmetry and strikes the convex side of the parabola at E. It is clear from the above diagram that this light will be reflected directly away from the focus, along an extension of the segmentFE. The above proofs of the reflective and tangent bisection properties use a line of calculus. Here a geometric proof is presented. In this diagram, F is the focus of the parabola, and T and U lie on its directrix. P is an arbitrary point on the parabola.PTis perpendicular to the directrix, and the lineMPbisects angle ∠FPT. Q is another point on the parabola, withQUperpendicular to the directrix. We know thatFP=PTandFQ=QU. Clearly,QT>QU, soQT>FQ. All points on the bisectorMPare equidistant from F and T, but Q is closer to F than to T. This means that Q is to the left ofMP, that is, on the same side of it as the focus. The same would be true if Q were located anywhere else on the parabola (except at the point P), so the entire parabola, except the point P, is on the focus side ofMP. Therefore,MPis the tangent to the parabola at P. Since it bisects the angle ∠FPT, this proves the tangent bisection property. 
The logic of the last paragraph can be applied to modify the above proof of the reflective property. It effectively proves the line BE to be the tangent to the parabola at E if the angles α are equal. The reflective property follows as shown previously. The definition of a parabola by its focus and directrix can be used for drawing it with the help of pins and string:[9] A parabola can be considered as the affine part of a non-degenerate projective conic with a point Y∞{\displaystyle Y_{\infty }} on the line of infinity g∞{\displaystyle g_{\infty }}, which is the tangent at Y∞{\displaystyle Y_{\infty }}. The 5-, 4- and 3-point degenerations of Pascal's theorem are properties of a conic dealing with at least one tangent. If one considers this tangent as the line at infinity and its point of contact as the point at infinity of the y axis, one obtains three statements for a parabola. The following properties of a parabola involve only the notions connect, intersect, and parallel, which are invariant under similarities. So, it is sufficient to prove any property for the unit parabola with equation y=x2{\displaystyle y=x^{2}}. Any parabola can be described in a suitable coordinate system by an equation y=ax2{\displaystyle y=ax^{2}}. Proof: straightforward calculation for the unit parabola y=x2{\displaystyle y=x^{2}}. Application: The 4-points property of a parabola can be used for the construction of point P4{\displaystyle P_{4}} when P1,P2,P3{\displaystyle P_{1},P_{2},P_{3}} and Q2{\displaystyle Q_{2}} are given. Remark: The 4-points property of a parabola is an affine version of the 5-point degeneration of Pascal's theorem.
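Returning to one of the reflective-property corollaries above: the foot of the perpendicular from the focus to any tangent lies on the tangent at the vertex. On y = x² (focus (0, 1/4), vertex tangent = the x axis) this is easy to confirm with a short sketch:

```python
# For y = x^2, the foot of the perpendicular from the focus F = (0, 1/4)
# to the tangent at (x0, x0^2) should lie on the x axis (the vertex tangent).
def foot_of_perpendicular(x0):
    # tangent: y = 2*x0*x - x0^2, through A = (0, -x0^2) with direction (1, 2*x0)
    ax, ay = 0.0, -x0 * x0
    tx, ty = 1.0, 2 * x0
    fx, fy = 0.0, 0.25
    t = ((fx - ax) * tx + (fy - ay) * ty) / (tx * tx + ty * ty)
    return ax + t * tx, ay + t * ty

for x0 in (-1.5, 0.4, 2.0):
    px, py = foot_of_perpendicular(x0)
    assert abs(py) < 1e-12            # the foot lies on the vertex tangent
    assert abs(px - x0 / 2) < 1e-12   # in fact at x = x0/2
```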
Let P0=(x0,y0),P1=(x1,y1),P2=(x2,y2){\displaystyle P_{0}=(x_{0},y_{0}),P_{1}=(x_{1},y_{1}),P_{2}=(x_{2},y_{2})} be three points of the parabola with equation y=ax2{\displaystyle y=ax^{2}}, and Q2{\displaystyle Q_{2}} the intersection of the secant line P0P1{\displaystyle P_{0}P_{1}} with the line x=x2{\displaystyle x=x_{2}}, and Q1{\displaystyle Q_{1}} the intersection of the secant line P0P2{\displaystyle P_{0}P_{2}} with the line x=x1{\displaystyle x=x_{1}} (see picture). Then the tangent at point P0{\displaystyle P_{0}} is parallel to the line Q1Q2{\displaystyle Q_{1}Q_{2}}. (The lines x=x1{\displaystyle x=x_{1}} and x=x2{\displaystyle x=x_{2}} are parallel to the axis of the parabola.) Proof: can be performed for the unit parabola y=x2{\displaystyle y=x^{2}}. A short calculation shows: the line Q1Q2{\displaystyle Q_{1}Q_{2}} has slope 2x0{\displaystyle 2x_{0}}, which is the slope of the tangent at point P0{\displaystyle P_{0}}. Application: The 3-points-1-tangent property of a parabola can be used for the construction of the tangent at point P0{\displaystyle P_{0}} when P1,P2,P0{\displaystyle P_{1},P_{2},P_{0}} are given. Remark: The 3-points-1-tangent property of a parabola is an affine version of the 4-point degeneration of Pascal's theorem. Let P1=(x1,y1),P2=(x2,y2){\displaystyle P_{1}=(x_{1},y_{1}),\ P_{2}=(x_{2},y_{2})} be two points of the parabola with equation y=ax2{\displaystyle y=ax^{2}}, and Q2{\displaystyle Q_{2}} the intersection of the tangent at point P1{\displaystyle P_{1}} with the line x=x2{\displaystyle x=x_{2}}, and Q1{\displaystyle Q_{1}} the intersection of the tangent at point P2{\displaystyle P_{2}} with the line x=x1{\displaystyle x=x_{1}} (see picture). Then the secant P1P2{\displaystyle P_{1}P_{2}} is parallel to the line Q1Q2{\displaystyle Q_{1}Q_{2}}. (The lines x=x1{\displaystyle x=x_{1}} and x=x2{\displaystyle x=x_{2}} are parallel to the axis of the parabola.) Proof: straightforward calculation for the unit parabola y=x2{\displaystyle y=x^{2}}.
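The "straightforward calculation" for the 2-points–2-tangents property can be reproduced numerically on the unit parabola (the sample abscissas below are arbitrary):

```python
# 2-points-2-tangents property on y = x^2: the line Q1 Q2 is parallel to
# the secant P1 P2 (whose slope on y = x^2 is x1 + x2).
def slopes(x1, x2):
    q2 = 2 * x1 * x2 - x1 * x1    # tangent at P1, evaluated at x = x2
    q1 = 2 * x2 * x1 - x2 * x2    # tangent at P2, evaluated at x = x1
    slope_q1q2 = (q2 - q1) / (x2 - x1)
    slope_secant = x1 + x2
    return slope_q1q2, slope_secant

for x1, x2 in ((-1.0, 2.0), (0.3, 1.7)):
    s1, s2 = slopes(x1, x2)
    assert abs(s1 - s2) < 1e-12
```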
Application:The 2-points–2-tangents property can be used for the construction of the tangent of a parabola at pointP2{\displaystyle P_{2}}, ifP1,P2{\displaystyle P_{1},P_{2}}and the tangent atP1{\displaystyle P_{1}}are given. Remark 1:The 2-points–2-tangents property of a parabola is an affine version of the 3-point degeneration of Pascal's theorem. Remark 2:The 2-points–2-tangents property should not be confused with the following property of a parabola, which also deals with 2 points and 2 tangents, but isnotrelated to Pascal's theorem. The statements above presume the knowledge of the axis direction of the parabola, in order to construct the pointsQ1,Q2{\displaystyle Q_{1},Q_{2}}. The following property determines the pointsQ1,Q2{\displaystyle Q_{1},Q_{2}}by two given points and their tangents only, and the result is that the lineQ1Q2{\displaystyle Q_{1}Q_{2}}is parallel to the axis of the parabola. Let Then the lineQ1Q2{\displaystyle Q_{1}Q_{2}}is parallel to the axis of the parabola and has the equationx=(x1+x2)/2.{\displaystyle x=(x_{1}+x_{2})/2.} Proof:can be done (like the properties above) for the unit parabolay=x2{\displaystyle y=x^{2}}. Application:This property can be used to determine the direction of the axis of a parabola, if two points and their tangents are given. An alternative way is to determine the midpoints of two parallel chords, seesection on parallel chords. Remark:This property is an affine version of the theorem of twoperspective trianglesof a non-degenerate conic.[10] Related: ChordP1P2{\displaystyle P_{1}P_{2}}has two additional properties: Steinerestablished the following procedure for the construction of a non-degenerate conic (seeSteiner conic): This procedure can be used for a simple construction of points on the parabolay=ax2{\displaystyle y=ax^{2}}: Proof:straightforward calculation. Remark:Steiner's generation is also available forellipsesandhyperbolas. Adual parabolaconsists of the set of tangents of an ordinary parabola. 
The Steiner generation of a conic can be applied to the generation of a dual conic by changing the meanings of points and lines: In order to generate elements of a dual parabola, one starts with Theproofis a consequence of thede Casteljau algorithmfor a Bézier curve of degree 2. A parabola with equationy=ax2+bx+c,a≠0{\displaystyle y=ax^{2}+bx+c,\ a\neq 0}is uniquely determined by three points(x1,y1),(x2,y2),(x3,y3){\displaystyle (x_{1},y_{1}),(x_{2},y_{2}),(x_{3},y_{3})}with differentxcoordinates. The usual procedure to determine the coefficientsa,b,c{\displaystyle a,b,c}is to insert the point coordinates into the equation. The result is a linear system of three equations, which can be solved byGaussian eliminationorCramer's rule, for example. An alternative way uses theinscribed angle theoremfor parabolas. In the following, the angle of two lines will be measured by the difference of the slopes of the line with respect to the directrix of the parabola. That is, for a parabola of equationy=ax2+bx+c,{\displaystyle y=ax^{2}+bx+c,}the angle between two lines of equationsy=m1x+d1,y=m2x+d2{\displaystyle y=m_{1}x+d_{1},\ y=m_{2}x+d_{2}}is measured bym1−m2.{\displaystyle m_{1}-m_{2}.} Analogous to theinscribed angle theoremfor circles, one has theinscribed angle theorem for parabolas:[11][12] (Proof: straightforward calculation: If the points are on a parabola, one may translate the coordinates for having the equationy=ax2{\displaystyle y=ax^{2}}, then one hasyi−yjxi−xj=xi+xj{\displaystyle {\frac {y_{i}-y_{j}}{x_{i}-x_{j}}}=x_{i}+x_{j}}if the points are on the parabola.) 
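The three-point determination and the secant-slope identity used in the proof above can both be checked with a short sketch. Cramer's rule is one of the methods the text mentions; the example points are arbitrary choices of mine:

```python
# Determine a, b, c of y = a x^2 + b x + c through three points by Cramer's
# rule, then check the slope identity (y_i - y_j)/(x_i - x_j) = x_i + x_j
# on the unit parabola y = x^2.
def det3(m):
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = m
    return (a1 * (b2 * c3 - c2 * b3)
            - b1 * (a2 * c3 - c2 * a3)
            + c1 * (a2 * b3 - b2 * a3))

def fit_parabola(pts):
    m = [[x * x, x, 1.0] for x, _ in pts]
    rhs = [y for _, y in pts]
    d = det3(m)
    coeffs = []
    for j in range(3):                  # replace column j by the right side
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = rhs[i]
        coeffs.append(det3(mj) / d)
    return coeffs                       # [a, b, c]

# points of y = 2x^2 - 3x + 1, which the fit should recover
a, b, c = fit_parabola([(-1.0, 6.0), (0.0, 1.0), (2.0, 3.0)])
assert abs(a - 2) < 1e-9 and abs(b + 3) < 1e-9 and abs(c - 1) < 1e-9

# secant-slope identity on y = x^2
for xi, xj in ((-1.5, 0.2), (0.5, 2.0)):
    assert abs((xi**2 - xj**2) / (xi - xj) - (xi + xj)) < 1e-12
```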
A consequence is that the equation (inx,y{\displaystyle {\color {green}x},{\color {red}y}}) of the parabola determined by 3 pointsPi=(xi,yi),i=1,2,3,{\displaystyle P_{i}=(x_{i},y_{i}),\ i=1,2,3,}with differentxcoordinates is (if twoxcoordinates are equal, there is no parabola with directrix parallel to thexaxis, which passes through the points)y−y1x−x1−y−y2x−x2=y3−y1x3−x1−y3−y2x3−x2.{\displaystyle {\frac {{\color {red}y}-y_{1}}{{\color {green}x}-x_{1}}}-{\frac {{\color {red}y}-y_{2}}{{\color {green}x}-x_{2}}}={\frac {y_{3}-y_{1}}{x_{3}-x_{1}}}-{\frac {y_{3}-y_{2}}{x_{3}-x_{2}}}.}Multiplying by the denominators that depend onx,{\displaystyle {\color {green}x},}one obtains the more standard form(x1−x2)y=(x−x1)(x−x2)(y3−y1x3−x1−y3−y2x3−x2)+(y1−y2)x+x1y2−x2y1.{\displaystyle (x_{1}-x_{2}){\color {red}y}=({\color {green}x}-x_{1})({\color {green}x}-x_{2})\left({\frac {y_{3}-y_{1}}{x_{3}-x_{1}}}-{\frac {y_{3}-y_{2}}{x_{3}-x_{2}}}\right)+(y_{1}-y_{2}){\color {green}x}+x_{1}y_{2}-x_{2}y_{1}.} In a suitable coordinate system any parabola can be described by an equationy=ax2{\displaystyle y=ax^{2}}. The equation of the tangent at a pointP0=(x0,y0),y0=ax02{\displaystyle P_{0}=(x_{0},y_{0}),\ y_{0}=ax_{0}^{2}}isy=2ax0(x−x0)+y0=2ax0x−ax02=2ax0x−y0.{\displaystyle y=2ax_{0}(x-x_{0})+y_{0}=2ax_{0}x-ax_{0}^{2}=2ax_{0}x-y_{0}.}One obtains the function(x0,y0)→y=2ax0x−y0{\displaystyle (x_{0},y_{0})\to y=2ax_{0}x-y_{0}}on the set of points of the parabola onto the set of tangents. Obviously, this function can be extended onto the set of all points ofR2{\displaystyle \mathbb {R} ^{2}}to a bijection between the points ofR2{\displaystyle \mathbb {R} ^{2}}and the lines with equationsy=mx+d,m,d∈R{\displaystyle y=mx+d,\ m,d\in \mathbb {R} }. 
The inverse mapping isliney=mx+d→point(m2a,−d).{\displaystyle {\text{line }}y=mx+d~~\rightarrow ~~{\text{point }}({\tfrac {m}{2a}},-d).}This relation is called thepole–polar relationof the parabola, where the point is thepole, and the corresponding line itspolar. By calculation, one checks the following properties of the pole–polar relation of the parabola: Remark:Pole–polar relations also exist for ellipses and hyperbolas. Let the line of symmetry intersect the parabola at point Q, and denote the focus as point F and its distance from point Q asf. Let the perpendicular to the line of symmetry, through the focus, intersect the parabola at a point T. Then (1) the distance from F to T is2f, and (2) a tangent to the parabola at point T intersects the line of symmetry at a 45° angle.[13]: 26 If two tangents to a parabola are perpendicular to each other, then they intersect on the directrix. Conversely, two tangents that intersect on the directrix are perpendicular. In other words, at any point on the directrix the whole parabola subtends a right angle. Let three tangents to a parabola form a triangle. ThenLambert'stheoremstates that the focus of the parabola lies on thecircumcircleof the triangle.[14][8]: Corollary 20 Tsukerman's converse to Lambert's theorem states that, given three lines that bound a triangle, if two of the lines are tangent to a parabola whose focus lies on the circumcircle of the triangle, then the third line is also tangent to the parabola.[15] Suppose achordcrosses a parabola perpendicular to its axis of symmetry. Let the length of the chord between the points where it intersects the parabola becand the distance from the vertex of the parabola to the chord, measured along the axis of symmetry, bed. The focal length,f, of the parabola is given byf=c216d.{\displaystyle f={\frac {c^{2}}{16d}}.} Suppose a system of Cartesian coordinates is used such that the vertex of the parabola is at the origin, and the axis of symmetry is theyaxis. 
The parabola opens upward. It is shown elsewhere in this article that the equation of the parabola is4fy=x2, wherefis the focal length. At the positivexend of the chord,x=⁠c/2⁠andy=d. Since this point is on the parabola, these coordinates must satisfy the equation above. Therefore, by substitution,4fd=(c2)2{\displaystyle 4fd=\left({\tfrac {c}{2}}\right)^{2}}. From this,f=c216d{\displaystyle f={\tfrac {c^{2}}{16d}}}. The area enclosed between a parabola and a chord (see diagram) is two-thirds of the area of a parallelogram that surrounds it. One side of the parallelogram is the chord, and the opposite side is a tangent to the parabola.[16][17]The slope of the other parallel sides is irrelevant to the area. Often, as here, they are drawn parallel with the parabola's axis of symmetry, but this is arbitrary. A theorem equivalent to this one, but different in details, was derived byArchimedesin the 3rd century BCE. He used the areas of triangles, rather than that of the parallelogram.[d]SeeThe Quadrature of the Parabola. If the chord has lengthband is perpendicular to the parabola's axis of symmetry, and if the perpendicular distance from the parabola's vertex to the chord ish, the parallelogram is a rectangle, with sides ofbandh. The areaAof the parabolic segment enclosed by the parabola and the chord is thereforeA=23bh.{\displaystyle A={\frac {2}{3}}bh.} This formula can be compared with the area of a triangle:⁠1/2⁠bh. In general, the enclosed area can be calculated as follows. First, locate the point on the parabola where its slope equals that of the chord. This can be done with calculus, or by using a line that is parallel to the axis of symmetry of the parabola and passes through the midpoint of the chord. The required point is where this line intersects the parabola.[e]Then, using the formula given inDistance from a point to a line, calculate the perpendicular distance from this point to the chord. 
Multiply this by the length of the chord to get the area of the parallelogram, then by 2/3 to get the required enclosed area. A corollary of the above discussion is that if a parabola has several parallel chords, their midpoints all lie on a line parallel to the axis of symmetry. If tangents to the parabola are drawn through the endpoints of any of these chords, the two tangents intersect on this same line parallel to the axis of symmetry (seeAxis-direction of a parabola).[f] If a point X is located on a parabola with focal lengthf, and ifpis theperpendicular distancefrom X to the axis of symmetry of the parabola, then the lengths ofarcsof the parabola that terminate at X can be calculated fromfandpas follows, assuming they are all expressed in the same units.[g]h=p2,q=f2+h2,s=hqf+fln⁡h+qf.{\displaystyle {\begin{aligned}h&={\frac {p}{2}},\\q&={\sqrt {f^{2}+h^{2}}},\\s&={\frac {hq}{f}}+f\ln {\frac {h+q}{f}}.\end{aligned}}} This quantitysis the length of the arc between X and the vertex of the parabola. The length of the arc between X and the symmetrically opposite point on the other side of the parabola is2s. The perpendicular distancepcan be given a positive or negative sign to indicate on which side of the axis of symmetry X is situated. Reversing the sign ofpreverses the signs ofhandswithout changing their absolute values. If these quantities are signed,the length of the arc betweenanytwo points on the parabola is always shown by the difference between their values ofs. The calculation can be simplified by using the properties of logarithms:s1−s2=h1q1−h2q2f+fln⁡h1+q1h2+q2.{\displaystyle s_{1}-s_{2}={\frac {h_{1}q_{1}-h_{2}q_{2}}{f}}+f\ln {\frac {h_{1}+q_{1}}{h_{2}+q_{2}}}.} This can be useful, for example, in calculating the size of the material needed to make aparabolic reflectororparabolic trough. This calculation can be used for a parabola in any orientation. It is not restricted to the situation where the axis of symmetry is parallel to theyaxis. 
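Both the 2/3·bh area rule and the closed-form arc length above lend themselves to direct numerical verification. The sketch below uses midpoint-rule integration, with arbitrary parameter values and tolerances loose enough to cover discretization error:

```python
import math

# Area between y = x^2 and the chord y = d: should be (2/3)*b*d, b = 2*sqrt(d).
def segment_area(d, n=100_000):
    lo = -math.sqrt(d)
    dx = 2 * math.sqrt(d) / n
    return sum((d - (lo + (i + 0.5) * dx) ** 2) * dx for i in range(n))

for d in (0.5, 1.0, 4.0):
    b = 2 * math.sqrt(d)
    assert abs(segment_area(d) - (2 / 3) * b * d) < 1e-6

# Arc length of y = x^2/(4f) from the vertex to |x| = p: closed form vs integral.
def arc_closed(f, p):
    h = p / 2
    q = math.sqrt(f * f + h * h)
    return h * q / f + f * math.log((h + q) / f)

def arc_numeric(f, p, n=200_000):
    dx = p / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        total += math.sqrt(1 + (x / (2 * f)) ** 2) * dx
    return total

for f, p in ((1.0, 2.0), (0.5, 3.0)):
    assert abs(arc_closed(f, p) - arc_numeric(f, p)) < 1e-6
```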
S is the focus, and V is the principal vertex of the parabola VG. Draw VX perpendicular to SV. Take any point B on VG and drop a perpendicular BQ from B to VX. Draw perpendicular ST intersecting BQ, extended if necessary, at T. At B draw the perpendicular BJ, intersecting VX at J. For the parabola, the segment VBV, the area enclosed by the chord VB and the arc VB, is equal to ∆VBQ / 3, alsoBQ=VQ24SV{\displaystyle BQ={\frac {VQ^{2}}{4SV}}}. The area of the parabolic sectorSVB=△SVB+△VBQ3=SV⋅VQ2+VQ⋅BQ6{\displaystyle SVB=\triangle SVB+{\frac {\triangle VBQ}{3}}={\frac {SV\cdot VQ}{2}}+{\frac {VQ\cdot BQ}{6}}}. Since triangles TSB and QBJ are similar,VJ=VQ−JQ=VQ−BQ⋅TBST=VQ−BQ⋅(SV−BQ)VQ=3VQ4+VQ⋅BQ4SV.{\displaystyle VJ=VQ-JQ=VQ-{\frac {BQ\cdot TB}{ST}}=VQ-{\frac {BQ\cdot (SV-BQ)}{VQ}}={\frac {3VQ}{4}}+{\frac {VQ\cdot BQ}{4SV}}.} Therefore, the area of the parabolic sectorSVB=2SV⋅VJ3{\displaystyle SVB={\frac {2SV\cdot VJ}{3}}}and can be found from the length of VJ, as found above. A circle through S, V and B also passes through J. Conversely, if a point, B on the parabola VG is to be found so that the area of the sector SVB is equal to a specified value, determine the point J on VX and construct a circle through S, V and J. Since SJ is the diameter, the center of the circle is at its midpoint, and it lies on the perpendicular bisector of SV, a distance of one half VJ from SV. The required point B is where this circle intersects the parabola. If a body traces the path of the parabola due to an inverse square force directed towards S, the area SVB increases at a constant rate as point B moves forward. It follows that J moves at constant speed along VX as B moves along the parabola. If the speed of the body at the vertex where it is moving perpendicularly to SV isv, then the speed of J is equal to3v/4. The construction can be extended simply to include the case where neither radius coincides with the axis SV as follows. 
Let A be a fixed point on VG between V and B, and point H be the intersection on VX with the perpendicular to SA at A. From the above, the area of the parabolic sector SAB=2SV⋅(VJ−VH)3=2SV⋅HJ3{\displaystyle SAB={\frac {2SV\cdot (VJ-VH)}{3}}={\frac {2SV\cdot HJ}{3}}}. Conversely, if it is required to find the point B for a particular area SAB, find point J from HJ and point B as before. By Book 1, Proposition 16, Corollary 6 of Newton's Principia, the speed of a body moving along a parabola with a force directed towards the focus is inversely proportional to the square root of the radius. If the speed at A is v, then at the vertex V it is SASVv{\displaystyle {\sqrt {\frac {SA}{SV}}}v}, and point J moves at a constant speed of 3v4SASV{\displaystyle {\frac {3v}{4}}{\sqrt {\frac {SA}{SV}}}}. The above construction was devised by Isaac Newton and can be found in Book 1 of Philosophiæ Naturalis Principia Mathematica as Proposition 30. The focal length of a parabola is half of its radius of curvature at its vertex. Consider a point (x,y) on a circle of radius R and with center at the point (0,R). The circle passes through the origin. If the point is near the origin, the Pythagorean theorem shows that x2+(R−y)2=R2,x2+R2−2Ry+y2=R2,x2+y2=2Ry.{\displaystyle {\begin{aligned}x^{2}+(R-y)^{2}&=R^{2},\\[1ex]x^{2}+R^{2}-2Ry+y^{2}&=R^{2},\\[1ex]x^{2}+y^{2}&=2Ry.\end{aligned}}} But if (x,y) is extremely close to the origin, since the x axis is a tangent to the circle, y is very small compared with x, so y2 is negligible compared with the other terms. Therefore, extremely close to the origin, x2=2Ry. (1) Compare this with the parabola x2=4fy, (2) which has its vertex at the origin, opens upward, and has focal length f (see preceding sections of this article). Equations (1) and (2) are equivalent if R = 2f. Therefore, this is the condition for the circle and parabola to coincide at and extremely close to the origin. The radius of curvature at the origin, which is the vertex of the parabola, is twice the focal length.
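The R = 2f conclusion can be checked by comparing the parabola with circles of the right and of a wrong radius near the vertex; in the small sketch below the sample values and tolerances are arbitrary choices:

```python
# The osculating circle at the vertex of y = x^2/(4f) has radius R = 2f:
# near x = 0 the circle x^2 + (y - R)^2 = R^2 then matches the parabola
# to fourth order, while any other radius leaves a second-order gap.
def max_gap(f, R, xmax=1e-3, n=100):
    gap = 0.0
    for i in range(1, n + 1):
        x = xmax * i / n
        y_parab = x * x / (4 * f)
        y_circ = R - (R * R - x * x) ** 0.5   # lower arc of the circle
        gap = max(gap, abs(y_parab - y_circ))
    return gap

f = 0.7
assert max_gap(f, 2 * f) < 1e-12        # R = 2f: agreement to O(x^4)
assert max_gap(f, 2.2 * f) > 1e-8       # wrong radius: visibly worse
```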
A concave mirror that is a small segment of a sphere behaves approximately like a parabolic mirror, focusing parallel light to a point midway between the centre and the surface of the sphere. Another definition of a parabola usesaffine transformations: An affine transformation of the Euclidean plane has the formx→→f→0+Ax→{\displaystyle {\vec {x}}\to {\vec {f}}_{0}+A{\vec {x}}}, whereA{\displaystyle A}is a regular matrix (determinantis not 0), andf→0{\displaystyle {\vec {f}}_{0}}is an arbitrary vector. Iff→1,f→2{\displaystyle {\vec {f}}_{1},{\vec {f}}_{2}}are the column vectors of the matrixA{\displaystyle A}, the unit parabola(t,t2),t∈R{\displaystyle (t,t^{2}),\ t\in \mathbb {R} }is mapped onto the parabolax→=p→(t)=f→0+f→1t+f→2t2,{\displaystyle {\vec {x}}={\vec {p}}(t)={\vec {f}}_{0}+{\vec {f}}_{1}t+{\vec {f}}_{2}t^{2},}where In general, the two vectorsf→1,f→2{\displaystyle {\vec {f}}_{1},{\vec {f}}_{2}}are not perpendicular, andf→0{\displaystyle {\vec {f}}_{0}}isnotthe vertex, unless the affine transformation is asimilarity. The tangent vector at the pointp→(t){\displaystyle {\vec {p}}(t)}isp→′(t)=f→1+2tf→2{\displaystyle {\vec {p}}'(t)={\vec {f}}_{1}+2t{\vec {f}}_{2}}. At the vertex the tangent vector is orthogonal tof→2{\displaystyle {\vec {f}}_{2}}. Hence the parametert0{\displaystyle t_{0}}of the vertex is the solution of the equationp→′(t)⋅f→2=f→1⋅f→2+2tf22=0,{\displaystyle {\vec {p}}'(t)\cdot {\vec {f}}_{2}={\vec {f}}_{1}\cdot {\vec {f}}_{2}+2tf_{2}^{2}=0,}which ist0=−f→1⋅f→22f22,{\displaystyle t_{0}=-{\frac {{\vec {f}}_{1}\cdot {\vec {f}}_{2}}{2f_{2}^{2}}},}and thevertexisp→(t0)=f→0−f→1⋅f→22f22f→1+(f→1⋅f→2)24(f22)2f→2.{\displaystyle {\vec {p}}(t_{0})={\vec {f}}_{0}-{\frac {{\vec {f}}_{1}\cdot {\vec {f}}_{2}}{2f_{2}^{2}}}{\vec {f}}_{1}+{\frac {({\vec {f}}_{1}\cdot {\vec {f}}_{2})^{2}}{4(f_{2}^{2})^{2}}}{\vec {f}}_{2}.} Thefocal lengthcan be determined by a suitable parameter transformation (which does not change the geometric shape of the parabola). 
The focal length isf=f12f22−(f→1⋅f→2)24|f2|3.{\displaystyle f={\frac {f_{1}^{2}\,f_{2}^{2}-({\vec {f}}_{1}\cdot {\vec {f}}_{2})^{2}}{4|f_{2}|^{3}}}.}Hence thefocusof the parabola isF:f→0−f→1⋅f→22f22f→1+f12f224(f22)2f→2.{\displaystyle F:\ {\vec {f}}_{0}-{\frac {{\vec {f}}_{1}\cdot {\vec {f}}_{2}}{2f_{2}^{2}}}{\vec {f}}_{1}+{\frac {f_{1}^{2}\,f_{2}^{2}}{4(f_{2}^{2})^{2}}}{\vec {f}}_{2}.} Solving the parametric representation fort,t2{\displaystyle \;t,t^{2}\;}byCramer's ruleand usingt⋅t−t2=0{\displaystyle \;t\cdot t-t^{2}=0\;}, one gets the implicit representationdet(x→−f→0,f→2)2−det(f→1,x→−f→0)det(f→1,f→2)=0.{\displaystyle \det({\vec {x}}\!-\!{\vec {f}}\!_{0},{\vec {f}}\!_{2})^{2}-\det({\vec {f}}\!_{1},{\vec {x}}\!-\!{\vec {f}}\!_{0})\det({\vec {f}}\!_{1},{\vec {f}}\!_{2})=0.} The definition of a parabola in this section gives a parametric representation of an arbitrary parabola, even in space, if one allowsf→0,f→1,f→2{\displaystyle {\vec {f}}\!_{0},{\vec {f}}\!_{1},{\vec {f}}\!_{2}}to be vectors in space. Aquadratic Bézier curveis a curvec→(t){\displaystyle {\vec {c}}(t)}defined by three pointsP0:p→0{\displaystyle P_{0}:{\vec {p}}_{0}},P1:p→1{\displaystyle P_{1}:{\vec {p}}_{1}}andP2:p→2{\displaystyle P_{2}:{\vec {p}}_{2}}, called itscontrol points:c→(t)=∑i=02(2i)ti(1−t)2−ip→i=(1−t)2p→0+2t(1−t)p→1+t2p→2=(p→0−2p→1+p→2)t2+(−2p→0+2p→1)t+p→0,t∈[0,1].{\displaystyle {\begin{aligned}{\vec {c}}(t)&=\sum _{i=0}^{2}{\binom {2}{i}}t^{i}(1-t)^{2-i}{\vec {p}}_{i}\\[1ex]&=(1-t)^{2}{\vec {p}}_{0}+2t(1-t){\vec {p}}_{1}+t^{2}{\vec {p}}_{2}\\[2ex]&=\left({\vec {p}}_{0}-2{\vec {p}}_{1}+{\vec {p}}_{2}\right)t^{2}+\left(-2{\vec {p}}_{0}+2{\vec {p}}_{1}\right)t+{\vec {p}}_{0},\quad t\in [0,1].\end{aligned}}} This curve is an arc of a parabola (see§ As the affine image of the unit parabola). In one method ofnumerical integrationone replaces the graph of a function by arcs of parabolas and integrates the parabola arcs. A parabola is determined by three points. 
The formula for one arc is∫abf(x)dx≈b−a6⋅(f(a)+4f(a+b2)+f(b)).{\displaystyle \int _{a}^{b}f(x)\,dx\approx {\frac {b-a}{6}}\cdot \left(f(a)+4f\left({\frac {a+b}{2}}\right)+f(b)\right).} The method is calledSimpson's rule. The followingquadricscontain parabolas as plane sections: A parabola can be used as atrisectrix, that is it allows theexact trisection of an arbitrary anglewith straightedge and compass. This is not in contradiction to the impossibility of an angle trisection withcompass-and-straightedge constructionsalone, as the use of parabolas is not allowed in the classic rules for compass-and-straightedge constructions. To trisect∠AOB{\displaystyle \angle AOB}, place its legOB{\displaystyle OB}on thexaxis such that the vertexO{\displaystyle O}is in the coordinate system's origin. The coordinate system also contains the parabolay=2x2{\displaystyle y=2x^{2}}. The unit circle with radius 1 around the origin intersects the angle's other legOA{\displaystyle OA}, and from this point of intersection draw the perpendicular onto theyaxis. The parallel toyaxis through the midpoint of that perpendicular and the tangent on the unit circle in(0,1){\displaystyle (0,1)}intersect inC{\displaystyle C}. The circle aroundC{\displaystyle C}with radiusOC{\displaystyle OC}intersects the parabola atP1{\displaystyle P_{1}}. The perpendicular fromP1{\displaystyle P_{1}}onto thexaxis intersects the unit circle atP2{\displaystyle P_{2}}, and∠P2OB{\displaystyle \angle P_{2}OB}is exactly one third of∠AOB{\displaystyle \angle AOB}. The correctness of this construction can be seen by showing that thexcoordinate ofP1{\displaystyle P_{1}}iscos⁡(α){\displaystyle \cos(\alpha )}. Solving the equation system given by the circle aroundC{\displaystyle C}and the parabola leads to the cubic equation4x3−3x−cos⁡(3α)=0{\displaystyle 4x^{3}-3x-\cos(3\alpha )=0}. 
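Simpson's rule, as stated above, integrates the interpolating parabola exactly, and therefore is exact for all polynomials up to degree three. A minimal sketch:

```python
import math

# Simpson's rule on one interval [a, b]: integrate the parabola through
# (a, f(a)), ((a+b)/2, f((a+b)/2)) and (b, f(b)).
def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# Exact for cubics: the integral of x^3 over [0, 2] is 4.
assert abs(simpson(lambda x: x ** 3, 0.0, 2.0) - 4.0) < 1e-12
# A reasonable approximation for smooth functions:
# the integral of sin over [0, pi] is exactly 2.
approx = simpson(math.sin, 0.0, math.pi)
```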
Thetriple-angle formulacos⁡(3α)=4cos⁡(α)3−3cos⁡(α){\displaystyle \cos(3\alpha )=4\cos(\alpha )^{3}-3\cos(\alpha )}then shows thatcos⁡(α){\displaystyle \cos(\alpha )}is indeed a solution of that cubic equation. This trisection goes back toRené Descartes, who described it in his bookLa Géométrie(1637).[18] If one replaces the real numbers by an arbitraryfield, many geometric properties of the parabolay=x2{\displaystyle y=x^{2}}are still valid: Essentially new phenomena arise, if the field has characteristic 2 (that is,1+1=0{\displaystyle 1+1=0}): the tangents are all parallel. Inalgebraic geometry, the parabola is generalized by therational normal curves, which have coordinates(x,x2,x3, ...,xn); the standard parabola is the casen= 2, and the casen= 3is known as thetwisted cubic. A further generalization is given by theVeronese variety, when there is more than one input variable. In the theory ofquadratic forms, the parabola is the graph of the quadratic formx2(or other scalings), while theelliptic paraboloidis the graph of thepositive-definitequadratic formx2+y2(or scalings), and thehyperbolic paraboloidis the graph of theindefinite quadratic formx2−y2. Generalizations to more variables yield further such objects. The curvesy=xpfor other values ofpare traditionally referred to as thehigher parabolasand were originally treated implicitly, in the formxp=kyqforpandqboth positive integers, in which form they are seen to be algebraic curves. These correspond to the explicit formulay=xp/qfor a positive fractional power ofx. Negative fractional powers correspond to the implicit equationxpyq=kand are traditionally referred to ashigher hyperbolas. Analytically,xcan also be raised to an irrational power (for positive values ofx); the analytic properties are analogous to whenxis raised to rational powers, but the resulting curve is no longer algebraic and cannot be analyzed by algebraic geometry. 
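The algebraic core of the trisection, the fact that x = cos(α) solves the cubic 4x³ − 3x − cos(3α) = 0, is easy to confirm numerically:

```python
import math

# The triple-angle formula cos(3a) = 4 cos(a)^3 - 3 cos(a) means
# x = cos(a) is a root of 4x^3 - 3x - cos(3a) = 0 for every angle a.
for a in (0.2, 0.5, 1.0, 1.3):
    x = math.cos(a)
    residual = 4 * x ** 3 - 3 * x - math.cos(3 * a)
    assert abs(residual) < 1e-12
```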
In nature, approximations of parabolas and paraboloids are found in many diverse situations. The best-known instance of the parabola in the history ofphysicsis thetrajectoryof a particle or body in motion under the influence of a uniformgravitational fieldwithoutair resistance(for instance, a ball flying through the air, neglecting airfriction). Theparabolic trajectory of projectileswas discovered experimentally in the early 17th century byGalileo, who performed experiments with balls rolling on inclined planes. He also later proved thismathematicallyin his bookDialogue Concerning Two New Sciences.[19][h]For objects extended in space, such as a diver jumping from a diving board, the object itself follows a complex motion as it rotates, but thecenter of massof the object nevertheless moves along a parabola. As in all cases in the physical world, the trajectory is always an approximation of a parabola. The presence of air resistance, for example, always distorts the shape, although at low speeds, the shape is a good approximation of a parabola. At higher speeds, such as in ballistics, the shape is highly distorted and does not resemble a parabola. Anotherhypotheticalsituation in which parabolas might arise, according to the theories of physics described in the 17th and 18th centuries bySir Isaac Newton, is intwo-body orbits, for example, the path of a small planetoid or other object under the influence of the gravitation of theSun.Parabolic orbitsdo not occur in nature; simple orbits most commonly resemblehyperbolasorellipses. The parabolic orbit is thedegenerateintermediate case between those two types of ideal orbit. An object following a parabolic orbit would travel at the exactescape velocityof the object it orbits; objects inellipticalorhyperbolicorbits travel at less or greater than escape velocity, respectively. Long-periodcometstravel close to the Sun's escape velocity while they are moving through the inner Solar system, so their paths are nearly parabolic. 
Approximations of parabolas are also found in the shape of the main cables on a simplesuspension bridge. The curve of the chains of a suspension bridge is always an intermediate curve between a parabola and acatenary, but in practice the curve is generally nearer to a parabola due to the weight of the load (i.e. the road) being much larger than the cables themselves, and in calculations the second-degree polynomial formula of a parabola is used.[20][21]Under the influence of a uniform load (such as a horizontal suspended deck), the otherwise catenary-shaped cable is deformed toward a parabola (seeCatenary § Suspension bridge curve). Unlike an inelastic chain, a freely hanging spring of zero unstressed length takes the shape of a parabola. Suspension-bridge cables are, ideally, purely in tension, without having to carry other forces, for example, bending. Similarly, the structures of parabolic arches are purely in compression. Paraboloids arise in several physical situations as well. The best-known instance is theparabolic reflector, which is a mirror or similar reflective device that concentrates light or other forms ofelectromagnetic radiationto a commonfocal point, or conversely, collimates light from a point source at the focus into a parallel beam. The principle of the parabolic reflector may have been discovered in the 3rd century BC by the geometerArchimedes, who, according to a dubious legend,[22]constructed parabolic mirrors to defendSyracuseagainst theRomanfleet, by concentrating the sun's rays to set fire to the decks of the Roman ships. The principle was applied totelescopesin the 17th century. Today, paraboloid reflectors can be commonly observed throughout much of the world inmicrowaveand satellite-dish receiving and transmitting antennas. Inparabolic microphones, a parabolic reflector is used to focus sound onto a microphone, giving it highly directional performance. 
Paraboloids are also observed in the surface of a liquid confined to a container and rotated around the central axis. In this case, thecentrifugal forcecauses the liquid to climb the walls of the container, forming a parabolic surface. This is the principle behind theliquid-mirror telescope. Aircraftused to create aweightless statefor purposes of experimentation, such asNASA's "Vomit Comet", follow a vertically parabolic trajectory for brief periods in order to trace the course of an object infree fall, which produces the same effect as zero gravity for most purposes.
https://en.wikipedia.org/wiki/Parabola
Ingeometry, adegenerate conicis aconic(a second-degreeplane curve, defined by apolynomial equationof degree two) that fails to be anirreducible curve. This means that the defining equation is factorable over thecomplex numbers(or more generally over analgebraically closed field) as the product of two linear polynomials. Using the alternative definition of the conic as the intersection inthree-dimensional spaceof aplaneand a doublecone, a conic is degenerate if the plane goes through the vertex of the cones. In the real plane, a degenerate conic can be two lines that may or may not be parallel, a single line (either two coinciding lines or the union of a line and theline at infinity), a single point (in fact, twocomplex conjugate lines), or the null set (twice the line at infinity or two parallel complex conjugate lines). All these degenerate conics may occur inpencilsof conics. That is, if two real non-degenerated conics are defined by quadratic polynomial equationsf= 0andg= 0, the conics of equationsaf+bg= 0form a pencil, which contains one or three degenerate conics. For any degenerate conic in the real plane, one may choosefandgso that the given degenerate conic belongs to the pencil they determine. The conic section with equationx2−y2=0{\displaystyle x^{2}-y^{2}=0}is degenerate as its equation can be written as(x−y)(x+y)=0{\displaystyle (x-y)(x+y)=0}, and corresponds to two intersecting lines forming an "X". This degenerate conic occurs as the limit casea=1,b=0{\displaystyle a=1,b=0}in thepencilofhyperbolasof equationsa(x2−y2)−b=0.{\displaystyle a(x^{2}-y^{2})-b=0.}The limiting casea=0,b=1{\displaystyle a=0,b=1}is an example of a degenerate conic consisting of twice the line at infinity. Similarly, the conic section with equationx2+y2=0{\displaystyle x^{2}+y^{2}=0}, which has only one real point, is degenerate, asx2+y2{\displaystyle x^{2}+y^{2}}is factorable as(x+iy)(x−iy){\displaystyle (x+iy)(x-iy)}over thecomplex numbers. 
The conic thus consists of two complex conjugate lines that intersect in the unique real point, {\displaystyle (0,0)}, of the conic. The pencil of ellipses of equations {\displaystyle ax^{2}+b(y^{2}-1)=0} degenerates, for {\displaystyle a=0,b=1}, into two parallel lines and, for {\displaystyle a=1,b=0}, into a double line. The pencil of circles of equations {\displaystyle a(x^{2}+y^{2}-1)-bx=0} degenerates for {\displaystyle a=0} into two lines, the line at infinity and the line of equation {\displaystyle x=0}. Over the complex projective plane there are only two types of degenerate conics – two different lines, which necessarily intersect in one point, or one double line. Any degenerate conic may be transformed by a projective transformation into any other degenerate conic of the same type. Over the real affine plane the situation is more complicated. A degenerate real conic may be any of the types enumerated above for the real plane. For any two degenerate conics of the same class, there are affine transformations mapping the first conic to the second one. Non-degenerate real conics can be classified as ellipses, parabolas, or hyperbolas by the discriminant of the non-homogeneous form {\displaystyle Ax^{2}+2Bxy+Cy^{2}+2Dx+2Ey+F}, which is the determinant of the matrix of the quadratic form in {\displaystyle (x,y)}: {\displaystyle {\begin{pmatrix}A&B\\B&C\end{pmatrix}}.} This determinant is positive, zero, or negative as the conic is, respectively, an ellipse, a parabola, or a hyperbola. Analogously, a conic can be classified as non-degenerate or degenerate according to the discriminant of the homogeneous quadratic form in {\displaystyle (x,y,z)}.[1][2]: p.16 Here the affine form is homogenized to {\displaystyle Ax^{2}+2Bxy+Cy^{2}+2Dxz+2Eyz+Fz^{2};} the discriminant of this form is the determinant of the matrix {\displaystyle {\begin{pmatrix}A&B&D\\B&C&E\\D&E&F\end{pmatrix}}.} The conic is degenerate if and only if the determinant of this matrix equals zero.
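This classification is mechanical to implement. A small Python sketch (the example conics are illustrative; the float comparison against zero is adequate here because the test determinants are exact):

```python
# Classify the conic A x^2 + 2B xy + C y^2 + 2D x + 2E y + F = 0.
# Degenerate iff the determinant of the full 3x3 matrix vanishes;
# otherwise the 2x2 determinant AC - B^2 distinguishes
# ellipse (> 0), parabola (= 0), hyperbola (< 0).

def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def classify(A, B, C, D, E, F, tol=1e-12):
    M = [[A, B, D], [B, C, E], [D, E, F]]
    if abs(det3(M)) < tol:
        return "degenerate"
    d = A * C - B * B
    if abs(d) < tol:
        return "parabola"
    return "ellipse" if d > 0 else "hyperbola"

assert classify(1, 0, -1, 0, 0, 0) == "degenerate"   # x^2 - y^2 = 0
assert classify(1, 0, -1, 0, 0, -1) == "hyperbola"   # x^2 - y^2 = 1
assert classify(1, 0, 1, 0, 0, -1) == "ellipse"      # x^2 + y^2 = 1
assert classify(1, 0, 0, 0, -0.5, 0) == "parabola"   # y = x^2
```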
In this case, we have the following possibilities: The case of coincident lines occurs if and only if the rank of the 3×3 matrixQ{\displaystyle Q}is 1; in all other degenerate cases its rank is 2.[3]: p.108 Conics, also known as conic sections to emphasize their three-dimensional geometry, arise as the intersection of aplanewith acone. Degeneracy occurs when the plane contains theapexof the cone or when the cone degenerates to a cylinder and the plane is parallel to the axis of the cylinder. SeeConic section#Degenerate casesfor details. Degenerate conics, as with degeneratealgebraic varietiesgenerally, arise as limits of non-degenerate conics, and are important incompactificationofmoduli spaces of curves. For example, thepencilof curves (1-dimensionallinear system of conics) defined byx2+ay2=1{\displaystyle x^{2}+ay^{2}=1}is non-degenerate fora≠0{\displaystyle a\neq 0}but is degenerate fora=0;{\displaystyle a=0;}concretely, it is an ellipse fora>0,{\displaystyle a>0,}two parallel lines fora=0,{\displaystyle a=0,}and a hyperbola witha<0{\displaystyle a<0}– throughout, one axis has length 2 and the other has length1/|a|,{\textstyle 1/{\sqrt {|a|}},}which is infinity fora=0.{\displaystyle a=0.} Such families arise naturally – given four points ingeneral linear position(no three on a line), there is a pencil of conics through them (five points determine a conic, four points leave one parameter free), of which three are degenerate, each consisting of a pair of lines, corresponding to the(42,2)=3{\displaystyle \textstyle {{\binom {4}{2,2}}=3}}ways of choosing 2 pairs of points from 4 points (counting via themultinomial coefficient). 
For example, given the four points {\displaystyle (\pm 1,\pm 1),} the pencil of conics through them can be parameterized as {\displaystyle (1+a)x^{2}+(1-a)y^{2}=2,} yielding the following pencil; in all cases the center is at the origin:[note 1] Note that this parametrization has a symmetry, where inverting the sign of a reverses x and y. In the terminology of (Levy 1964), this is a Type I linear system of conics. A striking application of such a family is in (Faucette 1996), which gives a geometric solution to a quartic equation by considering the pencil of conics through the four roots of the quartic, and identifying the three degenerate conics with the three roots of the resolvent cubic. Pappus's hexagon theorem is the special case of Pascal's theorem, when a conic degenerates to two lines. In the complex projective plane, all conics are equivalent, and can degenerate to either two different lines or one double line. In the real affine plane: Degenerate conics can degenerate further to more special degenerate conics, as indicated by the dimensions of the spaces and points at infinity. A general conic is defined by five points: given five points in general position, there is a unique conic passing through them. If three of these points lie on a line, then the conic is reducible, and may or may not be unique. If no four points are collinear, then five points define a unique conic (degenerate if three points are collinear, but the other two points determine the unique other line). If four points are collinear, however, then there is not a unique conic passing through them – one line passing through the four points, and the remaining line passes through the other point, but the angle is undefined, leaving 1 parameter free. If all five points are collinear, then the remaining line is free, which leaves 2 parameters free.
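The pencil through (±1, ±1) can be checked directly: every member passes through all four points, including the degenerate members a = 1 (the pair x = ±1), a = −1 (the pair y = ±1), and, in the limit, the diagonal pair x² − y² = 0. A quick Python check with exact integer arithmetic:

```python
# Pencil of conics through the four points (±1, ±1):
#   (1 + a) x^2 + (1 - a) y^2 = 2
points = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def on_pencil(a, x, y):
    return (1 + a) * x * x + (1 - a) * y * y == 2

# arbitrary members, including the degenerate ones a = 1 and a = -1
for a in (1, -1, 0, 5):
    assert all(on_pencil(a, x, y) for x, y in points)

# the third degenerate conic, the pair of diagonals x^2 - y^2 = 0,
# also passes through all four points
assert all(x * x - y * y == 0 for x, y in points)
```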
Given four points in general linear position (no three collinear; in particular, no two coincident), there are exactly three pairs of lines (degenerate conics) passing through them, which will in general be intersecting, unless the points form atrapezoid(one pair is parallel) or aparallelogram(two pairs are parallel). Given three points, if they are non-collinear, there are three pairs of parallel lines passing through them – choose two to define one line, and the third for the parallel line to pass through, by theparallel postulate. Given two distinct points, there is a unique double line through them.
https://en.wikipedia.org/wiki/Degenerate_conic
In mathematics, the harmonic mean is a kind of average, one of the Pythagorean means. It is the most appropriate average for ratios and rates such as speeds,[1][2] and is normally only used for positive arguments.[3] The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals of the numbers, that is, the generalized f-mean with {\displaystyle f(x)={\tfrac {1}{x}}}. For example, the harmonic mean of 1, 4, and 4 is {\displaystyle {\frac {3}{{\tfrac {1}{1}}+{\tfrac {1}{4}}+{\tfrac {1}{4}}}}=2.} The harmonic mean H of the positive real numbers {\displaystyle x_{1},x_{2},\ldots ,x_{n}} is[4] {\displaystyle H(x_{1},\ldots ,x_{n})={\frac {n}{{\tfrac {1}{x_{1}}}+\cdots +{\tfrac {1}{x_{n}}}}}.} It is the reciprocal of the arithmetic mean of the reciprocals, and vice versa, where the arithmetic mean is {\textstyle A(x_{1},x_{2},\ldots ,x_{n})={\tfrac {1}{n}}\sum _{i=1}^{n}x_{i}.} The harmonic mean is a Schur-concave function, and is greater than or equal to the minimum of its arguments: for positive arguments, {\displaystyle \min(x_{1}\ldots x_{n})\leq H(x_{1}\ldots x_{n})\leq n\min(x_{1}\ldots x_{n})}. Thus, the harmonic mean cannot be made arbitrarily large by changing some values to bigger ones (while having at least one value unchanged). The harmonic mean is also concave for positive arguments, an even stronger property than Schur-concavity. For all positive data sets containing at least one pair of nonequal values, the harmonic mean is always the least of the three Pythagorean means,[5] while the arithmetic mean is always the greatest of the three and the geometric mean is always in between. (If all values in a nonempty data set are equal, the three means are always equal.)
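The definition and the bounds above are easy to check; a minimal Python sketch (the standard library's `statistics.harmonic_mean` computes the same quantity):

```python
from statistics import harmonic_mean

# H is the reciprocal of the arithmetic mean of the reciprocals.
def hmean(xs):
    return len(xs) / sum(1 / x for x in xs)

assert hmean([1, 4, 4]) == 2                     # 3 / (1 + 1/4 + 1/4)
assert abs(hmean([1, 4, 4]) - harmonic_mean([1, 4, 4])) < 1e-12

# min(xs) <= H(xs) <= n * min(xs) for positive arguments
xs = [2.0, 3.0, 12.0]
assert min(xs) <= hmean(xs) <= len(xs) * min(xs)
```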
It is the special caseM−1of thepower mean:H(x1,x2,…,xn)=M−1(x1,x2,…,xn)=nx1−1+x2−1+⋯+xn−1{\displaystyle H\left(x_{1},x_{2},\ldots ,x_{n}\right)=M_{-1}\left(x_{1},x_{2},\ldots ,x_{n}\right)={\frac {n}{x_{1}^{-1}+x_{2}^{-1}+\cdots +x_{n}^{-1}}}} Since the harmonic mean of a list of numbers tends strongly toward the least elements of the list, it tends (compared to the arithmetic mean) to mitigate the impact of large outliers and aggravate the impact of small ones. The arithmetic mean is often mistakenly used in places calling for the harmonic mean.[6]In the speed examplebelowfor instance, the arithmetic mean of 40 is incorrect, and too big. The harmonic mean is related to the other Pythagorean means, as seen in the equation below. This can be seen by interpreting the denominator to be the arithmetic mean of the product of numbersntimes but each time omitting thej-th term. That is, for the first term, we multiply allnnumbers except the first; for the second, we multiply allnnumbers except the second; and so on. The numerator, excluding then, which goes with the arithmetic mean, is the geometric mean to the powern. Thus then-th harmonic mean is related to then-th geometric and arithmetic means. 
The general formula isH(x1,…,xn)=(G(x1,…,xn))nA(x2x3⋯xn,x1x3⋯xn,…,x1x2⋯xn−1)=(G(x1,…,xn))nA(1x1∏i=1nxi,1x2∏i=1nxi,…,1xn∏i=1nxi).{\displaystyle H\left(x_{1},\ldots ,x_{n}\right)={\frac {\left(G\left(x_{1},\ldots ,x_{n}\right)\right)^{n}}{A\left(x_{2}x_{3}\cdots x_{n},x_{1}x_{3}\cdots x_{n},\ldots ,x_{1}x_{2}\cdots x_{n-1}\right)}}={\frac {\left(G\left(x_{1},\ldots ,x_{n}\right)\right)^{n}}{A\left({\frac {1}{x_{1}}}{\prod \limits _{i=1}^{n}x_{i}},{\frac {1}{x_{2}}}{\prod \limits _{i=1}^{n}x_{i}},\ldots ,{\frac {1}{x_{n}}}{\prod \limits _{i=1}^{n}x_{i}}\right)}}.} If a set of non-identical numbers is subjected to amean-preserving spread— that is, two or more elements of the set are "spread apart" from each other while leaving the arithmetic mean unchanged — then the harmonic mean always decreases.[7] For the special case of just two numbers,x1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}, the harmonic mean can be written as:[4] (Note that the harmonic mean is undefined ifx1+x2=0{\displaystyle x_{1}+x_{2}=0}, i.e.x1=−x2{\displaystyle x_{1}=-x_{2}}.) In this special case, the harmonic mean is related to thearithmetic meanA=x1+x22{\displaystyle A={\frac {x_{1}+x_{2}}{2}}}and thegeometric meanG=x1x2,{\displaystyle G={\sqrt {x_{1}x_{2}}},}by[4] SinceGA≤1{\displaystyle {\tfrac {G}{A}}\leq 1}by theinequality of arithmetic and geometric means, this shows for then= 2 case thatH≤G(a property that in fact holds for alln). It also follows thatG=AH{\displaystyle G={\sqrt {AH}}}, meaning the two numbers' geometric mean equals the geometric mean of their arithmetic and harmonic means. 
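For two numbers, the relations above reduce to H = 2x₁x₂/(x₁ + x₂) and G = √(AH), with the ordering H ≤ G ≤ A. A small Python check on illustrative values:

```python
import math

# The three Pythagorean means of two positive numbers.
def A(x, y): return (x + y) / 2
def G(x, y): return math.sqrt(x * y)
def H(x, y): return 2 * x * y / (x + y)

x, y = 3.0, 12.0                                  # illustrative values
assert H(x, y) == 4.8                             # 2*36/15
assert abs(G(x, y) - math.sqrt(A(x, y) * H(x, y))) < 1e-12   # G = sqrt(AH)
assert H(x, y) <= G(x, y) <= A(x, y)              # mean ordering
```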
For the special case of three numbers,x1{\displaystyle x_{1}},x2{\displaystyle x_{2}}andx3{\displaystyle x_{3}}, the harmonic mean can be written as:[4] Three positive numbersH,G, andAare respectively the harmonic, geometric, and arithmetic means of three positive numbersif and only if[8]: p.74, #1834the following inequality holds If a set ofweightsw1{\displaystyle w_{1}}, ...,wn{\displaystyle w_{n}}is associated to the data setx1{\displaystyle x_{1}}, ...,xn{\displaystyle x_{n}}, theweighted harmonic meanis defined by[9] The unweighted harmonic mean can be regarded as the special case where all of the weights are equal. Theprime number theoremstates that the number ofprimesless than or equal ton{\displaystyle n}isasymptotically equalto the harmonic mean of the firstn{\displaystyle n}natural numbers.[10] In many situations involvingratesandratios, the harmonic mean provides the correctaverage. For instance, if a vehicle travels a certain distancesoutbound at a speedv1(e.g. 60 km/h) and returns the same distance at a speedv2(e.g. 20 km/h), then its average speed is the harmonic mean ofv1andv2(30 km/h), not the arithmetic mean (40 km/h). The total travel time is the same as if it had traveled the whole distance at that average speed. 
This can be proven as follows:[11] Average speed for the entire journey = (total distance traveled) / (sum of time for each segment) = 2s/(t1 + t2) = 2s/(s/v1 + s/v2) = 2v1v2/(v1 + v2). However, if the vehicle travels for a certain amount of time at a speed v1 and then the same amount of time at a speed v2, then its average speed is the arithmetic mean of v1 and v2, which in the above example is 40 km/h. Average speed for the entire journey = (total distance traveled) / (sum of time for each segment) = (s1 + s2)/(2t) = (v1t + v2t)/(2t) = (v1 + v2)/2. The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds; and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. (If neither is the case, then a weighted harmonic mean or weighted arithmetic mean is needed. For the arithmetic mean, the speed of each portion of the trip is weighted by the duration of that portion, while for the harmonic mean, the corresponding weight is the distance. In both cases, the resulting formula reduces to dividing the total distance by the total time.) However, one may avoid the use of the harmonic mean for the case of "weighting by distance". Pose the problem as finding the "slowness" of the trip, where "slowness" (in hours per kilometre) is the inverse of speed. When trip slowness is found, invert it so as to find the "true" average trip speed. For each trip segment i, the slowness si = 1/speedi. Then take the weighted arithmetic mean of the si's weighted by their respective distances (optionally with the weights normalized so they sum to 1 by dividing them by trip length). This gives the true average slowness (in time per kilometre).
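The round-trip example works out directly from total distance over total time; a minimal Python sketch using the speeds from the text:

```python
# Out at v1 and back at v2 over the same distance s: the average speed
# is the harmonic mean of v1 and v2, independent of s.
def round_trip_speed(v1, v2, s=1.0):
    total_time = s / v1 + s / v2
    return 2 * s / total_time

avg = round_trip_speed(60, 20)
assert abs(avg - 30) < 1e-9          # harmonic mean of 60 and 20, not 40
# equal *times* at each speed give the arithmetic mean instead
assert (60 + 20) / 2 == 40
```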
It turns out that this procedure, which can be done with no knowledge of the harmonic mean, amounts to the same mathematical operations as one would use in solving this problem by using the harmonic mean. Thus it illustrates why the harmonic mean works in this case. Similarly, if one wishes to estimate the density of an alloy given the densities of its constituent elements and their mass fractions (or, equivalently, percentages by mass), then the predicted density of the alloy (exclusive of typically minor volume changes due to atom packing effects) is the weighted harmonic mean of the individual densities, weighted by mass, rather than the weighted arithmetic mean as one might at first expect. To use the weighted arithmetic mean, the densities would have to be weighted by volume. Applying dimensional analysis to the problem while labeling the mass units by element and making sure that only like element-masses cancel makes this clear. If one connects two electrical resistors in parallel, one having resistance R1 (e.g., 60 Ω) and one having resistance R2 (e.g., 40 Ω), then the effect is the same as if one had used two resistors with the same resistance, both equal to the harmonic mean of R1 and R2 (48 Ω): the equivalent resistance, in either case, is 24 Ω (one-half of the harmonic mean). This same principle applies to capacitors in series or to inductors in parallel. Average resistance for both resistors in parallel = (total voltage) / (sum of current through each resistor) = 2V/(I1 + I2) = 2V/(V/R1 + V/R2) = 2R1R2/(R1 + R2). However, if one connects the resistors in series, then the average resistance is the arithmetic mean of R1 and R2 (50 Ω), with total resistance equal to twice this, the sum of R1 and R2 (100 Ω). This principle applies to capacitors in parallel or to inductors in series.
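The resistor numbers in the text check out directly; a small Python sketch:

```python
# Two resistors in parallel: equivalent resistance R1*R2/(R1+R2),
# which is half the harmonic mean of R1 and R2.
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def hmean2(r1, r2):
    return 2 * r1 * r2 / (r1 + r2)

assert hmean2(60, 40) == 48          # harmonic mean of 60 and 40 ohms
assert parallel(60, 40) == 24        # half the harmonic mean
assert (60 + 40) / 2 == 50           # series average: arithmetic mean
assert 60 + 40 == 100                # total series resistance
```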
Average resistance for both resistors in series = (sum of voltage across each resistor) / (twice the current) = (V1 + V2)/(2I) = (R1I + R2I)/(2I) = (R1 + R2)/2. As with the previous example, the same principle applies when more than two resistors, capacitors or inductors are connected, provided that all are in parallel or all are in series. The "conductivity effective mass" of a semiconductor is also defined as the harmonic mean of the effective masses along the three crystallographic directions.[12] As for other optic equations, the thin lens equation 1/f = 1/u + 1/v can be rewritten such that the focal length f is one-half of the harmonic mean of the distances of the subject u and object v from the lens.[13] Two thin lenses of focal lengths f1 and f2 in series are equivalent to two thin lenses of focal length fhm, their harmonic mean, in series. Expressed as optical power, two thin lenses of optical powers P1 and P2 in series are equivalent to two thin lenses of optical power Pam, their arithmetic mean, in series. The weighted harmonic mean is the preferable method for averaging multiples, such as the price–earnings ratio (P/E). If these ratios are averaged using a weighted arithmetic mean, high data points are given greater weights than low data points. The weighted harmonic mean, on the other hand, correctly weights each data point.[14] The simple weighted arithmetic mean when applied to non-price normalized ratios such as the P/E is biased upwards and cannot be numerically justified, since it is based on equalized earnings; just as vehicle speeds cannot be averaged for a round-trip journey (see above).[15] In any triangle, the radius of the incircle is one-third of the harmonic mean of the altitudes.
For any point P on theminor arcBC of thecircumcircleof anequilateral triangleABC, with distancesqandtfrom B and C respectively, and with the intersection of PA and BC being at a distanceyfrom point P, we have thatyis half the harmonic mean ofqandt.[16] In aright trianglewith legsaandbandaltitudehfrom thehypotenuseto the right angle,h2is half the harmonic mean ofa2andb2.[17][18] Lettands(t>s) be the sides of the twoinscribed squares in a right trianglewith hypotenusec. Thens2equals half the harmonic mean ofc2andt2. Let atrapezoidhave vertices A, B, C, and D in sequence and have parallel sides AB and CD. Let E be the intersection of thediagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC. (This is provable using similar triangles.) One application of this trapezoid result is in thecrossed ladders problem, where two ladders lie oppositely across an alley, each with feet at the base of one sidewall, with one leaning against a wall at heightAand the other leaning against the opposite wall at heightB, as shown. The ladders cross at a height ofhabove the alley floor. Thenhis half the harmonic mean ofAandB. This result still holds if the walls are slanted but still parallel and the "heights"A,B, andhare measured as distances from the floor along lines parallel to the walls. This can be proved easily using the area formula of a trapezoid and area addition formula. In anellipse, thesemi-latus rectum(the distance from a focus to the ellipse along a line parallel to the minor axis) is the harmonic mean of the maximum and minimum distances of the ellipse from a focus. Incomputer science, specificallyinformation retrievalandmachine learning, the harmonic mean of theprecision(true positives per predicted positive) and therecall(true positives per real positive) is often used as an aggregated performance score for the evaluation of algorithms and systems: theF-score(or F-measure). 
This is used in information retrieval because only the positive class is of relevance, while the number of negatives, in general, is large and unknown.[19] It is thus a trade-off as to whether the correct positive predictions should be measured in relation to the number of predicted positives or the number of real positives, so it is measured versus a putative number of positives that is an arithmetic mean of the two possible denominators. A consequence arises from basic algebra in problems where people or systems work together. As an example, if a gas-powered pump can drain a pool in 4 hours and a battery-powered pump can drain the same pool in 6 hours, then it will take both pumps 6·4/(6 + 4), which is equal to 2.4 hours, to drain the pool together. This is one-half of the harmonic mean of 6 and 4: 2·6·4/(6 + 4) = 4.8. That is, the appropriate average for the two types of pump is the harmonic mean, and with one pair of pumps (two pumps), it takes half this harmonic mean time, while with two pairs of pumps (four pumps) it would take a quarter of this harmonic mean time. In hydrology, the harmonic mean is similarly used to average hydraulic conductivity values for a flow that is perpendicular to layers (e.g., geologic or soil) – flow parallel to layers uses the arithmetic mean. This apparent difference in averaging is explained by the fact that hydrology uses conductivity, which is the inverse of resistivity. In sabermetrics, a baseball player's power–speed number is the harmonic mean of their home run and stolen base totals. In population genetics, the harmonic mean is used when calculating the effects of fluctuations in the census population size on the effective population size. The harmonic mean takes into account the fact that events such as population bottlenecks increase the rate of genetic drift and reduce the amount of genetic variation in the population.
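Two of the applications above, the F-score and the joint work-rate problem, come down to the same two-argument harmonic mean; a quick Python check using the numbers from the text:

```python
# Harmonic mean of two positive numbers.
def hmean2(a, b):
    return 2 * a * b / (a + b)

# F1 score = harmonic mean of precision and recall; it is pulled
# toward the smaller of the two, unlike the arithmetic mean.
assert abs(hmean2(0.9, 0.1) - 0.18) < 1e-12
assert (0.9 + 0.1) / 2 == 0.5

# Pumps that drain a pool in 4 h and 6 h together take half the
# harmonic mean of 4 and 6, i.e. 2.4 hours.
assert abs(hmean2(4, 6) - 4.8) < 1e-12
assert abs(hmean2(4, 6) / 2 - 2.4) < 1e-12
```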
This is a result of the fact that following a bottleneck very few individuals contribute to the gene pool, limiting the genetic variation present in the population for many generations to come. When considering fuel economy in automobiles two measures are commonly used – miles per gallon (mpg), and litres per 100 km. As the dimensions of these quantities are the inverse of each other (one is distance per volume, the other volume per distance), when taking the mean value of the fuel economy of a range of cars one measure will produce the harmonic mean of the other – i.e., converting the mean value of fuel economy expressed in litres per 100 km to miles per gallon will produce the harmonic mean of the fuel economy expressed in miles per gallon. For calculating the average fuel consumption of a fleet of vehicles from the individual fuel consumptions, the harmonic mean should be used if the fleet uses miles per gallon, whereas the arithmetic mean should be used if the fleet uses litres per 100 km. In the USA the CAFE standards (the federal automobile fuel consumption standards) make use of the harmonic mean. In chemistry and nuclear physics the average mass per particle of a mixture consisting of different species (e.g., molecules or isotopes) is given by the harmonic mean of the individual species' masses weighted by their respective mass fraction. The harmonic mean of a beta distribution with shape parameters α and β is {\displaystyle H={\frac {\alpha -1}{\alpha +\beta -1}}\quad {\text{for }}\alpha >1.} The harmonic mean with α < 1 is undefined because its defining expression is not bounded in [0, 1]. Letting α = β gives {\displaystyle H={\frac {\alpha -1}{2\alpha -1}},} showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞. The following are the limits with one parameter finite (non-zero) and the other parameter approaching these limits: With the geometric mean the harmonic mean may be useful in maximum likelihood estimation in the four parameter case.
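The fuel-economy claim can be demonstrated numerically: averaging in litres per 100 km and converting back gives the harmonic mean of the mpg figures, regardless of the conversion constant. A Python sketch (the constant below, about 235.215 mpg per 1 L/100 km using US gallons, is an approximation; the identity itself does not depend on its exact value):

```python
# mpg and L/100 km are reciprocal measures: l100 = K / mpg for a
# constant K.  So the arithmetic mean in L/100 km, converted back to
# mpg, is the harmonic mean of the mpg values.
MPG_PER_L100 = 235.215   # approximate US-gallon conversion constant

def mpg_to_l100(mpg):
    return MPG_PER_L100 / mpg

def l100_to_mpg(l100):
    return MPG_PER_L100 / l100

mpgs = [20.0, 30.0, 60.0]                        # illustrative fleet
mean_l100 = sum(mpg_to_l100(m) for m in mpgs) / len(mpgs)
hmean_mpg = len(mpgs) / sum(1 / m for m in mpgs)
assert abs(l100_to_mpg(mean_l100) - hmean_mpg) < 1e-9
```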
A second harmonic mean, H1−X = (β − 1)/(α + β − 1) for β > 1, also exists for this distribution. This harmonic mean with β < 1 is undefined because its defining expression is not bounded on [0, 1]. Letting α = β in the above expression shows that for α = β this harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞. The following are the limits with one parameter finite (non-zero) and the other approaching these limits: Although both harmonic means are asymmetric, when α = β the two means are equal. The harmonic mean (H) of the lognormal distribution of a random variable X is[20] H = exp(μ − σ²/2), where μ and σ² are the parameters of the distribution, i.e. the mean and variance of the distribution of the natural logarithm of X. The harmonic and arithmetic means of the distribution are related by μ*/H = 1 + Cv², where Cv and μ* are the coefficient of variation and the mean of the distribution respectively. The geometric (G), arithmetic and harmonic means of the distribution are related by[21] G² = μ*·H. The harmonic mean of the type 1 Pareto distribution is[22] H = k(1 + 1/α), where k is the scale parameter and α is the shape parameter. For a random sample, the harmonic mean is calculated as above. Both the mean and the variance may be infinite (if the sample includes at least one term of the form 1/0). The mean of the sample m is asymptotically normally distributed with variance s². The variance of the mean itself is[23] where m is the arithmetic mean of the reciprocals, x are the variates, n is the population size and E is the expectation operator. Assuming that the variance is not infinite and that the central limit theorem applies to the sample, then using the delta method the variance is approximately s²/(n·m⁴), where H is the harmonic mean, m is the arithmetic mean of the reciprocals, s² is the variance of the reciprocals of the data and n is the number of data points in the sample. A jackknife method of estimating the variance is possible if the mean is known.[24]This method is the usual 'delete 1' rather than the 'delete m' version. This method first requires the computation of the mean of the sample (m), where x are the sample values.
A series of values wi is then computed, where: The mean (h) of the wi is then taken: The variance of the mean is: Significance testing and confidence intervals for the mean can then be estimated with the t test. Assume a random variate has a distribution f(x). Assume also that the likelihood of a variate being chosen is proportional to its value. This is known as length-based or size-biased sampling. Let μ be the mean of the population. Then the probability density function f*(x) of the size-biased population is f*(x) = x f(x)/μ. The expectation of this length-biased distribution E*(x) is[23] E*(x) = μ + σ²/μ, where σ² is the variance. The expectation of the harmonic mean is the same as the non-length-biased version, E(x). The problem of length-biased sampling arises in a number of areas including textile manufacture,[25] pedigree analysis[26] and survival analysis.[27] Akman et al. have developed a test for the detection of length-based bias in samples.[28] If X is a positive random variable and q > 0, then for all ε > 0,[29] Assuming that X and E(X) are > 0, then[29] This follows from Jensen's inequality. Gurland has shown that,[30] for a distribution that takes only positive values, for any n > 0, Under some conditions,[31] where ~ means approximately equal to. Assuming that the variates (x) are drawn from a lognormal distribution, there are several possible estimators for H: where Of these, H3 is probably the best estimator for samples of 25 or more.[32] A first-order approximation to the bias and variance of H1 is[33] where Cv is the coefficient of variation. Similarly, a first-order approximation to the bias and variance of H3 is[33] In numerical experiments H3 is generally a superior estimator of the harmonic mean than H1.[33] H2 produces estimates that are largely similar to H1. The Environmental Protection Agency recommends the use of the harmonic mean in setting maximum toxin levels in water.[34] In geophysical reservoir engineering studies, the harmonic mean is widely used.[35]
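As a rough Monte Carlo sketch of the lognormal results above, the following compares the sample harmonic mean with exp(μ − σ²/2) and computes the first-order (delta-method) variance s²/(n·m⁴); the parameter values are illustrative:

```python
import math
import random
from statistics import mean, variance

random.seed(0)
mu, sigma = 0.3, 0.4
xs = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

# For a lognormal, the distribution's harmonic mean is exp(mu - sigma^2/2).
H_exact = math.exp(mu - sigma**2 / 2)
H_sample = len(xs) / sum(1.0 / x for x in xs)

# Delta-method variance of the sample harmonic mean: s^2 / (n * m^4),
# with m and s^2 the mean and variance of the reciprocals.
recip = [1.0 / x for x in xs]
m, s2, n = mean(recip), variance(recip), len(recip)
var_H = s2 / (n * m**4)

assert abs(H_sample - H_exact) < 0.02   # within sampling error
assert var_H > 0
```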
https://en.wikipedia.org/wiki/Harmonic_mean
In probability theory and statistics, there are several relationships among probability distributions. These relations can be categorized in the following groups: Multiplying the variable by any positive real constant yields a scaling of the original distribution. Some are self-replicating, meaning that the scaling yields the same family of distributions, albeit with a different parameter: normal distribution, gamma distribution, Cauchy distribution, exponential distribution, Erlang distribution, Weibull distribution, logistic distribution, error distribution, power-law distribution, Rayleigh distribution. Example: The affine transform ax + b yields a relocation and scaling of the original distribution. The following are self-replicating: normal distribution, Cauchy distribution, logistic distribution, error distribution, power distribution, Rayleigh distribution. Example: The reciprocal 1/X of a random variable X is a member of the same family of distributions as X in the following cases: Cauchy distribution, F distribution, log-logistic distribution. Examples: Some distributions are invariant under a specific transformation. Example: Other distributions are not invariant under a specific transformation. The distribution of the sum of independent random variables is the convolution of their distributions. Suppose Z is the sum of n independent random variables X1, …, Xn, each with probability mass function fXi(x), so that Z = X1 + ⋯ + Xn. If Z has a distribution from the same family of distributions as the original variables, that family of distributions is said to be closed under convolution. Such families are often also stable distributions (see also discrete-stable distribution).
Examples of such univariate distributions are: normal distributions, Poisson distributions, binomial distributions (with common success probability), negative binomial distributions (with common success probability), gamma distributions (with common rate parameter), chi-squared distributions, Cauchy distributions, hyperexponential distributions. Examples:[3][4] Other distributions are not closed under convolution, but their sum has a known distribution: The product of independent random variables X and Y may belong to the same family of distributions as X and Y: Bernoulli distribution and log-normal distribution. Example: (See also product distribution.) For some distributions, the minimum value of several independent random variables is a member of the same family, with different parameters: Bernoulli distribution, geometric distribution, exponential distribution, extreme value distribution, Pareto distribution, Rayleigh distribution, Weibull distribution. Examples: Similarly, distributions for which the maximum value of several independent random variables is a member of the same family of distributions include: Bernoulli distribution, power-law distribution. (See also ratio distribution.) Approximate or limit relationship means: Combination of iid random variables: Special case of distribution parametrization: Consequences of the CLT: When one or more parameters of a distribution are random variables, the compound distribution is the marginal distribution of the variable. Examples: Some distributions have been specially named as compounds: beta-binomial distribution, beta negative binomial distribution, gamma-normal distribution. Examples:
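Closure under convolution can be checked numerically for the Poisson case: convolving the pmfs of Poisson(2) and Poisson(3) reproduces the pmf of Poisson(5). A small sketch:

```python
from math import exp, factorial

def pois_pmf(k, lam):
    """Poisson probability mass function P(X = k)."""
    return exp(-lam) * lam**k / factorial(k)

lam1, lam2 = 2.0, 3.0

# Convolution of the two pmfs: P(Z = k) = sum_j P(X = j) * P(Y = k - j).
for k in range(20):
    conv = sum(pois_pmf(j, lam1) * pois_pmf(k - j, lam2) for j in range(k + 1))
    assert abs(conv - pois_pmf(k, lam1 + lam2)) < 1e-12
```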
https://en.wikipedia.org/wiki/Relationships_among_probability_distributions#Reciprocal_of_a_random_variable
In combinatorial mathematics, a large set of positive integers is one such that the infinite sum of the reciprocals diverges. A small set is any subset of the positive integers that is not large; that is, one whose sum of reciprocals converges. Large sets appear in the Müntz–Szász theorem and in the Erdős conjecture on arithmetic progressions. Paul Erdős conjectured that all large sets contain arbitrarily long arithmetic progressions. He offered a prize of $3000 for a proof, more than for any of his other conjectures, and joked that this prize offer violated the minimum wage law.[1]The question is still open. It is not known how to identify whether a given set is large or small in general. As a result, there are many sets which are not known to be either large or small.
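The contrast between large and small sets can be illustrated with partial sums: the set of all positive integers is large (the harmonic series diverges, though only logarithmically), while the set of perfect squares is small (Σ 1/n² = π²/6 ≈ 1.6449). A quick numeric sketch:

```python
from math import pi

n = 10**6

# Harmonic partial sum grows like ln(n) + gamma, about 14.39 at n = 10^6.
harmonic_partial = sum(1.0 / k for k in range(1, n + 1))
assert harmonic_partial > 14

# Reciprocals of squares converge to pi^2 / 6; the tail beyond n is ~1/n.
squares_partial = sum(1.0 / k**2 for k in range(1, n + 1))
assert abs(squares_partial - pi**2 / 6) < 1e-5
```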
https://en.wikipedia.org/wiki/Large_set_(combinatorics)
In mathematics, statistics and elsewhere, sums of squares occur in a number of contexts:
https://en.wikipedia.org/wiki/Sum_of_squares_(disambiguation)
In mathematics and statistics, sums of powers occur in a number of contexts:
https://en.wikipedia.org/wiki/Sums_of_powers
The 17-animal inheritance puzzle is a mathematical puzzle involving unequal but fair allocation of indivisible goods, usually stated in terms of inheritance of a number of large animals (17 camels, 17 horses, 17 elephants, etc.) which must be divided in some stated proportion among a number of beneficiaries. It is a common example of an apportionment problem. Despite often being framed as a puzzle, it is more an anecdote about a curious calculation than a problem with a clear mathematical solution.[1]Beyond recreational mathematics and mathematics education, the story has been repeated as a parable with varied metaphorical meanings. Although an ancient origin for the puzzle has often been claimed, it has not been documented. Instead, a version of the puzzle can be traced back to the works of Mulla Muhammad Mahdi Naraqi, an 18th-century Iranian philosopher. It entered the western recreational mathematics literature in the late 19th century. Several mathematicians have formulated different generalizations of the puzzle to numbers other than 17. According to the statement of the puzzle, a man dies leaving 17 camels (or other animals) to his three sons, to be divided in the following proportions: the eldest son should inherit 1⁄2 of the man's property, the middle son should inherit 1⁄3, and the youngest son should inherit 1⁄9. How should they divide the camels, noting that only a whole live camel has value?[2] As usually stated, to solve the puzzle, the three sons ask for the help of another man, often a priest, judge, or other local official. This man solves the puzzle in the following way: he lends the three sons his own camel, so that there are now 18 camels to be divided. That leaves nine camels for the eldest son, six camels for the middle son, and two camels for the youngest son, in the proportions demanded for the inheritance.
These shares account for 17 of the 18 camels, leaving one camel left over, which the judge takes back as his own.[2]This is possible because the sum of the fractions is less than one: 1/2 + 1/3 + 1/9 = 17/18. Some sources point out an additional feature of this solution: each son is satisfied, because he receives more camels than his originally-stated inheritance. The eldest son was originally promised only 8+1⁄2 camels, but receives nine; the middle son was promised 5+2⁄3, but receives six; and the youngest was promised 1+8⁄9, but receives two.[3] Similar problems of unequal division go back to ancient times, but without the twist of the loan and return of the extra camel. For instance, the Rhind Mathematical Papyrus features a problem in which many loaves of bread are to be divided in four different specified proportions.[2][4]The 17 animals puzzle can be seen as an example of a "completion to unity" problem, of a type found in other examples on this papyrus, in which a set of fractions adding to less than one should be completed, by adding more fractions, to make their total come out to exactly one.[5]Another similar case, involving fractional inheritance in the Roman Empire, appears in the writings of Publius Juventius Celsus, attributed to a case decided by Salvius Julianus.[6][7]The problems of fairly subdividing indivisible elements into specified proportions, seen in these inheritance problems, also arise when allocating seats in electoral systems based on proportional representation.[8] Many similar problems of division into fractions are known from mathematics in the medieval Islamic world,[1][4][9]but "it does not seem that the story of the 17 camels is part of classical Arab-Islamic mathematics".[9]Supposed origins of the problem in the works of al-Khwarizmi, Fibonacci or Tartaglia also cannot be verified.[10]A "legendary tale" attributes it to 16th-century Mughal Empire minister Birbal.[11]The earliest documented appearance of the puzzle found by Pierre Ageron, using 17 camels, appears in the work of
18th-century Shiite Iranian philosopher Mulla Muhammad Mahdi Naraqi.[9]By 1850 it had already entered circulation in America, through a travelogue of Mesopotamia published by James Phillips Fletcher.[12][13]It appeared in The Mathematical Monthly in 1859,[10][14]and a version with 17 elephants and a claimed Chinese origin was included in Hanky Panky: A Book of Conjuring Tricks (London, 1872), edited by William Henry Cremer but often attributed to Wiljalba Frikell or Henry Llewellyn Williams.[2][10]The same puzzle subsequently appeared in the late 19th and early 20th centuries in the works of Henry Dudeney, Sam Loyd,[2]Édouard Lucas,[9]Professor Hoffmann,[15]and Émile Fourrey,[16]among others.[17][18][19][20]A version with 17 horses circulated as folklore in mid-20th-century America.[21] A variant of the story has been told with 11 camels, to be divided into 1⁄2, 1⁄4, and 1⁄6.[22][23]Another variant of the puzzle appears in the book The Man Who Counted, a mathematical puzzle book originally published in Portuguese by Júlio César de Mello e Souza in 1938. This version starts with 35 camels, to be divided in the same proportions as in the 17-camel version. After the hero of the story lends a camel, and the 36 camels are divided among the three brothers, two are left over: one to be returned to the hero, and another given to him as a reward for his cleverness.
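The arithmetic behind the 17- and 35-camel versions is easy to verify with exact fractions:

```python
from fractions import Fraction

# 17 camels, shares 1/2, 1/3, 1/9: the shares sum to 17/18 < 1.
shares = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 9)]
assert sum(shares) == Fraction(17, 18)

# With the judge's borrowed 18th camel, the division is exact.
camels = [s * 18 for s in shares]
assert camels == [9, 6, 2] and sum(camels) == 17   # one camel goes back

# 35 camels, same proportions: 36 divide as 18 + 12 + 4 = 34, leaving two.
camels36 = [s * 36 for s in shares]
assert camels36 == [18, 12, 4] and sum(camels36) == 34
```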
The endnotes to the English translation of the book cite the 17-camel version of the problem to the works of Fourrey and Gaston Boucheny (1939).[10] Beyond recreational mathematics, the story has been used as the basis for school mathematics lessons,[3][24]as a parable with varied morals in religion, law, economics, and politics,[19][25][26][27][28]and even as a lay explanation for catalysis in chemistry.[29] Paul Stockmeyer, a computer scientist, defines a class of similar puzzles for any number n of animals, with the property that n can be written as a sum of distinct divisors d1, d2, … of n + 1. In this case, one obtains a puzzle in which the fractions into which the n animals should be divided are d1/(n + 1), d2/(n + 1), …. Because the numbers di have been chosen to divide n + 1, all of these fractions simplify to unit fractions. When combined with the judge's share of the animals, 1/(n + 1), they produce an Egyptian fraction representation of the number one.[2] The numbers of camels that can be used as the basis for such a puzzle (that is, numbers n that can be represented as sums of distinct divisors of n + 1) form an integer sequence. S. Naranan, an Indian physicist, seeks a more restricted class of generalized puzzles, with only three terms, and with n + 1 equal to the least common multiple of the denominators of the three unit fractions, finding only seven possible triples of fractions that meet these conditions.[11] Brazilian researchers Márcio Luís Ferreira Nascimento and Luiz Barco generalize the problem further, as in the variation with 35 camels, to instances in which more than one camel may be lent and the number returned may be larger than the number lent.[10]
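Stockmeyer's condition — that n be a sum of distinct divisors of n + 1 — can be tested with a small subset-sum search (the function names here are ours, not from the source):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def has_distinct_divisor_sum(n):
    """Can n be written as a sum of distinct divisors of n + 1?"""
    reachable = {0}
    for d in divisors(n + 1):
        # Each divisor may be used at most once (distinctness).
        reachable |= {r + d for r in reachable if r + d <= n}
    return n in reachable

stockmeyer = [n for n in range(1, 60) if has_distinct_divisor_sum(n)]
# 17 qualifies (9 + 6 + 2, divisors of 18), as does the 11-camel variant
# (6 + 4 + 1, divisors of 12); 4 does not (divisors of 5 are only 1 and 5).
assert 17 in stockmeyer and 11 in stockmeyer and 4 not in stockmeyer
```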
https://en.wikipedia.org/wiki/17-animal_inheritance_puzzle
In mathematics, a multiple is the product of any quantity and an integer.[1]In other words, for the quantities a and b, it can be said that b is a multiple of a if b = na for some integer n, which is called the multiplier. If a is not zero, this is equivalent to saying that b/a is an integer. When a and b are both integers, and b is a multiple of a, then a is called a divisor of b. One says also that a divides b. If a and b are not integers, mathematicians generally prefer to use integer multiple instead of multiple, for clarification. In fact, multiple is used for other kinds of product; for example, a polynomial p is a multiple of another polynomial q if there exists a third polynomial r such that p = qr. 14, 49, −21 and 0 are multiples of 7, whereas 3 and −6 are not. This is because there are integers that 7 may be multiplied by to reach the values of 14, 49, 0 and −21, while there are no such integers for 3 and −6. Each of the products listed below, and in particular, the products for 3 and −6, is the only way that the relevant number can be written as a product of 7 and another real number: In some texts, "a is a submultiple of b" has the meaning of "a being a unit fraction of b" (a = b/n) or, equivalently, "b being an integer multiple n of a" (b = na). This terminology is also used with units of measurement (for example by the BIPM[2]and NIST[3]), where a unit submultiple is obtained by prefixing the main unit, defined as the quotient of the main unit by an integer, mostly a power of 10³. For example, a millimetre is the 1000-fold submultiple of a metre.[2][3]As another example, one inch may be considered as a 12-fold submultiple of a foot, or a 36-fold submultiple of a yard.
https://en.wikipedia.org/wiki/Submultiple
In mathematics, a superparticular ratio, also called a superparticular number or epimoric ratio, is the ratio of two consecutive integer numbers. More particularly, the ratio takes the form (n + 1)/n. Thus: A superparticular number is when a great number contains a lesser number, to which it is compared, and at the same time one part of it. For example, when 3 and 2 are compared, they contain 2, plus the 3 has another 1, which is half of two. When 3 and 4 are compared, they each contain a 3, and the 4 has another 1, which is a third part of 3. Again, when 5 and 4 are compared, they contain the number 4, and the 5 has another 1, which is the fourth part of the number 4, etc. Superparticular ratios were written about by Nicomachus in his treatise Introduction to Arithmetic. Although these numbers have applications in modern pure mathematics, the areas of study that most frequently refer to the superparticular ratios by this name are music theory[2]and the history of mathematics.[3] As Leonhard Euler observed, the superparticular numbers (including also the multiply superparticular ratios, numbers formed by adding an integer other than one to a unit fraction) are exactly the rational numbers whose simple continued fraction terminates after two terms. The numbers whose continued fraction terminates in one term are the integers, while the remaining numbers, with three or more terms in their continued fractions, are superpartient.[4] The Wallis product represents the irrational number π in several ways as a product of superparticular ratios and their inverses. It is also possible to convert the Leibniz formula for π into an Euler product of superparticular ratios in which each term has a prime number as its numerator and the nearest multiple of four as its denominator:[5] In graph theory, superparticular numbers (or rather, their reciprocals, 1/2, 2/3, 3/4, etc.)
arise via the Erdős–Stone theorem as the possible values of the upper density of an infinite graph.[6] In the study of harmony, many musical intervals can be expressed as a superparticular ratio (for example, due to octave equivalency, the ninth harmonic, 9/1, may be expressed as a superparticular ratio, 9/8). Indeed, whether a ratio was superparticular was the most important criterion in Ptolemy's formulation of musical harmony.[7]In this application, Størmer's theorem can be used to list all possible superparticular numbers for a given limit; that is, all ratios of this type in which both the numerator and denominator are smooth numbers.[2] These ratios are also important in visual harmony. Aspect ratios of 4:3 and 3:2 are common in digital photography,[8]and aspect ratios of 7:6 and 5:4 are used in medium format and large format photography respectively.[9] Every pair of adjacent positive integers represents a superparticular ratio, and similarly every pair of adjacent harmonics in the harmonic series (music) represents a superparticular ratio. Many individual superparticular ratios have their own names, either in historical mathematics or in music theory. These include the following: The root of some of these terms comes from Latin sesqui- "one and a half" (from semis "a half" and -que "and"), describing the ratio 3:2.
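Euler's observation can be checked with exact arithmetic: each superparticular ratio (n + 1)/n = 1 + 1/n has the two-term simple continued fraction [1; n] (for n = 1 the ratio 2/1 is the integer 2, a one-term expansion). A sketch:

```python
from fractions import Fraction

def continued_fraction(x: Fraction):
    """Simple continued fraction terms of a positive rational."""
    terms = []
    while True:
        a, rem = divmod(x.numerator, x.denominator)
        terms.append(a)
        if rem == 0:
            return terms
        x = Fraction(x.denominator, rem)

for n in range(1, 20):
    r = Fraction(n + 1, n)
    assert continued_fraction(r) == ([2] if n == 1 else [1, n])
```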
https://en.wikipedia.org/wiki/Superparticular_ratio
In complex analysis, the argument principle (or Cauchy's argument principle) is a theorem relating the difference between the number of zeros and poles of a meromorphic function to a contour integral of the function's logarithmic derivative. If f is a meromorphic function inside and on some closed contour C, and f has no zeros or poles on C, then (1/2πi)∮C f′(z)/f(z) dz = Z − P, where Z and P denote respectively the number of zeros and poles of f inside the contour C, with each zero and pole counted as many times as its multiplicity and order, respectively, indicate. This statement of the theorem assumes that the contour C is simple, that is, without self-intersections, and that it is oriented counter-clockwise. More generally, suppose that f is a meromorphic function on an open set Ω in the complex plane and that C is a closed curve in Ω which avoids all zeros and poles of f and is contractible to a point inside Ω. For each point z ∈ Ω, let n(C, z) be the winding number of C around z. Then where the first summation is over all zeros a of f counted with their multiplicities, and the second summation is over the poles b of f counted with their orders. The contour integral ∮C f′(z)/f(z) dz can be interpreted as 2πi times the winding number of the path f(C) around the origin, using the substitution w = f(z): That is, it is i times the total change in the argument of f(z) as z travels around C, explaining the name of the theorem; this follows from the relation between arguments and logarithms. Let zZ be a zero of f. We can write f(z) = (z − zZ)^k g(z) where k is the multiplicity of the zero, and thus g(zZ) ≠ 0. We get f′(z) = k(z − zZ)^(k−1) g(z) + (z − zZ)^k g′(z) and f′(z)/f(z) = k/(z − zZ) + g′(z)/g(z). Since g(zZ) ≠ 0, it follows that g′(z)/g(z) has no singularities at zZ, and thus is analytic at zZ, which implies that the residue of f′(z)/f(z) at zZ is k. Let zP be a pole of f. We can write f(z) = (z − zP)^(−m) h(z) where m is the order of the pole, and h(zP) ≠ 0. Then, f′(z) = −m(z − zP)^(−m−1) h(z) + (z − zP)^(−m) h′(z) and f′(z)/f(z) = −m/(z − zP) + h′(z)/h(z), similarly as above. It follows that h′(z)/h(z) has no singularities at zP since h(zP) ≠ 0, and thus it is analytic at zP. We find that the residue of f′(z)/f(z) at zP is −m.
Putting these together, each zero zZ of multiplicity k of f creates a simple pole for f′(z)/f(z) with the residue being k, and each pole zP of order m of f creates a simple pole for f′(z)/f(z) with the residue being −m. (Here, by a simple pole we mean a pole of order one.) In addition, it can be shown that f′(z)/f(z) has no other poles, and so no other residues. By the residue theorem we have that the integral about C is the product of 2πi and the sum of the residues. Together, the sum of the k's for each zero zZ is the number of zeros counting multiplicities of the zeros, and likewise for the poles, and so we have our result. The argument principle can be used to efficiently locate zeros or poles of meromorphic functions on a computer. Even with rounding errors, the expression (1/2πi)∮C f′(z)/f(z) dz will yield results close to an integer; by determining these integers for different contours C one can obtain information about the location of the zeros and poles. Numerical tests of the Riemann hypothesis use this technique to get an upper bound for the number of zeros of Riemann's ξ(s) function inside a rectangle intersecting the critical line. The argument principle can also be used to prove Rouché's theorem, which can be used to bound the roots of polynomials. A consequence of the more general formulation of the argument principle is that, under the same hypothesis, if g is an analytic function in Ω, then the same weighted sums hold with each zero and pole contributing g evaluated at that point. For example, if f is a polynomial having zeros z1, ..., zp inside a simple contour C, and g(z) = z^k, then (1/2πi)∮C g(z) f′(z)/f(z) dz is the power sum symmetric polynomial of the roots of f. Another consequence arises if we compute the complex integral for an appropriate choice of g and f: we have the Abel–Plana formula, which expresses the relationship between a discrete sum and its integral. The argument principle is also applied in control theory.
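The numerical zero-counting idea above can be sketched directly: discretize the contour |z| = 2 and evaluate (1/2πi)∮ f′(z)/f(z) dz for f(z) = z³ − 1, which has three zeros (the cube roots of unity) and no poles inside. The step count is an illustrative choice:

```python
import cmath

def zero_pole_count(f, df, radius=2.0, n=4096):
    """Numerically evaluate (1/2*pi*i) * contour integral of f'/f over |z| = radius."""
    total = 0.0 + 0.0j
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = radius * cmath.exp(1j * t)
        dz = 1j * z * (2 * cmath.pi / n)   # derivative of the circle parametrization
        total += df(z) / f(z) * dz
    return total / (2j * cmath.pi)

# f(z) = z^3 - 1: three zeros inside |z| = 2, so Z - P = 3.
val = zero_pole_count(lambda z: z**3 - 1, lambda z: 3 * z**2)
assert abs(val - 3) < 1e-6   # close to an integer even with rounding errors
```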
In modern books on feedback control theory, it is commonly used as the theoretical foundation for the Nyquist stability criterion. Moreover, a more generalized form of the argument principle can be employed to derive Bode's sensitivity integral and other related integral relationships.[1] There is an immediate generalization of the argument principle. Suppose that g is analytic in the region Ω. Then (1/2πi)∮C g(z) f′(z)/f(z) dz = Σ n(C, a) g(a) − Σ n(C, b) g(b), where the first summation is again over all zeros a of f counted with their multiplicities, and the second summation is again over the poles b of f counted with their orders. According to the book by Frank Smithies (Cauchy and the Creation of Complex Function Theory, Cambridge University Press, 1997, p. 177), Augustin-Louis Cauchy presented a theorem similar to the above on 27 November 1831, during his self-imposed exile in Turin (then capital of the Kingdom of Piedmont-Sardinia) away from France. However, according to this book, only zeroes were mentioned, not poles. This theorem by Cauchy was only published many years later, in 1874, in a hand-written form, and so is quite difficult to read. Cauchy published a paper with a discussion on both zeroes and poles in 1855, two years before his death.
https://en.wikipedia.org/wiki/Argument_principle
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality. To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that has revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics. Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.
Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell.[1]Control theory was further advanced by Edward Routh in 1874, Charles Sturm and, in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria; and from 1922 onwards, the development of PID control theory by Nicolas Minorsky.[2]Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and operations research.[3] Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors.[4]A centrifugal governor was already used to regulate the velocity of windmills.[5]Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems.[6]Independently, Adolf Hurwitz analyzed system stability using differential equations in 1877, resulting in what is now known as the Routh–Hurwitz theorem.[7][8] A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known).
Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds. By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft.[9][10]Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics. Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship. The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.
Fundamentally, there are two types of control loop: open-loop control (feedforward), and closed-loop control (feedback). The definition of a closed-loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."[12] A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.[14] In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimum way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine. Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways. Closed-loop controllers have the following advantages over open-loop controllers: In some systems, closed-loop and open-loop control are used simultaneously.
In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance. A common closed-loop controller architecture is the PID controller.

The field of control theory can be divided into two branches: linear control theory, which applies to systems obeying the superposition principle, and nonlinear control theory, which covers more general systems. Mathematical techniques for analyzing and designing control systems likewise fall into two different categories: frequency-domain methods, associated with classical control theory, and time-domain state-space methods, associated with modern control theory.

In contrast to the frequency-domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation,[citation needed] a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With multiple inputs and outputs, we would otherwise have to write down a Laplace transform for every input-output pair to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.[17][18]

Control systems can be divided into different categories depending on the number of inputs and outputs. The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain.
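The PID loop of the cruise-control example can be sketched minimally. The first-order vehicle model, the gains, and the time step below are illustrative assumptions for the sketch, not a tuned real controller.

```python
# A minimal PID feedback loop for the cruise-control example: the
# control signal is a weighted sum of the error, its integral, and its
# derivative, applied to a toy first-order vehicle model with drag.
# All numbers are illustrative assumptions.

def simulate_pid(setpoint=25.0, kp=0.8, ki=0.4, kd=0.05,
                 drag=0.1, dt=0.1, steps=1000):
    speed, integral, prev_error = 0.0, 0.0, setpoint
    for _ in range(steps):
        error = setpoint - speed             # deviation signal (SP - PV)
        integral += error * dt
        derivative = (error - prev_error) / dt
        throttle = kp * error + ki * integral + kd * derivative
        prev_error = error
        # plant: acceleration from throttle minus speed-proportional drag
        speed += dt * (throttle - drag * speed)
    return speed

final_speed = simulate_pid()   # converges to the setpoint
```

The integral term is what drives the steady-state error to zero: without it, the speed would settle slightly below the setpoint because some throttle is always needed to balance drag.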
Many systems may be assumed to have a second-order, single-variable response in the time domain. A controller designed using classical theory often requires on-site tuning due to design approximations, since the method relies on simplified models of the plant. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a lead or lag filter. The ultimate end goal is to meet requirements typically provided in the time domain, called the step response, or at times in the frequency domain, called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically gain and phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.

Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first-order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory.
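The state-space form dx/dt = Ax + Bu, y = Cx can be made concrete with a two-state example. The matrices (a damped oscillator) and the Euler step size below are illustrative assumptions, not a recommended integration scheme for production use.

```python
# Sketch of the state-space representation for a two-state system,
# integrated with a simple explicit Euler step. The state vector is
# [position, velocity]; the matrices are illustrative assumptions.

A = [[0.0, 1.0],      # position' = velocity
     [-2.0, -1.0]]    # velocity' = -2*position - 1*velocity
B = [0.0, 1.0]        # the input pushes on the velocity state
C = [1.0, 0.0]        # the output y is the position

def step(x, u, dt=0.01):
    dx = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
          A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    return [x[0] + dt * dx[0], x[1] + dt * dx[1]]

x = [1.0, 0.0]             # initial state: unit displacement, at rest
for _ in range(3000):      # 30 seconds of free response (u = 0)
    x = step(x, u=0.0)
y = C[0] * x[0] + C[1] * x[1]
# The free response decays toward the origin of the state space.
```

Each simulation step is exactly the matrix form described in the text: the same A, B, C data fully specify the model, regardless of how many inputs and outputs are added.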
The stability of a general dynamical system with no input can be described with Lyapunov stability criteria. For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems. A linear system is called bounded-input bounded-output (BIBO) stable if its output stays bounded for any bounded input. Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the poles of the transfer function reside in the open left half of the complex plane in the continuous-time case, or inside the unit circle in the discrete-time case. The difference between the two cases is simply due to the traditional method of plotting continuous-time versus discrete-time transfer functions. The continuous Laplace transform is in Cartesian coordinates, where the x axis is the real axis, and the discrete Z-transform is in circular coordinates, where the ρ axis is the real axis.

When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous-time case) or a modulus equal to one (in the discrete-time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and complex component is zero in the continuous-time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.

For example, if a system has the impulse response

h[n] = 0.5^n u[n],

then its Z-transform is

H(z) = 1 / (1 − 0.5 z^(−1)),

which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle.
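The pole-inside-the-unit-circle condition can be checked numerically: an impulse response with pole at z = 0.5 is absolutely summable, which is exactly what bounds the output for any bounded input.

```python
# Numerical check of the example above: for h[n] = 0.5**n, the partial
# sums of |h[n]| converge to the finite value 1 / (1 - 0.5) = 2, so any
# input bounded by 1 produces an output bounded by 2 (BIBO stability).

partial_sums = [sum(0.5 ** n for n in range(N)) for N in (10, 100, 1000)]
# partial_sums increases monotonically and approaches 2.
```

Had the pole lain outside the unit circle, the same partial sums would grow without bound, which is the unstable case discussed next.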
However, if the impulse response were

h[n] = 1.5^n u[n],

then the Z-transform would be

H(z) = 1 / (1 − 1.5 z^(−1)),

which has a pole at z = 1.5 and is not BIBO stable since the pole has a modulus strictly greater than one.

Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots. Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.

Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.

From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system.
If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system, which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why the latter is sometimes preferred in dynamical systems analysis. Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.

Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control). A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it is desired to obtain particular dynamics in the closed loop: i.e. that the poles have Re[λ] < −λ̄, where λ̄ is a fixed value strictly greater than zero, instead of simply asking that Re[λ] < 0.

Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included. Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see below).
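The controllability condition discussed above can be tested numerically. For a two-state system, the pair (A, B) is controllable if and only if the matrix [B, AB] has full rank, which in the 2x2 case reduces to a nonzero determinant. The example matrices below are illustrative assumptions.

```python
# Sketch of a controllability check for a two-state system: build the
# controllability matrix [B | A*B] and test whether its determinant is
# nonzero (full rank).

def controllable_2x2(A, B):
    AB = [A[0][0] * B[0] + A[0][1] * B[1],
          A[1][0] * B[0] + A[1][1] * B[1]]
    det = B[0] * AB[1] - B[1] * AB[0]   # determinant of [B | AB]
    return det != 0

# A double integrator driven through its second state: controllable.
print(controllable_2x2([[0, 1], [0, 0]], [0, 1]))    # True
# The same dynamics with an input that only reaches the first state:
# the second state can never be influenced, so not controllable.
print(controllable_2x2([[0, 1], [0, 0]], [1, 0]))    # False
```

An analogous rank test on [C; CA] (the observability matrix) decides observability, reflecting the duality between the two properties.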
Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI). A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations; otherwise, the true system dynamics can be so complicated that a complete model is impossible.

The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations; for example, in the case of a mass-spring-damper system we know that

m ẍ(t) = −K x(t) − B ẋ(t).

Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.

Some advanced control techniques include an "on-line" identification process (see below). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.
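The mass-spring-damper model above can be simulated directly, and doing so with a deliberately perturbed stiffness gives a toy illustration of behaviour away from nominal parameters. All numerical values below are illustrative assumptions.

```python
# Simulation of m*x'' = -K*x - B*x' with nominal stiffness K and with K
# perturbed by 20%, standing in for a "true" plant that differs from the
# nominal model. With positive damping, both decay toward rest.

def settle(K, m=1.0, B=0.5, dt=0.001, seconds=50):
    x, v = 1.0, 0.0
    for _ in range(int(seconds / dt)):
        a = (-K * x - B * v) / m      # acceleration from the model
        v += dt * a
        x += dt * v                   # semi-implicit Euler update
    return abs(x)

nominal = settle(K=4.0)
perturbed = settle(K=4.0 * 1.2)       # 20% stiffness error vs. nominal
# Both responses have decayed to (near) rest despite the parameter error.
```

In this toy case stability is insensitive to the stiffness error because the damping term is unchanged; a robust-control analysis would quantify how large a parameter deviation a given closed loop can tolerate.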
Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). That is, if particular robustness qualities are needed, the engineer must shift their attention to a control technique that includes these qualities in its properties.

A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see below), and anti-windup systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.

For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are in general not measured, and so observers must be included and incorporated in pole placement design. Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics.
In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, and trajectory linearization control, normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.[19]

When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways; for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions. A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.

Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's theory) to ensure stability without regard to the inner dynamics of the system. The ability to fulfill different specifications varies depending on the model considered and the control strategy chosen. Many active and historical figures have made significant contributions to control theory.
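As a small worked example of the pole placement procedure described earlier, consider a double integrator x'' = u in state-space form. With full-state feedback u = −k1·x − k2·x', the gains can be read directly off the desired characteristic polynomial; the chosen poles (−1 and −2) are illustrative assumptions.

```python
# Pole placement for the double integrator A = [[0,1],[0,0]], B = [0,1].
# Under u = -k1*x - k2*x', the closed-loop matrix A - B*K has
# characteristic polynomial s**2 + k2*s + k1, so matching it against the
# desired polynomial yields the feedback gains.

# Desired poles -1 and -2  =>  (s + 1)(s + 2) = s**2 + 3*s + 2
desired_a1, desired_a0 = 3.0, 2.0
k1, k2 = desired_a0, desired_a1        # match coefficients term by term

# Verify via the trace and determinant of the 2x2 closed-loop matrix:
# for s**2 + a1*s + a0, trace = -a1 and determinant = a0.
Acl = [[0.0, 1.0],
       [-k1, -k2]]
trace = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
```

For larger systems the same matching is done systematically (e.g., via Ackermann's formula), and an observer supplies estimates of any unmeasured states.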
https://en.wikipedia.org/wiki/Control_theory#Stability
Filter design is the process of designing a signal processing filter that satisfies a set of requirements, some of which may be conflicting. The purpose is to find a realization of the filter that meets each of the requirements to an acceptable degree. The filter design process can be described as an optimization problem. Certain parts of the design process can be automated, but an experienced designer may be needed to get a good result. The design of digital filters is a complex topic.[1] Although filters are easily understood and calculated, the practical challenges of their design and implementation are significant and are the subject of advanced research.

Typical requirements considered in the design process include the frequency response, the impulse response, causality, stability, locality, and computational complexity, each discussed below.

The required frequency response is an important parameter. The steepness and complexity of the response curve determine the filter order and feasibility. A first-order recursive filter will only have a single frequency-dependent component. This means that the slope of the frequency response is limited to 6 dB per octave. For many purposes, this is not sufficient. To achieve steeper slopes, higher-order filters are required. In relation to the desired frequency function, there may also be an accompanying weighting function, which describes, for each frequency, how important it is that the resulting frequency function approximates the desired one. Typical examples of frequency function are low-pass, high-pass, band-pass, and band-stop responses.

There is a direct correspondence between the filter's frequency function and its impulse response: the former is the Fourier transform of the latter. That means that any requirement on the frequency function is a requirement on the impulse response, and vice versa. However, in certain applications it may be the filter's impulse response that is explicit and the design process then aims at producing as close an approximation as possible to the requested impulse response given all other requirements.
In some cases it may even be relevant to consider a frequency function and impulse response of the filter which are chosen independently from each other. For example, we may want both a specific frequency function of the filter and that the resulting filter have as small an effective width in the signal domain as possible. The latter condition can be realized by considering a very narrow function as the wanted impulse response of the filter even though this function has no relation to the desired frequency function. The goal of the design process is then to realize a filter which tries to meet both these contradicting design goals as much as possible. An example is high-resolution audio, in which the frequency response (magnitude and phase) for steady-state signals (sums of sinusoids) is the primary filter requirement, while an unconstrained impulse response may cause unexpected degradation due to time spreading of transient signals.[2][3]

Any filter operating in real time (the filter response only depends on the current and past inputs) must be causal. If the design process yields a noncausal filter, the resulting filter can be made causal by introducing an appropriate time shift (or delay). Filters that do not operate in real time (e.g. for image processing) can be non-causal. Noncausal filters may be designed to have zero delay.

A stable filter assures that every limited input signal produces a limited filter response. A filter which does not meet this requirement may in some situations prove useless or even harmful. Certain design approaches can guarantee stability, for example by using only feed-forward circuits such as an FIR filter. On the other hand, filters based on feedback circuits have other advantages and may therefore be preferred, even if this class of filters includes unstable filters. In this case, the filters must be carefully designed in order to avoid instability.
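The feed-forward/feedback distinction above is visible in the response to a unit impulse: a feed-forward (FIR) filter's response dies out exactly after its last coefficient, while a feedback (IIR) filter's response persists indefinitely. Both toy filters below are illustrative assumptions.

```python
# FIR vs. IIR behaviour on a unit impulse. The 3-tap moving average is
# feed-forward only; the one-pole filter feeds its own output back.

def fir(x):      # 3-tap moving average: output depends on inputs only
    return [(x[n]
             + (x[n - 1] if n >= 1 else 0)
             + (x[n - 2] if n >= 2 else 0)) / 3
            for n in range(len(x))]

def iir(x):      # one-pole feedback filter: y[n] = x[n] + 0.9*y[n-1]
    y = []
    for n in range(len(x)):
        y.append(x[n] + (0.9 * y[n - 1] if n >= 1 else 0))
    return y

impulse = [1.0] + [0.0] * 19
fir_tail = fir(impulse)[3:]      # exactly zero beyond the 3 taps
iir_tail = iir(impulse)[3:]      # still nonzero, decaying as 0.9**n
```

This also illustrates the stability guarantee mentioned above: the FIR response is trivially bounded, while the IIR response stays bounded only because the feedback coefficient has magnitude below one.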
In certain applications we have to deal with signals which contain components which can be described as local phenomena, for example pulses or steps, which have a certain time duration. A consequence of applying a filter to a signal is, in intuitive terms, that the duration of the local phenomena is extended by the width of the filter. This implies that it is sometimes important to keep the width of the filter's impulse response function as short as possible. According to the uncertainty relation of the Fourier transform, the product of the width of the filter's impulse response function and the width of its frequency function must exceed a certain constant. This means that any requirement on the filter's locality also implies a bound on its frequency function's width. Consequently, it may not be possible to simultaneously meet requirements on the locality of the filter's impulse response function as well as on its frequency function. This is a typical example of contradicting requirements.

A general desire in any design is that the number of operations (additions and multiplications) needed to compute the filter response is as low as possible. In certain applications, this desire is a strict requirement, for example due to limited computational resources, limited power resources, or limited time. The last limitation is typical in real-time applications. There are several ways in which a filter can have different computational complexity. For example, the order of a filter is more or less proportional to the number of operations. This means that by choosing a low-order filter, the computation time can be reduced. For discrete filters the computational complexity is more or less proportional to the number of filter coefficients. If the filter has many coefficients, for example in the case of multidimensional signals such as tomography data, it may be relevant to reduce the number of coefficients by removing those which are sufficiently close to zero.
In multirate filters, the number of coefficients can be reduced by taking advantage of the signal's bandwidth limits, where the input signal is downsampled (e.g. to its critical frequency) and upsampled after filtering.

Another issue related to computational complexity is separability, that is, if and how a filter can be written as a convolution of two or more simpler filters. In particular, this issue is of importance for multidimensional filters, e.g., 2D filters which are used in image processing. In this case, a significant reduction in computational complexity can be obtained if the filter can be separated as the convolution of one 1D filter in the horizontal direction and one 1D filter in the vertical direction. A result of the filter design process may, e.g., be to approximate some desired filter as a separable filter or as a sum of separable filters.

It must also be decided how the filter is going to be implemented. The design of linear analog filters is for the most part covered in the linear filter section. Digital filters are classified into one of two basic forms, according to how they respond to a unit impulse: finite impulse response (FIR) and infinite impulse response (IIR).

Unless the sample rate is fixed by some outside constraint, selecting a suitable sample rate is an important design decision. A high rate will require more in terms of computational resources, but less in terms of anti-aliasing filters. Interference and beating with other signals in the system may also be an issue. For any digital filter design, it is crucial to analyze and avoid aliasing effects. Often, this is done by adding analog anti-aliasing filters at the input and output, thus avoiding any frequency component above the Nyquist frequency. The complexity (i.e., steepness) of such filters depends on the required signal-to-noise ratio and the ratio between the sampling rate and the highest frequency of the signal.
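The steepness limit quoted earlier, 6 dB per octave for a first-order recursive filter, can be checked numerically from the filter's frequency response. The coefficient and test frequencies below are illustrative assumptions.

```python
# For y[n] = a*x[n] + (1-a)*y[n-1], the frequency response is
# H(w) = a / (1 - (1-a)*exp(-j*w)). Well above the cutoff, doubling the
# frequency roughly halves the magnitude, i.e. a 6 dB drop per octave.
import cmath
import math

def magnitude(a, w):
    return abs(a / (1 - (1 - a) * cmath.exp(-1j * w)))

a = 0.01                      # low cutoff, so 0.1 rad/sample is stopband
drop_db = 20 * math.log10(magnitude(a, 0.1) / magnitude(a, 0.2))
print(round(drop_db, 2))      # close to 6 dB over one octave
```

Cascading N such sections would multiply their responses and give roughly 6·N dB per octave, which is why steeper specifications force higher filter orders.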
Parts of the design problem relate to the fact that certain requirements are described in the frequency domain while others are expressed in the time domain, and that these may conflict. For example, it is not possible to obtain a filter which has both an arbitrary impulse response and an arbitrary frequency function. Several effects relating the time and frequency domains are described below.

As stated by the Gabor limit, an uncertainty principle, the product of the width of the frequency function and the width of the impulse response cannot be smaller than a specific constant. This implies that if a specific frequency function is requested, corresponding to a specific frequency width, the minimum width of the filter in the signal domain is set. Vice versa, if the maximum width of the response is given, this determines the smallest possible width in the frequency domain. This is a typical example of contradictory requirements where the filter design process may try to find a useful compromise.

Let σs² be the variance of the input signal and let σf² be the variance of the filter. The variance of the filter response, σr², is then given by

σr² = σs² + σf².

This means that σr > σf and implies that the localization of various features such as pulses or steps in the filter response is limited by the filter width in the signal domain. If a precise localization is requested, we need a filter of small width in the signal domain and, via the uncertainty principle, its width in the frequency domain cannot be arbitrarily small.

Let f(t) be a function and let F(ω) be its Fourier transform. There is a theorem which states that if the lowest-order derivative of F which is discontinuous has order n ≥ 0, then f has an asymptotic decay like t^(−n−1).
A consequence of this theorem is that the frequency function of a filter should be as smooth as possible to allow its impulse response to have a fast decay, and thereby a short width.

One common method for designing FIR filters is the Parks-McClellan filter design algorithm, based on the Remez exchange algorithm. Here the user specifies a desired frequency response, a weighting function for errors from this response, and a filter order N. The algorithm then finds the set of N coefficients that minimize the maximum deviation from the ideal. Intuitively, this finds the filter that is as close as possible to the desired response given that only N coefficients can be used. This method is particularly easy in practice and at least one text[4] includes a program that takes the desired filter and N and returns the optimum coefficients. One possible drawback of filters designed this way is that they contain many small ripples in the passband(s), since such a filter minimizes the peak error.

Another method of finding a discrete FIR filter is the filter optimization described in Knutsson et al., which minimizes the integral of the square of the error, instead of its maximum value. In its basic form this approach requires that an ideal frequency function of the filter F_I(ω) is specified together with a frequency weighting function W(ω) and a set of coordinates x_k in the signal domain where the filter coefficients are located. An error function ε is defined as

ε = ‖ W(ω) · (F{f}(ω) − F_I(ω)) ‖,

where f(x) is the discrete filter and F{·} is the discrete-time Fourier transform defined on the specified set of coordinates. The norm used here is, formally, the usual norm on L² spaces.
This means that ε measures the deviation between the requested frequency function of the filter, F_I, and the actual frequency function of the realized filter, F{f}. However, the deviation is also subject to the weighting function W before the error function is computed. Once the error function is established, the optimal filter is given by the coefficients f(x) which minimize ε. This can be done by solving the corresponding least squares problem. In practice, the L² norm has to be approximated by means of a suitable sum over discrete points in the frequency domain. In general, however, these points should be significantly more numerous than the number of coefficients in the signal domain to obtain a useful approximation.

The previous method can be extended to include an additional error term related to a desired filter impulse response in the signal domain, with a corresponding weighting function. The ideal impulse response can be chosen independently of the ideal frequency function and is in practice used to limit the effective width and to remove ringing effects of the resulting filter in the signal domain. This is done by choosing a narrow ideal filter impulse response function, e.g., an impulse, and a weighting function which grows fast with the distance from the origin, e.g., the distance squared. The optimal filter can still be calculated by solving a simple least squares problem and the resulting filter is then a "compromise" which has a total optimal fit to the ideal functions in both domains. An important parameter is the relative strength of the two weighting functions which determines in which domain it is more important to have a good fit relative to the ideal function.
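A toy version of this weighted least-squares design can be written in a few lines. It fits the coefficients of a symmetric (linear-phase) filter, whose response is F(w) = h0 + 2·Σ h_k·cos(k·w), to an ideal low-pass over a frequency grid with uniform weighting W(w) = 1. The cutoff, grid, and filter order are illustrative assumptions, and this is a sketch of the general least-squares idea, not the exact method of Knutsson et al.

```python
# Least-squares FIR design: minimize the sum over a frequency grid of
# (F(w) - F_I(w))**2, which is linear in the coefficients h[0..M] of the
# cosine basis of a symmetric filter. Solved via the normal equations.
import math

M = 4                                   # half-order: 2*M + 1 = 9 taps
grid = [math.pi * i / 63 for i in range(64)]
ideal = [1.0 if w < math.pi / 2 else 0.0 for w in grid]   # low-pass F_I

def basis(k, w):                        # cosine basis of a symmetric filter
    return 1.0 if k == 0 else 2.0 * math.cos(k * w)

# Normal equations G h = d of the least-squares problem
G = [[sum(basis(p, w) * basis(q, w) for w in grid) for q in range(M + 1)]
     for p in range(M + 1)]
d = [sum(basis(p, w) * t for w, t in zip(grid, ideal)) for p in range(M + 1)]

# Solve the small positive-definite system by Gaussian elimination
for col in range(M + 1):
    for row in range(col + 1, M + 1):
        f = G[row][col] / G[col][col]
        G[row] = [a - f * b for a, b in zip(G[row], G[col])]
        d[row] -= f * d[col]
h = [0.0] * (M + 1)
for row in range(M, -1, -1):
    h[row] = (d[row] - sum(G[row][c] * h[c]
                           for c in range(row + 1, M + 1))) / G[row][row]

def response(w):
    return sum(h[k] * basis(k, w) for k in range(M + 1))
# response(w) is near 1 in the passband and near 0 in the stopband,
# with Gibbs-like ripple near the cutoff due to the low order.
```

Raising M or shaping W(w) to emphasize particular bands trades ripple against transition width, which is exactly the compromise the text describes.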
https://en.wikipedia.org/wiki/Filter_design
In signal processing, a filter is a device or process that removes some unwanted components or features from a signal. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal. Most often, this means removing some frequencies or frequency bands. However, filters do not exclusively act in the frequency domain; especially in the field of image processing many other targets for filtering exist. Correlations can be removed for certain frequency components and not for others without having to act in the frequency domain. Filters are widely used in electronics and telecommunication, in radio, television, audio recording, radar, control systems, music synthesis, image processing, computer graphics, and structural dynamics.

There are many different bases of classifying filters and these overlap in many different ways; there is no simple hierarchical classification. Filters may be, for example, linear or non-linear, time-invariant or time-variant, causal or non-causal, analog or digital, discrete-time or continuous-time, passive or active, and of finite or infinite impulse response.

Linear continuous-time circuits are perhaps the most common meaning of "filter" in the signal processing world, and simply "filter" is often taken to be synonymous. These circuits are generally designed to remove certain frequencies and allow others to pass. Circuits that perform this function are generally linear in their response, or at least approximately so. Any nonlinearity would potentially result in the output signal containing frequency components not present in the input signal. The modern design methodology for linear continuous-time filters is called network synthesis. Some important filter families designed in this way are the Butterworth, Chebyshev, elliptic, and Bessel filters. The difference between these filter families is that they each use a different polynomial function to approximate the ideal filter response. This results in each having a different transfer function. Another older, less-used methodology is the image parameter method. Filters designed by this methodology are archaically called "wave filters".
Some important filters designed by this method are the constant k filter and the m-derived filter. Terms used to describe and classify linear filters include the passband and stopband, the cutoff frequency, roll-off, the transition band, and ripple.

One important application of filters is in telecommunication. Many telecommunication systems use frequency-division multiplexing, where the system designers divide a wide frequency band into many narrower frequency bands called "slots" or "channels", and each stream of information is allocated one of those channels. The people who design the filters at each transmitter and each receiver try to balance passing the desired signal through as accurately as possible, keeping interference to and from other cooperating transmitters and noise sources outside the system as low as possible, at reasonable cost.

Multilevel and multiphase digital modulation systems require filters that have flat phase delay (that is, are linear phase in the passband) to preserve pulse integrity in the time domain,[1] giving less intersymbol interference than other kinds of filters. On the other hand, analog audio systems using analog transmission can tolerate much larger ripples in phase delay, and so designers of such systems often deliberately sacrifice linear phase to get filters that are better in other ways: better stop-band rejection, lower passband amplitude ripple, lower cost, etc.

Filters can be built in a number of different technologies. The same transfer function can be realised in several different ways; that is, the mathematical properties of the filter are the same but the physical properties are quite different. Often the components in different technologies are directly analogous to each other and fulfill the same role in their respective filters. For instance, the resistors, inductors and capacitors of electronics correspond respectively to dampers, masses and springs in mechanics. Likewise, there are corresponding components in distributed-element filters.

Digital signal processing allows the inexpensive construction of a wide variety of filters.
The signal is sampled and an analog-to-digital converter turns the signal into a stream of numbers. A computer program running on a CPU or a specialized DSP (or less often running on a hardware implementation of the algorithm) calculates an output number stream. This output can be converted to a signal by passing it through a digital-to-analog converter. There are problems with noise introduced by the conversions, but these can be controlled and limited for many useful filters. Due to the sampling involved, the input signal must be of limited frequency content or aliasing will occur. In the late 1930s, engineers realized that small mechanical systems made of rigid materials such as quartz would acoustically resonate at radio frequencies, i.e. from audible frequencies (sound) up to several hundred megahertz. Some early resonators were made of steel, but quartz quickly became favored. The biggest advantage of quartz is that it is piezoelectric. This means that quartz resonators can directly convert their own mechanical motion into electrical signals. Quartz also has a very low coefficient of thermal expansion, which means that quartz resonators can produce stable frequencies over a wide temperature range. Quartz crystal filters have much higher quality factors than LCR filters. When higher stabilities are required, the crystals and their driving circuits may be mounted in a "crystal oven" to control the temperature. For very narrow band filters, sometimes several crystals are operated in series. A large number of crystals can be collapsed into a single component, by mounting comb-shaped evaporations of metal on a quartz crystal. In this scheme, a "tapped delay line" reinforces the desired frequencies as the sound waves flow across the surface of the quartz crystal. The tapped delay line has become a general scheme of making high-Q filters in many different ways. SAW (surface acoustic wave) filters are electromechanical devices commonly used in radio frequency applications.
Electrical signals are converted to a mechanical wave in a device constructed of a piezoelectric crystal or ceramic; this wave is delayed as it propagates across the device, before being converted back to an electrical signal by further electrodes. The delayed outputs are recombined to produce a direct analog implementation of a finite impulse response filter. This hybrid filtering technique is also found in an analog sampled filter. SAW filters are limited to frequencies up to 3 GHz. The filters were developed by Professor Ted Paige and others.[2] BAW (bulk acoustic wave) filters are electromechanical devices. BAW filters can implement ladder or lattice filters. BAW filters typically operate at frequencies from around 2 to around 16 GHz, and may be smaller or thinner than equivalent SAW filters. Two main variants of BAW filters are making their way into devices: thin-film bulk acoustic resonators (FBARs) and solidly mounted bulk acoustic resonators (SMRs). Another method of filtering, at microwave frequencies from 800 MHz to about 5 GHz, is to use a synthetic single crystal yttrium iron garnet sphere made of a chemical combination of yttrium and iron (YIGF, or yttrium iron garnet filter). The garnet sits on a strip of metal driven by a transistor, and a small loop antenna touches the top of the sphere. An electromagnet changes the frequency that the garnet will pass. The advantage of this method is that the garnet can be tuned over a very wide frequency range by varying the strength of the magnetic field. For even higher frequencies and greater precision, the vibrations of atoms must be used. Atomic clocks use caesium masers as ultra-high-Q filters to stabilize their primary oscillators. Another method, used at high, fixed frequencies with very weak radio signals, is to use a ruby maser tapped delay line. The transfer function of a filter is most often defined in the domain of the complex frequencies.
The back-and-forth passage to and from this domain is effected by the Laplace transform and its inverse (therefore, here below, the term "input signal" shall be understood as "the Laplace transform of" the time representation of the input signal, and so on). The transfer function H(s) of a filter is the ratio of the output signal Y(s) to the input signal X(s) as a function of the complex frequency s: H(s) = Y(s)/X(s), with s = σ + jω. For filters that are constructed of discrete components (lumped elements): Distributed-element filters do not, in general, have rational-function transfer functions, but can approximate them. The construction of a transfer function involves the Laplace transform, and therefore one must assume null initial conditions, because the transform of a derivative carries the initial value as an additive constant: L{f′(t)} = sF(s) − f(0). And when f(0) = 0 we can get rid of the constants and use the usual expression L{f′(t)} = sF(s). An alternative to transfer functions is to give the behavior of the filter as a convolution of the time-domain input with the filter's impulse response. The convolution theorem, which holds for Laplace transforms, guarantees equivalence with transfer functions. Certain filters may be specified by family and bandform. A filter's family is specified by the approximating polynomial used, and each leads to certain characteristics of the transfer function of the filter. Some common filter families and their particular characteristics are: Each family of filters can be specified to a particular order. The higher the order, the more the filter will approach the "ideal" filter; but also the longer the impulse response is and the longer the latency will be.
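The order trade-off can be made concrete with the standard Butterworth magnitude formula |H(jω)| = 1/√(1 + (ω/ω_c)^(2n)); the few lines below are my own illustration, not a design from this article:

```python
def butterworth_mag(w, wc, n):
    """Magnitude response of an nth-order Butterworth low-pass filter."""
    return 1.0 / (1.0 + (w / wc) ** (2 * n)) ** 0.5

wc = 1.0
# every order passes the cutoff frequency at 1/sqrt(2), about -3 dB
print(round(butterworth_mag(wc, wc, 2), 4), round(butterworth_mag(wc, wc, 8), 4))
# one octave above the cutoff, a higher order is far closer to the ideal brick wall
print(round(butterworth_mag(2 * wc, wc, 2), 4), round(butterworth_mag(2 * wc, wc, 8), 4))
```

The eighth-order response is already deep into its stop band one octave out, while the second-order response has barely begun to roll off, illustrating why higher order buys a sharper transition at the cost of longer impulse response and latency.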
An ideal filter has full transmission in the pass band, complete attenuation in the stop band, and an abrupt transition between the two bands, but this filter has infinite order (i.e., the response cannot be expressed as a linear differential equation with a finite sum) and infinite latency (i.e., its compact support in the Fourier transform forces its time response to be everlasting). Here is an image comparing Butterworth, Chebyshev, and elliptic filters. The filters in this illustration are all fifth-order low-pass filters. The particular implementation – analog or digital, passive or active – makes no difference; their output would be the same. As is clear from the image, elliptic filters are sharper than the others, but they show ripples on the whole bandwidth. Any family can be used to implement a particular bandform, which determines which frequencies are transmitted and which, outside the passband, are more or less attenuated. The transfer function completely specifies the behavior of a linear filter, but not the particular technology used to implement it. In other words, there are a number of different ways of achieving a particular transfer function when designing a circuit. A particular bandform of filter can be obtained by transformation of a prototype filter of that family. Impedance matching structures invariably take on the form of a filter, that is, a network of non-dissipative elements. For instance, in a passive electronics implementation, it would likely take the form of a ladder topology of inductors and capacitors. The design of matching networks shares much in common with filters, and the design invariably will have a filtering action as an incidental consequence. Although the prime purpose of a matching network is not to filter, it is often the case that both functions are combined in the same circuit. The need for impedance matching does not arise while signals are in the digital domain. Similar comments can be made regarding power dividers and directional couplers.
When implemented in a distributed-element format, these devices can take the form of a distributed-element filter. There are four ports to be matched, and widening the bandwidth requires filter-like structures to achieve this. The inverse is also true: distributed-element filters can take the form of coupled lines.[3]
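The digital filtering path described earlier (sample, compute an output stream, reconstruct) can be sketched with a first-order IIR low-pass filter; this is my own minimal illustration, not a design from this article:

```python
import math

def iir_lowpass(samples, alpha):
    """First-order IIR low-pass: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    out, y = [], 0.0
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

n = 1000
slow = [math.sin(2 * math.pi * 0.01 * i) for i in range(n)]  # low frequency: passband
fast = [math.sin(2 * math.pi * 0.45 * i) for i in range(n)]  # near Nyquist: stopband

out_slow = iir_lowpass(slow, alpha=0.1)
out_fast = iir_lowpass(fast, alpha=0.1)

# steady-state amplitude (skipping the initial transient): the slow tone passes
# nearly intact while the fast tone is strongly attenuated
amp = lambda y: max(abs(v) for v in y[n // 2:])
print(round(amp(out_slow), 2), round(amp(out_fast), 2))
```

In a real system the `samples` list would come from an analog-to-digital converter and `out` would feed a digital-to-analog converter, as described above.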
https://en.wikipedia.org/wiki/Filter_(signal_processing)
In complex analysis, a branch of mathematics, the Gauss–Lucas theorem gives a geometric relation between the roots of a polynomial P and the roots of its derivative P′. The set of roots of a real or complex polynomial is a set of points in the complex plane. The theorem states that the roots of P′ all lie within the convex hull of the roots of P, that is, the smallest convex polygon containing the roots of P. When P has a single root, this convex hull is a single point, and when the roots lie on a line, the convex hull is a segment of this line. The Gauss–Lucas theorem, named after Carl Friedrich Gauss and Félix Lucas, is similar in spirit to Rolle's theorem. If P is a (nonconstant) polynomial with complex coefficients, all zeros of P′ belong to the convex hull of the set of zeros of P.[1] It is easy to see that if P(x) = ax² + bx + c is a second-degree polynomial, the zero of P′(x) = 2ax + b is the average of the roots of P. In that case, the convex hull is the line segment with the two roots as endpoints, and it is clear that the average of the roots is the middle point of the segment. For a third-degree complex polynomial P (cubic function) with three distinct zeros, Marden's theorem states that the zeros of P′ are the foci of the Steiner inellipse, which is the unique ellipse tangent to the midpoints of the triangle formed by the zeros of P. For a fourth-degree complex polynomial P (quartic function) with four distinct zeros forming a concave quadrilateral, one of the zeros of P lies within the convex hull of the other three; all three zeros of P′ lie in two of the three triangles formed by the interior zero of P and two other zeros of P.[2] In addition, if a polynomial of degree n with real coefficients has n distinct real zeros x_1 < x_2 < ⋯ < x_n, we see, using Rolle's theorem, that the zeros of the derivative polynomial are in the interval [x_1, x_n], which is the convex hull of the set of roots.
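The statement is easy to check numerically; in this sketch of my own (the cubic and the point-in-triangle test are illustrative choices, not from the article), the critical points of P(z) = (z − 1)(z + 1)(z − i) should lie in the triangle with vertices 1, −1, i:

```python
import cmath

# P(z) = (z - 1)(z + 1)(z - i) = z^3 - i z^2 - z + i, so P'(z) = 3z^2 - 2i z - 1
a, b, c = 3, -2j, -1
disc = cmath.sqrt(b * b - 4 * a * c)
crit = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]  # roots of P'

def in_triangle(p, v0, v1, v2):
    """Point-in-triangle test via signed areas, treating complex numbers as 2-D points."""
    def cross(u, w):
        return u.real * w.imag - u.imag * w.real
    d0 = cross(v1 - v0, p - v0)
    d1 = cross(v2 - v1, p - v1)
    d2 = cross(v0 - v2, p - v2)
    return (d0 >= 0 and d1 >= 0 and d2 >= 0) or (d0 <= 0 and d1 <= 0 and d2 <= 0)

print([in_triangle(z, 1, -1, 1j) for z in crit])
```

Both critical points land inside the convex hull of the roots, as the theorem requires.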
The convex hull of the roots of the polynomial particularly includes the centroid (the arithmetic mean) of the roots. By the fundamental theorem of algebra, P is a product of linear factors, P(z) = α(z − a_1)(z − a_2)⋯(z − a_n), where the complex numbers a_1, a_2, …, a_n are the – not necessarily distinct – zeros of the polynomial P, the complex number α is the leading coefficient of P, and n is the degree of P. For any root z of P′, if it is also a root of P, then the theorem is trivially true. Otherwise, we have for the logarithmic derivative P′(z)/P(z) = Σ_{k=1}^n 1/(z − a_k) = 0. Hence Σ_{k=1}^n (z̄ − ā_k)/|z − a_k|² = 0. Taking their conjugates, and dividing, we obtain z as a convex sum of the roots of P: z = (Σ_{k=1}^n a_k/|z − a_k|²) / (Σ_{k=1}^n 1/|z − a_k|²), a weighted average with nonnegative weights summing to one.
https://en.wikipedia.org/wiki/Gauss%E2%80%93Lucas_theorem
In mathematics, and in particular the field of complex analysis, Hurwitz's theorem is a theorem associating the zeroes of a sequence of holomorphic functions, convergent locally uniformly on compact subsets, with those of their corresponding limit. The theorem is named after Adolf Hurwitz. Let {f_k} be a sequence of holomorphic functions on a connected open set G that converge uniformly on compact subsets of G to a holomorphic function f which is not constantly zero on G. If f has a zero of order m at z_0, then for every small enough ρ > 0 and for sufficiently large k ∈ N (depending on ρ), f_k has precisely m zeroes in the disk defined by |z − z_0| < ρ, including multiplicity. Furthermore, these zeroes converge to z_0 as k → ∞.[1] The theorem does not guarantee that the result will hold for arbitrary disks. Indeed, if one chooses a disk such that f has zeroes on its boundary, the theorem fails. An explicit example is to consider the unit disk D and the sequence defined by f_n(z) = z − 1 + 1/n, which converges uniformly to f(z) = z − 1. The function f(z) contains no zeroes in D; however, each f_n has exactly one zero in the disk, at the real value 1 − (1/n). Hurwitz's theorem is used in the proof of the Riemann mapping theorem,[2] and also has the following two corollaries as an immediate consequence: Let f be an analytic function on an open subset of the complex plane with a zero of order m at z_0, and suppose that {f_n} is a sequence of functions converging uniformly on compact subsets to f. Fix some ρ > 0 such that f(z) ≠ 0 in 0 < |z − z_0| ≤ ρ. Choose δ such that |f(z)| > δ for z on the circle |z − z_0| = ρ. Since f_k(z) converges uniformly on the disc we have chosen, we can find N such that |f_k(z)| ≥ δ/2 for every k ≥ N and every z on the circle, ensuring that the quotient f_k′(z)/f_k(z) is well defined for all z on the circle |z − z_0| = ρ.
By Weierstrass's theorem we have f_k′ → f′ uniformly on the disc, and hence we have another uniform convergence: f_k′(z)/f_k(z) → f′(z)/f(z) on the circle |z − z_0| = ρ. Denoting the number of zeroes of f_k(z) in the disk by N_k, we may apply the argument principle to find N_k = (1/2πi) ∮_{|z−z_0|=ρ} f_k′(z)/f_k(z) dz → (1/2πi) ∮_{|z−z_0|=ρ} f′(z)/f(z) dz = m. In the above step, we were able to interchange the integral and the limit because of the uniform convergence of the integrand. We have shown that N_k → m as k → ∞. Since the N_k are integer valued, N_k must equal m for large enough k.[1]
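The zero-counting integral used in this proof can be evaluated numerically; here is a minimal sketch of my own (the function and contours are illustrative, not from the article):

```python
import cmath

def count_zeros(f, df, center, radius, n=4000):
    """Count zeros of f inside a circle via the argument principle:
    (1/2*pi*i) * contour integral of f'(z)/f(z), midpoint rule on the circle."""
    total = 0j
    for k in range(n):
        t = 2 * cmath.pi * (k + 0.5) / n
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / n)
        total += df(z) / f(z) * dz
    return round((total / (2j * cmath.pi)).real)

# f(z) = z^2 - 1 has two zeros (at +1 and -1) inside |z| < 2, none inside |z - 5| < 1
print(count_zeros(lambda z: z * z - 1, lambda z: 2 * z, 0, 2),
      count_zeros(lambda z: z * z - 1, lambda z: 2 * z, 5, 1))
```

Because the integrand is smooth and periodic in the contour parameter, the midpoint rule converges very quickly, and rounding recovers the exact integer count.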
https://en.wikipedia.org/wiki/Hurwitz%27s_theorem_(complex_analysis)
In mathematics, Marden's theorem, named after Morris Marden but proved about 100 years earlier by Jörg Siebeck, gives a geometric relationship between the zeroes of a third-degree polynomial with complex coefficients and the zeroes of its derivative. See also geometrical properties of polynomial roots. A cubic polynomial has three zeroes in the complex number plane, which in general form a triangle, and the Gauss–Lucas theorem states that the roots of its derivative lie within this triangle. Marden's theorem states their location within this triangle more precisely: This proof comes from an exercise in Fritz Carlson's book "Geometri" (in Swedish, 1943).[1] Given any a, b ∈ ℂ with a ≠ 0, define g(z) = f(az + b); then g′(z) = af′(az + b). Thus, we have g^{-1}(0) = (f^{-1}(0) − b)/a, and similarly for g′ and f′. In other words, by a linear change of variables we may perform arbitrary translation, rotation, and scaling on the roots of f and f′. Thus, without loss of generality, we let the Steiner inellipse's focal points be on the real axis, at ±c, where c is the focal length. Let a, b now denote the long and short semiaxis lengths, so that c = √(a² − b²). Let the three roots of f be z_j := x_j + y_j i for j = 0, 1, 2. Horizontally stretch the complex plane so that the Steiner inellipse becomes a circle of radius b. This transforms the triangle into an equilateral triangle, with vertices ζ_j = (b/a)x_j + y_j i.
By geometry of the equilateral triangle, ∑_j ζ_j = 0, so we have ∑_j z_j = 0, and thus f(z) = z³ + z ∑_j z_j z_{j+1} − z_0 z_1 z_2 by Vieta's formulas (for notational cleanness, we "loop back" the indices, that is, z_3 = z_0). Now it remains to show that 3c² + ∑_j z_j z_{j+1} = 0. Since 0 = (∑_j z_j)² = ∑_j z_j² + 2 ∑_j z_j z_{j+1}, it remains to show ∑_j z_j² = 6c², that is, it remains to show ∑_j x_j y_j = 0 and ∑_j (x_j² − y_j²) = 6(a² − b²). By the geometry of the equilateral triangle, we have ∑_j ζ_j² = 0 and |ζ_j| = 2b for each j, which implies ∑_j (2b/a) x_j y_j = 0, ∑_j ((b²/a²) x_j² − y_j²) = 0, and ∑_j ((b²/a²) x_j² + y_j²) = 12b², which yields the desired equalities. By the Gauss–Lucas theorem, the root of the double derivative p″(z) must be the average of the two foci, which is the center point of the ellipse and the centroid of the triangle. In the special case that the triangle is equilateral (as happens, for instance, for the polynomial p(z) = z³ − 1) the inscribed ellipse becomes a circle, and the derivative of p has a double root at the center of the circle. Conversely, if the derivative has a double root, then the triangle must be equilateral (Kalman 2008a). A more general version of the theorem, due to Linfield (1920), applies to polynomials p(z) = (z − a)^i (z − b)^j (z − c)^k whose degree i + j + k may be higher than three, but that have only three roots a, b, and c.
For such polynomials, the roots of the derivative may be found at the multiple roots of the given polynomial (the roots whose exponent is greater than one) and at the foci of an ellipse whose points of tangency to the triangle divide its sides in the ratios i : j, j : k, and k : i. Another generalization (Parish (2006)) is to n-gons: some n-gons have an interior ellipse that is tangent to each side at the side's midpoint. Marden's theorem still applies: the foci of this midpoint-tangent inellipse are zeroes of the derivative of the polynomial whose zeroes are the vertices of the n-gon. Jörg Siebeck discovered this theorem 81 years before Marden wrote about it. However, Dan Kalman titled his American Mathematical Monthly paper "Marden's theorem" because, as he writes, "I call this Marden's Theorem because I first read it in M. Marden's wonderful book". Marden (1945, 1966) attributes what is now known as Marden's theorem to Siebeck (1864) and cites nine papers that included a version of the theorem. Dan Kalman won the 2009 Lester R. Ford Award of the Mathematical Association of America for his 2008 paper in the American Mathematical Monthly describing the theorem.
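The equilateral special case noted above, p(z) = z³ − 1, is easy to verify numerically (a small sketch of my own): the cube roots of unity form an equilateral triangle, and p′(z) = 3z² has a double root at the triangle's centroid.

```python
import cmath

# zeros of p(z) = z^3 - 1: the three cube roots of unity
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

# centroid of the triangle formed by the zeros
centroid = sum(roots) / 3

# p'(z) = 3z^2 vanishes (doubly) at z = 0, which coincides with the centroid,
# the center of the inscribed circle into which the Steiner inellipse degenerates
print(abs(centroid) < 1e-12)
```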
https://en.wikipedia.org/wiki/Marden%27s_theorem
In control theory and stability theory, the Nyquist stability criterion or Strecker–Nyquist stability criterion, independently discovered by the German electrical engineer Felix Strecker at Siemens in 1930[1][2][3] and the Swedish-American electrical engineer Harry Nyquist at Bell Telephone Laboratories in 1932,[4] is a graphical technique for determining the stability of a linear dynamical system. Because it only looks at the Nyquist plot of the open-loop system, it can be applied without explicitly computing the poles and zeros of either the closed-loop or open-loop system (although the number of each type of right-half-plane singularities must be known). As a result, it can be applied to systems defined by non-rational functions, such as systems with delays. In contrast to Bode plots, it can handle transfer functions with right half-plane singularities. In addition, there is a natural generalization to more complex systems with multiple inputs and multiple outputs, such as control systems for airplanes. The Nyquist stability criterion is widely used in electronics and control system engineering, as well as other fields, for designing and analyzing systems with feedback. While Nyquist is one of the most general stability tests, it is still restricted to linear time-invariant (LTI) systems. Nevertheless, there are generalizations of the Nyquist criterion (and plot) for non-linear systems, such as the circle criterion and the scaled relative graph of a nonlinear operator.[5] Additionally, other stability criteria like Lyapunov methods can also be applied for non-linear systems. Although Nyquist is a graphical technique, it only provides a limited amount of intuition for why a system is stable or unstable, or how to modify an unstable system to be stable. Techniques like Bode plots, while less general, are sometimes a more useful design tool. A Nyquist plot is a parametric plot of a frequency response used in automatic control and signal processing.
The most common use of Nyquist plots is for assessing the stability of a system with feedback. In Cartesian coordinates, the real part of the transfer function is plotted on the x-axis while the imaginary part is plotted on the y-axis. The frequency is swept as a parameter, resulting in one point per frequency. The same plot can be described using polar coordinates, where the gain of the transfer function is the radial coordinate and the phase of the transfer function is the corresponding angular coordinate. The Nyquist plot is named after Harry Nyquist, a former engineer at Bell Laboratories. Assessment of the stability of a closed-loop negative feedback system is done by applying the Nyquist stability criterion to the Nyquist plot of the open-loop system (i.e. the same system without its feedback loop). This method is easily applicable even for systems with delays and other non-rational transfer functions, which may appear difficult to analyze with other methods. Stability is determined by looking at the number of encirclements of the point (−1, 0). The range of gains over which the system will be stable can be determined by looking at crossings of the real axis. The Nyquist plot can provide some information about the shape of the transfer function. For instance, the plot provides information on the difference between the number of zeros and poles of the transfer function[6] by the angle at which the curve approaches the origin. When drawn by hand, a cartoon version of the Nyquist plot is sometimes used, which shows the linearity of the curve, but where coordinates are distorted to show more detail in regions of interest. When plotted computationally, one needs to be careful to cover all frequencies of interest. This typically means that the parameter is swept logarithmically, in order to cover a wide range of values. The mathematics uses the Laplace transform, which transforms integrals and derivatives in the time domain to simple multiplication and division in the s domain.
We consider a system whose transfer function is G(s); when placed in a closed loop with negative feedback H(s), the closed-loop transfer function (CLTF) then becomes G(s)/(1 + G(s)H(s)). Stability can be determined by examining the roots of the desensitivity factor polynomial 1 + G(s)H(s), e.g. using the Routh array, but this method is somewhat tedious. Conclusions can also be reached by examining the open-loop transfer function (OLTF) G(s)H(s), using its Bode plots or, as here, its polar plot using the Nyquist criterion, as follows. Any Laplace-domain transfer function T(s) can be expressed as the ratio of two polynomials: T(s) = N(s)/D(s). The roots of N(s) are called the zeros of T(s), and the roots of D(s) are the poles of T(s). The poles of T(s) are also said to be the roots of the characteristic equation D(s) = 0. The stability of T(s) is determined by the values of its poles: for stability, the real part of every pole must be negative. If T(s) is formed by closing a negative unity feedback loop around the open-loop transfer function, then the roots of the characteristic equation are also the zeros of 1 + G(s)H(s), or simply the roots of A(s) + B(s) = 0 (writing G(s)H(s) = B(s)/A(s)). From complex analysis, a contour Γ_s drawn in the complex s plane, encompassing but not passing through any number of zeros and poles of a function F(s), can be mapped to another plane (named the F(s) plane) by the function F.
Precisely, each complex point s in the contour Γ_s is mapped to the point F(s) in the new F(s) plane, yielding a new contour. The Nyquist plot of F(s), which is the contour Γ_F(s) = F(Γ_s), will encircle the point −1/k + j0 of the F(s) plane N times, where N = P − Z by Cauchy's argument principle. Here Z and P are, respectively, the number of zeros of 1 + kF(s) and poles of F(s) inside the contour Γ_s. Note that we count encirclements in the F(s) plane in the same sense as the contour Γ_s, and that encirclements in the opposite direction are negative encirclements. That is, we consider clockwise encirclements to be positive and counterclockwise encirclements to be negative. Instead of Cauchy's argument principle, the original paper by Harry Nyquist in 1932 uses a less elegant approach. The approach explained here is similar to the approach used by Leroy MacColl (Fundamental Theory of Servomechanisms, 1945) or by Hendrik Bode (Network Analysis and Feedback Amplifier Design, 1945), both of whom also worked for Bell Laboratories. This approach appears in most modern textbooks on control theory. We first construct the Nyquist contour, a contour that encompasses the right half of the complex plane: The Nyquist contour mapped through the function 1 + G(s) yields a plot of 1 + G(s) in the complex plane. By the argument principle, the number of clockwise encirclements of the origin must be the number of zeros of 1 + G(s) in the right-half complex plane minus the number of poles of 1 + G(s) in the right-half complex plane.
If instead the contour is mapped through the open-loop transfer function G(s), the result is the Nyquist plot of G(s). By counting the resulting contour's encirclements of −1, we find the difference between the number of poles and zeros in the right-half complex plane of 1 + G(s). Recalling that the zeros of 1 + G(s) are the poles of the closed-loop system, and noting that the poles of 1 + G(s) are the same as the poles of G(s), we now state the Nyquist criterion: Given a Nyquist contour Γ_s, let P be the number of poles of G(s) encircled by Γ_s, and Z be the number of zeros of 1 + G(s) encircled by Γ_s. Alternatively, and more importantly, if Z is the number of poles of the closed-loop system in the right half-plane, and P is the number of poles of the open-loop transfer function G(s) in the right half-plane, the resultant contour in the G(s)-plane, Γ_G(s), shall encircle (clockwise) the point (−1 + j0) N times, such that N = Z − P. If the system is originally open-loop unstable, feedback is necessary to stabilize the system. Right-half-plane (RHP) poles represent that instability. For closed-loop stability of a system, the number of closed-loop roots in the right half of the s-plane must be zero. Hence, the number of counterclockwise encirclements about −1 + j0 must be equal to the number of open-loop poles in the RHP. Any clockwise encirclement of the critical point by the open-loop frequency response (when judged from low frequency to high frequency) would indicate that the feedback control system would be unstable if the loop were closed.
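The counting can be made concrete by computing the encirclements of −1 from samples of G(jω). This is my own sketch (the plant G(s) = K/(s + 1)³ and the gains are illustrative, not from the article); the winding number below is accumulated counterclockwise-positive, so the clockwise encirclements N of the criterion appear with a negative sign:

```python
import cmath
import math

def winding_about_minus1(G, wmax=1000.0, n=200000):
    """Winding number (counterclockwise positive) of the curve G(jw), w swept
    from -wmax to +wmax, about the critical point -1. For a strictly proper G,
    the arc of the Nyquist contour at infinity maps to the origin and adds nothing."""
    total = 0.0
    prev = cmath.phase(G(-1j * wmax) + 1)  # angle of the vector from -1 to G(jw)
    for k in range(1, n + 1):
        w = -wmax + 2 * wmax * k / n
        cur = cmath.phase(G(1j * w) + 1)
        d = cur - prev
        # unwrap each phase step into (-pi, pi]
        while d > math.pi:
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(total / (2 * math.pi))

G_stable = lambda s: 4.0 / (s + 1) ** 3     # K = 4 < 8: closed loop stable
G_unstable = lambda s: 20.0 / (s + 1) ** 3  # K = 20 > 8: closed loop unstable

n_stable = winding_about_minus1(G_stable)
n_unstable = winding_about_minus1(G_unstable)
print(n_stable, n_unstable)
```

With P = 0 open-loop RHP poles, the two clockwise encirclements found for K = 20 indicate Z = N = 2 closed-loop poles in the right half-plane, matching N = Z − P; for K = 4 there are no encirclements and the closed loop is stable.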
(Using RHP zeros to "cancel out" RHP poles does not remove the instability, but rather ensures that the system will remain unstable even in the presence of feedback, since the closed-loop roots travel between open-loop poles and zeros in the presence of feedback. In fact, the RHP zero can make the unstable pole unobservable and therefore not stabilizable through feedback.) The above consideration was conducted with an assumption that the open-loop transfer function G(s) does not have any pole on the imaginary axis (i.e. poles of the form 0 + jω). This results from the requirement of the argument principle that the contour cannot pass through any pole of the mapping function. The most common case is systems with integrators (poles at zero). To be able to analyze systems with poles on the imaginary axis, the Nyquist contour can be modified to avoid passing through the point 0 + jω. One way to do it is to construct a semicircular arc with radius r → 0 around 0 + jω that starts at 0 + j(ω − r) and travels anticlockwise to 0 + j(ω + r). Such a modification implies that the phasor G(s) travels along an arc of infinite radius by −lπ, where l is the multiplicity of the pole on the imaginary axis. Our goal is, through this process, to check the stability of the transfer function of our unity feedback system with gain k, which is given by T(s) = kG(s)/(1 + kG(s)). That is, we would like to check whether the characteristic equation of the above transfer function, given by D(s) = 1 + kG(s) = 0, has zeros outside the open left half-plane (commonly abbreviated OLHP). We suppose that we have a clockwise (i.e. negatively oriented) contour Γ_s enclosing the right half-plane, with indentations as needed to avoid passing through zeros or poles of the function G(s).
Cauchy's argument principle states that N = (1/2πj) ∮_{Γ_s} (D′(s)/D(s)) ds = Z − P, where N is the winding number of the image contour D(Γ_s) about the origin, Z denotes the number of zeros of D(s) enclosed by the contour, and P denotes the number of poles of D(s) enclosed by the same contour. Rearranging, we have Z = N + P, which is to say the closed-loop RHP pole count is the winding number plus the open-loop RHP pole count. We then note that D(s) = 1 + kG(s) has exactly the same poles as G(s). Thus, we may find P by counting the poles of G(s) that appear within the contour, that is, within the open right half-plane (ORHP). We will now rearrange the above integral via substitution. That is, setting u(s) = D(s), we have N = (1/2πj) ∮_{D(Γ_s)} du/u. We then make a further substitution, setting v(u) = (u − 1)/k. This gives us N = (1/2πj) ∮_{v(u(Γ_s))} dv/(v + 1/k). We now note that v(u(Γ_s)) = (D(Γ_s) − 1)/k = G(Γ_s) gives us the image of our contour under G(s), which is to say our Nyquist plot. We may further reduce the integral by applying Cauchy's integral formula. In fact, we find that the above integral corresponds precisely to the number of times the Nyquist plot encircles the point −1/k clockwise. We thus find that T(s) as defined above corresponds to a stable unity-feedback system when Z, as evaluated above, is equal to 0. The Nyquist stability criterion is a graphical technique that determines the stability of a dynamical system, such as a feedback control system. It is based on the argument principle and the Nyquist plot of the open-loop transfer function of the system. It can be applied to systems that are not defined by rational functions, such as systems with delays. It can also handle transfer functions with singularities in the right half-plane, unlike Bode plots.
The Nyquist stability criterion can also be used to find the phase and gain margins of a system, which are important for frequency domain controller design.[7]
https://en.wikipedia.org/wiki/Nyquist_stability_criterion
In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system, such as: A pole–zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O. A pole–zero plot is plotted in the plane of a complex frequency domain, which can represent either a continuous-time or a discrete-time system: In general, a rational transfer function for a continuous-time LTI system has the form: H(s) = B(s)/A(s) = (∑_{m=0}^{M} b_m s^m) / (s^N + ∑_{n=0}^{N−1} a_n s^n) = (b_0 + b_1 s + b_2 s² + ⋯ + b_M s^M) / (a_0 + a_1 s + a_2 s² + ⋯ + a_{N−1} s^{N−1} + s^N), where either M or N or both may be zero, but in real systems it should be the case that M ≤ N; otherwise the gain would be unbounded at high frequencies. The region of convergence (ROC) for a given continuous-time transfer function is a half-plane or vertical strip, either of which contains no poles. In general, the ROC is not unique, and the particular ROC in any given case depends on whether the system is causal or anti-causal. The ROC is usually chosen to include the imaginary axis since it is important for most practical systems to have BIBO stability.
H(s) = 25/(s² + 6s + 25). This system has no (finite) zeros and two poles: s = α₁ = −3 + 4j and s = α₂ = −3 − 4j. The pole–zero plot marks these two poles in the s-plane. Notice that these two poles are complex conjugates, which is the necessary and sufficient condition to have real-valued coefficients in the differential equation representing the system. In general, a rational transfer function for a discrete-time LTI system has the form: H(z) = P(z)/Q(z) = (∑_{m=0}^{M} b_m z^{−m})/(1 + ∑_{n=1}^{N} a_n z^{−n}) = (b_0 + b_1 z^{−1} + b_2 z^{−2} + ⋯ + b_M z^{−M})/(1 + a_1 z^{−1} + a_2 z^{−2} + ⋯ + a_N z^{−N}), where the b_m and a_n are the numerator and denominator coefficients. Either M or N or both may be zero. The region of convergence (ROC) for a given discrete-time transfer function is a disk or annulus which contains no uncancelled poles. In general, the ROC is not unique, and the particular ROC in any given case depends on whether the system is causal or anti-causal. The ROC is usually chosen to include the unit circle since it is important for most practical systems to have BIBO stability. If P(z) and Q(z) are completely factored, their roots can be easily plotted in the z-plane. For example, given the following transfer function: H(z) = (z + 2)/(z² + 1/4), the only (finite) zero is located at z = −2, and the two poles are located at z = ±j/2, where j is the imaginary unit. The pole–zero plot marks this zero and these poles in the z-plane.
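The pole and zero locations quoted in both examples can be verified with NumPy's root finder (a sketch, not part of the article):

```python
import numpy as np

# Continuous-time example: H(s) = 25 / (s^2 + 6s + 25)
poles_s = np.roots([1, 6, 25])        # expect -3 + 4j and -3 - 4j
# Discrete-time example: H(z) = (z + 2) / (z^2 + 1/4)
zeros_z = np.roots([1, 2])            # expect -2
poles_z = np.roots([1, 0, 0.25])      # expect +j/2 and -j/2
```

The continuous-time poles come out as the complex-conjugate pair −3 ± 4j, consistent with the real coefficients of the denominator polynomial.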
https://en.wikipedia.org/wiki/Pole%E2%80%93zero_plot
In mathematics, more specifically complex analysis, the residue is a complex number proportional to the contour integral of a meromorphic function along a path enclosing one of its singularities. (More generally, residues can be calculated for any function f : ℂ ∖ {a_k} → ℂ that is holomorphic except at the discrete points {a_k}, even if some of them are essential singularities.) Residues can be computed quite easily and, once known, allow the determination of general contour integrals via the residue theorem. The residue of a meromorphic function f at an isolated singularity a, often denoted Res(f, a), Res_a(f) or res_{z=a} f(z), is the unique value R such that f(z) − R/(z − a) has an analytic antiderivative in a punctured disk 0 < |z − a| < δ. Alternatively, residues can be calculated by finding Laurent series expansions, and one can define the residue as the coefficient a_{−1} of a Laurent series. The concept can be used to evaluate the contour integrals that arise in the residue theorem. According to the residue theorem, for a meromorphic function f, the residue at a point a_k is given as Res(f, a_k) = (1/2πi) ∮_γ f(z) dz, where γ is a positively oriented simple closed curve around a_k that neither passes through nor encloses any other singularity. The definition of a residue can be generalized to arbitrary Riemann surfaces. Suppose ω is a 1-form on a Riemann surface.
Let ω be meromorphic at some point x, so that we may write ω in local coordinates as f(z) dz. Then, the residue of ω at x is defined to be the residue of f(z) at the point corresponding to x. Computing the residue of a monomial makes most residue computations easy to do. Since path integral computations are homotopy invariant, we will let C be the circle with radius 1 traversed counterclockwise. Then, using the change of coordinates z = e^{iθ}, we find that dz = i e^{iθ} dθ, hence our integral now reads as (1/2πi) ∮_C z^k dz = (1/2π) ∫_0^{2π} e^{i(k+1)θ} dθ. Thus, the residue of z^k is 1 if the integer k = −1 and 0 otherwise. If a function is expressed as a Laurent series expansion around c as follows: f(z) = ∑_{n=−∞}^{∞} a_n (z − c)^n, then the residue at the point c is calculated as Res(f, c) = (1/2πi) ∮_γ f(z) dz = (1/2πi) ∑_{n=−∞}^{∞} ∮_γ a_n (z − c)^n dz = a_{−1}, using the result above for the counterclockwise contour integral γ of a monomial around the point c. Hence, if a Laurent series representation of a function exists around c, then its residue at c is given by the coefficient of the (z − c)^{−1} term.
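The monomial computation above is easy to reproduce numerically: parametrizing a counterclockwise circle as z(t) = c + re^{it} gives dz = i(z − c) dt, so (1/2πi)∮ f dz reduces to an average over the circle. A sketch assuming NumPy (the helper name is ours):

```python
import numpy as np

def residue_numeric(f, c=0.0, radius=1.0, n=50000):
    """Approximate Res(f, c) = (1/2*pi*i) * contour integral of f over a
    counterclockwise circle of the given radius around c.  With
    z(t) = c + r*e^{it}, dz = i*(z - c) dt, so the normalized integral
    is just the mean of f(z)*(z - c) over the circle."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = c + radius * np.exp(1j * t)
    return np.mean(f(z) * (z - c))

# Residue of z^k at 0 is 1 for k = -1 and 0 otherwise:
r_minus1 = residue_numeric(lambda z: z**-1)
r_plus2  = residue_numeric(lambda z: z**2)
r_minus3 = residue_numeric(lambda z: z**-3)
```

The same helper recovers residues used elsewhere in the article, such as Res(e^z/z^5, 0) = 1/4!, since the trapezoidal rule on a smooth periodic integrand converges extremely fast.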
For a meromorphic function f with a finite set of singularities within a positively oriented simple closed curve C which does not pass through any singularity, the value of the contour integral is given according to the residue theorem as ∮_C f(z) dz = 2πi ∑_{k=1}^{n} I(C, a_k) Res(f, a_k), where I(C, a_k), the winding number, is 1 if a_k is in the interior of C and 0 if not, simplifying to ∮_C f(z) dz = 2πi ∑ Res(f, a_k), where the a_k are all the isolated singularities within the contour C. Suppose a punctured disk D = {z : 0 < |z − c| < R} in the complex plane is given and f is a holomorphic function defined (at least) on D. The residue Res(f, c) of f at c is the coefficient a_{−1} of (z − c)^{−1} in the Laurent series expansion of f around c. Various methods exist for calculating this value, and the choice of which method to use depends on the function in question, and on the nature of the singularity. According to the residue theorem, we have Res(f, c) = (1/2πi) ∮_γ f(z) dz, where γ traces out a circle around c in a counterclockwise manner and does not pass through or contain other singularities within it. We may choose the path γ to be a circle of radius ε around c. Since ε can be made as small as we desire, the circle can be made to contain only the singularity at c, by the nature of isolated singularities. This may be used for calculation in cases where the integral can be calculated directly, but it is usually the case that residues are used to simplify calculation of integrals, and not the other way around. If the function f can be continued to a holomorphic function on the whole disk |z − c| < R, then Res(f, c) = 0. The converse is not generally true.
If c is a simple pole of f, the residue of f is given by Res(f, c) = lim_{z→c} (z − c) f(z). If that limit does not exist, then f instead has an essential singularity at c. If the limit is 0, then f is either analytic at c or has a removable singularity there. If the limit is equal to infinity, then the order of the pole is higher than 1. It may be that the function f can be expressed as a quotient of two functions, f(z) = g(z)/h(z), where g and h are holomorphic functions in a neighbourhood of c, with h(c) = 0 and h′(c) ≠ 0. In such a case, L'Hôpital's rule can be used to simplify the above formula to Res(f, c) = g(c)/h′(c). More generally, if c is a pole of order p, then the residue of f around z = c can be found by the formula Res(f, c) = (1/(p − 1)!) lim_{z→c} d^{p−1}/dz^{p−1} [(z − c)^p f(z)]. This formula can be very useful in determining the residues for low-order poles. For higher-order poles, the calculations can become unmanageable, and series expansion is usually easier. For essential singularities, no such simple formula exists, and residues must usually be taken directly from series expansions. In general, the residue at infinity is defined as Res(f, ∞) = −Res((1/z²) f(1/z), 0). If lim_{|z|→∞} f(z) = 0, then the residue at infinity can be computed using the formula Res(f, ∞) = −lim_{|z|→∞} z·f(z). If instead lim_{|z|→∞} f(z) = c ≠ 0, then the residue at infinity is Res(f, ∞) = lim_{|z|→∞} z²·f′(z). For functions meromorphic on the entire complex plane with finitely many singularities, the sum of the residues at the (necessarily isolated) singularities plus the residue at infinity is zero, which gives ∑_k Res(f, a_k) + Res(f, ∞) = 0. If parts or all of a function can be expanded into a Taylor series or Laurent series, which may be possible if the parts or the whole of the function has a standard series expansion, then calculating the residue is significantly simpler than by other methods. The residue of the function is simply given by the coefficient of (z − c)^{−1} in the Laurent series expansion of the function. As an example, consider the contour integral ∮_C (e^z/z^5) dz, where C is some simple closed curve about 0. Let us evaluate this integral using a standard convergence result about integration by series.
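The limit and quotient formulas above can be exercised on the article's own examples, using only the standard library (a sketch, not part of the article):

```python
import cmath
from math import factorial

# Simple pole via Res(g/h, c) = g(c)/h'(c): for f(z) = sin z / (z^2 - z),
# take g = sin and h(z) = z^2 - z, so h'(z) = 2z - 1 and at c = 1 we get
# Res = sin(1) / 1.
res_simple = cmath.sin(1) / (2 * 1 - 1)

# The limit form Res = lim_{z->c} (z - c) f(z), approximated at z = c + 1e-6:
z = 1 + 1e-6
res_limit = (z - 1) * cmath.sin(z) / (z**2 - z)

# Pole of order p = 5: for f(z) = e^z / z^5, z^5 f(z) = e^z, whose fourth
# derivative at 0 is 1, so the order-p formula gives Res = (1/4!) * 1.
res_order5 = 1 / factorial(4)
```

Both routes to the residue at z = 1 agree with the series-expansion value sin 1 derived later in the article.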
We can substitute the Taylor series for e^z into the integrand. The integral then becomes ∮_C (1/z^5) ∑_{n=0}^{∞} (z^n/n!) dz. Let us bring the 1/z^5 factor into the series, so that the contour integral of the series reads ∮_C ∑_{n=0}^{∞} (z^{n−5}/n!) dz. Since the series converges uniformly on the support of the integration path, we are allowed to exchange integration and summation. The series of path integrals then collapses to a much simpler form because of the previous computation: the integral around C of every term not of the form c z^{−1} is zero, and the integral is reduced to (1/4!) ∮_C (1/z) dz = 2πi/4!. The value 1/4! is the residue of e^z/z^5 at z = 0, and is denoted Res_{z=0} e^z/z^5 = 1/4!. As a second example, consider calculating the residues at the singularities of the function f(z) = sin z/(z² − z), which may be used to calculate certain contour integrals. This function appears to have a singularity at z = 0, but if one factorizes the denominator and thus writes the function as f(z) = sin z/(z(z − 1)), it is apparent that the singularity at z = 0 is a removable singularity, and the residue at z = 0 is therefore 0. The only other singularity is at z = 1.
Recall the expression for the Taylor series for a functiong(z) aboutz=a:g(z)=g(a)+g′(a)(z−a)+g″(a)(z−a)22!+g‴(a)(z−a)33!+⋯{\displaystyle g(z)=g(a)+g'(a)(z-a)+{g''(a)(z-a)^{2} \over 2!}+{g'''(a)(z-a)^{3} \over 3!}+\cdots }So, forg(z) = sinzanda= 1 we havesin⁡z=sin⁡1+(cos⁡1)(z−1)+−(sin⁡1)(z−1)22!+−(cos⁡1)(z−1)33!+⋯.{\displaystyle \sin z=\sin 1+(\cos 1)(z-1)+{-(\sin 1)(z-1)^{2} \over 2!}+{-(\cos 1)(z-1)^{3} \over 3!}+\cdots .}and forg(z) = 1/zanda= 1 we have1z=1(z−1)+1=1−(z−1)+(z−1)2−(z−1)3+⋯.{\displaystyle {\frac {1}{z}}={\frac {1}{(z-1)+1}}=1-(z-1)+(z-1)^{2}-(z-1)^{3}+\cdots .}Multiplying those two series and introducing 1/(z− 1) gives ussin⁡zz(z−1)=sin⁡1z−1+(cos⁡1−sin⁡1)+(z−1)(−sin⁡12!−cos⁡1+sin⁡1)+⋯.{\displaystyle {\frac {\sin z}{z(z-1)}}={\sin 1 \over z-1}+(\cos 1-\sin 1)+(z-1)\left(-{\frac {\sin 1}{2!}}-\cos 1+\sin 1\right)+\cdots .}So the residue off(z) atz= 1 is sin 1. The next example shows that, computing a residue by series expansion, a major role is played by theLagrange inversion theorem. Letu(z):=∑k≥1ukzk{\displaystyle u(z):=\sum _{k\geq 1}u_{k}z^{k}}be anentire function, and letv(z):=∑k≥1vkzk{\displaystyle v(z):=\sum _{k\geq 1}v_{k}z^{k}}with positive radius of convergence, and withv1≠0{\textstyle v_{1}\neq 0}. Sov(z){\textstyle v(z)}has a local inverseV(z){\textstyle V(z)}at 0, andu(1/V(z)){\textstyle u(1/V(z))}ismeromorphicat 0. Then we have:Res0⁡(u(1/V(z)))=∑k=0∞kukvk.{\displaystyle \operatorname {Res} _{0}{\big (}u(1/V(z)){\big )}=\sum _{k=0}^{\infty }ku_{k}v_{k}.}Indeed,Res0⁡(u(1/V(z)))=Res0⁡(∑k≥1ukV(z)−k)=∑k≥1ukRes0⁡(V(z)−k){\displaystyle \operatorname {Res} _{0}{\big (}u(1/V(z)){\big )}=\operatorname {Res} _{0}\left(\sum _{k\geq 1}u_{k}V(z)^{-k}\right)=\sum _{k\geq 1}u_{k}\operatorname {Res} _{0}{\big (}V(z)^{-k}{\big )}}because the first series converges uniformly on any small circle around 0. 
Using the Lagrange inversion theoremRes0⁡(V(z)−k)=kvk,{\displaystyle \operatorname {Res} _{0}{\big (}V(z)^{-k}{\big )}=kv_{k},}and we get the above expression. For example, ifu(z)=z+z2{\displaystyle u(z)=z+z^{2}}and alsov(z)=z+z2{\displaystyle v(z)=z+z^{2}}, thenV(z)=2z1+1+4z{\displaystyle V(z)={\frac {2z}{1+{\sqrt {1+4z}}}}}andu(1/V(z))=1+1+4z2z+1+2z+1+4z2z2.{\displaystyle u(1/V(z))={\frac {1+{\sqrt {1+4z}}}{2z}}+{\frac {1+2z+{\sqrt {1+4z}}}{2z^{2}}}.}The first term contributes 1 to the residue, and the second term contributes 2 since it is asymptotic to1/z2+2/z{\displaystyle 1/z^{2}+2/z}. Note that, with the corresponding stronger symmetric assumptions onu(z){\textstyle u(z)}andv(z){\textstyle v(z)}, it also followsRes0⁡(u(1/V))=Res0⁡(v(1/U)),{\displaystyle \operatorname {Res} _{0}\left(u(1/V)\right)=\operatorname {Res} _{0}\left(v(1/U)\right),}whereU(z){\textstyle U(z)}is a local inverse ofu(z){\textstyle u(z)}at 0.
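The claimed value 3 of the example residue (1 from the first term plus 2 from the second) can be confirmed by numerically integrating u(1/V(z)) over a small circle around 0. A sketch assuming NumPy; the radius 0.1 keeps the circle inside the region where V is analytic and non-vanishing away from 0:

```python
import numpy as np

def V(z):
    # Local inverse at 0 of v(z) = z + z^2, as given in the example.
    return 2 * z / (1 + np.sqrt(1 + 4 * z))

def u(w):
    return w + w**2

# Res = (1/2*pi*i) * contour integral of u(1/V(z)) over |z| = 0.1;
# with z(t) = r*e^{it}, dz = iz dt, the normalized integral is mean(f(z)*z).
t = np.linspace(0.0, 2.0 * np.pi, 50000, endpoint=False)
z = 0.1 * np.exp(1j * t)
res = np.mean(u(1.0 / V(z)) * z)
```

The result matches the Lagrange-inversion prediction sum_k k*u_k*v_k = 1 + 2 = 3.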
https://en.wikipedia.org/wiki/Residue_(complex_analysis)
Rouché's theorem, named after Eugène Rouché, states that for any two complex-valued functions f and g holomorphic inside some region K with closed contour ∂K, if |g(z)| < |f(z)| on ∂K, then f and f + g have the same number of zeros inside K, where each zero is counted as many times as its multiplicity. This theorem assumes that the contour ∂K is simple, that is, without self-intersections. Rouché's theorem is an easy consequence of a stronger symmetric Rouché's theorem described below. The theorem is usually used to simplify the problem of locating zeros, as follows. Given an analytic function, we write it as the sum of two parts, one of which is simpler and grows faster than (thus dominates) the other part. We can then locate the zeros by looking at only the dominating part. For example, the polynomial z^5 + 3z^3 + 7 has exactly 5 zeros in the disk |z| < 2, since |3z^3 + 7| ≤ 31 < 32 = |z^5| for every |z| = 2, and z^5, the dominating part, has five zeros in the disk. It is possible to provide an informal explanation of Rouché's theorem. Let C be a closed, simple curve (i.e., not self-intersecting). Let h(z) = f(z) + g(z). If f and g are both holomorphic on the interior of C, then h must also be holomorphic on the interior of C. Then, with the conditions imposed above, Rouché's theorem in its original (and not symmetric) form says that if |f(z)| > |h(z) − f(z)| for every z on C, then f and h have the same number of zeros in the interior of C. Notice that the condition |f(z)| > |h(z) − f(z)| means that for any z, the distance from f(z) to the origin is larger than the length of h(z) − f(z), which in the following picture means that for each point on the blue curve, the segment joining it to the origin is larger than the green segment associated with it. Informally we can say that the blue curve f(z) is always closer to the red curve h(z) than it is to the origin.
The previous paragraph shows that h(z) must wind around the origin exactly as many times as f(z). The index of both curves around zero is therefore the same, so by the argument principle, f(z) and h(z) must have the same number of zeros inside C. One popular, informal way to summarize this argument is as follows: if a person were to walk a dog on a leash around and around a tree, such that the distance between the person and the tree is always greater than the length of the leash, then the person and the dog go around the tree the same number of times. Consider the polynomial z² + 2az + b² with a > b > 0. By the quadratic formula it has two zeros at −a ± √(a² − b²). Rouché's theorem can be used to obtain some hint about their positions. Since |z² + b²| ≤ 2b² < 2a|z| for all |z| = b, Rouché's theorem says that the polynomial has exactly one zero inside the disk |z| < b. Since −a − √(a² − b²) is clearly outside the disk, we conclude that the zero is −a + √(a² − b²). More generally, consider a polynomial f(z) = a_n z^n + ⋯ + a_0. If |a_k| r^k > ∑_{j≠k} |a_j| r^j for some r > 0 and some k ∈ {0, 1, …, n}, then by Rouché's theorem the polynomial has exactly k roots, counted with multiplicity, inside the disk B(0, r). This sort of argument can be useful in locating residues when one applies Cauchy's residue theorem. Rouché's theorem can also be used to give a short proof of the fundamental theorem of algebra.
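Both example counts can be confirmed by computing the roots directly (a NumPy sketch, not part of the article):

```python
import numpy as np

# On |z| = 2, |3z^3 + 7| <= 31 < 32 = |z^5|, so z^5 + 3z^3 + 7 should have
# all five of its zeros in |z| < 2:
quintic_roots = np.roots([1, 0, 3, 0, 0, 7])
n_quintic_inside = int(np.sum(np.abs(quintic_roots) < 2))

# With a = 3, b = 1, the quadratic z^2 + 6z + 1 should have exactly one
# zero in |z| < 1 (the other, -3 - sqrt(8), lies well outside):
quad_roots = np.roots([1, 6, 1])
n_quad_inside = int(np.sum(np.abs(quad_roots) < 1))
```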
Let p(z) = a_0 + a_1 z + a_2 z² + ⋯ + a_n z^n with a_n ≠ 0, and choose R > 0 so large that |a_0 + a_1 z + ⋯ + a_{n−1} z^{n−1}| ≤ ∑_{j=0}^{n−1} |a_j| R^j < |a_n| R^n = |a_n z^n| for |z| = R. Since a_n z^n has n zeros inside the disk |z| < R (because R > 0), it follows from Rouché's theorem that p also has the same number of zeros inside the disk. One advantage of this proof over the others is that it shows not only that a polynomial must have a zero but that the number of its zeros is equal to its degree (counting, as usual, multiplicity). Another use of Rouché's theorem is to prove the open mapping theorem for analytic functions. We refer to the article for the proof. A stronger version of Rouché's theorem was published by Theodor Estermann in 1962.[1] It states: let K ⊂ G be a bounded region with continuous boundary ∂K. Two holomorphic functions f, g ∈ H(G) have the same number of roots (counting multiplicity) in K if the strict inequality |f(z) − g(z)| < |f(z)| + |g(z)| holds for all z on the boundary ∂K. The original version of Rouché's theorem then follows from this symmetric version applied to the functions f + g and f, together with the trivial inequality |f(z) + g(z)| ≥ 0 (in fact this inequality is strict, since f(z) + g(z) = 0 for some z ∈ ∂K would imply |g(z)| = |f(z)|). The statement can be understood intuitively as follows.
By considering−g{\displaystyle -g}in place ofg{\displaystyle g}, the condition can be rewritten as|f(z)+g(z)|<|f(z)|+|g(z)|{\displaystyle |f(z)+g(z)|<|f(z)|+|g(z)|}forz∈∂K{\displaystyle z\in \partial K}. Since|f(z)+g(z)|≤|f(z)|+|g(z)|{\displaystyle |f(z)+g(z)|\leq |f(z)|+|g(z)|}always holds by the triangle inequality, this is equivalent to saying that|f(z)+g(z)|≠|f(z)|+|g(z)|{\displaystyle |f(z)+g(z)|\neq |f(z)|+|g(z)|}on∂K{\displaystyle \partial K}, which in turn means that forz∈∂K{\displaystyle z\in \partial K}the functionsf(z){\displaystyle f(z)}andg(z){\displaystyle g(z)}are non-vanishing andarg⁡f(z)≠arg⁡g(z){\displaystyle \arg {f(z)}\neq \arg {g(z)}}. Intuitively, if the values off{\displaystyle f}andg{\displaystyle g}never pass through the origin and never point in the same direction asz{\displaystyle z}circles along∂K{\displaystyle \partial K}, thenf(z){\displaystyle f(z)}andg(z){\displaystyle g(z)}must wind around the origin the same number of times. LetC:[0,1]→C{\displaystyle C\colon [0,1]\to \mathbb {C} }be a simple closed curve whose image is the boundary∂K{\displaystyle \partial K}. The hypothesis implies thatfhas no roots on∂K{\displaystyle \partial K}, hence by theargument principle, the numberNf(K) of zeros offinKis12πi∮Cf′(z)f(z)dz=12πi∮f∘Cdzz=Indf∘C(0),{\displaystyle {\frac {1}{2\pi i}}\oint _{C}{\frac {f'(z)}{f(z)}}\,dz={\frac {1}{2\pi i}}\oint _{f\circ C}{\frac {dz}{z}}=\mathrm {Ind} _{f\circ C}(0),}i.e., thewinding numberof the closed curvef∘C{\displaystyle f\circ C}around the origin; similarly forg. The hypothesis ensures thatg(z) is not a negative real multiple off(z) for anyz=C(x), thus 0 does not lie on the line segment joiningf(C(x)) tog(C(x)), andHt(x)=(1−t)f(C(x))+tg(C(x)){\displaystyle H_{t}(x)=(1-t)f(C(x))+tg(C(x))}is ahomotopybetween the curvesf∘C{\displaystyle f\circ C}andg∘C{\displaystyle g\circ C}avoiding the origin. 
The winding number is homotopy-invariant: the functionI(t)=IndHt(0)=12πi∮Htdzz{\displaystyle I(t)=\mathrm {Ind} _{H_{t}}(0)={\frac {1}{2\pi i}}\oint _{H_{t}}{\frac {dz}{z}}}is continuous and integer-valued, hence constant. This showsNf(K)=Indf∘C(0)=Indg∘C(0)=Ng(K).{\displaystyle N_{f}(K)=\mathrm {Ind} _{f\circ C}(0)=\mathrm {Ind} _{g\circ C}(0)=N_{g}(K).}
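The proof's main device, counting zeros as the winding number of f ∘ C around the origin, is easy to reproduce numerically. A sketch assuming NumPy; the helper name and the test cases are ours:

```python
import numpy as np

def zeros_inside(f, df, center=0.0, radius=1.0, n=50000):
    """Count zeros of f (with multiplicity) in |z - center| < radius via the
    argument principle: (1/2*pi*i) * contour integral of f'/f, assuming f is
    holomorphic on the closed disk and has no zeros on the circle itself.
    With z(t) = center + r*e^{it}, dz = i*(z - center) dt, so the normalized
    integral is the mean of (f'/f)*(z - center) over the circle."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    integral = np.mean(df(z) / f(z) * (z - center))
    return int(round(integral.real))

# f(z) = z^3 - 1 has its three zeros (the cube roots of unity) in |z| < 2:
n3 = zeros_inside(lambda z: z**3 - 1, lambda z: 3 * z**2, radius=2.0)
# and exactly one of them, z = 1, lies in |z - 1| < 0.5:
n1 = zeros_inside(lambda z: z**3 - 1, lambda z: 3 * z**2, center=1.0, radius=0.5)
```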
https://en.wikipedia.org/wiki/Rouch%C3%A9%27s_theorem
Inmathematics,Sendov's conjecture, sometimes also calledIlieff's conjecture, concerns the relationship between the locations ofrootsandcritical pointsof apolynomial functionof acomplex variable. It is named afterBlagovest Sendov. Theconjecturestates that for a polynomial with all rootsr1, ...,rninside theclosed unit disk|z| ≤ 1, each of thenroots is at a distance no more than 1 from at least one critical point. TheGauss–Lucas theoremsays that all of the critical points lie within theconvex hullof the roots. It follows that the critical points must be within the unit disk, since the roots are. The conjecture has beenprovenforn< 9 by Brown-Xiang and fornsufficiently largebyTao.[1][2] The conjecture was first proposed byBlagovest Sendovin 1959; he described the conjecture to his colleagueNikola Obreshkov. In 1967 the conjecture was misattributed[3]to Ljubomir Iliev byWalter Hayman.[4]In 1969 Meir and Sharma proved the conjecture for polynomials withn< 6. In 1991 Brown proved the conjecture forn< 7. Borcea extended the proof ton< 8 in 1996. Brown and Xiang[5]proved the conjecture forn< 9 in 1999.Terence Taoproved the conjecture for sufficiently largenin 2020.
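A numerical spot-check of the statement on one concrete polynomial (a sketch assuming NumPy; the sample roots are an arbitrary choice of ours, and degree 5 falls within the proven range n < 9):

```python
import numpy as np

# All five roots below lie in the closed unit disk, so every root should be
# within distance 1 of some critical point (a root of p').
roots = np.array([0.9, -0.8, 0.5j, -0.5j, -0.3 + 0.7j])
p = np.poly(roots)                  # monic polynomial with these roots
crit = np.roots(np.polyder(p))      # the four critical points
worst = max(np.min(np.abs(r - crit)) for r in roots)

# Gauss-Lucas: the critical points lie in the convex hull of the roots,
# hence also inside the unit disk.
max_crit = np.abs(crit).max()
```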
https://en.wikipedia.org/wiki/Sendov%27s_conjecture
In engineering, applied mathematics, and physics, the Buckingham π theorem is a key theorem in dimensional analysis. It is a formalisation of Rayleigh's method of dimensional analysis. Loosely, the theorem states that if there is a physically meaningful equation involving a certain number n of physical variables, then the original equation can be rewritten in terms of a set of p = n − k dimensionless parameters π1, π2, ..., πp constructed from the original variables, where k is the number of physical dimensions involved; it is obtained as the rank of a particular matrix. The theorem provides a method for computing sets of dimensionless parameters from the given variables, or nondimensionalization, even if the form of the equation is still unknown. The Buckingham π theorem indicates that validity of the laws of physics does not depend on a specific unit system. A statement of this theorem is that any physical law can be expressed as an identity involving only dimensionless combinations (ratios or products) of the variables linked by the law (for example, pressure and volume are linked by Boyle's law – they are inversely proportional). If the dimensionless combinations' values changed with the systems of units, then the equation would not be an identity, and the theorem would not hold. Although named for Edgar Buckingham, the π theorem was first proved by the French mathematician Joseph Bertrand in 1878.[1] Bertrand considered only special cases of problems from electrodynamics and heat conduction, but his article contains, in distinct terms, all the basic ideas of the modern proof of the theorem and clearly indicates the theorem's utility for modelling physical phenomena. The technique of using the theorem ("the method of dimensions") became widely known due to the works of Rayleigh.
The first application of theπtheoremin the general case[note 1]to the dependence of pressure drop in a pipe upon governing parameters probably dates back to 1892,[2]a heuristic proof with the use of series expansions, to 1894.[3] Formal generalization of theπtheorem for the case of arbitrarily many quantities was given first byA. Vaschy[fr]in 1892,[4][5]then in 1911—apparently independently—by both A. Federman[6]andD. Riabouchinsky,[7]and again in 1914 by Buckingham.[8]It was Buckingham's article that introduced the use of the symbol "πi{\displaystyle \pi _{i}}" for the dimensionless variables (or parameters), and this is the source of the theorem's name. More formally, the numberp{\displaystyle p}of dimensionless terms that can be formed is equal to thenullityof thedimensional matrix, andk{\displaystyle k}is therank. For experimental purposes, different systems that share the same description in terms of thesedimensionless numbersare equivalent. In mathematical terms, if we have a physically meaningful equation such asf(q1,q2,…,qn)=0,{\displaystyle f(q_{1},q_{2},\ldots ,q_{n})=0,}whereq1,…,qn{\displaystyle q_{1},\ldots ,q_{n}}are anyn{\displaystyle n}physical variables, and there is a maximal dimensionally independent subset of sizek{\displaystyle k},[note 2]then the above equation can be restated asF(π1,π2,…,πp)=0,{\displaystyle F(\pi _{1},\pi _{2},\ldots ,\pi _{p})=0,}whereπ1,…,πp{\displaystyle \pi _{1},\ldots ,\pi _{p}}are dimensionless parameters constructed from theqi{\displaystyle q_{i}}byp=n−k{\displaystyle p=n-k}dimensionless equations — the so-calledPi groups— of the formπi=q1a1q2a2⋯qnan,{\displaystyle \pi _{i}=q_{1}^{a_{1}}\,q_{2}^{a_{2}}\cdots q_{n}^{a_{n}},}where the exponentsai{\displaystyle a_{i}}are rational numbers. (They can always be taken to be integers by redefiningπi{\displaystyle \pi _{i}}as being raised to a power that clears all denominators.) 
If there areℓ{\displaystyle \ell }fundamental units in play, thenp≥n−ℓ{\displaystyle p\geq n-\ell }. The Buckinghamπtheorem provides a method for computing sets of dimensionless parameters from given variables, even if the form of the equation remains unknown. However, the choice of dimensionless parameters is not unique; Buckingham's theorem only provides a way of generating sets of dimensionless parameters and does not indicate the most "physically meaningful". Two systems for which these parameters coincide are calledsimilar(as withsimilar triangles, they differ only in scale); they are equivalent for the purposes of the equation, and the experimentalist who wants to determine the form of the equation can choose the most convenient one. Most importantly, Buckingham's theorem describes the relation between the number of variables and fundamental dimensions. For simplicity, it will be assumed that the space of fundamental and derived physical units forms avector spaceover thereal numbers, with the fundamental units as basis vectors, and with multiplication of physical units as the "vector addition" operation, and raising to powers as the "scalar multiplication" operation: represent a dimensional variable as the set of exponents needed for the fundamental units (with a power of zero if the particular fundamental unit is not present). For instance, thestandard gravityg{\displaystyle g}has units ofL/T2=L1T−2{\displaystyle {\mathsf {L}}/{\mathsf {T}}^{2}={\mathsf {L}}^{1}{\mathsf {T}}^{-2}}(length over time squared), so it is represented as the vector(1,−2){\displaystyle (1,-2)}with respect to the basis of fundamental units (length, time). We could also require that exponents of the fundamental units be rational numbers and modify the proof accordingly, in which case the exponents in the pi groups can always be taken as rational numbers or even integers. 
Suppose we have quantitiesq1,q2,…,qn{\displaystyle q_{1},q_{2},\dots ,q_{n}}, where the units ofqi{\displaystyle q_{i}}contain length raised to the powerci{\displaystyle c_{i}}. If we originally measure length in meters but later switch to centimeters, then the numerical value ofqi{\displaystyle q_{i}}would be rescaled by a factor of100ci{\displaystyle 100^{c_{i}}}. Any physically meaningful law should be invariant under an arbitrary rescaling of every fundamental unit; this is the fact that the pi theorem hinges on. Given a system ofn{\displaystyle n}dimensional variablesq1,…,qn{\displaystyle q_{1},\ldots ,q_{n}}inℓ{\displaystyle \ell }fundamental (basis) dimensions, thedimensional matrixis theℓ×n{\displaystyle \ell \times n}matrixM{\displaystyle M}whoseℓ{\displaystyle \ell }rows correspond to the fundamental dimensions and whosen{\displaystyle n}columns are the dimensions of the variables: the(i,j){\displaystyle (i,j)}th entry (where1≤i≤ℓ{\displaystyle 1\leq i\leq \ell }and1≤j≤n{\displaystyle 1\leq j\leq n}) is the power of thei{\displaystyle i}th fundamental dimension in thej{\displaystyle j}th variable. The matrix can be interpreted as taking in a combination of the variable quantities and giving out the dimensions of the combination in terms of the fundamental dimensions. 
So theℓ×1{\displaystyle \ell \times 1}(column) vector that results from the multiplicationM[a1⋮an]{\displaystyle M{\begin{bmatrix}a_{1}\\\vdots \\a_{n}\end{bmatrix}}}consists of the units ofq1a1q2a2⋯qnan{\displaystyle q_{1}^{a_{1}}\,q_{2}^{a_{2}}\cdots q_{n}^{a_{n}}}in terms of theℓ{\displaystyle \ell }fundamental independent (basis) units.[note 3] If we rescale thei{\displaystyle i}th fundamental unit by a factor ofαi{\displaystyle \alpha _{i}}, thenqj{\displaystyle q_{j}}gets rescaled byα1−m1jα2−m2j⋯αℓ−mℓj{\displaystyle \alpha _{1}^{-m_{1j}}\,\alpha _{2}^{-m_{2j}}\cdots \alpha _{\ell }^{-m_{\ell j}}}, wheremij{\displaystyle m_{ij}}is the(i,j){\displaystyle (i,j)}th entry of the dimensional matrix. In order to convert this into a linear algebra problem, we takelogarithms(the base is irrelevant), yielding[log⁡q1⋮log⁡qn]↦[log⁡q1⋮log⁡qn]−MT[log⁡α1⋮log⁡αℓ],{\displaystyle {\begin{bmatrix}\log {q_{1}}\\\vdots \\\log {q_{n}}\end{bmatrix}}\mapsto {\begin{bmatrix}\log {q_{1}}\\\vdots \\\log {q_{n}}\end{bmatrix}}-M^{\operatorname {T} }{\begin{bmatrix}\log {\alpha _{1}}\\\vdots \\\log {\alpha _{\ell }}\end{bmatrix}},}which is anactionofRℓ{\displaystyle \mathbb {R} ^{\ell }}onRn{\displaystyle \mathbb {R} ^{n}}. We define a physical law to be an arbitrary functionf:(R+)n→R{\displaystyle f\colon (\mathbb {R} ^{+})^{n}\to \mathbb {R} }such that(q1,q2,…,qn){\displaystyle (q_{1},q_{2},\dots ,q_{n})}is a permissible set of values for the physical system whenf(q1,q2,…,qn)=0{\displaystyle f(q_{1},q_{2},\dots ,q_{n})=0}. We further requiref{\displaystyle f}to be invariant under this action. Hence it descends to a functionF:Rn/im⁡MT→R{\displaystyle F\colon \mathbb {R} ^{n}/\operatorname {im} {M^{\operatorname {T} }}\to \mathbb {R} }. 
All that remains is to exhibit an isomorphism between ℝ^n/im M^T and ℝ^p, the (log) space of pi groups (log π1, log π2, ..., log πp). We construct an n × p matrix K whose columns are a basis for ker M. It tells us how to embed ℝ^p into ℝ^n as the kernel of M. That is, we have an exact sequence 0 → ℝ^p → ℝ^n → ℝ^ℓ, where the first map is given by K and the second by M. Taking transposes yields another exact sequence ℝ^ℓ → ℝ^n → ℝ^p → 0, with maps M^T and K^T. The first isomorphism theorem produces the desired isomorphism, which sends the coset v + M^T ℝ^ℓ to K^T v. This corresponds to rewriting the tuple (log q1, log q2, ..., log qn) into the pi groups (log π1, log π2, ..., log πp) coming from the columns of K. The International System of Units defines seven base units, which are the ampere, kelvin, second, metre, kilogram, candela and mole. It is sometimes advantageous to introduce additional base units and techniques to refine the technique of dimensional analysis. (See orientational analysis and reference.[9]) This example is elementary but serves to demonstrate the procedure. Suppose a car is driving at 100 km/h; how long does it take to go 200 km? This question considers n = 3 dimensioned variables: distance d, time t, and speed v, and we are seeking some law of the form t = Duration(v, d). Any two of these variables are dimensionally independent, but the three taken together are not. Thus there is p = n − k = 3 − 2 = 1 dimensionless quantity.
The dimensional matrix is
$$M={\begin{bmatrix}1&0&1\\0&1&-1\end{bmatrix}}$$
in which the rows correspond to the basis dimensions $L$ and $T$, and the columns to the considered dimensions $L$, $T$, and $V$, where the latter stands for the speed dimension. The elements of the matrix correspond to the powers to which the respective dimensions are to be raised. For instance, the third column $(1,-1)$ states that $V=L^{0}T^{0}V^{1}$, represented by the column vector $\mathbf{v}=[0,0,1]$, is expressible in terms of the basis dimensions as $V=L^{1}T^{-1}=L/T$, since $M\mathbf{v}=[1,-1]$.

For a dimensionless constant $\pi =L^{a_{1}}T^{a_{2}}V^{a_{3}}$, we are looking for vectors $\mathbf{a}=[a_{1},a_{2},a_{3}]$ such that the matrix-vector product $M\mathbf{a}$ equals the zero vector $[0,0]$. In linear algebra, the set of vectors with this property is known as the kernel (or nullspace) of the dimensional matrix. In this particular case its kernel is one-dimensional. The dimensional matrix as written above is in reduced row echelon form, so one can read off a non-zero kernel vector to within a multiplicative constant:
$$\mathbf{a}={\begin{bmatrix}-1\\1\\1\end{bmatrix}}.$$

If the dimensional matrix were not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel.
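The kernel claim above can be verified mechanically. A minimal sketch in pure Python (matrix and candidate kernel vector hard-coded from the example):

```python
# Dimensional matrix for the car example: rows are the basis dimensions
# L and T; columns are the dimensions of distance, time, and speed.
M = [
    [1, 0,  1],   # powers of L
    [0, 1, -1],   # powers of T
]

# Candidate kernel vector a = [-1, 1, 1], read off from the reduced
# row echelon form of M.
a = [-1, 1, 1]

# M a should be the zero vector, confirming that
# pi = d^-1 t^1 v^1 = t v / d is dimensionless.
Ma = [sum(row[j] * a[j] for j in range(3)) for row in M]
print(Ma)  # [0, 0]
```

For a matrix that is not already in reduced form, one would first run Gauss–Jordan elimination (or use a library nullspace routine) before reading off the kernel.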
It follows that the dimensionless constant, replacing the dimensions by the corresponding dimensioned variables, may be written $\pi =d^{-1}t^{1}v^{1}=tv/d$. Since the kernel is only defined to within a multiplicative constant, the above dimensionless constant raised to any arbitrary power yields another (equivalent) dimensionless constant.

Dimensional analysis has thus provided a general equation relating the three physical variables: $F(\pi )=0$, or, letting $C$ denote a zero of the function $F$, $\pi =C$, which can be written in the desired form (which recall was $t=\operatorname{Duration}(v,d)$) as $t=C\,{\frac {d}{v}}$.

The actual relationship between the three variables is simply $d=vt$. In other words, in this case $F$ has one physically relevant root, and it is unity. The fact that only a single value of $C$ will do, and that it is equal to 1, is not revealed by the technique of dimensional analysis.

We wish to determine the period $T$ of small oscillations in a simple pendulum. It will be assumed that it is a function of the length $L$, the mass $M$, and the acceleration due to gravity on the surface of the Earth $g$, which has dimensions of length divided by time squared. The model is of the form $f(T,M,L,g)=0$. (Note that it is written as a relation, not as a function: $T$ is not written here as a function of $M$, $L$, and $g$.) Period, mass, and length are dimensionally independent, but acceleration can be expressed in terms of time and length, which means the four variables taken together are not dimensionally independent.
Thus we need only $p=n-k=4-3=1$ dimensionless parameter, denoted by $\pi$, and the model can be re-expressed as $F(\pi )=0$, where $\pi$ is given by $\pi =T^{a_{1}}M^{a_{2}}L^{a_{3}}g^{a_{4}}$ for some values of $a_{1},a_{2},a_{3},a_{4}$.

The dimensions of the dimensional quantities are $T=t$, $M=m$, $L=\ell$, $g=\ell /t^{2}$. The dimensional matrix is
$$\mathbf{M}={\begin{bmatrix}1&0&0&-2\\0&1&0&0\\0&0&1&1\end{bmatrix}}.$$
(The rows correspond to the dimensions $t$, $m$, and $\ell$, and the columns to the dimensional variables $T$, $M$, $L$, and $g$. For instance, the 4th column, $(-2,0,1)$, states that the $g$ variable has dimensions of $t^{-2}m^{0}\ell ^{1}$.)

We are looking for a kernel vector $a=[a_{1},a_{2},a_{3},a_{4}]$ such that the matrix product of $\mathbf{M}$ on $a$ yields the zero vector $[0,0,0]$. The dimensional matrix as written above is in reduced row echelon form, so one can read off a kernel vector to within a multiplicative constant:
$$a={\begin{bmatrix}2\\0\\-1\\1\end{bmatrix}}.$$
Were it not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant may be written
$$\pi =T^{2}M^{0}L^{-1}g^{1}=gT^{2}/L.$$
In fundamental terms, $\pi =(t)^{2}(m)^{0}(\ell )^{-1}\left(\ell /t^{2}\right)^{1}=1$, which is dimensionless.
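As with the car example, the pendulum kernel vector can be checked by direct multiplication; a short sketch with the matrix from the example:

```python
# Dimensional matrix for the pendulum: rows are the dimensions t, m, l;
# columns are the dimensional variables T, M, L, g.
M = [
    [1, 0, 0, -2],  # powers of t
    [0, 1, 0,  0],  # powers of m
    [0, 0, 1,  1],  # powers of l
]

a = [2, 0, -1, 1]   # kernel vector read off from the reduced form

# M a should be the zero vector, confirming that pi = g T^2 / L
# is dimensionless.
Ma = [sum(row[j] * a[j] for j in range(4)) for row in M]
print(Ma)  # [0, 0, 0]
```

The zero in the second slot of `a` is the statement that the mass exponent must vanish: nothing else carries the m dimension that could cancel it.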
Since the kernel is only defined to within a multiplicative constant, raising the above dimensionless constant to any arbitrary power yields another equivalent dimensionless constant. In this example, three of the four dimensional quantities are fundamental units, so the last one ($g$) must be a combination of the previous ones. Note that if $a_{2}$ (the coefficient of $M$) had been non-zero, there would be no way to cancel the $M$ value; therefore $a_{2}$ must be zero. Dimensional analysis has allowed us to conclude that the period of the pendulum is not a function of its mass $M$. (In the 3D space of powers of mass, time, and distance, we can say that the vector for mass is linearly independent from the vectors for the three other variables. Up to a scaling factor, ${\vec {g}}+2{\vec {T}}-{\vec {L}}$ is the only nontrivial way to construct a vector of a dimensionless parameter.)

The model can now be expressed as $F\left(gT^{2}/L\right)=0$. This implies that $gT^{2}/L=C_{i}$ for some zero $C_{i}$ of the function $F$. If there is only one zero, call it $C$, then $gT^{2}/L=C$. It requires more physical insight or an experiment to show that there is indeed only one zero and that the constant is in fact given by $C=4\pi ^{2}$.

For large oscillations of a pendulum, the analysis is complicated by an additional dimensionless parameter, the maximum swing angle. The above analysis is a good approximation as the angle approaches zero.

To demonstrate the application of the π theorem, consider the power consumption of a stirrer with a given shape.
The power, P, in dimensions [M·L²/T³], is a function of the density, ρ [M/L³], and the viscosity of the fluid to be stirred, μ [M/(L·T)], as well as the size of the stirrer given by its diameter, D [L], and the angular speed of the stirrer, n [1/T]. Therefore, we have a total of n = 5 variables representing our example. Those n = 5 variables are built up from k = 3 independent dimensions, e.g., length: L (SI units: m), time: T (s), and mass: M (kg).

According to the π-theorem, the n = 5 variables can be reduced by the k = 3 dimensions to form p = n − k = 5 − 3 = 2 independent dimensionless numbers. Usually, these quantities are chosen as $\mathrm{Re} ={\frac {\rho nD^{2}}{\mu }}$, commonly named the Reynolds number, which describes the fluid flow regime, and $N_{\mathrm{p}}={\frac {P}{\rho n^{3}D^{5}}}$, the power number, which is the dimensionless description of the stirrer.

Note that the two dimensionless quantities are not unique and depend on which of the n = 5 variables are chosen as the k = 3 dimensionally independent basis variables, which, in this example, appear in both dimensionless quantities. The Reynolds number and power number fall out of the above analysis if ρ, n, and D are chosen to be the basis variables. If, instead, μ, n, and D are selected, the Reynolds number is recovered while the second dimensionless quantity becomes $N_{\mathrm{Rep}}={\frac {P}{\mu D^{3}n^{2}}}$. We note that $N_{\mathrm{Rep}}$ is the product of the Reynolds number and the power number.

An example of dimensional analysis can be found for the case of the mechanics of a thin, solid, parallel-sided rotating disc. There are five variables involved, which reduce to two non-dimensional groups. The relationship between these can be determined by numerical experiment using, for example, the finite element method.[10]

The theorem has also been used in fields other than physics, for instance in sports science.[11]
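The identity between the two choices of second group can be checked numerically. A sketch with made-up, water-like values (all numbers below are illustrative assumptions, not data):

```python
# Illustrative values for a lab-scale stirrer in a water-like fluid.
rho = 1000.0   # density, kg/m^3 (assumed)
mu = 1.0e-3    # dynamic viscosity, Pa*s (assumed)
D = 0.1        # stirrer diameter, m (assumed)
n = 5.0        # rotational speed, 1/s (assumed)
P = 40.0       # power draw, W (assumed)

Re = rho * n * D**2 / mu        # Reynolds number
Np = P / (rho * n**3 * D**5)    # power number
N_Rep = P / (mu * D**3 * n**2)  # alternative second group

# N_Rep equals Re * Np: the density cancels between the two groups.
print(abs(N_Rep - Re * Np) < 1e-6 * N_Rep)  # True
```

This makes concrete the remark in the text that the pi groups are not unique: changing the basis variables simply multiplies one group by a power of another.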
https://en.wikipedia.org/wiki/Buckingham_%CF%80_theorem
Dimensionless numbers (or characteristic numbers) have an important role in analyzing the behavior of fluids and their flow as well as in other transport phenomena.[1] They include the Reynolds and the Mach numbers, which describe as ratios the relative magnitude of fluid and physical system characteristics, such as density, viscosity, speed of sound, and flow speed. To compare a real situation (e.g. an aircraft) with a small-scale model it is necessary to keep the important characteristic numbers the same. Names and formulation of these numbers were standardized in ISO 31-12 and in ISO 80000-11.

As a general example of how dimensionless numbers arise in fluid mechanics, the classical numbers in transport phenomena of mass, momentum, and energy are principally analyzed by the ratio of effective diffusivities in each transport mechanism. The six dimensionless numbers give the relative strengths of the different phenomena of inertia, viscosity, conductive heat transport, and diffusive mass transport. (In the table, the diagonals give common symbols for the quantities, and the given dimensionless number is the ratio of the left-column quantity over the top-row quantity; e.g. Re = inertial force/viscous force = vd/ν.) These same quantities may alternatively be expressed as ratios of characteristic time, length, or energy scales. Such forms are less commonly used in practice, but can provide insight into particular applications.

Droplet formation mostly depends on momentum, viscosity and surface tension.[2] In inkjet printing, for example, an ink with too high an Ohnesorge number would not jet properly, and an ink with too low an Ohnesorge number would be jetted with many satellite drops.[3] Not all of the quantity ratios are explicitly named, though each of the unnamed ratios could be expressed as a product of two other named dimensionless numbers. All numbers are dimensionless quantities. See the list of dimensionless quantities for an extensive compilation.
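The Ohnesorge number mentioned above is defined as Oh = μ/√(ρσL), relating viscous forces to inertial and surface-tension forces. A minimal sketch with illustrative values loosely modeled on a water-based ink in a 50-micron nozzle (all values are assumptions for demonstration):

```python
import math

mu = 3.0e-3    # dynamic viscosity, Pa*s (assumed)
rho = 1000.0   # density, kg/m^3 (assumed)
sigma = 0.03   # surface tension, N/m (assumed)
L = 50e-6      # characteristic length: nozzle diameter, m (assumed)

# Ohnesorge number: viscous vs. inertial + surface-tension forces.
Oh = mu / math.sqrt(rho * sigma * L)
print(round(Oh, 3))
```

Because Oh is dimensionless, the same value results whatever consistent unit system is used for the inputs, which is what makes it useful for comparing inks across nozzle scales.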
Certain dimensionless quantities of some importance to fluid mechanics are given below:
https://en.wikipedia.org/wiki/Dimensionless_numbers_in_fluid_mechanics
A Fermi problem (or Fermi question, Fermi quiz), also known as an order-of-magnitude problem, is an estimation problem in physics or engineering education, designed to teach dimensional analysis or approximation of extreme scientific calculations. Fermi problems are usually back-of-the-envelope calculations. Fermi problems typically involve making justified guesses about quantities and their variance or lower and upper bounds. In some cases, order-of-magnitude estimates can also be derived using dimensional analysis.

A Fermi estimate (or order-of-magnitude estimate, order estimation) is an estimate of an extreme scientific calculation. The estimation technique is named after physicist Enrico Fermi, as he was known for his ability to make good approximate calculations with little or no actual data. An example is Enrico Fermi's estimate of the strength of the atomic bomb that detonated at the Trinity test, based on the distance traveled by pieces of paper he dropped from his hand during the blast. Fermi's estimate of 10 kilotons of TNT was well within an order of magnitude of the now-accepted value of 21 kilotons.[1][2][3]

Fermi estimates generally work because the estimations of the individual terms are often close to correct, and overestimates and underestimates help cancel each other out. That is, if there is no consistent bias, a Fermi calculation that involves the multiplication of several estimated factors (such as the number of piano tuners in Chicago) will probably be more accurate than might be first supposed. In detail, multiplying estimates corresponds to adding their logarithms; thus one obtains a sort of Wiener process or random walk on the logarithmic scale, which diffuses as ${\sqrt {n}}$ (in number of terms n). In discrete terms, the number of overestimates minus underestimates will have a binomial distribution.
In continuous terms, if one makes a Fermi estimate of n steps, with standard deviation σ units on the log scale from the actual value, then the overall estimate will have standard deviation $\sigma {\sqrt {n}}$, since the standard deviation of a sum scales as ${\sqrt {n}}$ in the number of summands. For instance, if one makes a 9-step Fermi estimate, at each step overestimating or underestimating the correct number by a factor of 2 (that is, with a standard deviation of a factor of 2), then after 9 steps the standard error will have grown by a logarithmic factor of ${\sqrt {9}}=3$, so 2³ = 8. Thus one will expect to be within 1⁄8 to 8 times the correct value – within an order of magnitude, and much less than the worst case of erring by a factor of 2⁹ = 512 (about 2.71 orders of magnitude). If one has a shorter chain or estimates more accurately, the overall estimate will be correspondingly better.

Fermi questions are often extreme in nature, and cannot usually be solved using common mathematical or scientific information. Example questions given by the official Fermi Competition:

"If the mass of one teaspoon of water could be converted entirely into energy in the form of heat, what volume of water, initially at room temperature, could it bring to a boil? (litres)."

"How much does the Thames River heat up in going over the Fanshawe Dam? (Celsius degrees)."

"What is the mass of all the automobiles scrapped in North America this month? (kilograms)."[4][5]

Possibly the most famous order-of-magnitude problem is the Fermi paradox, which considers the odds of a significant number of intelligent civilizations existing in the galaxy, and ponders the apparent contradiction of human civilization never having encountered any.
A well-known attempt to ponder this paradox through the lens of a Fermi estimate is the Drake equation, which seeks to estimate the number of such civilizations present in the galaxy.[6]

Scientists often look for Fermi estimates of the answer to a problem before turning to more sophisticated methods to calculate a precise answer. This provides a useful check on the results. While the estimate is almost certainly incorrect, it is also a simple calculation that allows for easy error checking, and for finding faulty assumptions if the figure produced is far beyond what we might reasonably expect. By contrast, precise calculations can be extremely complex, but with the expectation that the answer they produce is correct. The far larger number of factors and operations involved can obscure a very significant error, either in the mathematical process or in the assumptions the equation is based on, but the result may still be assumed to be right because it has been derived from a precise formula that is expected to yield good results. Without a reasonable frame of reference to work from, it is seldom clear if a result is acceptably precise or is many orders of magnitude (tens or hundreds of times) too big or too small. The Fermi estimate gives a quick, simple way to obtain this frame of reference for what might reasonably be expected to be the answer.

As long as the initial assumptions in the estimate are reasonable quantities, the result obtained will give an answer within the same scale as the correct result, and if not, gives a base for understanding why this is the case. For example, suppose a person was asked to determine the number of piano tuners in Chicago. If their initial estimate told them there should be a hundred or so, but the precise answer tells them there are many thousands, then they know they need to find out why there is this divergence from the expected result.
First looking for errors, then for factors the estimation did not take account of – does Chicago have a number of music schools or other places with a disproportionately high ratio of pianos to people? Whether close or very far from the observed results, the context the estimation provides gives useful information both about the process of calculation and about the assumptions that have been used to look at problems.

Fermi estimates are also useful in approaching problems where the optimal choice of calculation method depends on the expected size of the answer. For instance, a Fermi estimate might indicate whether the internal stresses of a structure are low enough that it can be accurately described by linear elasticity, or whether the estimate already bears a significant relationship in scale to some other value – for example, whether a structure will be over-engineered to withstand loads several times greater than the estimate.[citation needed]

Although Fermi calculations are often not accurate, as there may be many problems with their assumptions, this sort of analysis does inform one what to look for to get a better answer. For the above example, one might try to find a better estimate of the number of pianos tuned by a piano tuner in a typical day, or look up an accurate number for the population of Chicago. It also gives a rough estimate that may be good enough for some purposes: if a person wants to start a store in Chicago that sells piano tuning equipment, and calculates that they need 10,000 potential customers to stay in business, they can reasonably assume that the above estimate is far enough below 10,000 that they should consider a different business plan (and, with a little more work, they could compute a rough upper bound on the number of piano tuners by considering the most extreme reasonable values that could appear in each of their assumptions).
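The classic piano-tuner estimate discussed above can be sketched in a few lines. Every number below is an assumed round figure chosen for illustration, not data:

```python
# A sketch of the classic piano-tuner Fermi estimate for Chicago.
chicago_population = 9_000_000     # metro area, order of magnitude (assumed)
people_per_household = 2           # assumed
households_with_piano = 1 / 20     # assumed fraction
tunings_per_piano_per_year = 1     # assumed
tunings_per_day_per_tuner = 4      # assumed, including travel time
working_days_per_year = 250        # assumed

pianos = chicago_population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_day_per_tuner * working_days_per_year

tuners = tunings_needed / tuner_capacity
print(round(tuners))  # on the order of a couple of hundred
```

The point is not the exact output but that the answer lands in the hundreds rather than the tens or the tens of thousands, which is the frame of reference the text describes.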
A number of books contain many examples of Fermi problems with solutions, and there are or have been a number of university-level courses devoted to estimation and the solution of Fermi problems. The materials for these courses are a good source for additional Fermi problem examples and material about solution strategies.
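The √n error growth described earlier (the 9-step, factor-of-2 example) can be sketched numerically:

```python
import math

# Error growth in a multi-step Fermi estimate: with n independent steps,
# each off by a factor of 2, the standard deviation of the product grows
# as sqrt(n) on the log scale, not as n.
n = 9
per_step_factor = 2

log_sd = math.sqrt(n) * math.log(per_step_factor)  # sd on the log scale
typical_factor = math.exp(log_sd)                  # exp(3 ln 2) = 2**3 = 8
worst_case = per_step_factor ** n                  # 2**9 = 512

print(round(typical_factor), worst_case)  # 8 512
```

The typical error (a factor of 8, one order of magnitude) is far smaller than the worst case of all nine errors lining up in the same direction (a factor of 512).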
https://en.wikipedia.org/wiki/Fermi_estimate
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae.

Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless.

Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation.

The concept of physical dimension, or quantity dimension, and of dimensional analysis was introduced by Joseph Fourier in 1822.[1]: 42

The Buckingham π theorem describes how every physically meaningful equation involving n variables can be equivalently rewritten as an equation of n − m dimensionless parameters, where m is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables.
A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis and involves scaling quantities by characteristic units of a system or physical constants of nature.[1]: 43 This may give insight into the fundamental properties of the system, as illustrated in the examples below.

The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary".

There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols: time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J). The symbols are by convention usually written in roman sans serif typeface.[2] Mathematically, the dimension of the quantity Q is given by
$$\dim Q = \mathsf{T}^{a}\,\mathsf{L}^{b}\,\mathsf{M}^{c}\,\mathsf{I}^{d}\,\mathsf{\Theta}^{e}\,\mathsf{N}^{f}\,\mathsf{J}^{g},$$
where a, b, c, d, e, f, g are the dimensional exponents. Other physical quantities could be defined as the base quantities, as long as they form a basis – for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since Q = TI.

A quantity that has only b ≠ 0 (with all other exponents zero) is known as a geometric quantity. A quantity that has only both a ≠ 0 and b ≠ 0 is known as a kinematic quantity. A quantity that has all of a ≠ 0, b ≠ 0, and c ≠ 0 is known as a dynamic quantity.[3] A quantity that has all exponents null is said to have dimension one.[2]

The unit chosen to express a physical quantity and its dimension are related, but not identical concepts.
The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, 1 in = 2.54 cm; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity. There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity,[4] although this does not invalidate the usefulness of dimensional analysis.

As examples, the dimension of the physical quantity speed v is T⁻¹L; of acceleration a, T⁻²L; of force F, T⁻²LM; of pressure P, T⁻²L⁻¹M; of energy E, T⁻²L²M; of power P, T⁻³L²M; of electric charge Q, TI; of voltage V, T⁻³L²MI⁻¹; and of capacitance C, T⁴L⁻²M⁻¹I².

In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh. The method involves writing the dependent variable as a product of powers of the independent variables and determining the exponents by requiring dimensional homogeneity. As a drawback, Rayleigh's method does not provide any information regarding the number of dimensionless groups to be obtained as a result of the dimensional analysis.

Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number – a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g.
60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h. Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m² for square metres), or combinations thereof.

A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed.[5] For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m³), thus they are considered derived or compound units.

Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s⁻²). The newton is defined as 1 N = 1 kg⋅m⋅s⁻².

Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since 1% = 1/100.

Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to; for example, if position x has dimension L, then velocity dx/dt has dimension T⁻¹L. Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator.

In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year).

In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions.
For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency) – but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance), and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged.

The most basic rule of dimensional analysis is that of dimensional homogeneity.[6] However, the dimensions form an abelian group under multiplication, so quantities of different dimensions may freely be multiplied or divided, even though they cannot be added, subtracted, or compared. For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, these being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h.

The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. For example, if m_man, m_rat and L_man denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression m_man + m_rat is meaningful, but the heterogeneous expression m_man + L_man is meaningless. However, m_man/L_man² is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions.

Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension T⁻²L²M, they are fundamentally different physical quantities.
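The abelian-group structure of dimensions can be sketched directly: represent a dimension as a tuple of exponents over a basis, so multiplication of quantities becomes addition of exponent vectors. A minimal sketch (the helper names are ours, not standard):

```python
# Represent the dimension of a quantity as a tuple of exponents over
# the basis (T, L, M). Multiplying quantities adds exponent vectors,
# dividing subtracts them: the dimensions form an abelian group.
def dim_mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def dim_div(a, b):
    return tuple(x - y for x, y in zip(a, b))

T, L, M = (1, 0, 0), (0, 1, 0), (0, 0, 1)

force = (-2, 1, 1)            # T^-2 L M
energy = dim_mul(force, L)    # force times distance: T^-2 L^2 M
torque = dim_mul(force, L)    # force times moment arm: same exponents
print(torque == energy)       # True: same dimension, different quantities

# m_man + L_man is meaningless (unequal dimensions),
# but m_man / L_man^2 is fine:
mass_per_area = dim_div(M, dim_mul(L, L))
print(mass_per_area)          # (0, -2, 1)
```

Addition and comparison, by contrast, would require an equality check on the two exponent tuples first, which is exactly the dimensional-homogeneity rule.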
To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use 1 yard = 0.9144 m to convert 35 yards to 32.004 m.

A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables.[7] For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle implies that the conversion factor between two units that measure the same dimension must be multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres.

In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, 5 bar × 100 kPa / 1 bar = 500 kPa, because 5 × 100 / 1 = 500 and bar/bar cancels out, so 5 bar = 500 kPa.

Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well. A simple application of dimensional analysis to mathematics is in computing the form of the volume of an n-ball (the solid ball in n dimensions), or the area of its surface, the n-sphere: being an n-dimensional figure, the volume scales as xⁿ, while the surface area, being (n − 1)-dimensional, scales as xⁿ⁻¹.
Thus the volume of the n-ball in terms of the radius is C_n rⁿ, for some constant C_n. Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone.

In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios.

In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships.[8] In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include the Reynolds number, the Froude number, the Euler number, and the Mach number.

The origins of dimensional analysis have been disputed by historians.[9][10] The first written application of dimensional analysis has been credited to François Daviet, a student of Joseph-Louis Lagrange, in a 1799 article at the Turin Academy of Science.[10] This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was later formalized in the Buckingham π theorem. Siméon Poisson also treated the same problem of the parallelogram law as Daviet, in his treatises of 1811 and 1833 (vol. I, p. 39).[11] In the second edition of 1833, Poisson explicitly introduces the term dimension instead of Daviet's homogeneity.

In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions,[12] based on the idea that physical laws like F = ma should be independent of the units employed to measure the physical variables.
James Clerk Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived.[13] Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant G is taken as unity, thereby defining M = T^−2 L^3.[14] By assuming a form of Coulomb's law in which the Coulomb constant k_e is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were Q = T^−1 L^(3/2) M^(1/2),[15] which, after substituting his M = T^−2 L^3 equation for mass, results in charge having the same dimensions as mass, viz. Q = T^−2 L^3. Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue.[16] Rayleigh first published the technique in his 1877 book The Theory of Sound.[17] The original meaning of the word dimension, in Fourier's Théorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time.[18] This was slightly changed by Maxwell, who said the dimensions of acceleration are T^−2 L, instead of just the exponents.[19] What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g. The four quantities have the following dimensions: T [T]; m [M]; k [M/T^2]; and g [L/T^2].
From these we can form only one dimensionless product of powers of our chosen variables, G_1 = T^2 k/m [T^2 · (M/T^2) / M = 1], and putting G_1 = C for some dimensionless constant C gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well. The variable g does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines g with k, m, and T, because g is the only quantity that involves the dimension L. This implies that in this problem g is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of g: it is the same on the Earth or the Moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: T = κ√(m/k), for some dimensionless constant κ (equal to √C from the original dimensionless equation). When faced with a case where dimensional analysis rejects a variable (g, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here. When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as κ.
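The exponent bookkeeping for the spring problem can be checked mechanically. In this sketch (helper names are illustrative), each variable's dimension is a (T, L, M) exponent tuple, and a product of powers is dimensionless exactly when the exponents sum to zero:

```python
# Dimensions of the spring-problem variables as (T, L, M) exponent tuples:
# T is a time, m a mass, k has M/T^2, g has L/T^2.
DIMS = {"T": (1, 0, 0), "m": (0, 0, 1), "k": (-2, 0, 1), "g": (-2, 1, 0)}

def dim_of_product(powers):
    """Dimension of prod(var**p for var, p in powers) as a (T, L, M) tuple."""
    total = [0, 0, 0]
    for var, p in powers.items():
        for i, e in enumerate(DIMS[var]):
            total[i] += p * e
    return tuple(total)
```

Here `dim_of_product({"T": 2, "k": 1, "m": -1})` confirms G_1 = T^2 k/m is dimensionless, while any product with a nonzero power of g keeps a leftover L exponent, since g is the only variable carrying L.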
Consider the case of a vibrating wire of length ℓ (L) vibrating with an amplitude A (L). The wire has a linear density ρ (M/L) and is under tension s (L M/T^2), and we want to know the energy E (L^2 M/T^2) in the wire. Let π_1 and π_2 be two dimensionless products of powers of the variables chosen, given by The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation where F is some unknown function, or, equivalently, as where f is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical analysis, we might proceed to experiments to discover the form of the unknown function f. But our experiments are simpler than in the absence of dimensional analysis. We'd perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to ℓ, and so infer that E = ℓs. The power of dimensional analysis as an aid to experiment and to forming hypotheses becomes evident. The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated, where the set of variables involved is not apparent and the underlying equations hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood.
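The claim that the energy is proportional to the first power of the tension can be sanity-checked by exponent arithmetic. This sketch (illustrative names, dimensions as (T, L, M) exponent tuples) verifies that ℓ·s carries exactly the dimension of energy, so E/(ℓs) is dimensionless:

```python
# (T, L, M) exponent tuples for the vibrating-wire variables.
TENSION = (-2, 1, 1)   # s has dimension L M / T^2
LENGTH = (0, 1, 0)     # l
ENERGY = (-2, 2, 1)    # E has dimension L^2 M / T^2

def dim_mul(a, b):
    """Multiplying quantities adds their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

def dim_div(a, b):
    """Dividing quantities subtracts their dimension exponents."""
    return tuple(x - y for x, y in zip(a, b))
```

`dim_div(ENERGY, dim_mul(LENGTH, TENSION))` returns (0, 0, 0), confirming the dimensionless group.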
In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis. Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness t (L) and radius R (L). The disc has a density ρ (M/L^3), rotates at an angular velocity ω (T^−1), and this leads to a stress S (T^−2 L^−1 M) in the material. There is a theoretical linear elastic solution, given by Lamé, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius, the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined through consideration of three-dimensional elasticity, and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following (5 − 3 = 2) non-dimensional groups: Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot, and this can be used as a design/assessment chart for rotating discs.[20] The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: the identity is written as 1; L^0 = 1, and the inverse of L is 1/L or L^−1. L raised to any integer power p is a member of the group, having an inverse of L^−p or 1/L^p. The operation of the group is multiplication, with the usual rules for handling exponents (L^n × L^m = L^(n+m)).
Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second). An abelian group is equivalent to a module over the integers, with the dimensional symbol T^i L^j M^k corresponding to the tuple (i, j, k). When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one another, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module. A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa). The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, (0, 0, 0). In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like V_L^(1/2).[21] However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions.[22] One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions M and L, one has the vector spaces V_M and V_L, and can define V_ML := V_M ⊗ V_L as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions.[23] This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar. The set of units of the physical quantities involved in a problem corresponds to a set of vectors (or a matrix).
The nullity describes some number (e.g., m) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, {π_1, ..., π_m}. (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity X can be expressed in the general form Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form Knowing this restriction can be a powerful tool for obtaining new insight into the system. The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space and be linearly independent. For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = L M/T^2], L, M, while the latter can be expressed as [T = (L M/F)^(1/2)], L, M. On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons: Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols.
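The change of basis between T, L, M and F, L, M can be made concrete. In this sketch (function name is illustrative), a dimension given as (T, L, M) exponents is re-expressed in the F, L, M basis using F = T^−2 L M; since only F carries a T exponent, the system solves by simple back-substitution:

```python
from fractions import Fraction

# Exponents of (T, L, M) for each assumed base dimension in the new basis.
F_DIM = (-2, 1, 1)   # force = L M / T^2

def in_flm(tlm):
    """Re-express a (T, L, M) dimension as (F, L, M) exponents (a, b, c)
    solving a*F_DIM + b*(0,1,0) + c*(0,0,1) = tlm."""
    t, l, m = tlm
    a = Fraction(t, F_DIM[0])   # only F contributes a T exponent
    b = Fraction(l) - a * F_DIM[1]
    c = Fraction(m) - a * F_DIM[2]
    return (a, b, c)
```

For example, energy (T^−2 L^2 M) becomes F^1 L^1, i.e. work = force × distance, and pressure (T^−2 L^−1 M) becomes F L^−2, i.e. force per area.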
In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ 6.02×10^23 mol^−1) is also defined as a base dimension, N. In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma, electron and critical densities in addition to the electromagnetic vector potential. The choice of the dimensions, or even the number of dimensions, to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communication are common and necessary features. Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor.[24][25] This excludes polynomials of more than one term or transcendental functions not of that form. Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the squares of certain dimensioned quantities are dimensionless.)
While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity log(a/b) = log a − log b, where the logarithm is taken in any base, holds for dimensionless numbers a and b, but it does not hold if a and b are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not.[26] Similarly, while one can evaluate monomials (x^n) of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for x^2, the expression (3 m)^2 = 9 m^2 makes sense (as an area), while for x^2 + x, the expression (3 m)^2 + 3 m = 9 m^2 + 3 m does not make sense. However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example, This is the height to which an object rises in time t if the acceleration of gravity is 9.8 metres per second per second and the initial upward speed is 500 metres per second. It is not necessary for t to be in seconds. For example, suppose t = 0.01 minutes. Then the first term would be The value of a dimensional physical quantity Z is written as the product of a unit [Z] within the dimension and a dimensionless numerical value or numerical factor, n.[27] When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed: The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing.
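The height example can be made concrete: because the coefficients g and v0 carry units, the mixed-degree polynomial is meaningful for t in any time unit, provided t is converted consistently before the arithmetic. This sketch uses the numbers from the text (the helper and its unit handling are illustrative):

```python
G = 9.8     # acceleration of gravity, m/s^2
V0 = 500.0  # initial upward speed, m/s

def height_m(t, unit="s"):
    """Height v0*t - (1/2)*g*t^2 in metres. The coefficients carry units of
    m/s and m/s^2, so t may be supplied in seconds or minutes; it is converted
    to seconds before evaluating the polynomial."""
    t_s = t * 60.0 if unit == "min" else t
    return V0 * t_s - 0.5 * G * t_s ** 2
```

Evaluating at t = 0.6 s and at t = 0.01 minutes gives the same height, as the dimensional argument requires.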
Then, when adding two quantities of like dimension but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted. Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units. A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities.[28] In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical value is referenced to a specific unit. For example, a quantity equation for displacement d as speed s multiplied by time difference t would be: for s = 5 m/s, where t and d may be expressed in any units, converted if necessary. In contrast, a corresponding numerical-value equation would be: where T is the numeric value of t when expressed in seconds and D is the numeric value of d when expressed in metres. Generally, the use of numerical-value equations is discouraged.[28] The dimensionless constants that arise in the results obtained, such as the C in the Poiseuille's law problem and the κ in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc.
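The contrast between the two kinds of equation can be sketched as follows (illustrative function names; s = 5 m/s from the example). The numerical-value form silently assumes seconds and metres; the quantity-equation style accepts any time unit once a conversion factor brings it to seconds:

```python
S = 5.0  # speed from the example, m/s

def displacement_numeric(t_seconds):
    """Numerical-value equation D = 5 T: valid only for t in seconds,
    with the result implicitly in metres."""
    return S * t_seconds

def displacement_quantity(t, seconds_per_unit=1.0):
    """Quantity-equation style: t in any time unit, converted to seconds by a
    conversion factor (a ratio equal to the dimensionless 1)."""
    return S * (t * seconds_per_unit)
```

Two minutes and 120 seconds then give the same displacement, which is exactly the unit-independence that defines a quantity equation.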
Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless; e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, χ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be ~ 1/χ^d, where d is the dimension of the lattice. It has been argued by some physicists, e.g., Michael J. Duff,[4][29] that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to length, time and mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants c, ħ, and G in the fundamental equations of physics must then be seen as mere conversion factors to convert mass, time and length into each other. Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants ħ, c, and G (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit c → ∞, ħ → 0 and G → 0. In problems involving a gravitational field the latter limit should be taken such that the field stays finite.
Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force.[30][31][32] Energy has dimension T^−2 L^2 M, momentum T^−1 L M, and force T^−2 L M. Dimensional correctness as part of type checking has been studied since 1977.[33] Implementations for Ada[34] and C++[35] were described in 1985 and 1988. Kennedy's 1996 thesis describes an implementation in Standard ML,[36] and later in F#.[37] There are implementations for Haskell,[38] OCaml,[39] Rust,[40] and Python,[41] and a code checker for Fortran.[42][43] Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices.[44][45] McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure.[46] Mathematica 13.2 has a function for transformations with quantities named NondimensionalizationTransform that applies a nondimensionalization transform to an equation.[47] Mathematica also has a function to find the dimensions of a unit such as 1 J, named UnitDimensions.[48] Mathematica also has a function named DimensionalCombinations that will find dimensionally equivalent combinations of a subset of physical quantities.[49] Mathematica can also factor out certain dimensions with UnityDimensions by specifying an argument to the function.[50] For example, UnityDimensions can be used to factor out angles.[50] In addition to UnitDimensions, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions.[51] Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. In mathematics scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin.
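The unit-checking idea behind the implementations cited above can be sketched in a few lines of Python (a toy, not any of the cited libraries): each value carries a (T, L, M) exponent tuple, multiplication adds exponents, and addition demands matching dimensions, turning a unit error into a type error at run time.

```python
class Q:
    """Toy dimension-checked quantity; dim is a (T, L, M) exponent tuple."""
    def __init__(self, value, dim):
        self.value, self.dim = value, tuple(dim)

    def __mul__(self, other):
        # Multiplying quantities adds their dimension exponents.
        return Q(self.value * other.value,
                 tuple(a + b for a, b in zip(self.dim, other.dim)))

    def __add__(self, other):
        # Only like-dimensioned quantities may be added.
        if self.dim != other.dim:
            raise TypeError(f"dimension mismatch: {self.dim} vs {other.dim}")
        return Q(self.value + other.value, self.dim)

speed = Q(3.0, (-1, 1, 0))    # dimension L/T
duration = Q(2.0, (1, 0, 0))  # dimension T
distance = speed * duration   # dimension (0, 1, 0), i.e. L
```

Attempting `speed + duration` raises a TypeError, which is the whole point: the static-typing systems cited above catch the same class of error at compile time.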
While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change). Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable: This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement). Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a vector unit only requires a unit of measurement. Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis. This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero, where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated. For temperature differences, (here °R refers to the Rankine scale, not the Réaumur scale). Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K (although the ratio is not a constant value).
But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C. Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference. This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis. Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank m of the dimensional matrix.[52] He introduced two approaches: As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component v_y and a horizontal velocity component v_x, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then R, the distance travelled, with dimension L; v_x and v_y, both dimensioned as T^−1 L; and g, the downward acceleration of gravity, with dimension T^−2 L. With these four quantities, we may conclude that the equation for the range R may be written: R = C v_x^a v_y^b g^c. Or dimensionally, L = (T^−1 L)^(a+b) (T^−2 L)^c, from which we may deduce that a + b + c = 1 and a + b + 2c = 0, which leaves one exponent undetermined.
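The undirected system above (L: a + b + c = 1; T: a + b + 2c = 0) can be solved symbolically to exhibit the one-parameter family of candidate solutions. Subtracting the first equation from the second forces c = −1, leaving a + b = 2 with one exponent free; this small sketch (illustrative names) makes that explicit:

```python
def exponents_given_b(b):
    """Solve a + b + c = 1 and a + b + 2c = 0 for a and c, given b.
    Subtracting the equations gives c = -1; then a = 2 - b."""
    c = -1
    a = 2 - b
    return a, b, c

def is_homogeneous(a, b, c):
    """Check both homogeneity equations for a candidate exponent triple."""
    return a + b + c == 1 and a + b + 2 * c == 0
```

Every choice of b satisfies homogeneity, which is exactly why plain dimensional analysis cannot finish this problem; Huntley's directed lengths, discussed next in the text, select the member with a = 1, b = 1, c = −1.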
This is to be expected, since we have two fundamental dimensions T and L, and four parameters, with one equation. However, if we use directed length dimensions, then v_x will be dimensioned as T^−1 L_x, v_y as T^−1 L_y, R as L_x and g as T^−2 L_y. The dimensional equation becomes: L_x = (T^−1 L_x)^a (T^−1 L_y)^b (T^−2 L_y)^c, and we may solve completely as a = 1, b = 1 and c = −1. The increase in deductive power gained by the use of directed length dimensions is apparent. Huntley's concept of directed length dimensions, however, has some serious limitations: It also is often quite difficult to assign the L, L_x, L_y, L_z symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: it is unclear for what parts of the problem the notion of "symmetry" is being invoked. Is it the symmetry of the physical body that forces are acting upon, or of the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries? Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems. In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass) and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity only proportional to inertial mass, while not implicating inertial properties.
No further restrictions are added to its definition. For example, consider the derivation of Poiseuille's law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables: There are three fundamental variables, so the above five equations will yield two independent dimensionless variables: If we distinguish between inertial mass with dimension M_i and quantity of matter with dimension M_m, then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written: where now only C is an undetermined constant (found to be equal to π/8 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law. Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable. Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested[53]). As an example, consider again the projectile problem in which a point mass is launched from the origin (x, y) = (0, 0) at a speed v and angle θ above the x-axis, with the force of gravity directed along the negative y-axis.
It is desired to find the range R, at which point the mass returns to the x-axis. Conventional analysis will yield the dimensionless variable π = R g/v^2, but offers no insight into the relationship between R and θ. Siano has suggested that the directed dimensions of Huntley be replaced by using orientational symbols 1_x, 1_y, 1_z to denote vector directions, and an orientationless symbol 1_0.[54] Thus, Huntley's L_x becomes L 1_x, with L specifying the dimension of length and 1_x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1_i^−1 = 1_i, the following multiplication table for the orientation symbols results: The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1_z. For angles, consider an angle θ that lies in the z-plane. Form a right triangle in the z-plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1_x and the side opposite has an orientation 1_y. Since (using ~ to indicate orientational equivalence) tan(θ) = θ + ... ~ 1_y/1_x, we conclude that an angle in the xy-plane must have an orientation 1_y/1_x = 1_z, which is not unreasonable. Analogous reasoning forces the conclusion that sin(θ) has orientation 1_z while cos(θ) has orientation 1_0. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form a cos(θ) + b sin(θ), where a and b are real scalars.
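The multiplication table for the orientational symbols is just the Klein four-group, which can be encoded compactly (an illustrative sketch) as bitwise XOR on two-bit masks:

```python
# Siano's orientational symbols 1_0, 1_x, 1_y, 1_z encoded as two-bit masks;
# the Klein four-group product then becomes bitwise XOR, and every element
# is its own inverse, matching the requirement 1_i^-1 = 1_i.
O0, OX, OY, OZ = 0b00, 0b01, 0b10, 0b11

def omul(a, b):
    """Product of two orientational symbols."""
    return a ^ b
```

For instance, `omul(OX, OY)` gives OZ, reproducing the conclusion in the text that an angle in the xy-plane carries orientation 1_y/1_x = 1_y·1_x = 1_z.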
An expression such as sin(θ + π/2) = cos(θ) is not dimensionally inconsistent, since it is a special case of the sum-of-angles formula and should properly be written: which for a = θ and b = π/2 yields sin(θ 1_z + [π/2] 1_z) = 1_z cos(θ 1_z). Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is 1_0. The assignment of orientational symbols to physical quantities, and the requirement that physical equations be orientationally homogeneous, can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd. As an example, for the projectile problem, using orientational symbols, θ, being in the xy-plane, will thus have dimension 1_z, and the range of the projectile R will be of the form: R = C g^a v^b θ^c. Dimensional homogeneity will now correctly yield a = −1 and b = 2, and orientational homogeneity requires that 1_x/(1_y^a 1_z^c) = 1_z^(c+1) = 1. In other words, c must be an odd integer.
In fact, the required function of theta will be sin(θ)cos(θ), which is a series consisting of odd powers of θ. It is seen that the Taylor series of sin(θ) and cos(θ) are orientationally homogeneous using the above multiplication table, while expressions like cos(θ) + sin(θ) and exp(θ) are not, and are (correctly) deemed unphysical. Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis.
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae. Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and cannot be directly compared to each other, no matter what units they are expressed in; e.g., metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless. Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation. The concept of physical dimension, or quantity dimension, and of dimensional analysis was introduced by Joseph Fourier in 1822.[1]: 42 The Buckingham π theorem describes how every physically meaningful equation involving n variables can be equivalently rewritten as an equation of n − m dimensionless parameters, where m is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables.
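The Buckingham π theorem's dimensionless parameters can be computed mechanically: they correspond to the nullspace of the dimensional matrix. A minimal sketch (the helper `nullspace` is hypothetical, not from the source), applied to the projectile example where R, g, v combine into π = Rg/v²:

```python
from fractions import Fraction

def nullspace(rows):
    """Rational nullspace basis of a matrix given as lists of ints,
    via Gauss-Jordan elimination over exact fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    ncols = len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(ncols) if c not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * ncols
        v[f] = Fraction(1)
        for i, c in enumerate(pivots):
            v[c] = -m[i][f]
        basis.append(v)
    return basis

# Projectile example: columns are R, g, v; rows are the T and L exponents.
# [R] = L, [g] = L/T^2, [v] = L/T.
dim_matrix = [[0, -2, -1],   # time exponents
              [1,  1,  1]]   # length exponents
(pi_group,) = nullspace(dim_matrix)
# normalise so the exponent of R is +1: exponents (1, 1, -2), i.e. pi = R g / v^2
pi_group = [x / pi_group[0] for x in pi_group]
assert pi_group == [1, 1, -2]
```

With 3 variables and a dimensional matrix of rank 2, the theorem predicts 3 − 2 = 1 dimensionless group, which is exactly the one basis vector the nullspace yields.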
A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis and involves scaling quantities by characteristic units of a system or physical constants of nature.[1]: 43 This may give insight into the fundamental properties of the system, as illustrated in the examples below. The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary". There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols: time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J). The symbols are by convention usually written in roman sans-serif typeface.[2] Mathematically, the dimension of the quantity Q is given by dim Q = Tᵃ Lᵇ Mᶜ Iᵈ Θᵉ Nᶠ Jᵍ, where a, b, c, d, e, f, g are the dimensional exponents. Other physical quantities could be defined as the base quantities, as long as they form a basis; for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since Q = TI. A quantity that has only b ≠ 0 (with all other exponents zero) is known as a geometric quantity. A quantity that has only both a ≠ 0 and b ≠ 0 is known as a kinematic quantity. A quantity that has all of a ≠ 0, b ≠ 0, and c ≠ 0 (and only those) is known as a dynamic quantity.[3] A quantity that has all exponents null is said to have dimension one.[2] The unit chosen to express a physical quantity and its dimension are related, but not identical concepts.
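The exponent representation of dimensions lends itself to a simple data structure. A minimal sketch (names are illustrative, not from the source): a dimension as a dict of base-symbol exponents, with multiplication adding exponents and powers scaling them, reproducing several of the derived dimensions discussed in this article.

```python
# Sketch (illustrative): a dimension as a dict of exponents over the base
# dimensions T, L, M, I, following dim Q = T^a L^b M^c I^d ...

def dmul(*dims):
    """Product of dimensions: exponents add; zero exponents are dropped."""
    out = {}
    for d in dims:
        for k, e in d.items():
            out[k] = out.get(k, 0) + e
    return {k: e for k, e in out.items() if e != 0}

def dpow(d, p):
    """Integer power of a dimension: exponents scale."""
    return {k: e * p for k, e in d.items()}

L = {"L": 1}; T = {"T": 1}; M = {"M": 1}; I = {"I": 1}

speed        = dmul(L, dpow(T, -1))        # T^-1 L
acceleration = dmul(speed, dpow(T, -1))    # T^-2 L
force        = dmul(M, acceleration)       # T^-2 L M
pressure     = dmul(force, dpow(L, -2))    # T^-2 L^-1 M
energy       = dmul(force, L)              # T^-2 L^2 M
charge       = dmul(T, I)                  # T I  (Q = TI, as in the article)

assert force == {"T": -2, "L": 1, "M": 1}
assert energy == {"T": -2, "L": 2, "M": 1}
```

Dropping zero exponents makes the dimensionless result (dimension one) the empty dict, which plays the role of the group identity.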
The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, 1 in = 2.54 cm; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity. There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity,[4] although this does not invalidate the usefulness of dimensional analysis. As examples: the dimension of speed v is T⁻¹L; of acceleration a, T⁻²L; of force F, T⁻²LM; of pressure P, T⁻²L⁻¹M; of energy E, T⁻²L²M; of power P, T⁻³L²M; of electric charge Q, TI; of voltage V, T⁻³L²MI⁻¹; and of capacitance C, T⁴L⁻²M⁻¹I². In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh. As a drawback, Rayleigh's method does not provide any information regarding the number of dimensionless groups to be obtained from the dimensional analysis. Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number: a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g.
60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h. Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m² for square metres), or combinations thereof. A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed.[5] For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m³), thus they are considered derived or compound units. Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s⁻²). The newton is defined as 1 N = 1 kg⋅m⋅s⁻². Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since 1% = 1/100. Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to; likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator. For example, velocity dx/dt has dimension T⁻¹L, while the integral ∫v dt of velocity over time has dimension L. In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year). In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions.
For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency). But one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance), and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged. The most basic rule of dimensional analysis is that of dimensional homogeneity: only commensurable quantities (of the same dimension) may be added, subtracted, or compared.[6] However, the dimensions form an abelian group under multiplication, so quantities of any dimensions may be multiplied or divided. For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, these being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h. The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. For example, if m_man, m_rat and L_man denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression m_man + m_rat is meaningful, but the heterogeneous expression m_man + L_man is meaningless. However, m_man/L²_man is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable, i.e. have the same dimensions. Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension T⁻²L²M, they are fundamentally different physical quantities.
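The homogeneity rule can be enforced programmatically. A minimal sketch (a hypothetical `Quantity` class, not from the source): addition demands equal dimensions and raises otherwise, while division combines them, reproducing the 100 km in 2 hours example above.

```python
# Sketch (illustrative): a Quantity that enforces dimensional homogeneity.
# Addition requires equal dimensions; division subtracts exponents, since
# the dimensions form an abelian group under multiplication.

class Quantity:
    def __init__(self, value, dim):
        self.value, self.dim = value, dict(dim)

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError("cannot add incommensurable quantities")
        return Quantity(self.value + other.value, self.dim)

    def __truediv__(self, other):
        dim = {k: self.dim.get(k, 0) - other.dim.get(k, 0)
               for k in set(self.dim) | set(other.dim)}
        return Quantity(self.value / other.value,
                        {k: e for k, e in dim.items() if e})

distance = Quantity(100, {"L": 1})   # 100 km
elapsed  = Quantity(2, {"T": 1})     # 2 h
speed = distance / elapsed           # 50 km/h
assert speed.value == 50 and speed.dim == {"L": 1, "T": -1}

try:
    distance + elapsed               # "1 km + 1 h" is meaningless
except TypeError:
    pass
```

Note that this check catches only dimension mismatches; as the article points out, torque and energy share a dimension yet remain distinct quantities, which a scheme like this cannot distinguish.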
To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use 1 yard = 0.9144 m to convert 35 yards to 32.004 m. A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables.[7] For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that a conversion factor between two units that measure the same dimension must take: multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres. In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, 5 bar × 100 kPa / 1 bar = 500 kPa, because 5 × 100 / 1 = 500 and bar/bar cancels out, so 5 bar = 500 kPa. Dimensional analysis is most often used in physics and chemistry, and in the mathematics thereof, but finds some applications outside of those fields as well. A simple application of dimensional analysis to mathematics is in computing the form of the volume of an n-ball (the solid ball in n dimensions), or the area of its surface, the n-sphere: being an n-dimensional figure, the volume scales as xⁿ, while the surface area, being (n − 1)-dimensional, scales as xⁿ⁻¹.
Thus the volume of the n-ball in terms of the radius is C_n rⁿ, for some constant C_n. Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone. In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios. In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships.[8] In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include the Reynolds, Froude, Euler and Mach numbers. The origins of dimensional analysis have been disputed by historians.[9][10] The first written application of dimensional analysis has been credited to François Daviet, a student of Joseph-Louis Lagrange, in a 1799 article at the Turin Academy of Science.[10] This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually formalized in the Buckingham π theorem. Siméon Poisson also treated the same problem of the parallelogram law as Daviet, in his treatises of 1811 and 1833 (vol. I, p. 39).[11] In the second edition of 1833, Poisson explicitly introduces the term dimension instead of Daviet's homogeneity. In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions[12] based on the idea that physical laws like F = ma should be independent of the units employed to measure the physical variables.
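The conversion-factor arithmetic discussed above (kPa/bar, yards to metres) is simple enough to check directly. A short sketch, not from the source, with the factors written as dimensionless ratios equal to 1:

```python
# Sketch: a conversion factor is a dimensionless ratio equal to 1, so
# multiplying by it changes the unit but not the quantity.

KPA_PER_BAR = 100 / 1        # from 100 kPa = 1 bar
M_PER_YD = 0.9144            # from 1 yd = 0.9144 m (exact by definition)

pressure_bar = 5
assert pressure_bar * KPA_PER_BAR == 500           # 5 bar = 500 kPa

yards = 35
metres = yards * M_PER_YD
assert abs(metres - 32.004) < 1e-9                 # 35 yd = 32.004 m
```

Because each factor is (quantity)/(same quantity in another unit), chains of such factors can be multiplied freely and the intermediate units cancel, which is the basis of the "factor-label" style of unit conversion.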
James Clerk Maxwell played a major role in establishing the modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived.[13] Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant G is taken as unity, thereby defining M = T⁻²L³.[14] By assuming a form of Coulomb's law in which the Coulomb constant k_e is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were Q = T⁻¹L^(3/2)M^(1/2),[15] which, after substituting his M = T⁻²L³ equation for mass, results in charge having the same dimensions as mass, viz. Q = T⁻²L³. Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue.[16] Rayleigh first published the technique in his 1877 book The Theory of Sound.[17] The original meaning of the word dimension, in Fourier's Théorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time.[18] This was slightly changed by Maxwell, who said the dimensions of acceleration are T⁻²L, instead of just the exponents.[19] What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g. The four quantities have the following dimensions: T [T]; m [M]; k [M/T²]; and g [L/T²].
From these we can form only one dimensionless product of powers of our chosen variables, G₁ = T²k/m [T² · (M/T²) / M = 1], and putting G₁ = C for some dimensionless constant C gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well. The variable g does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines g with k, m, and T, because g is the only quantity that involves the dimension L. This implies that in this problem g is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of g: it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: T = κ√(m/k), for some dimensionless constant κ (equal to √C from the original dimensionless equation). When faced with a case where dimensional analysis rejects a variable (g, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here. When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete", although it still may involve unknown dimensionless constants, such as κ.
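The exponent-matching behind T = κ√(m/k) can be spelled out as a tiny linear system. A sketch, not from the source, writing T ~ mᵃ kᵇ gᶜ and equating the exponents of M, T and L on both sides:

```python
from fractions import Fraction

# Sketch: solve T ~ m^a k^b g^c by matching exponents.
# Dimensions: [T] = T, [m] = M, [k] = M T^-2, [g] = L T^-2, so:
#   M:  a + b       = 0
#   T:     -2b - 2c = 1
#   L:            c = 0

c = Fraction(0)                  # L appears only in g, so its exponent must vanish
b = (Fraction(1) + 2 * c) / -2   # from the T equation: -2b - 2c = 1
a = -b                           # from the M equation: a + b = 0

assert (a, b, c) == (Fraction(1, 2), Fraction(-1, 2), 0)
# hence T = kappa * m^(1/2) * k^(-1/2) = kappa * sqrt(m / k), independent of g
```

The forced value c = 0 is the algebraic form of the observation in the text that g cannot enter any dimensionless product here, since it alone carries the dimension L.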
Consider the case of a vibrating wire of length ℓ (L) vibrating with an amplitude A (L). The wire has a linear density ρ (M/L) and is under tension s (LM/T²), and we want to know the energy E (L²M/T²) in the wire. Let π₁ = E/(As) and π₂ = ℓ/A be two dimensionless products of powers of the variables chosen. The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as the equation E/(As) = F(ℓ/A), where F is some unknown function, or, equivalently, as E = As·f(ℓ/A), where f is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical work, we might proceed to experiments to discover the form of the unknown function f. But our experiments are simpler than in the absence of dimensional analysis: we would perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to ℓ, and so infer that E = ℓs. The power of dimensional analysis as an aid to experiment and to forming hypotheses becomes evident. The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated, where the set of variables involved is not apparent and the underlying equations are hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood.
In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis. Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness t (L) and radius R (L). The disc has a density ρ (M/L³), rotates at an angular velocity ω (T⁻¹), and this leads to a stress S (T⁻²L⁻¹M) in the material. There is a theoretical linear-elastic solution, given by Lamé, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius, the plane stress solution breaks down. If the disc is restrained axially on its free faces, a state of plane strain will occur. However, if this is not the case, then the state of stress may only be determined through consideration of three-dimensional elasticity, and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following (5 − 3 = 2) non-dimensional groups: S/(ρR²ω²) and t/R. Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot, and this can be used as a design/assessment chart for rotating discs.[20] The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: the identity is written as 1; L⁰ = 1, and the inverse of L is 1/L or L⁻¹. L raised to any integer power p is a member of the group, having an inverse of L⁻ᵖ or 1/Lᵖ. The operation of the group is multiplication, having the usual rules for handling exponents (Lⁿ × Lᵐ = Lⁿ⁺ᵐ).
Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second). An abelian group is equivalent to a module over the integers, with the dimensional symbol TⁱLʲMᵏ corresponding to the tuple (i, j, k). When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one another, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module. A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa). The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, (0, 0, 0). In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like V_L^(1/2).[21] However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions.[22] One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions M and L, one has the vector spaces V_M and V_L, and can define V_ML := V_M ⊗ V_L as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions.[23] This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar. The set of units of the physical quantities involved in a problem corresponds to a set of vectors (or a matrix).
The nullity describes some number (e.g., m) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, {π₁, ..., π_m}. (In fact these ways completely span the null subspace of another, different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity X can be expressed in the general form X = X₀ · ∏ᵢ πᵢ^(kᵢ), where X₀ is any one fixed product of powers of the measurements with the same unit as X. Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form f(π₁, π₂, ..., π_m) = 0. Knowing this restriction can be a powerful tool for obtaining new insight into the system. The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M; these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because the dimensions must form a basis: they must span the space and be linearly independent. For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T²], L, M, while the latter can be expressed as [T = (LM/F)^(1/2)], L, M. On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons: there is no way to obtain mass, or anything derived from it such as force, without introducing another base dimension (so they do not span the space), and velocity, being expressible in terms of length and time (V = L/T), is redundant (so the set is not linearly independent). Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols.
In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ 6.02×10²³ mol⁻¹) is also defined as a base dimension, N. In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma, electron and critical densities in addition to the electromagnetic vector potential. The choice of the dimensions, or even the number of dimensions, to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communication are common and necessary features. Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor.[24][25] This excludes polynomials of more than one term and transcendental functions not of that form. Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described above, in which the square of certain dimensioned quantities is dimensionless.)
While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity log(a/b) = log a − log b, where the logarithm is taken in any base, holds for dimensionless numbers a and b, but it does not hold if a and b are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not.[26] Similarly, while one can evaluate monomials (xⁿ) of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for x², the expression (3 m)² = 9 m² makes sense (as an area), while for x² + x, the expression (3 m)² + 3 m = 9 m² + 3 m does not make sense. However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example, (−4.9 m/s²)·t² + (500 m/s)·t is the height to which an object rises in time t if the acceleration of gravity is 9.8 metres per second per second and the initial upward speed is 500 metres per second. It is not necessary for t to be in seconds. For example, suppose t = 0.01 minutes. Then the first term would be (−4.9 m/s²) × (0.01 min)² = (−4.9 m/s²) × (0.6 s)² = −1.764 m. The value of a dimensional physical quantity Z is written as the product of a unit [Z] within the dimension and a dimensionless numerical value or numerical factor, n: Z = n × [Z].[27] When like-dimensioned quantities are added, subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed: the factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing.
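The mixed-degree polynomial with dimensioned coefficients can be checked numerically. A sketch, not from the source, in which the coefficients carry units of m/s² and m/s and the time argument is converted to seconds before evaluation:

```python
# Sketch: h(t) = (-4.9 m/s^2) t^2 + (500 m/s) t is dimensionally consistent
# because each coefficient carries the dimensions its term needs.
# t need not be given in seconds; convert first.

HALF_G = -4.9        # m/s^2, coefficient of t^2 (half of g = 9.8 m/s^2)
V0 = 500.0           # m/s, coefficient of t
S_PER_MIN = 60.0     # conversion factor: 60 s = 1 min

def height_m(t_s):
    """Height in metres after t_s seconds."""
    return HALF_G * t_s**2 + V0 * t_s

t_min = 0.01                       # t = 0.01 min
t_s = t_min * S_PER_MIN            # = 0.6 s
first_term = HALF_G * t_s**2       # -4.9 m/s^2 * (0.6 s)^2 = -1.764 m
assert abs(first_term - (-1.764)) < 1e-9
assert abs(height_m(t_s) - (300.0 - 1.764)) < 1e-9
```

Contrast this with the dimensionless-coefficient case in the text: here t² and t terms may be added precisely because the coefficients restore a common dimension (length) to both.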
Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted. Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units. A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities.[28] In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical value is referenced to a specific unit. For example, a quantity equation for displacement d as speed s multiplied by time difference t would be d = s·t, for s = 5 m/s, where t and d may be expressed in any units, converted if necessary. In contrast, a corresponding numerical-value equation would be D = 5T, where T is the numeric value of t when expressed in seconds and D is the numeric value of d when expressed in metres. Generally, the use of numerical-value equations is discouraged.[28] The dimensionless constants that arise in the results obtained, such as the C in the Poiseuille's Law problem and the κ in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc.
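The difference between the quantity equation d = s·t and the numerical-value equation D = 5T can be made concrete. A sketch, not from the source (the function names are hypothetical): the quantity-equation version converts its time argument to SI first, so any unit works, while the numerical-value version is correct only when T is already in seconds.

```python
# Sketch: quantity equation vs numerical-value equation for d = s*t, s = 5 m/s.

S_MPS = 5.0  # speed s = 5 m/s

def d_quantity(t_value, seconds_per_unit):
    """Quantity equation: t in any unit, converted to seconds; result in metres."""
    return S_MPS * (t_value * seconds_per_unit)

def d_numerical(T):
    """Numerical-value equation D = 5 T: valid only when T is in seconds."""
    return 5 * T

# t = 2 min = 120 s: the quantity equation accepts either representation
assert d_quantity(120, 1) == d_quantity(2, 60) == 600.0
# the numerical-value equation silently gives the wrong answer for minutes
assert d_numerical(120) == 600 and d_numerical(2) != 600
```

This is the practical sense in which numerical-value equations are discouraged: the unit bookkeeping lives in the reader's head rather than in the equation.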
Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless; e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, χ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be ~ 1/χᵈ, where d is the dimension of the lattice. It has been argued by some physicists, e.g., Michael J. Duff,[4][29] that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to length, time and mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics there was no way to relate mass, length, and time to each other. The three independent dimensionful constants c, ħ, and G in the fundamental equations of physics must then be seen as mere conversion factors to convert mass, time and length into each other. Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants ħ, c, and G (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit c → ∞, ħ → 0 and G → 0. In problems involving a gravitational field, the latter limit should be taken such that the field stays finite.
Following are commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force:[30][31][32] energy, T−2L2M; momentum, T−1LM; force, T−2LM. Dimensional correctness as part of type checking has been studied since 1977.[33] Implementations for Ada[34] and C++[35] were described in 1985 and 1988. Kennedy's 1996 thesis describes an implementation in Standard ML,[36] and later in F#.[37] There are implementations for Haskell,[38] OCaml,[39] Rust,[40] and Python,[41] and a code checker for Fortran.[42][43] Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices.[44][45] McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure.[46] Mathematica 13.2 has a function named NondimensionalizationTransform that applies a nondimensionalization transform to an equation.[47] Mathematica also has a function named UnitDimensions to find the dimensions of a unit such as 1 J,[48] and a function named DimensionalCombinations that finds dimensionally equivalent combinations of a subset of physical quantities.[49] Mathematica can also factor out certain dimensions with UnityDimensions by specifying an argument to that function;[50] for example, UnityDimensions can be used to factor out angles.[50] In addition, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions.[51] Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. In mathematics scalars are considered a special case of vectors;[citation needed] vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin.
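The idea behind the units-of-measure checkers mentioned above can be sketched at run time rather than in the type system; in the snippet below (ours, not one of the cited implementations) each quantity carries its exponents of (length, mass, time), and addition demands matching exponents:

```python
# A minimal run-time dimension checker: a quantity is a value plus a
# tuple of (L, M, T) exponents. Multiplication adds exponents; addition
# requires them to be equal. Class and variable names are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Q:
    value: float
    dim: tuple  # exponents of (L, M, T)

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError(f"dimension mismatch: {self.dim} vs {other.dim}")
        return Q(self.value + other.value, self.dim)

    def __mul__(self, other):
        return Q(self.value * other.value,
                 tuple(a + b for a, b in zip(self.dim, other.dim)))

length = Q(3.0, (1, 0, 0))   # 3 m
extra = Q(1.0, (1, 0, 0))    # 1 m
time = Q(2.0, (0, 0, 1))     # 2 s

print((length + extra).value)       # 4.0: like dimensions add fine
print((length * time).dim)          # (1, 0, 1): exponents combine
# length + time would raise TypeError: (1, 0, 0) vs (0, 0, 1)
```

Static checkers such as F#'s units of measure catch the same errors at compile time instead of raising at run time.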
While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change). Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meanings are not interchangeable. This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement). Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine unit, one must choose not only a unit of measurement but also a point of reference, while to assign a number to a vector unit only requires a unit of measurement. Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis. This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero, −273.15 °C ≘ 0 K = 0 °R ≘ −459.67 °F, where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated. For temperature differences, 1 K = 1 °C ≠ 1 °F = 1 °R (here °R refers to the Rankine scale, not the Réaumur scale). Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K (although the ratio is not unity).
But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C. Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference. This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis. Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank m of the dimensional matrix.[52] He introduced two approaches. As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component vy and a horizontal velocity component vx, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then R, the distance travelled, with dimension L; vx and vy, both dimensioned as T−1L; and g, the downward acceleration of gravity, with dimension T−2L. With these four quantities, we may conclude that the equation for the range R may be written R ∝ vx^a vy^b g^c, or dimensionally L = (T−1L)^(a+b) (T−2L)^c, from which we may deduce that a + b + c = 1 and a + b + 2c = 0, which leaves one exponent undetermined.
This is to be expected since we have two fundamental dimensions, T and L, and four parameters, with one equation. However, if we use directed length dimensions, then vx will be dimensioned as T−1Lx, vy as T−1Ly, R as Lx and g as T−2Ly. The dimensional equation becomes Lx = (T−1Lx)^a (T−1Ly)^b (T−2Ly)^c, and we may solve completely as a = 1, b = 1 and c = −1. The increase in deductive power gained by the use of directed length dimensions is apparent. Huntley's concept of directed length dimensions, however, has some serious limitations. It is often quite difficult to assign the L, Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem, which is often very difficult to apply reliably: it is unclear to which parts of the problem the notion of "symmetry" applies. Is it the symmetry of the physical body that forces are acting upon, or of the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries? Consider a spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems. In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass) and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity only proportional to inertial mass, while not implicating inertial properties.
No further restrictions are added to its definition. For example, consider the derivation of Poiseuille's law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables the mass flow rate, the pressure gradient along the pipe, the density, the dynamic viscosity of the fluid, and the radius of the pipe. There are three fundamental dimensions, so the above five variables will yield two independent dimensionless variables. If we distinguish between inertial mass, with dimension Mi, and quantity of matter, with dimension Mm, then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental dimensions and one dimensionless constant, so that the dimensional equation may be written with only C as an undetermined constant (found to be equal to π/8 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law. Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable. Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested[53]). As an example, consider again the projectile problem in which a point mass is launched from the origin (x, y) = (0, 0) at a speed v and angle θ above the x-axis, with the force of gravity directed along the negative y-axis.
It is desired to find the range R, at which point the mass returns to the x-axis. Conventional analysis will yield the dimensionless variable π = Rg/v2, but offers no insight into the relationship between R and θ. Siano has suggested that the directed dimensions of Huntley be replaced by using orientational symbols 1x, 1y, 1z to denote vector directions, and an orientationless symbol 10.[54] Thus, Huntley's Lx becomes L1x, with L specifying the dimension of length and 1x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1i−1 = 1i, the following multiplication table for the orientation symbols results: 1i 1i = 10 for each i, 1x 1y = 1z, 1y 1z = 1x, 1z 1x = 1y, and 10 acts as the identity. The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1z. For angles, consider an angle θ that lies in the z-plane. Form a right triangle in the z-plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1x and the side opposite has an orientation 1y. Since (using ~ to indicate orientational equivalence) tan(θ) = θ + ... ~ 1y/1x, we conclude that an angle in the xy-plane must have an orientation 1y/1x = 1z, which is not unreasonable. Analogous reasoning forces the conclusion that sin(θ) has orientation 1z while cos(θ) has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form a cos(θ) + b sin(θ), where a and b are real scalars.
An expression such as sin(θ + π/2) = cos(θ) is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written sin(a 1z + b 1z) = sin(a 1z) cos(b 1z) + cos(a 1z) sin(b 1z), which for a = θ and b = π/2 yields sin(θ 1z + [π/2] 1z) = 1z cos(θ 1z). Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is 10. The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd. As an example, for the projectile problem, using orientational symbols, θ, being in the xy-plane, will thus have dimension 1z and the range of the projectile R will be of the form R = g^a v^b θ^c. Dimensional homogeneity will now correctly yield a = −1 and b = 2, and orientational homogeneity requires that 1x/(1y^a 1z^c) = 1z^(c+1) = 1. In other words, c must be an odd integer.
In fact, the required function of theta will be sin(θ)cos(θ), which is a series consisting of odd powers of θ. It is seen that the Taylor series of sin(θ) and cos(θ) are orientationally homogeneous using the above multiplication table, while expressions like cos(θ) + sin(θ) and exp(θ) are not, and are (correctly) deemed unphysical. Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis.
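The orientational algebra described above can be encoded directly. In the sketch below (ours, not from Siano's paper) each symbol is a pair of exponents mod 2, which realizes the Klein four-group:

```python
# Orientational symbols as the Klein four-group, encoded as exponent
# pairs (a, b) mod 2: 1_0 = (0,0), 1_x = (1,0), 1_y = (0,1), 1_z = (1,1).

O0, OX, OY, OZ = (0, 0), (1, 0), (0, 1), (1, 1)

def mul(p, q):
    """Multiply two orientational symbols (each is its own inverse)."""
    return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)

# Group facts used in the text:
assert mul(OX, OY) == OZ   # 1x 1y = 1z
assert mul(OY, OZ) == OX   # 1y 1z = 1x
assert mul(OZ, OZ) == O0   # every symbol squared is 1_0

# An angle in the xy-plane has orientation 1y/1x = 1y 1x = 1z:
angle = mul(OY, OX)
assert angle == OZ

# Odd powers of the angle carry 1z (like sin), even powers carry 1_0
# (like cos); their orientations differ, so a*cos + b*sin with scalar
# a, b is orientationally inhomogeneous, as the text concludes.
print(mul(angle, angle))  # (0, 0), i.e. 1_0
```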
https://en.wikipedia.org/wiki/Rayleigh%27s_method_of_dimensional_analysis
Similitude is a concept applicable to the testing of engineering models. A model is said to have similitude with the real application if the two share geometric similarity, kinematic similarity and dynamic similarity. Similarity and similitude are interchangeable in this context. The term dynamic similitude is often used as a catch-all because it implies that geometric and kinematic similitude have already been met. Similitude's main application is in hydraulic and aerospace engineering to test fluid flow conditions with scaled models. It is also the primary theory behind many textbook formulas in fluid mechanics. The concept of similitude is strongly tied to dimensional analysis. Engineering models are used to study complex fluid dynamics problems where calculations and computer simulations are not reliable. Models are usually smaller than the final design, but not always. Scale models allow testing of a design prior to building, and in many cases are a critical step in the development process. Construction of a scale model, however, must be accompanied by an analysis to determine under what conditions it is tested. While the geometry may be simply scaled, other parameters, such as pressure, temperature or the velocity and type of fluid, may need to be altered. Similitude is achieved when testing conditions are created such that the test results are applicable to the real design. To achieve similitude, the geometric, kinematic and dynamic similarity criteria must all be met, which requires analyzing the application itself. It is often impossible to achieve strict similitude during a model test. The greater the departure from the application's operating conditions, the more difficult achieving similitude is. In these cases some aspects of similitude may be neglected, focusing on only the most important parameters.
The design of marine vessels remains more of an art than a science in large part because dynamic similitude is especially difficult to attain for a vessel that is partially submerged: a ship is affected by wind forces in the air above it, by hydrodynamic forces within the water under it, and especially by wave motions at the interface between the water and the air. The scaling requirements for each of these phenomena differ, so models cannot replicate what happens to a full-sized vessel nearly as well as can be done for an aircraft or submarine, each of which operates entirely within one medium. Similitude is also a term used widely in fracture mechanics, relating to the strain-life approach. Under given loading conditions, the fatigue damage in an un-notched specimen is comparable to that of a notched specimen, and similitude suggests that the component fatigue life of the two objects will also be similar. As a worked example, consider a submarine modeled at 1/40th scale. The application operates in sea water at 0.5 °C, moving at 5 m/s. The model will be tested in fresh water at 20 °C. The goal is to find the power required for the submarine to operate at the stated speed. A free body diagram is constructed and the relevant relationships of force and velocity are formulated using techniques from continuum mechanics. The variables which describe the system include the length, the velocity, the density and viscosity of the fluid, and the force on the submarine. This example has five independent variables and three fundamental units. The fundamental units are: metre, kilogram, second.[1] Invoking the Buckingham π theorem shows that the system can be described with two dimensionless numbers and one independent variable.[2] Dimensional analysis is used to rearrange the units to form the Reynolds number (Re) and pressure coefficient (Cp). These dimensionless numbers account for all the variables listed above except F, which will be the test measurement.
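The two dimensionless groups just mentioned can be written as small helper functions; this sketch is ours (argument names are illustrative), and it follows the text's convention of replacing the pressure difference by F/L2 in the pressure coefficient:

```python
# The two dimensionless groups used in the submarine example.

def reynolds(rho, v, length, mu):
    """Reynolds number rho*v*L/mu: ratio of inertial to viscous forces."""
    return rho * v * length / mu

def pressure_coefficient(force, rho, v, length):
    """Pressure coefficient with the pressure difference replaced by F/L^2."""
    return (force / length**2) / (rho * v**2)

# Similitude requires both numbers to match between model and application.
print(reynolds(998.0, 5.0, 1.0, 1.0e-3))  # water at 20 C, 1 m body: ~5e6
```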
Since the dimensionless parameters will stay constant for both the test and the real application, they will be used to formulate scaling laws for the test. The pressure p is not one of the five variables, but the force F is; the pressure difference Δp has thus been replaced with F/L2 in the pressure coefficient. Equating the Reynolds numbers of model and application gives the required test velocity, which is the application velocity scaled by the length ratio and the ratio of the fluids' kinematic viscosities. A model test is then conducted at that velocity, and the force measured in the model (Fmodel) is scaled via the pressure coefficient to find the force that can be expected for the real application (Fapplication). The power P in watts required by the submarine is then the product of that force and the application velocity. Note that even though the model is scaled smaller, the water velocity needs to be increased for testing. This remarkable result shows how similitude in nature is often counterintuitive.
Other applications may operate in dangerous or expensive fluids, so the testing is carried out in a more convenient substitute. Many common applications of similitude are associated with particular dimensionless numbers. Similitude analysis is a powerful engineering tool for designing scaled-down structures. Although both dimensional analysis and direct use of the governing equations may be used to derive scaling laws, the latter results in more specific scaling laws.[3] The design of scaled-down composite structures can be successfully carried out using complete and partial similarities.[4] In the design of scaled structures under the complete-similarity condition, all the derived scaling laws must be satisfied between the model and prototype, which yields perfect similarity between the two scales. However, designing a scaled-down structure which is perfectly similar to its prototype has practical limitations, especially for laminated structures. Relaxing some of the scaling laws may eliminate the limitation of the design under the complete-similarity condition and yield scaled models that are partially similar to their prototype. However, the design of scaled structures under the partial-similarity condition must follow a deliberate methodology to ensure the accuracy of the scaled structure in predicting the structural response of the prototype.[5] Scaled models can be designed to replicate the dynamic characteristics (e.g. frequencies, mode shapes and damping ratios) of their full-scale counterparts. However, appropriate response scaling laws need to be derived to predict the dynamic response of the full-scale prototype from the experimental data of the scaled model.[6]
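The submarine example above can be sketched numerically. The fluid properties below are illustrative textbook-style values we have assumed, not figures given in the article, and the measured model force is hypothetical:

```python
# Numerical sketch of the 1/40-scale submarine test. Property values
# (sea water at 0.5 C, fresh water at 20 C) are assumptions.

SCALE = 40.0          # L_application / L_model
V_APP = 5.0           # application speed, m/s

RHO_APP, MU_APP = 1028.0, 1.88e-3   # kg/m^3, Pa*s (sea water, assumed)
RHO_MOD, MU_MOD = 998.0, 1.00e-3    # kg/m^3, Pa*s (fresh water, assumed)

# Matching Reynolds number rho*V*L/mu between model and application:
V_MOD = V_APP * SCALE * (MU_MOD / MU_APP) * (RHO_APP / RHO_MOD)
print(f"required model test speed: {V_MOD:.1f} m/s")  # far above 5 m/s

# Matching the pressure coefficient F/(rho V^2 L^2) scales a measured
# model force up to the full-scale application:
def scale_force(f_model):
    return f_model * (RHO_APP / RHO_MOD) * (V_APP / V_MOD) ** 2 * SCALE ** 2

f_model = 1000.0                      # N, hypothetical test measurement
p_app = scale_force(f_model) * V_APP  # W, power = force * velocity
```

The counterintuitive outcome the text mentions falls out directly: the smaller model must be towed much faster than the full-scale boat travels.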
https://en.wikipedia.org/wiki/Similitude
A system of units of measurement, also known as a system of units or system of measurement, is a collection of units of measurement and rules relating them to each other. Systems of measurement have historically been important, regulated and defined for the purposes of science and commerce. Instances in use include the International System of Units or SI (the modern form of the metric system), the British imperial system, and the United States customary system. In antiquity, systems of measurement were defined locally: the different units might be defined independently according to the length of a king's thumb or the size of his foot, the length of a stride, the length of an arm, or maybe the weight of water in a keg of specific size, perhaps itself defined in hands and knuckles. The unifying characteristic is that there was some definition based on some standard. Eventually cubits and strides gave way to "customary units" to meet the needs of merchants and scientists. The preference for a more universal and consistent system only gradually spread with the growth of international trade and science. Changing a measurement system has costs in the near term, which often results in resistance to such a change. The substantial benefit of conversion to a more rational and internationally consistent system of measurement has been recognized and promoted by scientists, engineers, businesses and politicians, and has resulted in most of the world adopting a commonly agreed metric system. The French Revolution gave rise to the metric system, and this has spread around the world, replacing most customary units of measure. In most systems, length (distance), mass, and time are base quantities. Later developments in science showed that an electromagnetic quantity such as electric charge or electric current could be added to extend the set of base quantities. Gaussian units have only length, mass, and time as base quantities, with no separate electromagnetic dimension.
Other quantities, such as power and speed, are derived from the base quantities: for example, speed is distance per unit time. Historically, a wide range of units was used for the same type of quantity. In different contexts, length was measured in inches, feet, yards, fathoms, rods, chains, furlongs, miles, nautical miles, stadia, and leagues, with conversion factors that were not based on powers of ten. In the metric system and other recent systems, underlying relationships between quantities, as expressed by formulae of physics such as Newton's laws of motion, are used to select a small number of base quantities for which a unit is defined for each, from which all other units may be derived. Secondary units (multiples and submultiples) are derived from these base and derived units by multiplying by powers of ten. For example, where the unit of length is the metre, a distance of 1 metre is 1,000 millimetres, or 0.001 kilometres. Metrication is complete or nearly complete in most countries. However, US customary units remain heavily used in the United States and to some degree in Liberia. Traditional Burmese units of measurement are used in Burma, with a partial transition to the metric system. U.S. units are used in limited contexts in Canada due to the large volume of trade with the U.S.; there is also considerable use of imperial weights and measures, despite de jure Canadian conversion to metric. A number of other jurisdictions have laws mandating or permitting other systems of measurement in some or all contexts, such as the United Kingdom, whose road signage legislation, for instance, only allows distance signs displaying imperial units (miles or yards),[1] or Hong Kong.[2] In the United States, metric units are virtually always used in science, frequently in the military, and partially in industry. U.S. customary units are primarily used in U.S. households. At retail stores, the litre (spelled 'liter' in the U.S.)
is a commonly used unit for volume, especially on bottles of beverages, and milligrams, rather than grains, are used for medications. Some other non-SI units are still in international use, such as nautical miles and knots in aviation and shipping, and feet for aircraft altitude. Metric systems of units have evolved since the adoption of the first well-defined system in France in 1795. During this evolution the use of these systems has spread throughout the world, first to non-English-speaking countries, and then to English-speaking countries. Multiples and submultiples of metric units are related by powers of ten and their names are formed with prefixes. This relationship is compatible with the decimal system of numbers and it contributes greatly to the convenience of metric units. In the early metric system there were two base units, the metre for length and the gram for mass. The other units of length and mass, and all units of area, volume, and derived units such as density, were derived from these two base units. Mesures usuelles (French for customary measures) were a system of measurement introduced as a compromise between the metric system and traditional measurements, used in France from 1812 to 1839. A number of variations on the metric system have been in use. These include gravitational systems, the centimetre–gram–second systems (cgs) useful in science, the metre–tonne–second system (mts) once used in the USSR and the metre–kilogram–second system (mks). In some engineering fields, like computer-aided design, the millimetre–gram–second (mmgs) system is also used.[3] The current international standard for the metric system is the International System of Units (Système international d'unités or SI). It is a system in which all units can be expressed in terms of seven units. The units that serve as the SI base units are the metre, kilogram, second, ampere, kelvin, mole, and candela. Both British imperial units and US customary units derive from earlier English units.
Imperial units were mostly used in the former British Empire and the British Commonwealth, but in all these countries they have been largely supplanted by the metric system. They are still used for some applications in the United Kingdom but have been mostly replaced by the metric system in commercial, scientific, and industrial applications. US customary units, however, are still the main system of measurement in the United States. While some steps towards metrication have been made (mainly in the late 1960s and early 1970s), the customary units have a strong hold due to the vast industrial infrastructure and commercial development. While the British imperial and US customary systems are closely related, there are a number of differences between them. Units of length and area (the inch, foot, yard, mile, etc.) have been identical since the adoption of the International Yard and Pound Agreement; however, the US and, formerly, India retained older definitions for surveying purposes. This gave rise to the US survey foot, for instance. The avoirdupois units of mass and weight differ for units larger than a pound (lb). The British imperial system uses a stone of 14 lb, a long hundredweight of 112 lb and a long ton of 2,240 lb. The stone is not a measurement of weight used in the US. The US customary system uses the short hundredweight of 100 lb and short ton of 2,000 lb. Where these systems most notably differ is in their units of volume. An imperial fluid ounce of 28.4130625 ml is 3.924% smaller than the US fluid ounce (fl oz) of 29.5735295625 millilitres (ml). However, as there are 16 US fl oz to a US pint and 20 imp fl oz to an imperial pint, the imperial pint is 20.095% larger than a US pint, and the same is true for gills, quarts, and gallons: six US gallons (22.712470704 L) is only 0.08% less than five imperial gallons (22.73045 L). The avoirdupois system served as the general system of mass and weight. In addition to this, there are the troy and the apothecaries' systems.
Troy weight was customarily used for precious metals, black powder, and gemstones. The troy ounce is the only unit of the system in current use; it is used for precious metals. Although the troy ounce is larger than its avoirdupois equivalent, the pound is smaller. The obsolete troy pound was divided into 12 ounces, rather than the 16 ounces per pound of the avoirdupois system. The apothecaries' system was traditionally used in pharmacology, but has now been replaced by the metric system; it shared the same pound and ounce as the troy system but with different further subdivisions. Natural units are units of measurement defined in terms of universal physical constants in such a manner that selected physical constants take on the numerical value of one when expressed in terms of those units. Natural units are so named because their definition relies only on properties of nature and not on any human construct. Varying systems of natural units are possible, depending on the choice of constants used. Non-standard measurement units, found in books, newspapers and the like, also exist. A unit of measurement that applies to money is called a unit of account in economics and a unit of measure in accounting.[5] This is normally a currency issued by a country or a fraction thereof; for instance, the US dollar and US cent (1⁄100 of a dollar), or the euro and euro cent. ISO 4217 is the international standard describing three-letter codes (also known as currency codes) to define the names of currencies, established by the International Organization for Standardization (ISO). Throughout history, many official systems of measurement have been used. While no longer in official use, some of these customary systems are occasionally used in day-to-day life, for instance in cooking.
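The imperial versus US volume comparisons quoted above follow directly from the exact fluid-ounce definitions and can be checked with a few lines of arithmetic:

```python
# Verifying the quoted percentages from the exact fluid-ounce sizes.

IMP_FLOZ_ML = 28.4130625       # imperial fluid ounce, ml (exact)
US_FLOZ_ML = 29.5735295625     # US fluid ounce, ml (exact)

# The imperial fl oz is ~3.924% smaller than the US fl oz:
floz_smaller_pct = (1 - IMP_FLOZ_ML / US_FLOZ_ML) * 100

# But 20 imp fl oz per imperial pint vs 16 US fl oz per US pint makes
# the imperial pint ~20.095% larger than the US pint:
imp_pint_ml = 20 * IMP_FLOZ_ML
us_pint_ml = 16 * US_FLOZ_ML
pint_larger_pct = (imp_pint_ml / us_pint_ml - 1) * 100

print(f"imp fl oz smaller by {floz_smaller_pct:.3f}%")
print(f"imp pint larger by {pint_larger_pct:.3f}%")
```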
https://en.wikipedia.org/wiki/System_of_measurement
The Dadda multiplier is a hardware binary multiplier design invented by computer scientist Luigi Dadda in 1965.[1] It uses a selection of full and half adders to sum the partial products in stages (the Dadda tree or Dadda reduction) until two numbers are left. The design is similar to the Wallace multiplier, but the different reduction tree reduces the required number of gates (for all but the smallest operand sizes) and makes it slightly faster (for all operand sizes).[2] Both Dadda and Wallace multipliers have the same three steps for two bit strings w1 and w2 of lengths ℓ1 and ℓ2 respectively: generate the partial products, reduce them in stages until two rows remain, and sum those two rows with a conventional adder. As with the Wallace multiplier, the multiplication products of the first step carry different weights reflecting the magnitude of the original bit values in the multiplication. For example, the product of bits an bm has weight n + m. Unlike Wallace multipliers, which reduce as much as possible on each layer, Dadda multipliers attempt to minimize the number of gates used, as well as input/output delay. Because of this, Dadda multipliers have a less expensive reduction phase, but the final numbers may be a few bits longer, thus requiring slightly bigger adders. To achieve a more optimal final product, the structure of the reduction process is governed by slightly more complex rules than in Wallace multipliers. The progression of the reduction is controlled by a maximum-height sequence dj, defined by d1 = 2 and dj+1 = floor(1.5 dj). This yields the sequence 2, 3, 4, 6, 9, 13, 19, 28, .... The initial value of j is chosen as the largest value such that dj < min(n1, n2), where n1 and n2 are the number of bits in the input multiplicand and multiplier. The lesser of the two bit lengths will be the maximum height of each column of weights after the first stage of multiplication.
For each stage $j$ of the reduction, the goal of the algorithm is to reduce the height of each column so that it is less than or equal to the value of $d_j$. For each stage from $j, \ldots, 1$, reduce each column, starting at the lowest-weight column $c_0$, according to these rules: The example in the adjacent image illustrates the reduction of an 8 × 8 multiplier, explained here. The initial stage is $j=4$, since $d_4 = 6$ is the largest value less than 8.

Stage $j=4$, $d_4=6$
Stage $j=3$, $d_3=4$
Stage $j=2$, $d_2=3$
Stage $j=1$, $d_1=2$
Addition

The output of the last stage leaves 15 columns of height two or less, which can be passed into a standard adder.
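The initial stage and the per-stage height targets for the 8 × 8 example can be recomputed in a few lines; a sketch (the helper name is made up):

```python
def dadda_stage_targets(n1, n2):
    """Return the column-height targets d_j, d_{j-1}, ..., d_1 for a
    Dadda multiplier with n1-bit and n2-bit operands."""
    limit = min(n1, n2)
    seq = [2]
    while seq[-1] * 3 // 2 < limit:   # extend while d_{j+1} < min(n1, n2)
        seq.append(seq[-1] * 3 // 2)
    return list(reversed(seq))        # applied from largest target down to 2

print(dadda_stage_targets(8, 8))  # → [6, 4, 3, 2]
```

The four targets returned for 8-bit operands match the four reduction stages of the worked example.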
https://en.wikipedia.org/wiki/Dadda_multiplier
A division algorithm is an algorithm which, given two integers N and D (respectively the numerator and the denominator), computes their quotient and/or remainder, the result of Euclidean division. Some are applied by hand, while others are employed by digital circuit designs and software. Division algorithms fall into two main categories: slow division and fast division. Slow division algorithms produce one digit of the final quotient per iteration. Examples of slow division include restoring, non-performing restoring, non-restoring, and SRT division. Fast division methods start with a close approximation to the final quotient and produce twice as many digits of the final quotient on each iteration.[1] Newton–Raphson and Goldschmidt algorithms fall into this category. Variants of these algorithms allow using fast multiplication algorithms. The result is that, for large integers, the computer time needed for a division is the same, up to a constant factor, as the time needed for a multiplication, whichever multiplication algorithm is used. Discussion will refer to the form $N/D = (Q, R)$, where N and D are the input, and Q and R are the output. The simplest division algorithm, historically incorporated into a greatest common divisor algorithm presented in Euclid's Elements, Book VII, Proposition 1, finds the remainder given two positive integers using only subtractions and comparisons. The proof that the quotient and remainder exist and are unique (described at Euclidean division) gives rise to a complete division algorithm, applicable to both negative and positive numbers, using additions, subtractions, and comparisons. This procedure always produces R ≥ 0. Although very simple, it takes Ω(Q) steps, and so is exponentially slower than even slow division algorithms like long division. It is useful if Q is known to be small (being an output-sensitive algorithm), and can serve as an executable specification.
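The subtraction-based procedure for positive operands can be written out directly as an executable specification; a minimal sketch:

```python
def divide_by_subtraction(n, d):
    """Division by repeated subtraction: Omega(Q) iterations, but a
    direct executable specification of Euclidean division for N >= 0, D > 0."""
    if d <= 0 or n < 0:
        raise ValueError("requires N >= 0 and D > 0")
    q, r = 0, n
    while r >= d:       # each pass contributes one unit to the quotient
        r -= d
        q += 1
    return q, r

print(divide_by_subtraction(17, 5))  # → (3, 2)
```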
Long division is the standard algorithm used for pen-and-paper division of multi-digit numbers expressed in decimal notation. It shifts gradually from the left to the right end of the dividend, subtracting the largest possible multiple of the divisor (at the digit level) at each stage; the multiples then become the digits of the quotient, and the final difference is then the remainder. When used with a binary radix, this method forms the basis for the (unsigned) integer division with remainder algorithm below. Short division is an abbreviated form of long division suitable for one-digit divisors. Chunking (also known as the partial quotients method or the hangman method) is a less-efficient form of long division which may be easier to understand. By allowing one to subtract more multiples than what one currently has at each stage, a more freeform variant of long division can be developed as well. The following algorithm, the binary version of the famous long division, will divide N by D, placing the quotient in Q and the remainder in R. In the following pseudo-code, all values are treated as unsigned integers. If we take N=1100₂ (12₁₀) and D=100₂ (4₁₀):

Step 1: Set R=0 and Q=0
Step 2: Take i=3 (one less than the number of bits in N)
Step 3: R=00 (left shifted by 1)
Step 4: R=01 (setting R(0) to N(i))
Step 5: R < D, so skip statement

Step 2: Set i=2
Step 3: R=010
Step 4: R=011
Step 5: R < D, statement skipped

Step 2: Set i=1
Step 3: R=0110
Step 4: R=0110
Step 5: R ≥ D, statement entered
Step 5b: R=10 (R−D)
Step 5c: Q=10 (setting Q(i) to 1)

Step 2: Set i=0
Step 3: R=100
Step 4: R=100
Step 5: R ≥ D, statement entered
Step 5b: R=0 (R−D)
Step 5c: Q=11 (setting Q(i) to 1)

end

Q=11₂ (3₁₀) and R=0. Slow division methods are all based on a standard recurrence equation[2]

$$R_{j+1} = B \cdot R_j - q_{n-(j+1)} \cdot B^n \cdot D,$$

where $R_j$ is the j-th partial remainder of the division, B is the radix, $q_{n-(j+1)}$ is the digit of the quotient in position n−(j+1) (with digit positions numbered from least significant 0 to most significant n−1), n is the number of digits in the quotient, and D is the denominator. Restoring division operates on fixed-point fractional numbers and depends on the assumption 0 < D < N.[citation needed] The quotient digits q are formed from the digit set {0,1}.
The basic algorithm for binary (radix 2) restoring division is as follows. Non-performing restoring division is similar to restoring division except that the value of 2R is saved, so D does not need to be added back in for the case of R < 0. Non-restoring division uses the digit set {−1, 1} for the quotient digits instead of {0, 1}. The algorithm is more complex, but has the advantage when implemented in hardware that there is only one decision and addition/subtraction per quotient bit; there is no restoring step after the subtraction,[3] which potentially cuts down the number of operations by up to half and lets it be executed faster.[4] The basic algorithm for binary (radix 2) non-restoring division of non-negative numbers is as follows.[verification needed] Following this algorithm, the quotient is in a non-standard form consisting of digits of −1 and +1. This form needs to be converted to binary to form the final quotient, by writing $Q = P - M$, where P has a 1 wherever Q has a +1 digit and M has a 1 wherever Q has a −1 digit. If the −1 digits of $Q$ are stored as zeros (0), as is common, then $P$ is $Q$ and computing $M$ is trivial: perform a ones' complement (bit by bit complement) on the original $Q$. Finally, quotients computed by this algorithm are always odd, and the remainder in R is in the range −D ≤ R < D. For example, 5 / 2 = 3 R −1. To convert to a positive remainder, do a single restoring step (Q := Q − 1 and R := R + D) after Q is converted from non-standard form to standard form. The actual remainder is R >> n. (As with restoring division, the low-order bits of R are used up at the same rate as bits of the quotient Q are produced, and it is common to use a single shift register for both.) SRT division is a popular method for division in many microprocessor implementations.[5][6] The algorithm is named after D. W. Sweeney of IBM, James E. Robertson of University of Illinois, and K. D. Tocher of Imperial College London.
They all developed the algorithm independently at approximately the same time (published in February 1957, September 1958, and January 1958 respectively).[7][8][9] SRT division is similar to non-restoring division, but it uses a lookup table based on the dividend and the divisor to determine each quotient digit. The most significant difference is that a redundant representation is used for the quotient. For example, when implementing radix-4 SRT division, each quotient digit is chosen from five possibilities: { −2, −1, 0, +1, +2 }. Because of this, the choice of a quotient digit need not be perfect; later quotient digits can correct for slight errors. (For example, the quotient digit pairs (0, +2) and (1, −2) are equivalent, since 0×4+2 = 1×4−2.) This tolerance allows quotient digits to be selected using only a few most-significant bits of the dividend and divisor, rather than requiring a full-width subtraction. This simplification in turn allows a radix higher than 2 to be used. Like non-restoring division, the final steps are a final full-width subtraction to resolve the last quotient bit, and conversion of the quotient to standard binary form. The Intel Pentium processor's infamous floating-point division bug was caused by an incorrectly coded lookup table. Five of the 1066 entries had been mistakenly omitted.[10][11][12] Newton–Raphson uses Newton's method to find the reciprocal of $D$ and multiply that reciprocal by $N$ to find the final quotient $Q$. The steps of Newton–Raphson division are: calculate an estimate $X_0$ for the reciprocal $1/D$ of the divisor; compute successively more accurate estimates $X_1, X_2, \ldots, X_S$ of the reciprocal; and compute the quotient by multiplying the numerator by the final estimate, $Q = N X_S$. In order to apply Newton's method to find the reciprocal of $D$, it is necessary to find a function $f(X)$ that has a zero at $X = 1/D$.
The obvious such function is $f(X) = DX - 1$, but the Newton–Raphson iteration for this is unhelpful, since it cannot be computed without already knowing the reciprocal of $D$ (moreover it attempts to compute the exact reciprocal in one step, rather than allow for iterative improvements). A function that does work is $f(X) = (1/X) - D$, for which the Newton–Raphson iteration gives

$$X_{i+1} = X_i - \frac{f(X_i)}{f'(X_i)} = X_i + X_i(1 - DX_i) = X_i(2 - DX_i),$$

which can be calculated from $X_i$ using only multiplication and subtraction, or using two fused multiply–adds. From a computation point of view, the expressions $X_{i+1} = X_i + X_i(1 - DX_i)$ and $X_{i+1} = X_i(2 - DX_i)$ are not equivalent. To obtain a result with a precision of 2n bits while making use of the second expression, one must compute the product between $X_i$ and $(2 - DX_i)$ with double the given precision of $X_i$ (n bits).[citation needed] In contrast, the product between $X_i$ and $(1 - DX_i)$ need only be computed with a precision of n bits, because the leading n bits (after the binary point) of $(1 - DX_i)$ are zeros. If the error is defined as $\varepsilon_i = 1 - DX_i$, then

$$\varepsilon_{i+1} = 1 - DX_{i+1} = 1 - DX_i(2 - DX_i) = (1 - DX_i)^2 = \varepsilon_i^2.$$

This squaring of the error at each iteration step – the so-called quadratic convergence of Newton–Raphson's method – has the effect that the number of correct digits in the result roughly doubles for every iteration, a property that becomes extremely valuable when the numbers involved have many digits (e.g. in the large integer domain). But it also means that the initial convergence of the method can be comparatively slow, especially if the initial estimate $X_0$ is poorly chosen. For the subproblem of choosing an initial estimate $X_0$, it is convenient to apply a bit-shift to the divisor D to scale it so that 0.5 ≤ D ≤ 1.
Applying the same bit-shift to the numerator N ensures the quotient does not change. Once within a bounded range, a simple polynomial approximation can be used to find an initial estimate. The linear approximation with minimum worst-case absolute error on the interval $[0.5, 1]$ is

$$X_0 = \frac{48}{17} - \frac{32}{17}\,D.$$

The coefficients of the linear approximation $T_0 + T_1 D$ are determined as follows. The absolute value of the error is $|\varepsilon_0| = |1 - D(T_0 + T_1 D)|$. The minimum of the maximum absolute value of the error is determined by the Chebyshev equioscillation theorem applied to $F(D) = 1 - D(T_0 + T_1 D)$. The local minimum of $F(D)$ occurs when $F'(D) = 0$, which has solution $D = -T_0/(2T_1)$. The function at that minimum must be of opposite sign as the function at the endpoints, namely, $F(1/2) = F(1) = -F(-T_0/(2T_1))$. The two equations in the two unknowns have a unique solution $T_0 = 48/17$ and $T_1 = -32/17$, and the maximum error is $F(1) = 1/17$. Using this approximation, the absolute value of the error of the initial value is less than or equal to $1/17 \approx 0.059$. The best quadratic fit to $1/D$ in the interval is chosen to make the error equal to a re-scaled third-order Chebyshev polynomial of the first kind, and gives an absolute value of the error less than or equal to 1/99. This improvement is equivalent to $\log_2(\log 99/\log 17) \approx 0.7$ Newton–Raphson iterations, at a computational cost of less than one iteration. It is possible to generate a polynomial fit of degree larger than 2, computing the coefficients using the Remez algorithm. The trade-off is that the initial guess requires more computational cycles but hopefully in exchange for fewer iterations of Newton–Raphson.
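The pieces above — scaling D into [0.5, 1], the 48/17 − 32/17·D estimate, and the X(2 − DX) iteration — combine into a short floating-point sketch (illustrative only, not a hardware-style implementation):

```python
import math

def newton_raphson_divide(n, d, iterations=4):
    """Approximate n/d by Newton-Raphson iteration on the reciprocal of d."""
    assert d != 0
    sign = -1.0 if (n < 0) != (d < 0) else 1.0
    n, d = abs(n), abs(d)
    exp = math.frexp(d)[1]              # d = m * 2**exp with 0.5 <= m < 1
    d_scaled = d / (2.0 ** exp)         # scale divisor into [0.5, 1)
    n_scaled = n / (2.0 ** exp)         # same shift keeps the quotient intact
    x = 48.0 / 17.0 - 32.0 / 17.0 * d_scaled   # linear initial estimate
    for _ in range(iterations):
        x = x * (2.0 - d_scaled * x)    # error squares on every pass
    return sign * n_scaled * x

print(newton_raphson_divide(355.0, 113.0))  # close to 355/113 ≈ 3.14159292
```

Four iterations from the linear estimate drive the relative error from at most 1/17 down to roughly (1/17)^16, far below double-precision resolution, matching the "linear estimate plus four iterations" rule of thumb quoted below.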
Since for this method the convergence is exactly quadratic, it follows that, from an initial error $\varepsilon_0$, $S$ iterations will give an answer accurate to

$$P = 2^S \log_2\!\left(\varepsilon_0^{-1}\right)$$

binary places. Typical values are: a quadratic initial estimate plus two iterations is accurate enough for IEEE single precision, but three iterations are marginal for double precision. A linear initial estimate plus four iterations is sufficient for both double and double extended formats. The quotient of N and D can then be computed to a precision of P binary places. For example, for a double-precision floating-point division, this method uses 10 multiplies, 9 adds, and 2 shifts. There is an iteration which uses three multiplications to cube the error:

$$\varepsilon_i = 1 - DX_i,\qquad Y_i = X_i\,\varepsilon_i,\qquad X_{i+1} = X_i + Y_i + Y_i\,\varepsilon_i.$$

The $Y_i \varepsilon_i$ term is new. Expanding out the above, $X_{i+1}$ can be written as

$$X_{i+1} = X_i\left(1 + \varepsilon_i + \varepsilon_i^2\right),$$

with the result that the error term

$$\varepsilon_{i+1} = 1 - DX_{i+1} = \varepsilon_i^3.$$

This is 3/2 the computation of the quadratic iteration, but achieves $\log 3/\log 2 \approx 1.585$ times as much convergence, so it is slightly more efficient. Put another way, two iterations of this method raise the error to the ninth power at the same computational cost as three quadratic iterations, which only raise the error to the eighth power. The number of correct bits after $S$ iterations is

$$P = 3^S \log_2\!\left(\varepsilon_0^{-1}\right)$$

binary places. Typical values are: a quadratic initial estimate plus two cubic iterations provides ample precision for an IEEE double-precision result. It is also possible to use a mixture of quadratic and cubic iterations. Using at least one quadratic iteration ensures that the error is positive, i.e. the reciprocal is underestimated.[13]: 370 This can simplify a following rounding step if an exactly-rounded quotient is required.
Using higher degree polynomials in either the initialization or the iteration results in a degradation of performance because the extra multiplications required would be better spent on doing more iterations.[citation needed] Goldschmidt division[14] (after Robert Elliott Goldschmidt)[15] uses an iterative process of repeatedly multiplying both the dividend and divisor by a common factor $F_i$, chosen such that the divisor converges to 1. This causes the dividend to converge to the sought quotient Q:

$$Q = \frac{N}{D} \cdot \frac{F_1}{F_1} \cdot \frac{F_2}{F_2} \cdots$$

Assuming N/D has been scaled so that 0 < D < 1, each $F_i$ is based on D:

$$F_{i+1} = 2 - D_i.$$

Multiplying the dividend and divisor by the factor yields:

$$\frac{N_{i+1}}{D_{i+1}} = \frac{N_i}{D_i} \cdot \frac{F_{i+1}}{F_{i+1}}.$$

After a sufficient number k of iterations, $Q = N_k$. The Goldschmidt method is used in AMD Athlon CPUs and later models.[16][17] It is also known as the Anderson Earle Goldschmidt Powers (AEGP) algorithm and is implemented by various IBM processors.[18][19] Although it converges at the same rate as a Newton–Raphson implementation, one advantage of the Goldschmidt method is that the multiplications in the numerator and in the denominator can be done in parallel.[19] The Goldschmidt method can be used with factors that allow simplifications by the binomial theorem. Assume N/D has been scaled by a power of two such that $D \in \left(\tfrac{1}{2}, 1\right]$. We choose $D = 1 - x$ and $F_i = 1 + x^{2^i}$. This yields

$$\frac{N}{D} = \frac{N\,(1+x)(1+x^2)(1+x^4)\cdots\left(1+x^{2^{n-1}}\right)}{1 - x^{2^n}}.$$

After n steps $\left(x \in \left[0, \tfrac{1}{2}\right)\right)$, the denominator $1 - x^{2^n}$ can be rounded to 1 with a relative error which is maximum at $2^{-2^n}$ when $x = \tfrac{1}{2}$, thus providing a minimum precision of $2^n$ binary digits. Methods designed for hardware implementation generally do not scale to integers with thousands or millions of decimal digits; these frequently occur, for example, in modular reductions in cryptography.
For these large integers, more efficient division algorithms transform the problem to use a small number of multiplications, which can then be done using an asymptotically efficient multiplication algorithm such as the Karatsuba algorithm, Toom–Cook multiplication or the Schönhage–Strassen algorithm. The result is that the computational complexity of the division is of the same order (up to a multiplicative constant) as that of the multiplication. Examples include reduction to multiplication by Newton's method as described above,[20] as well as the slightly faster Burnikel–Ziegler division,[21] Barrett reduction and Montgomery reduction algorithms.[22][verification needed] Newton's method is particularly efficient in scenarios where one must divide by the same divisor many times, since after the initial Newton inversion only one (truncated) multiplication is needed for each division. Division by a constant D is equivalent to multiplication by its reciprocal. Since the denominator is constant, so is its reciprocal (1/D). Thus it is possible to compute the value of (1/D) once at compile time, and at run time perform the multiplication N·(1/D) rather than the division N/D. In floating-point arithmetic the use of (1/D) presents little problem,[a] but in integer arithmetic the reciprocal will always evaluate to zero (assuming |D| > 1). It is not necessary to use specifically (1/D); any value (X/Y) that reduces to (1/D) may be used. For example, for division by 3, the factors 1/3, 2/6, 3/9, or 194/582 could be used. Consequently, if Y were a power of two the division step would reduce to a fast right bit shift. The effect of calculating N/D as (N·X)/Y replaces a division with a multiply and a shift. Note that the parentheses are important, as N·(X/Y) will evaluate to zero. However, unless D itself is a power of two, there is no X and Y that satisfies the conditions above.
Fortunately, (N·X)/Y gives exactly the same result as N/D in integer arithmetic even when (X/Y) is not exactly equal to 1/D, but "close enough" that the error introduced by the approximation is in the bits that are discarded by the shift operation.[23][24][25] Barrett reduction uses powers of 2 for the value of Y to make division by Y a simple right shift.[b] As a concrete fixed-point arithmetic example, for 32-bit unsigned integers, division by 3 can be replaced with a multiply by 2863311531/2³³, that is, a multiplication by 2863311531 (hexadecimal 0xAAAAAAAB) followed by a 33-bit right shift. The value of 2863311531 is calculated as 2³³/3, then rounded up. Likewise, division by 10 can be expressed as a multiplication by 3435973837 (0xCCCCCCCD) followed by a division by 2³⁵ (or a 35-bit right shift).[27]: p230-234 OEIS provides sequences of the constants for multiplication as A346495 and for the right shift as A346496. For general x-bit unsigned integer division where the divisor D is not a power of 2, the following identity converts the division into two x-bit additions/subtractions, one x-bit by x-bit multiplication (where only the upper half of the result is used) and several shifts, after precomputing $k = x + \lceil \log_2 D \rceil$ and $a = \left\lceil \frac{2^k}{D} \right\rceil - 2^x$. In some cases, division by a constant can be accomplished in even less time by converting the "multiply by a constant" into a series of shifts and adds or subtracts.[28] Of particular interest is division by 10, for which the exact quotient is obtained, with remainder if required.[29] When a division operation is performed, the exact quotient $q$ and remainder $r$ are approximated to fit within the computer's precision limits. The Division Algorithm states that

$$a = bq + r,$$

where $0 \le r < |b|$.
In floating-point arithmetic, the quotient $q$ is represented as $\tilde{q}$ and the remainder $r$ as $\tilde{r}$, introducing rounding errors $\epsilon_q$ and $\epsilon_r$:

$$\tilde{q} = q + \epsilon_q, \qquad \tilde{r} = r + \epsilon_r.$$

This rounding causes a small error, which can propagate and accumulate through subsequent calculations. Such errors are particularly pronounced in iterative processes and when subtracting nearly equal values, which is known as loss of significance. To mitigate these errors, techniques such as the use of guard digits or higher-precision arithmetic are employed.[30][31]
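The multiply-and-shift constants quoted earlier for 32-bit unsigned division by 3 and by 10 can be spot-checked in a few lines (Python's arbitrary-precision integers stand in for the hardware's widening multiply):

```python
def div3_u32(n):
    """32-bit unsigned division by 3 as a multiply plus a 33-bit shift."""
    assert 0 <= n < 2**32
    return (n * 0xAAAAAAAB) >> 33   # 0xAAAAAAAB = ceil(2**33 / 3)

def div10_u32(n):
    """32-bit unsigned division by 10 as a multiply plus a 35-bit shift."""
    assert 0 <= n < 2**32
    return (n * 0xCCCCCCCD) >> 35   # 0xCCCCCCCD = ceil(2**35 / 10)

# spot-check against true integer division, including the edge of the range
for n in (0, 1, 2, 3, 9, 10, 12345678, 2**32 - 1):
    assert div3_u32(n) == n // 3
    assert div10_u32(n) == n // 10
print("ok")
```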
https://en.wikipedia.org/wiki/Division_algorithm
In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians.[1] After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials. The algorithm is based on Horner's rule, in which a polynomial is written in nested form:

$$a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots + a_nx^n = a_0 + x\Bigl(a_1 + x\bigl(a_2 + x(a_3 + \cdots + x(a_{n-1} + x\,a_n)\cdots)\bigr)\Bigr).$$

This allows the evaluation of a polynomial of degree n with only $n$ multiplications and $n$ additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations.[2] Alternatively, Horner's method and Horner–Ruffini method also refer to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970. Given the polynomial

$$p(x) = \sum_{i=0}^{n} a_i x^i = a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots + a_nx^n,$$

where $a_0, \ldots, a_n$ are constant coefficients, the problem is to evaluate the polynomial at a specific value $x_0$ of $x$. For this, a new sequence of constants is defined recursively as follows:

$$b_n = a_n, \qquad b_i = a_i + b_{i+1} x_0 \quad (i = n-1, n-2, \ldots, 0).$$

Then $b_0$ is the value of $p(x_0)$.
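The recurrence for the $b_i$ translates into a few lines of code; a minimal sketch, with coefficients listed from lowest to highest degree:

```python
def horner(coeffs, x0):
    """Evaluate a polynomial at x0 by Horner's rule.
    coeffs = [a0, a1, ..., an] (lowest degree first)."""
    b = 0
    for a in reversed(coeffs):   # b_n = a_n, then b_i = a_i + b_{i+1} * x0
        b = a + b * x0
    return b

# p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner([-1, 2, -6, 2], 3))  # → 5
```

The loop performs exactly one multiplication and one addition per coefficient, matching the operation count stated above.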
To see why this works, the polynomial can be written in the form

$$p(x) = a_0 + x\Bigl(a_1 + x\bigl(a_2 + x(a_3 + \cdots + x(a_{n-1} + x\,a_n)\cdots)\bigr)\Bigr).$$

Thus, by iteratively substituting the $b_i$ into the expression,

$$\begin{aligned}p(x_0) &= a_0 + x_0\Bigl(a_1 + x_0\bigl(a_2 + \cdots + x_0(a_{n-1} + b_n x_0)\cdots\bigr)\Bigr)\\&= a_0 + x_0\Bigl(a_1 + x_0\bigl(a_2 + \cdots + x_0 b_{n-1}\bigr)\Bigr)\\&\;\;\vdots\\&= a_0 + x_0 b_1\\&= b_0.\end{aligned}$$

Now, it can be proven that

$$p(x) = \left(b_n x^{n-1} + b_{n-1} x^{n-2} + \cdots + b_2 x + b_1\right)(x - x_0) + b_0.$$

This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of $p(x)/(x - x_0)$, with $b_0$ (which is equal to $p(x_0)$) being the division's remainder, as is demonstrated by the examples below. If $x_0$ is a root of $p(x)$, then $b_0 = 0$ (meaning the remainder is 0), which means you can factor out $(x - x_0)$ from $p(x)$. To find the consecutive $b$-values, you start by determining $b_n$, which is simply equal to $a_n$. You then work recursively using the formula $b_{n-1} = a_{n-1} + b_n x_0$ until you arrive at $b_0$. Evaluate $f(x) = 2x^3 - 6x^2 + 2x - 1$ for $x = 3$. We use synthetic division as follows. The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated.
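Keeping every intermediate $b_i$ instead of just $b_0$ yields the quotient coefficients and the remainder at once — exactly the synthetic-division tableau; a sketch, with coefficients listed from highest to lowest degree:

```python
def synthetic_division(coeffs, x0):
    """Divide a polynomial by (x - x0).
    coeffs = [an, ..., a1, a0] (highest degree first).
    Returns (quotient coefficients, remainder)."""
    b = [coeffs[0]]                 # b_n = a_n
    for a in coeffs[1:]:
        b.append(a + b[-1] * x0)    # b_i = a_i + b_{i+1} * x0
    return b[:-1], b[-1]            # the last value, b_0, is the remainder

# f(x) = 2x^3 - 6x^2 + 2x - 1 divided by (x - 3)
print(synthetic_division([2, -6, 2, -1], 3))  # → ([2, 0, 2], 5)
```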
Then the remainder of $f(x)$ on division by $x - 3$ is 5. But by the polynomial remainder theorem, we know that the remainder is $f(3)$. Thus, $f(3) = 5$. In this example, if $a_3 = 2, a_2 = -6, a_1 = 2, a_0 = -1$, we can see that $b_3 = 2, b_2 = 0, b_1 = 2, b_0 = 5$, the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method. As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial, the quotient of $f(x)$ on division by $x - 3$. The remainder is 5. This makes Horner's method useful for polynomial long division. Divide $x^3 - 6x^2 + 11x - 6$ by $x - 2$. The quotient is $x^2 - 4x + 3$. Let $f_1(x) = 4x^4 - 6x^3 + 3x - 5$ and $f_2(x) = 2x - 1$. Divide $f_1(x)$ by $f_2(x)$ using Horner's method. The third row is the sum of the first two rows, divided by 2. Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is

$$\frac{f_1(x)}{f_2(x)} = 2x^3 - 2x^2 - x + 1 - \frac{4}{2x - 1}.$$

Evaluation using the monomial form of a degree-$n$ polynomial requires at most $n$ additions and $(n^2 + n)/2$ multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to $n$ additions and $2n - 1$ multiplications by evaluating the powers of $x$ by iteration.
If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately $2n$ times the number of bits of $x$: the evaluated polynomial has approximate magnitude $x^n$, and one must also store $x^n$ itself. By contrast, Horner's method requires only $n$ additions and $n$ multiplications, and its storage requirements are only $n$ times the number of bits of $x$. Alternatively, Horner's method can be computed with $n$ fused multiply–adds. Horner's method can also be extended to evaluate the first $k$ derivatives of the polynomial with $kn$ additions and multiplications.[3] Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal.[4] Victor Pan proved in 1966 that the number of multiplications is minimal.[5] However, when $x$ is a matrix, Horner's method is not optimal. This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree-$n$ polynomial can be evaluated using only $\lfloor n/2 \rfloor + 2$ multiplications and $n$ additions.[6] A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction-level parallelism on modern computers.
In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation. If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows:

$$\begin{aligned}p(x) &= \sum_{i=0}^{n} a_i x^i\\&= a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots + a_nx^n\\&= \left(a_0 + a_2x^2 + a_4x^4 + \cdots\right) + \left(a_1x + a_3x^3 + a_5x^5 + \cdots\right)\\&= \left(a_0 + a_2x^2 + a_4x^4 + \cdots\right) + x\left(a_1 + a_3x^2 + a_5x^4 + \cdots\right)\\&= \sum_{i=0}^{\lfloor n/2 \rfloor} a_{2i}x^{2i} + x \sum_{i=0}^{\lfloor n/2 \rfloor} a_{2i+1}x^{2i}\\&= p_0(x^2) + x\,p_1(x^2).\end{aligned}$$

More generally, the summation can be broken into k parts:

$$p(x) = \sum_{i=0}^{n} a_i x^i = \sum_{j=0}^{k-1} x^j \sum_{i=0}^{\lfloor n/k \rfloor} a_{ki+j} x^{ki} = \sum_{j=0}^{k-1} x^j p_j(x^k),$$

where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows k-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math[citation needed]. Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of instruction-level parallelism.
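The even/odd split can be sketched as two interleaved Horner evaluations over $x^2$ (a minimal illustration; real implementations would run the two chains in parallel or vectorize them):

```python
def horner_even_odd(coeffs, x):
    """Evaluate p(x) as p0(x^2) + x * p1(x^2), where p0 holds the
    even-index coefficients and p1 the odd-index ones.
    coeffs = [a0, a1, ..., an] (lowest degree first)."""
    x2 = x * x
    even = odd = 0
    for a in reversed(coeffs[0::2]):   # a0, a2, a4, ...
        even = a + even * x2
    for a in reversed(coeffs[1::2]):   # a1, a3, a5, ...
        odd = a + odd * x2
    return even + x * odd

# p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner_even_odd([-1, 2, -6, 2], 3))  # → 5
```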
Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) $a_i = 1$, and $x = 2$. Then, x (or x to some power) is repeatedly factored out. In this binary numeral system (base 2), $x = 2$, so powers of 2 are repeatedly factored out. For example, to find the product of two numbers (0.15625) and m:

$$(0.15625)m = (0.00101_b)m = \left(2^{-3} + 2^{-5}\right)m = \left(2^{-3}\right)m + \left(2^{-5}\right)m = 2^{-3}\left(m + \left(2^{-2}\right)m\right) = 2^{-3}\left(m + 2^{-2}(m)\right).$$

To find the product of two binary numbers d and m: In general, for a binary number with bit values ($d_3 d_2 d_1 d_0$) the product is

$$\left(d_3 2^3 + d_2 2^2 + d_1 2^1 + d_0 2^0\right)m = d_3 2^3 m + d_2 2^2 m + d_1 2^1 m + d_0 2^0 m.$$

At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero is not an issue, despite this implication in the factored equation:

$$= d_0\left(m + 2\frac{d_1}{d_0}\left(m + 2\frac{d_2}{d_1}\left(m + 2\frac{d_3}{d_2}(m)\right)\right)\right).$$

The denominators all equal one (or the term is absent), so this reduces to

$$= d_0(m + 2d_1(m + 2d_2(m + 2d_3(m)))),$$

or equivalently (as consistent with the "method" described above)

$$= d_3(m + 2^{-1}d_2(m + 2^{-1}d_1(m + d_0(m)))).$$

In binary (base-2) math, multiplication by a power of 2 is merely
a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor 2^−1 is a right arithmetic shift, a factor of 2^0 results in no operation (since 2^0 = 1 is the multiplicative identity element), and a factor of 2^1 results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction. The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy; however, it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space.[7]

Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the a_i coefficients are the digits of the base-x representation of a given number – and can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known.[8]

Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial p_n(x) of degree n with zeros z_n < z_{n−1} < ⋯ < z_1, make some initial guess x_0 such that z_1 < x_0. Now iterate the following two steps:

1. Using Newton's method, find the largest zero z_1 of p_n(x) using the guess x_0.
2. Using Horner's method, divide out (x − z_1) to obtain p_{n−1}(x). Return to step 1, using p_{n−1}(x) and z_1 as the new polynomial and initial guess.

These two steps are repeated until all real zeros are found for the polynomial.
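Returning to the binary multiplication scheme above: processing the bits of d most significant first gives the recurrence acc ← (acc << 1) + d_i·m, which uses only a shift and a conditional add. A minimal sketch (the function name is chosen here for illustration):

```python
def shift_add_multiply(d_bits, m):
    """Multiply integer m by the binary number whose bits are d_bits
    (most significant first), using only shifts and adds.  Each step is
    the base-2 Horner recurrence: acc = 2*acc + bit*m, where the
    multiplication by 2 is a left shift and zero bits contribute nothing."""
    acc = 0
    for bit in d_bits:
        acc = (acc << 1) + (m if bit else 0)
    return acc
```

On a microcontroller the body of the loop maps directly onto a shift instruction and a conditional add (or a single shift-and-accumulate instruction where available).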
If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method, but using the full polynomial rather than the reduced polynomials.[9]

Consider the polynomial

$$p_6(x)=(x+8)(x+5)(x+3)(x-2)(x-3)(x-7),$$

which can be expanded to

$$p_6(x)=x^6+4x^5-72x^4-214x^3+1127x^2+1602x-5040.$$

From the above we know that the largest root of this polynomial is 7, so we are able to make an initial guess of 8. Using Newton's method, the first zero of 7 is found as shown in black in the figure to the right. Next, p(x) is divided by (x − 7) to obtain

$$p_5(x)=x^5+11x^4+5x^3-179x^2-126x+720,$$

which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial, which corresponds to the second largest zero of the original polynomial, is found at 3 and is circled in red. The degree 5 polynomial is now divided by (x − 3) to obtain

$$p_4(x)=x^4+14x^3+47x^2-38x-240,$$

which is shown in yellow. The zero for this polynomial is found at 2, again using Newton's method, and is circled in yellow. Horner's method is now used to obtain

$$p_3(x)=x^3+16x^2+79x+120,$$

which is shown in green and found to have a zero at −3. This polynomial is further reduced to

$$p_2(x)=x^2+13x+40,$$

which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p_2(x) and solving the linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.
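The Newton-plus-deflation procedure in this example can be sketched in Python. This is a simplified illustration (function names are ours; a robust root finder needs more careful convergence and conditioning checks). Coefficients are given highest degree first, Horner's rule evaluates p and p′ together, and synthetic division deflates the polynomial after each root:

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) together; coeffs highest degree first."""
    p, dp = 0.0, 0.0
    for a in coeffs:
        dp = dp * x + p   # accumulate derivative via product rule
        p = p * x + a
    return p, dp

def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Newton's method starting from x0."""
    x = x0
    for _ in range(max_iter):
        p, dp = horner_with_derivative(coeffs, x)
        step = p / dp
        x -= step
        if abs(step) < tol * max(1.0, abs(x)):
            break
    return x

def deflate(coeffs, r):
    """Synthetic division of p(x) by (x - r) via Horner; the remainder
    (which is p(r), approximately zero) is dropped."""
    out = [coeffs[0]]
    for a in coeffs[1:-1]:
        out.append(a + out[-1] * r)
    return out

def real_roots(coeffs, x0):
    """Find all real zeros by alternating Newton's method and deflation,
    assuming all roots are real (as in the worked example)."""
    roots = []
    while len(coeffs) > 1:
        r = newton_root(coeffs, x0)
        roots.append(r)
        coeffs = deflate(coeffs, r)
        x0 = r   # previous root bounds the next largest root from above
    return roots
```

Running this on the example polynomial with the initial guess 8 recovers the roots 7, 3, 2, −3, −5, −8 in the order the article describes.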
Horner's method can be modified to compute the divided difference (p(y) − p(x))/(y − x). Given the polynomial (as before)

$$p(x)=\sum_{i=0}^{n}a_i x^i=a_0+a_1x+a_2x^2+a_3x^3+\cdots+a_nx^n,$$

proceed as follows:[10]

$$\begin{aligned}b_n&=a_n,&\quad d_n&=b_n,\\b_{n-1}&=a_{n-1}+b_nx,&\quad d_{n-1}&=b_{n-1}+d_ny,\\&\ \ \vdots&\quad&\ \ \vdots\\b_1&=a_1+b_2x,&\quad d_1&=b_1+d_2y,\\b_0&=a_0+b_1x.\end{aligned}$$

At completion, we have

$$\begin{aligned}p(x)&=b_0,\\\frac{p(y)-p(x)}{y-x}&=d_1,\\p(y)&=b_0+(y-x)d_1.\end{aligned}$$

This computation of the divided difference is subject to less round-off error than evaluating p(x) and p(y) separately, particularly when x ≈ y. Substituting y = x in this method gives d_1 = p′(x), the derivative of p(x).

Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation",[12] was read before the Royal Society of London at its meeting on July 1, 1819, with a sequel in 1823.[12] Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations.
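The b/d recurrence above translates directly into code. A minimal sketch (the function name is ours), with coefficients given lowest degree first so the indices match the formulas:

```python
def divided_difference(coeffs, x, y):
    """Compute p(x) and (p(y)-p(x))/(y-x) with the modified Horner
    recurrence b_i = a_i + b_{i+1}*x, d_i = b_i + d_{i+1}*y.
    coeffs are a_0..a_n (lowest degree first)."""
    n = len(coeffs) - 1
    if n == 0:
        return coeffs[0], 0.0   # constant polynomial: divided difference is 0
    b = coeffs[n]               # b_n = a_n
    d = b                       # d_n = b_n
    for i in range(n - 1, 0, -1):
        b = coeffs[i] + b * x   # b_i
        d = b + d * y           # d_i
    b = coeffs[0] + b * x       # b_0 = p(x)
    return b, d                 # p(x), (p(y)-p(x))/(y-x) = d_1
```

Passing y = x returns the derivative p′(x) in the second slot, as the text notes.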
Fuller[13] showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method", and that in consequence the priority for this method should go to Holdred (1820). Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonnycastle's book on algebra, though he neglected the work of Paolo Ruffini. Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to Qin Jiushao, who in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247) presents a portfolio of Horner-type methods for solving polynomial equations, based on earlier works of the 11th-century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami, in Development of Mathematics in China and Japan (Leipzig 1913), wrote: "... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way."[20] Ulrich Libbrecht concluded: "It is obvious that this procedure is a Chinese invention ... the method was not known in India."
He said that Fibonacci probably learned of it from Arabs, who perhaps borrowed it from the Chinese.[21] The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing.
https://en.wikipedia.org/wiki/Horner_scheme
Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields including scientific computing and pattern recognition, and in seemingly unrelated problems such as counting the paths through a graph.[1] Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors (perhaps over a network).

Directly applying the mathematical definition of matrix multiplication gives an algorithm that takes time on the order of n³ field operations to multiply two n×n matrices over that field (Θ(n³) in big O notation). Better asymptotic bounds on the time required to multiply matrices have been known since Strassen's algorithm in the 1960s, but the optimal time (that is, the computational complexity of matrix multiplication) remains unknown. As of April 2024, the best announced bound on the asymptotic complexity of a matrix multiplication algorithm is O(n^2.371552) time, given by Williams, Xu, Xu, and Zhou.[2][3] This improves on the bound of O(n^2.3728596) time, given by Alman and Williams.[4][5] However, this algorithm is a galactic algorithm because of the large constants and cannot be realized practically.
The definition of matrix multiplication is that if C = AB for an n×m matrix A and an m×p matrix B, then C is an n×p matrix with entries

$$c_{ij}=\sum_{k=1}^{m}a_{ik}b_{kj}.$$

From this, a simple algorithm can be constructed which loops over the indices i from 1 through n and j from 1 through p, computing the above sum with a nested loop over k. This algorithm takes time Θ(nmp) (in asymptotic notation).[1] A common simplification for the purpose of algorithm analysis is to assume that the inputs are all square matrices of size n×n, in which case the running time is Θ(n³), i.e., cubic in the size of the dimension.[6]

The three loops in iterative matrix multiplication can be arbitrarily swapped with each other without an effect on correctness or asymptotic running time. However, the order can have a considerable impact on practical performance due to the memory access patterns and cache use of the algorithm;[1] which order is best also depends on whether the matrices are stored in row-major order, column-major order, or a mix of both. In particular, in the idealized case of a fully associative cache consisting of M bytes with b bytes per cache line (i.e. M/b cache lines), the above algorithm is sub-optimal for A and B stored in row-major order. When n > M/b, every iteration of the inner loop (a simultaneous sweep through a row of A and a column of B) incurs a cache miss when accessing an element of B. This means that the algorithm incurs Θ(n³) cache misses in the worst case.
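The definitional triple loop can be sketched in Python (a minimal reference implementation, not optimized for cache behavior; the function name is ours):

```python
def matmul_naive(A, B):
    """Multiply an n×m matrix A by an m×p matrix B (lists of lists)
    with the definitional triple loop: c_ij = sum_k a_ik * b_kj."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must agree"
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0
            for k in range(m):   # inner loop: row of A against column of B
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C
```

The three loops may be permuted freely without affecting the result; only the memory access pattern changes.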
As of 2010, the speed of memories compared to that of processors is such that the cache misses, rather than the actual calculations, dominate the running time for sizable matrices.[7] The optimal variant of the iterative algorithm for A and B in row-major layout is a tiled version, where the matrix is implicitly divided into square tiles of size √M by √M.[7][8] In the idealized cache model, this algorithm incurs only Θ(n³/(b√M)) cache misses; the divisor b√M amounts to several orders of magnitude on modern machines, so that the actual calculations dominate the running time, rather than the cache misses.[7]

An alternative to the iterative algorithm is the divide-and-conquer algorithm for matrix multiplication. This relies on the block partitioning

$$A=\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix},\qquad B=\begin{pmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\end{pmatrix},$$

which works for all square matrices whose dimensions are powers of two, i.e., the shapes are 2^n × 2^n for some n. The matrix product is now

$$C=\begin{pmatrix}A_{11}B_{11}+A_{12}B_{21}&A_{11}B_{12}+A_{12}B_{22}\\A_{21}B_{11}+A_{22}B_{21}&A_{21}B_{12}+A_{22}B_{22}\end{pmatrix},$$

which consists of eight multiplications of pairs of submatrices, followed by an addition step. The divide-and-conquer algorithm computes the smaller multiplications recursively, using the scalar multiplication c₁₁ = a₁₁b₁₁ as its base case. The complexity of this algorithm as a function of n is given by the recurrence[6]

$$T(1)=\Theta(1),\qquad T(n)=8T(n/2)+\Theta(n^2),$$

accounting for the eight recursive calls on matrices of size n/2 and Θ(n²) to sum the four pairs of resulting matrices element-wise. Application of the master theorem for divide-and-conquer recurrences shows this recursion to have the solution Θ(n³), the same as the iterative algorithm.[6]

A variant of this algorithm that works for matrices of arbitrary shapes and is faster in practice[7] splits matrices in two instead of four submatrices, as follows.[9] Splitting a matrix now means dividing it into two parts of equal size, or as close to equal sizes as possible in the case of odd dimensions.
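The tiled variant can be sketched in Python over plain lists of lists (the function name and the tile parameter are ours; in practice the tile size would be chosen so that three tile×tile blocks fit in fast memory, i.e. roughly √M for cache size M):

```python
def matmul_tiled(A, B, tile=2):
    """Tiled (blocked) n×n matrix multiplication.  The three outer loops
    walk over tile×tile blocks, so each block of A, B and C is reused
    many times while it is resident in cache."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                # multiply block (ii,kk) of A by block (kk,jj) of B,
                # accumulating into block (ii,jj) of C
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        s = C[i][j]
                        for k in range(kk, min(kk + tile, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C
```

With tile = n this degenerates to the naïve algorithm; smaller tiles trade loop overhead for cache reuse.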
The cache miss rate of recursive matrix multiplication is the same as that of a tiled iterative version, but unlike that algorithm, the recursive algorithm is cache-oblivious:[9] there is no tuning parameter required to get optimal cache performance, and it behaves well in a multiprogramming environment where cache sizes are effectively dynamic due to other processes taking up cache space.[7] (The simple iterative algorithm is cache-oblivious as well, but much slower in practice if the matrix layout is not adapted to the algorithm.) On a machine with M lines of ideal cache, each of size b bytes, the number of cache misses incurred by this algorithm is bounded by the same O(n³/(b√M)) expression as the tiled version.[9]: 13

Algorithms exist that provide better running times than the straightforward ones. The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". It is based on a way of multiplying two 2×2 matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with a multiplicative cost of O(n^{log₂ 7}) ≈ O(n^2.807). Strassen's algorithm is more complex, and the numerical stability is reduced compared to the naïve algorithm,[10] but it is faster in cases where n > 100 or so[1] and appears in several libraries, such as BLAS.[11] It is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue. Since Strassen's algorithm is actually used in practical numerical software and computer algebra systems, improving on the constants hidden in the big-O notation has its merits. A table that compares key aspects of the improved version based on recursive multiplication of 2×2-block matrices via 7 block matrix multiplications follows.
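Strassen's seven-product scheme can be sketched for matrices whose side is a power of two (this is the standard published set of products M₁..M₇; the helper names are ours, and a practical implementation would fall back to the naïve algorithm below some cutoff size rather than recursing down to 1×1):

```python
def strassen(A, B):
    """Strassen's algorithm for n×n matrices (lists of lists), n a power
    of two.  Uses 7 recursive block products instead of 8."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def split(M):
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a11, a12, a21, a22 = split(A)
    b11, b12, b21, b22 = split(B)
    m1 = strassen(add(a11, a22), add(b11, b22))
    m2 = strassen(add(a21, a22), b11)
    m3 = strassen(a11, sub(b12, b22))
    m4 = strassen(a22, sub(b21, b11))
    m5 = strassen(add(a11, a12), b22)
    m6 = strassen(sub(a21, a11), add(b11, b12))
    m7 = strassen(sub(a12, a22), add(b21, b22))
    c11 = add(sub(add(m1, m4), m5), m7)
    c12 = add(m3, m5)
    c21 = add(m2, m4)
    c22 = add(sub(add(m1, m3), m2), m6)
    return ([r1 + r2 for r1, r2 in zip(c11, c12)] +
            [r1 + r2 for r1, r2 in zip(c21, c22)])
```

The extra additions and subtractions are the price paid for saving one multiplication per 2×2 block step; applied recursively this yields the O(n^{log₂ 7}) cost.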
As usual, n gives the dimensions of the matrix and M designates the memory size. It is known that a Strassen-like algorithm with a 2×2-block matrix step requires at least 7 block matrix multiplications. In 1976 Probert[16] showed that such an algorithm requires at least 15 additions (including subtractions); however, a hidden assumption was that the blocks and the 2×2-block matrix are represented in the same basis. Karstadt and Schwartz computed in different bases and traded 3 additions for less expensive basis transformations. They also proved that one cannot go below 12 additions per step using different bases. In subsequent work, Beniamini et al.[17] applied this base-change trick to more general decompositions than 2×2-block matrices and improved the leading constant for their run times.

It is an open question in theoretical computer science how well Strassen's algorithm can be improved in terms of asymptotic complexity. The matrix multiplication exponent, usually denoted ω, is the smallest real number for which any n×n matrix over a field can be multiplied together using n^{ω+o(1)} field operations. The current best bound on ω is ω < 2.371552, by Williams, Xu, Xu, and Zhou.[2][4] This algorithm, like all other recent algorithms in this line of research, is a generalization of the Coppersmith–Winograd algorithm, which was given by Don Coppersmith and Shmuel Winograd in 1990.[18] The conceptual idea of these algorithms is similar to Strassen's algorithm: a way is devised for multiplying two k×k matrices with fewer than k³ multiplications, and this technique is applied recursively.
However, the constant coefficient hidden by the big-O notation is so large that these algorithms are only worthwhile for matrices that are too large to handle on present-day computers.[19][20] Victor Pan proposed so-called feasible sub-cubic matrix multiplication algorithms with an exponent slightly above 2.77, but in return with a much smaller hidden constant coefficient.[21]

Freivalds' algorithm is a simple Monte Carlo algorithm that, given matrices A, B and C, verifies in Θ(n²) time whether AB = C.

In 2022, DeepMind introduced AlphaTensor, a neural network that used a single-player game analogy to invent thousands of matrix multiplication algorithms, including some previously discovered by humans and some that were not.[22] Operations were restricted to the non-commutative ground field (normal arithmetic) and the finite field Z/2Z (mod 2 arithmetic). The best "practical" (explicit low-rank decomposition of a matrix multiplication tensor) algorithm found ran in O(n^2.778).[23] Finding low-rank decompositions of such tensors (and beyond) is NP-hard; optimal multiplication even for 3×3 matrices remains unknown, even in a commutative field.[23] On 4×4 matrices, AlphaTensor unexpectedly discovered a solution with 47 multiplication steps, an improvement over the 49 required with Strassen's algorithm of 1969, albeit restricted to mod 2 arithmetic. Similarly, AlphaTensor solved 5×5 matrices with 96 rather than Strassen's 98 steps. Based on the surprising discovery that such improvements exist, other researchers were quickly able to find a similar independent 4×4 algorithm, and separately tweaked DeepMind's 96-step 5×5 algorithm down to 95 steps in mod 2 arithmetic and to 97[24] in normal arithmetic.[25] Some algorithms were completely new: for example, (4, 5, 5) was improved to 76 steps from a baseline of 80 in both normal and mod 2 arithmetic.
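Freivalds' verification idea admits a short sketch: instead of recomputing AB (cubic time), multiply both sides by a random 0/1 vector r and compare A(Br) with Cr, which costs only Θ(n²) per round. The function name is ours; each round catches a wrong C with probability at least 1/2, so k rounds give error probability at most 2^−k:

```python
import random

def freivalds(A, B, C, rounds=20):
    """Monte Carlo check that A·B = C for n×n matrices.
    Returns False only when AB != C (certainly); returns True when
    AB = C, or with probability <= 2**-rounds when it is not."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][k] * v[k] for k in range(n)) for i in range(n)]
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # witness found: AB != C
    return True
```

The asymmetry is typical of Monte Carlo verifiers: a False answer is always correct, while a True answer is correct with high probability.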
The divide-and-conquer algorithm sketched earlier can be parallelized in two ways for shared-memory multiprocessors. These are based on the fact that the eight recursive matrix multiplications can be performed independently of each other, as can the four summations (although the algorithm needs to "join" the multiplications before doing the summations). Exploiting the full parallelism of the problem, one obtains an algorithm that can be expressed in fork–join style pseudocode.[26] Here, fork is a keyword that signals a computation may be run in parallel with the rest of the function call, while join waits for all previously "forked" computations to complete; partition achieves its goal by pointer manipulation only.

This algorithm has a critical path length of Θ(log² n) steps, meaning it takes that much time on an ideal machine with an infinite number of processors; therefore, it has a maximum possible speedup of Θ(n³/log² n) on any real computer. The algorithm isn't practical due to the communication cost inherent in moving data to and from the temporary matrix T, but a more practical variant achieves Θ(n²) speedup without using a temporary matrix.[26]

On modern architectures with hierarchical memory, the cost of loading and storing input matrix elements tends to dominate the cost of arithmetic. On a single machine this is the amount of data transferred between RAM and cache, while on a distributed-memory multi-node machine it is the amount transferred between nodes; in either case it is called the communication bandwidth. The naïve algorithm using three nested loops uses Ω(n³) communication bandwidth.
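One level of the fork–join scheme can be sketched in Python using a thread pool: the eight block products are "forked" as tasks, and the results are "joined" before the element-wise additions. This is an illustrative sketch only (function names are ours, the recursion is unrolled to a single level, and n is assumed even):

```python
from concurrent.futures import ThreadPoolExecutor

def matmul(A, B):
    """Plain n×n multiplication used as the leaf task."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def parallel_block_matmul(A, B):
    """Fork the eight block products C_pq = A_p1*B_1q + A_p2*B_2q to a
    thread pool, join them, then do the four block summations."""
    n = len(A)
    h = n // 2
    blk = lambda M, r, c: [row[c*h:(c+1)*h] for row in M[r*h:(r+1)*h]]
    with ThreadPoolExecutor() as pool:
        futs = {}
        for p in (0, 1):
            for q in (0, 1):
                futs[(p, q, 0)] = pool.submit(matmul, blk(A, p, 0), blk(B, 0, q))
                futs[(p, q, 1)] = pool.submit(matmul, blk(A, p, 1), blk(B, 1, q))
        C = [[0] * n for _ in range(n)]
        for p in (0, 1):
            for q in (0, 1):
                T0 = futs[(p, q, 0)].result()   # join the forked products
                T1 = futs[(p, q, 1)].result()
                for i in range(h):
                    for j in range(h):
                        C[p*h + i][q*h + j] = T0[i][j] + T1[i][j]
    return C
```

Applied recursively inside each leaf task, this is exactly the fork–join algorithm the text describes; the temporary results T0 and T1 play the role of the temporary matrix T.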
Cannon's algorithm, also known as the 2D algorithm, is a communication-avoiding algorithm that partitions each input matrix into a block matrix whose elements are submatrices of size √(M/3) by √(M/3), where M is the size of fast memory.[27] The naïve algorithm is then used over the block matrices, computing products of submatrices entirely in fast memory. This reduces communication bandwidth to O(n³/√M), which is asymptotically optimal (for algorithms performing Ω(n³) computation).[28][29]

In a distributed setting with p processors arranged in a √p by √p 2D mesh, one submatrix of the result can be assigned to each processor, and the product can be computed with each processor transmitting O(n²/√p) words, which is asymptotically optimal assuming that each node stores the minimum O(n²/p) elements.[29] This can be improved by the 3D algorithm, which arranges the processors in a 3D cube mesh, assigning every product of two input submatrices to a single processor. The result submatrices are then generated by performing a reduction over each row.[30] This algorithm transmits O(n²/p^{2/3}) words per processor, which is asymptotically optimal.[29] However, this requires replicating each input matrix element p^{1/3} times, and so requires a factor of p^{1/3} more memory than is needed to store the inputs. This algorithm can be combined with Strassen to further reduce runtime.[30] "2.5D" algorithms provide a continuous tradeoff between memory usage and communication bandwidth.[31] On modern distributed computing environments such as MapReduce, specialized multiplication algorithms have been developed.[32]

There are a variety of algorithms for multiplication on meshes. For multiplication of two n×n matrices on a standard two-dimensional mesh using the 2D Cannon's algorithm, one can complete the multiplication in 3n − 2 steps, although this is reduced to half this number for repeated computations.[33] The standard array is inefficient because the data from the two matrices does not arrive simultaneously and it must be padded with zeroes.
The result is even faster on a two-layered cross-wired mesh, where only 2n − 1 steps are needed.[34] The performance improves further for repeated computations, leading to 100% efficiency.[35] The cross-wired mesh array may be seen as a special case of a non-planar (i.e. multilayered) processing structure.[36] In a 3D mesh with n³ processing elements, two matrices can be multiplied in O(log n) steps using the DNS algorithm.[37]
https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm