An imaginary number is the product of a real number and the imaginary unit i,[note 1] which is defined by its property i² = −1.[1][2] The square of an imaginary number bi is −b². For example, 5i is an imaginary number, and its square is −25. The number zero is considered to be both real and imaginary.[3]

Originally coined in the 17th century by René Descartes[4] as a derogatory term and regarded as fictitious or useless, the concept gained wide acceptance following the work of Leonhard Euler (in the 18th century) and Augustin-Louis Cauchy and Carl Friedrich Gauss (in the early 19th century). An imaginary number bi can be added to a real number a to form a complex number of the form a + bi, where the real numbers a and b are called, respectively, the real part and the imaginary part of the complex number.[5]

Although the Greek mathematician and engineer Heron of Alexandria is noted as the first to present a calculation involving the square root of a negative number,[6][7] it was Rafael Bombelli who first set down the rules for multiplication of complex numbers in 1572. The concept had appeared in print earlier, such as in work by Gerolamo Cardano. At the time, imaginary numbers and negative numbers were poorly understood and were regarded by some as fictitious or useless, much as zero once was. Many other mathematicians were slow to adopt the use of imaginary numbers, including René Descartes, who wrote about them in his La Géométrie, in which he coined the term imaginary and meant it to be derogatory.[8][9] The use of imaginary numbers was not widely accepted until the work of Leonhard Euler (1707–1783) and Carl Friedrich Gauss (1777–1855). The geometric significance of complex numbers as points in a plane was first described by Caspar Wessel (1745–1818).[10]

In 1843, William Rowan Hamilton extended the idea of an axis of imaginary numbers in the plane to a four-dimensional space of quaternion imaginaries in which three of the dimensions are analogous to the imaginary numbers in the complex field.
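The defining identity can be checked directly in any language with built-in complex arithmetic; a minimal sketch in Python, where the imaginary unit i is written 1j:

```python
# i² = −1, and the square of an imaginary number bi is −b².
i = 1j
print(i ** 2)         # (-1+0j)
print((5j) ** 2)      # (-25+0j): the square of 5i is −25
b = 3.0
print((b * 1j) ** 2)  # (-9+0j): −b² for b = 3
```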
Geometrically, imaginary numbers are found on the vertical axis of the complex number plane, which allows them to be presented perpendicular to the real axis. One way of viewing imaginary numbers is to consider a standard number line positively increasing in magnitude to the right and negatively increasing in magnitude to the left. At 0 on the x-axis, a y-axis can be drawn with "positive" direction going up; "positive" imaginary numbers then increase in magnitude upwards, and "negative" imaginary numbers increase in magnitude downwards. This vertical axis is often called the "imaginary axis"[11] and is denoted iℝ, 𝕀, or ℑ.[12]

In this representation, multiplication by i corresponds to a counterclockwise rotation of 90 degrees about the origin, which is a quarter of a circle. Multiplication by −i corresponds to a clockwise rotation of 90 degrees about the origin. Similarly, multiplying by a purely imaginary number bi, with b a real number, both causes a counterclockwise rotation about the origin by 90 degrees and scales the answer by a factor of b. When b < 0, this can instead be described as a clockwise rotation by 90 degrees and a scaling by |b|.[13]

Care must be used when working with imaginary numbers that are expressed as the principal values of the square roots of negative numbers.[14] For example, if x and y are both positive real numbers, the following chain of equalities appears reasonable at first glance:

√(xy) = √((−x)(−y)) = √(−x) · √(−y) = (i√x)(i√y) = i²√(xy) = −√(xy)

But the result is clearly nonsense. The step where the square root was broken apart was illegitimate. (See Mathematical fallacy.)
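The rotation interpretation above can be verified numerically; a small Python sketch using built-in complex numbers:

```python
# Multiplying by i rotates a point 90° counterclockwise about the origin;
# multiplying by −i rotates it clockwise; multiplying by bi also scales by b.
z = 3 + 2j     # the point (3, 2)
print(z * 1j)  # (-2+3j): 90° counterclockwise
print(z * -1j) # (2-3j): 90° clockwise
print(z * 2j)  # (-4+6j): rotated 90° and scaled by b = 2
```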
https://en.wikipedia.org/wiki/Imaginary_number
In mathematics education, a number sentence is an equation or inequality expressed using numbers and mathematical symbols. The term is used in primary-level mathematics teaching in the US,[1] Canada, UK,[2] Australia, New Zealand[3] and South Africa.[4]

The term is used as a means of asking students to write down equations using simple mathematical symbols (numerals, the four main basic mathematical operators, equality symbol).[5] Sometimes boxes or shapes are used to indicate unknown values. As such, number sentences are used to introduce students to notions of structure and elementary algebra prior to a more formal treatment of these concepts. A number sentence without unknowns is equivalent to a logical proposition expressed using the notation of arithmetic.

Some students will use a direct computational approach. They will carry out the addition 26 + 39 = 65, put 65 = 26 + ◻, and then find that ◻ = 39.
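The computational approach described above is trivial to mirror in code; a minimal Python sketch:

```python
# Direct computation for the number sentence 65 = 26 + □:
total = 26 + 39    # carry out the addition
box = total - 26   # solve 65 = 26 + □ for the unknown
print(total, box)  # 65 39
```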
https://en.wikipedia.org/wiki/Number_sentence
Signed zero is zero with an associated sign. In ordinary arithmetic, the number 0 does not have a sign, so that −0, +0 and 0 are equivalent. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero), regarded as equal by the numerical comparison operations but with possibly different behaviors in particular operations. This occurs in the sign-magnitude and ones' complement signed number representations for integers, and in most floating-point number representations. The number 0 is usually encoded as +0, but can still be represented by +0, −0, or 0. The IEEE 754 standard for floating-point arithmetic (presently used by most computers and programming languages that support floating-point numbers) requires both +0 and −0. Real arithmetic with signed zeros can be considered a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞; division is undefined only for ±0/±0 and ±∞/±∞.

Negatively signed zero echoes the mathematical analysis concept of approaching 0 from below as a one-sided limit, which may be denoted by x → 0⁻, x → 0−, or x → ↑0. The notation "−0" may be used informally to denote a negative number that has been rounded to zero. The concept of negative zero also has some theoretical applications in statistical mechanics and other disciplines.

It is claimed that the inclusion of signed zero in IEEE 754 makes it much easier to achieve numerical accuracy in some critical problems,[1] in particular when computing with complex elementary functions.[2] On the other hand, the concept of signed zero runs contrary to the usual assumption made in mathematics that negative zero is the same value as zero. Representations that allow negative zero can be a source of errors in programs, if software developers do not take into account that while the two zero representations behave as equal under numeric comparisons, they yield different results in some operations.
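Since Python floats are IEEE 754 doubles, both zeros are observable directly; a small sketch showing that −0.0 and +0.0 compare equal yet carry different signs (note that Python itself raises an exception on float division by zero, so the ±∞ quotients are not shown here):

```python
import math

# IEEE 754 doubles (Python floats) have both +0.0 and −0.0.
print(-0.0 == 0.0)               # True: equal under numeric comparison
print(str(-0.0), str(0.0))       # -0.0 0.0: but printed differently
print(math.copysign(1.0, -0.0))  # -1.0: the sign is recoverable
# The sign affects some operations, e.g. atan2:
print(math.atan2(0.0, -1.0))     # pi
print(math.atan2(-0.0, -1.0))    # -pi
```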
Binary integer formats can use various encodings. In the widely used two's complement encoding, zero is unsigned. In a 1+7-bit sign-and-magnitude representation for integers, negative zero is represented by the bit string 1000 0000. In an 8-bit ones' complement representation, negative zero is represented by the bit string 1111 1111. In all three of these encodings, positive or unsigned zero is represented by 0000 0000. However, the latter two encodings (with a signed zero) are uncommon for integer formats. The most common formats with a signed zero are floating-point formats (IEEE 754 formats or similar), described below.

In IEEE 754 binary floating-point formats, zero values are represented by the biased exponent and significand both being zero. Negative zero has the sign bit set to one. One may obtain negative zero as the result of certain computations, for instance as the result of arithmetic underflow on a negative number (other results may also be possible), or −1.0 × 0.0, or simply as −0.0. In IEEE 754 decimal floating-point formats, a negative zero is represented by an exponent being any valid exponent in the range for the format, the true significand being zero, and the sign bit being one.

The IEEE 754 floating-point standard specifies the behavior of positive zero and negative zero under various operations. The outcome may depend on the current IEEE rounding mode settings. In systems that include both signed and unsigned zeros, the notation 0⁺ and 0⁻ is sometimes used for signed zeros. Addition and multiplication are commutative, but there are some special rules that have to be followed, which mean the usual mathematical rules for algebraic simplification may not apply. The = sign below shows the obtained floating-point results (it is not the usual equality operator).
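The bit patterns above can be inspected from Python's standard library; a sketch (the integer encodings are shown as formatted constants, since Python integers are not fixed-width):

```python
import struct

# −0 bit strings in the 8-bit integer encodings described above:
print(f"{0b10000000:08b}")  # 10000000: sign-and-magnitude −0
print(f"{0b11111111:08b}")  # 11111111: ones' complement −0
print(f"{0b00000000:08b}")  # 00000000: +0 in all three encodings

# IEEE 754 double −0.0: sign bit one, exponent and significand all zero.
bits = struct.unpack(">Q", struct.pack(">d", -0.0))[0]
print(bits == 1 << 63)      # True: only the top (sign) bit is set
# And −1.0 × 0.0 indeed produces negative zero:
print(struct.pack(">d", -1.0 * 0.0) == struct.pack(">d", -0.0))  # True
```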
The usual rule for signs is always followed when multiplying or dividing. There are special rules for adding or subtracting signed zero: because of negative zero (and also when the rounding mode is upward or downward), the expressions −(x − y) and (−x) − (−y), for floating-point variables x and y, cannot be replaced by y − x. However, (−0) + x can be replaced by x with rounding to nearest (except when x can be a signaling NaN).

Some other special rules: division of a non-zero number by zero sets the divide-by-zero flag, and an operation producing a NaN sets the invalid-operation flag. An exception handler is called if enabled for the corresponding flag.

According to the IEEE 754 standard, negative zero and positive zero should compare as equal with the usual (numerical) comparison operators, like the == operators of C and Java. In those languages, special programming tricks may be needed to distinguish the two values. Note: casting to an integral type will not always work, especially on two's complement systems. However, some programming languages may provide alternative comparison operators that do distinguish the two zeros. This is the case, for example, of the equals method in Java's Double wrapper class.[4]

Informally, one may use the notation "−0" for a negative value that was rounded to zero. This notation may be useful when a negative sign is significant; for example, when tabulating Celsius temperatures, where a negative sign means below freezing.

In statistical mechanics, one sometimes uses negative temperatures to describe systems with population inversion, which can be considered to have a temperature greater than positive infinity, because the coefficient of energy in the population distribution function is −1/temperature. In this context, a temperature of −0 is a (theoretical) temperature larger than any other negative temperature, corresponding to the (theoretical) maximum conceivable extent of population inversion, the opposite extreme to +0.[5]
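Two of the points above, that −(x − y) is not interchangeable with y − x and that special tricks are needed to tell the zeros apart, can be sketched in Python:

```python
import math

x = y = 1.5
a = -(x - y)   # −(+0.0) = −0.0
b = y - x      # +0.0
print(a == b)  # True: the == comparison cannot tell them apart
# Distinguishing tricks: copysign, or the repr of the value.
print(math.copysign(1.0, a), math.copysign(1.0, b))  # -1.0 1.0
print(repr(a), repr(b))                              # -0.0 0.0
```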
https://en.wikipedia.org/wiki/Signed_zero
0 (zero) is a number representing an empty quantity. Adding (or subtracting) 0 to any number leaves that number unchanged; in mathematical terminology, 0 is the additive identity of the integers, rational numbers, real numbers, and complex numbers, as well as other algebraic structures. Multiplying any number by 0 results in 0, and consequently division by zero has no meaning in arithmetic.

As a numerical digit, 0 plays a crucial role in decimal notation: it indicates that the power of ten corresponding to the place containing a 0 does not contribute to the total. For example, "205" in decimal means two hundreds, no tens, and five ones. The same principle applies in place-value notations that use a base other than ten, such as binary and hexadecimal. The modern use of 0 in this manner derives from Indian mathematics that was transmitted to Europe via medieval Islamic mathematicians and popularized by Fibonacci. It was independently used by the Maya.

Common names for the number 0 in English include zero, nought, naught (/nɔːt/), and nil. In contexts where at least one adjacent digit distinguishes it from the letter O, the number is sometimes pronounced as oh or o (/oʊ/). Informal or slang terms for 0 include zilch and zip. Historically, ought, aught (/ɔːt/), and cipher have also been used.

The word zero came into the English language via French zéro from the Italian zero, a contraction of the Venetian zevero form of Italian zefiro via ṣafira or ṣifr.[1] In pre-Islamic times the word ṣifr (Arabic صفر) had the meaning "empty".[2] Sifr evolved to mean zero when it was used to translate śūnya (Sanskrit: शून्य) from India.[2] The first known English use of zero was in 1598.[3]

The Italian mathematician Fibonacci (c. 1170 – c. 1250), who grew up in North Africa and is credited with introducing the decimal system to Europe, used the term zephyrum. This became zefiro in Italian, and was then contracted to zero in Venetian.
The Italian word zefiro was already in existence (meaning "west wind", from Latin and Greek Zephyrus) and may have influenced the spelling when transcribing Arabic ṣifr.[4]

Depending on the context, there may be different words used for the number zero, or the concept of zero. For the simple notion of lacking, the words "nothing" (although this is not accurate) and "none" are often used. The British English words "nought" or "naught", and "nil" are also synonymous.[5][6]

It is often called "oh" in the context of reading out a string of digits, such as telephone numbers, street addresses, credit card numbers, military time, or years. For example, the area code 201 may be pronounced "two oh one", and the year 1907 is often pronounced "nineteen oh seven". The presence of other digits, indicating that the string contains only numbers, avoids confusion with the letter O. For this reason, systems that include strings with both letters and numbers (such as Canadian postal codes) may exclude the use of the letter O.[citation needed]

Slang words for zero include "zip", "zilch", "nada", and "scratch".[7] In the context of sports, "nil" is sometimes used, especially in British English. Several sports have specific words for a score of zero, such as "love" in tennis – from French l'œuf, "the egg" – and "duck" in cricket, a shortening of "duck's egg". "Goose egg" is another general slang term used for zero.[7]

Ancient Egyptian numerals were of base 10.[8] They used hieroglyphs for the digits and were not positional. In one papyrus written around 1770 BC, a scribe recorded daily incomes and expenditures for the pharaoh's court, using the nfr hieroglyph to indicate cases where the amount of a foodstuff received was exactly equal to the amount disbursed. Egyptologist Alan Gardiner suggested that the nfr hieroglyph was being used as a symbol for zero.
The same symbol was also used to indicate the base level in drawings of tombs and pyramids, and distances were measured relative to the base line as being above or below this line.[9]

By the middle of the 2nd millennium BC, Babylonian mathematics had a sophisticated base-60 positional numeral system. The lack of a positional value (or zero) was indicated by a space between sexagesimal numerals. In a tablet unearthed at Kish (dating to as early as 700 BC), the scribe Bêl-bân-aplu used three hooks as a placeholder in the same Babylonian system.[10] By 300 BC, a punctuation symbol (two slanted wedges) was repurposed as a placeholder.[11][12]

The Babylonian positional numeral system differed from the later Hindu–Arabic system in that it did not explicitly specify the magnitude of the leading sexagesimal digit, so that for example the lone digit 1 might represent any of 1, 60, 3600 = 60², etc., similar to the significand of a floating-point number but without an explicit exponent, and so only distinguished implicitly from context. The zero-like placeholder mark was only ever used in between digits, but never alone or at the end of a number.[13]

The Mesoamerican Long Count calendar developed in south-central Mexico and Central America required the use of zero as a placeholder within its vigesimal (base-20) positional numeral system.
Many different glyphs, including the partial quatrefoil, were used as a zero symbol for these Long Count dates, the earliest of which (on Stela 2 at Chiapa de Corzo, Chiapas) has a date of 36 BC.[a][14]

Since the eight earliest Long Count dates appear outside the Maya homeland,[15] it is generally believed that the use of zero in the Americas predated the Maya and was possibly the invention of the Olmecs.[16] Many of the earliest Long Count dates were found within the Olmec heartland, although the Olmec civilization ended by the 4th century BC,[17] several centuries before the earliest known Long Count dates.[18]

Although zero became an integral part of Maya numerals, with a different, empty tortoise-like "shell shape" used for many depictions of the "zero" numeral, it is assumed not to have influenced Old World numeral systems.[citation needed]

Quipu, a knotted cord device used in the Inca Empire and its predecessor societies in the Andean region to record accounting and other digital data, is encoded in a base-ten positional system. Zero is represented by the absence of a knot in the appropriate position.[19]

The ancient Greeks had no symbol for zero (μηδέν, pronounced 'midén'), and did not use a digit placeholder for it.[20] According to mathematician Charles Seife, the ancient Greeks did begin to adopt the Babylonian placeholder zero for their work in astronomy after 500 BC, representing it with the lowercase Greek letter ό (όμικρον: omicron). However, after using the Babylonian placeholder zero for astronomical calculations they would typically convert the numbers back into Greek numerals. Greeks seemed to have a philosophical opposition to using zero as a number.[21] Other scholars give the Greek partial adoption of the Babylonian zero a later date, with neuroscientist Andreas Nieder giving a date of after 400 BC and mathematician Robert Kaplan dating it after the conquests of Alexander.[22][23]

Greeks seemed unsure about the status of zero as a number.
Some of them asked themselves, "How can not being be?", leading to philosophical and, by the medieval period, religious arguments about the nature and existence of zero and the vacuum. The paradoxes of Zeno of Elea depend in large part on the uncertain interpretation of zero.[24]

By AD 150, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for zero (—°)[25][26] in his work on mathematical astronomy called the Syntaxis Mathematica, also known as the Almagest.[27] This Hellenistic zero was perhaps the earliest documented use of a numeral representing zero in the Old World.[28] Ptolemy used it many times in his Almagest (VI.8) for the magnitude of solar and lunar eclipses. It represented the value of both digits and minutes of immersion at first and last contact. Digits varied continuously from 0 to 12 to 0 as the Moon passed over the Sun (a triangular pulse), where twelve digits was the angular diameter of the Sun. Minutes of immersion was tabulated from 0′0″ to 31′20″ to 0′0″, where 0′0″ used the symbol as a placeholder in two positions of his sexagesimal positional numeral system,[b] while the combination meant a zero angle. Minutes of immersion was also a continuous function, (1/12) · 31′20″ · √(d(24 − d)) (a triangular pulse with convex sides), where d was the digit function and 31′20″ was the sum of the radii of the Sun's and Moon's discs.[29] Ptolemy's symbol was a placeholder as well as a number used by two continuous mathematical functions, one within another, so it meant zero, not none.
Over time, Ptolemy's zero tended to increase in size and lose the overline, sometimes depicted as a large elongated 0-like omicron "Ο" or as omicron with overline "ō" instead of a dot with overline.[30]

The earliest use of zero in the calculation of the Julian Easter occurred before AD 311, at the first entry in a table of epacts as preserved in an Ethiopic document for the years 311 to 369, using a Geʽez word for "none" (English translation is "0" elsewhere) alongside Geʽez numerals (based on Greek numerals), which was translated from an equivalent table published by the Church of Alexandria in Medieval Greek.[31] This use was repeated in 525 in an equivalent table, which was translated via the Latin nulla ("none") by Dionysius Exiguus, alongside Roman numerals.[32] When division produced zero as a remainder, nihil, meaning "nothing", was used. These medieval zeros were used by all future medieval calculators of Easter. The initial "N" was used as a zero symbol in a table of Roman numerals by Bede—or his colleagues—around AD 725.[33]

In most cultures, 0 was identified before the idea of negative things (i.e., quantities less than zero) was accepted.[citation needed]

The Sūnzĭ Suànjīng, of unknown date but estimated to be from the 1st to 5th centuries AD, describes how the 4th-century BC Chinese counting rods system enabled one to perform decimal calculations.
As noted in the Xiahou Yang Suanjing (425–468 AD), to multiply or divide a number by 10, 100, 1000, or 10000, all one needs to do, with rods on the counting board, is to move them forwards, or back, by 1, 2, 3, or 4 places.[35] The rods gave the decimal representation of a number, with an empty space denoting zero.[34][36] The counting rod system is a positional notation system.[37][38]

Zero was not treated as a number at that time, but as a "vacant position".[39] Qín Jiǔsháo's 1247 Mathematical Treatise in Nine Sections is the oldest surviving Chinese mathematical text using a round symbol '〇' for zero.[40] The origin of this symbol is unknown; it may have been produced by modifying a square symbol.[41] Chinese authors had been familiar with the idea of negative numbers by the Han dynasty (2nd century AD), as seen in The Nine Chapters on the Mathematical Art.[42]

Pingala (c. 3rd or 2nd century BC),[43] a Sanskrit prosody scholar,[44] used binary sequences, in the form of short and long syllables (the latter equal in length to two short syllables), to identify the possible valid Sanskrit meters, a notation similar to Morse code.[45] Pingala used the Sanskrit word śūnya explicitly to refer to zero.[43]

The concept of zero as a written digit in the decimal place value notation was developed in India.[47] A symbol for zero, a large dot likely to be the precursor of the still-current hollow symbol, is used throughout the Bakhshali manuscript, a practical manual on arithmetic for merchants.[48] In 2017, researchers at the Bodleian Library reported radiocarbon dating results for three samples from the manuscript, indicating that they came from three different centuries: from AD 224–383, AD 680–779, and AD 885–993. It is not known how the birch bark fragments from different centuries forming the manuscript came to be packaged together. If the writing on the oldest birch bark fragments is as old as those fragments, it represents South Asia's oldest recorded use of a zero symbol.
However, it is possible that the writing dates instead to the time period of the youngest fragments, AD 885–993. The latter dating has been argued to be more consistent with the sophisticated use of zero within the document, as portions of it appear to show zero being employed as a number in its own right, rather than only as a positional placeholder.[46][49][50]

The Lokavibhāga, a Jain text on cosmology surviving in a medieval Sanskrit translation of the Prakrit original, which is internally dated to AD 458 (Saka era 380), uses a decimal place-value system, including a zero. In this text, śūnya ("void, empty") is also used to refer to zero.[51]

The Aryabhatiya (c. 499) states sthānāt sthānaṁ daśaguṇaṁ syāt, "from place to place each is ten times the preceding".[52][53][54]

Rules governing the use of zero appeared in Brahmagupta's Brahmasputha Siddhanta (7th century), which states the sum of zero with itself as zero, and incorrectly describes division by zero in the following way:[55][56] A positive or negative number when divided by zero is a fraction with the zero as denominator. Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. Zero divided by zero is zero.

A black dot is used as a decimal placeholder in the Bakhshali manuscript, portions of which date from AD 224–993.[46]

There are numerous copper plate inscriptions, with the same small o in them, some of them possibly dated to the 6th century, but their date or authenticity may be open to doubt.[10]

A stone tablet found in the ruins of a temple near Sambor on the Mekong, Kratié Province, Cambodia, includes the inscription of "605" in Khmer numerals (a set of numeral glyphs for the Hindu–Arabic numeral system).
The number is the year of the inscription in the Saka era, corresponding to a date of AD 683.[57]

The first known use of special glyphs for the decimal digits that includes the indubitable appearance of a symbol for the digit zero, a small circle, appears on a stone inscription found at the Chaturbhuj Temple, Gwalior, in India, dated AD 876.[58][59]

The Arabic-language inheritance of science was largely Greek,[60] followed by Hindu influences.[61] In 773, at Al-Mansur's behest, translations were made of many ancient treatises including Greek, Roman, Indian, and others. In AD 813, astronomical tables were prepared by a Persian mathematician, Muḥammad ibn Mūsā al-Khwārizmī, using Hindu numerals;[61] and about 825, he published a book synthesizing Greek and Hindu knowledge that also contained his own contribution to mathematics, including an explanation of the use of zero.[62] This book was later translated into Latin in the 12th century under the title Algoritmi de numero Indorum. This title means "al-Khwarizmi on the Numerals of the Indians". The word "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name, and the word "Algorithm" or "Algorism" started to acquire a meaning of any arithmetic based on decimals.[61]

Muhammad ibn Ahmad al-Khwarizmi, in 976, stated that if no number appears in the place of tens in a calculation, a little circle should be used "to keep the rows". This circle was called ṣifr.[63]

The Hindu–Arabic numeral system (base 10) reached Western Europe in the 11th century, via Al-Andalus, through Spanish Muslims, the Moors, together with knowledge of classical astronomy and instruments like the astrolabe. Gerbert of Aurillac is credited with reintroducing the lost teachings into Catholic Europe. For this reason, the numerals came to be known in Europe as "Arabic numerals".
The Italian mathematician Fibonacci, or Leonardo of Pisa, was instrumental in bringing the system into European mathematics in 1202, stating:

After my father's appointment by his homeland as state official in the customs house of Bugia for the Pisan merchants who thronged to it, he took charge; and in view of its future usefulness and convenience, had me in my boyhood come to him and there wanted me to devote myself to and be instructed in the study of calculation for some days. There, following my introduction, as a consequence of marvelous instruction in the art, to the nine digits of the Hindus, the knowledge of the art very much appealed to me before all others, and for it I realized that all its aspects were studied in Egypt, Syria, Greece, Sicily, and Provence, with their varying methods; and at these places thereafter, while on business, I pursued my study in depth and learned the give-and-take of disputation. But all this even, and the algorism, as well as the art of Pythagoras, I considered as almost a mistake in respect to the method of the Hindus [Modus Indorum]. Therefore, embracing more stringently that method of the Hindus, and taking stricter pains in its study, while adding certain things from my own understanding and inserting also certain things from the niceties of Euclid's geometric art, I have striven to compose this book in its entirety as understandably as I could, dividing it into fifteen chapters. Almost everything which I have introduced I have displayed with exact proof, in order that those further seeking this knowledge, with its pre-eminent method, might be instructed, and further, in order that the Latin people might not be discovered to be without it, as they have been up to now. If I have perchance omitted anything more or less proper or necessary, I beg indulgence, since there is no one who is blameless and utterly provident in all things. The nine Indian figures are: 9 8 7 6 5 4 3 2 1. With these nine figures, and with the sign 0...
any number may be written.[64]

From the 13th century, manuals on calculation (adding, multiplying, extracting roots, etc.) became common in Europe, where they were called algorismus after the Persian mathematician al-Khwārizmī. One popular manual was written by Johannes de Sacrobosco in the early 1200s and was one of the earliest scientific books to be printed, in 1488.[65][66] The practice of calculating on paper using Hindu–Arabic numerals only gradually displaced calculation by abacus and recording with Roman numerals.[67] In the 16th century, Hindu–Arabic numerals became the predominant numerals used in Europe.[65]

Today, the numerical digit 0 is usually written as a circle or ellipse. Traditionally, many print typefaces made the capital letter O more rounded than the narrower, elliptical digit 0.[68] Typewriters originally made no distinction in shape between O and 0; some models did not even have a separate key for the digit 0. The distinction came into prominence on modern character displays.[68]

A slashed zero (0̸) is often used to distinguish the number from the letter (mostly in computing, navigation and in the military, for example). The digit 0 with a dot in the center seems to have originated as an option on IBM 3270 displays and has continued with some modern computer typefaces such as Andalé Mono, and in some airline reservation systems. One variation uses a short vertical bar instead of the dot. Some fonts designed for use with computers made the "0" character more squared at the edges, like a rectangle, and the "O" character more rounded. A further distinction is made in falsification-hindering typefaces as used on German car number plates by slitting open the digit 0 on the upper right side. In some systems either the letter O or the numeral 0, or both, are excluded from use, to avoid confusion.
The concept of zero plays multiple roles in mathematics: as a digit, it is an important part of positional notation for representing numbers, while it also plays an important role as a number in its own right in many algebraic settings.

In positional number systems (such as the usual decimal notation for representing numbers), the digit 0 plays the role of a placeholder, indicating that certain powers of the base do not contribute. For example, the decimal number 205 is the sum of two hundreds and five ones, with the 0 digit indicating that no tens are added. The digit plays the same role in decimal fractions and in the decimal representation of other real numbers (indicating whether any tenths, hundredths, thousandths, etc., are present) and in bases other than 10 (for example, in binary, where it indicates which powers of 2 are omitted).[69]

The number 0 is the smallest nonnegative integer, and the largest nonpositive integer. The natural number following 0 is 1 and no natural number precedes 0. The number 0 may or may not be considered a natural number,[70][71] but it is an integer, and hence a rational number and a real number.[72] All rational numbers are algebraic numbers, including 0. When the real numbers are extended to form the complex numbers, 0 becomes the origin of the complex plane. The number 0 can be regarded as neither positive nor negative[73] or, alternatively, both positive and negative[74] and is usually displayed as the central number in a number line. Zero is even[75] (that is, a multiple of 2), and is also an integer multiple of any other integer, rational, or real number. It is neither a prime number nor a composite number: it is not prime because prime numbers are greater than 1 by definition, and it is not composite because it cannot be expressed as the product of two smaller natural numbers.[76] (However, the singleton set {0} is a prime ideal in the ring of the integers.)

The following are some basic rules for dealing with the number 0.
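The digit-versus-number distinction above is easy to illustrate; a brief Python sketch:

```python
# 0 as a placeholder digit: in 205 the tens place contributes nothing.
print(2 * 10**2 + 0 * 10**1 + 5 * 10**0)  # 205
print(0b101)                              # 5: the middle 0 omits 2¹
# 0 as a number: additive identity, absorbing under multiplication, and even.
x = 41
print(x + 0 == x, x * 0)  # True 0
print(0 % 2 == 0)         # True: zero is even
```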
These rules apply for any real or complex number x, unless otherwise stated. The expression 0/0, which may be obtained in an attempt to determine the limit of an expression of the form f(x)/g(x) as a result of applying the lim operator independently to both operands of the fraction, is a so-called "indeterminate form". That does not mean that the limit sought is necessarily undefined; rather, it means that the limit of f(x)/g(x), if it exists, must be found by another method, such as l'Hôpital's rule.[78]

The sum of 0 numbers (the empty sum) is 0, and the product of 0 numbers (the empty product) is 1. The factorial 0! evaluates to 1, as a special case of the empty product.[79]

The role of 0 as the smallest counting number can be generalized or extended in various ways. In set theory, 0 is the cardinality of the empty set (notated as "{ }" or "∅"): if one does not have any apples, then one has 0 apples. In fact, in certain axiomatic developments of mathematics from set theory, 0 is defined to be the empty set.[80] When this is done, the empty set is the von Neumann cardinal assignment for a set with no elements, which is the empty set. The cardinality function, applied to the empty set, returns the empty set as a value, thereby assigning it 0 elements. Also in set theory, 0 is the lowest ordinal number, corresponding to the empty set viewed as a well-ordered set. In order theory (and especially its subfield lattice theory), 0 may denote the least element of a lattice or other partially ordered set.

The role of 0 as additive identity generalizes beyond elementary algebra. In abstract algebra, 0 is commonly used to denote a zero element, which is the identity element for addition (if defined on the structure under consideration) and an absorbing element for multiplication (if defined). (Such elements may also be called zero elements.) Examples include identity elements of additive groups and vector spaces. Another example is the zero function (or zero map) on a domain D.
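The empty-sum, empty-product, and 0! conventions above correspond directly to standard-library behavior; a minimal Python check:

```python
import math

print(sum([]))            # 0: the empty sum
print(math.prod([]))      # 1: the empty product
print(math.factorial(0))  # 1: 0! as a special case of the empty product
```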
This is the constant function with 0 as its only possible output value; that is, it is the function f defined by f(x) = 0 for all x in D. As a function from the real numbers to the real numbers, the zero function is the only function that is both even and odd. The number 0 is also used in several other ways within various branches of mathematics.

The value zero plays a special role for many physical quantities. For some quantities, the zero level is naturally distinguished from all other levels, whereas for others it is more or less arbitrarily chosen. For example, for an absolute temperature (typically measured in kelvins), zero is the lowest possible value. (Negative temperatures can be defined for some physical systems, but negative-temperature systems are not actually colder.) This is in contrast to temperatures on the Celsius scale, for example, where zero is arbitrarily defined to be at the freezing point of water.[83][84] Measuring sound intensity in decibels or phons, the zero level is arbitrarily set at a reference value, for example at a value for the threshold of hearing. In physics, the zero-point energy is the lowest possible energy that a quantum mechanical physical system may possess and is the energy of the ground state of the system.

Modern computers store information in binary, that is, using an "alphabet" that contains only two symbols, usually chosen to be "0" and "1". Binary coding is convenient for digital electronics, where "0" and "1" can stand for the absence or presence of electrical current in a wire.[85] Computer programmers typically use high-level programming languages that are more intelligible to humans than the binary instructions that are directly executed by the central processing unit. 0 plays various important roles in high-level languages. For example, a Boolean variable stores a value that is either true or false, and 0 is often the numerical representation of false.[86]

0 also plays a role in array indexing.
The most common practice throughout human history has been to start counting at one, and this is the practice in early classic programming languages such as Fortran and COBOL.[87] However, in the late 1950s LISP introduced zero-based numbering for arrays, while Algol 58 introduced completely flexible basing for array subscripts (allowing any positive, negative, or zero integer as the base for array subscripts), and most subsequent programming languages adopted one or other of these positions.[citation needed] For example, the elements of an array are numbered starting from 0 in C, so that for an array of n items the sequence of array indices runs from 0 to n − 1.[88] There can be confusion between 0- and 1-based indexing; for example, Java's JDBC indexes parameters from 1 although Java itself uses 0-based indexing.[89]

In C, a byte containing the value 0 serves to indicate where a string of characters ends. Also, 0 is a standard way to refer to a null pointer in code.[90]

In databases, it is possible for a field not to have a value. It is then said to have a null value.[91] For numeric fields, the null value is not the value zero; for text fields, it is neither blank nor the empty string. The presence of null values leads to three-valued logic. No longer is a condition either true or false, but it can be undetermined. Any computation including a null value delivers a null result.[92]

In mathematics, there is no "positive zero" or "negative zero" distinct from zero; both −0 and +0 represent exactly the same number. However, in some computer hardware signed number representations, zero has two distinct representations: a positive one grouped with the positive numbers and a negative one grouped with the negatives. This kind of dual representation is known as signed zero, with the latter form sometimes called negative zero.
These representations include the signed magnitude and ones' complement binary integer representations (but not the two's complement binary form used in most modern computers), and most floating-point number representations (such as the IEEE 754 and IBM S/360 floating-point formats).

An epoch, in computing terminology, is the date and time associated with a zero timestamp. The Unix epoch begins at the midnight that starts the first of January 1970.[93][94][95] The Classic Mac OS epoch and Palm OS epoch begin at the midnight that starts the first of January 1904.[96]

Many APIs and operating systems that require applications to return an integer value as an exit status typically use zero to indicate success and non-zero values to indicate specific error or warning conditions.[97]

Programmers often use a slashed zero to avoid confusion with the letter "O".[98]

In comparative zoology and cognitive science, recognition that some animals display awareness of the concept of zero leads to the conclusion that the capability for numerical abstraction arose early in the evolution of species.[99]

In the BC calendar era, the year 1 BC is the first year before AD 1; there is not a year zero. By contrast, in astronomical year numbering, the year 1 BC is numbered 0, the year 2 BC is numbered −1, and so forth.[100]
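Both the signed-zero behaviour of IEEE 754 floats and the Unix epoch convention can be checked directly in Python (a small sketch using only the standard library):

```python
import math
from datetime import datetime, timezone

# IEEE 754 floats have two zeros that compare equal,
# but the sign bit is preserved and observable.
pos, neg = 0.0, -0.0
print(pos == neg)             # True: -0 and +0 are the same number
print(math.copysign(1, neg))  # -1.0: the negative sign bit survives

# Timestamp 0 is the Unix epoch: midnight starting 1 January 1970, UTC.
print(datetime.fromtimestamp(0, tz=timezone.utc))
```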
https://en.wikipedia.org/wiki/History_of_zero
An integer is the number zero (0), a positive natural number (1, 2, 3, ...), or the negation of a positive natural number (−1, −2, −3, ...).[1] The negations or additive inverses of the positive natural numbers are referred to as negative integers.[2] The set of all integers is often denoted by the boldface Z or blackboard bold ℤ.[3][4]

The set of natural numbers ℕ is a subset of ℤ, which in turn is a subset of the set of all rational numbers ℚ, itself a subset of the real numbers ℝ.[a] Like the set of natural numbers, the set of integers ℤ is countably infinite. An integer may be regarded as a real number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5+1/2, 5/4, and √2 are not.[8]

The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes qualified as rational integers to distinguish them from the more general algebraic integers. In fact, (rational) integers are algebraic integers that are also rational numbers.

The word integer comes from the Latin integer meaning "whole" or (literally) "untouched", from in ("not") plus tangere ("to touch"). "Entire" derives from the same origin via the French word entier, which means both entire and integer.[9] Historically the term was used for a number that was a multiple of 1,[10][11] or to the whole part of a mixed number.[12][13] Only positive integers were considered, making the term synonymous with the natural numbers.
The definition of integer expanded over time to include negative numbers as their usefulness was recognized.[14] For example, Leonhard Euler in his 1765 Elements of Algebra defined integers to include both positive and negative numbers.[15] The phrase the set of the integers was not used before the end of the 19th century, when Georg Cantor introduced the concept of infinite sets and set theory.

The use of the letter Z to denote the set of integers comes from the German word Zahlen ("numbers")[3][4] and has been attributed to David Hilbert.[16] The earliest known use of the notation in a textbook occurs in Algèbre written by the collective Nicolas Bourbaki, dating to 1947.[3][17] The notation was not adopted immediately. For example, another textbook used the letter J,[18] and a 1960 paper used Z to denote the non-negative integers.[19] But by 1961, Z was generally used by modern algebra texts to denote the positive and negative integers.[20]

The symbol ℤ is often annotated to denote various sets, with varying usage amongst different authors: ℤ⁺, ℤ₊, or ℤ^> for the positive integers, ℤ^0+ or ℤ^≥ for non-negative integers, and ℤ^≠ for non-zero integers. Some authors use ℤ* for non-zero integers, while others use it for non-negative integers, or for {−1, 1} (the group of units of ℤ).
Additionally, ℤ_p is used to denote either the set of integers modulo p (i.e., the set of congruence classes of integers), or the set of p-adic integers.[21][22]

The whole numbers were synonymous with the integers up until the early 1950s.[23][24][25] In the late 1950s, as part of the New Math movement,[26] American elementary school teachers began teaching that whole numbers referred to the natural numbers, excluding negative numbers, while integer included the negative numbers.[27][28] The term whole numbers remains ambiguous to the present day.[29]

Like the natural numbers, ℤ is closed under the operations of addition and multiplication; that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers (and importantly, 0), ℤ, unlike the natural numbers, is also closed under subtraction.[30]

The integers form a ring which is the most basic one, in the following sense: for any ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring ℤ. This unique homomorphism is injective if and only if the characteristic of the ring is zero. It follows that every ring of characteristic zero contains a subring isomorphic to ℤ, which is its smallest subring.

ℤ is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not (since the result can be a fraction when the exponent is negative).
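The closure facts above are easy to observe in Python, where division and negative exponents leave the integers while the ring operations do not (a small illustration):

```python
a, b = 7, 2

# Addition, subtraction, and multiplication of integers yield integers.
print(type(a + b), type(a - b), type(a * b))  # all <class 'int'>

# Division and negative exponents do not stay within the integers:
# the results here are floats, just as 1/2 and 2**-1 are not integers.
print(a / b, b ** -1)  # 3.5 0.5
```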
The following are some of the basic properties of addition and multiplication for any integers a, b, and c:

Closure: a + b and a × b are integers.
Associativity: a + (b + c) = (a + b) + c and a × (b × c) = (a × b) × c.
Commutativity: a + b = b + a and a × b = b × a.
Existence of an identity element: a + 0 = a and a × 1 = a.
Existence of inverse elements: a + (−a) = 0 (addition only).
Distributivity: a × (b + c) = (a × b) + (a × c).
No zero divisors: if a × b = 0, then a = 0 or b = 0.

The first five properties listed above for addition say that ℤ, under addition, is an abelian group. It is also a cyclic group, since every non-zero integer can be written as a finite sum 1 + 1 + ... + 1 or (−1) + (−1) + ... + (−1). In fact, ℤ under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to ℤ.

The first four properties listed above for multiplication say that ℤ under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse (as is the case of the number 2), which means that ℤ under multiplication is not a group.

All the properties from the above list (except for the last), when taken together, say that ℤ together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. The equalities of expressions that are true in ℤ for all values of variables are exactly those that are true in any unital commutative ring; certain non-zero integers map to zero in certain rings.

The lack of zero divisors in the integers (the last property in the list) means that the commutative ring ℤ is an integral domain. The lack of multiplicative inverses, which is equivalent to the fact that ℤ is not closed under division, means that ℤ is not a field. The smallest field containing the integers as a subring is the field of rational numbers. The process of constructing the rationals from the integers can be mimicked to form the field of fractions of any integral domain.
And back, starting from an algebraic number field (an extension of rational numbers), its ring of integers can be extracted, which includes ℤ as its subring.

Although ordinary division is not defined on ℤ, division "with remainder" is defined on them. It is called Euclidean division, and possesses the following important property: given two integers a and b with b ≠ 0, there exist unique integers q and r such that a = q × b + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b. The integer q is called the quotient and r is called the remainder of the division of a by b. The Euclidean algorithm for computing greatest common divisors works by a sequence of Euclidean divisions.

The above says that ℤ is a Euclidean domain. This implies that ℤ is a principal ideal domain, and any positive integer can be written as the product of primes in an essentially unique way.[31] This is the fundamental theorem of arithmetic.

ℤ is a totally ordered set without upper or lower bound. The ordering of ℤ is given by: ... −3 < −2 < −1 < 0 < 1 < 2 < 3 < ... An integer is positive if it is greater than zero, and negative if it is less than zero. Zero is defined as neither negative nor positive.

The ordering of integers is compatible with the algebraic operations in the following way: if a < b and c < d, then a + c < b + d; and if a < b and 0 < c, then ac < bc. Thus it follows that ℤ together with the above ordering is an ordered ring.

The integers are the only nontrivial totally ordered abelian group whose positive elements are well-ordered.[32] This is equivalent to the statement that any Noetherian valuation ring is either a field or a discrete valuation ring.

In elementary school teaching, integers are often intuitively defined as the union of the (positive) natural numbers, zero, and the negations of the natural numbers.
This can be formalized as follows.[33] First construct the set of natural numbers according to the Peano axioms, call this P. Then construct a set P⁻ which is disjoint from P and in one-to-one correspondence with P via a function ψ. For example, take P⁻ to be the ordered pairs (1, n) with the mapping ψ = n ↦ (1, n). Finally let 0 be some object not in P or P⁻, for example the ordered pair (0, 0). Then the integers are defined to be the union P ∪ P⁻ ∪ {0}.

The traditional arithmetic operations can then be defined on the integers in a piecewise fashion, for each of positive numbers, negative numbers, and zero. For example, negation is defined as follows:

−x = ψ(x) if x ∈ P;  −x = ψ⁻¹(x) if x ∈ P⁻;  −x = 0 if x = 0.

The traditional style of definition leads to many different cases (each arithmetic operation needs to be defined on each combination of types of integer) and makes it tedious to prove that integers obey the various laws of arithmetic.[34]

In modern set-theoretic mathematics, a more abstract construction[35][36] allowing one to define arithmetical operations without any case distinction is often used instead.[37] The integers can thus be formally constructed as the equivalence classes of ordered pairs of natural numbers (a, b).[38] The intuition is that (a, b) stands for the result of subtracting b from a.[38] To confirm our expectation that 1 − 2 and 4 − 5 denote the same number, we define an equivalence relation ~ on these pairs with the following rule:

(a, b) ~ (c, d) precisely when a + d = b + c.

Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers;[38] by using [(a, b)] to denote the equivalence class having (a, b) as a member, one has:

[(a, b)] + [(c, d)] = [(a + c, b + d)]
[(a, b)] × [(c, d)] = [(ac + bd, ad + bc)]

The negation (or additive inverse) of an integer is obtained by reversing the order of the pair:

−[(a, b)] = [(b, a)]

Hence subtraction can be defined as the addition of the additive inverse:

[(a, b)] − [(c, d)] = [(a + d, b + c)]

The standard ordering on the integers is given by:

[(a, b)] < [(c, d)] if and only if a + d < b + c.

It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes. Every equivalence class has a unique member that is of the form (n, 0) or (0, n) (or both at once). The natural number n is identified with the class [(n, 0)] (i.e., the natural numbers are embedded into the integers by the map sending n to [(n, 0)]), and the class [(0, n)] is denoted −n (this covers all remaining classes, and gives the class [(0, 0)] a second time since −0 = 0). Thus, [(a, b)] is denoted by a − b if a ≥ b, and by −(b − a) otherwise. If the natural numbers are identified with the corresponding integers (using the embedding mentioned above), this convention creates no ambiguity. This notation recovers the familiar representation of the integers as {..., −2, −1, 0, 1, 2, ...}. Some examples are:

0 = [(0, 0)],  1 = [(1, 0)] = [(2, 1)],  −1 = [(0, 1)] = [(1, 2)].

In theoretical computer science, other approaches for the construction of integers are used by automated theorem provers and term rewrite engines. Integers are represented as algebraic terms built using a few basic operations (e.g., zero, succ, pred) and using natural numbers, which are assumed to be already constructed (using the Peano approach). There exist at least ten such constructions of signed integers.[39] These constructions differ in several ways: the number of basic operations used for the construction; the number (usually, between 0 and 2) and the types of arguments accepted by these operations; the presence or absence of natural numbers as arguments of some of these operations; and the fact that these operations are free constructors or not, i.e., that the same integer can be represented using only one or many algebraic terms.
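The equivalence-class construction of the integers from pairs of natural numbers is easy to sketch in code. In the following Python sketch (the function names are ours), normalize reduces each pair (a, b), read as a − b, to the canonical representative of the form (n, 0) or (0, n) described above:

```python
def normalize(p):
    """Canonical representative of the class of (a, b), read as a - b."""
    a, b = p
    return (a - b, 0) if a >= b else (0, b - a)

def add(p, q):
    # [(a, b)] + [(c, d)] = [(a + c, b + d)]
    (a, b), (c, d) = p, q
    return normalize((a + c, b + d))

def mul(p, q):
    # [(a, b)] * [(c, d)] = [(ac + bd, ad + bc)]
    (a, b), (c, d) = p, q
    return normalize((a*c + b*d, a*d + b*c))

def neg(p):
    # Negation reverses the order of the pair.
    a, b = p
    return normalize((b, a))

# (1, 2) and (4, 5) both represent the integer -1:
print(normalize((1, 2)) == normalize((4, 5)))  # True
print(mul((1, 2), (1, 2)))                     # (-1)*(-1) = 1, i.e. (1, 0)
```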
The technique for the construction of integers presented in the previous section corresponds to the particular case where there is a single basic operation pair(x, y) that takes as arguments two natural numbers x and y, and returns an integer (equal to x − y). This operation is not free, since the integer 0 can be written pair(0, 0), or pair(1, 1), or pair(2, 2), etc. This technique of construction is used by the proof assistant Isabelle; however, many other tools use alternative construction techniques, notably those based upon free constructors, which are simpler and can be implemented more efficiently in computers.

An integer is often a primitive data type in computer languages. However, integer data types can only represent a subset of all integers, since practical computers are of finite capacity. Also, in the common two's complement representation, the inherent definition of sign distinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Fixed-length integer approximation data types (or subsets) are denoted int or Integer in several programming languages (such as Algol 68, C, Java, Delphi, etc.). Variable-length representations of integers, such as bignums, can store any integer that fits in the computer's memory. Other integer data types are implemented with a fixed size, usually a number of bits which is a power of 2 (4, 8, 16, etc.) or a memorable number of decimal digits (e.g., 9 or 10).

The set of integers is countably infinite, meaning it is possible to pair each integer with a unique natural number. An example of such a pairing is

0 ↔ 0, 1 ↔ 1, −1 ↔ 2, 2 ↔ 3, −2 ↔ 4, 3 ↔ 5, −3 ↔ 6, ...

More technically, the cardinality of ℤ is said to equal ℵ₀ (aleph-null). The pairing between elements of ℤ and ℕ is called a bijection.
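A zig-zag pairing that alternates positive and negative integers gives one such bijection between ℤ and ℕ. A quick sketch (the function names are ours):

```python
def int_to_nat(z):
    """Map 0, 1, -1, 2, -2, ... to 0, 1, 2, 3, 4, ..."""
    return 2 * z - 1 if z > 0 else -2 * z

def nat_to_int(n):
    """Inverse map: odd naturals come from positives, even from non-positives."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

# Round-trips confirm the map is a bijection on a sample range.
print(all(nat_to_int(int_to_nat(z)) == z for z in range(-100, 101)))  # True
```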
This article incorporates material from Integer onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
https://en.wikipedia.org/wiki/Integers
In mathematics, the positive part of a real or extended real-valued function is defined by the formula

f⁺(x) = max(f(x), 0) = f(x) if f(x) > 0, and 0 otherwise.

Intuitively, the graph of f⁺ is obtained by taking the graph of f, 'chopping off' the part under the x-axis, and letting f⁺ take the value zero there. Similarly, the negative part of f is defined as

f⁻(x) = max(−f(x), 0) = −min(f(x), 0) = −f(x) if f(x) < 0, and 0 otherwise.

Note that both f⁺ and f⁻ are non-negative functions. A peculiarity of terminology is that the 'negative part' is neither negative nor a part (just as the imaginary part of a complex number is neither imaginary nor a part).

The function f can be expressed in terms of f⁺ and f⁻ as

f = f⁺ − f⁻.

Also note that

|f| = f⁺ + f⁻.

Using these two equations one may express the positive and negative parts as

f⁺ = (|f| + f)/2
f⁻ = (|f| − f)/2.

Another representation, using the Iverson bracket, is

f⁺ = [f > 0] f
f⁻ = −[f < 0] f.

One may define the positive and negative part of any function with values in a linearly ordered group. The unit ramp function is the positive part of the identity function.

Given a measurable space (X, Σ), an extended real-valued function f is measurable if and only if its positive and negative parts are. Therefore, if such a function f is measurable, so is its absolute value |f|, being the sum of two measurable functions.
The converse, though, does not necessarily hold: for example, taking f as

f = 1_V − 1/2,

where V is a Vitali set, it is clear that f is not measurable, but its absolute value is, being a constant function.

The positive part and negative part of a function are used to define the Lebesgue integral for a real-valued function. Analogously to this decomposition of a function, one may decompose a signed measure into positive and negative parts — see the Hahn decomposition theorem.
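The identities f = f⁺ − f⁻, |f| = f⁺ + f⁻, f⁺ = (|f| + f)/2, and f⁻ = (|f| − f)/2 above can be checked numerically with a short sketch (the function names are ours):

```python
def pos_part(x):
    """f+(x) = max(f(x), 0), applied pointwise to a value x = f(t)."""
    return max(x, 0.0)

def neg_part(x):
    """f-(x) = max(-f(x), 0); note it is non-negative by construction."""
    return max(-x, 0.0)

# Verify the decomposition identities pointwise on a sample of values.
for x in (-2.5, -1.0, 0.0, 0.5, 3.0):
    assert pos_part(x) - neg_part(x) == x            # f = f+ - f-
    assert pos_part(x) + neg_part(x) == abs(x)       # |f| = f+ + f-
    assert pos_part(x) == (abs(x) + x) / 2
    assert neg_part(x) == (abs(x) - x) / 2
print("identities hold")
```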
https://en.wikipedia.org/wiki/Positive_and_negative_parts
In mathematics, a rational number is a number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.[1] For example, 3/7 is a rational number, as is every integer (for example, −5 = −5/1). The set of all rational numbers, also referred to as "the rationals",[2] the field of rationals[3] or the field of rational numbers, is usually denoted by boldface Q, or blackboard bold ℚ.

A rational number is a real number. The real numbers that are rational are those whose decimal expansion either terminates after a finite number of digits (example: 3/4 = 0.75), or eventually begins to repeat the same finite sequence of digits over and over (example: 9/44 = 0.20454545...).[4] This statement is true not only in base 10, but also in every other integer base, such as the binary and hexadecimal ones (see Repeating decimal § Extension to other bases).

A real number that is not rational is called irrational.[5] Irrational numbers include the square root of 2 (√2), π, e, and the golden ratio (φ). Since the set of rational numbers is countable, and the set of real numbers is uncountable, almost all real numbers are irrational.[1]

Rational numbers can be formally defined as equivalence classes of pairs of integers (p, q) with q ≠ 0, using the equivalence relation defined as follows:

(p1, q1) ~ (p2, q2) if and only if p1q2 = p2q1.

The fraction p/q then denotes the equivalence class of (p, q).[6]

Rational numbers together with addition and multiplication form a field which contains the integers, and is contained in any field containing the integers. In other words, the field of rational numbers is a prime field, and a field has characteristic zero if and only if it contains the rational numbers as a subfield.
Finite extensions of ℚ are called algebraic number fields, and the algebraic closure of ℚ is the field of algebraic numbers.[7] In mathematical analysis, the rational numbers form a dense subset of the real numbers. The real numbers can be constructed from the rational numbers by completion, using Cauchy sequences, Dedekind cuts, or infinite decimals (see Construction of the real numbers).

In mathematics, "rational" is often used as a noun abbreviating "rational number". The adjective rational sometimes means that the coefficients are rational numbers. For example, a rational point is a point with rational coordinates (i.e., a point whose coordinates are rational numbers); a rational matrix is a matrix of rational numbers; a rational polynomial may be a polynomial with rational coefficients, although the term "polynomial over the rationals" is generally preferred, to avoid confusion between "rational expression" and "rational function" (a polynomial is a rational expression and defines a rational function, even if its coefficients are not rational numbers). However, a rational curve is not a curve defined over the rationals, but a curve which can be parameterized by rational functions.

Although nowadays rational numbers are defined in terms of ratios, the term rational is not a derivation of ratio.
On the contrary, it is ratio that is derived from rational: the first use of ratio with its modern meaning was attested in English about 1660,[8] while the use of rational for qualifying numbers appeared almost a century earlier, in 1570.[9] This meaning of rational came from the mathematical meaning of irrational, which was first used in 1551, and it was used in "translations of Euclid (following his peculiar use of ἄλογος)".[10][11] This unusual history originated in the fact that ancient Greeks "avoided heresy by forbidding themselves from thinking of those [irrational] lengths as numbers".[12] So such lengths were irrational, in the sense of illogical, that is "not to be spoken about" (ἄλογος in Greek).[13]

Every rational number may be expressed in a unique way as an irreducible fraction a/b, where a and b are coprime integers and b > 0. This is often called the canonical form of the rational number. Starting from a rational number a/b, its canonical form may be obtained by dividing a and b by their greatest common divisor, and, if b < 0, changing the sign of the resulting numerator and denominator. Any integer n can be expressed as the rational number n/1, which is its canonical form as a rational number.
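The canonical-form recipe above (divide by the greatest common divisor, then fix the sign of the denominator) is a few lines of Python; the function name canonical is ours:

```python
from math import gcd

def canonical(a, b):
    """Reduce the fraction a/b to lowest terms with a positive denominator."""
    g = gcd(a, b)          # math.gcd always returns a non-negative value
    a, b = a // g, b // g
    if b < 0:              # if b < 0, change the sign of both parts
        a, b = -a, -b
    return a, b

print(canonical(6, -4))    # (-3, 2)
print(canonical(-5, 1))    # (-5, 1)
```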
If both fractions are in canonical form, then:

a/b = c/d if and only if a = c and b = d.

If both denominators are positive (particularly if both fractions are in canonical form):

a/b < c/d if and only if ad < bc.

On the other hand, if either denominator is negative, then each fraction with a negative denominator must first be converted into an equivalent form with a positive denominator—by changing the signs of both its numerator and denominator.[6]

Two fractions are added as follows:

a/b + c/d = (ad + bc)/bd.

If both fractions are in canonical form, the result is in canonical form if and only if b, d are coprime integers.[6][14] Subtraction is similar:

a/b − c/d = (ad − bc)/bd.

If both fractions are in canonical form, the result is in canonical form if and only if b, d are coprime integers.[14]

The rule for multiplication is:

a/b · c/d = ac/bd,

where the result may be a reducible fraction—even if both original fractions are in canonical form.[6][14]

Every rational number a/b has an additive inverse, often called its opposite:

−(a/b) = (−a)/b.

If a/b is in canonical form, the same is true for its opposite.

A nonzero rational number a/b has a multiplicative inverse, also called its reciprocal:

(a/b)^(−1) = b/a.

If a/b is in canonical form, then the canonical form of its reciprocal is either b/a or −b/−a, depending on the sign of a.

If b, c, d are nonzero, the division rule is

(a/b) / (c/d) = ad/bc.

Thus, dividing a/b by c/d is equivalent to multiplying a/b by the reciprocal of c/d.[14]

If n is a non-negative integer, then

(a/b)^n = a^n/b^n.

The result is in canonical form if the same is true for a/b. In particular, (a/b)^0 = 1. If a ≠ 0, then

(a/b)^(−n) = b^n/a^n.

If a/b is in canonical form, the canonical form of the result is b^n/a^n if a > 0 or n is even.
Otherwise, the canonical form of the result is −b^n/−a^n.

A finite continued fraction is an expression such as

a0 + 1/(a1 + 1/(a2 + 1/(⋯ + 1/an)))

where the an are integers. Every rational number a/b can be represented as a finite continued fraction, whose coefficients an can be determined by applying the Euclidean algorithm to (a, b). Such a representation is not unique: for example, [2; 4] and [2; 3, 1] (both equal to 9/4) are different ways to represent the same rational value.

The rational numbers may be built as equivalence classes of ordered pairs of integers.[6][14] More precisely, let ℤ × (ℤ ∖ {0}) be the set of the pairs (m, n) of integers such that n ≠ 0. An equivalence relation is defined on this set by

(m1, n1) ~ (m2, n2) if and only if m1n2 = m2n1.

Addition and multiplication can be defined by the following rules:

(m1, n1) + (m2, n2) = (m1n2 + n1m2, n1n2)
(m1, n1) × (m2, n2) = (m1m2, n1n2)

This equivalence relation is a congruence relation, which means that it is compatible with the addition and multiplication defined above; the set of rational numbers ℚ is defined as the quotient set by this equivalence relation, (ℤ × (ℤ ∖ {0})) / ~, equipped with the addition and the multiplication induced by the above operations. (This construction can be carried out with any integral domain and produces its field of fractions.)[6]

The equivalence class of a pair (m, n) is denoted m/n. Two pairs (m1, n1) and (m2, n2) belong to the same equivalence class (that is, are equivalent) if and only if m1n2 = m2n1. This means that m1/n1 = m2/n2 if and only if m1n2 = m2n1.[6][14]

Every equivalence class m/n may be represented by infinitely many pairs, since

⋯ = −2m/−2n = −m/−n = m/n = 2m/2n = ⋯.

Each equivalence class contains a unique canonical representative element. The canonical representative is the unique pair (m, n) in the equivalence class such that m and n are coprime, and n > 0. It is called the representation in lowest terms of the rational number.
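The continued-fraction coefficients mentioned above come out of the same divisions as the Euclidean algorithm; a short sketch (the function name is ours):

```python
def continued_fraction(a, b):
    """Coefficients [a0; a1, a2, ...] of the rational a/b (b > 0)."""
    coeffs = []
    while b:
        q, r = divmod(a, b)  # the same division step as the Euclidean algorithm
        coeffs.append(q)
        a, b = b, r          # continue with the divisor and remainder
    return coeffs

print(continued_fraction(9, 4))  # [2, 4], since 9/4 = 2 + 1/4
```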
The integers may be considered to be rational numbers by identifying the integer n with the rational number n/1.

A total order may be defined on the rational numbers that extends the natural order of the integers. One has

m1/n1 ≤ m2/n2 if n1 and n2 are positive and m1n2 ≤ m2n1.

The set ℚ of all rational numbers, together with the addition and multiplication operations shown above, forms a field.[6]

ℚ has no field automorphism other than the identity. (A field automorphism must fix 0 and 1; as it must fix the sum and the difference of two fixed elements, it must fix every integer; as it must fix the quotient of two fixed elements, it must fix every rational number, and is thus the identity.)

ℚ is a prime field, which is a field that has no subfield other than itself.[15] The rationals are the smallest field with characteristic zero. Every field of characteristic zero contains a unique subfield isomorphic to ℚ.

With the order defined above, ℚ is an ordered field[14] that has no subfield other than itself, and is the smallest ordered field, in the sense that every ordered field contains a unique subfield isomorphic to ℚ.

ℚ is the field of fractions of the integers ℤ.[16] The algebraic closure of ℚ, i.e. the field of roots of rational polynomials, is the field of algebraic numbers.

The rationals are a densely ordered set: between any two rationals, there sits another one, and, therefore, infinitely many other ones.[6] For example, for any two fractions such that a/b < c/d (where b, d are positive), we have

a/b < (a + c)/(b + d) < c/d.

Any totally ordered set which is countable, dense (in the above sense), and has no least or greatest element is order isomorphic to the rational numbers.[17]

The set of positive rational numbers is countable, as is illustrated in the figure.
More precisely, one can sort the fractions by increasing values of the sum of the numerator and the denominator and, for equal sums, by increasing numerator or denominator. This produces a sequence of fractions, from which one can remove the reducible fractions (in red on the figure) to obtain a sequence that contains each rational number exactly once. This establishes a bijection between the rational numbers and the natural numbers, which maps each rational number to its rank in the sequence. A similar method can be used for numbering all rational numbers (positive and negative).

As the set of all rational numbers is countable, and the set of all real numbers (as well as the set of irrational numbers) is uncountable, the set of rational numbers is a null set; that is, almost all real numbers are irrational, in the sense of Lebesgue measure.[18]

The rationals are a dense subset of the real numbers: every real number has rational numbers arbitrarily close to it.[6] A related property is that rational numbers are the only numbers with finite expansions as regular continued fractions.[19]

In the usual topology of the real numbers, the rationals are neither an open set nor a closed set.[20]

By virtue of their order, the rationals carry an order topology. The rational numbers, as a subspace of the real numbers, also carry a subspace topology. The rational numbers form a metric space by using the absolute difference metric d(x, y) = |x − y|, and this yields a third topology on Q. All three topologies coincide and turn the rationals into a topological field. The rational numbers are an important example of a space which is not locally compact. The rationals are characterized topologically as the unique countable metrizable space without isolated points. The space is also totally disconnected.
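The enumeration just described (sort by numerator + denominator, then by numerator, and skip reducible fractions) can be sketched as follows; the function name is illustrative:

```python
from math import gcd

def positive_rationals(count: int) -> list[tuple[int, int]]:
    """First `count` positive rationals, ordered by increasing
    numerator + denominator and then by increasing numerator,
    with reducible fractions removed so each value occurs once."""
    out = []
    s = 2  # smallest possible numerator + denominator
    while len(out) < count:
        for n in range(1, s):
            d = s - n
            if gcd(n, d) == 1:  # keep only fractions in lowest terms
                out.append((n, d))
                if len(out) == count:
                    break
        s += 1
    return out

print(positive_rationals(7))
# [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (2, 3)]
```

The index of a pair in this list is exactly its rank in the bijection with the natural numbers described above.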
The rational numbers do not form a complete metric space, and the real numbers are the completion of Q under the metric d(x, y) = |x − y| above.[14]

In addition to the absolute value metric mentioned above, there are other metrics which turn Q into a topological field. Let p be a prime number and, for any non-zero integer a, let |a|_p = p^(−n), where p^n is the highest power of p dividing a; in addition, set |0|_p = 0. For any rational number a/b, we set |a/b|_p = |a|_p / |b|_p. Then

d_p(x, y) = |x − y|_p

defines a metric on Q.[21]

The metric space (Q, d_p) is not complete, and its completion is the p-adic number field Q_p. Ostrowski's theorem states that any non-trivial absolute value on the rational numbers Q is equivalent to either the usual real absolute value or a p-adic absolute value.
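A minimal sketch of the p-adic absolute value for exact rational inputs, using Python's `fractions` module (the helper name is mine, not a standard API):

```python
from fractions import Fraction

def p_adic_abs(x: Fraction, p: int) -> Fraction:
    """|x|_p = p**(-n), where p**n is the exact power of p in x
    (n may be negative when p divides the denominator); |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    n = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return Fraction(1, p**n) if n >= 0 else Fraction(p**-n)

print(p_adic_abs(Fraction(75), 5))       # 75 = 3 * 5**2, so |75|_5 = 1/25
print(p_adic_abs(Fraction(63, 550), 5))  # 550 = 2 * 5**2 * 11, so 25
```

The metric of the text is then d_p(x, y) = p_adic_abs(x − y, p); numbers divisible by high powers of p are p-adically small.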
https://en.wikipedia.org/wiki/Rational_numbers
Inmathematics, areal numberis anumberthat can be used tomeasureacontinuousone-dimensionalquantitysuch as adurationortemperature. Here,continuousmeans that pairs of values can have arbitrarily small differences.[a]Every real number can be almost uniquely represented by an infinitedecimal expansion.[b][1] The real numbers are fundamental incalculus(and in many other branches of mathematics), in particular by their role in the classical definitions oflimits,continuityandderivatives.[c] The set of real numbers, sometimes called "the reals", is traditionallydenotedby a boldR, often usingblackboard bold,⁠R{\displaystyle \mathbb {R} }⁠.[2][3]The adjectivereal, used in the 17th century byRené Descartes, distinguishes real numbers fromimaginary numberssuch as thesquare rootsof−1.[4] The real numbers include therational numbers, such as theinteger−5and thefraction4 / 3. The rest of the real numbers are calledirrational numbers. Some irrational numbers (as well as all the rationals) are therootof apolynomialwith integer coefficients, such as the square root√2= 1.414...; these are calledalgebraic numbers. There are also real numbers which are not, such asπ= 3.1415...; these are calledtranscendental numbers.[4] Real numbers can be thought of as all points on alinecalled thenumber lineorreal line, where the points corresponding to integers (..., −2, −1, 0, 1, 2, ...) are equally spaced. The informal descriptions above of the real numbers are not sufficient for ensuring the correctness of proofs oftheoremsinvolving real numbers. The realization that a better definition was needed, and the elaboration of such a definition was a major development of19th-century mathematicsand is the foundation ofreal analysis, the study ofreal functionsand real-valuedsequences. 
A currentaxiomaticdefinition is that real numbers form theunique(up toanisomorphism)Dedekind-completeordered field.[d]Other common definitions of real numbers includeequivalence classesofCauchy sequences(of rational numbers),Dedekind cuts, and infinitedecimal representations. All these definitions satisfy the axiomatic definition and are thus equivalent. Real numbers are completely characterized by their fundamental properties that can be summarized by saying that they form anordered fieldthat isDedekind complete. Here, "completely characterized" means that there is a uniqueisomorphismbetween any two Dedekind complete ordered fields, and thus that their elements have exactly the same properties. This implies that one can manipulate real numbers and compute with them, without knowing how they can be defined; this is what mathematicians and physicists did during several centuries before the first formal definitions were provided in the second half of the 19th century. SeeConstruction of the real numbersfor details about these formal definitions and the proof of their equivalence. The real numbers form anordered field. Intuitively, this means that methods and rules ofelementary arithmeticapply to them. More precisely, there are twobinary operations,additionandmultiplication, and atotal orderthat have the following properties. Many other properties can be deduced from the above ones. In particular: Several other operations are commonly used, which can be deduced from the above ones. Thetotal orderthat is considered above is denoteda<b{\displaystyle a<b}and read as "aisless thanb". Three otherorder relationsare also commonly used: The real numbers0and1are commonly identified with thenatural numbers0and1. This allows identifying any natural numbernwith the sum ofnreal numbers equal to1. 
This identification can be pursued by identifying a negative integer −n (where n is a natural number) with the additive inverse −n of the real number identified with n. Similarly, a rational number p/q (where p and q are integers and q ≠ 0) is identified with the quotient of the real numbers identified with p and q. These identifications make the set Q of the rational numbers an ordered subfield of the real numbers R. The Dedekind completeness described below implies that some real numbers, such as √2, are not rational numbers; they are called irrational numbers.

The above identifications make sense, since natural numbers, integers and real numbers are generally not defined by their individual nature, but by defining properties (axioms). So, the identification of natural numbers with some real numbers is justified by the fact that the Peano axioms are satisfied by these real numbers, with addition of 1 taken as the successor function. Formally, one has an injective homomorphism of ordered monoids from the natural numbers N to the integers Z, an injective homomorphism of ordered rings from Z to the rational numbers Q, and an injective homomorphism of ordered fields from Q to the real numbers R. The identifications consist of not distinguishing the source and the image of each injective homomorphism, and thus of writing

N ⊂ Z ⊂ Q ⊂ R.

These identifications are formally abuses of notation (since, formally, a rational number is an equivalence class of pairs of integers, and a real number is an equivalence class of Cauchy sequences), and are generally harmless. It is only in very specific situations that one must avoid them and use the above homomorphisms explicitly.
This is the case inconstructive mathematicsandcomputer programming. In the latter case, these homomorphisms are interpreted astype conversionsthat can often be done automatically by thecompiler. Previous properties do not distinguish real numbers fromrational numbers. This distinction is provided byDedekind completeness, which states that every set of real numbers with anupper boundadmits aleast upper bound. This means the following. A set of real numbersS{\displaystyle S}isbounded aboveif there is a real numberu{\displaystyle u}such thats≤u{\displaystyle s\leq u}for alls∈S{\displaystyle s\in S}; such au{\displaystyle u}is called anupper boundofS.{\displaystyle S.}So, Dedekind completeness means that, ifSis bounded above, it has an upper bound that is less than any other upper bound. Dedekind completeness implies other sorts of completeness (see below), but also has some important consequences. The last two properties are summarized by saying that the real numbers form areal closed field. This implies the real version of thefundamental theorem of algebra, namely that every polynomial with real coefficients can be factored into polynomials with real coefficients of degree at most two. The most common way of describing a real number is via its decimal representation, a sequence ofdecimal digitseach representing the product of an integer between zero and nine times apower of ten, extending to finitely many positive powers of ten to the left and infinitely many negative powers of ten to the right. 
For a number x whose decimal representation extends k places to the left, the standard notation is the juxtaposition of the digits b_k b_{k−1} ⋯ b_0 . a_1 a_2 ⋯, in descending order by power of ten, with non-negative and negative powers of ten separated by a decimal point, representing the infinite series

x = b_k 10^k + ⋯ + b_0 + a_1/10 + a_2/10^2 + ⋯.

For example, for the circle constant π = 3.14159⋯, k is zero and b_0 = 3, a_1 = 1, a_2 = 4, etc.

More formally, a decimal representation for a nonnegative real number x consists of a nonnegative integer k and integers between zero and nine in the infinite sequence b_k, …, b_0, a_1, a_2, …. (If k > 0, then by convention b_k ≠ 0.) Such a decimal representation specifies the real number as the least upper bound of the decimal fractions that are obtained by truncating the sequence: given a positive integer n, the truncation of the sequence at the place n is the finite partial sum

D_n = b_k 10^k + ⋯ + b_0 + a_1/10 + ⋯ + a_n/10^n.

The real number x defined by the sequence is the least upper bound of the D_n, which exists by Dedekind completeness.

Conversely, given a nonnegative real number x, one can define a decimal representation of x by induction, as follows. Define b_k ⋯ b_0 as the decimal representation of the largest integer D_0 such that D_0 ≤ x (this integer exists because of the Archimedean property). Then, supposing by induction that the decimal fraction D_i has been defined for i < n, one defines a_n as the largest digit such that D_{n−1} + a_n/10^n ≤ x, and one sets D_n = D_{n−1} + a_n/10^n.

One can use the defining properties of the real numbers to show that x is the least upper bound of the D_n. So, the resulting sequence of digits is called a decimal representation of x.
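The inductive digit-by-digit construction can be sketched for a rational x using exact arithmetic; the helper below is illustrative, not from the article:

```python
from fractions import Fraction

def decimal_digits(x: Fraction, n: int):
    """Greedy construction from the text: D_0 is the largest integer
    with D_0 <= x; then each a_i is the largest digit keeping
    D_i = D_{i-1} + a_i/10**i <= x. Returns (D_0, [a_1, ..., a_n])."""
    D0 = x.numerator // x.denominator  # floor of x
    D = Fraction(D0)
    digits = []
    for i in range(1, n + 1):
        a = int((x - D) * 10**i)       # largest admissible digit
        digits.append(a)
        D += Fraction(a, 10**i)
    return D0, digits

print(decimal_digits(Fraction(1, 7), 6))  # (0, [1, 4, 2, 8, 5, 7])
```

Because x − D < 10^−(i−1) at every step, the computed digit is always between 0 and 9, matching the definition.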
Another decimal representation can be obtained by replacing ≤ x with < x in the preceding construction. These two representations are identical, unless x is a decimal fraction of the form m/10^h. In this case, in the first decimal representation all a_n are zero for n > h, and in the second representation all such a_n are equal to 9 (see 0.999... for details). In summary, there is a bijection between the real numbers and the decimal representations that do not end with infinitely many trailing 9s.

The preceding considerations apply directly for every numeral base B ≥ 2, simply by replacing 10 with B and 9 with B − 1.

A main reason for using real numbers is so that many sequences have limits. More formally, the reals are complete (in the sense of metric spaces or uniform spaces, which is a different sense than the Dedekind completeness of the order in the previous section):

A sequence (x_n) of real numbers is called a Cauchy sequence if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance |x_n − x_m| is less than ε for all n and m that are both greater than N. This definition, originally provided by Cauchy, formalizes the fact that the x_n eventually come and remain arbitrarily close to each other.

A sequence (x_n) converges to the limit x if its elements eventually come and remain arbitrarily close to x, that is, if for any ε > 0 there exists an integer N (possibly depending on ε) such that the distance |x_n − x| is less than ε for n greater than N.

Every convergent sequence is a Cauchy sequence, and the converse is true for real numbers; this means that the topological space of the real numbers is complete. The set of rational numbers is not complete.
For example, the sequence (1; 1.4; 1.41; 1.414; 1.4142; 1.41421; ...), where each term adds a digit of the decimal expansion of the positivesquare rootof 2, is Cauchy but it does not converge to a rational number (in the real numbers, in contrast, it converges to the positivesquare rootof 2). The completeness property of the reals is the basis on whichcalculus, and more generallymathematical analysis, are built. In particular, the test that a sequence is a Cauchy sequence allows proving that a sequence has a limit, without computing it, and even without knowing it. For example, the standard series of theexponential function converges to a real number for everyx, because the sums can be made arbitrarily small (independently ofM) by choosingNsufficiently large. This proves that the sequence is Cauchy, and thus converges, showing thatex{\displaystyle e^{x}}is well defined for everyx. The real numbers are often described as "the complete ordered field", a phrase that can be interpreted in several ways. First, an order can belattice-complete. It is easy to see that no ordered field can be lattice-complete, because it can have nolargest element(given any elementz,z+ 1is larger). Additionally, an order can be Dedekind-complete, see§ Axiomatic approach. The uniqueness result at the end of that section justifies using the word "the" in the phrase "complete ordered field" when this is the sense of "complete" that is meant. This sense of completeness is most closely related to the construction of the reals from Dedekind cuts, since that construction starts from an ordered field (the rationals) and then forms the Dedekind-completion of it in a standard way. These two notions of completeness ignore the field structure. However, anordered group(in this case, the additive group of the field) defines auniformstructure, and uniform structures have a notion ofcompleteness; the description in§ Completenessis a special case. 
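The partial sums of the exponential series can be computed exactly to watch the convergence the text describes; a small illustrative sketch:

```python
from fractions import Fraction
from math import factorial

def exp_partial_sum(x, N: int) -> Fraction:
    """Partial sum S_N = sum_{n=0}^{N} x**n / n! of the exponential
    series, computed in exact rational arithmetic."""
    return sum(Fraction(x) ** n / factorial(n) for n in range(N + 1))

# the partial sums at x = 1 form a Cauchy sequence converging to e
for N in (5, 10, 15):
    print(float(exp_partial_sum(1, N)))
```

The tail beyond S_N is bounded by a geometric series, which is why the sums can be made arbitrarily small by choosing N large enough, proving the sequence is Cauchy without knowing its limit in advance.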
(We refer to the notion of completeness in uniform spaces rather than the related and better known notion formetric spaces, since the definition of metric space relies on already having a characterization of the real numbers.) It is not true thatR{\displaystyle \mathbb {R} }is theonlyuniformly complete ordered field, but it is the only uniformly completeArchimedean field, and indeed one often hears the phrase "complete Archimedean field" instead of "complete ordered field". Every uniformly complete Archimedean field must also be Dedekind-complete (and vice versa), justifying using "the" in the phrase "the complete Archimedean field". This sense of completeness is most closely related to the construction of the reals from Cauchy sequences (the construction carried out in full in this article), since it starts with an Archimedean field (the rationals) and forms the uniform completion of it in a standard way. But the original use of the phrase "complete Archimedean field" was byDavid Hilbert, who meant still something else by it. He meant that the real numbers form thelargestArchimedean field in the sense that every other Archimedean field is a subfield ofR{\displaystyle \mathbb {R} }. ThusR{\displaystyle \mathbb {R} }is "complete" in the sense that nothing further can be added to it without making it no longer an Archimedean field. This sense of completeness is most closely related to the construction of the reals fromsurreal numbers, since that construction starts with a proper class that contains every ordered field (the surreals) and then selects from it the largest Archimedean subfield. The set of all real numbers isuncountable, in the sense that while both the set of allnatural numbers{1, 2, 3, 4, ...}and the set of all real numbers areinfinite sets, there exists noone-to-one functionfrom the real numbers to the natural numbers. 
Thecardinalityof the set of all real numbers is called thecardinality of the continuumand commonly denoted byc.{\displaystyle {\mathfrak {c}}.}It is strictly greater than the cardinality of the set of all natural numbers, denotedℵ0{\displaystyle \aleph _{0}}and calledAleph-zerooraleph-nought. The cardinality of the continuum equals the cardinality of thepower setof the natural numbers, that is, the set of all subsets of the natural numbers. The statement that there is no cardinality strictly greater thanℵ0{\displaystyle \aleph _{0}}and strictly smaller thanc{\displaystyle {\mathfrak {c}}}is known as thecontinuum hypothesis(CH). It is neither provable nor refutable using the axioms ofZermelo–Fraenkel set theoryincluding theaxiom of choice(ZFC)—the standard foundation of modern mathematics. In fact, some models of ZFC satisfy CH, while others violate it.[5] As a topological space, the real numbers areseparable. This is because the set of rationals, which is countable, isdensein the real numbers. The irrational numbers are also dense in the real numbers, however they are uncountable and have the same cardinality as the reals. The real numbers form ametric space: the distance betweenxandyis defined as theabsolute value|x−y|. By virtue of being a totally ordered set, they also carry anorder topology; thetopologyarising from the metric and the one arising from the order are identical, but yield different presentations for the topology—in the order topology as ordered intervals, in the metric topology as epsilon-balls. The Dedekind cuts construction uses the order topology presentation, while the Cauchy sequences construction uses the metric topology presentation. The reals form acontractible(henceconnectedandsimply connected),separableandcompletemetric space ofHausdorff dimension1. The real numbers arelocally compactbut notcompact. 
There are various properties that uniquely specify them; for instance, all unbounded, connected, and separableorder topologiesare necessarilyhomeomorphicto the reals. Every nonnegative real number has asquare rootinR{\displaystyle \mathbb {R} }, although no negative number does. This shows that the order onR{\displaystyle \mathbb {R} }is determined by its algebraic structure. Also, everypolynomialof odd degree admits at least one real root: these two properties makeR{\displaystyle \mathbb {R} }the premier example of areal closed field. Proving this is the first half of one proof of thefundamental theorem of algebra. The reals carry a canonicalmeasure, theLebesgue measure, which is theHaar measureon their structure as atopological groupnormalized such that theunit interval[0;1] has measure 1. There exist sets of real numbers that are not Lebesgue measurable, e.g.Vitali sets. The supremum axiom of the reals refers to subsets of the reals and is therefore a second-order logical statement. It is not possible to characterize the reals withfirst-order logicalone: theLöwenheim–Skolem theoremimplies that there exists a countable dense subset of the real numbers satisfying exactly the same sentences in first-order logic as the real numbers themselves. The set ofhyperreal numberssatisfies the same first order sentences asR{\displaystyle \mathbb {R} }. Ordered fields that satisfy the same first-order sentences asR{\displaystyle \mathbb {R} }are callednonstandard modelsofR{\displaystyle \mathbb {R} }. This is what makesnonstandard analysiswork; by proving a first-order statement in some nonstandard model (which may be easier than proving it inR{\displaystyle \mathbb {R} }), we know that the same statement must also be true ofR{\displaystyle \mathbb {R} }. 
ThefieldR{\displaystyle \mathbb {R} }of real numbers is anextension fieldof the fieldQ{\displaystyle \mathbb {Q} }of rational numbers, andR{\displaystyle \mathbb {R} }can therefore be seen as avector spaceoverQ{\displaystyle \mathbb {Q} }.Zermelo–Fraenkel set theorywith theaxiom of choiceguarantees the existence of abasisof this vector space: there exists a setBof real numbers such that every real number can be written uniquely as a finitelinear combinationof elements of this set, using rational coefficients only, and such that no element ofBis a rational linear combination of the others. However, this existence theorem is purely theoretical, as such a base has never been explicitly described. Thewell-ordering theoremimplies that the real numbers can bewell-orderedif the axiom of choice is assumed: there exists a total order onR{\displaystyle \mathbb {R} }with the property that every nonemptysubsetofR{\displaystyle \mathbb {R} }has aleast elementin this ordering. (The standard ordering ≤ of the real numbers is not a well-ordering since e.g. anopen intervaldoes not contain a least element in this ordering.) Again, the existence of such a well-ordering is purely theoretical, as it has not been explicitly described. IfV=Lis assumed in addition to the axioms of ZF, a well ordering of the real numbers can be shown to be explicitly definable by a formula.[6] A real number may be eithercomputableor uncomputable; eitheralgorithmically randomor not; and eitherarithmetically randomor not. Simple fractionswere used by theEgyptiansaround 1000 BC; theVedic"Shulba Sutras" ("The rules of chords") inc.600 BCinclude what may be the first "use" of irrational numbers. 
The concept of irrationality was implicitly accepted by earlyIndian mathematicianssuch asManava(c.750–690 BC), who was aware that thesquare rootsof certain numbers, such as 2 and 61, could not be exactly determined.[7] Around 500 BC, theGreek mathematiciansled byPythagorasalso realized that thesquare root of 2is irrational. For Greek mathematicians, numbers were only thenatural numbers. Real numbers were called "proportions", being the ratios of two lengths, or equivalently being measures of a length in terms of another length, called unit length. Two lengths are "commensurable", if there is a unit in which they are both measured by integers, that is, in modern terminology, if their ratio is arational number.Eudoxus of Cnidus(c. 390−340 BC) provided a definition of the equality of two irrational proportions in a way that is similar toDedekind cuts(introduced more than 2,000 years later), except that he did not use anyarithmetic operationother than multiplication of a length by a natural number (seeEudoxus of Cnidus). This may be viewed as the first definition of the real numbers. TheMiddle Agesbrought about the acceptance ofzero,negative numbers, integers, andfractionalnumbers, first byIndianandChinese mathematicians, and then byArabic mathematicians, who were also the first to treat irrational numbers as algebraic objects (the latter being made possible by the development of algebra).[8]Arabic mathematicians merged the concepts of "number" and "magnitude" into a more general idea of real numbers.[9]The Egyptian mathematicianAbū Kāmil Shujā ibn Aslam(c.850–930)was the first to accept irrational numbers as solutions toquadratic equations, or ascoefficientsin anequation(often in the form of square roots,cube roots, andfourth roots).[10]In Europe, such numbers, not commensurable with the numerical unit, were calledirrationalorsurd("deaf"). 
In the 16th century,Simon Stevincreated the basis for moderndecimalnotation, and insisted that there is no difference between rational and irrational numbers in this regard. In the 17th century,Descartesintroduced the term "real" to describe roots of apolynomial, distinguishing them from "imaginary" numbers. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers.Lambert(1761) gave a flawed proof thatπcannot be rational;Legendre(1794) completed the proof[11]and showed thatπis not the square root of a rational number.[12]Liouville(1840) showed that neitherenore2can be a root of an integerquadratic equation, and then established the existence of transcendental numbers;Cantor(1873) extended and greatly simplified this proof.[13]Hermite(1873) proved thateis transcendental, andLindemann(1882), showed thatπis transcendental. Lindemann's proof was much simplified by Weierstrass (1885),Hilbert(1893),Hurwitz,[14]andGordan.[15] The concept that many points existed between rational numbers, such as the square root of 2, was well known to the ancient Greeks. The existence of a continuous number line was considered self-evident, but the nature of this continuity, presently calledcompleteness, was not understood. The rigor developed for geometry did not cross over to the concept of numbers until the 1800s.[16] The developers ofcalculusused real numbers andlimitswithout defining them rigorously. In hisCours d'Analyse(1821),Cauchymade calculus rigorous, but he used the real numbers without defining them, and assumed without proof that everyCauchysequence has a limit and that this limit is a real number. 
In 1854Bernhard Riemannhighlighted the limitations of calculus in the method ofFourier series, showing the need for a rigorous definition of the real numbers.[17]: 672 Beginning withRichard Dedekindin 1858, several mathematicians worked on the definition of the real numbers, includingHermann Hankel,Charles Méray, andEduard Heine, leading to the publication in 1872 of two independent definitions of real numbers, one by Dedekind, asDedekind cuts, and the other one byGeorg Cantor, as equivalence classes of Cauchy sequences.[18]Several problems were left open by these definitions, which contributed to thefoundational crisis of mathematics. Firstly both definitions suppose thatrational numbersand thusnatural numbersare rigorously defined; this was done a few years later withPeano axioms. Secondly, both definitions involveinfinite sets(Dedekind cuts and sets of the elements of a Cauchy sequence), and Cantor'sset theorywas published several years later. Thirdly, these definitions implyquantificationon infinite sets, and this cannot be formalized in the classicallogicoffirst-order predicates. This is one of the reasons for whichhigher-order logicswere developed in the first half of the 20th century. In 1874 Cantor showed that the set of all real numbers isuncountably infinite, but the set of all algebraic numbers iscountably infinite.Cantor's first uncountability proofwas different from his famousdiagonal argumentpublished in 1891. The real number system(R;+;⋅;<){\displaystyle (\mathbb {R} ;{}+{};{}\cdot {};{}<{})}can be definedaxiomaticallyup to anisomorphism, which is described hereinafter. 
There are also many ways to construct "the" real number system, and a popular approach involves starting from natural numbers, then defining rational numbers algebraically, and finally defining real numbers as equivalence classes of theirCauchy sequencesor as Dedekind cuts, which are certain subsets of rational numbers.[19]Another approach is to start from some rigorous axiomatization of Euclidean geometry (say of Hilbert or ofTarski), and then define the real number system geometrically. All these constructions of the real numbers have been shown to be equivalent, in the sense that the resulting number systems areisomorphic. LetR{\displaystyle \mathbb {R} }denote thesetof all real numbers. Then: The last property applies to the real numbers but not to the rational numbers (or toother more exotic ordered fields). For example,{x∈Q:x2<2}{\displaystyle \{x\in \mathbb {Q} :x^{2}<2\}}has a rational upper bound (e.g., 1.42), but noleastrational upper bound, because2{\displaystyle {\sqrt {2}}}is not rational. These properties imply theArchimedean property(which is not implied by other definitions of completeness), which states that the set of integers has no upper bound in the reals. In fact, if this were false, then the integers would have a least upper boundN; then,N– 1 would not be an upper bound, and there would be an integernsuch thatn>N– 1, and thusn+ 1 >N, which is a contradiction with the upper-bound property ofN. The real numbers are uniquely specified by the above properties. More precisely, given any two Dedekind-complete ordered fieldsR1{\displaystyle \mathbb {R} _{1}}andR2{\displaystyle \mathbb {R} _{2}}, there exists a unique fieldisomorphismfromR1{\displaystyle \mathbb {R} _{1}}toR2{\displaystyle \mathbb {R_{2}} }. This uniqueness allows us to think of them as essentially the same mathematical object. For another axiomatization ofR{\displaystyle \mathbb {R} }seeTarski's axiomatization of the reals. 
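The least-upper-bound property fails in Q for the set {x ∈ Q : x² < 2}, but in R the supremum exists and can be approximated, for instance by bisection. This is only an illustrative numeric sketch, not a construction of the reals:

```python
def sup_by_bisection(pred, lo: float, hi: float, steps: int = 50) -> float:
    """Approximate the least upper bound of {x : pred(x)} on [lo, hi],
    assuming pred holds at lo and fails at hi."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if pred(mid):
            lo = mid   # mid is in the set, so not yet an upper bound
        else:
            hi = mid   # mid is an upper bound; try to shrink it
    return hi

s = sup_by_bisection(lambda x: x * x < 2, 0.0, 2.0)
print(s)  # approximately 1.41421356..., i.e. sqrt(2)
```

Every iterate `hi` is an upper bound of the set and the iterates decrease toward the least one, mirroring how Dedekind completeness guarantees the supremum exists.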
The real numbers can be constructed as acompletionof the rational numbers, in such a way that a sequence defined by a decimal or binary expansion like (3; 3.1; 3.14; 3.141; 3.1415; ...)convergesto a unique real number—in this caseπ. For details and other constructions of real numbers, seeConstruction of the real numbers. In the physical sciences most physical constants, such as the universal gravitational constant, and physical variables, such as position, mass, speed, and electric charge, are modeled using real numbers. In fact the fundamental physical theories such asclassical mechanics,electromagnetism,quantum mechanics,general relativity, and thestandard modelare described using mathematical structures, typicallysmooth manifoldsorHilbert spaces, that are based on the real numbers, although actual measurements of physical quantities are of finiteaccuracy and precision. Physicists have occasionally suggested that a more fundamental theory would replace the real numbers with quantities that do not form a continuum, but such proposals remain speculative.[20] The real numbers are most often formalized using theZermelo–Fraenkelaxiomatization of set theory, but some mathematicians study the real numbers with other logical foundations of mathematics. In particular, the real numbers are also studied inreverse mathematicsand inconstructive mathematics.[21] Thehyperreal numbersas developed byEdwin Hewitt,Abraham Robinson, and others extend the set of the real numbers by introducinginfinitesimaland infinite numbers, allowing for buildinginfinitesimal calculusin a way closer to the original intuitions ofLeibniz,Euler,Cauchy, and others. Edward Nelson'sinternal set theoryenriches theZermelo–Fraenkelset theory syntactically by introducing a unary predicate "standard". In this approach, infinitesimals are (non-"standard") elements of the set of the real numbers (rather than being elements of an extension thereof, as in Robinson's theory). 
Thecontinuum hypothesisposits that the cardinality of the set of the real numbers isℵ1{\displaystyle \aleph _{1}}; i.e. the smallest infinitecardinal numberafterℵ0{\displaystyle \aleph _{0}}, the cardinality of the integers.Paul Cohenproved in 1963 that it is an axiom independent of the other axioms of set theory; that is: one may choose either the continuum hypothesis or its negation as an axiom of set theory, without contradiction. Electronic calculatorsandcomputerscannot operate on arbitrary real numbers, because finite computers cannot directly store infinitely many digits or other infinite representations. Nor do they usually even operate on arbitrarydefinable real numbers, which are inconvenient to manipulate. Instead, computers typically work with finite-precision approximations calledfloating-point numbers, a representation similar toscientific notation. The achievable precision is limited by thedata storage spaceallocated for each number, whether asfixed-point, floating-point, orarbitrary-precision numbers, or some other representation. Mostscientific computationusesbinaryfloating-point arithmetic, often a64-bit representationwith around 16 decimaldigits of precision. Real numbers satisfy theusual rules of arithmetic, butfloating-point numbers do not. The field ofnumerical analysisstudies thestabilityandaccuracyof numericalalgorithmsimplemented with approximate arithmetic. 
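A small sketch of how floating-point numbers break the usual rules of arithmetic, while exact rational arithmetic does not:

```python
from fractions import Fraction

# 64-bit binary floating point cannot represent 0.1 or 0.2 exactly:
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# exact rational arithmetic has no such rounding error:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# associativity of addition can also fail in floating point:
print((1e16 + 1.0) - 1e16)  # 0.0, not 1.0: the 1.0 is lost to rounding
```

Effects like these are exactly what numerical analysis studies when assessing the stability and accuracy of algorithms run in approximate arithmetic.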
Alternately, computer algebra systems can operate on irrational quantities exactly by manipulating symbolic formulas for them (such as √2, arctan 5, or ∫₀¹ xˣ dx) rather than their rational or decimal approximations.[22] But exact and symbolic arithmetic also have limitations: for instance, they are computationally more expensive; it is not in general possible to determine whether two symbolic expressions are equal (the constant problem); and arithmetic operations can cause exponential explosion in the size of representation of a single number (for instance, squaring a rational number roughly doubles the number of digits in its numerator and denominator, and squaring a polynomial roughly doubles its number of terms), overwhelming finite computer storage.[23]

A real number is called computable if there exists an algorithm that yields its digits. Because there are only countably many algorithms,[24] but an uncountable number of reals, almost all real numbers fail to be computable. Moreover, the equality of two computable numbers is an undecidable problem. Some constructivists accept the existence of only those reals that are computable. The set of definable numbers is broader, but still only countable.

In set theory, specifically descriptive set theory, the Baire space is used as a surrogate for the real numbers, since the latter have some topological properties (connectedness) that are a technical inconvenience. Elements of Baire space are referred to as "reals".

The set of all real numbers is denoted ℝ (blackboard bold) or R (upright bold). As it is naturally endowed with the structure of a field, the expression field of real numbers is frequently used when its algebraic properties are under consideration.
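The representation blow-up mentioned above can be observed directly with exact rational arithmetic; a small sketch using Python's fractions module (the starting value is just an illustrative rational near π):

```python
from fractions import Fraction

# Squaring an exact rational roughly doubles the number of digits in its
# numerator and denominator, so repeated squaring grows the representation
# exponentially in the number of operations.
x = Fraction(314159, 100000)   # illustrative rational near pi
digit_counts = []
for _ in range(4):
    x = x * x
    digit_counts.append((len(str(x.numerator)), len(str(x.denominator))))
print(digit_counts)   # digit counts roughly double at each step
```

Four squarings take the six-digit numerator and denominator to nearly ninety digits each, which is why symbolic systems must manage expression size carefully.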
The sets of positive real numbers and negative real numbers are often noted ℝ⁺ and ℝ⁻,[25] respectively; ℝ₊ and ℝ₋ are also used.[26] The non-negative real numbers can be noted ℝ≥0, but one often sees this set noted ℝ⁺ ∪ {0}.[25] In French mathematics, the positive real numbers and negative real numbers commonly include zero, and these sets are noted respectively ℝ₊ and ℝ₋.[26] In this understanding, the respective sets without zero are called the strictly positive real numbers and strictly negative real numbers, and are noted ℝ₊* and ℝ₋*.[26]

The notation ℝⁿ refers to the set of the n-tuples of elements of ℝ (real coordinate space), which can be identified with the Cartesian product of n copies of ℝ. It is an n-dimensional vector space over the field of the real numbers, often called the coordinate space of dimension n; this space may be identified with the n-dimensional Euclidean space as soon as a Cartesian coordinate system has been chosen in the latter. In this identification, a point of the Euclidean space is identified with the tuple of its Cartesian coordinates.

In mathematics real is used as an adjective, meaning that the underlying field is the field of the real numbers (or the real field), as in real matrix, real polynomial, and real Lie algebra. The word is also used as a noun, meaning a real number (as in "the set of all reals").

The real numbers can be generalized and extended in several different directions:
https://en.wikipedia.org/wiki/Real_numbers
In mathematics, the sign function or signum function (from signum, Latin for "sign") is a function that has the value −1, +1 or 0 according to whether the sign of a given real number is positive or negative, or the given number is itself zero. In mathematical notation the sign function is often represented as sgn x or sgn(x).[1]

The signum function of a real number x is a piecewise function which is defined as follows:[1]

sgn x := { −1 if x < 0;  0 if x = 0;  1 if x > 0. }

The law of trichotomy states that every real number must be positive, negative or zero. The signum function denotes which unique category a number falls into by mapping it to one of the values −1, +1 or 0, which can then be used in mathematical expressions or further calculations. For example:

sgn(2) = +1,  sgn(π) = +1,  sgn(−8) = −1,  sgn(−1/2) = −1,  sgn(0) = 0.

Any real number can be expressed as the product of its absolute value and its sign: x = |x| sgn x. It follows that whenever x is not equal to 0 we have sgn x = x/|x| = |x|/x. Similarly, for any real number x, |x| = x sgn x. We can also be certain that sgn(xy) = (sgn x)(sgn y), and so sgn(xⁿ) = (sgn x)ⁿ.

The signum can also be written using the Iverson bracket notation: sgn x = −[x < 0] + [x > 0]. The signum
can also be written using the floor and absolute value functions:

sgn x = ⌊x/(|x| + 1)⌋ − ⌊−x/(|x| + 1)⌋.

If 0⁰ is accepted to be equal to 1, the signum can also be written for all real numbers as

sgn x = 0^(−x + |x|) − 0^(x + |x|).

Although the sign function takes the value −1 when x is negative, the ringed point (0, −1) in the plot of sgn x indicates that this is not the case when x = 0. Instead, the value jumps abruptly to the solid point at (0, 0), where sgn(0) = 0. There is then a similar jump to sgn(x) = +1 when x is positive. Either jump demonstrates visually that the sign function sgn x is discontinuous at zero, even though it is continuous at any point where x is either positive or negative.

These observations are confirmed by any of the various equivalent formal definitions of continuity in mathematical analysis. A function f(x), such as sgn(x), is continuous at a point x = a if the value f(a) can be approximated arbitrarily closely by the sequence of values f(a₁), f(a₂), f(a₃), ..., where the aₙ make up any infinite sequence which becomes arbitrarily close to a as n becomes sufficiently large.
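The piecewise definition, the identity x = |x| sgn x, and the floor-based formula all translate directly into code; a minimal Python sketch:

```python
import math

def sgn(x):
    """Signum by the piecewise definition: -1, 0, or +1."""
    if x < 0:
        return -1
    if x > 0:
        return 1
    return 0

def sgn_floor(x):
    """The floor/absolute-value identity: since x/(|x|+1) lies strictly
    between -1 and 1 and has the sign of x, each floor term is -1 or 0."""
    t = abs(x) + 1
    return math.floor(x / t) - math.floor(-x / t)

# The examples from the text, plus the identity x = |x| * sgn(x):
for x in (2, math.pi, -8, -0.5, 0):
    assert abs(x) * sgn(x) == x
    assert sgn_floor(x) == sgn(x)
```

Both formulations agree everywhere, including at zero, where each floor term vanishes.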
In the notation of mathematical limits, continuity of f at a requires that f(aₙ) → f(a) as n → ∞ for any sequence (aₙ) for which aₙ → a. The arrow symbol can be read to mean approaches, or tends to, and it applies to the sequence as a whole.

This criterion fails for the sign function at a = 0. For example, we can choose aₙ to be the sequence 1, 1/2, 1/3, 1/4, ..., which tends towards zero as n increases towards infinity. In this case, aₙ → a as required, but sgn(a) = 0 and sgn(aₙ) = +1 for each n, so that sgn(aₙ) → 1 ≠ sgn(a). This counterexample confirms more formally the discontinuity of sgn x at zero that is visible in the plot.

Despite the sign function having a very simple form, the step change at zero causes difficulties for traditional calculus techniques, which are quite stringent in their requirements. Continuity is a frequent constraint. One solution can be to approximate the sign function by a smooth continuous function; others might involve less stringent approaches that build on classical methods to accommodate larger classes of function.
The signum function can be given as a number of different (pointwise) limits:

sgn x = lim_{n→∞} (1 − 2^(−nx)) / (1 + 2^(−nx))
      = lim_{n→∞} (2/π) arctan(nx)
      = lim_{n→∞} tanh(nx)
      = lim_{ε→0} x / √(x² + ε²).

Here, tanh is the hyperbolic tangent, and arctan is the inverse tangent. The last of these is the derivative of √(x² + ε²). This is inspired by the fact that the above is exactly equal to sgn x for all nonzero x if ε = 0, and has the advantage of simple generalization to higher-dimensional analogues of the sign function (for example, the partial derivatives of √(x² + y²)). See Heaviside step function § Analytic approximations.

The signum function sgn x is differentiable everywhere except when x = 0. Its derivative is zero when x is non-zero:

d(sgn x)/dx = 0  for x ≠ 0.

This follows from the differentiability of any constant function, for which the derivative is always zero on its domain of definition. The signum sgn x acts as a constant function when it is restricted to the negative open region x < 0, where it equals −1. It can similarly be regarded as a constant function within the positive open region x > 0, where the corresponding constant is +1. Although these are two different constant functions, their derivative is equal to zero in each case.
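The pointwise convergence of the smooth approximation tanh(nx) can be observed numerically at fixed values of x; a sketch (n = 1000 is just an illustrative choice):

```python
import math

# tanh(n*x) approaches sgn(x) pointwise as n grows; at n = 1000 the
# approximation is already within 1e-6 of the limit for |x| >= 0.01.
def sgn(x):
    return (x > 0) - (x < 0)

for x in (-2.0, -0.01, 0.01, 2.0):
    assert abs(math.tanh(1000 * x) - sgn(x)) < 1e-6
assert math.tanh(1000 * 0.0) == 0 == sgn(0.0)   # exact agreement at zero
```

Note the convergence is pointwise, not uniform: for any fixed n there are small |x| where tanh(nx) is still far from ±1.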
It is not possible to define a classical derivative at x = 0, because there is a discontinuity there. Although it is not differentiable at x = 0 in the ordinary sense, under the generalized notion of differentiation in distribution theory, the derivative of the signum function is two times the Dirac delta function. This can be demonstrated using the identity[2]

sgn x = 2H(x) − 1,

where H(x) is the Heaviside step function using the standard H(0) = 1/2 formalism. Using this identity, it is easy to derive the distributional derivative:[3]

d(sgn x)/dx = 2 dH(x)/dx = 2δ(x).

The signum function has a definite integral between any pair of finite values a and b, even when the interval of integration includes zero. The resulting integral for a and b is then equal to the difference between their absolute values:

∫ₐᵇ (sgn x) dx = |b| − |a|.

In fact, the signum function is the derivative of the absolute value function, except where there is an abrupt change in gradient at zero:

d|x|/dx = sgn x  for x ≠ 0.

We can understand this as before by considering the definition of the absolute value |x| on the separate regions x > 0 and x < 0. For example, the absolute value function is identical to x in the region x > 0, whose derivative is the constant value +1, which equals the value of sgn x there. Because the absolute value is a convex function, there is at least one subderivative at every point, including at the origin.
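The definite-integral identity ∫ₐᵇ sgn x dx = |b| − |a| can be checked with a simple midpoint Riemann sum; a numeric sketch (the step count is an arbitrary illustrative choice):

```python
# Midpoint Riemann sum of sgn over [a, b]; because sgn is piecewise
# constant, the sum converges to |b| - |a| even when the interval
# straddles the jump at 0.
def riemann_sgn(a, b, n=100000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += ((x > 0) - (x < 0)) * h
    return total

assert abs(riemann_sgn(-2.0, 3.0) - (abs(3.0) - abs(-2.0))) < 1e-6
```

Geometrically, the negative part of the interval contributes −|a| and the positive part contributes |b|, so the jump at zero does not affect the integral.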
Everywhere except zero, the resulting subdifferential consists of a single value, equal to the value of the sign function. In contrast, there are many subderivatives at zero, with just one of them taking the value sgn(0) = 0. A subderivative value 0 occurs here because the absolute value function is at a minimum. The full family of valid subderivatives at zero constitutes the subdifferential interval [−1, 1], which might be thought of informally as "filling in" the graph of the sign function with a vertical line through the origin, making it continuous as a two-dimensional curve.

In integration theory, the signum function is a weak derivative of the absolute value function. Weak derivatives are equivalent if they are equal almost everywhere, making them impervious to isolated anomalies at a single point. This includes the change in gradient of the absolute value function at zero, which prohibits there being a classical derivative.

The Fourier transform of the signum function is[4]

PV ∫₋∞^∞ (sgn x) e^(−ikx) dx = 2/(ik)  for k ≠ 0,

where PV means taking the Cauchy principal value.

The signum function can be generalized to complex numbers as sgn z = z/|z| for any complex number z except z = 0. The signum of a given complex number z is the point on the unit circle of the complex plane that is nearest to z. Then, for z ≠ 0,

sgn z = e^(i arg z),

where arg is the complex argument function.
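The complex generalization sgn z = z/|z| = e^(i arg z) is easy to verify with Python's built-in complex type; a sketch:

```python
import cmath

# Complex signum: the nearest point on the unit circle to z (z != 0).
def csign(z):
    return 0 if z == 0 else z / abs(z)

z = 3 + 4j                      # |z| = 5
s = csign(z)
assert abs(abs(s) - 1.0) < 1e-12                          # unimodular
assert cmath.isclose(s, cmath.exp(1j * cmath.phase(z)))   # e^{i arg z}
assert csign(0) == 0
```

For real inputs this reduces to the ordinary signum, since the argument of a positive real is 0 and that of a negative real is π.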
For reasons of symmetry, and to keep this a proper generalization of the signum function on the reals, one usually also defines, for z = 0 in the complex domain: sgn(0 + 0i) = 0.

Another generalization of the sign function for real and complex expressions is csgn,[5] which is defined as:

csgn z = { 1 if Re(z) > 0;  −1 if Re(z) < 0;  sgn Im(z) if Re(z) = 0, }

where Re(z) is the real part of z and Im(z) is the imaginary part of z. We then have (for z ≠ 0):

csgn z = z/√(z²) = √(z²)/z.

Thanks to the polar decomposition theorem, a matrix A ∈ 𝕂ⁿˣⁿ (with n ∈ ℕ and 𝕂 ∈ {ℝ, ℂ}) can be decomposed as a product QP, where Q is a unitary matrix and P is a self-adjoint, or Hermitian, positive definite matrix, both in 𝕂ⁿˣⁿ. If A is invertible then such a decomposition is unique, and Q plays the role of A's signum. A dual construction is given by the decomposition A = SR, where R is unitary, but generally different from Q.
This leads to each invertible matrix having a unique left-signum Q and right-signum R.

In the special case where 𝕂 = ℝ, n = 2, and the (invertible) matrix A = [[a, −b], [b, a]] identifies with the (nonzero) complex number a + ib = c, the unitary factor satisfies Q = [[a, −b], [b, a]]/|c| (with P = |c|·I) and identifies with the complex signum of c, sgn c = c/|c|. In this sense, polar decomposition generalizes to matrices the signum-modulus decomposition of complex numbers.

At real values of x, it is possible to define a generalized function version of the signum function, ε(x), such that ε(x)² = 1 everywhere, including at the point x = 0, unlike sgn, for which (sgn 0)² = 0. This generalized signum allows construction of the algebra of generalized functions, but the price of such generalization is the loss of commutativity. In particular, the generalized signum anticommutes with the Dirac delta function:[6]

ε(x)δ(x) + δ(x)ε(x) = 0;

in addition, ε(x) cannot be evaluated at x = 0, and the special name ε is necessary to distinguish it from the function sgn. (ε(0) is not defined, but sgn 0 = 0.)
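The polar decomposition and its relation to the complex signum can be sketched with NumPy via the SVD (one standard construction, not the only algorithm): if A = U S V*, then Q = U V* is unitary and P = V S V* is positive semidefinite, with A = QP.

```python
import numpy as np

# Polar decomposition A = Q P computed from the SVD A = U S V*.
def polar(A):
    U, S, Vh = np.linalg.svd(A)
    Q = U @ Vh                          # unitary factor (the "signum")
    P = Vh.conj().T @ np.diag(S) @ Vh   # positive-semidefinite factor
    return Q, P

# The 2x2 real matrix identified with c = 3 + 4i, so |c| = 5:
A = np.array([[3.0, -4.0], [4.0, 3.0]])
Q, P = polar(A)
assert np.allclose(Q @ P, A)       # the decomposition reproduces A
assert np.allclose(Q, A / 5.0)     # Q is the matrix analogue of sgn c
```

Here P comes out as 5·I, mirroring the modulus |c| = 5, so the factorization is exactly the signum-modulus decomposition of c written as a matrix.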
https://en.wikipedia.org/wiki/Sign_function
In mathematics, the sign of a real number is its property of being either positive, negative, or 0. Depending on local conventions, zero may be considered as having its own unique sign, having no sign, or having both positive and negative sign. In some contexts, it makes sense to distinguish between a positive and a negative zero.

In mathematics and physics, the phrase "change of sign" is associated with exchanging an object for its additive inverse (multiplication with −1, negation), an operation which is not restricted to real numbers. It applies among other objects to vectors, matrices, and complex numbers, which are not prescribed to be only either positive, negative, or zero. The word "sign" is also often used to indicate binary aspects of mathematical or scientific objects, such as odd and even (sign of a permutation), sense of orientation or rotation (cw/ccw), one-sided limits, and other concepts described in § Other meanings below.

Numbers from various number systems, like integers, rationals, complex numbers, quaternions, octonions, ... may have multiple attributes that fix certain properties of a number. A number system that bears the structure of an ordered ring contains a unique number that, when added to any number, leaves the latter unchanged. This unique number is known as the system's additive identity element and is generally denoted 0. For example, the integers have the structure of an ordered ring. Because of the total order in this ring, there are numbers greater than zero, called the positive numbers. Another property required for a ring to be ordered is that, for each positive number, there exists a unique corresponding number less than 0 whose sum with the original positive number is 0. These numbers less than 0 are called the negative numbers. The numbers in each such pair are their respective additive inverses.
This attribute of a number, being exclusively either zero (0), positive (+), or negative (−), is called its sign, and is often encoded to the real numbers 0, 1, and −1, respectively (similar to the way the sign function is defined).[1] Since rational and real numbers are also ordered rings (in fact ordered fields), the sign attribute also applies to these number systems.

When a minus sign is used in between two numbers, it represents the binary operation of subtraction. When a minus sign is written before a single number, it represents the unary operation of yielding the additive inverse (sometimes called negation) of the operand. Abstractly then, the difference of two numbers is the sum of the minuend with the additive inverse of the subtrahend. While 0 is its own additive inverse (−0 = 0), the additive inverse of a positive number is negative, and the additive inverse of a negative number is positive. A double application of this operation is written as −(−3) = 3. The plus sign is predominantly used in algebra to denote the binary operation of addition, and only rarely to emphasize the positivity of an expression.

In common numeral notation (used in arithmetic and elsewhere), the sign of a number is often made explicit by placing a plus or a minus sign before the number. For example, +3 denotes "positive three", and −3 denotes "negative three" (algebraically: the additive inverse of 3). Without specific context (or when no explicit sign is given), a number is interpreted by default as positive. This notation establishes a strong association of the minus sign "−" with negative numbers, and the plus sign "+" with positive numbers.

Within the convention of zero being neither positive nor negative, a specific sign-value 0 may be assigned to the number value 0. This is exploited in the sgn function, as defined for real numbers.[1] In arithmetic, +0 and −0 both denote the same number 0.
There is generally no danger of confusing the value with its sign, although the convention of assigning both signs to 0 does not immediately allow for this discrimination. In certain European countries, e.g. in Belgium and France, 0 is considered to be both positive and negative, following the convention set forth by Nicolas Bourbaki.[2]

In some contexts, such as floating-point representations of real numbers within computers, it is useful to consider signed versions of zero, with signed zeros referring to different, discrete number representations (see signed number representations for more).

The symbols +0 and −0 rarely appear as substitutes for 0⁺ and 0⁻, used in calculus and mathematical analysis for one-sided limits (right-sided limit and left-sided limit, respectively). This notation refers to the behaviour of a function as its real input variable approaches 0 along positive (resp., negative) values; the two limits need not exist or agree.

When 0 is said to be neither positive nor negative, the following phrases may refer to the sign of a number: When 0 is said to be both positive and negative,[citation needed] modified phrases are used to refer to the sign of a number: For example, the absolute value of a real number is always "non-negative", but is not necessarily "positive" in the first interpretation, whereas in the second interpretation it is called "positive", though not necessarily "strictly positive".

The same terminology is sometimes used for functions that yield real or other signed values. For example, a function would be called a positive function if its values are positive for all arguments of its domain, or a non-negative function if all of its values are non-negative.

Complex numbers cannot be ordered compatibly with their arithmetic, so they cannot carry the structure of an ordered ring and, accordingly, cannot be partitioned into positive and negative complex numbers. They do, however, share an attribute with the reals, which is called absolute value or magnitude.
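The signed zeros mentioned above can be observed directly in any IEEE-754 language; a Python sketch using math.copysign to read the stored sign bit:

```python
import math

# IEEE-754 floats carry a sign bit separate from the magnitude, so
# +0.0 and -0.0 are distinct representations that nevertheless
# compare as equal values.
pos_zero, neg_zero = 0.0, -0.0
assert pos_zero == neg_zero                    # equivalent as values
assert math.copysign(1.0, pos_zero) == 1.0     # but the sign bits differ
assert math.copysign(1.0, neg_zero) == -1.0
assert str(neg_zero) == "-0.0"                 # the distinction is visible
```

This is exactly the situation described above: two discrete representations of one mathematical value, distinguishable only by inspecting the representation itself.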
Magnitudes are always non-negative real numbers, and to any non-zero number there belongs a positive real number, its absolute value. For example, the absolute value of −3 and the absolute value of 3 are both equal to 3. This is written in symbols as |−3| = 3 and |3| = 3.

In general, any arbitrary real value can be specified by its magnitude and its sign: using the standard encoding, any real value is the product of its magnitude and its sign. This relation can be generalized to define a sign for complex numbers.

Since the real and complex numbers both form a field and contain the positive reals, they also contain the reciprocals of the magnitudes of all non-zero numbers. This means that any non-zero number may be multiplied with the reciprocal of its magnitude, that is, divided by its magnitude. It is immediate that the quotient of any non-zero real number by its magnitude yields exactly its sign. By analogy, the sign of a complex number z can be defined as the quotient of z and its magnitude |z|. The sign of a complex number is the exponential of the product of its argument with the imaginary unit, e^(iφ), and thus represents in some sense its complex argument. This is to be compared to the sign of real numbers, except with e^(iπ) = −1. For the definition of a complex sign-function, see § Complex sign function below.

When dealing with numbers, it is often convenient to have their sign available as a number. This is accomplished by functions that extract the sign of any number, and map it to a predefined value before making it available for further calculations. For example, it might be advantageous to formulate an intricate algorithm for positive values only, and take care of the sign only afterwards.
The sign function or signum function extracts the sign of a real number, by mapping the set of real numbers to the set of the three reals {−1, 0, 1}. It can be defined as follows:[1]

sgn: ℝ → {−1, 0, 1};  x ↦ sgn(x) = { −1 if x < 0;  0 if x = 0;  1 if x > 0. }

Thus sgn(x) is 1 when x is positive, and sgn(x) is −1 when x is negative. For non-zero values of x, this function can also be defined by the formula sgn(x) = x/|x| = |x|/x, where |x| is the absolute value of x.

While a real number has a 1-dimensional direction, a complex number has a 2-dimensional direction. The complex sign function requires the magnitude of its argument z = x + iy, which can be calculated as |z| = √(z z̄) = √(x² + y²).

Analogous to above, the complex sign function extracts the complex sign of a complex number by mapping the set of non-zero complex numbers to the set of unimodular complex numbers, and 0 to 0: {z ∈ ℂ : |z| = 1} ∪ {0}. It may be defined as follows: let z also be expressed by its magnitude and one of its arguments φ as z = |z|·e^(iφ); then[3]

sgn(z) = { 0 for z = 0;  z/|z| = e^(iφ) otherwise. }

This definition may also be recognized as a normalized vector, that is, a vector whose direction is unchanged and whose length is fixed to unity. If the original value was (R, θ) in polar form, then sign(R, θ) is (1, θ). Extension of sign() or signum() to any number of dimensions is obvious, but this has already been defined as normalizing a vector.
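The closing remark, that the n-dimensional sign is just vector normalization, can be sketched in a few lines:

```python
import math

# The "sign" of a nonzero vector is the unit vector in its direction,
# v/|v|; the zero vector is mapped to the zero vector by convention.
def vector_sign(v):
    norm = math.sqrt(sum(c * c for c in v))
    return [0.0] * len(v) if norm == 0 else [c / norm for c in v]

u = vector_sign([3.0, 4.0])                      # |v| = 5
assert u == [0.6, 0.8]                           # direction preserved
assert abs(sum(c * c for c in u) - 1.0) < 1e-12  # length fixed to unity
assert vector_sign([0.0, 0.0]) == [0.0, 0.0]
```

For a 1-dimensional vector this reduces to the real signum, and for a 2-dimensional vector read as x + iy it reduces to the complex sign z/|z|.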
In situations where there are exactly two possibilities on equal footing for an attribute, these are often labelled by convention as plus and minus, respectively. In some contexts, the choice of this assignment (i.e., which range of values is considered positive and which negative) is natural, whereas in other contexts the choice is arbitrary, making an explicit sign convention necessary, the only requirement being consistent use of the convention.

In many contexts, it is common to associate a sign with the measure of an angle, particularly an oriented angle or an angle of rotation. In such a situation, the sign indicates whether the angle is in the clockwise or counterclockwise direction. Though different conventions can be used, it is common in mathematics to have counterclockwise angles count as positive, and clockwise angles count as negative.[4] It is also possible to associate a sign to an angle of rotation in three dimensions, assuming that the axis of rotation has been oriented. Specifically, a right-handed rotation around an oriented axis typically counts as positive, while a left-handed rotation counts as negative. An angle which is the negative of a given angle has an equal arc, but the opposite axis.[5]

When a quantity x changes over time, the change in the value of x is typically defined by the equation

Δx = x_final − x_initial.

Using this convention, an increase in x counts as positive change, while a decrease of x counts as negative change. In calculus, this same convention is used in the definition of the derivative. As a result, any increasing function has positive derivative, while any decreasing function has negative derivative.

When studying one-dimensional displacements and motions in analytic geometry and physics, it is common to label the two possible directions as positive and negative.
Because the number line is usually drawn with positive numbers to the right and negative numbers to the left, a common convention is for motions to the right to be given a positive sign, and for motions to the left to be given a negative sign. On the Cartesian plane, the rightward and upward directions are usually thought of as positive, with rightward being the positive x-direction, and upward being the positive y-direction. If a displacement vector is separated into its vector components, then the horizontal part will be positive for motion to the right and negative for motion to the left, while the vertical part will be positive for motion upward and negative for motion downward. Likewise, a negative speed (rate of change of displacement) implies a velocity in the opposite direction, i.e., receding instead of advancing; a special case is the radial speed. In 3D space, notions related to sign can be found in the two normal orientations and orientability in general.

In computing, an integer value may be either signed or unsigned, depending on whether the computer is keeping track of a sign for the number. By restricting an integer variable to non-negative values only, one more bit can be used for storing the value of a number. Because of the way integer arithmetic is done within computers, signed number representations usually do not store the sign as a single independent bit, instead using e.g. two's complement. In contrast, real numbers are stored and manipulated as floating point values. The floating point values are represented using three separate components: mantissa, exponent, and sign. Given this separate sign bit, it is possible to represent both positive and negative zero. Most programming languages normally treat positive zero and negative zero as equivalent values, although they provide means by which the distinction can be detected.

In addition to the sign of a real number, the word sign is also used in various related ways throughout mathematics and other sciences:
https://en.wikipedia.org/wiki/Sign_(mathematics)
In computing, signed number representations are required to encode negative numbers in binary number systems. In mathematics, negative numbers in any base are represented by prefixing them with a minus sign ("−"). However, in RAM or CPU registers, numbers are represented only as sequences of bits, without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are: sign–magnitude, ones' complement, two's complement, and offset binary. Some of the alternative methods use implicit instead of explicit signs, such as negative binary, using the base −2. Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes. There is no definitive criterion by which any of the representations is universally superior. For integers, the representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement.

The early days of digital computing were marked by competing ideas about both hardware technology and mathematics technology (numbering systems). One of the great debates was the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions.[citation needed] One camp supported two's complement, the system that is dominant today. Another camp supported ones' complement, where a negative value is formed by inverting all of the bits in its positive equivalent. A third group supported sign–magnitude, where a value is changed from positive to negative simply by toggling the word's highest-order bit.

There were arguments for and against each of the systems. Sign–magnitude allowed for easier tracing of memory dumps (a common process in the 1960s), as small numeric values use fewer 1 bits.
These sign–magnitude systems did ones' complement math internally, so numbers would have to be converted to ones' complement values when they were transmitted from a register to the math unit and then converted back to sign–magnitude when the result was transmitted back to the register. The electronics required more gates than the other systems – a key concern when the cost and packaging of discrete transistors were critical. IBM was one of the early supporters of sign–magnitude, with their 704, 709, and 709x series computers being perhaps the best-known systems to use it.

Ones' complement allowed for somewhat simpler hardware designs, as there was no need to convert values when passed to and from the math unit. But it also shared an undesirable characteristic with sign–magnitude: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero: when used as an operand in any calculation, the result will be the same whether an operand is positive or negative zero. The disadvantage is that the existence of two forms of the same value necessitates two comparisons when checking for equality with zero. Ones' complement subtraction can also result in an end-around borrow (described below). It can be argued that this makes the addition and subtraction logic more complicated, or that it makes it simpler, as a subtraction requires simply inverting the bits of the second operand as it is passed to the adder. The PDP-1, CDC 160 series, CDC 3000 series, CDC 6000 series, UNIVAC 1100 series, and LINC computer use ones' complement representation.

Two's complement is the easiest to implement in hardware, which may be the ultimate reason for its widespread popularity.[1] Processors on the early mainframes often consisted of thousands of transistors, so eliminating a significant number of transistors was a significant cost savings.
Mainframes such as the IBM System/360, the GE-600 series,[2] and the PDP-6 and PDP-10 use two's complement, as did minicomputers such as the PDP-5 and PDP-8 and the PDP-11 and VAX machines. The architects of the early integrated-circuit-based CPUs (Intel 8080, etc.) also chose to use two's complement math. As IC technology advanced, two's complement technology was adopted in virtually all processors, including x86,[3] m68k, Power ISA,[4] MIPS, SPARC, ARM, Itanium, PA-RISC, and DEC Alpha.

In the sign–magnitude representation, also called sign-and-magnitude or signed magnitude, a signed number is represented by the bit pattern corresponding to the sign of the number for the sign bit (often the most significant bit, set to 0 for a positive number and to 1 for a negative number), and the magnitude of the number (or absolute value) for the remaining bits. For example, in an eight-bit byte, only seven bits represent the magnitude, which can range from 0000000 (0) to 1111111 (127). Thus numbers ranging from −127₁₀ to +127₁₀ can be represented once the sign bit (the eighth bit) is added. For example, −43₁₀ encoded in an eight-bit byte is 10101011, while 43₁₀ is 00101011. Using sign–magnitude representation has multiple consequences which make it more intricate to implement.[5]

This approach is directly comparable to the common way of showing a sign (placing a "+" or "−" next to the number's magnitude). Some early binary computers (e.g., IBM 7090) use this representation, perhaps because of its natural relation to common usage. Sign–magnitude is the most common way of representing the significand in floating-point values.

In the ones' complement representation,[6] a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number. Like sign–magnitude representation, ones' complement has two representations of 0: 00000000 (+0) and 11111111 (−0).[7] As an example, the ones' complement form of 00101011 (43₁₀) becomes 11010100 (−43₁₀).
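Decoding a sign–magnitude pattern reverses the description above: strip off the sign bit, then apply it to the magnitude. A hedged Python sketch (the function name is my own, chosen for illustration):

```python
def decode_sign_magnitude(pattern: int, bits: int = 8) -> int:
    """Sketch: interpret a bit pattern as a sign-magnitude integer."""
    sign_bit = pattern >> (bits - 1)          # 0 = positive, 1 = negative
    magnitude = pattern & ((1 << (bits - 1)) - 1)
    return -magnitude if sign_bit else magnitude

assert decode_sign_magnitude(0b10101011) == -43
assert decode_sign_magnitude(0b00101011) == 43
# The scheme's two encodings of zero: 00000000 (+0) and 10000000 (-0).
assert decode_sign_magnitude(0b00000000) == 0
assert decode_sign_magnitude(0b10000000) == 0
```

The last two assertions show concretely why a hardware zero-check for this representation needs two comparisons.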
The range of signed numbers using ones' complement extends from −(2^(N−1) − 1) to (2^(N−1) − 1), together with ±0. A conventional eight-bit byte covers −127₁₀ to +127₁₀, with zero being either 00000000 (+0) or 11111111 (−0).

To add two numbers represented in this system, one does a conventional binary addition, but it is then necessary to do an end-around carry: that is, add any resulting carry back into the resulting sum.[8] To see why this is necessary, consider the addition of −1 (11111110) to +2 (00000010): the first binary addition gives 00000000, which is incorrect. The correct result (00000001) only appears when the carry is added back in.

A remark on terminology: the system is referred to as "ones' complement" because the negation of a positive value x (represented as the bitwise NOT of x) can also be formed by subtracting x from the ones' complement representation of zero, that is, a long sequence of ones (−0). Two's complement arithmetic, on the other hand, forms the negation of x by subtracting x from a single large power of two that is congruent to +0.[9] Therefore, ones' complement and two's complement representations of the same negative value will differ by one.

Note that the ones' complement representation of a negative number can be obtained from the sign–magnitude representation merely by bitwise complementing the magnitude (inverting all the bits after the first). For example, the decimal number −125 with its sign–magnitude representation 11111101 can be represented in ones' complement form as 10000010.

In the two's complement representation, a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number plus one, i.e. to the ones' complement plus one. It circumvents the problems of multiple representations of 0 and the need for the end-around carry of the ones' complement representation.
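The end-around carry rule can be reproduced in a few lines. A Python sketch of the addition procedure described above, checked against the article's −1 + 2 example:

```python
def ones_complement_add(a: int, b: int, bits: int = 8) -> int:
    """Sketch of ones'-complement addition with the end-around carry."""
    mask = (1 << bits) - 1
    total = (a & mask) + (b & mask)
    if total > mask:                  # a carry out of the top bit...
        total = (total & mask) + 1    # ...is added back into the sum
    return total

minus_one = 0b11111110  # ones' complement encoding of -1
plus_two = 0b00000010
assert ones_complement_add(minus_one, plus_two) == 0b00000001  # +1
```

Without the carry-back step, the function would return 00000000 for this example, which is the incorrect intermediate result the article describes.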
This can also be thought of as the most significant bit representing the inverse of its value in an unsigned integer; in an 8-bit unsigned byte, the most significant bit represents the 128s place, whereas in two's complement that bit would represent −128.

In two's complement, there is only one zero, represented as 00000000. Negating a number (whether negative or positive) is done by inverting all the bits and then adding one to that result.[10] This reflects the ring structure on all integers modulo 2^N, namely ℤ/2^Nℤ. Addition of a pair of two's-complement integers is the same as addition of a pair of unsigned numbers (except for detection of overflow, if that is done); the same is true for subtraction and even for the N lowest significant bits of a product (value of multiplication). For instance, a two's-complement addition of 127 and −128 gives the same binary bit pattern as an unsigned addition of 127 and 128, as can be seen from the 8-bit two's complement table.

An easier way to state the negation of a number in two's complement is: invert all the bits, then add one. For example, for +2, which is 00000010 in binary, the negation is ~00000010 + 1 = 11111101 + 1 = 11111110, i.e. −2 (the ~ character is the C bitwise NOT operator, so ~X means "invert all the bits in X").

In the offset binary representation, also called excess-K or biased, a signed number is represented by the bit pattern corresponding to the unsigned number plus K, with K being the biasing value or offset. Thus 0 is represented by K, and −K is represented by an all-zero bit pattern. This can be seen as a slight modification and generalization of the aforementioned two's complement, which is virtually the excess-2^(N−1) representation with negated most significant bit.

Biased representations are now primarily used for the exponent of floating-point numbers. The IEEE 754 floating-point standard defines the exponent field of a single-precision (32-bit) number as an 8-bit excess-127 field. The double-precision (64-bit) exponent field is an 11-bit excess-1023 field; see exponent bias.
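The ~x + 1 rule is easy to check mechanically. A Python sketch of two's-complement negation for an 8-bit word (the helper name is illustrative, not from the source):

```python
def negate_twos_complement(x: int, bits: int = 8) -> int:
    """Sketch: negate an 8-bit two's-complement pattern via ~x + 1 (mod 2^bits)."""
    mask = (1 << bits) - 1
    return ((x ^ mask) + 1) & mask  # x ^ mask inverts all bits within the word

plus_two = 0b00000010
assert negate_twos_complement(plus_two) == 0b11111110  # the pattern for -2
# Negating twice returns the original value, as expected of arithmetic mod 2^8.
assert negate_twos_complement(negate_twos_complement(plus_two)) == plus_two
# Zero is its own negation, so there is only one zero in this system.
assert negate_twos_complement(0b00000000) == 0b00000000
```

The final masking step is what models the ring ℤ/2^Nℤ mentioned above: all arithmetic wraps modulo 2^N.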
Offset binary also had use for binary-coded decimal numbers as excess-3.

In the base −2 representation, a signed number is represented using a number system with base −2. In conventional binary number systems, the base, or radix, is 2; thus the rightmost bit represents 2^0, the next bit represents 2^1, the next bit 2^2, and so on. However, a binary number system with base −2 is also possible. The rightmost bit represents (−2)^0 = +1, the next bit represents (−2)^1 = −2, the next bit (−2)^2 = +4, and so on, with alternating sign. The numbers that can be represented with four bits are shown in the comparison table below.

The range of numbers that can be represented is asymmetric. If the word has an even number of bits, the magnitude of the largest negative number that can be represented is twice as large as the largest positive number that can be represented, and vice versa if the word has an odd number of bits.

The following table shows the positive and negative integers that can be represented using four bits. The same table can also be viewed from the perspective of "given these binary bits, what is the number as interpreted by the representation system".

Google's Protocol Buffers "zig-zag encoding" is a system similar to sign–magnitude, but uses the least significant bit to represent the sign and has a single representation of zero. This allows a variable-length quantity encoding intended for nonnegative (unsigned) integers to be used efficiently for signed integers.[11]

A similar method is used in the Advanced Video Coding/H.264 and High Efficiency Video Coding/H.265 video compression standards to extend exponential-Golomb coding to negative numbers. In that extension, the least significant bit is almost a sign bit; zero has the same least significant bit (0) as all the negative numbers. This choice results in the largest magnitude representable positive number being one higher than the largest magnitude negative number, unlike in two's complement or the Protocol Buffers zig-zag encoding.
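Both base −2 and zig-zag encoding are mechanical enough to sketch. The Python functions below are illustrative only; `zigzag` follows the Protocol Buffers formula (n << 1) ^ (n >> 31) for 32-bit values, which maps 0, −1, 1, −2, … to 0, 1, 2, 3, …:

```python
def to_negabinary(n: int) -> str:
    """Sketch: represent an integer in base -2 (negabinary)."""
    if n == 0:
        return '0'
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:        # force each digit to 0 or 1
            n += 1
            r += 2
        digits.append(str(r))
    return ''.join(reversed(digits))

def zigzag(n: int, bits: int = 32) -> int:
    """Sketch of Protocol Buffers zig-zag encoding: sign in the low bit."""
    return (n << 1) ^ (n >> (bits - 1))

assert to_negabinary(-2) == '10'   # (-2)^1
assert to_negabinary(3) == '111'   # 4 - 2 + 1
assert zigzag(0) == 0 and zigzag(-1) == 1 and zigzag(1) == 2
```

Note that every integer, positive or negative, has a negabinary representation without any sign bit, which is the sense in which the sign is "implicit" in this system.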
Another approach is to give each digit a sign, yielding the signed-digit representation. For instance, in 1726, John Colson advocated reducing expressions to "small numbers", numerals 1, 2, 3, 4, and 5. In 1840, Augustin Cauchy also expressed preference for such modified decimal numbers to reduce errors in computation.
https://en.wikipedia.org/wiki/Signed_number_representations
The dash is a punctuation mark consisting of a long horizontal line. It is similar in appearance to the hyphen but is longer and sometimes higher from the baseline. The most common versions are the en dash –, generally longer than the hyphen but shorter than the minus sign; the em dash —, longer than either the en dash or the minus sign; and the horizontal bar ―, whose length varies across typefaces but tends to be between those of the en and em dashes.[a]

Typical uses of dashes are to mark a break in a sentence, to set off an explanatory remark (similar to parenthesis), or to show spans of time or ranges of values. The em dash is sometimes used as a leading character to identify the source of a quoted text.

In the early 17th century, in Okes-printed plays of William Shakespeare, dashes are attested that indicate a thinking pause, interruption, mid-speech realization, or change of subject.[1] The dashes are variously longer ⸺ (as in King Lear reprinted 1619) or composed of hyphens --- (as in Othello printed 1622); moreover, the dashes are often, but not always, prefixed by a comma, colon, or semicolon.[2][3][1][4]

In 1733, in Jonathan Swift's On Poetry, the terms break and dash are attested for the ⸺ and — marks:[5]

Blot out, correct, insert, refine,
Enlarge, diminish, interline;
Be mindful, when Invention fails;
To scratch your Head, and bite your Nails.
Your poem finish'd, next your Care
Is needful, to transcribe it fair.
In modern Wit all printed Trash, is
Set off with num'rous Breaks⸺and Dashes—

Usage varies both within English and within other languages, but the usual conventions for the most common dashes in printed English text are these:

Glitter, felt, yarn, and buttons—his kitchen looked as if a clown had exploded.
A flock of sparrows—some of them juveniles—alighted and sang.

Glitter, felt, yarn, and buttons – his kitchen looked as if a clown had exploded.
A flock of sparrows – some of them juveniles – alighted and sang.
The French and Indian War (1754–1763) was fought in western Pennsylvania and along the present US–Canada border.

Seven social sins: politics without principles, wealth without work, pleasure without conscience, knowledge without character, commerce without morality, science without humanity, and worship without sacrifice.

The figure dash ‒ (U+2012 FIGURE DASH) has the same width as a numerical digit. (Many computer fonts have digits of equal width.[9]) It is used within numbers, such as the phone number 555‒0199, especially in columns so as to maintain alignment. In contrast, the en dash – (U+2013 EN DASH) is generally used for a range of values.[10]

The minus sign − (U+2212 MINUS SIGN) glyph is generally set a little higher, so as to be level with the horizontal bar of the plus sign. In informal usage, the hyphen-minus - (U+002D HYPHEN-MINUS), provided as standard on most keyboards, is often used instead of the figure dash.

In TeX, the standard fonts have no figure dash; however, the digits normally all have the same width as the en dash, so an en dash can be a substitution for the figure dash. In XeLaTeX, one can use \char"2012.[11] The Linux Libertine font also has the figure dash glyph.
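The code points quoted in this section can be verified with Python's standard unicodedata module; this small check is illustrative, not part of the source:

```python
import unicodedata

# The dash-like characters discussed above, keyed by their official Unicode names.
chars = {
    'HYPHEN-MINUS': '\u002d',
    'FIGURE DASH': '\u2012',
    'EN DASH': '\u2013',
    'EM DASH': '\u2014',
    'MINUS SIGN': '\u2212',
}
for name, ch in chars.items():
    # unicodedata.name returns the character's official Unicode name.
    assert unicodedata.name(ch) == name
```

Printing these characters side by side in a monospaced terminal also makes the width differences described above easy to see.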
The en dash, en rule, or nut dash[12] – is traditionally half the width of an em dash.[13][14] In modern fonts, the length of the en dash is not standardized, and the en dash is often more than half the width of the em dash.[15] The widths of en and em dashes have also been specified as being equal to those of the uppercase letters N and M, respectively,[16][17] and at other times to the widths of the lower-case letters.[15][18]

The three main uses of the en dash are:

The en dash is commonly used to indicate a closed range of values – a range with clearly defined and finite upper and lower boundaries – roughly signifying what might otherwise be communicated by the word "through" in American English, or "to" in International English.[19] This may include ranges such as those between dates, times, or numbers.[20][21][22][23] Various style guides restrict this range indication style to only parenthetical or tabular matter, requiring "to" or "through" in running text.

Preference for hyphen vs. en dash in ranges varies. For example, the APA style (named after the American Psychological Association) uses an en dash in ranges, but the AMA style (named after the American Medical Association) uses a hyphen.

Some style guides (including the Guide for the Use of the International System of Units (SI) and the AMA Manual of Style) recommend that, when a number range might be misconstrued as subtraction, the word "to" should be used instead of an en dash. For example, "a voltage of 50 V to 100 V" is preferable to "a voltage of 50–100 V". Relatedly, in ranges that include negative numbers, "to" is used to avoid ambiguity or awkwardness (for example, "temperatures ranged from −18 °C to −34 °C").
It is also considered poor style (best avoided) to use the en dash in place of the words "to" or "and" in phrases that follow the forms from X to Y and between X and Y.[21][22]

The en dash is used to contrast values or illustrate a relationship between two things.[20][23] A distinction is often made between "simple" attributive compounds (written with a hyphen) and other subtypes (written with an en dash); at least one authority considers name pairs, where the paired elements carry equal weight, as in the Taft–Hartley Act, to be "simple",[21] while others consider an en dash appropriate in instances such as these[24][25][26] to represent the parallel relationship, as in the McCain–Feingold bill or Bose–Einstein statistics.

When an act of the U.S. Congress is named using the surnames of the senator and representative who sponsored it, the hyphen-minus is used in the short title; thus, the short title of Public Law 111–203 is "The Dodd-Frank Wall Street Reform and Consumer Protection Act", with a hyphen-minus rather than an en dash between "Dodd" and "Frank".[27] However, there is a difference between something named for a parallel/coordinate relationship between two people – for example, Satyendra Nath Bose and Albert Einstein – and something named for a single person who had a compound surname, which may be written with a hyphen or a space but not an en dash – for example, the Lennard-Jones potential [hyphen] is named after one person (John Lennard-Jones), as are Bence Jones proteins and Hughlings Jackson syndrome.

Copyeditors use dictionaries (general, medical, biographical, and geographical) to confirm the eponymity (and thus the styling) for specific terms, given that no one can know them all offhand. Preference for an en dash instead of a hyphen in these coordinate/relationship/connection types of terms is a matter of style, not inherent orthographic "correctness"; both are equally "correct", and each is the preferred style in some style guides.
For example, The American Heritage Dictionary of the English Language, the AMA Manual of Style, and Dorland's medical reference works use hyphens, not en dashes, in coordinate terms (such as "blood-brain barrier"), in eponyms (such as "Cheyne-Stokes respiration", "Kaplan-Meier method"), and so on. In other styles, such as AP style or Chicago style, the en dash is used to describe two closely related entities in a formal manner.

In English, the en dash is usually used instead of a hyphen in compound (phrasal) attributives in which one or both elements is itself a compound, especially when the compound element is an open compound, meaning it is not itself hyphenated.[21][22][28][29] The disambiguating value of the en dash in these patterns was illustrated by Strunk and White in The Elements of Style with the following example: when the Chattanooga News and Chattanooga Free Press merged, the joint company was inaptly named Chattanooga News-Free Press (using a hyphen), which could be interpreted as meaning that their newspapers were news-free.[30]

An exception to the use of en dashes is usually made when prefixing an already-hyphenated compound; an en dash is generally avoided as a distraction in this case.[30] An en dash can be retained to avoid ambiguity, but whether any ambiguity is plausible is a judgment call; AMA style retains the en dashes in such cases.[31]

As discussed above, the en dash is sometimes recommended instead of a hyphen in compound adjectives where neither part of the adjective modifies the other—that is, when each modifies the noun, as in love–hate relationship.
The Chicago Manual of Style (CMOS), however, limits the use of the en dash to two main purposes. That is, the CMOS favors hyphens in instances where some other guides suggest en dashes, with the 16th edition explaining that "Chicago's sense of the en dash does not extend to between", to rule out its use in "US–Canadian relations".[33]

In these two uses, en dashes normally do not have spaces around them. Some make an exception when they believe avoiding spaces may cause confusion or look odd. For example, compare "12 June – 3 July" with "12 June–3 July".[34] However, other authorities disagree and state there should be no space between an en dash and adjacent text. These authorities would not use a space in, for example, "11:00 a.m.–1:00 p.m."[35] or "July 9–August 17".[36][37]

En dashes can be used instead of pairs of commas that mark off a nested clause or phrase. They can also be used around parenthetical expressions – such as this one – rather than the em dashes preferred by some publishers.[38][8]

The en dash can also signify a rhetorical pause. For example, an opinion piece from The Guardian is entitled: "Who is to blame for the sweltering weather? My kids say it's boomers – and me".[39] In these situations, en dashes must have a single space on each side.[8]

In most uses of en dashes, such as when used in indicating ranges, they are typeset closed up to the adjacent words or numbers. Examples include "the 1914–18 war" or "the Dover–Calais crossing". It is only when en dashes are used in setting off parenthetical expressions – such as this one – that they take spaces around them.[40] For more on the choice of em versus en in this context, see En dash versus em dash.

When an en dash is unavailable in a particular character encoding environment—as in the ASCII character set—there are some conventional substitutions. Often two consecutive hyphens are the substitute. The en dash is encoded in Unicode as U+2013 (decimal 8211) and represented in HTML by the named character entity &ndash;.
The en dash is sometimes used as a substitute for the minus sign when the minus sign character is not available, since the en dash is usually the same width as a plus sign and is often available when the minus sign is not; see below. For example, the original 8-bit Macintosh Character Set had an en dash, useful for the minus sign, years before Unicode with a dedicated minus sign was available. The hyphen-minus is usually too narrow to make a typographically acceptable minus sign. However, the en dash cannot be used for a minus sign in programming languages because the syntax usually requires a hyphen-minus.

Either the en dash or the em dash may be used as a bullet at the start of each item in a bulleted list.

The em dash, em rule, or mutton dash[12] — is longer than an en dash. The character is called an em dash because it is one em wide, a length that varies depending on the font size. One em is the same length as the font's height (which is typically measured in points). So in 9-point type, an em dash is nine points wide, while in 24-point type the em dash is 24 points wide. By comparison, the en dash, with its 1-en width, is in most fonts either a half-em wide[41] or the width of an upper-case "N".[42]

The em dash is encoded in Unicode as U+2014 (decimal 8212) and represented in HTML by the named character entity &mdash;.

The em dash is used in several ways. It is primarily used in places where a set of parentheses or a colon might otherwise be used,[43][full citation needed] and it can also show an abrupt change in thought (or an interruption in speech) or be used where a full stop (period) is too strong and a comma is too weak (similar to a semicolon). Em dashes are also used to set off summaries or definitions.[44] Common uses and definitions are cited below with examples.

It may indicate an interpolation stronger than that demarcated by parentheses, as in the following from Nicholson Baker's The Mezzanine (the degree of difference is subjective).
In a related use, it may visually indicate the shift between speakers when they overlap in speech. For example, the em dash is used this way in Joseph Heller's Catch-22.

Lord Cardinal! if thou think'st on heaven's bliss,
Hold up thy hand, make signal of that hope.
—He dies, and makes no sign!

This is a quotation dash. It may be distinct from an em dash in its coding (see horizontal bar). It may be used to indicate turns in a dialogue, in which case each dash starts a paragraph.[46] It replaces other quotation marks and was preferred by authors such as James Joyce.[47]

The Walrus and the Carpenter
Were walking close at hand;
They wept like anything to see
Such quantities of sand:
"If this were only cleared away,"
They said, "it would be grand!"

An em dash may be used to indicate omitted letters in a word redacted to an initial or single letter, or to fillet a word, by leaving the start and end letters whilst replacing the middle letters with a dash or dashes (for censorship or simply data anonymization). It may also censor the end letter. In this use, it is sometimes doubled. Three em dashes might be used to indicate a completely missing word.[48]

Either the en dash or the em dash may be used as a bullet at the start of each item in a bulleted list, but a plain hyphen is more commonly used.

Three em dashes one after another can be used in a footnote, endnote, or another form of bibliographic entry to indicate repetition of the same author's name as that of the previous work,[48] which is similar to the use of id.

According to most American sources (such as The Chicago Manual of Style) and some British sources (such as The Oxford Guide to Style), an em dash should always be set closed, meaning it should not be surrounded by spaces.
But the practice in some parts of the English-speaking world, including the style recommended by The New York Times Manual of Style and Usage for printed newspapers and the AP Stylebook, sets it open, separating it from its surrounding words by using spaces or hair spaces (U+200A) when it is being used parenthetically.[49][50] The AP Stylebook rejects the use of the open em dash to set off introductory items in lists. However, the "space, en dash, space" sequence is the predominant style in German and French typography. (See En dash versus em dash below.)

In Canada, The Canadian Style: A Guide to Writing and Editing, The Oxford Canadian A to Z of Grammar, Spelling & Punctuation: Guide to Canadian English Usage (2nd ed.), Editing Canadian English, and the Canadian Oxford Dictionary all specify that an em dash should be set closed when used between words, a word and numeral, or two numerals.

The Australian government's Style Manual for Authors, Editors and Printers (6th ed.) also specifies that em dashes inserted between words, a word and numeral, or two numerals should be set closed. A section on the 2-em rule (⸺) also explains that the 2-em can be used to mark an abrupt break in direct or reported speech, but a space is used before the 2-em if a complete word is missing, while no space is used if part of a word exists before the sudden break. Two examples of this are as follows:

When an em dash is unavailable in a particular character encoding environment—as in the ASCII character set—it has usually been approximated as consecutive double (--) or triple (---) hyphen-minuses. The two-hyphen em dash proxy is perhaps more common, being a widespread convention in the typewriting era. (It is still described for hard copy manuscript preparation in The Chicago Manual of Style as of the 16th edition, although the manual conveys that typewritten manuscript and copyediting on paper are now dated practices.)
The three-hyphen em dash proxy was popular with various publishers because the sequence of one, two, or three hyphens could then correspond to the hyphen, en dash, and em dash, respectively. Because early comic book letterers were not aware of the typographic convention of replacing a typewritten double hyphen with an em dash, the double hyphen became traditional in American comics. This practice has continued despite the development of computer lettering.[51][52]

The en dash is wider than the hyphen but not as wide as the em dash. An em width is defined as the point size of the currently used font, since the M character is not always the width of the point size.[53] In running text, various dash conventions are employed: an em dash—like so—or a spaced em dash — like so — or a spaced en dash – like so – can be seen in contemporary publications.

Various style guides and national varieties of languages prescribe different guidance on dashes. Dashes have been cited as being treated differently in the US and the UK, with the former preferring the use of an em dash with no additional spacing and the latter preferring a spaced en dash.[38] As examples of the US style, The Chicago Manual of Style and The Publication Manual of the American Psychological Association recommend unspaced em dashes. Style guides outside the US are more variable. For example, The Elements of Typographic Style by Canadian typographer Robert Bringhurst recommends the spaced en dash – like so – and argues that the length and visual magnitude of an em dash "belongs to the padded and corseted aesthetic of Victorian typography".[8] In the United Kingdom, the spaced en dash is the house style for certain major publishers, including the Penguin Group, the Cambridge University Press, and Routledge. However, this convention is not universal.
The Oxford Guide to Style (2002, section 5.10.10) acknowledges that the spaced en dash is used by "other British publishers" but states that the Oxford University Press, like "most US publishers", uses the unspaced em dash. Fowler's Modern English Usage, saying that it is summarising the New Hart's Rules, describes the principal uses of the em dash as "a single dash used to introduce an explanation or expansion" and "a pair of dashes used to indicate asides and parentheses", without stipulating whether it should be spaced but giving only unspaced examples.[54]

The en dash – always with spaces in running text when, as discussed in this section, indicating a parenthesis or pause – and the spaced em dash both have a certain technical advantage over the unspaced em dash. Most typesetting and word processing expects word spacing to vary to support full justification. Alone among punctuation that marks pauses or logical relations in text, the unspaced em dash disables this for the words it falls between. This can cause uneven spacing in the text, but can be mitigated by the use of thin spaces, hair spaces, or even zero-width spaces on the sides of the em dash. This provides the appearance of an unspaced em dash, but allows the words and dashes to break between lines. The spaced em dash risks introducing excessive separation of words: in full justification, the adjacent spaces may be stretched, and the separation of words further exaggerated.

En dashes may also be preferred to em dashes when text is set in narrow columns, such as in newspapers and similar publications, since the en dash is smaller. In such cases, its use is based purely on space considerations and is not necessarily related to other typographical concerns. On the other hand, a spaced en dash may be ambiguous when it is also used for ranges, for example, in dates or between geographical locations with internal spaces.
The horizontal bar (U+2015 HORIZONTAL BAR), also known as a quotation dash, is used to introduce quoted text. This is the standard method of printing dialogue in some languages. The em dash is equally suitable if the quotation dash is unavailable or is contrary to the house style being used. There is no support in the standard TeX fonts, but one can use \hbox{---}\kern-.5em--- or an em dash.

The swung dash (U+2053 SWUNG DASH) resembles a lengthened tilde and is used to separate alternatives or approximates. In dictionaries, it is frequently used to stand in for the term being defined; a dictionary entry providing an example for the term henceforth might employ the swung dash in this way.

In the following tables, the "Em and 5×" column uses a capital M as a standard comparison to demonstrate the vertical position of different Unicode dash characters. "5×" means that there are five copies of this type of dash. The first table lists characters with property Dash=yes in Unicode;[55] the second lists characters similar to dashes, but with property Dash=no in Unicode.

In many languages, such as Polish, the em dash is used as an opening quotation mark. There is no matching closing quotation mark; typically a new paragraph will be started, introduced by a dash, for each turn in the dialogue.[citation needed]

Corpus studies indicate that em dashes are more commonly used in Russian than in English.[59] In Russian, the em dash is used for the present copula (meaning 'am/is/are'), which is unpronounced in spoken Russian.

In French and Italian, em or en dashes can be used as parentheses (brackets), but the use of a second dash as a closing parenthesis is optional. When a closing dash is not used, the sentence is ended with a period (full stop) as usual. Dashes are, however, much less common than parentheses.[citation needed]

In Spanish, em dashes can be used to mark off parenthetical phrases.
Unlike in English, the em dashes are spaced like brackets, i.e., there is a space between the main sentence and the dash, but not between the parenthetical phrase and the dash.[60] For example: "Llevaba la fidelidad a su maestro —un buen profesor— hasta extremos insospechados." (In English: 'He took his loyalty to his teacher – a good teacher – to unsuspected extremes.')[61]
https://en.wikipedia.org/wiki/En_dash
A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula. As formulas are entirely constituted of symbols of various types, many symbols are needed for expressing all mathematics.

The most basic symbols are the decimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) and the letters of the Latin alphabet. The decimal digits are used for representing numbers through the Hindu–Arabic numeral system. Historically, upper-case letters were used for representing points in geometry, and lower-case letters were used for variables and constants. Letters are used for representing many other types of mathematical object. As the number of these types has increased, the Greek alphabet and some Hebrew letters have also come to be used. For more symbols, other typefaces are also used, mainly boldface ($\mathbf{a, A, b, B}, \ldots$), script typeface ($\mathcal{A, B}, \ldots$; the lower-case script face is rarely used because of the possible confusion with the standard face), German fraktur ($\mathfrak{a, A, b, B}, \ldots$), and blackboard bold ($\mathbb{N, Z, Q, R, C, H, F}_q$; the other letters are rarely used in this face, or their use is unconventional). It is commonplace to use alphabets, fonts and typefaces to group symbols by type.

The use of specific Latin and Greek letters as symbols for denoting mathematical objects is not described in this article. For such uses, see Variable § Conventional variable names and List of mathematical constants. However, some symbols that are described here have the same shape as the letter from which they are derived, such as $\prod$ and $\sum$.

These letters alone are not sufficient for the needs of mathematicians, and many other symbols are used.
Some take their origin in punctuation marks and diacritics traditionally used in typography; others arise by deforming letter forms, as in the cases of $\in$ and $\forall$. Others, such as + and =, were specially designed for mathematics.

Several logical symbols are widely used in all mathematics, and are listed here. For symbols that are used only in mathematical logic, or are rarely used, see List of logic symbols.

The blackboard bold typeface is widely used for denoting the basic number systems. These systems are often also denoted by the corresponding uppercase bold letter. A clear advantage of blackboard bold is that these symbols cannot be confused with anything else. This allows using them in any area of mathematics, without having to recall their definition. For example, if one encounters $\mathbb{R}$ in combinatorics, one should immediately know that this denotes the real numbers, although combinatorics does not study the real numbers (but it uses them for many proofs).

Many types of bracket are used in mathematics. Their meanings depend not only on their shapes, but also on the nature and the arrangement of what is delimited by them, and sometimes what appears between or before them. For this reason, in the entry titles, the symbol □ is used as a placeholder for schematizing the syntax that underlies the meaning.

In this section, the symbols that are listed are used as some sorts of punctuation marks in mathematical reasoning, or as abbreviations of natural language phrases. They are generally not used inside a formula. Some were used in classical logic for indicating the logical dependence between sentences written in plain language. Except for the first two, they are normally not used in printed mathematical texts since, for readability, it is generally recommended to have at least one word between two formulas. However, they are still used on a blackboard for indicating relationships between formulas.
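The auxiliary typefaces described above can be produced in LaTeX; a minimal sketch (package choices are ours; amssymb is one common way to obtain the blackboard bold and fraktur faces):

```latex
\documentclass{article}
\usepackage{amssymb} % provides \mathbb and \mathfrak
\begin{document}
% boldface, for vectors and similar objects
$\mathbf{a, A, b, B}, \ldots$
% script (calligraphic); upper-case only in standard LaTeX
$\mathcal{A, B}, \ldots$
% German fraktur
$\mathfrak{a, A, b, B}, \ldots$
% blackboard bold, conventionally the basic number systems
$\mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{R}, \mathbb{C}$
\end{document}
```

Grouping symbols by typeface in this way is exactly the convention the article describes: the face itself signals the type of object denoted.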
https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbols
Circled plus (⊕) or n-ary circled plus (⨁) (in Unicode, U+2295 ⊕ CIRCLED PLUS, U+2A01 ⨁ N-ARY CIRCLED PLUS OPERATOR) may refer to:

In mathematics and computing:

In languages:

Other uses
https://en.wikipedia.org/wiki/%E2%8A%95_(disambiguation)
In logic, a conditional quantifier is a kind of Lindström quantifier (or generalized quantifier) Q_A that, relative to a classical model A, satisfies some or all of the following conditions ("X" and "Y" range over arbitrary formulas in one free variable):

(The implication arrow denotes material implication in the metalanguage.)

The minimal conditional logic M is characterized by the first six properties, and stronger conditional logics include some of the other ones. For example, the quantifier ∀_A, which can be viewed as set-theoretic inclusion, satisfies all of the above except [symmetry]. Clearly [symmetry] holds for ∃_A while e.g. [contraposition] fails.

A semantic interpretation of conditional quantifiers involves a relation between sets of subsets of a given structure, i.e. a relation between properties defined on the structure. Some of the details can be found in the article Lindström quantifier.

Conditional quantifiers are meant to capture certain properties of conditional reasoning at an abstract level. Generally, they are intended to clarify the role of conditionals in a first-order language as they relate to other connectives, such as conjunction or disjunction. While they can cover nested conditionals, the greater the complexity of the formula, specifically the greater the number of nested conditionals, the less helpful they are as a methodological tool for understanding conditionals, at least in some sense. Compare this methodological strategy for conditionals with that of first-degree entailment logics.

Serge Lapierre. Conditionals and Quantifiers, in Quantifiers, Logic, and Language, Stanford University, pp. 237–253, 1995.
https://en.wikipedia.org/wiki/Conditional_quantifier
In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus that uses only one connective, called implication or the conditional. In formulas, this binary operation is indicated by "implies", "if ..., then ...", "→", "$\rightarrow$", etc.

Implication alone is not functionally complete as a logical operator because one cannot form all other two-valued truth functions from it. For example, the two-place truth function that always returns false is not definable from → and arbitrary propositional variables: any formula constructed from → and propositional variables must receive the value true when all of its variables are evaluated to true. It follows that {→} is not functionally complete.

However, if one adds a nullary connective ⊥ for falsity, then one can define all other truth functions. Formulas over the resulting set of connectives {→, ⊥} are called f-implicational.[1] If P and Q are propositions, then:

Since the above operators are known to be functionally complete, it follows that any truth function can be expressed in terms of → and ⊥.

The following statements are considered tautologies (irreducible and intuitively true, by definition). In each case, P, Q, and R may be replaced by any formulas that contain only "→" as a connective. If Γ is a set of formulas and A a formula, then $\Gamma \vdash A$ means that A is derivable using the axioms and rules above and formulas from Γ as additional hypotheses.

Łukasiewicz (1948) found an axiom system for the implicational calculus that replaces the schemas 1–3 above with a single schema. He also argued that there is no shorter axiom system.[2]

Since all axioms and rules of the calculus are schemata, derivation is closed under substitution: where σ is any substitution (of formulas using only implication).
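The functional completeness of {→, ⊥} can be checked mechanically. A minimal Python sketch (helper names such as imp, neg, lor, land are ours) defining negation, disjunction and conjunction from implication and falsity alone, and verifying them against Python's built-in operators on every valuation:

```python
def imp(p, q):
    """Material implication p → q."""
    return (not p) or q

BOT = False           # the nullary connective ⊥ (falsity)

def neg(p):
    """¬p defined as p → ⊥."""
    return imp(p, BOT)

def lor(p, q):
    """p ∨ q defined as (p → ⊥) → q."""
    return imp(neg(p), q)

def land(p, q):
    """p ∧ q defined as ¬(p → ¬q)."""
    return neg(imp(p, neg(q)))

# Exhaustive check on all two-variable valuations.
for p in (False, True):
    for q in (False, True):
        assert neg(p) == (not p)
        assert lor(p, q) == (p or q)
        assert land(p, q) == (p and q)
```

Since NOT together with OR (or AND) is functionally complete, this confirms that every two-valued truth function is expressible from → and ⊥.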
The implicational propositional calculus also satisfies the deduction theorem: as explained in the deduction theorem article, this holds for any axiomatic extension of the system containing axiom schemas 1 and 2 above and modus ponens.

The implicational propositional calculus is semantically complete with respect to the usual two-valued semantics of classical propositional logic. That is, if Γ is a set of implicational formulas, and A is an implicational formula entailed by Γ, then $\Gamma \vdash A$.

A proof of the completeness theorem is outlined below. First, using the compactness theorem and the deduction theorem, we may reduce the completeness theorem to its special case with empty Γ; i.e., we only need to show that every tautology is derivable in the system.

The proof is similar to the completeness proof for full propositional logic, but it also uses the following idea to overcome the functional incompleteness of implication. If A and F are formulas, then A → F is equivalent to (¬A*) ∨ F, where A* is the result of replacing in A all, some, or none of the occurrences of F by falsity. Similarly, (A → F) → F is equivalent to A* ∨ F. So under some conditions, one can use them as substitutes for saying that A* is false or that A* is true, respectively.

We first observe some basic facts about derivability: Let F be an arbitrary fixed formula. For any formula A, we define A0 = (A → F) and A1 = ((A → F) → F). Consider only formulas in the propositional variables p1, ..., pn. We claim that for every formula A in these variables and every truth assignment e,

We prove (4) by induction on A. The base case A = pi is trivial. Let A = (B → C). We distinguish three cases:

Now let F be a tautology in the variables p1, ..., pn. We will prove by reverse induction on k = n, ..., 0 that for every assignment e,

The base case k = n follows from a special case of (4), using and the fact that F → F is a theorem by the deduction theorem. Assume that (5) holds for k + 1; we will show it for k.
By applying the deduction theorem to the induction hypothesis, we obtain by first setting e(pk+1) = 0 and then setting e(pk+1) = 1. From this we derive (5) using modus ponens. For k = 0 we obtain that the tautology F is provable without assumptions, which is what was to be proved.

This proof is constructive: given a tautology, one could actually follow the instructions and create a proof of it from the axioms. However, the length of such a proof increases exponentially with the number of propositional variables in the tautology, hence it is not a practical method for any but the very shortest tautologies.

The Bernays–Tarski axiom system is often used. In particular, Łukasiewicz's paper derives the Bernays–Tarski axioms from Łukasiewicz's sole axiom as a means of showing its completeness. It differs from the axiom schemas above by replacing axiom schema 2, (P→(Q→R))→((P→Q)→(P→R)), with the schema called hypothetical syllogism. This makes the derivation of the deduction meta-theorem a little more difficult, but it can still be done. We show that from P→(Q→R) and P→Q one can derive P→R. This fact can be used in lieu of axiom schema 2 to obtain the meta-theorem.

Satisfiability in the implicational propositional calculus is trivial, because every formula is satisfiable: just set all variables to true. Falsifiability in the implicational propositional calculus is NP-complete,[3] meaning that validity (tautology) is co-NP-complete. In this case, a useful technique is to presume that the formula is not a tautology and attempt to find a valuation that makes it false. If one succeeds, then it is indeed not a tautology. If one fails, then it is a tautology.

Example of a non-tautology: Suppose [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) is false. Then (A→B)→((C→A)→E) is true; F→((C→D)→E) is true; A→F is true; D is true; and E is false. Since D is true, C→D is true. So the truth of F→((C→D)→E) is equivalent to the truth of F→E. Then since E is false and F→E is true, we get that F is false.
Since A→F is true, A is false. Thus A→B is true, and (C→A)→E is true. Since E is false, C→A must be false, so C is true. The value of B does not matter, so we can arbitrarily choose it to be true. Summing up, the valuation that sets B, C and D to be true and A, E and F to be false will make [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) false. So it is not a tautology.

Example of a tautology: Suppose ((A→B)→C)→((C→A)→(D→A)) is false. Then (A→B)→C is true; C→A is true; D is true; and A is false. Since A is false, A→B is true. So C is true. Thus A must be true, contradicting the fact that it is false. Thus there is no valuation that makes ((A→B)→C)→((C→A)→(D→A)) false. Consequently, it is a tautology.

What would happen if another axiom schema were added to those listed above? There are two cases: (1) it is a tautology; or (2) it is not a tautology.

If it is a tautology, then the set of theorems remains the set of tautologies as before. However, in some cases it may be possible to find significantly shorter proofs for theorems. Nevertheless, the minimum length of proofs of theorems will remain unbounded; that is, for any natural number n there will still be theorems that cannot be proved in n or fewer steps.

If the new axiom schema is not a tautology, then every formula becomes a theorem (which makes the concept of a theorem useless in this case). What is more, there is then an upper bound on the minimum length of a proof of every formula, because there is a common method for proving every formula. For example, suppose the new axiom schema were ((B→C)→C)→B. Then ((A→(A→A))→(A→A))→A is an instance (one of the new axioms) and also not a tautology. But [((A→(A→A))→(A→A))→A]→A is a tautology and thus a theorem due to the old axioms (using the completeness result above). Applying modus ponens, we get that A is a theorem of the extended system. Then all one has to do to prove any formula is to replace A by the desired formula throughout the proof of A. This proof will have the same number of steps as the proof of A.
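Both worked examples above can be confirmed by brute force: a formula with N variables is a tautology exactly when it is true under all 2^N valuations. A small sketch (function names are ours):

```python
from itertools import product

def imp(p, q):
    """Material implication p → q."""
    return (not p) or q

def is_tautology(f, n):
    """True iff formula f (a function of n Booleans) holds on all 2**n valuations."""
    return all(f(*v) for v in product((False, True), repeat=n))

# The non-tautology from the text, in variables A..F:
# [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)])
non_taut = lambda A, B, C, D, E, F: imp(
    imp(imp(A, B), imp(imp(C, A), E)),
    imp(imp(F, imp(imp(C, D), E)), imp(imp(A, F), imp(D, E))))

# The tautology from the text, in variables A..D:
# ((A→B)→C)→((C→A)→(D→A))
taut = lambda A, B, C, D: imp(
    imp(imp(A, B), C), imp(imp(C, A), imp(D, A)))

assert not is_tautology(non_taut, 6)
assert is_tautology(taut, 4)
# The falsifying valuation found in the text: B, C, D true; A, E, F false.
assert non_taut(False, True, True, True, False, False) is False
```

Note that the brute-force check walks 64 valuations for the first formula, whereas the reasoning in the text locates the single falsifying valuation directly.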
The axioms listed above primarily work through the deduction metatheorem to arrive at completeness. Here is another axiom system that aims directly at completeness without going through the deduction metatheorem.

First we have axiom schemas that are designed to efficiently prove the subset of tautologies that contain only one propositional variable. The proof of each such tautology would begin with two parts (hypothesis and conclusion) that are the same. Then insert additional hypotheses between them. Then insert additional tautological hypotheses (which are true even when the sole variable is false) into the original hypothesis. Then add more hypotheses outside (on the left). This procedure will quickly give every tautology containing only one variable. (The symbol "ꞈ" in each axiom schema indicates where the conclusion used in the completeness proof begins. It is merely a comment, not a part of the formula.)

Consider any formula Φ that may contain A, B, C1, ..., Cn and ends with A as its final conclusion. Then we take as an axiom schema where Φ− is the result of replacing B by A throughout Φ and Φ+ is the result of replacing B by (A→A) throughout Φ. This is a schema for axiom schemas, since there are two levels of substitution: in the first, Φ is substituted (with variations); in the second, any of the variables (including both A and B) may be replaced by arbitrary formulas of the implicational propositional calculus. This schema allows one to prove tautologies with more than one variable by considering the case when B is false (Φ−) and the case when B is true (Φ+).

If the variable that is the final conclusion of a formula takes the value true, then the whole formula takes the value true regardless of the values of the other variables. Consequently, if A is true, then Φ, Φ−, Φ+ and Φ−→(Φ+→Φ) are all true. So without loss of generality, we may assume that A is false. Notice that Φ is a tautology if and only if both Φ− and Φ+ are tautologies. But while Φ has n+2 distinct variables, Φ− and Φ+ both have n+1.
So the question of whether a formula is a tautology has been reduced to the question of whether certain formulas with one variable each are all tautologies. Also notice that Φ−→(Φ+→Φ) is a tautology regardless of whether Φ is, because if Φ is false then either Φ− or Φ+ will be false, depending on whether B is false or true.

Examples:

Deriving Peirce's law

Deriving Łukasiewicz's sole axiom

Using a truth table to verify Łukasiewicz's sole axiom would require consideration of 16 = 2⁴ cases, since it contains 4 distinct variables. In this derivation, we were able to restrict consideration to merely 3 cases: R is false and Q is false, R is false and Q is true, and R is true. However, because we are working within the formal system of logic (instead of outside it, informally), each case required much more effort.
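The sole axiom itself is not reproduced in the extract above; in its commonly cited form it is ((P→Q)→R)→((R→P)→(S→P)). Taking that form as an assumption, the 16-case truth-table check mentioned in the text can be run mechanically (a sketch, with our own helper names):

```python
from itertools import product

def imp(p, q):
    """Material implication p → q."""
    return (not p) or q

def lukasiewicz(P, Q, R, S):
    """Łukasiewicz's sole axiom in its commonly cited form:
    ((P→Q)→R)→((R→P)→(S→P))."""
    return imp(imp(imp(P, Q), R), imp(imp(R, P), imp(S, P)))

# All 16 = 2**4 valuations of the four variables.
assert all(lukasiewicz(*v) for v in product((False, True), repeat=4))
```

The machine check is exhaustive but shallow; the derivation in the text instead reasons by cases on R and Q inside the formal system itself.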
https://en.wikipedia.org/wiki/Implicational_propositional_calculus
Laws of Form (hereinafter LoF) is a book by G. Spencer-Brown, published in 1969, that straddles the boundary between mathematics and philosophy. LoF describes three distinct logical systems:

"Boundary algebra" is a Meguire (2011) term for the union of the primary algebra and the primary arithmetic. Laws of Form sometimes loosely refers to the "primary algebra" as well as to LoF.

The preface states that the work was first explored in 1959, and Spencer Brown cites Bertrand Russell as being supportive of his endeavour.[a] He also thanks J. C. P. Miller of University College London for helping with the proofreading and offering other guidance. In 1963 Spencer Brown was invited by Harry Frost, staff lecturer in the physical sciences at the Department of Extra-Mural Studies of the University of London, to deliver a course on the mathematics of logic.

LoF emerged from work in electronic engineering its author did around 1960. Key ideas of LoF were first outlined in his 1961 manuscript Design with the Nor, which remained unpublished until 2021,[1] and further refined during subsequent lectures on mathematical logic he gave under the auspices of the University of London's extension program. LoF has appeared in several editions. The second series of editions appeared in 1972 with the "Preface to the First American Edition", which emphasised the use of self-referential paradoxes;[2] the most recent is a 1997 German translation. LoF has never gone out of print.

LoF's mystical and declamatory prose and its love of paradox make it a challenging read for all. Spencer-Brown was influenced by Wittgenstein and R. D. Laing. LoF also echoes a number of themes from the writings of Charles Sanders Peirce, Bertrand Russell, and Alfred North Whitehead.
The work has had curious effects on some classes of its readership; for example, on obscure grounds, it has been claimed that the entire book is written in an operational way, giving instructions to the reader instead of telling them what "is", and that, in accordance with G. Spencer-Brown's interest in paradoxes, the only sentence that makes a statement that something is, is the statement which says no such statements are used in this book.[3] Furthermore, the claim asserts that except for this one sentence the book can be seen as an example of E-Prime. What prompted such a claim is obscure, whether in terms of incentive, logical merit, or as a matter of fact, because the book routinely and naturally uses the verb to be throughout, in all its grammatical forms, as may be seen both in the original and in quotes shown below.[4]

Ostensibly a work of formal mathematics and philosophy, LoF became something of a cult classic: it was praised by Heinz von Foerster when he reviewed it for the Whole Earth Catalog.[5] Those who agree point to LoF as embodying an enigmatic "mathematics of consciousness", its algebraic symbolism capturing an (perhaps even "the") implicit root of cognition: the ability to "distinguish". LoF argues that the primary algebra reveals striking connections among logic, Boolean algebra, and arithmetic, and the philosophy of language and mind.

Stafford Beer wrote in a review for Nature, "When one thinks of all that Russell went through sixty years ago, to write the Principia, and all we his readers underwent in wrestling with those three vast volumes, it is almost sad".[6]

Banaschewski (1977)[7] argues that the primary algebra is nothing but new notation for Boolean algebra. Indeed, the two-element Boolean algebra 2 can be seen as the intended interpretation of the primary algebra.
Yet the notation of the primary algebra:

Moreover, the syntax of the primary algebra can be extended to formal systems other than 2 and sentential logic, resulting in boundary mathematics (see § Related work below).

LoF has influenced, among others, Heinz von Foerster, Louis Kauffman, Niklas Luhmann, Humberto Maturana, Francisco Varela and William Bricken. Some of these authors have modified the primary algebra in a variety of interesting ways.

LoF claimed that certain well-known mathematical conjectures of very long standing, such as the four color theorem, Fermat's Last Theorem, and the Goldbach conjecture, are provable using extensions of the primary algebra. Spencer-Brown eventually circulated a purported proof of the four color theorem, but it was met with skepticism.[8]

The symbol:

Also called the "mark" or "cross", is the essential feature of the Laws of Form. In Spencer-Brown's inimitable and enigmatic fashion, the Mark symbolizes the root of cognition, i.e., the dualistic Mark indicates the capability of differentiating a "this" from "everything else but this".

In LoF, a Cross denotes the drawing of a "distinction", and can be thought of as signifying the following, all at once:

All three ways imply an action on the part of the cognitive entity (e.g., a person) making the distinction. As LoF puts it:

"The first command:

can well be expressed in such ways as:

Or:

The counterpoint to the Marked state is the Unmarked state, which is simply nothing, the void, or the un-expressable infinite represented by a blank space. It is simply the absence of a Cross. No distinction has been made and nothing has been crossed. The Marked state and the void are the two primitive values of the Laws of Form.

The Cross can be seen as denoting the distinction between two states, one "considered as a symbol" and another not so considered. From this fact arises a curious resonance with some theories of consciousness and language.
Paradoxically, the Form is at once Observer and Observed, and is also the creative act of making an observation. LoF (excluding back matter) closes with the words:

...the first distinction, the Mark and the observer are not only interchangeable, but, in the form, identical.

C. S. Peirce came to a related insight in the 1890s; see § Related work.

The syntax of the primary arithmetic goes as follows. There are just two atomic expressions:

There are two inductive rules:

The semantics of the primary arithmetic are perhaps nothing more than the sole explicit definition in LoF: "Distinction is perfect continence".

Let the "unmarked state" be a synonym for the void. Let an empty Cross denote the "marked state". To cross is to move from one value, the unmarked or marked state, to the other. We can now state the "arithmetical" axioms A1 and A2, which ground the primary arithmetic (and hence all of the Laws of Form):

"A1. The law of Calling". Calling twice from a state is indistinguishable from calling once. To make a distinction twice has the same effect as making it once. For example, saying "Let there be light" and then saying "Let there be light" again, is the same as saying it once. Formally:

"A2. The law of Crossing". After crossing from the unmarked to the marked state, crossing again ("recrossing") starting from the marked state returns one to the unmarked state. Hence recrossing annuls crossing. Formally:

In both A1 and A2, the expression to the right of '=' has fewer symbols than the expression to the left of '='. This suggests that every primary arithmetic expression can, by repeated application of A1 and A2, be simplified to one of two states: the marked or the unmarked state. This is indeed the case, and the result is the expression's "simplification".
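The simplification process can be sketched as a string-rewriting system. Writing the Cross as '()' and the void as the empty string is our own encoding, not LoF's two-dimensional notation, but under it A1 and A2 become two literal rewrite rules:

```python
def simplify(expr):
    """Reduce a primary arithmetic expression (nested '()' crosses,
    '' for the void) by repeatedly applying A1 and A2."""
    rules = [("()()", "()"),   # A1, law of Calling:  calling twice = calling once
             ("(())", "")]     # A2, law of Crossing: recrossing annuls crossing
    changed = True
    while changed:
        changed = False
        for old, new in rules:
            if old in expr:
                expr = expr.replace(old, new, 1)
                changed = True
    return expr

assert simplify("()()") == "()"    # A1 directly
assert simplify("(())") == ""      # A2 directly
assert simplify("((())())") == ""  # a nested expression reduces to the void
```

Each rule application shortens the string, so the process terminates, and every well-formed expression lands on '()' (marked) or '' (unmarked), mirroring the claim in the text.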
The two fundamental metatheorems of the primary arithmetic state that:

Thus the relation of logical equivalence partitions all primary arithmetic expressions into two equivalence classes: those that simplify to the Cross, and those that simplify to the void.

A1 and A2 have loose analogs in the properties of series and parallel electrical circuits, and in other ways of diagramming processes, including flowcharting. A1 corresponds to a parallel connection and A2 to a series connection, with the understanding that making a distinction corresponds to changing how two points in a circuit are connected, and not simply to adding wiring.

The primary arithmetic is analogous to the following formal languages from mathematics and computer science:

The phrase "calculus of indications" in LoF is a synonym for "primary arithmetic".

While LoF does not formally define canon, the following two excerpts from the Notes to chapter 2 are apt:

The more important structures of command are sometimes called canons. They are the ways in which the guiding injunctions appear to group themselves in constellations, and are thus by no means independent of each other. A canon bears the distinction of being outside (i.e., describing) the system under construction, but a command to construct (e.g., 'draw a distinction'), even though it may be of central importance, is not a canon. A canon is an order, or set of orders, to permit or allow, but not to construct or create.

...the primary form of mathematical communication is not description but injunction... Music is a similar art form, the composer does not even attempt to describe the set of sounds he has in mind, much less the set of feelings occasioned through them, but writes down a set of commands which, if they are obeyed by the performer, can result in a reproduction, to the listener, of the composer's original experience.
These excerpts relate to the distinction in metalogic between the object language, the formal language of the logical system under discussion, and the metalanguage, a language (often a natural language) distinct from the object language, employed to exposit and discuss the object language. The first quote seems to assert that the canons are part of the metalanguage. The second quote seems to assert that statements in the object language are essentially commands addressed to the reader by the author. Neither assertion holds in standard metalogic.

Given any valid primary arithmetic expression, insert into one or more locations any number of Latin letters bearing optional numerical subscripts; the result is a primary algebra formula. Letters so employed in mathematics and logic are called variables. A primary algebra variable indicates a location where one can write the primitive value or its complement. Multiple instances of the same variable denote multiple locations of the same primitive value.

The sign '=' may link two logically equivalent expressions; the result is an equation. By "logically equivalent" is meant that the two expressions have the same simplification. Logical equivalence is an equivalence relation over the set of primary algebra formulas, governed by the rules R1 and R2. Let "C" and "D" be formulae each containing at least one instance of the subformula A:

R2 is employed very frequently in primary algebra demonstrations (see below), almost always silently. These rules are routinely invoked in logic and most of mathematics, nearly always unconsciously.

The primary algebra consists of equations, i.e., pairs of formulae linked by an infix operator '='. R1 and R2 enable transforming one equation into another. Hence the primary algebra is an equational formal system, like the many algebraic structures, including Boolean algebra, that are varieties. Equational logic was common before Principia Mathematica (e.g. Johnson (1892)), and has present-day advocates (Gries & Schneider (1993)).
Conventional mathematical logic consists of tautological formulae, signalled by a prefixed turnstile. To denote that the primary algebra formula A is a tautology, simply write "A = ". If one replaces '=' in R1 and R2 with the biconditional, the resulting rules hold in conventional logic. However, conventional logic relies mainly on the rule modus ponens; thus conventional logic is ponential. The equational-ponential dichotomy distills much of what distinguishes mathematical logic from the rest of mathematics.

An initial is a primary algebra equation verifiable by a decision procedure and as such is not an axiom. LoF lays down the initials:

The absence of anything to the right of the "=" above is deliberate. J2 is the familiar distributive law of sentential logic and Boolean algebra.

Another set of initials, friendlier to calculations, is:

It is thanks to C2 that the primary algebra is a lattice. By virtue of J1a, it is a complemented lattice whose upper bound is the marked state. By J0, the unmarked state is the corresponding lower bound and identity element. J0 is also an algebraic version of A2 and makes clear the sense in which the unmarked state aliases with the blank page.

T13 in LoF generalizes C2 as follows. Any primary algebra (or sentential logic) formula B can be viewed as an ordered tree with branches. Then:

T13: A subformula A can be copied at will into any depth of B greater than that of A, as long as A and its copy are in the same branch of B. Also, given multiple instances of A in the same branch of B, all instances but the shallowest are redundant.

While a proof of T13 would require induction, the intuition underlying it should be clear.

C2 or its equivalent is named:

Perhaps the first instance of an axiom or rule with the power of C2 was the "Rule of (De)Iteration", combining T13 and AA = A, of C. S. Peirce's existential graphs.

LoF asserts that concatenation can be read as commuting and associating by default and hence need not be explicitly assumed or demonstrated. (Peirce made a similar assertion about his existential graphs.) Let a period be a temporary notation to establish grouping.
That concatenation commutes and associates may then be demonstrated from the:

Having demonstrated associativity, the period can be discarded.

The initials in Meguire (2011) are AC.D = CD.A, called B1; B2, J0 above; B3, J1a above; and B4, C2. By design, these initials are very similar to the axioms for an abelian group, G1–G3 below.

The primary algebra contains three kinds of proved assertions:

The distinction between consequence and theorem holds for all formal systems, including mathematics and logic, but is usually not made explicit. A demonstration or decision procedure can be carried out and verified by computer. The proof of a theorem cannot be.

Let A and B be primary algebra formulas. A demonstration of A = B may proceed in either of two ways:

Once A = B has been demonstrated, A = B can be invoked to justify steps in subsequent demonstrations. Primary algebra demonstrations and calculations often require no more than J1a, J2, C2, and the consequences (C3 in LoF), (C1), and AA = A (C5).

The consequence, C7' in LoF, enables an algorithm, sketched in LoF's proof of T14, that transforms an arbitrary primary algebra formula to an equivalent formula whose depth does not exceed two. The result is a normal form, the primary algebra analog of the conjunctive normal form. LoF (T14–15) proves the primary algebra analog of the well-known Boolean algebra theorem that every formula has a normal form.

Let A be a subformula of some formula B. When paired with C3, J1a can be viewed as the closure condition for calculations: B is a tautology if and only if A and (A) both appear at depth 0 of B. A related condition appears in some versions of natural deduction. A demonstration by calculation is often little more than:

The last step of a calculation always invokes J1a.

LoF includes elegant new proofs of the following standard metatheory:

That sentential logic is complete is taught in every first university course in mathematical logic. But university courses in Boolean algebra seldom mention the completeness of 2.
If the Marked and Unmarked states are read as the Boolean values 1 and 0 (or True and False), the primary algebra interprets 2 (or sentential logic). LoF shows how the primary algebra can interpret the syllogism. Each of these interpretations is discussed in a subsection below. Extending the primary algebra so that it could interpret standard first-order logic has yet to be done, but Peirce's beta existential graphs suggest that this extension is feasible.

The primary algebra is an elegant minimalist notation for the two-element Boolean algebra 2. Let:

If join (meet) interprets AC, then meet (join) interprets $\overline{{\overline{A|}}\ \ {\overline{C|}}\Big|}$. Hence the primary algebra and 2 are isomorphic but for one detail: primary algebra complementation can be nullary, in which case it denotes a primitive value. Modulo this detail, 2 is a model of the primary algebra. The primary arithmetic suggests the following arithmetic axiomatization of 2: 1+1 = 1+0 = 0+1 = 1 = ~0, and 0+0 = 0 = ~1.

The set B, consisting of the marked and unmarked states, is the Boolean domain or carrier. In the language of universal algebra, the primary algebra is the algebraic structure $\langle B, -\ -, {\overline {-\ |}}, {\overline {\ \ |}}\rangle$ of type $\langle 2,1,0\rangle$. The expressive adequacy of the Sheffer stroke points to the primary algebra also being a $\langle B, {\overline {-\ -\ |}}, {\overline {\ \ |}}\rangle$ algebra of type $\langle 2,0\rangle$. In both cases, the identities are J1a, J0, C2, and ACD = CDA. Since the primary algebra and 2 are isomorphic, 2 can be seen as a $\langle B, +, \lnot, 1\rangle$ algebra of type $\langle 2,1,0\rangle$. This description of 2 is simpler than the conventional one, namely the $\langle B, +, \times, \lnot, 1, 0\rangle$ algebra of type $\langle 2,2,1,0,0\rangle$.
The two possible interpretations are dual to each other in the Boolean sense. (In Boolean algebra, exchanging AND ↔ OR and 1 ↔ 0 throughout an equation yields an equally valid equation.) The identities remain invariant regardless of which interpretation is chosen, so the transformations or modes of calculation remain the same; only the interpretation of each form would be different. Example: J1a is {\displaystyle {\overline {A|}}\ A={\overline {\ \ |}}}. Interpreting juxtaposition as OR and the empty Cross as 1, this translates to ¬A∨A = 1, which is true. Interpreting juxtaposition as AND and the empty Cross as 0, this translates to ¬A∧A = 0, which is true as well (and the dual of ¬A∨A = 1).

The marked state is both an operator (e.g., the complement) and an operand (e.g., the value 1). This can be summarized neatly by defining two functions m(x) and u(x) for the marked and unmarked state, respectively: let m(x) = 1 − max({0} ∪ x) and u(x) = max({0} ∪ x), where x is a (possibly empty) set of Boolean values. This reveals that u is either the value 0 or the OR operator, while m is either the value 1 or the NOR operator, depending on whether x is the empty set or not. As noted above, there is a dual form of these functions exchanging AND ↔ OR and 1 ↔ 0.

Let the blank page denote False, and let a Cross be read as Not. Then the primary arithmetic has the following sentential reading: The primary algebra interprets sentential logic as follows. A letter represents any given sentential expression. Thus any expression in sentential logic has a primary algebra translation; equivalently, the primary algebra interprets sentential logic. Given an assignment of every variable to the Marked or Unmarked states, this primary algebra translation reduces to a primary arithmetic expression, which can be simplified.
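The two functions just defined transcribe directly into code (a direct transcription of the formulas; nothing beyond them is assumed), making the operator/operand ambiguity explicit:

```python
def m(x):
    """Marked state: the value 1 on the empty set, NOR on a nonempty set."""
    return 1 - max({0} | set(x))

def u(x):
    """Unmarked state: the value 0 on the empty set, OR on a nonempty set."""
    return max({0} | set(x))

assert m(set()) == 1 and u(set()) == 0    # nullary: the primitive values 1, 0
assert u({0, 1}) == 1 and u({0}) == 0     # u on a nonempty set acts as OR
assert m({1}) == 0 and m({0}) == 1        # m on a nonempty set acts as NOR
```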
Repeating this exercise for all possible assignments of the two primitive values to each variable reveals whether the original expression is tautological or satisfiable. This is an example of a decision procedure, one more or less in the spirit of conventional truth tables. Given some primary algebra formula containing N variables, this decision procedure requires simplifying 2^N primary arithmetic formulae. For a less tedious decision procedure more in the spirit of Quine's "truth value analysis", see Meguire (2003).

Schwartz (1981) proved that the primary algebra is equivalent, syntactically, semantically, and proof-theoretically, with the classical propositional calculus. Likewise, it can be shown that the primary algebra is syntactically equivalent with expressions built up in the usual way from the classical truth values true and false, the logical connectives NOT, OR, and AND, and parentheses.

Interpreting the Unmarked State as False is wholly arbitrary; that state can equally well be read as True. All that is required is that the interpretation of concatenation change from OR to AND. IF A THEN B now translates as {\displaystyle {\overline {A\ {\overline {B|}}{\Big |}}}} instead of {\displaystyle {\overline {A|}}\ B}. More generally, the primary algebra is "self-dual", meaning that any primary algebra formula has two sentential or Boolean readings, each the dual of the other. Another consequence of self-duality is the irrelevance of De Morgan's laws; those laws are built into the syntax of the primary algebra from the outset.

The true nature of the distinction between the primary algebra on the one hand, and 2 and sentential logic on the other, now emerges. In the latter formalisms, complementation/negation operating on "nothing" is not well-formed. But an empty Cross is a well-formed primary algebra expression, denoting the Marked state, a primitive value. Hence a nonempty Cross is an operator, while an empty Cross is an operand, because it denotes a primitive value.
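The 2^N-assignment decision procedure can be sketched as follows (an illustrative brute-force checker over the Boolean reading, not the primary-arithmetic simplifier itself):

```python
from itertools import product

def classify(formula, n):
    """Evaluate a Boolean formula of n variables under all 2**n assignments,
    reporting whether it is tautological, satisfiable, or unsatisfiable."""
    values = [formula(*bits) for bits in product((False, True), repeat=n)]
    if all(values):
        return "tautology"
    if any(values):
        return "satisfiable"
    return "unsatisfiable"

# J1a's Boolean reading, not-A or A, holds under every assignment:
assert classify(lambda a: (not a) or a, 1) == "tautology"
assert classify(lambda a, b: a and not b, 2) == "satisfiable"
assert classify(lambda a: a and not a, 1) == "unsatisfiable"
```

The cost is exactly the 2^N evaluations the text describes, which is why shortcuts like Quine's truth value analysis are attractive.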
Thus the primary algebra reveals that the heretofore distinct mathematical concepts of operator and operand are in fact merely different facets of a single fundamental action, the making of a distinction.

Appendix 2 of LoF shows how to translate traditional syllogisms and sorites into the primary algebra. A valid syllogism is simply one whose primary algebra translation simplifies to an empty Cross. Let A* denote a literal, i.e., either A or {\displaystyle {\overline {A|}}}, indifferently. Then every syllogism that does not require that one or more terms be assumed nonempty is one of 24 possible permutations of a generalization of Barbara whose primary algebra equivalent is {\displaystyle {\overline {A^{*}\ B|}}\ \ {\overline {{\overline {B|}}\ C^{*}{\Big |}}}\ A^{*}\ C^{*}}. These 24 possible permutations include the 19 syllogistic forms deemed valid in Aristotelian and medieval logic. This primary algebra translation of syllogistic logic also suggests that the primary algebra can interpret monadic and term logic, and that the primary algebra has affinities to the Boolean term schemata of Quine (1982), Part II.

The following calculation of Leibniz's nontrivial Praeclarum Theorema exemplifies the demonstrative power of the primary algebra. Let C1 be {\displaystyle {\overline {{\overline {A|}}{\Big |}}}}=A, C2 be {\displaystyle A\ {\overline {A\ B|}}=A\ {\overline {B|}}}, C3 be {\displaystyle {\overline {\ \ |}}\ A={\overline {\ \ |}}}, J1a be {\displaystyle {\overline {A|}}\ A={\overline {\ \ |}}}, and let OI mean that variables and subformulae have been reordered in a way that commutativity and associativity permit.

The primary algebra embodies a point noted by Huntington in 1933: Boolean algebra requires, in addition to one unary operation, one, and not two, binary operations. Hence the seldom-noted fact that Boolean algebras are magmas. (Magmas were called groupoids until the latter term was appropriated by category theory.)
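Whatever the boundary-notation demonstration looks like, the Praeclarum Theorema itself is easy to confirm by brute force. A sketch, using one common statement of the theorem, ((p → r) ∧ (q → s)) → ((p ∧ q) → (r ∧ s)):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def praeclarum(p, q, r, s):
    """Leibniz: if p implies r and q implies s, then p-and-q implies r-and-s."""
    return implies(implies(p, r) and implies(q, s),
                   implies(p and q, r and s))

# True under all 2**4 = 16 assignments, i.e. a tautology.
assert all(praeclarum(*bits) for bits in product((False, True), repeat=4))
```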
To see this, note that the primary algebra is a commutative: Groups also require a unary operation, called inverse, the group counterpart of Boolean complementation. Let {\displaystyle {\overline {a|}}} denote the inverse of a. Let {\displaystyle {\overline {\ \ |}}} denote the group identity element. Then groups and the primary algebra have the same signatures, namely they are both {\displaystyle \langle -\ -,{\overline {-\ |}},{\overline {\ \ |}}\rangle } algebras of type ⟨2,1,0⟩. Hence the primary algebra is a boundary algebra. The axioms for an abelian group, in boundary notation, are: From G1 and G2, the commutativity and associativity of concatenation may be derived, as above. Note that G3 and J1a are identical. G2 and J0 would be identical if = replaced A2. This is the defining arithmetical identity of group theory, in boundary notation. The primary algebra differs from an abelian group in two ways: Both A2 and C2 follow from B's being an ordered set.

Chapter 11 of LoF introduces equations of the second degree, composed of recursive formulae that can be seen as having "infinite" depth. Some recursive formulae simplify to the marked or unmarked state. Others "oscillate" indefinitely between the two states depending on whether a given depth is even or odd. Specifically, certain recursive formulae can be interpreted as oscillating between true and false over successive intervals of time, in which case a formula is deemed to have an "imaginary" truth value. Thus the flow of time may be introduced into the primary algebra.

Turney (1986) shows how these recursive formulae can be interpreted via Alonzo Church's Restricted Recursive Arithmetic (RRA). Church introduced RRA in 1955 as an axiomatic formalization of finite automata. Turney presents a general method for translating equations of the second degree into Church's RRA, illustrating his method using the formulae E1, E2, and E4 in chapter 11 of LoF. This translation into RRA sheds light on the names Spencer-Brown gave to E1 and E4, namely "memory" and "counter".
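The oscillation and memory behaviour can be sketched by reading re-entry as a discrete-time recurrence (the recurrences and helper names below are my own illustrative choices, not LoF's E1/E2 verbatim):

```python
def iterate(step, x0, n):
    """Trace n synchronous updates of a one-state recurrence."""
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1]))
    return xs

# A form re-entering its own Cross, f = (f), becomes f(t+1) = not f(t):
# it never settles, alternating between the two states -- the "imaginary"
# truth value oscillating over successive intervals of time.
assert iterate(lambda f: not f, False, 4) == [False, True, False, True, False]

# A cross-coupled pair (the general shape of a memory, sketched as two NORs)
# instead latches: it retains which of its two inputs was pulsed last.
def nor(a, b): return not (a or b)
def latch(q, set_, reset):            # one synchronous update of the pair
    return nor(reset, nor(set_, q))

q = latch(False, True, False)         # pulse "set"
assert q is True
q = latch(q, False, False)            # inputs withdrawn: state is retained
assert q is True
q = latch(q, False, True)             # pulse "reset"
assert q is False
```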
RRA thus formalizes and clarifies LoF's notion of an imaginary truth value.

Gottfried Leibniz, in memoranda not published before the late 19th and early 20th centuries, invented Boolean logic. His notation was isomorphic to that of LoF: concatenation read as conjunction, and "non-(X)" read as the complement of X. Recognition of Leibniz's pioneering role in algebraic logic was foreshadowed by Lewis (1918) and Rescher (1954). But a full appreciation of Leibniz's accomplishments had to await the work of Wolfgang Lenzen, published in the 1980s and reviewed in Lenzen (2004).

Charles Sanders Peirce (1839–1914) anticipated the primary algebra in three veins of work: LoF cites vol. 4 of Peirce's Collected Papers, the source for the formalisms in (2) and (3) above. (1)–(3) were virtually unknown at the time (the 1960s) and in the place (the UK) in which LoF was written. Peirce's semiotics, about which LoF is silent, may yet shed light on the philosophical aspects of LoF.

Kauffman (2001) discusses another notation similar to that of LoF, that of a 1917 article by Jean Nicod, who was a disciple of Bertrand Russell's. The above formalisms are, like the primary algebra, all instances of boundary mathematics, i.e., mathematics whose syntax is limited to letters and brackets (enclosing devices). A minimalist syntax of this nature is a "boundary notation". Boundary notation is free of infix, prefix, or postfix operator symbols. The very well known curly braces ('{', '}') of set theory can be seen as a boundary notation.

The work of Leibniz, Peirce, and Nicod is innocent of metatheory, as they wrote before Emil Post's landmark 1920 paper (which LoF cites), proving that sentential logic is complete, and before Hilbert and Łukasiewicz showed how to prove axiom independence using models.

Craig (1979) argued that the world, and how humans perceive and interact with that world, has a rich Boolean structure. Craig was an orthodox logician and an authority on algebraic logic.
Second-generation cognitive science emerged in the 1970s, after LoF was written. On cognitive science and its relevance to Boolean algebra, logic, and set theory, see Lakoff (1987) (see index entries under "Image schema examples: container") and Lakoff & Núñez (2000). Neither book cites LoF.

The biologists and cognitive scientists Humberto Maturana and his student Francisco Varela both discuss LoF in their writings, which identify "distinction" as the fundamental cognitive act. The Berkeley psychologist and cognitive scientist Eleanor Rosch has written extensively on the closely related notion of categorization.

Other formal systems with possible affinities to the primary algebra include: The primary arithmetic and algebra are a minimalist formalism for sentential logic and Boolean algebra. Other minimalist formalisms having the power of set theory include:
https://en.wikipedia.org/wiki/Laws_of_Form
In logic and mathematics, statements p and q are said to be logically equivalent if they have the same truth value in every model.[1] The logical equivalence of p and q is sometimes expressed as p ≡ q, p :: q, Epq, or p ⟺ q, depending on the notation being used. However, these symbols are also used for material equivalence, so proper interpretation would depend on the context. Logical equivalence is different from material equivalence, although the two concepts are intrinsically related.

In logic, many common logical equivalences exist and are often listed as laws or properties. The following tables illustrate some of these, where ⊕ represents XOR.

The following statements are logically equivalent: Syntactically, (1) and (2) are derivable from each other via the rules of contraposition and double negation. Semantically, (1) and (2) are true in exactly the same models (interpretations, valuations); namely, those in which either Lisa is in Denmark is false or Lisa is in Europe is true. (Note that in this example, classical logic is assumed. Some non-classical logics do not deem (1) and (2) to be logically equivalent.)

Logical equivalence is different from material equivalence. Formulas p and q are logically equivalent if and only if the statement of their material equivalence (p ↔ q) is a tautology.[2] The material equivalence of p and q (often written as p ↔ q) is itself another statement in the same object language as p and q. This statement expresses the idea "p if and only if q". In particular, the truth value of p ↔ q can change from one model to another.
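The definition above, same truth value in every model, suggests a direct finite check for propositional formulas (a sketch; for propositional logic, a model is just a truth assignment to the variables):

```python
from itertools import product

def equivalent(p, q, n):
    """p and q (functions of n Boolean variables) are logically equivalent
    iff they take the same truth value under every assignment (model),
    i.e. iff p <-> q is a tautology."""
    return all(p(*bits) == q(*bits)
               for bits in product((False, True), repeat=n))

imp = lambda a, b: (not a) or b   # material conditional

# Contraposition: p -> q is logically equivalent to not-q -> not-p ...
assert equivalent(lambda p, q: imp(p, q), lambda p, q: imp(not q, not p), 2)
# ... but not to its converse q -> p (they differ when p is true, q false).
assert not equivalent(lambda p, q: imp(p, q), lambda p, q: imp(q, p), 2)
```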
On the other hand, the claim that two formulas are logically equivalent is a statement in the metalanguage, which expresses a relationship between two statements p and q. The statements are logically equivalent if, in every model, they have the same truth value.
https://en.wikipedia.org/wiki/Logical_equivalence
In classical propositional logic, material implication[1][2] is a valid rule of replacement that allows a conditional statement to be replaced by a disjunction in which the antecedent is negated. The rule states that P implies Q is logically equivalent to not-P or Q, and that either form can replace the other in logical proofs. In other words, if P is true, then Q must also be true, while if Q is not true, then P cannot be true either; additionally, when P is not true, Q may be either true or false.

Here "⇔" is a metalogical symbol representing "can be replaced in a proof with", P and Q are any given logical statements, and ¬P ∨ Q can be read as "(not P) or Q".

To illustrate this, consider the following statements: To say "Sam ate an orange for lunch" implies "Sam ate a fruit for lunch" (P → Q). Logically, if Sam did not eat a fruit for lunch, then Sam also cannot have eaten an orange for lunch (by contraposition). However, merely saying that Sam did not eat an orange for lunch provides no information on whether or not Sam ate a fruit (of any kind) for lunch.

Suppose we are given that P → Q. Then we have ¬P ∨ P by the law of excluded middle (i.e. either P must be true, or P must not be true). Subsequently, since P → Q, P can be replaced by Q in the statement, and thus it follows that ¬P ∨ Q (i.e. either Q must be true, or P must not be true). Suppose, conversely, we are given ¬P ∨ Q. Then if P is true, that rules out the first disjunct, so we have Q.
In short, P → Q.[3] However, if P is false, then this entailment fails, because the first disjunct ¬P is true, which puts no constraint on the second disjunct Q. Hence, nothing can be said about P → Q. In sum, the equivalence in the case of false P is only conventional, and hence the formal proof of equivalence is only partial.

This can also be expressed with a truth table. An example: we are given the conditional fact that if it is a bear, then it can swim. Then, all four possibilities in the truth table are compared to that fact. Thus, the conditional fact can be converted to ¬P ∨ Q, which is "it is not a bear" or "it can swim", where P is the statement "it is a bear" and Q is the statement "it can swim".

Intuitionistic logic does not treat P → Q as equivalent to ¬P ∨ Q, for the following reason. Given P → Q, one can constructively transform a proof of P into a proof of Q. In particular, P → P holds in intuitionistic logic. If P → Q ⇒ ¬P ∨ Q held, then ¬P ∨ P could be derived. However, the latter is the law of excluded middle, which is not accepted by intuitionistic logic (one cannot assume ¬P ∨ P without knowing which case applies).
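The classical equivalence, though not the intuitionistic caveat, can be checked row by row (a sketch; the conditional is defined by case analysis rather than by the disjunction, so the comparison is not circular):

```python
from itertools import product

def implies(p, q):
    """Material conditional by case analysis: when p holds, the value is q;
    when p fails, the conditional is (by convention) true."""
    return q if p else True

# P -> Q and (not P) or Q agree on all four rows of the truth table.
assert all(implies(p, q) == ((not p) or q)
           for p, q in product((False, True), repeat=2))

# Bear example: the only row excluded by "if it is a bear, then it can swim"
# is the one where "it is a bear" is true and "it can swim" is false.
excluded = [(p, q) for p, q in product((False, True), repeat=2)
            if not implies(p, q)]
assert excluded == [(True, False)]
```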
https://en.wikipedia.org/wiki/Material_implication_(rule_of_inference)
Connexive logic is a class of non-classical logics designed to exclude the paradoxes of material implication. The characteristic that separates connexive logic from other non-classical logics is its acceptance of Aristotle's thesis, i.e. the formula ¬(¬p → p), as a logical truth. Aristotle's thesis asserts that no statement follows from its own denial. Stronger connexive logics also accept Boethius' thesis, (p → q) → ¬(p → ¬q), which states that if a statement implies one thing, it does not imply its opposite. Relevance logic is another logical theory that tries to avoid the paradoxes of material implication.

Connexive logic is arguably one of the oldest approaches to logic. Aristotle's thesis is named after Aristotle because he uses this principle in a passage in the Prior Analytics:

It is impossible that the same thing should be necessitated by the being and the not-being of the same thing. I mean, for example, that it is impossible that B should necessarily be great if A is white, and that B should necessarily be great if A is not white. For if B is not great A cannot be white. But if, when A is not white, it is necessary that B should be great, it necessarily results that if B is not great, B itself is great. But this is impossible. (An. Pr. ii 4.57b3)

The sense of this passage is to perform a reductio ad absurdum proof on the claim that two formulas, (A → B) and (~A → B), can be true simultaneously. Aristotle declares the last line of the argument, ~B → B, to be impossible, completing the reductio. But if it is impossible, its denial, ~(~B → B), is a logical truth.

Aristotelian syllogisms (as opposed to Boolean syllogisms) appear to be based on connexive principles. For example, the contrariety of A and E statements, "All S are P" and "No S are P", follows by a reductio ad absurdum argument similar to the one given by Aristotle.
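Aristotle's thesis must indeed be *adopted* as an axiom: it does not hold when → is read as the classical material conditional, which is why accepting it forces a departure from classical logic. A quick truth-functional check (sketch):

```python
def material(p, q):
    return (not p) or q           # the classical material conditional

# Aristotle's thesis not(not-p -> p), evaluated truth-functionally:
thesis = {p: not material(not p, p) for p in (False, True)}

# It holds when p is false but fails when p is true (since then
# not-p -> p is vacuously true), so it is not a classical tautology.
assert thesis == {False: True, True: False}
```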
Later logicians, notably Chrysippus, are also thought to have endorsed connexive principles. By 100 BCE logicians had divided into four or five distinct schools concerning the correct understanding of conditional ("if...then...") statements. Sextus Empiricus described one school as follows:

And those who introduce the notion of connexion say that a conditional is sound when the contradictory of its consequent is incompatible with its antecedent.

The term "connexivism" is derived from this passage (as translated by Kneale and Kneale). It is believed that Sextus was here describing the school of Chrysippus. That this school accepted Aristotle's thesis seems clear, because the definition of the conditional requires that Aristotle's thesis be a logical truth, provided we assume that every statement is compatible with itself, which seems fairly fundamental to the concept of compatibility.

The medieval philosopher Boethius also accepted connexive principles. In De Syllogismo Hypothetico, he argues that from "If A, then if B then C" and "If B then not-C", we may infer "not-A" by modus tollens. However, this follows only if the two statements "If B then C" and "If B then not-C" are considered incompatible.

Since Aristotelian logic was the standard logic studied until the 19th century, it could reasonably be claimed that connexive logic was the accepted school of thought among logicians for most of Western history. (Of course, logicians were not necessarily aware of belonging to the connexivist school.) However, in the 19th century Boolean syllogisms, and a propositional logic based on truth functions, became the standard. Since then, relatively few logicians have subscribed to connexivism. These few include Everett J. Nelson and P. F. Strawson.

The objection that is made to the truth-functional definition of conditionals is that there is no requirement that the consequent actually follow from the antecedent.
So long as the antecedent is false or the consequent true, the conditional is considered to be true, whether there is any relation between the antecedent and the consequent or not. Hence, as the philosopher Charles Sanders Peirce once remarked, you can cut up a newspaper, sentence by sentence, put all the sentences in a hat, and draw any two at random. It is guaranteed that either the first sentence will imply the second, or vice versa. But when we use the words "if" and "then" we generally mean to assert that there is some relation between the antecedent and the consequent. What is the nature of that relationship?

Relevance (or relevant) logicians take the view that, in addition to saying that the consequent cannot be false while the antecedent is true, the antecedent must be "relevant" to the consequent. At least initially, this means that there must be at least some terms (or variables) that appear in both the antecedent and the consequent. Connexivists generally claim instead that there must be some "real connection" between the antecedent and the consequent, such as might be the result of real class inclusion relations. For example, the class relation "All men are mortal" would provide a real connection that would warrant the conditional "If Socrates is a man, then Socrates is mortal." However, more remote connections, for example "If she apologized to him, then he lied to me" (suggested by Bennett), still defy connexivist analysis.
https://en.wikipedia.org/wiki/Connexive_logic
The phrase "correlation does not imply causation" refers to the inability to legitimately deduce acause-and-effectrelationship between two events orvariablessolely on the basis of an observed association orcorrelationbetween them.[1][2]The idea that "correlation implies causation" is an example of aquestionable-causelogical fallacy, in which two events occurring together are taken to have established a cause-and-effect relationship. This fallacy is also known by the Latin phrasecum hoc ergo propter hoc('with this, therefore because of this'). This differs from the fallacy known aspost hoc ergo propter hoc("after this, therefore because of this"), in which an event following another is seen as anecessary consequenceof the former event, and fromconflation, the errant merging of two events, ideas, databases, etc., into one. As with any logical fallacy, identifying that the reasoning behind an argument is flaweddoes not necessarily implythat the resulting conclusion is false.Statisticalmethods have been proposed that use correlation as the basis forhypothesis testsfor causality, including theGranger causality testandconvergent cross mapping. TheBradford Hill criteria, also known as Hill's criteria for causation, are a group of nine principles that can be useful in establishing epidemiologic evidence of a causal relationship. In casual use, the word "implies" loosely meanssuggests, rather thanrequires. However, inlogic, the technical use of the word "implies" means "is asufficient conditionfor."[3]That is the meaning intended by statisticians when they say causation is not certain. Indeed,p implies qhas the technical meaning of thematerial conditional:if p then qsymbolized asp → q. That is, "if circumstancepis true, thenqfollows." In that sense, it is always correct to say "Correlation does notimplycausation." The word "cause" (or "causation") has multiple meanings in English. 
In philosophical terminology, "cause" can refer to necessary, sufficient, or contributing causes. In examining correlation, "cause" is most often used to mean "one contributing cause" (but not necessarily the only contributing cause).

Reverse causation or reverse causality or wrong direction is an informal fallacy of questionable cause where cause and effect are reversed. The cause is said to be the effect and vice versa. In this example, the correlation (simultaneity) between windmill activity and wind velocity does not imply that wind is caused by windmills. It is rather the other way around, as suggested by the fact that wind does not need windmills to exist, while windmills need wind to rotate. Wind can be observed in places where there are no windmills or non-rotating windmills, and there are good reasons to believe that wind existed before the invention of windmills.

Causality is actually the other way around, since some diseases, such as cancer, cause low cholesterol due to a myriad of factors, such as weight loss, and they also cause an increase in mortality.[6] This can also be seen in alcoholics.[citation needed] As alcoholics become diagnosed with cirrhosis of the liver, many quit drinking. However, they also experience an increased risk of mortality. In these instances, it is the diseases that cause an increased risk of mortality, but the increased mortality is attributed to the beneficial effects that follow the diagnosis, making healthy changes look unhealthy.

Example 3: In other cases it may simply be unclear which is the cause and which is the effect. For example: This could easily be the other way round; that is, violent children like watching more TV than less violent ones.
Example 4: A correlation between recreational drug use and psychiatric disorders might be either way around: perhaps the drugs cause the disorders, or perhaps people use drugs to self-medicate for preexisting conditions. Gateway drug theory may argue that marijuana usage leads to usage of harder drugs, but hard drug usage may lead to marijuana usage (see also confusion of the inverse). Indeed, in the social sciences, where controlled experiments often cannot be used to discern the direction of causation, this fallacy can fuel long-standing scientific arguments. One such example can be found in education economics, between the screening/signaling and human capital models: it could either be that having innate ability enables one to complete an education, or that completing an education builds one's ability.

Example 5: A historical example of this is that Europeans in the Middle Ages believed that lice were beneficial to health, since there would rarely be any lice on sick people. The reasoning was that the people got sick because the lice left. The real reason, however, is that lice are extremely sensitive to body temperature. A small increase of body temperature, such as in a fever, makes the lice look for another host. The medical thermometer had not yet been invented, and so that increase in temperature was rarely noticed. Noticeable symptoms came later, which gave the impression that the lice had left before the person became sick.[7]

In other cases, two phenomena can each be a partial cause of the other; consider poverty and lack of education, or procrastination and poor self-esteem. One making an argument based on these two phenomena must, however, be careful to avoid the fallacy of circular cause and consequence. Poverty is a cause of lack of education, but it is not the sole cause, and vice versa.

The third-cause fallacy (also known as ignoring a common cause[8] or questionable cause[8]) is a logical fallacy in which a spurious relationship is confused for causation.
It asserts that X causes Y when, in reality, both X and Y are caused by Z. It is a variation on the post hoc ergo propter hoc fallacy and a member of the questionable cause group of fallacies. All of these examples deal with a lurking variable, which is simply a hidden third variable that affects both of the variables observed to be correlated. That third variable is also known as a confounding variable, with the slight difference that confounding variables need not be hidden and may thus be corrected for in an analysis. A difficulty often also arises where the third factor, though fundamentally different from A and B, is so closely related to A and/or B as to be confused with them, or very difficult to scientifically disentangle from them (see Example 4).

The above example commits the correlation-implies-causation fallacy, as it prematurely concludes that sleeping with one's shoes on causes headache. A more plausible explanation is that both are caused by a third factor, in this case going to bed drunk, which thereby gives rise to a correlation. So the conclusion is false.

This is a scientific example that resulted from a study at the University of Pennsylvania Medical Center. Published in the May 13, 1999, issue of Nature,[9] the study received much coverage at the time in the popular press.[10] However, a later study at Ohio State University did not find that infants sleeping with the light on caused the development of myopia. It did find a strong link between parental myopia and the development of child myopia, also noting that myopic parents were more likely to leave a light on in their children's bedroom.[11][12][13][14] In this case, the cause of both conditions is parental myopia, and the above-stated conclusion is false.

This example fails to recognize the importance of time of year and temperature to ice cream sales.
Ice cream is sold during the hot summer months at a much greater rate than during colder times, and it is during these hot summer months that people are more likely to engage in activities involving water, such as swimming. The increased drowning deaths are simply caused by more exposure to water-based activities, not ice cream. The stated conclusion is false.

However, as encountered in many psychological studies, another variable, a "self-consciousness score", is discovered that has a sharper correlation (+.73) with shyness. This suggests a possible "third variable" problem; however, when three such closely related measures are found, it further suggests that each may have bidirectional tendencies (see "bidirectional variable", above), being a cluster of correlated values each influencing one another to some extent. Therefore, the simple conclusion above may be false.

Richer populations tend to eat more food and produce more CO2.

Further research[16] has called this conclusion into question. Instead, it may be that other underlying factors, like genes, diet, and exercise, affect both HDL levels and the likelihood of having a heart attack; it is possible that medicines may affect the directly measurable factor, HDL levels, without affecting the chance of heart attack.

Causality is not necessarily one-way;[dubious – discuss] in a predator-prey relationship, predator numbers affect prey numbers, but prey numbers, i.e. food supply, also affect predator numbers. Another well-known example is that cyclists have a lower Body Mass Index than people who do not cycle. This is often explained by assuming that cycling increases physical activity levels and therefore decreases BMI. Because results from prospective studies on people who increase their bicycle use show a smaller effect on BMI than cross-sectional studies, there may be some reverse causality as well.
For example, people with a lower BMI may be more likely to want to cycle in the first place.[17]

The two variables are not related at all, but correlate by chance. The more things are examined, the more likely it is that two unrelated variables will appear to be related. For example:

Much of scientific evidence is based upon a correlation of variables[18] that are observed to occur together. Scientists are careful to point out that correlation does not necessarily mean causation. The assumption that A causes B simply because A correlates with B is not accepted as a legitimate form of argument. However, sometimes people commit the opposite fallacy of dismissing correlation entirely. That would dismiss a large swath of important scientific evidence.[18] Since it may be difficult or ethically impossible to run controlled double-blind studies to address certain questions, correlational evidence from several different angles may be useful for prediction despite failing to provide evidence for causation.

For example, social workers might be interested in knowing how child abuse relates to academic performance. Although it would be unethical to perform an experiment in which children are randomly assigned to receive or not receive abuse, researchers can look at existing groups using a non-experimental correlational design. If in fact a negative correlation exists between abuse and academic performance, researchers could potentially use this knowledge of a statistical correlation to make predictions about children outside the study who experience abuse, even though the study failed to provide causal evidence that abuse decreases academic performance.[19] The combination of limited available methodologies with the dismissing-correlation fallacy has on occasion been used to counter a scientific finding.
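The chance-correlation point above can be illustrated with a short simulation (a sketch; the Pearson-r helper and the series sizes are arbitrary choices of mine): among many pairs of independently generated series, the strongest observed correlation tends to drift far from zero even though no pair is actually related.

```python
import random
from statistics import mean

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

assert abs(pearson([1, 2, 3], [2, 4, 6]) - 1.0) < 1e-9   # perfectly related

# Compare one random series against many other independent ones: the best
# match found is an artifact of searching, not of any causal link.
rng = random.Random(42)
target = [rng.random() for _ in range(8)]
others = [[rng.random() for _ in range(8)] for _ in range(200)]
best = max(abs(pearson(target, s)) for s in others)
assert 0.0 <= best <= 1.0   # |r| is bounded; its size here is chance alone
```

This is the "multiple comparisons" mechanism: the more pairs examined, the more extreme the largest spurious correlation is likely to be.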
For example, the tobacco industry has historically relied on a dismissal of correlational evidence to reject a link between tobacco smoke and lung cancer,[20] as did biologist and statistician Ronald Fisher (frequently on the industry's behalf).[list 1]

Correlation is a valuable type of scientific evidence in fields such as medicine, psychology, and sociology. Correlations must first be confirmed as real, and every possible causative relationship must then be systematically explored. In the end, correlation alone cannot be used as evidence for a cause-and-effect relationship between a treatment and benefit, a risk factor and a disease, or a social or economic factor and various outcomes. It is one of the most abused types of evidence because it is easy and even tempting to come to premature conclusions based upon the preliminary appearance of a correlation.[20]
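The earlier point that unrelated variables can correlate purely by chance, and that examining more variable pairs makes such spurious matches more likely, can be simulated directly. The sketch below (not from the article; variable counts and the 0.7 threshold are arbitrary choices) generates many independent random variables and counts pairs that nonetheless correlate strongly:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# 40 independent random "variables", 10 observations each -- no real relationships.
variables = [[random.random() for _ in range(10)] for _ in range(40)]

strong = []
for i in range(40):
    for j in range(i + 1, 40):
        r = pearson(variables[i], variables[j])
        if abs(r) > 0.7:  # a "strong" correlation by the usual rules of thumb
            strong.append((i, j, r))

print(len(strong))  # several pairs correlate strongly despite independence
```

With 780 pairs and only 10 observations each, a handful of pairs typically cross the 0.7 threshold by chance alone, which is the multiple-comparisons point the text makes.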
https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation
Counterfactual conditionals (also contrafactual, subjunctive or X-marked) are conditional sentences which discuss what would have been true under different circumstances, e.g. "If Peter believed in ghosts, he would be afraid to be here." Counterfactuals are contrasted with indicatives, which are generally restricted to discussing open possibilities. Counterfactuals are characterized grammatically by their use of fake tense morphology, which some languages use in combination with other kinds of morphology including aspect and mood.

Counterfactuals are one of the most studied phenomena in philosophical logic, formal semantics, and philosophy of language. They were first discussed as a problem for the material conditional analysis of conditionals, which treats them all as trivially true. Starting in the 1960s, philosophers and linguists developed the now-classic possible world approach, in which a counterfactual's truth hinges on its consequent holding at certain possible worlds where its antecedent holds. More recent formal analyses have treated them using tools such as causal models and dynamic semantics. Other research has addressed their metaphysical, psychological, and grammatical underpinnings, while applying some of the resultant insights to fields including history, marketing, and epidemiology.

An example of the difference between indicative and counterfactual conditionals is the following English minimal pair:

1. Indicative: If Sally owns a donkey, then she beats it.
2. Counterfactual: If Sally owned a donkey, then she would beat it.

These conditionals differ in both form and meaning. The indicative conditional uses the present tense form "owns" and therefore conveys that the speaker is agnostic about whether Sally in fact owns a donkey. The counterfactual example uses the fake tense form "owned" in the "if" clause and the past-inflected modal "would" in the "then" clause. As a result, it conveys that Sally does not in fact own a donkey. English has several other grammatical forms whose meanings are sometimes included under the umbrella of counterfactuality.
One is the past perfect counterfactual, which contrasts with indicatives and simple past counterfactuals in its use of pluperfect morphology.[5] Another kind of conditional uses the form "were", generally referred to as the irrealis or subjunctive form.[6] Past perfect and irrealis counterfactuals can undergo conditional inversion.[7]

The term counterfactual conditional is widely used as an umbrella term for the kinds of sentences shown above. However, not all conditionals of this sort express contrary-to-fact meanings. For instance, the classic example known as the "Anderson Case" has the characteristic grammatical form of a counterfactual conditional, but does not convey that its antecedent is false or unlikely.[8][9]

Such conditionals are also widely referred to as subjunctive conditionals, though this term is likewise acknowledged as a misnomer even by those who use it.[11] Many languages do not have a morphological subjunctive (e.g. Danish and Dutch), and many that do have it do not use it for this sort of conditional (e.g. French, Swahili, all Indo-Aryan languages that have a subjunctive). Moreover, languages that do use the subjunctive for such conditionals only do so if they have a specific past subjunctive form. Thus, subjunctive marking is neither necessary nor sufficient for membership in this class of conditionals.[12][13][9]

The terms counterfactual and subjunctive have sometimes been repurposed for more specific uses. For instance, the term "counterfactual" is sometimes applied to conditionals that express a contrary-to-fact meaning, regardless of their grammatical structure.[14][8] Along similar lines, the term "subjunctive" is sometimes used to refer to conditionals that bear fake past or irrealis marking, regardless of the meaning they convey.[14][15] Recently the term X-Marked has been proposed as a replacement, evoking the extra marking that these conditionals bear.
Those adopting this terminology refer to indicative conditionals as O-Marked conditionals, reflecting their ordinary marking.[16][17][3]

The antecedent of a conditional is sometimes referred to as its "if"-clause or protasis. The consequent of a conditional is sometimes referred to as a "then"-clause or as an apodosis.

Counterfactuals were first discussed by Nelson Goodman as a problem for the material conditional used in classical logic. Because of these problems, early work such as that of W. V. Quine held that counterfactuals are not strictly logical, and do not make true or false claims about the world. However, in the 1960s and 1970s, work by Robert Stalnaker and David Lewis showed that these problems are surmountable given an appropriate intensional logical framework. Work since then in formal semantics, philosophical logic, philosophy of language, and cognitive science has built on this insight, taking it in a variety of different directions.[18]

According to the material conditional analysis, a natural language conditional, a statement of the form "if P then Q", is true whenever its antecedent, P, is false. Since counterfactual conditionals are those whose antecedents are false, this analysis would wrongly predict that all counterfactuals are vacuously true. Goodman illustrates this point using the following pair in a context where it is understood that the piece of butter under discussion had not been heated.[19]

1. If that piece of butter had been heated to 150°F, it would have melted.
2. If that piece of butter had been heated to 150°F, it would not have melted.

More generally, such examples show that counterfactuals are not truth-functional. In other words, knowing whether the antecedent and consequent are actually true is not sufficient to determine whether the counterfactual itself is true.[18]

Counterfactuals are context dependent and vague. For example, either of the following statements can be reasonably held true, though not at the same time:[20]

1. If Bizet and Verdi had been compatriots, Bizet would have been Italian.
2. If Bizet and Verdi had been compatriots, Verdi would have been French.

Counterfactuals are non-monotonic in the sense that their truth values can be changed by adding extra material to their antecedents.
This fact is illustrated by Sobel sequences such as the following:[19][21][22]

1. If Hannah had drunk coffee, she would be happy.
2. If Hannah had drunk coffee and the coffee had gasoline in it, she would not be happy.

One way of formalizing this fact is to say that the principle of Antecedent Strengthening should not hold for any connective > intended as a formalization of natural language conditionals.

The most common logical accounts of counterfactuals are couched in the possible world semantics. Broadly speaking, these approaches have in common that they treat a counterfactual A > B as true if B holds across some set of possible worlds where A is true. They vary mainly in how they identify the set of relevant A-worlds.

David Lewis's variably strict conditional is considered the classic analysis within philosophy. The closely related premise semantics proposed by Angelika Kratzer is often taken as the standard within linguistics. However, there are numerous possible worlds approaches on the market, including dynamic variants of the strict conditional analysis originally dismissed by Lewis.

The strict conditional analysis treats natural language counterfactuals as being equivalent to the modal logic formula □(P → Q). In this formula, □ expresses necessity and → is understood as material implication. This approach was first proposed in 1912 by C. I. Lewis as part of his axiomatic approach to modal logic.[18] In modern relational semantics, this means that the strict conditional is true at w iff the corresponding material conditional is true throughout the worlds accessible from w. More formally: □(P → Q) is true at w iff P → Q is true at every world v such that wRv.

Unlike the material conditional, the strict conditional is not vacuously true when its antecedent is false. To see why, observe that both P and □(P → Q) will be false at w if there is some accessible world v where P is true and Q is not. The strict conditional is also context-dependent, at least when given a relational semantics (or something similar).
In the relational framework, accessibility relations are parameters of evaluation which encode the range of possibilities that are treated as "live" in the context. Since the truth of a strict conditional can depend on the accessibility relation used to evaluate it, this feature of the strict conditional can be used to capture context-dependence.

The strict conditional analysis encounters many known problems, notably monotonicity. In the classical relational framework, when using a standard notion of entailment, the strict conditional is monotonic, i.e. it validates Antecedent Strengthening. To see why, observe that if P → Q holds at every world accessible from w, the monotonicity of the material conditional guarantees that P ∧ R → Q will hold there too. Thus, we will have that □(P → Q) ⊨ □(P ∧ R → Q).

This fact led to widespread abandonment of the strict conditional, in particular in favor of Lewis's variably strict analysis. However, subsequent work has revived the strict conditional analysis by appealing to context sensitivity. This approach was pioneered by Warmbrōd (1981), who argued that Sobel sequences do not demand a non-monotonic logic, but can rather be explained by speakers switching to more permissive accessibility relations as the sequence proceeds. In his system, a counterfactual like "If Hannah had drunk coffee, she would be happy" would normally be evaluated using a model where Hannah's coffee is gasoline-free in all accessible worlds. If this same model were used to evaluate a subsequent utterance of "If Hannah had drunk coffee and the coffee had gasoline in it...", this second conditional would come out as trivially true, since there are no accessible worlds where its antecedent holds.
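The Antecedent Strengthening behaviour just derived can be checked in a toy relational (Kripke) model. The sketch below is illustrative only; the three worlds, the accessibility relation, and the valuations are invented for the example:

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

def box(world, R, prop):
    """Necessity at `world`: prop holds at every world accessible from it."""
    return all(prop(v) for v in R[world])

# Three toy worlds; w can "see" itself, v1 and v2.
R = {"w": ["w", "v1", "v2"], "v1": ["v1"], "v2": ["v2"]}
P      = {"w": True,  "v1": True, "v2": False}
Q      = {"w": True,  "v1": True, "v2": True}
Rextra = {"w": False, "v1": True, "v2": True}   # a strengthening conjunct

# Box(P -> Q) holds at w ...
strict_PQ = box("w", R, lambda u: implies(P[u], Q[u]))

# ... and so does Box((P and Rextra) -> Q): strengthening the antecedent
# cannot falsify a strict conditional, since material implication is
# monotonic in its antecedent.
strict_PRQ = box("w", R, lambda u: implies(P[u] and Rextra[u], Q[u]))

assert strict_PQ and strict_PRQ
```

Whatever valuations are chosen, whenever `strict_PQ` comes out true, `strict_PRQ` must as well, which is exactly the validity of Antecedent Strengthening that the variably strict analysis was designed to avoid.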
Warmbrōd's idea was that speakers will switch to a model with a more permissive accessibility relation in order to avoid this triviality. Subsequent work by Kai von Fintel (2001), Thony Gillies (2007), and Malte Willer (2019) has formalized this idea in the framework of dynamic semantics, and given a number of linguistic arguments in its favor. One argument is that conditional antecedents license negative polarity items, which are thought to be licensed only by monotonic operators. Another argument in favor of the strict conditional comes from Irene Heim's observation that Sobel sequences are generally infelicitous (i.e. sound strange) in reverse. Sarah Moss (2012) and Karen Lewis (2018) have responded to these arguments, showing that a version of the variably strict analysis can account for these patterns, and arguing that such an account is preferable since it can also account for apparent exceptions. As of 2020, this debate continues in the literature, with accounts such as Willer (2019) arguing that a strict conditional account can cover these exceptions as well.[18]

In the variably strict approach, the semantics of a conditional A > B is given by some function on the relative closeness of worlds where A is true and B is true, on the one hand, and worlds where A is true but B is not, on the other. On Lewis's account, A > C is (a) vacuously true if and only if there are no worlds where A is true (for example, if A is logically or metaphysically impossible); (b) non-vacuously true if and only if, among the worlds where A is true, some worlds where C is true are closer to the actual world than any world where C is not true; or (c) false otherwise. Although in Lewis's Counterfactuals it was unclear what he meant by 'closeness', in later writings Lewis made it clear that he did not intend the metric of 'closeness' to be simply our ordinary notion of overall similarity.
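Lewis's clauses (a)–(c) can be sketched computationally. In the toy model below (not from the article), worlds are dictionaries of facts and the numeric "distance" field is an invented closeness ordering, with smaller values meaning closer to the actual world:

```python
# Toy sketch of Lewis's variably strict conditional; worlds and distances
# are invented for illustration.
worlds = [
    {"ate_more": True,  "hungry_at_11": False, "distance": 1},
    {"ate_more": True,  "hungry_at_11": True,  "distance": 2},
    {"ate_more": False, "hungry_at_11": True,  "distance": 0},  # the actual world
]

def lewis_counterfactual(antecedent, consequent, worlds):
    """A > C: vacuously true if there are no A-worlds; otherwise true iff some
    (A and C)-world is closer than every (A and not-C)-world."""
    a_worlds = [w for w in worlds if antecedent(w)]
    if not a_worlds:
        return True  # clause (a): vacuous truth
    ac = [w["distance"] for w in a_worlds if consequent(w)]
    a_not_c = [w["distance"] for w in a_worlds if not consequent(w)]
    # clause (b) vs clause (c):
    return bool(ac) and (not a_not_c or min(ac) < min(a_not_c))

# "If he had eaten more for breakfast, he would not have been hungry at 11 am."
result = lewis_counterfactual(lambda w: w["ate_more"],
                              lambda w: not w["hungry_at_11"],
                              worlds)
assert result is True
```

The counterfactual comes out true here because the closest world where he ate more (distance 1) is one where he is not hungry, beating every ate-more-but-still-hungry world (distance 2), which mirrors the breakfast example discussed next.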
Example: If he had eaten more for breakfast, he would not have been hungry at 11 am.

On Lewis's account, the truth of this statement consists in the fact that, among possible worlds where he ate more for breakfast, there is at least one world where he is not hungry at 11 am and which is closer to our world than any world where he ate more for breakfast but is still hungry at 11 am.

Stalnaker's account differs from Lewis's most notably in his acceptance of the limit and uniqueness assumptions. The uniqueness assumption is the thesis that, for any antecedent A, among the possible worlds where A is true, there is a single (unique) one that is closest to the actual world. The limit assumption is the thesis that, for a given antecedent A, if there is a chain of possible worlds where A is true, each closer to the actual world than its predecessor, then the chain has a limit: a possible world where A is true that is closer to the actual world than all worlds in the chain. (The uniqueness assumption entails the limit assumption, but the limit assumption does not entail the uniqueness assumption.) On Stalnaker's account, A > C is non-vacuously true if and only if, at the closest world where A is true, C is true. So, the above example is true just in case at the single, closest world where he ate more breakfast, he does not feel hungry at 11 am.

Although it is controversial, Lewis rejected the limit assumption (and therefore the uniqueness assumption) because it rules out the possibility that there might be worlds that get closer and closer to the actual world without limit. For example, there might be an infinite series of worlds, each with a coffee cup a smaller fraction of an inch to the left of its actual position, but none of which is uniquely the closest. (See Lewis 1973: 20.)

One consequence of Stalnaker's acceptance of the uniqueness assumption is that, if the law of excluded middle is true, then all instances of the formula (A > C) ∨ (A > ¬C) are true. The law of excluded middle is the thesis that for all propositions p, p ∨ ¬p is true.
If the uniqueness assumption is true, then for every antecedent A, there is a uniquely closest world where A is true. If the law of excluded middle is true, any consequent C is either true or false at that world where A is true. So for every counterfactual A > C, either A > C or A > ¬C is true. This is called conditional excluded middle (CEM).

Example:

(1) If the coin had been flipped, it would have landed heads.
(2) If the coin had been flipped, it would have landed tails.

On Stalnaker's analysis, there is a closest world where the fair coin mentioned in (1) and (2) is flipped, and at that world either it lands heads or it lands tails. So either (1) is true and (2) is false, or (1) is false and (2) is true. On Lewis's analysis, however, both (1) and (2) are false, for the worlds where the fair coin lands heads are no more or less close than the worlds where it lands tails. For Lewis, "If the coin had been flipped, it would have landed heads or tails" is true, but this does not entail that "If the coin had been flipped, it would have landed heads, or: If the coin had been flipped it would have landed tails."

The causal models framework analyzes counterfactuals in terms of systems of structural equations. In a system of equations, each variable is assigned a value that is an explicit function of other variables in the system. Given such a model, the sentence "Y would be y had X been x" (formally, X = x > Y = y) is defined as the assertion: if we replace the equation currently determining X with a constant X = x, and solve the set of equations for variable Y, the solution obtained will be Y = y. This definition has been shown to be compatible with the axioms of possible world semantics and forms the basis for causal inference in the natural and social sciences, since each structural equation in those domains corresponds to a familiar causal mechanism that can be meaningfully reasoned about by investigators.
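The structural-equation definition just stated, replace the equation determining X with the constant X = x and re-solve for Y, can be sketched in a few lines. The equations below are invented purely for illustration (they are not from the article):

```python
# Sketch of evaluating "Y would be y had X been x" in a structural causal model:
# override X's equation with a constant, re-solve, and read off Y.

def solve(equations, interventions=None):
    """Evaluate variables in causal order, overriding any intervened-on ones."""
    interventions = interventions or {}
    values = {}
    for var, eq in equations:  # equations listed in dependency (causal) order
        values[var] = interventions.get(var, eq(values))
    return values

# A toy model: U is exogenous; X is determined by U; Y by X and U.
equations = [
    ("U", lambda v: 1),
    ("X", lambda v: v["U"]),            # X = U
    ("Y", lambda v: v["X"] + v["U"]),   # Y = X + U
]

actual = solve(equations)                    # U=1, X=1, Y=2
counterfactual = solve(equations, {"X": 0})  # "Y would be 1 had X been 0"

assert actual["Y"] == 2
assert counterfactual["Y"] == 1
```

Note that the intervention removes only X's own equation: U keeps its actual value, so the counterfactual Y reflects what the mechanisms downstream of X would have produced had X been set to 0.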
This approach was developed by Judea Pearl (2000) as a means of encoding fine-grained intuitions about causal relations which are difficult to capture in other proposed systems.[23]

In the belief revision framework, counterfactuals are treated using a formal implementation of the Ramsey test. In these systems, a counterfactual A > B holds if and only if the addition of A to the current body of knowledge has B as a consequence. This condition relates counterfactual conditionals to belief revision, as the evaluation of A > B can be done by first revising the current knowledge with A and then checking whether B is true in what results. Revising is easy when A is consistent with the current beliefs, but can be hard otherwise. Every semantics for belief revision can be used for evaluating conditional statements. Conversely, every method for evaluating conditionals can be seen as a way of performing revision.

Ginsberg (1986) has proposed a semantics for conditionals which assumes that the current beliefs form a set of propositional formulae, considering the maximal sets of these formulae that are consistent with A, and adding A to each. The rationale is that each of these maximal sets represents a possible state of belief in which A is true that is as similar as possible to the original one. The conditional statement A > B therefore holds if and only if B is true in all such sets.[24]

Languages use different strategies for expressing counterfactuality. Some have dedicated counterfactual morphemes, while others recruit morphemes which otherwise express tense, aspect, mood, or a combination thereof. Since the early 2000s, linguists, philosophers of language, and philosophical logicians have intensely studied the nature of this grammatical marking, and it continues to be an active area of study.
In many languages, counterfactuality is marked by past tense morphology.[25] Since these uses of the past tense do not convey their typical temporal meaning, they are called fake past or fake tense.[26][27][28] English is one language which uses fake past to mark counterfactuality, as shown in the following minimal pair.[29] In the indicative example, the bolded words are present tense forms. In the counterfactual example, both words take their past tense form. This use of the past tense cannot have its ordinary temporal meaning, since it can be used with the adverb "tomorrow" without creating a contradiction.[25][26][27][28]

Modern Hebrew is another language where counterfactuality is marked with a fake past morpheme:[30]

im Dani haya ba-bayit maχar, hayinu mevakRim oto
if Dani be.PST.3S.M in-home tomorrow, be.PST.1PL visit.PTC.PL he.ACC
"If Dani had been home tomorrow, we would've visited him."

Palestinian Arabic is another:[30]

iza kaan fi l-bet bukra, kunna zurna-a
if be.PST.3S.M in the-house tomorrow, be.PST.1PL visit.PST.PFV.1PL-him
"If he had been home tomorrow, we would've visited him."

Fake past is extremely prevalent cross-linguistically, either on its own or in combination with other morphemes. Moreover, theoretical linguists and philosophers of language have argued that other languages' strategies for marking counterfactuality are actually realizations of fake tense along with other morphemes. For this reason, fake tense has often been treated as the locus of the counterfactual meaning itself.[26][31]

In formal semantics and philosophical logic, fake past is regarded as a puzzle, since it is not obvious why so many unrelated languages would repurpose a tense morpheme to mark counterfactuality.
Proposed solutions to this puzzle divide into two camps: past as modal and past as past. These approaches differ in whether or not they take the past tense's core meaning to be about time.[32][33]

In the past as modal approach, the denotation of the past tense is not fundamentally about time. Rather, it is an underspecified skeleton which can apply either to modal or temporal content.[26][32][34] For instance, in the particular past-as-modal proposal of Iatridou (2000), the past tense's core meaning is a schema conveying that some element x under discussion excludes the element that the utterance context provides. Depending on how this denotation composes, x can be a time interval or a possible world. When x is a time, the past tense will convey that the sentence is talking about non-current times, i.e. the past. When x is a world, it will convey that the sentence is talking about a potentially non-actual possibility. The latter is what allows for a counterfactual meaning.

The past as past approach treats the past tense as having an inherently temporal denotation. On this approach, so-called fake tense is not actually fake. It differs from "real" tense only in how it takes scope, i.e. which component of the sentence's meaning is shifted to an earlier time. When a sentence has "real" past marking, it discusses something that happened at an earlier time; when a sentence has so-called fake past marking, it discusses possibilities that were accessible at an earlier time but may no longer be.[35][36][37]

Fake aspect often accompanies fake tense in languages that mark aspect. In some languages (e.g. Modern Greek, Zulu, and the Romance languages) this fake aspect is imperfective. In other languages (e.g. Palestinian Arabic) it is perfective. However, in other languages including Russian and Polish, counterfactuals can have either perfective or imperfective aspect.[31]

Fake imperfective aspect is demonstrated by the two Modern Greek sentences below.
These examples form a minimal pair, since they are identical except that the first uses past imperfective marking where the second uses past perfective marking. As a result of this morphological difference, the first has a counterfactual meaning, while the second does not.[26]

An eperne afto to siropi, θa γinotan kala
if take.PST.IPFV this syrup, FUT become.PST.IPFV well
'If he took this syrup, he would get better.'

An ipχe afto to siropi, θa eγine kala
if take.PST.PFV this syrup, FUT become.PST.PFV well
"If he took this syrup, he must be better."

This imperfective marking has been argued to be fake on the grounds that it is compatible with completive adverbials such as "in one month":[26]

An eχtizes to spiti (mesa) se ena mina, θa prolavenes na to pulisis prin to kalokeri
if build.IPFV the house in one month, FUT have-time-enough.IPFV to it sell before the summer
"If you built this house in a month, you would be able to sell it before the summer."

In ordinary non-conditional sentences, such adverbials are compatible with perfective aspect but not with imperfective aspect:[26]

Eχtise afto to spiti (mesa) se ena mina
build.PFV this house in one month
"She built this house in one month."

*Eχtize afto to spiti (mesa) se ena mina
build.IPFV this house in one month
"She was building this house in one month."

People engage in counterfactual thinking frequently.
Experimental evidence indicates that people's thoughts about counterfactual conditionals differ in important ways from their thoughts about indicative conditionals. Participants in experiments were asked to read sentences, including counterfactual conditionals, e.g., "If Mark had left home early, he would have caught the train". Afterwards, they were asked to identify which sentences they had been shown. They often mistakenly believed they had been shown sentences corresponding to the presupposed facts, e.g., "Mark did not leave home early" and "Mark did not catch the train".[38] In other experiments, participants were asked to read short stories that contained counterfactual conditionals, e.g., "If there had been roses in the flower shop then there would have been lilies". Later in the story, they read sentences corresponding to the presupposed facts, e.g., "there were no roses and there were no lilies". The counterfactual conditional primed them to read the sentence corresponding to the presupposed facts very rapidly; no such priming effect occurred for indicative conditionals.[39] They spent different amounts of time 'updating' a story that contains a counterfactual conditional compared to one that contains factual information,[40] and focused on different parts of counterfactual conditionals.[41]

Experiments have compared the inferences people make from counterfactual conditionals and indicative conditionals. Given a counterfactual conditional, e.g., "If there had been a circle on the blackboard then there would have been a triangle", and the subsequent information "in fact there was no triangle", participants make the modus tollens inference "there was no circle" more often than they do from an indicative conditional.[42] Given the counterfactual conditional and the subsequent information "in fact there was a circle", participants make the modus ponens inference as often as they do from an indicative conditional.
Byrne argues that people construct mental representations that encompass two possibilities when they understand, and reason from, a counterfactual conditional, e.g., "if Oswald had not shot Kennedy, then someone else would have". They envisage the conjecture "Oswald did not shoot Kennedy and someone else did" and they also think about the presupposed facts "Oswald did shoot Kennedy and someone else did not".[43] According to the mental model theory of reasoning, they construct mental models of the alternative possibilities.[44]
https://en.wikipedia.org/wiki/Counterfactuals
A false dilemma, also referred to as false dichotomy or false binary, is an informal fallacy based on a premise that erroneously limits what options are available. The source of the fallacy lies not in an invalid form of inference but in a false premise. This premise has the form of a disjunctive claim: it asserts that one among a number of alternatives must be true. This disjunction is problematic because it oversimplifies the choice by excluding viable alternatives, presenting the viewer with only two absolute choices when, in fact, there could be many. False dilemmas often have the form of treating two contraries, which may both be false, as contradictories, of which one is necessarily true. Various inferential schemes are associated with false dilemmas, for example, the constructive dilemma, the destructive dilemma or the disjunctive syllogism. False dilemmas are usually discussed in terms of deductive arguments, but they can also occur as defeasible arguments.

The human liability to commit false dilemmas may be due to the tendency to simplify reality by ordering it through either-or statements, which is to some extent already built into human language. This may also be connected to the tendency to insist on clear distinctions while denying the vagueness of many common expressions.

A false dilemma is an informal fallacy based on a premise that erroneously limits what options are available.[1][2][3] In its most simple form, called the fallacy of bifurcation, all but two alternatives are excluded. A fallacy is an argument, i.e. a series of premises together with a conclusion, that is unsound, i.e. not both valid and true. Fallacies are usually divided into formal and informal fallacies. Formal fallacies are unsound because of their structure, while informal fallacies are unsound because of their content.[3][4][1][5] The problematic content in the case of the false dilemma has the form of a disjunctive claim: it asserts that one among a number of alternatives must be true.
This disjunction is problematic because it oversimplifies the choice by excluding viable alternatives.[1] Sometimes a distinction is made between a false dilemma and a false dichotomy. On this view, the term "false dichotomy" refers to the false disjunctive claim, while the term "false dilemma" refers not just to this claim but to the argument based on this claim.[1]

In its most common form, a false dilemma presents the alternatives as contradictories, while in truth they are merely contraries.[5][6] Two propositions are contradictories if it has to be the case that one is true and the other is false. Two propositions are contraries if at most one of them can be true, but this leaves open the option that both of them might be false, which is not possible in the case of contradictories.[5] Contradictories follow the law of the excluded middle but contraries do not.[6] For example, the sentence "the exact number of marbles in the urn is either 10 or not 10" presents two contradictory alternatives. The sentence "the exact number of marbles in the urn is either 10 or 11" presents two contrary alternatives: the urn could also contain 2 marbles or 17 marbles.

A common form of using contraries in false dilemmas is to force a choice between extremes on the agent: someone is either good or bad, rich or poor, normal or abnormal. Such cases ignore that there is a continuous spectrum between the extremes that is excluded from the choice.[5] While false dilemmas involving contraries, i.e. exclusive options, are a very common form, this is just a special case: there are also arguments with non-exclusive disjunctions that are false dilemmas.[1] For example, a choice between security and freedom does not involve contraries, since these two terms are compatible with each other.[5]

In logic, there are two main types of inferences known as dilemmas: the constructive dilemma and the destructive dilemma.
In their most simple form, they can be expressed in the following way:[7][6][1]

Simple constructive dilemma: P → Q; R → Q; P ∨ R; therefore Q.
Simple destructive dilemma: P → Q; P → R; ¬Q ∨ ¬R; therefore ¬P.

The source of the fallacy is found in the disjunctive claim in the third premise, i.e. P ∨ R and ¬Q ∨ ¬R respectively. The following is an example of a false dilemma with the simple constructive form: (1) "If you tell the truth, you force your friend into a social tragedy; and therefore, are an immoral person". (2) "If you lie, you are an immoral person (since it is immoral to lie)". (3) "Either you tell the truth, or you lie". Therefore "[y]ou are an immoral person (whatever choice you make in the given situation)".[1] This example constitutes a false dilemma because there are other choices besides telling the truth and lying, like keeping silent.

A false dilemma can also occur in the form of a disjunctive syllogism: P ∨ Q; ¬P; therefore Q.[6] In this form, the first premise (P ∨ Q) is responsible for the fallacious inference. Lewis's trilemma is a famous example of this type of argument involving three disjuncts: "Jesus was either a liar, a lunatic, or Lord".[3] By denying that Jesus was a liar or a lunatic, one is forced to draw the conclusion that he was God. But this leaves out various other alternatives, for example, that Jesus was a prophet.[3]

False dilemmas are usually discussed in terms of deductive arguments. But they can also occur as defeasible arguments.[1] A valid argument is deductive if the truth of its premises ensures the truth of its conclusion. For a valid defeasible argument, on the other hand, it is possible for all its premises to be true and the conclusion to be false.
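The point that the fallacy lives in the disjunctive premise rather than the inference form can be checked by brute force: the simple constructive dilemma is formally valid over every truth-value assignment. A small truth-table sketch (illustrative only):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication."""
    return (not a) or b

# Check validity of the simple constructive dilemma:
# premises P -> Q, R -> Q, P v R; conclusion Q.
# Valid iff the conclusion holds in every row where all premises hold.
valid = all(
    q
    for p, q, r in product([True, False], repeat=3)
    if implies(p, q) and implies(r, q) and (p or r)
)
assert valid
```

Since the form is valid, a false-dilemma argument of this shape can only go wrong through a false premise, typically the disjunction "P or R" that excludes further alternatives such as keeping silent.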
The premises merely offer a certain degree of support for the conclusion but do not ensure it.[8] In the case of a defeasible false dilemma, the support provided for the conclusion is overestimated, since various alternatives are not considered in the disjunctive premise.[1]

Part of understanding fallacies involves going beyond logic to empirical psychology in order to explain why there is a tendency to commit or fall for the fallacy in question.[9][1] In the case of the false dilemma, the tendency to simplify reality by ordering it through either-or statements may play an important role. This tendency is to some extent built into human language, which is full of pairs of opposites.[5] This type of simplification is sometimes necessary to make decisions when there is not enough time to get a more detailed perspective.

In order to avoid false dilemmas, the agent should become aware of additional options besides the prearranged alternatives. Critical thinking and creativity may be necessary to see through the false dichotomy and to discover new alternatives.[1]

Some philosophers and scholars believe that "unless a distinction can be made rigorous and precise it isn't really a distinction".[10] An exception is analytic philosopher John Searle, who called it an incorrect assumption that produces false dichotomies.
Searle insists that "it is a condition of the adequacy of a precise theory of an indeterminate phenomenon that it should precisely characterize that phenomenon as indeterminate; and a distinction is no less a distinction for allowing for a family of related, marginal, diverging cases."[11] Similarly, when two options are presented, they often are, although not always, two extreme points on some spectrum of possibilities; this may lend credence to the larger argument by giving the impression that the options are mutually exclusive, even though they need not be.[12] Furthermore, the options in false dichotomies typically are presented as being collectively exhaustive, in which case the fallacy may be overcome, or at least weakened, by considering other possibilities, or perhaps by considering a whole spectrum of possibilities, as in fuzzy logic.[13] This issue arises from real dichotomies in nature; the most prevalent example is the occurrence of an event: it either happened or it did not. This ontology sets a logical construct that cannot be reasonably applied to epistemology.

The presentation of a false choice often reflects a deliberate attempt to eliminate several options that may occupy the middle ground on an issue. A common argument against noise pollution laws involves a false choice. It might be argued that in New York City noise should not be regulated, because if it were, a number of businesses would be required to close. This argument assumes that, for example, a bar must be shut down to prevent disturbing levels of noise emanating from it after midnight. It ignores the fact that the law could require the bar to lower its noise levels, or to install soundproofing structural elements to keep the noise from excessively transmitting onto others' properties.[14]

In psychology, a phenomenon related to the false dilemma is "black-and-white thinking" or "thinking in black and white".
There are people who routinely engage in black-and-white thinking, an example of which is someone who categorizes other people as all good or all bad.[15]

Various terms are used to refer to false dilemmas. Some of the following terms are equivalent to the term false dilemma, some refer to special forms of false dilemmas, and others refer to closely related concepts.
https://en.wikipedia.org/wiki/False_dilemma
In propositional logic, import-export is a name given to the propositional form of exportation: ((P ∧ Q) → R) ↔ (P → (Q → R)). This already holds in minimal logic, and thus also in classical logic, where the conditional operator "→" is taken as material implication. In the Curry–Howard correspondence for intuitionistic logics, it can be realized through currying and uncurrying. Import-export expresses a deductive argument form. In natural language terms, the formula states that the following English sentences are logically equivalent:[1][2][3]

There are logics where it does not hold, and its status as a true principle of logic is a matter of debate. Controversy over the principle arises from the fact that any conditional operator that satisfies it will collapse to material implication when combined with certain other principles. This conclusion would be problematic given the paradoxes of material implication, which are commonly taken to show that natural language conditionals are not material implication.[2][3][4] This problematic conclusion can be avoided within the framework of dynamic semantics, whose expressive power allows one to define a non-material conditional operator which nonetheless satisfies import-export along with the other principles.[3][5] However, other approaches reject import-export as a general principle, motivated by cases such as the following, uttered in a context where it is most likely that the match will be lit by throwing it into a campfire, but where it is possible that it could be lit by striking it. In this context, the first sentence is intuitively true but the second is intuitively false.[5][6][7]
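Under the Curry–Howard reading above, the two directions of import-export correspond to uncurrying and currying a function. A minimal Python sketch (the helper names are illustrative, not from the source):

```python
def curry(f):
    """Exportation direction: a proof of (P ∧ Q) → R yields one of P → (Q → R)."""
    return lambda p: lambda q: f(p, q)

def uncurry(g):
    """Importation direction: a proof of P → (Q → R) yields one of (P ∧ Q) → R."""
    return lambda p, q: g(p)(q)

# A 'proof' of (int ∧ int) → int, carried back and forth between the two forms.
add_pair = lambda p, q: p + q
add_curried = curry(add_pair)
print(add_curried(2)(3))           # 5
print(uncurry(add_curried)(2, 3))  # 5
```

The round trip between the two forms loses nothing, mirroring the claimed logical equivalence of the two conditionals.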
https://en.wikipedia.org/wiki/Import-Export_(logic)
In propositional logic, modus ponens (/ˈmoʊdəs ˈpoʊnɛnz/; MP), also known as modus ponendo ponens (from Latin, 'mode that by affirming affirms'),[1] implication elimination, or affirming the antecedent,[2] is a deductive argument form and rule of inference.[3] It can be summarized as "P implies Q. P is true. Therefore, Q must also be true."

Modus ponens is a mixed hypothetical syllogism and is closely related to another valid form of argument, modus tollens. Both have apparently similar but invalid forms: affirming the consequent and denying the antecedent. Constructive dilemma is the disjunctive version of modus ponens.

The history of modus ponens goes back to antiquity.[4] The first to explicitly describe the argument form modus ponens was Theophrastus.[5] It, along with modus tollens, is one of the standard patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal.

The form of a modus ponens argument is a mixed hypothetical syllogism, with two premises and a conclusion: (1) if P, then Q; (2) P; (3) therefore, Q. The first premise is a conditional ("if–then") claim, namely that P implies Q. The second premise is an assertion that P, the antecedent of the conditional claim, is the case. From these two premises it can be logically concluded that Q, the consequent of the conditional claim, must be the case as well.

An example of an argument that fits the form modus ponens: (1) if today is Tuesday, then John will go to work; (2) today is Tuesday; (3) therefore, John will go to work. This argument is valid, but this has no bearing on whether any of the statements in the argument are actually true; for modus ponens to be a sound argument, the premises must actually be true. An argument can be valid but nonetheless unsound if one or more premises are false; if an argument is valid and all the premises are true, then the argument is sound. For example, John might be going to work on Wednesday. In this case, the reasoning for John's going to work (because it is Wednesday) is unsound. The argument is only sound on Tuesdays (when John goes to work), but valid on every day of the week.
A propositional argument using modus ponens is said to be deductive. In single-conclusion sequent calculi, modus ponens is the Cut rule. The cut-elimination theorem for a calculus says that every proof involving Cut can be transformed (generally, by a constructive method) into a proof without Cut, and hence that Cut is admissible. The Curry–Howard correspondence between proofs and programs relates modus ponens to function application: if f is a function of type P → Q and x is of type P, then f x is of type Q. In artificial intelligence, modus ponens is often called forward chaining.

The modus ponens rule may be written in sequent notation as

P → Q, P ⊢ Q

where P, Q and P → Q are statements (or propositions) in a formal language and ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P and P → Q in some logical system.

In classical two-valued logic, modus ponens is encoded in the truth table of the material conditional (implication) operator. A truth table lists all possible combinations of the truth values of the arguments, in this case p and q, one case per row:

p | q | p → q
T | T | T
T | F | F
F | T | T
F | F | T

Modus ponens is the case where both p → q and p may be assumed (denoted as true). Encoding modus ponens faithfully, q may also be assumed and therefore is also denoted as true. The truth table of implication also expresses other common inference rules, such as modus tollens on the fourth row, assuming p → q and not q, therefore not p, and the monotonicity of entailment on the first and third rows, assuming q and p → q, expressing how p may or may not be assumed.
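The row-by-row reading described above can be reproduced mechanically. The following sketch (illustrative, not from the source) builds the material conditional's truth table and checks the modus ponens and modus tollens patterns against it:

```python
# The material conditional's truth table, with the rows encoding
# modus ponens and modus tollens picked out mechanically.
rows = [(p, q) for p in (True, False) for q in (True, False)]
cond = {(p, q): (not p) or q for p, q in rows}  # material p -> q

# Modus ponens: in every row where both p -> q and p hold, q holds.
mp = all(q for (p, q) in rows if cond[(p, q)] and p)

# Modus tollens: in every row where p -> q holds and q fails, p fails.
mt = all(not p for (p, q) in rows if cond[(p, q)] and not q)

print(mp, mt)  # True True
```

Only the first row satisfies both modus ponens premises, and there q is true; only the fourth row satisfies both modus tollens premises, and there p is false.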
While modus ponens is one of the most commonly used argument forms in logic, it must not be mistaken for a logical law; rather, it is one of the accepted mechanisms for the construction of deductive proofs that includes the "rule of definition" and the "rule of substitution".[6] Modus ponens allows one to eliminate a conditional statement from a logical proof or argument (the antecedents) and thereby not carry these antecedents forward in an ever-lengthening string of symbols; for this reason modus ponens is sometimes called the rule of detachment[7] or the law of detachment.[8] Enderton, for example, observes that "modus ponens can produce shorter formulas from longer ones",[9] and Russell observes that "the process of the inference cannot be reduced to symbols. Its sole record is the occurrence of ⊦q [the consequent] ... an inference is the dropping of a true premise; it is the dissolution of an implication".[10]

A justification for the "trust in inference is the belief that if the two former assertions [the antecedents] are not in error, the final assertion [the consequent] is not in error".[10] In other words: if one statement or proposition implies a second one, and the first statement or proposition is true, then the second one is also true. If P implies Q and P is true, then Q is true.[11]

In mathematical logic, algebraic semantics treats every sentence as a name for an element in an ordered set. Typically, the set can be visualized as a lattice-like structure with a single element (the "always-true") at the top and another single element (the "always-false") at the bottom. Logical equivalence becomes identity, so that when ¬(P ∧ Q) and ¬P ∨ ¬Q, for instance, are equivalent (as is standard), then ¬(P ∧ Q) = ¬P ∨ ¬Q.
Logical implication becomes a matter of relative position: P logically implies Q just in case P ≤ Q, i.e., when either P = Q or else P lies below Q and is connected to it by an upward path. In this context, to say that P and P → Q together imply Q (that is, to affirm modus ponens as valid) is to say that the highest point which lies below both P and P → Q lies below Q, i.e., that P ∧ (P → Q) ≤ Q.[a] In the semantics for basic propositional logic, the algebra is Boolean, with → construed as the material conditional: P → Q = ¬P ∨ Q. Confirming that P ∧ (P → Q) ≤ Q is then straightforward, because P ∧ (P → Q) = P ∧ Q and P ∧ Q ≤ Q. With other treatments of →, the semantics becomes more complex, the algebra may be non-Boolean, and the validity of modus ponens cannot be taken for granted.

If Pr(P → Q) = x and Pr(P) = y, then Pr(Q) must lie in the interval [x + y − 1, x].[b][12] For the special case x = y = 1, Pr(Q) must equal 1.
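The probability bound can be spot-checked numerically by sampling joint distributions over the four truth-value combinations of P and Q, reading P → Q as the material conditional (a sketch under that assumption, not from the source):

```python
import random

# Spot-check: for any joint distribution over the four P/Q worlds,
# Pr(Q) lies in [x + y - 1, x] where x = Pr(P -> Q) and y = Pr(P).
random.seed(0)
for _ in range(10_000):
    # Random joint distribution over (P∧Q, P∧¬Q, ¬P∧Q, ¬P∧¬Q).
    w = [random.random() for _ in range(4)]
    s = sum(w)
    pq, pnq, npq, npnq = (v / s for v in w)

    x = pq + npq + npnq   # Pr(P -> Q) = Pr(¬P ∨ Q)
    y = pq + pnq          # Pr(P)
    pr_q = pq + npq       # Pr(Q)

    # The claimed interval, with floating-point slack.
    assert x + y - 1 <= pr_q + 1e-9 and pr_q <= x + 1e-9

ok = True
print("bound held on all samples")
```

In fact the lower bound is exact here: x + y − 1 simplifies to Pr(P ∧ Q), which can never exceed Pr(Q), and Pr(Q) can never exceed Pr(¬P ∨ Q) = x.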
Modus ponens represents an instance of the binomial deduction operator in subjective logic, expressed as:

ω_{Q‖P}^A = (ω_{Q|P}^A, ω_{Q|¬P}^A) ⊚ ω_P^A,

where ω_P^A denotes the subjective opinion about P as expressed by source A, and the conditional opinion ω_{Q|P}^A generalizes the logical implication P → Q. The deduced marginal opinion about Q is denoted by ω_{Q‖P}^A. The case where ω_P^A is an absolute TRUE opinion about P is equivalent to source A saying that P is TRUE, and the case where ω_P^A is an absolute FALSE opinion about P is equivalent to source A saying that P is FALSE. The deduction operator ⊚ of subjective logic produces an absolute TRUE deduced opinion ω_{Q‖P}^A when the conditional opinion ω_{Q|P}^A is absolute TRUE and the antecedent opinion ω_P^A is absolute TRUE. Hence, subjective logic deduction represents a generalization of both modus ponens and the law of total probability.[13]

Philosophers and linguists have identified a variety of cases where modus ponens appears to fail. Vann McGee, for instance, argued that modus ponens can fail for conditionals whose consequents are themselves conditionals.[14] The following is an example: (1) either Shakespeare or Hobbes wrote Hamlet; (2) if either Shakespeare or Hobbes wrote Hamlet, then if Shakespeare did not write it, Hobbes did; (3) therefore, if Shakespeare did not write Hamlet, Hobbes did. Since Shakespeare did write Hamlet, the first premise is true. The second premise is also true, since starting with a set of possible authors limited to just Shakespeare and Hobbes and eliminating one of them leaves only the other.
However, the conclusion is doubtful, since ruling out Shakespeare as the author of Hamlet would leave numerous possible candidates, many of them more plausible alternatives than Hobbes. (If the if-thens in the inference are read as material conditionals, the conclusion comes out true simply by virtue of the false antecedent. This is one of the paradoxes of material implication.) The general form of McGee-type counterexamples to modus ponens is simply P, P → (Q → R), therefore Q → R; it is not essential that P be a disjunction, as in the example given. That these kinds of cases constitute failures of modus ponens remains a controversial view among logicians, but opinions vary on how the cases should be disposed of.[15][16][17]

In deontic logic, some examples of conditional obligation also raise the possibility of modus ponens failure. These are cases where the conditional premise describes an obligation predicated on an immoral or imprudent action, e.g., "If Doe murders his mother, he ought to do so gently," for which the dubious unconditional conclusion would be "Doe ought to gently murder his mother."[18] It would appear to follow that if Doe is in fact gently murdering his mother, then by modus ponens he is doing exactly what he should, unconditionally, be doing. Here again, modus ponens' failure is not a popular diagnosis but is sometimes argued for.[19]

The fallacy of affirming the consequent is a common misinterpretation of modus ponens.[20]
https://en.wikipedia.org/wiki/Modus_ponens
"The Moon is made of green cheese" is a statement referring to a fanciful belief that the Moon is composed of cheese. In its original formulation as a proverb and metaphor for credulity with roots in fable, this refers to the perception of a simpleton who sees a reflection of the Moon in water and mistakes it for a round cheese wheel. It is widespread as a folkloric motif among many of the world's cultures, and the notion has also found its way into children's folklore and modern popular culture.

The phrase "green cheese" in the common version of this proverb (sometimes "cream cheese" is used)[1] may refer to a young, unripe cheese[2][3][4][5] or to cheese with a greenish tint.[6]

There was never an actual historical popular belief that the Moon is made of green cheese (cf. Flat Earth and the myth of the flat Earth).[A] It was typically used as an example of extreme credulity, a meaning that was clear and commonly understood as early as 1638.[9]

There exists a family of stories in comparative mythology in diverse countries that concern a simpleton who sees a reflection of the Moon and mistakes it for a round cheese: ... the Servian tale where the fox leads the wolf to believe the moon reflection in the water is a cheese and the wolf bursts in the attempt to drink up the water to get at the cheese; the Zulu tale of the hyena that drops the bone to go after the moon reflection in the water; the Gascon tale of the peasant watering his ass on a moonlight night. A cloud obscures the moon, and the peasant, thinking the ass has drunk the moon, kills the beast to recover the moon; the Turkish tale of the Khoja Nasru-'d-Din who thinks the moon has fallen into the well and gets a rope and chain with which to pull it out.
In his efforts the rope breaks, and he falls back, but seeing the moon in the sky, praises Allah that the moon is safe; the Scottish tale of the wolf fishing with his tail for the moon reflection.

This folkloric motif is first recorded in literature during the High Middle Ages by the French rabbi Rashi with a Rabbinic parable in his commentary, weaving together three Biblical quotations given in the main text (including one on "sour grapes") into a reconstruction of some of the Talmudic Rabbi Meir's supposed three hundred fox fables in the tractate Sanhedrin:[11]

A fox once craftily induced a wolf to go and join the Jews in their Sabbath preparations and share in their festivities. On his appearing in their midst the Jews fell upon him with sticks and beat him. He therefore came back determined to kill the fox. But the latter pleaded: 'It is no fault of mine that you were beaten, but they have a grudge against your father who once helped them in preparing their banquet and then consumed all the choice bits.' 'And was I beaten for the wrong done by my father?' cried the indignant wolf. 'Yes,' replied the fox, 'the fathers have eaten sour grapes and the children's teeth are set on edge. However,' he continued, 'come with me and I will supply you with abundant food.' He led him to a well which had a beam across it, from either end of which hung a rope with a bucket attached. The fox entered the upper bucket and descended into the well whilst the lower one was drawn up. 'Where are you going?' asked the wolf. The fox, pointing to the cheese-like reflection of the moon, replied: 'Here is plenty of meat and cheese; get into the other bucket and come down at once.' The wolf did so, and as he descended, the fox was drawn up. 'And how am I to get out?' demanded the wolf. 'Ah,' said the fox, 'the righteous is delivered out of trouble and the wicked cometh in his stead. Is it not written, Just balances, just weights?'
Rashi as the first literary reference may reflect the well-known beast fable tradition of French folklore, or a more obscure such tradition in Jewish folklore as it appears in Mishlè Shu'alim. The near-contemporary Iraqi rabbi Hai Gaon also reconstructed this Rabbi Meir tale, sharing some elements of Rashi's story, but with a lion caught in a trapping pit rather than a wolf in a well. However, Rashi may have actively "adapted contemporary [French] folklore to the [T]almudic passage", as was homiletically practiced in different Jewish communities.[13] Though the tale itself is probably of non-Jewish European origin, Rashi's form and elements are likely closer to the original in oral folklore than the somewhat later variation recorded featuring Reynard. Rashi's version already includes the fox, the wolf, the well and the Moon that are seen in later versions. Petrus Alphonsi, a Spanish Jewish convert to Christianity, popularized this tale in Europe in his collection Disciplina Clericalis.[10]

The variation featuring Reynard the Fox appeared soon after Petrus Alphonsi in the French classic Le Roman de Renart (as "Renart et Ysengrin dans le puits" in Branch IV); the Moon/cheese element is absent (it is replaced by a promise of Paradise at the bottom of the well), but such a version is alluded to in another part of the collection. This was the first Reynard tale to be adapted into English (as the Middle English "þe Vox and þe Wolf"), preceding Chaucer's "The Nun's Priest's Tale" and the much later work of William Caxton.[10] Later still, the Middle Scots The Fox, the Wolf and the Husbandman does include the Moon/cheese element. La Fontaine includes the story in the French classic compilation Fables ("Le Loup et le Renard" in Book XI). The German tale of The Wolf and the Fox in Grimm replaces the well with a well-stocked cellar, where a newly satiated wolf is trapped and subject to the farmer's revenge, being now too overstuffed to escape through the exit.
One of the facets of this morphology is grouped as "The Wolf Dives into the Water for Reflected Cheese" (Type 34) of the Aarne–Thompson classification of folktales, where the Moon's reflection is mistaken for cheese, in the section devoted to tales of The Clever Fox. It can also be grouped as "The Moon in the Well" (Type 1335A), in the section devoted to Stories about a Fool, referring to stories where the simpleton believes the Moon itself is a tangible object in the water.

"The Moon is made of green cheese" was one of the most popular proverbs in 16th- and 17th-century English literature,[14] and it was also in use after this time. It likely originated in this formulation in 1546, when The Proverbs of John Heywood claimed "the moon is made of a greene cheese."[B] A common variation at that time was "to make one believe the Moon is made of green cheese" (i.e., to hoax), as seen in John Wilkins' book The Discovery of a World in the Moone.[16]

In French, there is the proverb "Il veut prendre la lune avec les dents" ("He wants to take the moon with his teeth"), alluded to in Rabelais.[17]

The characterization is also common in stories of gothamites, including the Moonrakers of Wiltshire, who were said to have taken advantage of this trope, and the assumption of their own naivete, to hide their smuggling activities from government officials.[citation needed]

A 1902 survey of childlore by psychologist G. Stanley Hall in the United States found that though most young children were unsure of the Moon's composition, that it was made of cheese was the single most common explanation: Careful inquiry and reminiscence concerning the substance of the moon show that eighteen children [of 423], averaging five years, thought it made of cheese.
Sometimes the mice eat it horseshoe-shaped, or that it could be fed by throwing cheese up so clouds could catch it; or it was green because the man in the moon fed on green grass; its spots were mould; it was really green but looked yellow, because wrapped in yellow cheese cloth; it was cheese mixed with wax or with melted lava, which might be edible; there were many rats, mice and skippers there; it grew big from a starry speck of light by eating cheese.[18]

Before that time, and since, the idea of the Moon actually being made of cheese has appeared as a humorous conceit in much of children's popular culture with astronomical themes (cf. the Man in the Moon), and in adult references to it. At the Science Writers' conference, theoretical physicist Sean M. Carroll explained why there was no need to "sample the moon to know it's not made of cheese." He said the hypothesis is "absurd", failing against our knowledge of the universe, and, "This is not a proof, there is no metaphysical proof, like you can proof a statement in logic or math that the moon is not made of green cheese. But science nevertheless passes judgments on claims based on how well they fit in with the rest of our theoretical understanding."[19][C] Notwithstanding this incontrovertible argument, the harmonic signature of Moon rock — the seismic velocity at which shockwaves travel — is said to be closer to green cheese than to any rock on Earth.[20]

Dennis Lindley used the myth to help explain the necessity of Cromwell's rule in Bayesian probability: "In other words, if a decision-maker thinks something cannot be true and interprets this to mean it has zero probability, he will never be influenced by any data, which is surely absurd.
So leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved."[21]

In the 1989 film A Grand Day Out, the plot hinges on Wallace and Gromit going to the Moon to gather cheese due to a lack of it at home on a bank holiday.
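Lindley's point can be sketched with a one-step Bayes update: a prior of exactly zero is unmoved by any evidence, while even a minuscule nonzero prior responds to data. The likelihood numbers below are purely illustrative assumptions:

```python
def bayes_update(prior, like_true, like_false):
    """Posterior P(H|E) given prior P(H) and likelihoods P(E|H), P(E|not H)."""
    num = prior * like_true
    den = num + (1 - prior) * like_false
    return num / den if den else 0.0

# H: the Moon is made of green cheese.  E: returned samples test as cheese.
# (Hypothetical likelihoods, chosen only to illustrate Cromwell's rule.)
like_true, like_false = 0.99, 1e-9

print(bayes_update(0.0, like_true, like_false))   # 0.0: a zero prior never moves
print(bayes_update(1e-6, like_true, like_false))  # ~0.999: a tiny prior is overwhelmed
```

Whatever the evidence, the zero-prior decision-maker stays at zero, which is exactly the absurdity Lindley warns against.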
https://en.wikipedia.org/wiki/The_Moon_is_made_of_green_cheese
Relevance logic, also called relevant logic, is a kind of non-classical logic requiring the antecedent and consequent of implications to be relevantly related. They may be viewed as a family of substructural or modal logics. It is generally, but not universally, called relevant logic by British and, especially, Australian logicians, and relevance logic by American logicians.

Relevance logic aims to capture aspects of implication that are ignored by the "material implication" operator in classical truth-functional logic, namely the notion of relevance between the antecedent and consequent of a true implication. This idea is not new: C. I. Lewis was led to invent modal logic, and specifically strict implication, on the grounds that classical logic grants paradoxes of material implication, such as the principle that a falsehood implies any proposition.[1][2] Hence "if I'm a donkey, then two and two is four" is true when translated as a material implication, yet it seems intuitively false, since a true implication must tie the antecedent and consequent together by some notion of relevance. And whether or not the speaker is a donkey seems in no way relevant to whether two and two is four.

In terms of a syntactical constraint for a propositional calculus, it is necessary, but not sufficient, that premises and conclusion share atomic formulae (formulae that do not contain any logical connectives). In a predicate calculus, relevance requires sharing of variables and constants between premises and conclusion. This can be ensured (along with stronger conditions) by, e.g., placing certain restrictions on the rules of a natural deduction system.
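The necessary-but-not-sufficient sharing condition on atomic formulae can be sketched as a quick syntactic check. The formula encoding (capital letters as atoms in plain strings) and helper names are illustrative assumptions, not from the source:

```python
import re

def atoms(formula):
    """Atomic propositional letters occurring in a formula string."""
    return set(re.findall(r"[A-Z]", formula))

def shares_atom(premises, conclusion):
    """Necessary (not sufficient) relevance condition: some atom is shared."""
    premise_atoms = set().union(*(atoms(p) for p in premises))
    return bool(premise_atoms & atoms(conclusion))

# 'A falsehood implies any proposition': P ∧ ¬P ⊢ Q fails the sharing test,
# flagging this paradox of material implication as irrelevant.
print(shares_atom(["P & ~P"], "Q"))       # False
print(shares_atom(["P -> Q", "P"], "Q"))  # True: modus ponens passes the test
```

Passing the test does not make an inference relevantly valid; failing it is enough to rule the inference out.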
In particular, a Fitch-style natural deduction can be adapted to accommodate relevance by introducing tags at the end of each line of an application of an inference indicating the premises relevant to the conclusion of the inference. Gentzen-style sequent calculi can be modified by removing the weakening rules that allow for the introduction of arbitrary formulae on the right or left side of the sequents.

A notable feature of relevance logics is that they are paraconsistent logics: the existence of a contradiction will not necessarily cause an "explosion." This follows from the fact that a conditional with a contradictory antecedent that does not share any propositional or predicate letters with the consequent cannot be true (or derivable).

Relevance logic was proposed in 1928 by Soviet philosopher Ivan E. Orlov (1886 – circa 1936) in his strictly mathematical paper "The Logic of Compatibility of Propositions", published in Matematicheskii Sbornik. The basic idea of relevant implication appears in medieval logic, and some pioneering work was done by Ackermann,[3] Moh,[4] and Church[5] in the 1950s. Drawing on them, Nuel Belnap and Alan Ross Anderson (with others) wrote the magnum opus of the subject, Entailment: The Logic of Relevance and Necessity, in the 1970s (the second volume being published in the nineties). They focused on both systems of entailment and systems of relevance, where implications of the former kind are supposed to be both relevant and necessary.

The early developments in relevance logic focused on the stronger systems. The development of the Routley–Meyer semantics brought out a range of weaker logics. The weakest of these logics is the relevance logic B. It is axiomatized with the following axioms and rules. The rules are the following. Stronger logics can be obtained by adding any of the following axioms. There are some notable logics stronger than B that can be obtained by adding axioms to B as follows.
The standard model theory for relevance logics is the Routley–Meyer ternary-relational semantics developed by Richard Routley and Robert Meyer. A Routley–Meyer frame F for a propositional language is a quadruple (W, R, *, 0), where W is a non-empty set, R is a ternary relation on W, * is a function from W to W, and 0 ∈ W. A Routley–Meyer model M is a Routley–Meyer frame F together with a valuation, ⊩, that assigns a truth value to each atomic proposition relative to each point a ∈ W. There are some conditions placed on Routley–Meyer frames. Define a ≤ b as R0ab. Write M, a ⊩ A and M, a ⊮ A to indicate that the formula A is true, or not true, respectively, at point a in M. One final condition on Routley–Meyer models is the hereditariness condition: for atomic propositions p, if M, a ⊩ p and a ≤ b, then M, b ⊩ p. By an inductive argument, hereditariness can be shown to extend to complex formulas, using the truth conditions below. The truth conditions for complex formulas are as follows: M, a ⊩ A ∧ B iff M, a ⊩ A and M, a ⊩ B; M, a ⊩ A ∨ B iff M, a ⊩ A or M, a ⊩ B; M, a ⊩ ¬A iff M, a* ⊮ A; and M, a ⊩ A → B iff for all b, c ∈ W such that Rabc, if M, b ⊩ A then M, c ⊩ B.

A formula A holds in a model M just in case M, 0 ⊩ A. A formula A holds on a frame F iff A holds in every model (F, ⊩). A formula A is valid in a class of frames iff A holds on every frame in that class. The class of all Routley–Meyer frames satisfying the above conditions validates the relevance logic B. One can obtain Routley–Meyer frames for other relevance logics by placing appropriate restrictions on R and on *. These conditions are easier to state using some standard definitions. Let Rabcd be defined as ∃x(Rabx ∧ Rxcd), and let Ra(bc)d be defined as ∃x(Rbcx ∧ Raxd).
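A toy model checker can illustrate the semantics. The truth clauses below follow the standard Routley–Meyer presentation (conjunction and disjunction pointwise, ¬A true at a iff A fails at a*, and A → B true at a iff for all b, c with Rabc, b ⊩ A implies c ⊩ B); the one-point frame is a degenerate illustration that collapses to classical logic, and all names are my own:

```python
# A minimal Routley-Meyer model checker (sketch, not from the source).
W = {0}            # one-point frame: degenerate, collapses to classical logic
R = {(0, 0, 0)}    # ternary accessibility relation
star = {0: 0}      # the Routley star

def holds(point, formula, val):
    """Evaluate a formula (nested tuples) at a point under valuation val."""
    op = formula[0]
    if op == "atom":
        return val[(point, formula[1])]
    if op == "not":       # a ⊩ ¬A  iff  a* ⊮ A
        return not holds(star[point], formula[1], val)
    if op == "and":
        return holds(point, formula[1], val) and holds(point, formula[2], val)
    if op == "or":
        return holds(point, formula[1], val) or holds(point, formula[2], val)
    if op == "imp":       # a ⊩ A → B  iff  for all Rabc: b ⊩ A implies c ⊩ B
        return all(not holds(b, formula[1], val) or holds(c, formula[2], val)
                   for (a, b, c) in R if a == point)

P = ("atom", "P")
identity = ("imp", P, P)   # the formula A → A
# A → A holds at 0 under every valuation of P.
print(all(holds(0, identity, {(0, "P"): v}) for v in (True, False)))  # True
```

Larger frames, with nontrivial R and *, are what separate relevance logics from classical logic; this sketch only shows the machinery.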
Some of the frame conditions and the axioms they validate are the following. The last two conditions validate forms of weakening that relevance logics were originally developed to avoid. They are included to show the flexibility of the Routley–Meyer models.

Operational models for negation-free fragments of relevance logics were developed by Alasdair Urquhart in his PhD thesis and in subsequent work. The intuitive idea behind the operational models is that points in a model are pieces of information, and combining information supporting a conditional with the information supporting its antecedent yields some information that supports the consequent. Since the operational models do not generally interpret negation, this section will consider only languages with a conditional, conjunction, and disjunction.

An operational frame F is a triple (K, ·, 0), where K is a non-empty set, 0 ∈ K, and · is a binary operation on K. Frames have conditions, some of which may be dropped to model different logics. The conditions Urquhart proposed to model the conditional of the relevance logic R are the following. Under these conditions, the operational frame is a join-semilattice.

An operational model M is a frame F with a valuation V that maps pairs of points and atomic propositions to truth values, T or F. V can be extended to a valuation ⊩ on complex formulas as follows. A formula A holds in a model M iff M, 0 ⊩ A. A formula A is valid in a class of models C iff it holds in each model M ∈ C. The conditional fragment of R is sound and complete with respect to the class of semilattice models.
The logic with conjunction and disjunction is properly stronger than the conditional, conjunction, disjunction fragment of R. In particular, the formula (A → (B ∨ C)) ∧ (B → C) → (A → C) is valid for the operational models but is invalid in R. The logic generated by the operational models for R has a complete axiomatic proof system, due to Kit Fine and Gerald Charlwood. Charlwood also provided a natural deduction system for the logic, which he proved equivalent to the axiomatic system. Charlwood showed that his natural deduction system is equivalent to a system provided by Dag Prawitz.

The operational semantics can be adapted to model the conditional of E by adding a non-empty set of worlds W and an accessibility relation ≤ on W × W to the frames. The accessibility relation is required to be reflexive and transitive, to capture the idea that E's conditional has an S4 necessity. The valuations then map triples of atomic propositions, points, and worlds to truth values. The truth condition for the conditional is changed to the following.

The operational semantics can be adapted to model the conditional of T by adding a relation ≤ on K × K. The relation is required to obey the following conditions. The truth condition for the conditional is changed to the following.

There are two ways to model the contraction-less relevance logics TW and RW with the operational models. The first way is to drop the condition that x · x = x. The second way is to keep the semilattice conditions on frames and add a binary relation, J, of disjointness to the frame. For these models, the truth conditions for the conditional are changed to the following, with the addition of the ordering in the case of TW. Urquhart showed that the semilattice logic for R is properly stronger than the positive fragment of R.
Lloyd Humberstone provided an enrichment of the operational models that permits a different truth condition for disjunction. The resulting class of models generates exactly the positive fragment of R. Here an operational frame F is a quadruple (K, ·, +, 0), where K is a non-empty set, 0 ∈ K, and · and + are binary operations on K. Let a ≤ b be defined as ∃x(a + x = b). The frame conditions are the following.

An operational model M is a frame F with a valuation V that maps pairs of points and atomic propositions to truth values, T or F. V can be extended to a valuation ⊩ on complex formulas as follows. A formula A holds in a model M iff M, 0 ⊩ A. A formula A is valid in a class of models C iff it holds in each model M ∈ C. The positive fragment of R is sound and complete with respect to the class of these models. Humberstone's semantics can be adapted to model different logics by dropping or adding frame conditions as follows.

Some relevance logics can be given algebraic models, such as the logic R. The algebraic structures for R are de Morgan monoids, which are sextuples (D, ∧, ∨, ¬, ∘, e) where the following conditions hold. The operation x → y interpreting the conditional of R is defined as ¬(x ∘ ¬y). A de Morgan monoid is a residuated lattice, obeying the following residuation condition.
An interpretation v is a homomorphism from the propositional language to a de Morgan monoid M satisfying the conditions below. Given a de Morgan monoid M and an interpretation v, one says that a formula A holds on v just in case e ≤ v(A). A formula A is valid just in case it holds on all interpretations on all de Morgan monoids. The logic R is sound and complete for de Morgan monoids.
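The residuation condition mentioned above can be illustrated concretely. As a minimal (and degenerate) example, the two-element Boolean algebra can be read as a de Morgan monoid with fusion ∘ taken to be conjunction and e = 1; the sketch below brute-force checks the residuation law x ∘ y ≤ z iff x ≤ y → z, where y → z is defined as ¬(y ∘ ¬z):

```python
# Two-element Boolean algebra as a degenerate de Morgan monoid:
# fusion = conjunction, e = 1, order = the usual 0 <= 1.
D = (0, 1)

def NOT(x): return 1 - x
def fuse(x, y): return x & y                  # fusion (here: conjunction)
def arrow(y, z): return NOT(fuse(y, NOT(z)))  # y -> z defined as NOT(y ∘ NOT z)

# residuation: x ∘ y <= z  iff  x <= y -> z, for all elements
for x in D:
    for y in D:
        for z in D:
            assert (fuse(x, y) <= z) == (x <= arrow(y, z))
```

In this degenerate case y → z reduces to the familiar material conditional ¬y ∨ z; genuine models of R use richer monoids where fusion is not idempotent conjunction.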
https://en.wikipedia.org/wiki/Relevance_logic
In mathematics and logic, a vacuous truth is a conditional or universal statement (a universal statement that can be converted to a conditional statement) that is true because the antecedent cannot be satisfied.[1] It is sometimes said that a statement is vacuously true because it does not really say anything.[2] For example, the statement "all cell phones in the room are turned off" will be true when no cell phones are present in the room. In this case, the statement "all cell phones in the room are turned on" would also be vacuously true, as would the conjunction of the two: "all cell phones in the room are turned on and turned off", which would otherwise be incoherent and false.

More formally, a relatively well-defined usage refers to a conditional statement (or a universal conditional statement) with a false antecedent.[1][3][2][4] One example of such a statement is "if Tokyo is in Spain, then the Eiffel Tower is in Bolivia". Such statements are considered vacuous truths because the fact that the antecedent is false prevents using the statement to infer anything about the truth value of the consequent. In essence, a conditional statement based on the material conditional is true whenever the antecedent ("Tokyo is in Spain" in the example) is false, regardless of whether the consequent ("the Eiffel Tower is in Bolivia" in the example) is true or false, because the material conditional is defined in that way.

Examples common to everyday speech include conditional phrases used as idioms of improbability, like "when hell freezes over ..." and "when pigs can fly ...", indicating that not before the given (impossible) condition is met will the speaker accept some respective (typically false or absurd) proposition.
In pure mathematics, vacuously true statements are not generally of interest by themselves, but they frequently arise as the base case of proofs by mathematical induction.[5] This notion has relevance in pure mathematics, as well as in any other field that uses classical logic. Outside of mathematics, statements in the form of a vacuous truth, while logically valid, can nevertheless be misleading. Such statements make reasonable assertions about qualified objects which do not actually exist. For example, a child might truthfully tell their parent "I ate every vegetable on my plate" when there were no vegetables on the child's plate to begin with. In this case, the parent can believe that the child has actually eaten some vegetables, even though that is not true.

A statement S is "vacuously true" if it resembles a material conditional statement P ⇒ Q, where the antecedent P is known to be false.[1][3][2] Vacuously true statements that can be reduced (with suitable transformations) to this basic form (material conditional) include the following universally quantified statements:

Vacuous truths most commonly appear in classical logic with two truth values. However, vacuous truths can also appear in, for example, intuitionistic logic, in the same situations as given above. Indeed, if P is false, then P ⇒ Q will yield a vacuous truth in any logic that uses the material conditional; if P is a necessary falsehood, then it will also yield a vacuous truth under the strict conditional. Other non-classical logics, such as relevance logic, may attempt to avoid vacuous truths by using alternative conditionals (such as the case of the counterfactual conditional).

Many programming environments have a mechanism for querying whether every item in a collection of items satisfies some predicate. It is common for such a query to always evaluate as true for an empty collection.
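The empty-collection behavior can be seen directly in Python, whose built-in `all()` returns `True` on an empty iterable: a universal claim with no instances has no counterexample. The plate data below is illustrative.

```python
# Vacuous truth in practice: all() on an empty collection is True,
# while the existential any() on the same input is False.
plate = []  # no vegetables on the plate

every_vegetable_eaten     = all(veg == "eaten" for veg in plate)
every_vegetable_not_eaten = all(veg != "eaten" for veg in plate)
some_vegetable_eaten      = any(veg == "eaten" for veg in plate)

assert every_vegetable_eaten        # "I ate every vegetable" -- vacuously true
assert every_vegetable_not_eaten    # the contrary universal is also vacuously true
assert not some_vegetable_eaten     # but the existential claim is false
```

This mirrors the cell-phone example: with no cell phones in the room, "all are off" and "all are on" both come out true.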
For example: These examples, one from mathematics and one from natural language, illustrate the concept of vacuous truths:
https://en.wikipedia.org/wiki/Vacuous_truth
Exclusive or, exclusive disjunction, exclusive alternation, logical non-equivalence, or logical inequality is a logical operator whose negation is the logical biconditional. With two inputs, XOR is true if and only if the inputs differ (one is true, one is false). With multiple inputs, XOR is true if and only if the number of true inputs is odd.[1]

It gains the name "exclusive or" because the meaning of "or" is ambiguous when both operands are true; XOR excludes that case. Some informal ways of describing XOR are "one or the other but not both", "either one or the other", and "A or B, but not A and B". It is symbolized by the prefix operator J[2]: 16 and by the infix operators XOR (/ˌɛksˈɔːr/, /ˌɛksˈɔː/, /ˈɛksɔːr/ or /ˈɛksɔː/), EOR, EXOR, ∨̇, ∨̄, ∨̲, ⩛, ⊕, ↮, and ≢.

The truth table of A ↮ B shows that it outputs true whenever the inputs differ. Exclusive disjunction essentially means 'either one, but not both nor none'. In other words, the statement is true if and only if one is true and the other is false. For example, if two horses are racing, then one of the two will win the race, but not both of them. The exclusive disjunction p ↮ q, also denoted by p ? q or Jpq, can be expressed in terms of the logical conjunction ("logical and", ∧), the disjunction ("logical or", ∨), and the negation (¬) as (p ∧ ¬q) ∨ (¬p ∧ q). The exclusive disjunction p ↮ q can also be expressed as (p ∨ q) ∧ ¬(p ∧ q). This representation of XOR may be found useful when constructing a circuit or network, because it has only one ¬ operation and a small number of ∧ and ∨ operations.
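The equivalent forms of XOR just given can be checked by brute force over all Boolean inputs; the sketch below also confirms the addition-modulo-2 reading discussed later in the article.

```python
from itertools import product

# Brute-force check of common XOR identities over all truth-value pairs.
for p, q in product([False, True], repeat=2):
    xor = p != q                                      # p XOR q
    assert xor == ((p and not q) or (not p and q))    # sum-of-products form
    assert xor == ((p or q) and not (p and q))        # single-negation form
    assert xor == (not (p == q))                      # negated biconditional
    assert xor == ((p + q) % 2 == 1)                  # addition modulo 2
```

Since the loop covers the full truth table, passing all four assertions establishes each identity outright.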
A proof of this identity is given below. It is sometimes useful to write p ↮ q in other equivalent ways; these equivalences can be established by applying De Morgan's laws twice to the fourth line of the above proof. The exclusive or is also equivalent to the negation of a logical biconditional, by the rules of material implication (a material conditional is equivalent to the disjunction of the negation of its antecedent and its consequent) and material equivalence. In summary, we have equivalent formulations in mathematical and in engineering notation. By applying the spirit of De Morgan's laws, we get ¬(p ↮ q) ≡ ¬p ↮ q ≡ p ↮ ¬q.

Although the operators ∧ (conjunction) and ∨ (disjunction) are very useful in logic systems, they lack a more generalizable structure in the following way: the systems ({T, F}, ∧) and ({T, F}, ∨) are monoids, but neither is a group. This unfortunately prevents the combination of these two systems into larger structures, such as a mathematical ring. However, the system using exclusive or, ({T, F}, ⊕), is an abelian group. The combination of the operators ∧ and ⊕ over the elements {T, F} produces the well-known two-element field F₂. This field can represent any logic obtainable with the system (∧, ∨) and has the added benefit of the arsenal of algebraic analysis tools for fields.
More specifically, if one associates F with 0 and T with 1, one can interpret the logical "AND" operation as multiplication on F₂ and the "XOR" operation as addition on F₂. The description of a Boolean function as a polynomial in F₂, using this basis, is called the function's algebraic normal form.[3]

Disjunction is often understood exclusively in natural languages. In English, the disjunctive word "or" is often understood exclusively, particularly when used with the particle "either". The English example below would normally be understood in conversation as implying that Mary is not both a singer and a poet.[4][5] However, disjunction can also be understood inclusively, even in combination with "either". For instance, the first example below shows that "either" can be felicitously used in combination with an outright statement that both disjuncts are true. The second example shows that the exclusive inference vanishes away under downward entailing contexts. If disjunction were understood as exclusive in this example, it would leave open the possibility that some people ate both rice and beans.[4]

Examples such as the above have motivated analyses of the exclusivity inference as a pragmatic conversational implicature calculated on the basis of an inclusive semantics. Implicatures are typically cancellable and do not arise in downward entailing contexts if their calculation depends on the Maxim of Quantity. However, some researchers have treated exclusivity as a bona fide semantic entailment and proposed nonclassical logics which would validate it.[4]

This behavior of English "or" is also found in other languages. However, many languages have disjunctive constructions which are robustly exclusive, such as French soit ...
soit.[4]

The symbol used for exclusive disjunction varies from one field of application to the next, and even depends on the properties being emphasized in a given context of discussion. In addition to the abbreviation "XOR", any of the following symbols may also be seen. If using binary values for true (1) and false (0), then exclusive or works exactly like addition modulo 2. Exclusive disjunction is often used for bitwise operations. Examples:

As noted above, since exclusive disjunction is identical to addition modulo 2, the bitwise exclusive disjunction of two n-bit strings is identical to the standard vector addition in the vector space (Z/2Z)ⁿ. In computer science, exclusive disjunction has several uses. In logical circuits, a simple adder can be made with an XOR gate to add the numbers, and a series of AND, OR, and NOT gates to create the carry output. On some computer architectures, it is more efficient to store a zero in a register by XOR-ing the register with itself (bits XOR-ed with themselves are always zero) than to load and store the value zero.

In cryptography, XOR is sometimes used as a simple, self-inverse mixing function, such as in one-time pad or Feistel network systems.[citation needed] XOR is also heavily used in block ciphers such as AES (Rijndael) or Serpent and in block cipher modes of operation (CBC, CFB, OFB, or CTR).

In simple threshold-activated artificial neural networks, modeling the XOR function requires a second layer because XOR is not a linearly separable function. Similarly, XOR can be used in generating entropy pools for hardware random number generators. The XOR operation preserves randomness, meaning that a random bit XORed with a non-random bit will result in a random bit. Multiple sources of potentially random data can be combined using XOR, and the unpredictability of the output is guaranteed to be at least as good as the best individual source.[22]

XOR is used in RAID 3–6 for creating parity information.
For example, RAID can "back up" the bytes 10011100₂ and 01101100₂ from two (or more) hard drives by XOR-ing them, producing 11110000₂, and writing the result to another drive. Under this method, if any one of the three hard drives is lost, the lost byte can be re-created by XOR-ing the bytes from the remaining drives. For instance, if the drive containing 01101100₂ is lost, 10011100₂ and 11110000₂ can be XOR-ed to recover the lost byte.[23]

XOR is also used to detect overflow in the result of a signed binary arithmetic operation. If the leftmost retained bit of the result is not the same as the (conceptually infinite) sign extension to its left, overflow has occurred. XOR-ing those two bits gives a "1" if there is an overflow.

XOR can be used to swap two numeric variables in computers, using the XOR swap algorithm; however, this is regarded as more of a curiosity and is not encouraged in practice. XOR linked lists leverage XOR properties in order to save space when representing doubly linked list data structures.

In computer graphics, XOR-based drawing methods are often used to manage such items as bounding boxes and cursors on systems without alpha channels or overlay planes. The operator is also called "not left-right arrow" (\nleftrightarrow) in LaTeX-based markdown (↮). Apart from the ASCII codes, the operator is encoded at U+22BB ⊻ XOR (&veebar;) and U+2295 ⊕ CIRCLED PLUS (&CirclePlus;, &oplus;), both in the block Mathematical Operators.
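The RAID parity scheme above can be sketched in a few lines: the parity byte is the XOR of the data bytes, and any one lost byte is rebuilt by XOR-ing the survivors, because XOR is its own inverse.

```python
# XOR parity, using the bytes from the RAID example above.
drive_a = 0b10011100
drive_b = 0b01101100
parity  = drive_a ^ drive_b          # 0b11110000, stored on a third drive

# drive_b fails: recover its byte from the two remaining drives
recovered_b = drive_a ^ parity
assert recovered_b == drive_b        # XOR-ing with the same value twice cancels
```

The same self-inverse property (x ⊕ k ⊕ k = x) is what makes XOR usable as the mixing step in one-time pads and Feistel networks.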
https://en.wikipedia.org/wiki/Exclusive_disjunction
In logic, disjunction (also known as logical disjunction, logical or, logical addition, or inclusive disjunction) is a logical connective typically notated as ∨ and read aloud as "or". For instance, the English-language sentence "it is sunny or it is warm" can be represented in logic using the disjunctive formula S ∨ W, assuming that S abbreviates "it is sunny" and W abbreviates "it is warm".

In classical logic, disjunction is given a truth-functional semantics according to which a formula φ ∨ ψ is true unless both φ and ψ are false. Because this semantics allows a disjunctive formula to be true when both of its disjuncts are true, it is an inclusive interpretation of disjunction, in contrast with exclusive disjunction. Classical proof-theoretical treatments are often given in terms of rules such as disjunction introduction and disjunction elimination. Disjunction has also been given numerous non-classical treatments, motivated by problems including Aristotle's sea battle argument, Heisenberg's uncertainty principle, as well as the numerous mismatches between classical disjunction and its nearest equivalents in natural languages.[1][2]

An operand of a disjunction is a disjunct.[3] Because the logical or means a disjunction formula is true when either one or both of its parts are true, it is referred to as an inclusive disjunction. This is in contrast with an exclusive disjunction, which is true when one or the other of the arguments is true, but not both (referred to as exclusive or, or XOR). When it is necessary to clarify whether inclusive or exclusive or is intended, English speakers sometimes use the phrase and/or. In terms of logic, this phrase is identical to or, but it makes the inclusion of both being true explicit.
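The contrast between the inclusive and exclusive readings can be checked mechanically: over all truth-value pairs, the two interpretations disagree only when both disjuncts are true.

```python
from itertools import product

# Inclusive vs. exclusive disjunction over the full truth table.
for s, w in product([False, True], repeat=2):
    inclusive = s or w          # classical ∨: true unless both are false
    exclusive = s != w          # XOR: true iff exactly one disjunct is true
    assert inclusive == (not (not s and not w))
    assert (inclusive != exclusive) == (s and w)   # differ only at (T, T)
```

So "sunny or warm" on the inclusive reading remains true on a day that is both sunny and warm, which is exactly the case the exclusive reading rules out.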
In logic and related fields, disjunction is customarily notated with an infix operator ∨ (Unicode U+2228 ∨ LOGICAL OR).[1] Alternative notations include +, used mainly in electronics, as well as | and || in many programming languages. The English word or is sometimes used as well, often in capital letters. In Jan Łukasiewicz's prefix notation for logic, the operator is A, short for the Polish alternatywa (English: alternative).[4]

In mathematics, the disjunction of an arbitrary number of elements a_1, ..., a_n can be denoted as an iterated binary operation using a larger ⋁ (Unicode U+22C1 ⋁ N-ARY LOGICAL OR):[5]

⋁_{i=1}^{n} a_i = a_1 ∨ a_2 ∨ … ∨ a_{n−1} ∨ a_n

In the semantics of logic, classical disjunction is a truth-functional operation which returns the truth value true unless both of its arguments are false. Its semantic entry is standardly given as follows:[a] This semantics corresponds to the following truth table:[1]

In classical logic systems where logical disjunction is not a primitive, it can be defined in terms of the primitives and (∧) and not (¬) as ¬(¬A ∧ ¬B). Alternatively, it may be defined in terms of implies (→) and not as (¬A) → B;[6] the latter can be checked by the following truth table. It may also be defined solely in terms of →, as (A → B) → B, which can likewise be checked by a truth table. The following properties apply to disjunction:

Operators corresponding to logical disjunction exist in most programming languages. Disjunction is often used for bitwise operations. Examples: The or operator can be used to set bits in a bit field to 1, by or-ing the field with a constant field with the relevant bits set to 1.
For example, x = x | 0b00000001 will force the final bit to 1, while leaving other bits unchanged.[citation needed] Many languages distinguish between bitwise and logical disjunction by providing two distinct operators; in languages following C, bitwise disjunction is performed with the single-pipe operator (|) and logical disjunction with the double-pipe (||) operator.

Logical disjunction is usually short-circuited; that is, if the first (left) operand evaluates to true, then the second (right) operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point. In a parallel (concurrent) language, it is possible to short-circuit both sides: they are evaluated in parallel, and if one terminates with the value true, the other is interrupted. This operator is thus called the parallel or.

Although the type of a logical disjunction expression is Boolean in most languages (and thus can only have the value true or false), in some languages (such as Python and JavaScript) the logical disjunction operator returns one of its operands: the first operand if it evaluates to a true value, and the second operand otherwise.[8][9] This allows it to fulfill the role of the Elvis operator. The Curry–Howard correspondence relates a constructivist form of disjunction to tagged union types.[citation needed][10]

The membership of an element in a union set in set theory is defined in terms of a logical disjunction: x ∈ A ∪ B ⇔ (x ∈ A) ∨ (x ∈ B). Because of this, logical disjunction satisfies many of the same identities as set-theoretic union, such as associativity, commutativity, distributivity, and de Morgan's laws, identifying logical conjunction with set intersection and logical negation with set complement.[11]

Disjunction in natural languages does not precisely match the interpretation of ∨ in classical logic.
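The operand-returning and short-circuiting behavior described above can be seen directly in Python; the values below are illustrative.

```python
# Python's `or` returns the first operand if truthy, otherwise the
# second -- letting it serve as an Elvis-style default operator.
name = "" or "anonymous"          # falsy first operand -> second returned
assert name == "anonymous"

config = {"port": 8080}
port = config.get("port") or 80   # truthy first operand returned as-is
assert port == 8080

# short-circuiting: the right side never runs when the left is truthy
def boom():
    raise RuntimeError("should not run")

assert (True or boom()) is True
```

Note the usual caveat with such defaults: a legitimate falsy value (0, "", empty list) is also replaced, so `or`-defaulting is only safe when falsy values are genuinely "missing".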
Notably, classical disjunction is inclusive, while natural-language disjunction is often understood exclusively, as the following English example typically would be.[1] This inference has sometimes been understood as an entailment, for instance by Alfred Tarski, who suggested that natural-language disjunction is ambiguous between a classical and a nonclassical interpretation. More recent work in pragmatics has shown that this inference can be derived as a conversational implicature on the basis of a semantic denotation which behaves classically. However, disjunctive constructions including Hungarian vagy ... vagy and French soit ... soit have been argued to be inherently exclusive, rendering ungrammaticality in contexts where an inclusive reading would otherwise be forced.[1]

Similar deviations from classical logic have been noted in cases such as free choice disjunction and simplification of disjunctive antecedents, where certain modal operators trigger a conjunction-like interpretation of disjunction. As with exclusivity, these inferences have been analyzed both as implicatures and as entailments arising from a nonclassical interpretation of disjunction.[1]

In many languages, disjunctive expressions play a role in question formation. For instance, while the above English example can be interpreted as a polar question asking whether it's true that Mary is either a philosopher or a linguist, it can also be interpreted as an alternative question asking which of the two professions is hers. The role of disjunction in these cases has been analyzed using nonclassical logics such as alternative semantics and inquisitive semantics, which have also been adopted to explain the free choice and simplification inferences.[1]

In English, as in many other languages, disjunction is expressed by a coordinating conjunction. Other languages express disjunctive meanings in a variety of ways, though it is unknown whether disjunction itself is a linguistic universal.
In many languages, such as Dyirbal and Maricopa, disjunction is marked using a verb suffix. For instance, in the Maricopa example below, disjunction is marked by the suffix šaa.[1]

Johnš Billš vʔaawuumšaa
John-NOM Bill-NOM 3-come-PL-FUT-INFER
'John or Bill will come.'
https://en.wikipedia.org/wiki/Logical_disjunction
A syllogism (Ancient Greek: συλλογισμός, syllogismos, 'conclusion, inference') is a kind of logical argument that applies deductive reasoning to arrive at a conclusion based on two propositions that are asserted or assumed to be true. In its earliest form (defined by Aristotle in his 350 BC book Prior Analytics), a deductive syllogism arises when two true premises (propositions or statements) validly imply a conclusion, or the main point that the argument aims to get across.[1] For example, knowing that all men are mortal (major premise) and that Socrates is a man (minor premise), we may validly conclude that Socrates is mortal. Syllogistic arguments are usually represented in a three-line form:

All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.[2]

In antiquity, two rival syllogistic theories existed: Aristotelian syllogism and Stoic syllogism.[3] From the Middle Ages onwards, categorical syllogism and syllogism were usually used interchangeably. This article is concerned only with this historical use. The syllogism was at the core of historical deductive reasoning, whereby facts are determined by combining existing statements, in contrast to inductive reasoning, in which facts are predicted by repeated observations. Within some academic contexts, syllogism has been superseded by first-order predicate logic following the work of Gottlob Frege, in particular his Begriffsschrift (Concept Script; 1879).
Syllogism, as a method of valid logical reasoning, remains useful in most circumstances and for general-audience introductions to logic and clear thinking.[4][5] In antiquity, two rival syllogistic theories existed: Aristotelian syllogism and Stoic syllogism.[3] Aristotle defines the syllogism as "a discourse in which certain (specific) things having been supposed, something different from the things supposed results of necessity because these things are so."[6] Despite this very general definition, in Prior Analytics Aristotle limits himself to categorical syllogisms that consist of three categorical propositions, including categorical modal syllogisms.[7]

The use of syllogisms as a tool for understanding can be dated back to the logical reasoning discussions of Aristotle. Before the mid-12th century, medieval logicians were familiar with only a portion of Aristotle's works, including such titles as Categories and On Interpretation, works that contributed heavily to the prevailing Old Logic, or logica vetus. The onset of a New Logic, or logica nova, arose alongside the reappearance of Prior Analytics, the work in which Aristotle developed his theory of the syllogism. Upon rediscovery, Prior Analytics was instantly regarded by logicians as "a closed and complete body of doctrine", leaving very little for thinkers of the day to debate and reorganize. Aristotle's theory of the syllogism for assertoric sentences was considered especially remarkable, with only small systematic changes occurring to the concept over time. This theory of the syllogism would not enter the context of the more comprehensive logic of consequence until logic began to be reworked in general in the mid-14th century by the likes of John Buridan.

Aristotle's Prior Analytics did not, however, incorporate such a comprehensive theory of the modal syllogism, a syllogism that has at least one modalized premise, that is, a premise containing the modal words necessarily, possibly, or contingently.
Aristotle's terminology in this aspect of his theory was deemed vague and in many cases unclear, even contradicting some of his statements from On Interpretation. His original assertions on this specific component of the theory were left open to a considerable amount of conversation, resulting in a wide array of solutions put forth by commentators of the day. The system for modal syllogisms laid forth by Aristotle would ultimately be deemed unfit for practical use and would be replaced by new distinctions and new theories altogether.

Boethius (c. 475–526) contributed an effort to make the ancient Aristotelian logic more accessible. While his Latin translation of Prior Analytics went primarily unused before the 12th century, his textbooks on the categorical syllogism were central to expanding the syllogistic discussion. Rather than in any additions that he personally made to the field, Boethius' logical legacy lies in his effective transmission of prior theories to later logicians, as well as his clear and primarily accurate presentations of Aristotle's contributions.

Another of medieval logic's first contributors from the Latin West, Peter Abelard (1079–1142), gave his own thorough evaluation of the syllogism concept and its accompanying theory in the Dialectica, a discussion of logic based on Boethius' commentaries and monographs. His perspective on syllogisms can be found in other works as well, such as Logica Ingredientibus. With the help of Abelard's distinction between de dicto modal sentences and de re modal sentences, medieval logicians began to shape a more coherent concept of Aristotle's modal syllogism model.

The French philosopher Jean Buridan (c. 1300–1361), whom some consider the foremost logician of the later Middle Ages, contributed two significant works, Treatise on Consequence and Summulae de Dialectica, in which he discussed the concept of the syllogism, its components and distinctions, and ways to use the tool to expand its logical capability.
For 200 years after Buridan's discussions, little was said about syllogistic logic. Historians of logic have assessed that the primary changes in the post-Middle-Age era were changes in the public's awareness of original sources, a lessening of appreciation for the logic's sophistication and complexity, and an increase in logical ignorance, so that logicians of the early 20th century came to view the whole system as ridiculous.[8]

The Aristotelian syllogism dominated Western philosophical thought for many centuries. Syllogism itself is about drawing valid conclusions from assumptions (axioms), rather than about verifying the assumptions. Over time, however, people focused on the logic aspect, forgetting the importance of verifying the assumptions. In the 17th century, Francis Bacon emphasized that the experimental verification of axioms must be carried out rigorously, and that syllogism by itself cannot be taken as the best way to draw conclusions about nature.[9] Bacon proposed a more inductive approach to the observation of nature, which involves experimentation and leads to discovering and building on axioms to create a more general conclusion.[9] Yet a full method of drawing conclusions in nature is not within the scope of logic or syllogism, and the inductive method was covered in Aristotle's subsequent treatise, the Posterior Analytics.

In the 19th century, modifications to syllogism were incorporated to deal with disjunctive ("A or B") and conditional ("if A then B") statements. Immanuel Kant famously claimed, in Logic (1800), that logic was the one completed science, and that Aristotelian logic more or less included everything about logic that there was to know. (This work is not necessarily representative of Kant's mature philosophy, which is often regarded as an innovation to logic itself.) Kant's opinion stood unchallenged in the West until 1879, when Gottlob Frege published his Begriffsschrift (Concept Script).
This introduced a calculus, a method of representing categorical statements (and statements that are not provided for in syllogism as well) by the use of quantifiers and variables. A noteworthy exception is the logic developed in Bernard Bolzano's work Wissenschaftslehre (Theory of Science, 1837), whose principles were applied as a direct critique of Kant in the posthumously published work New Anti-Kant (1850). Bolzano's work had been largely overlooked until the late 20th century, among other reasons because of the intellectual environment of the time in Bohemia, which was then part of the Austrian Empire. In the last 20 years, Bolzano's work has resurfaced and become the subject of both translation and contemporary study.

This led to the rapid development of sentential logic and first-order predicate logic, subsuming syllogistic reasoning, which was therefore, after 2000 years, suddenly considered obsolete by many.[original research?] The Aristotelian system is explicated in modern fora of academia primarily in introductory material and historical study. One notable exception to this modern relegation is the continued application of Aristotelian logic by officials of the Congregation for the Doctrine of the Faith and the Apostolic Tribunal of the Roman Rota, which still requires that any arguments crafted by Advocates be presented in syllogistic format.

George Boole's unwavering acceptance of Aristotle's logic is emphasized by the historian of logic John Corcoran in an accessible introduction to Laws of Thought.[10][11] Corcoran also wrote a point-by-point comparison of Prior Analytics and Laws of Thought.[12] According to Corcoran, Boole fully accepted and endorsed Aristotle's logic. Boole's goals were "to go under, over, and beyond" Aristotle's logic by:[12] More specifically, Boole agreed with what Aristotle said; Boole's 'disagreements', if they might be called that, concern what Aristotle did not say.
First, in the realm of foundations, Boole reduced Aristotle's four propositional forms to one form, the form of equations, which by itself was a revolutionary idea. Second, in the realm of logic's problems, Boole's addition of equation solving to logic, another revolutionary idea, involved Boole's doctrine that Aristotle's rules of inference (the "perfect syllogisms") must be supplemented by rules for equation solving. Third, in the realm of applications, Boole's system could handle multi-term propositions and arguments, whereas Aristotle could handle only two-term subject-predicate propositions and arguments. For example, Aristotle's system could not deduce "No quadrangle that is a square is a rectangle that is a rhombus" from "No square that is a quadrangle is a rhombus that is a rectangle" or from "No rhombus that is a rectangle is a square that is a quadrangle." A categorical syllogism consists of three parts: the major premise, the minor premise, and the conclusion. Each part is a categorical proposition, and each categorical proposition contains two categorical terms.[13] In Aristotle, each of the premises is in the form "All S are P", "Some S are P", "No S are P" or "Some S are not P", where "S" is the subject-term and "P" is the predicate-term. More modern logicians allow some variation. Each of the premises has one term in common with the conclusion: in the major premise, this is the major term (i.e., the predicate of the conclusion); in the minor premise, this is the minor term (i.e., the subject of the conclusion). For example: all humans are mortal (major premise); all Greeks are humans (minor premise); therefore, all Greeks are mortal (conclusion). Each of the three distinct terms represents a category. From the example above, these are humans, mortal, and Greeks: mortal is the major term, and Greeks the minor term. The premises also have one term in common with each other, which is known as the middle term; in this example, humans. Both of the premises are universal, as is the conclusion. Another example: all mortals die; all men are mortals; therefore, all men die. Here, the major term is die, the minor term is men, and the middle term is mortals. Again, both premises are universal, hence so is the conclusion.
A polysyllogism, or sorites, is a form of argument in which a series of incomplete syllogisms is so arranged that the predicate of each premise forms the subject of the next, until the subject of the first is joined with the predicate of the last in the conclusion. For example, one might argue that all lions are big cats, all big cats are predators, and all predators are carnivores. To conclude that therefore all lions are carnivores is to construct a sorites argument. There are infinitely many possible syllogisms, but only 256 logically distinct types and only 24 valid types (enumerated below). A syllogism takes the form (note: M – middle, S – subject, P – predicate): major premise linking M and P, minor premise linking S and M, and conclusion S–P. The premises and conclusion of a syllogism can be any of four types, which are labeled by letters[14] as follows: A (universal affirmative, "All S are P"), E (universal negative, "No S are P"), I (particular affirmative, "Some S are P"), and O (particular negative, "Some S are not P"). In Prior Analytics, Aristotle uses mostly the letters A, B, and C (Greek letters alpha, beta, and gamma) as term place holders, rather than giving concrete examples. It is traditional to use is rather than are as the copula, hence All A is B rather than All As are Bs. It is traditional and convenient practice to use a, e, i, o as infix operators so that the categorical statements can be written succinctly: AaB (All A is B), AeB (No A is B), AiB (Some A is B), and AoB (Some A is not B). The convention here is that the letter S is the subject of the conclusion, P is the predicate of the conclusion, and M is the middle term. The major premise links M with P and the minor premise links M with S. However, the middle term can be either the subject or the predicate of each premise where it appears. The differing positions of the major, minor, and middle terms give rise to another classification of syllogisms, known as the figure.
Given that in each case the conclusion is S-P, the four figures are: Figure 1 (M–P, S–M), Figure 2 (P–M, S–M), Figure 3 (M–P, M–S), and Figure 4 (P–M, M–S). (Note, however, that, following Aristotle's treatment of the figures, some logicians—e.g., Peter Abelard and Jean Buridan—reject the fourth figure as a figure distinct from the first.) Putting it all together, there are 256 possible types of syllogisms (or 512 if the order of the major and minor premises is changed, though this makes no difference logically). Each premise and the conclusion can be of type A, E, I or O, and the syllogism can be any of the four figures. A syllogism can be described briefly by giving the letters for the premises and conclusion followed by the number for the figure. For example, the syllogism BARBARA below is AAA-1, or "A-A-A in the first figure". The vast majority of the 256 possible forms of syllogism are invalid (the conclusion does not follow logically from the premises). The table below shows the valid forms. Even some of these are sometimes considered to commit the existential fallacy, meaning they are invalid if they mention an empty category. These controversial patterns are marked in italics. All but four of the patterns in italics (felapton, darapti, fesapo and bamalip) are weakened moods, i.e. it is possible to draw a stronger conclusion from the premises. The letters A, E, I, and O have been used since the medieval Schools to form mnemonic names for the forms: 'Barbara' stands for AAA, 'Celarent' for EAE, and so on. Next to each premise and conclusion is a shorthand description of the sentence. So in AAI-3, the premise "All squares are rectangles" becomes "MaP"; the symbols mean that the first term ("square") is the middle term, the second term ("rectangle") is the predicate of the conclusion, and the relationship between the two terms is labeled "a" (All M are P). The following table shows all syllogisms that are essentially different. The similar syllogisms share the same premises, just written in a different way.
For example, "Some pets are kittens" (SiM in Darii) could also be written as "Some kittens are pets" (MiS in Datisi). In the Venn diagrams, the black areas indicate no elements, and the red areas indicate at least one element. In the predicate logic expressions, a horizontal bar over an expression means to negate ("logical not") the result of that expression. It is also possible to use graphs (consisting of vertices and edges) to evaluate syllogisms.[15] Similar: Cesare (EAE-2). Camestres is essentially like Celarent with S and P exchanged. Similar: Calemes (AEE-4). Similar: Datisi (AII-3). Disamis is essentially like Darii with S and P exchanged. Similar: Dimatis (IAI-4). Similar: Festino (EIO-2), Ferison (EIO-3), Fresison (EIO-4). Bamalip is exactly like Barbari with S and P exchanged. Similar: Cesaro (EAO-2). Similar: Calemos (AEO-4). Similar: Fesapo (EAO-4). This table shows all 24 valid syllogisms, represented by Venn diagrams. Columns indicate similarity, and are grouped by combinations of premises. Borders correspond to conclusions. Those with an existential assumption are dashed. With Aristotle, we may distinguish singular terms, such as Socrates, and general terms, such as Greeks. Aristotle further distinguished types (a) and (b). Such a predication is known as distributive, as opposed to non-distributive, as in Greeks are numerous. It is clear that Aristotle's syllogism works only for distributive predication, since we cannot reason: all Greeks are animals, animals are numerous, therefore all Greeks are numerous. In Aristotle's view singular terms were of type (a), and general terms of type (b). Thus, Men can be predicated of Socrates, but Socrates cannot be predicated of anything. Therefore, for a term to be interchangeable—to be either in the subject or predicate position of a proposition in a syllogism—the terms must be general terms, or categorical terms as they came to be called.
Consequently, the propositions of a syllogism should be categorical propositions (both terms general), and syllogisms that employ only categorical terms came to be called categorical syllogisms. It is clear that nothing would prevent a singular term occurring in a syllogism—so long as it was always in the subject position—however, such a syllogism, even if valid, is not a categorical syllogism. An example is Socrates is a man, all men are mortal, therefore Socrates is mortal. Intuitively this is as valid as All Greeks are men, all men are mortal, therefore all Greeks are mortals. To argue that its validity can be explained by the theory of syllogism would require that we show that Socrates is a man is the equivalent of a categorical proposition. It can be argued that Socrates is a man is equivalent to All that are identical to Socrates are men, so our non-categorical syllogism can be justified by use of this equivalence and then citing BARBARA. If a statement includes a term such that the statement is false if the term has no instances, then the statement is said to have existential import with respect to that term. It is ambiguous whether a universal statement of the form All A is B is to be considered as true, false, or even meaningless if there are no As. If it is considered false in such cases, then the statement All A is B has existential import with respect to A. It is claimed Aristotle's logic system does not cover cases where there are no instances. Aristotle's goal was to develop a logic for science. He relegates fictions, such as mermaids and unicorns, to the realms of poetry and literature. In his mind, they exist outside the ambit of science, which is why he leaves no room for such non-existent entities in his logic. This is a thoughtful choice, not an inadvertent omission. Technically, Aristotelian science is a search for definitions, where a definition is "a phrase signifying a thing's essence."
Because non-existent entities cannot be anything, they do not, in Aristotle's mind, possess an essence. This is why he leaves no place for fictional entities like goat-stags (or unicorns).[16] However, many logic systems developed since do consider the case where there may be no instances. Medieval logicians were aware of the problem of existential import and maintained that negative propositions do not carry existential import, and that positive propositions with subjects that do not supposit are false. The following problems arise. For example, if it is accepted that AiB is false if there are no As and that AaB entails AiB, then AiB has existential import with respect to A, and so does AaB. Further, if it is accepted that AiB entails BiA, then AiB and AaB have existential import with respect to B as well. Similarly, if AoB is false if there are no As, and AeB entails AoB, and AeB entails BeA (which in turn entails BoA), then both AeB and AoB have existential import with respect to both A and B. It follows immediately that all universal categorical statements have existential import with respect to both terms. If AaB and AeB are fair representations of the use in normal natural language of All A is B and No A is B respectively, then example consequences arise: if it is ruled that no universal statement has existential import, then the square of opposition fails in several respects (e.g. AaB does not entail AiB), and a number of syllogisms are no longer valid (e.g. BaC, AaB -> AiC). These problems and paradoxes arise in both natural language statements and statements in syllogism form because of ambiguity, in particular ambiguity with respect to All. If Fred claims all his books were Pulitzer Prize winners, is Fred claiming that he wrote any books? If not, then is what he claims true? Suppose Jane says none of her friends are poor; is that true if she has no friends?
The first-order predicate calculus avoids such ambiguity by using formulae that carry no existential import with respect to universal statements. Existential claims must be explicitly stated. Thus, natural language statements—of the forms All A is B, No A is B, Some A is B, and Some A is not B—can be represented in first-order predicate calculus in which any existential import with respect to terms A and/or B is either explicit or not made at all. Consequently, the four forms AaB, AeB, AiB, and AoB can be represented in first-order predicate calculus in every combination of existential import, so it can be established which construal, if any, preserves the square of opposition and the validity of the traditionally valid syllogisms. Strawson claims such a construal is possible, but the results are such that, in his view, the answer to question (e) above is no. People often make mistakes when reasoning syllogistically.[17] For instance, from the premises some A are B, some B are C, people tend to come to the definitive conclusion that therefore some A are C.[18][19] However, this does not follow according to the rules of classical logic. For instance, while some cats (A) are black things (B), and some black things (B) are televisions (C), it does not follow from the premises that some cats (A) are televisions (C). This is because in the structure of the syllogism invoked (i.e. III-1) the middle term is not distributed in either the major premise or the minor premise, a pattern called the "fallacy of the undistributed middle". Because of this, it can be hard to follow formal logic, and a closer eye is needed to ensure that an argument is, in fact, valid.[20] Determining the validity of a syllogism involves determining the distribution of each term in each statement, meaning whether all members of that term are accounted for. In simple syllogistic patterns, the fallacies of invalid patterns are:
https://en.wikipedia.org/wiki/Syllogistic_fallacy
In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral (considered as a bit string) at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands. On simple low-cost processors, bitwise operations are typically substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. While modern processors usually perform addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources.[1] In the explanations below, any indication of a bit's position is counted from the right (least significant) side, advancing left. For example, the binary value 0001 (decimal 1) has zeroes at every position but the first (i.e., the rightmost) one. The bitwise NOT, or bitwise complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value. Bits that are 0 become 1, and those that are 1 become 0. For example: NOT 0111 (decimal 7) = 1000 (decimal 8). The result is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = −x − 1. For unsigned integers, the bitwise complement of a number is the "mirror reflection" of the number across the half-way point of the unsigned integer's range. For example, for 8-bit unsigned integers, NOT x = 255 − x, which can be visualized on a graph as a downward line that effectively "flips" an increasing range from 0 to 255 to a decreasing range from 255 to 0. A simple but illustrative example use is to invert a grayscale image where each pixel is stored as an unsigned integer.
A bitwise AND is a binary operation that takes two equal-length binary representations and performs the logical AND operation on each pair of corresponding bits. Thus, if both bits in the compared position are 1, the bit in the resulting binary representation is 1 (1 × 1 = 1); otherwise, the result is 0 (1 × 0 = 0 and 0 × 0 = 0). For example: 0101 AND 0011 = 0001. The operation may be used to determine whether a particular bit is set (1) or cleared (0). For example, given a bit pattern 0011 (decimal 3), to determine whether the second bit is set we use a bitwise AND with a bit pattern containing 1 only in the second bit: 0011 AND 0010 = 0010. Because the result 0010 is non-zero, we know the second bit in the original pattern was set. This is often called bit masking. (By analogy, the use of masking tape covers, or masks, portions that should not be altered or portions that are not of interest. In this case, the 0 values mask the bits that are not of interest.) The bitwise AND may be used to clear selected bits (or flags) of a register in which each bit represents an individual Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as possible. For example, 0110 (decimal 6) can be considered a set of four flags numbered from right to left, where the first and fourth flags are clear (0), and the second and third flags are set (1). The third flag may be cleared by using a bitwise AND with a pattern that has a zero only in the third bit: 0110 AND 1011 = 0010. Because of this property, it becomes easy to check the parity of a binary number by checking the value of the lowest-valued bit. Using the example above: 0110 AND 0001 = 0000. Because 6 AND 1 is zero, 6 is divisible by two and therefore even.
For example: 0101 OR 0011 = 0111. The bitwise OR may be used to set to 1 the selected bits of the register described above. For example, the fourth bit of 0010 (decimal 2) may be set by performing a bitwise OR with a pattern with only the fourth bit set: 0010 OR 1000 = 1010. A bitwise XOR is a binary operation that takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits. The result in each position is 1 if only one of the bits is 1, but 0 if both are 0 or both are 1. That is, we compare two bits, the result being 1 if the two bits are different and 0 if they are the same. For example: 0101 XOR 0011 = 0110. The bitwise XOR may be used to invert selected bits in a register (also called toggle or flip). Any bit may be toggled by XORing it with 1. For example, given the bit pattern 0010 (decimal 2), the second and fourth bits may be toggled by a bitwise XOR with a bit pattern containing 1 in the second and fourth positions: 0010 XOR 1010 = 1000. This technique may be used to manipulate bit patterns representing sets of Boolean states. Assembly language programmers and optimizing compilers sometimes use XOR as a short-cut to setting the value of a register to zero. Performing XOR on a value against itself always yields zero, and on many architectures this operation requires fewer clock cycles and less memory than loading a zero value and saving it to the register. If the set of bit strings of fixed length n (i.e. machine words) is thought of as an n-dimensional vector space F₂ⁿ over the field F₂, then vector addition corresponds to the bitwise XOR. Assuming x ≥ y, for the non-negative integers, the bitwise operations can be written as follows: There are 16 possible truth functions of two binary variables; this defines a truth table.
Here are the bitwise equivalents of the operations on two bits P and Q: The bit shifts are sometimes considered bitwise operations, because they treat a value as a series of bits rather than as a numerical quantity. In these operations, the digits are moved, or shifted, to the left or right. Registers in a computer processor have a fixed width, so some bits will be "shifted out" of the register at one end, while the same number of bits are "shifted in" from the other end; the differences between bit shift operators lie in how they determine the values of the shifted-in bits. If the width of the register (frequently 32 or even 64) is larger than the number of bits (usually 8) of the smallest addressable unit, frequently called a byte, the shift operations induce an addressing scheme from the bytes to the bits. Thereby the orientations "left" and "right" are taken from the standard writing of numbers in a place-value notation, such that a left shift increases and a right shift decreases the value of the number; if the left digits are read first, this makes up a big-endian orientation. Disregarding the boundary effects at both ends of the register, arithmetic and logical shift operations behave the same, and a shift by 8 bit positions transports the bit pattern by 1 byte position in the following way: In an arithmetic shift, the bits that are shifted out of either end are discarded. In a left arithmetic shift, zeros are shifted in on the right; in a right arithmetic shift, the sign bit (the MSB in two's complement) is shifted in on the left, thus preserving the sign of the operand. This example uses an 8-bit register, interpreted as two's complement: In the first case, the leftmost digit was shifted past the end of the register, and a new 0 was shifted into the rightmost position. In the second case, the rightmost 1 was shifted out (perhaps into the carry flag), and a new 1 was copied into the leftmost position, preserving the sign of the number.
Multiple shifts are sometimes shortened to a single shift by some number of digits. For example, two successive shifts by one place can be written as a single shift by two places. A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow), while a right arithmetic shift by n of a two's complement value is equivalent to taking the floor of division by 2^n. If the binary number is treated as ones' complement, then the same right-shift operation results in division by 2^n and rounding toward zero. In a logical shift (zero-fill shift), zeros are shifted in to replace the discarded bits. Therefore, the logical and arithmetic left-shifts are exactly the same. However, as the logical right-shift inserts value-0 bits into the most significant bit, instead of copying the sign bit, it is ideal for unsigned binary numbers, while the arithmetic right-shift is ideal for signed two's complement binary numbers. Another form of shift is the circular shift, bitwise rotation or bit rotation. In this operation, sometimes called rotate no carry, the bits are "rotated" as if the left and right ends of the register were joined. The value that is shifted in on the right during a left-shift is whatever value was shifted out on the left, and vice versa for a right-shift operation. This is useful if it is necessary to retain all the existing bits, and is frequently used in digital cryptography. Rotate through carry is a variant of the rotate operation, where the bit that is shifted in (on either end) is the old value of the carry flag, and the bit that is shifted out (on the other end) becomes the new value of the carry flag. A single rotate through carry can simulate a logical or arithmetic shift of one position by setting up the carry flag beforehand. For example, if the carry flag contains 0, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is a logical right-shift, and if the carry flag contains a copy of the sign bit, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is an arithmetic right-shift.
For this reason, some microcontrollers such as low-end PICs just have rotate and rotate through carry, and don't bother with arithmetic or logical shift instructions. Rotate through carry is especially useful when performing shifts on numbers larger than the processor's native word size, because if a large number is stored in two registers, the bit that is shifted off one end of the first register must come in at the other end of the second. With rotate-through-carry, that bit is "saved" in the carry flag during the first shift, ready to shift in during the second shift without any extra preparation. In the C and C++ languages, the logical shift operators are "<<" for left shift and ">>" for right shift. The number of places to shift is given as the second argument to the operator. For example, x = y << 2; assigns x the result of shifting y to the left by two bits, which is equivalent to a multiplication by four. Shifts can result in implementation-defined behavior or undefined behavior, so care must be taken when using them. The result of shifting by a bit count greater than or equal to the word's size is undefined behavior in C and C++.[2][3] Right-shifting a negative value is implementation-defined and not recommended by good coding practice;[4] the result of left-shifting a signed value is undefined if the result cannot be represented in the result type.[2] In C#, the right-shift is an arithmetic shift when the first operand is an int or long. If the first operand is of type uint or ulong, the right-shift is a logical shift.[5] The C family of languages lacks a rotate operator (although C++20 provides std::rotl and std::rotr), but one can be synthesized from the shift operators.
Care must be taken to ensure the statement is well formed to avoid undefined behavior and timing attacks in software with security requirements.[6] For example, a naive implementation that left-rotates a 32-bit unsigned value x by n positions is simply (x << n) | (x >> (32 - n)). However, a shift by 0 bits results in undefined behavior in the right-hand expression (x >> (32 - n)), because 32 - 0 is 32, and 32 is outside the range 0–31 inclusive. A second try might result in ((n == 0) ? x : ((x << n) | (x >> (32 - n)))), where the shift amount is tested to ensure that it does not introduce undefined behavior. However, the branch adds an additional code path and presents an opportunity for timing analysis and attack, which is often not acceptable in high-integrity software.[6] In addition, the code compiles to multiple machine instructions, which is often less efficient than the processor's native instruction. To avoid the undefined behavior and branches under GCC and Clang, the form (x << n) | (x >> (-n & 31)) is recommended. The pattern is recognized by many compilers, and the compiler will emit a single rotate instruction:[7][8][9] There are also compiler-specific intrinsics implementing circular shifts, like _rotl8, _rotl16, _rotr8, _rotr16 in Microsoft Visual C++. Clang provides some rotate intrinsics for Microsoft compatibility that suffer the problems above.[9] GCC does not offer rotate intrinsics. Intel also provides x86 intrinsics. In Java, all integer types are signed, so the "<<" and ">>" operators perform arithmetic shifts. Java adds the operator ">>>" to perform logical right shifts; but since the logical and arithmetic left-shift operations are identical for signed integers, there is no "<<<" operator in Java. More details of Java shift operators:[10] In JavaScript, bitwise operators treat their operands as 32-bit integers.[12] In Pascal, as well as in all its dialects (such as Object Pascal and Standard Pascal), the logical left and right shift operators are "shl" and "shr", respectively.
Even for signed integers, shr behaves like a logical shift, and does not copy the sign bit. The number of places to shift is given as the second argument. For example, the following assigns x the result of shifting y to the left by two bits: x := y shl 2; Bitwise operations are necessary particularly in lower-level programming such as device drivers, low-level graphics, communications protocol packet assembly, and decoding. Although machines often have efficient built-in instructions for performing arithmetic and logical operations, all these operations can be performed by combining the bitwise operators and zero-testing in various ways.[13] For example, here is a pseudocode implementation of ancient Egyptian multiplication showing how to multiply two arbitrary integers a and b (a greater than b) using only bitshifts and addition: Another example is a pseudocode implementation of addition, showing how to calculate a sum of two integers a and b using bitwise operators and zero-testing: Sometimes it is useful to simplify complex expressions made up of bitwise operations, for example when writing compilers. The goal of a compiler is to translate a high-level programming language into the most efficient machine code possible. Boolean algebra is used to simplify complex bitwise expressions. Additionally, XOR can be composed from the three basic operations (AND, OR, NOT): x XOR y = (x AND NOT y) OR (NOT x AND y). It can be hard to solve for variables in Boolean algebra, because unlike regular algebra, several operations do not have inverses. Operations without inverses lose some of the original data bits when they are performed, and it is not possible to recover this missing information. Operations at the top of this list are executed first. See the main article for a more complete list.
https://en.wikipedia.org/wiki/Bitwise_NOR
In logic, a functionally complete set of logical connectives or Boolean operators is one that can be used to express all possible truth tables by combining members of the set into a Boolean expression.[1][2] A well-known complete set of connectives is {AND, NOT}. Each of the singleton sets {NAND} and {NOR} is functionally complete. However, the set {AND, OR} is incomplete, due to its inability to express NOT. A gate (or set of gates) that is functionally complete can also be called a universal gate (or a universal set of gates). In the context of propositional logic, functionally complete sets of connectives are also called (expressively) adequate.[3] From the point of view of digital electronics, functional completeness means that every possible logic gate can be realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from either only binary NAND gates, or only binary NOR gates. Modern texts on logic typically take as primitive some subset of the connectives: conjunction (∧); disjunction (∨); negation (¬); material conditional (→); and possibly the biconditional (↔). Further connectives can be defined, if so desired, by defining them in terms of these primitives. For example, NOR (the negation of the disjunction, sometimes denoted ↓) can be expressed as the conjunction of two negations: p ↓ q ≡ ¬p ∧ ¬q. Similarly, the negation of the conjunction, NAND (sometimes denoted ↑), can be defined in terms of disjunction and negation: p ↑ q ≡ ¬p ∨ ¬q. Every binary connective can be defined in terms of {¬, ∧, ∨, →, ↔}, which means that this set is functionally complete.
However, it contains redundancy: this set is not a minimal functionally complete set, because the conditional and biconditional can be defined in terms of the other connectives as p → q ≡ ¬p ∨ q and p ↔ q ≡ (p → q) ∧ (q → p). It follows that the smaller set {¬, ∧, ∨} is also functionally complete. (Its functional completeness is also proved by the Disjunctive Normal Form Theorem.)[4] But this is still not minimal, as ∨ can be defined as p ∨ q ≡ ¬(¬p ∧ ¬q). Alternatively, ∧ may be defined in terms of ∨ in a similar manner, or ∨ may be defined in terms of →: p ∨ q ≡ ¬p → q. No further simplifications are possible. Hence, every two-element set of connectives containing ¬ and one of {∧, ∨, →} is a minimal functionally complete subset of {¬, ∧, ∨, →, ↔}. Given the Boolean domain B = {0, 1}, a set F of Boolean functions f_i : B^(n_i) → B is functionally complete if the clone on B generated by the basic functions f_i contains all functions f : B^n → B, for all strictly positive integers n ≥ 1. In other words, the set is functionally complete if every Boolean function that takes at least one variable can be expressed in terms of the functions f_i. Since every Boolean function of at least one variable can be expressed in terms of binary Boolean functions, F is functionally complete if and only if every binary Boolean function can be expressed in terms of the functions in F. A more natural condition would be that the clone generated by F consist of all functions f : B^n → B, for all integers n ≥ 0. However, the examples given above are not functionally complete in this stronger sense, because it is not possible to write a nullary function, i.e. a constant expression, in terms of F if F itself does not contain at least one nullary function. With this stronger definition, the smallest functionally complete sets would have 2 elements.
Another natural condition would be that the clone generated by F together with the two nullary constant functions be functionally complete or, equivalently, functionally complete in the strong sense of the previous paragraph. The example of the Boolean function given by S(x, y, z) = z if x = y and S(x, y, z) = x otherwise shows that this condition is strictly weaker than functional completeness.[5][6][7]

Emil Post proved that a set of logical connectives is functionally complete if and only if it is not a subset of any of the following sets of connectives: the truth-preserving connectives, the falsity-preserving connectives, the self-dual connectives, the monotonic connectives, and the affine connectives. Post gave a complete description of the lattice of all clones (sets of operations closed under composition and containing all projections) on the two-element set {T, F}, nowadays called Post's lattice, which implies the above result as a simple corollary: the five mentioned sets of connectives are exactly the maximal nontrivial clones.[8]

When a single logical connective or Boolean operator is functionally complete by itself, it is called a Sheffer function[9] or sometimes a sole sufficient operator. There are no unary operators with this property. NAND and NOR, which are dual to each other, are the only two binary Sheffer functions. These were discovered, but not published, by Charles Sanders Peirce around 1880, and rediscovered independently and published by Henry M. Sheffer in 1913.[10] In digital electronics terminology, the binary NAND gate (↑) and the binary NOR gate (↓) are the only binary universal logic gates.

The following are the minimal functionally complete sets of logical connectives with arity ≤ 2:[11] There are no minimal functionally complete sets of more than three at-most-binary logical connectives.[11] In order to keep the lists above readable, operators that ignore one or more inputs have been omitted. For example, an operator that ignores the first input and outputs the negation of the second can be replaced by a unary negation.
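Post's criterion is easy to apply to a single binary connective: it is a Sheffer function exactly when it lies in none of the five maximal clones. A hedged Python sketch (the helper `sheffer` and the property tests are illustrative; for two variables, "affine over GF(2)" reduces to the table having an even number of 1s in the XOR sense shown):

```python
from itertools import product

def sheffer(f):
    """Post's criterion for one binary connective f(a, b) over {0, 1}:
    f is functionally complete alone iff it has none of the five
    closure properties below."""
    pairs = list(product((0, 1), repeat=2))
    preserves_0 = f(0, 0) == 0
    preserves_1 = f(1, 1) == 1
    self_dual   = all(f(a, b) == 1 - f(1 - a, 1 - b) for a, b in pairs)
    monotone    = all(f(a, b) <= f(c, d)
                      for a, b in pairs for c, d in pairs
                      if a <= c and b <= d)
    # Affine over GF(2) iff the truth table has no a·b term:
    affine      = (f(0, 0) ^ f(0, 1) ^ f(1, 0) ^ f(1, 1)) == 0
    return not (preserves_0 or preserves_1 or self_dual
                or monotone or affine)

connectives = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}
print([name for name, f in connectives.items() if sheffer(f)])
# → ['NAND', 'NOR']
```

Running this over all 16 binary truth tables would confirm that NAND and NOR are the only binary Sheffer functions.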
Note that an electronic circuit or a software function can be optimized by reuse, to reduce the number of gates. For instance, the "A ∧ B" operation, when expressed with ↑ gates, is implemented with the reuse of "A ↑ B":

A ∧ B := (A ↑ B) ↑ (A ↑ B)

Apart from logical connectives (Boolean operators), functional completeness can be introduced in other domains. For example, a set of reversible gates is called functionally complete if it can express every reversible operator. The 3-input Fredkin gate is a functionally complete reversible gate by itself – a sole sufficient operator. There are many other three-input universal logic gates, such as the Toffoli gate. In quantum computing, the Hadamard gate and the T gate are universal, albeit with a slightly more restrictive definition than that of functional completeness.

There is an isomorphism between the algebra of sets and the Boolean algebra; that is, they have the same structure. Therefore, if Boolean operators are mapped to set operators, the "translated" text above is also valid for sets: there are many minimal complete sets of set-theory operators that can generate any other set relation. The more popular minimal complete operator sets are {¬, ∩} and {¬, ∪}. If the universal set is forbidden, set operators are restricted to being falsity- (Ø-) preserving, and cannot be equivalent to a functionally complete Boolean algebra.
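The set-theoretic analogue can be demonstrated with ordinary Python sets: relative to an assumed finite universe `U` (a choice made for this illustration), union is generated from complement and intersection via De Morgan's law, mirroring the completeness of {¬, ∩}:

```python
# The set-theoretic analogue of defining ∨ from ¬ and ∧:
# A ∪ B = ¬(¬A ∩ ¬B), with complement taken relative to a
# fixed universe U (assumed for this sketch).
U = set(range(10))

def comp(A):
    """Relative complement ¬A = U \\ A."""
    return U - A

A, B = {1, 2, 3}, {3, 4, 5}
union_via_comp_inter = comp(comp(A) & comp(B))
print(union_via_comp_inter == A | B)   # True
```

Note that `comp` depends on the universe being available, which is exactly why forbidding the universal set breaks the correspondence, as stated above.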
https://en.wikipedia.org/wiki/Functional_completeness
The propositional calculus[a] is a branch of logic.[1] It is also called propositional logic,[2] statement logic,[1] sentential calculus,[3] sentential logic,[4][1] or sometimes zeroth-order logic.[b][6][7][8] Sometimes, it is called first-order propositional logic[9] to contrast it with System F, but it should not be confused with first-order logic. It deals with propositions[1] (which can be true or false)[10] and relations between propositions,[11] including the construction of arguments based on them.[12] Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, biconditional, and negation.[13][14][15][16] Some sources include other connectives, as in the table below.

Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order and higher-order logic.

Propositional logic is typically studied with a formal language,[c] in which propositions are represented by letters, which are called propositional variables. These are then used, together with symbols for connectives, to make propositional formulas. Because of this, the propositional variables are called atomic formulas of a formal propositional language.[14][2] While the atomic propositions are typically represented by letters of the alphabet,[d][14] there is a variety of notations to represent the logical connectives. The following table shows the main notational variants for each of the connectives in propositional logic.
The most thoroughly researched branch of propositional logic is classical truth-functional propositional logic,[1] in which formulas are interpreted as having precisely one of two possible truth values, the truth value of true or the truth value of false.[19] The principle of bivalence and the law of excluded middle are upheld. By comparison with first-order logic, truth-functional propositional logic is considered to be zeroth-order logic.[7][8]

Although propositional logic (also called propositional calculus) had been hinted at by earlier philosophers, it was developed into a formal logic (Stoic logic) by Chrysippus in the 3rd century BC[20] and expanded by his successor Stoics. The logic was focused on propositions. This was different from the traditional syllogistic logic, which focused on terms. However, most of the original writings were lost[21] and, at some time between the 3rd and 6th century CE, Stoic logic faded into oblivion, to be resurrected only in the 20th century, in the wake of the (re)discovery of propositional logic.[22]

Symbolic logic, which would come to be important in refining propositional logic, was first developed by the 17th/18th-century mathematician Gottfried Leibniz, whose calculus ratiocinator was, however, unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independently of Leibniz.[23]

Gottlob Frege's predicate logic builds upon propositional logic, and has been described as combining "the distinctive features of syllogistic logic and propositional logic."[24] Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Stanisław Jaśkowski. Truth trees were invented by Evert Willem Beth.[25] The invention of truth tables, however, is of uncertain attribution.
Within works by Frege[26] and Bertrand Russell[27] are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently).[26] Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole, Charles Sanders Peirce,[28] and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis.[27] Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables".[27]

Propositional logic, as currently studied in universities, is a specification of a standard of logical consequence in which only the meanings of propositional connectives are considered in evaluating the conditions for the truth of a sentence, or whether a sentence logically follows from some other sentence or group of sentences.[2]

Propositional logic deals with statements, which are defined as declarative sentences having truth value.[29][1] Examples of statements might include:

Declarative sentences are contrasted with questions, such as "What is Wikipedia?", and imperative statements, such as "Please add citations to support the claims in this article.".[30][31] Such non-declarative sentences have no truth value,[32] and are only dealt with in nonclassical logics, called erotetic and imperative logics.
In propositional logic, a statement can contain one or more other statements as parts.[1] Compound sentences are formed from simpler sentences and express relationships among the constituent sentences.[33] This is done by combining them with logical connectives:[33][34] the main types of compound sentences are negations, conjunctions, disjunctions, implications, and biconditionals,[33] which are formed by using the corresponding connectives to connect propositions.[35][36] In English, these connectives are expressed by the words "and" (conjunction), "or" (disjunction), "not" (negation), "if" (material conditional), and "if and only if" (biconditional).[1][13] Examples of such compound sentences might include:

If sentences lack any logical connectives, they are called simple sentences,[1] or atomic sentences;[34] if they contain one or more logical connectives, they are called compound sentences,[33] or molecular sentences.[34]

Sentential connectives are a broader category that includes logical connectives.[2][34] Sentential connectives are any linguistic particles that bind sentences to create a new compound sentence,[2][34] or that inflect a single sentence to create a new sentence.[2] A logical connective, or propositional connective, is a kind of sentential connective with the characteristic feature that, when the original sentences it operates on are (or express) propositions, the new sentence that results from its application also is (or expresses) a proposition.[2] Philosophers disagree about what exactly a proposition is,[10][2] as well as about which sentential connectives in natural languages should be counted as logical connectives.[34][2] Sentential connectives are also called sentence-functors,[37] and logical connectives are also called truth-functors.[37]

An argument is defined as a pair of things, namely a set of sentences, called the premises,[g] and a sentence, called the conclusion.[38][34][37] The conclusion is claimed to follow from the premises,[37] and the premises are claimed
to support the conclusion.[34]

The following is an example of an argument within the scope of propositional logic:

The logical form of this argument is known as modus ponens,[39] which is a classically valid form.[40] So, in classical logic, the argument is valid, although it may or may not be sound, depending on the meteorological facts in a given context. This example argument will be reused when explaining § Formalization.

An argument is valid if, and only if, it is necessary that, if all its premises are true, its conclusion is true.[38][41][42] Alternatively, an argument is valid if, and only if, it is impossible for all the premises to be true while the conclusion is false.[42][38]

Validity is contrasted with soundness.[42] An argument is sound if, and only if, it is valid and all its premises are true.[38][42] Otherwise, it is unsound.[42]

Logic, in general, aims to precisely specify valid arguments.[34] This is done by defining a valid argument as one in which its conclusion is a logical consequence of its premises,[34] which, when this is understood as semantic consequence, means that there is no case in which the premises are true but the conclusion is not true[34] – see § Semantics below.

Propositional logic is typically studied through a formal system in which formulas of a formal language are interpreted to represent propositions. This formal language is the basis for proof systems, which allow a conclusion to be derived from premises if, and only if, it is a logical consequence of them. This section will show how this works by formalizing the § Example argument. The formal language for a propositional calculus will be fully specified in § Language, and an overview of proof systems will be given in § Proof systems.
Since propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed any more by logical connectives,[39][1] it is typically studied by replacing such atomic (indivisible) statements with letters of the alphabet, which are interpreted as variables representing statements (propositional variables).[1] With propositional variables, the § Example argument would then be symbolized as follows:

When P is interpreted as "It's raining" and Q as "it's cloudy", these symbolic expressions correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference of the same logical form.

When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such as P, Q and R) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself.

If we assume that the validity of modus ponens has been accepted as an axiom, then the same § Example argument can also be depicted like this:

This method of displaying it is Gentzen's notation for natural deduction and sequent calculus.[43] The premises are shown above a line, called the inference line,[15] separated by a comma, which indicates combination of premises.[44] The conclusion is written below the inference line.[15] The inference line represents syntactic consequence,[15] sometimes called deductive consequence,[45] which is also symbolized with ⊢.[46][45] So the above can also be written in one line as P → Q, P ⊢ Q.[h]

Syntactic consequence is contrasted with semantic consequence,[47] which is symbolized with ⊧.[46][45] In this case, the conclusion follows syntactically because the natural deduction inference rule of modus ponens has been assumed.
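The semantic counterpart of modus ponens, P → Q, P ⊨ Q, can be checked mechanically by enumerating interpretations and looking for a counterexample. A minimal Python sketch (the helper `entails` and the lambda encoding of formulas are illustrative choices):

```python
from itertools import product

def implies(p, q):
    """Material conditional p → q."""
    return (not p) or q

def entails(premises, conclusion, n_vars):
    """True iff no interpretation makes every premise true and the
    conclusion false, i.e. there is no counterexample."""
    return all(conclusion(*vs)
               for vs in product([False, True], repeat=n_vars)
               if all(prem(*vs) for prem in premises))

print(entails([lambda p, q: implies(p, q),   # P → Q
               lambda p, q: p],              # P
              lambda p, q: q, 2))            # ⊨ Q : True

print(entails([lambda p, q: implies(p, q)],  # P → Q alone
              lambda p, q: p, 2))            # does not entail P : False
```

The second call fails because the interpretation P = false, Q = false makes the premise true and the conclusion false — exactly the counterexample the definition of validity rules out.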
For more on inference rules, see the sections on proof systems below.

The language (commonly called 𝓛)[45][48][34] of a propositional calculus is defined in terms of:[2][14]

A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar. The language 𝓛, then, is defined either as being identical to its set of well-formed formulas,[48] or as containing that set (together with, for instance, its set of connectives and variables).[14][34]

Usually the syntax of 𝓛 is defined recursively by just a few definitions, as seen next; some authors explicitly include parentheses as punctuation marks when defining their language's syntax,[34][51] while others use them without comment.[2][14]

Given a set of atomic propositional variables p₁, p₂, p₃, ..., and a set of propositional connectives c₁¹, c₂¹, c₃¹, ..., c₁², c₂², c₃², ..., c₁³, c₂³, c₃³, ..., a formula of propositional logic is defined recursively by these definitions:[2][14][50][i]

Definition 1: Atomic propositional variables are formulas.
Definition 2: If cₙᵐ is a connective of arity m, and ⟨A, B, C, …⟩ is a sequence of m (possibly but not necessarily atomic, possibly but not necessarily distinct) formulas, then the result of applying cₙᵐ to ⟨A, B, C, …⟩ is a formula.
Definition 3: Nothing else is a formula.

Writing the result of applying cₙᵐ to ⟨A, B, C, …⟩ in functional notation, as cₙᵐ(A, B, C, …), we have the following as examples of well-formed formulas:

What was given as Definition 2 above, which is responsible for the composition of formulas, is referred to by Colin Howson as the principle of composition.[39][j] It is this recursion in the definition of a language's syntax which justifies the use of the word "atomic" to refer to propositional variables, since all formulas in the language 𝓛 are built up from the atoms as ultimate building blocks.[2] Composite formulas (all formulas besides atoms) are called molecules,[49] or molecular sentences.[34] (This is an imperfect analogy with chemistry, since a chemical molecule may sometimes have only one atom, as in monatomic gases.)[49]

The definition that "nothing else is a formula", given above as Definition 3, excludes any formula from the language which is not specifically required by the other definitions in the syntax.[37] In particular, it excludes infinitely long formulas from being well-formed.[37] It is sometimes called the Closure Clause.[53]

An alternative to the syntax definitions given above is to write a context-free (CF) grammar for the language 𝓛 in Backus–Naur form (BNF).[54][55] This is more common in computer science than in philosophy.[55] It can be done in many ways,[54] of which a particularly brief one, for the common set of five connectives, is this single clause:[55][56]

φ ::= p₁, p₂, p₃, … | ¬φ | (φ ∧ φ) | (φ ∨ φ) | (φ → φ) | (φ ↔ φ)

This clause, due to its self-referential nature (since φ is in some branches of the definition of φ), also acts as a recursive definition, and therefore specifies the entire language.
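A recursive grammar of this kind translates directly into a recursive well-formedness checker. The sketch below is a minimal illustration, assuming (as a choice for this example, not from the article) that atoms are single lowercase letters and that binary connectives are fully parenthesized:

```python
# A tiny recursive-descent well-formedness checker for fully
# parenthesized formulas over {¬, ∧, ∨, →, ↔}, mirroring the
# recursive clause  φ ::= atom | ¬φ | (φ ∘ φ).
def wff(s):
    def parse(i):
        """Return the index just past one formula starting at i,
        or None if no formula starts there."""
        if i >= len(s):
            return None
        if s[i].islower():                 # atomic formula
            return i + 1
        if s[i] == '¬':                    # negation of a formula
            return parse(i + 1)
        if s[i] == '(':                    # (φ ∘ ψ) for a binary ∘
            j = parse(i + 1)
            if j is None or j >= len(s) or s[j] not in '∧∨→↔':
                return None
            k = parse(j + 1)
            if k is None or k >= len(s) or s[k] != ')':
                return None
            return k + 1
        return None
    return parse(0) == len(s)

print(wff('(p∧¬q)'))   # True
print(wff('(p∧∨q)'))   # False: not generated by the grammar
```

The closure clause ("nothing else is a formula") corresponds to the checker rejecting any string the recursive cases do not produce.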
To expand it to add modal operators, one need only add … | □φ | ◇φ to the end of the clause.[55]

Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition,[57] while propositional variables range over the set of all atomic propositions.[57] Schemata, or schematic letters, however, range over all formulas.[37][1] (Schematic letters are also called metavariables.)[38] It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters by Greek letters, most often φ, ψ, and χ.[37][1]

However, some authors recognize only two "propositional constants" in their formal system: the special symbol ⊤, called "truth", which always evaluates to True, and the special symbol ⊥, called "falsity", which always evaluates to False.[58][59][60] Other authors also include these symbols, with the same meaning, but consider them to be "zero-place truth-functors",[37] or equivalently, "nullary connectives".[50]

To serve as a model of the logic of a given natural language, a formal language must be semantically interpreted.[34] In classical logic, all propositions evaluate to exactly one of two truth-values: True or False.[1][61] For example, "Wikipedia is a free online encyclopedia that anyone can edit" evaluates to True,[62] while "Wikipedia is a paper encyclopedia" evaluates to False.[63]

In other respects, the following formal semantics can apply to the language of any propositional logic, but the assumptions that there are only two semantic values (bivalence), that only one of the two is assigned to each formula in the language (noncontradiction), and that every formula gets assigned a value (excluded middle), are distinctive features of classical logic.[61][64][37] To learn about nonclassical logics with more than two truth-values, and their unique semantics, one may consult
the articles on "Many-valued logic", "Three-valued logic", "Finite-valued logic", and "Infinite-valued logic".

For a given language 𝓛, an interpretation,[65] valuation,[51] Boolean valuation,[66] or case,[34][k] is an assignment of semantic values to each formula of 𝓛.[34] For a formal language of classical logic, a case is defined as an assignment, to each formula of 𝓛, of one or the other, but not both, of the truth values, namely truth (T, or 1) and falsity (F, or 0).[67][68] An interpretation that follows the rules of classical logic is sometimes called a Boolean valuation.[51][69] An interpretation of a formal language for classical logic is often expressed in terms of truth tables.[70][1] Since each formula is only assigned a single truth-value, an interpretation may be viewed as a function, whose domain is 𝓛, and whose range is its set of semantic values 𝒱 = {T, F},[2] or 𝒱 = {1, 0}.[34]

For n distinct propositional symbols there are 2ⁿ distinct possible interpretations. For any particular symbol a, for example, there are 2¹ = 2 possible interpretations: either a is assigned T, or a is assigned F.
And for the pair a, b there are 2² = 4 possible interpretations: either both are assigned T, or both are assigned F, or a is assigned T and b is assigned F, or a is assigned F and b is assigned T.[70] Since 𝓛 has ℵ₀, that is, denumerably many, propositional symbols, there are 2^ℵ₀ = 𝔠, and therefore uncountably many, distinct possible interpretations of 𝓛 as a whole.[70]

Where 𝓘 is an interpretation and φ and ψ represent formulas, the definition of an argument, given in § Arguments, may then be stated as a pair ⟨{φ₁, φ₂, φ₃, ..., φₙ}, ψ⟩, where {φ₁, φ₂, φ₃, ..., φₙ} is the set of premises and ψ is the conclusion. The definition of an argument's validity, i.e. its property that {φ₁, φ₂, φ₃, ..., φₙ} ⊨ ψ, can then be stated as its absence of a counterexample, where a counterexample is defined as a case 𝓘 in which the argument's premises {φ₁, φ₂, φ₃, ..., φₙ} are all true but the conclusion ψ is not true.[34][39] As will be seen in § Semantic truth, validity, consequence, this is the same as to say that the conclusion is a semantic consequence of the premises.
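The 2ⁿ counting argument above is easy to make concrete: each interpretation of a finite set of symbols is one row of a truth table, and they can be enumerated directly. A small Python sketch (the generator name `interpretations` is an illustrative choice):

```python
from itertools import product

def interpretations(symbols):
    """Yield every assignment of T/F to the given symbols as a dict,
    one dict per possible interpretation (2**len(symbols) in total)."""
    for values in product([True, False], repeat=len(symbols)):
        yield dict(zip(symbols, values))

cases = list(interpretations(['a', 'b']))
print(len(cases))   # 2**2 = 4
```

For the countably infinite symbol set of 𝓛 itself no such enumeration exists, which is the content of the uncountability remark above.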
An interpretation assigns semantic values to atomic formulas directly.[65][34] Molecular formulas are assigned a function of the value of their constituent atoms, according to the connective used;[65][34] the connectives are defined in such a way that the truth-value of a sentence formed from atoms with connectives depends on the truth-values of the atoms that they're applied to, and only on those.[65][34] This assumption is referred to by Colin Howson as the assumption of the truth-functionality of the connectives.[39]

Since logical connectives are defined semantically only in terms of the truth values that they take when the propositional variables that they're applied to take either of the two possible truth values,[1][34] the semantic definition of the connectives is usually represented as a truth table for each of the connectives,[1][34][71] as seen below:

p | q | ¬p | p ∧ q | p ∨ q | p → q | p ↔ q
T | T | F  |   T   |   T   |   T   |   T
T | F | F  |   F   |   T   |   F   |   F
F | T | T  |   F   |   T   |   T   |   F
F | F | T  |   F   |   F   |   T   |   T

This table covers each of the main five logical connectives:[13][14][15][16] conjunction (here notated p ∧ q), disjunction (p ∨ q), implication (p → q), biconditional (p ↔ q) and negation (¬p, or ¬q, as the case may be). It is sufficient for determining the semantics of each of these operators.[1][72][34] For more truth tables for more different kinds of connectives, see the article "Truth table". Some authors (viz., all the authors cited in this subsection) write out the connective semantics using a list of statements instead of a table.
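Truth-functionality means a molecular formula's value can be computed recursively from the values the interpretation assigns to its atoms. A minimal Python sketch (the nested-tuple encoding of formulas, e.g. `('and', φ, ψ)`, is a hypothetical choice for this illustration):

```python
# Truth-functional evaluation: the value of a compound formula
# depends only on the values of its parts, computed recursively.
def value(formula, case):
    if isinstance(formula, str):              # atomic: look up directly
        return case[formula]
    op, *args = formula
    a = [value(x, case) for x in args]        # values of the parts
    return {'not': lambda: not a[0],
            'and': lambda: a[0] and a[1],
            'or':  lambda: a[0] or a[1],
            'imp': lambda: (not a[0]) or a[1],
            'iff': lambda: a[0] == a[1]}[op]()

case = {'p': True, 'q': False}
print(value(('imp', 'p', ('or', 'q', 'p')), case))   # True
```

Each table in the dictionary above is exactly one column of the connective truth table: the function consults nothing but the subformulas' values.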
In this format, where 𝓘(φ) is the interpretation of φ, the five connectives are defined as:[37][51]

Instead of 𝓘(φ), the interpretation of φ may be written out as |φ|,[37][73] or, for definitions such as the above, 𝓘(φ) = T may be written simply as the English sentence "φ is given the value T".[51] Yet other authors[74][75] may prefer to speak of a Tarskian model 𝔐 for the language, so that instead they'll use the notation 𝔐 ⊨ φ, which is equivalent to saying 𝓘(φ) = T, where 𝓘 is the interpretation function for 𝔐.[75]

Some of these connectives may be defined in terms of others: for instance, implication, p → q, may be defined in terms of disjunction and negation, as ¬p ∨ q;[76] and disjunction may be defined in terms of negation and conjunction, as ¬(¬p ∧ ¬q).[51] In fact, a truth-functionally complete system,[l] in the sense that all and only the classical propositional tautologies are theorems, may be derived using only disjunction and negation (as Russell, Whitehead, and Hilbert did), or using only implication and negation (as Frege did), or using only conjunction and negation, or even using only a single connective for "not and" (the Sheffer stroke),[3] as Jean Nicod did.[2] A joint denial connective (logical NOR) will also suffice, by itself, to define all other connectives. Besides NOR and NAND, no other connectives have this property.[51][m]

Some authors, namely Howson[39] and Cunningham,[78] distinguish equivalence from the biconditional.
(As to equivalence, Howson calls it "truth-functional equivalence", while Cunningham calls it "logical equivalence".) Equivalence is symbolized with ⇔ and is a metalanguage symbol, while the biconditional is symbolized with ↔ and is a logical connective in the object language 𝓛. Regardless, an equivalence or biconditional is true if, and only if, the formulas connected by it are assigned the same semantic value under every interpretation. Other authors often do not make this distinction, and may use the word "equivalence",[15] and/or the symbol ⇔,[79] to denote their object language's biconditional connective.

Given φ and ψ as formulas (or sentences) of a language 𝓛, and 𝓘 as an interpretation (or case)[n] of 𝓛, the following definitions apply:[70][68]

For interpretations (cases) 𝓘 of 𝓛, these definitions are sometimes given:

For classical logic, which assumes that all cases are complete and consistent,[34] the following theorems apply:

Proof systems in propositional logic can be broadly classified into semantic proof systems and syntactic proof systems,[88][89][90] according to the kind of logical consequence that they rely on: semantic proof systems rely on semantic consequence (φ ⊨ ψ),[91] whereas syntactic proof systems rely on syntactic consequence (φ ⊢ ψ).[92] Semantic consequence deals with the truth values of propositions in all possible interpretations, whereas syntactic consequence concerns the derivation of conclusions from premises based on rules and axioms within a formal system.[93] This section gives a very brief overview of the kinds of proof systems, with anchors to the relevant sections of this article on each one, as well as to the separate Wikipedia articles on each one.
Semantic proof systems rely on the concept of semantic consequence, symbolized as φ ⊨ ψ, which indicates that if φ is true, then ψ must also be true in every possible interpretation.[93]

A truth table is a semantic proof method used to determine the truth value of a propositional logic expression in every possible scenario.[94] By exhaustively listing the truth values of its constituent atoms, a truth table can show whether a proposition is true, false, tautological, or contradictory.[95] See § Semantic proof via truth tables.

A semantic tableau is another semantic proof technique that systematically explores the truth of a proposition.[96] It constructs a tree where each branch represents a possible interpretation of the propositions involved.[97] If every branch leads to a contradiction, the original proposition is considered to be a contradiction, and its negation is considered a tautology.[39] See § Semantic proof via tableaux.

Syntactic proof systems, in contrast, focus on the formal manipulation of symbols according to specific rules. The notion of syntactic consequence, φ ⊢ ψ, signifies that ψ can be derived from φ using the rules of the formal system.[93]

An axiomatic system is a set of axioms or assumptions from which other statements (theorems) are logically derived.[98] In propositional logic, axiomatic systems define a base set of propositions considered to be self-evidently true, and theorems are proved by applying deduction rules to these axioms.[99] See § Syntactic proof via axioms.

Natural deduction is a syntactic method of proof that emphasizes the derivation of conclusions from premises through the use of intuitive rules reflecting ordinary reasoning.[100] Each rule reflects a particular logical connective and shows how it can be introduced or eliminated.[100] See § Syntactic proof via natural deduction.
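The truth-table method is, at bottom, an exhaustive check: a formula is valid (a tautology) iff it comes out true on every line of its table. A brute-force Python sketch (the helper `valid` and the example formulas are illustrative, not taken from the article):

```python
from itertools import product

def valid(formula, n_vars):
    """True iff the formula is true under all 2**n_vars interpretations."""
    return all(formula(*vs)
               for vs in product([False, True], repeat=n_vars))

def imp(a, b):
    return (not a) or b

print(valid(lambda p, q: imp(p and q, p), 2))   # True: (p ∧ q) → p is a tautology
print(valid(lambda p, q: imp(p or q, p), 2))    # False: (p ∨ q) → p is not
```

The second formula fails on the line p = false, q = true, which is the single falsifying row a truth table would expose.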
The sequent calculus is a formal system that represents logical deductions as sequences or "sequents" of formulas.[101] Developed by Gerhard Gentzen, this approach focuses on the structural properties of logical deductions and provides a powerful framework for proving statements within propositional logic.[101][102]

Taking advantage of the semantic concept of validity (truth in every interpretation), it is possible to prove a formula's validity by using a truth table, which gives every possible interpretation (assignment of truth values to variables) of a formula.[95][49][37] If, and only if, all the lines of a truth table come out true, the formula is semantically valid (true in every interpretation).[95][49] Further, if (and only if) ¬φ is valid, then φ is inconsistent.[83][84][85]

For instance, this table shows that "p → (q ∨ r → (r → ¬p))" is not valid:[49]

The computation of the last column of the third line may be displayed as follows:[49]

Further, using the theorem that φ ⊨ ψ if, and only if, (φ → ψ) is valid,[70][80] we can use a truth table to prove that a formula is a semantic consequence of a set of formulas: {φ₁, φ₂, φ₃, ..., φₙ} ⊨ ψ if, and only if, we can produce a truth table that comes out all true for the formula ((⋀ᵢ₌₁ⁿ φᵢ) → ψ) (that is, if ⊨ ((⋀ᵢ₌₁ⁿ φᵢ) → ψ)).[103][104]

Since truth tables have 2ⁿ lines for n variables, they can be tiresomely long for large values of n.[39] Analytic tableaux are a more efficient, but nevertheless mechanical,[71] semantic proof method; they take advantage of the fact that "we learn nothing about the validity of the inference from examining the truth-value
distributions which make either the premises false or the conclusion true: the only relevant distributions when considering deductive validity are clearly just those which make the premises true or the conclusion false."[39]

Analytic tableaux for propositional logic are fully specified by the rules that are stated in schematic form below.[51] These rules use "signed formulas", where a signed formula is an expression TX or FX, where X is an (unsigned) formula of the language 𝓛.[51] (Informally, TX is read "X is true", and FX is read "X is false".)[51] Their formal semantic definition is that "under any interpretation, a signed formula TX is called true if X is true, and false if X is false, whereas a signed formula FX is called false if X is true, and true if X is false."[51]

1)  T∼X yields FX.            F∼X yields TX.
2)  T(X ∧ Y) yields TX, TY.   F(X ∧ Y) branches into FX | FY.
3)  T(X ∨ Y) branches into TX | TY.   F(X ∨ Y) yields FX, FY.
4)  T(X ⊃ Y) branches into FX | TY.   F(X ⊃ Y) yields TX, FY.

In this notation, rule 2 means that T(X ∧ Y) yields both TX, TY, whereas F(X ∧ Y) branches into FX, FY.
The notation is to be understood analogously for rules 3 and 4.[51]Often, in tableaux forclassical logic, thesigned formulanotation is simplified so thatTφ{\displaystyle T\varphi }is written simply asφ{\displaystyle \varphi }, andFφ{\displaystyle F\varphi }as¬φ{\displaystyle \neg \varphi }, which accounts for naming rule 1 the "Rule of Double Negation".[39][71] One constructs a tableau for a set of formulas by applying the rules to produce more lines and tree branches until every line has been used, producing acompletetableau. In some cases, a branch can come to contain bothTX{\displaystyle TX}andFX{\displaystyle FX}for someX{\displaystyle X}, which is to say, a contradiction. In that case, the branch is said toclose.[39]If every branch in a tree closes, the tree itself is said to close.[39]In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false.[39]Conversely, a tableau can also prove that a logical formula istautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close.[39] To construct a tableau for an argument⟨{φ1,φ2,φ3,...,φn},ψ⟩{\displaystyle \langle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\},\psi \rangle }, one first writes out the set of premise formulas,{φ1,φ2,φ3,...,φn}{\displaystyle \{\varphi _{1},\varphi _{2},\varphi _{3},...,\varphi _{n}\}}, with one formula on each line, signed withT{\displaystyle T}(that is,Tφ{\displaystyle T\varphi }for eachTφ{\displaystyle T\varphi }in the set);[71]and together with those formulas (the order is unimportant), one also writes out the conclusion,ψ{\displaystyle \psi }, signed withF{\displaystyle F}(that is,Fψ{\displaystyle F\psi }).[71]One then produces a truth tree (analytic tableau) by using all those lines according to the rules.[71]A closed tree will be proof that the argument was valid, in virtue 
of the fact thatφ⊨ψ{\displaystyle \varphi \models \psi }if, and only if,{φ,∼ψ}{\displaystyle \{\varphi ,\sim \psi \}}is inconsistent (also written asφ,∼ψ⊨{\displaystyle \varphi ,\sim \psi \models }).[71] Using semantic checking methods, such as truth tables or semantic tableaux, to check for tautologies and semantic consequences, it can be shown that, in classical logic, the following classical argument forms are semantically valid, i.e., these tautologies and semantic consequences hold.[37]We useφ{\displaystyle \varphi }⟚ψ{\displaystyle \psi }to denote equivalence ofφ{\displaystyle \varphi }andψ{\displaystyle \psi }, that is, as an abbreviation for bothφ⊨ψ{\displaystyle \varphi \models \psi }andψ⊨φ{\displaystyle \psi \models \varphi };[37]as an aid to reading the symbols, a description of each formula is given. The description reads the symbol ⊧ (called the "double turnstile") as "therefore", which is a common reading of it,[37][105]although many authors prefer to read it as "entails",[37][106]or as "models".[107] Natural deduction, since it is a method of syntactical proof, is specified by providinginference rules(also calledrules of proof)[38]for a language with the typical set of connectives{−,&,∨,→,↔}{\displaystyle \{-,\&,\lor ,\to ,\leftrightarrow \}}; no axioms are used other than these rules.[110]The rules are covered below, and a proof example is given afterwards. Different authors vary to some extent regarding which inference rules they give, which will be noted. More striking to the look and feel of a proof, however, is the variation in notation styles. 
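Both semantic methods surveyed above can be mechanized. The sketch below (its formula encoding and function names are this example's own, not from any standard library) implements the truth-table test for validity and a small signed-formula tableau using rules 1–4, then checks that the two methods agree on modus ponens and on the invalid form of affirming the consequent:

```python
from itertools import product

# Formulas: atoms are strings; compounds are tuples
# ('not', X), ('and', X, Y), ('or', X, Y), ('imp', X, Y).

def evaluate(f, v):
    """Truth value of formula f under assignment v (a dict atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == 'not': return not evaluate(f[1], v)
    if op == 'and': return evaluate(f[1], v) and evaluate(f[2], v)
    if op == 'or':  return evaluate(f[1], v) or evaluate(f[2], v)
    if op == 'imp': return (not evaluate(f[1], v)) or evaluate(f[2], v)

def atoms(f, acc=None):
    acc = set() if acc is None else acc
    if isinstance(f, str):
        acc.add(f)
    else:
        for sub in f[1:]:
            atoms(sub, acc)
    return acc

def valid_by_table(f):
    """Truth-table method: valid iff true on every line of the table."""
    names = sorted(atoms(f))
    return all(evaluate(f, dict(zip(names, row)))
               for row in product([True, False], repeat=len(names)))

def expand(sign, f):
    """Tableau rules 1-4: returns the new branches for one signed formula."""
    op = f[0]
    if op == 'not':
        return [[(not sign, f[1])]]
    if op == 'and':
        return [[(True, f[1]), (True, f[2])]] if sign \
            else [[(False, f[1])], [(False, f[2])]]
    if op == 'or':
        return [[(True, f[1])], [(True, f[2])]] if sign \
            else [[(False, f[1]), (False, f[2])]]
    if op == 'imp':
        return [[(False, f[1])], [(True, f[2])]] if sign \
            else [[(True, f[1]), (False, f[2])]]

def closes(branch):
    """True iff every branch of the tableau grown from `branch` closes."""
    for i, (sign, f) in enumerate(branch):
        if (not sign, f) in branch:
            return True                     # contains both T X and F X
        if not isinstance(f, str):
            rest = branch[:i] + branch[i + 1:]
            return all(closes(rest + new) for new in expand(sign, f))
    return False                            # open, fully expanded branch

def valid_by_tableau(premises, conclusion):
    """Valid iff the tableau for T-premises plus F-conclusion closes."""
    return closes([(True, p) for p in premises] + [(False, conclusion)])

# Modus ponens is valid by both methods; affirming the consequent is not.
print(valid_by_tableau([('imp', 'p', 'q'), 'p'], 'q'))                 # True
print(valid_by_table(('imp', ('and', ('imp', 'p', 'q'), 'p'), 'q')))   # True
print(valid_by_tableau([('imp', 'p', 'q'), 'q'], 'p'))                 # False
```

The tableau routine expands the first compound signed formula it finds and recurses into each resulting branch, mirroring the hand procedure of growing the tree until every branch either closes or is fully expanded.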
The § Gentzen notation, which was covered earlier for a short argument, can be stacked to produce large tree-shaped natural deduction proofs[43][15] (not to be confused with "truth trees", which is another name for analytic tableaux).[71] There is also a style due to Stanisław Jaśkowski, where the formulas in the proof are written inside various nested boxes,[43] and there is a simplification of Jaśkowski's style due to Fredric Fitch (Fitch notation), where the boxes are reduced to horizontal lines beneath the introductions of suppositions, and vertical lines to the left of the lines that fall under the supposition.[43] Lastly, there is the only notation style which will actually be used in this article, which is due to Patrick Suppes[43] but was much popularized by E. J. Lemmon and Benson Mates.[111] This method has the advantage that, graphically, it is the least intensive to produce and display. A proof laid out in accordance with the Suppes–Lemmon notation style[43] is a sequence of lines containing sentences,[38] where each sentence is either an assumption or the result of applying a rule of proof to earlier sentences in the sequence.[38] Each line of proof is made up of a sentence of proof, together with its annotation, its assumption set, and the current line number.[38] The assumption set lists the assumptions on which the given sentence of proof depends, referenced by their line numbers.[38] The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence.[38] See the § Natural deduction proof example.
Natural deduction inference rules, due ultimately to Gentzen, are given below.[110] There are ten primitive rules of proof: the rule assumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rule reductio ad absurdum.[38] Disjunctive Syllogism can be used as an easier alternative to the proper ∨-elimination,[38] and MTT and DN are commonly given rules,[110] although they are not primitive.[38] The proof below[38] derives −P{\displaystyle -P} from P→Q{\displaystyle P\to Q} and −Q{\displaystyle -Q} using only MPP and RAA, which shows that MTT is not a primitive rule, since it can be derived from those two other rules. It is possible to perform proofs axiomatically, which means that certain tautologies are taken as self-evident and various others are deduced from them using modus ponens as an inference rule, as well as a rule of substitution, which permits replacing any well-formed formula with any substitution-instance of it.[113] Alternatively, one uses axiom schemas instead of axioms, and no rule of substitution is used.[113] This section gives the axioms of some historically notable axiomatic systems for propositional logic. For more examples, as well as metalogical theorems that are specific to such axiomatic systems (such as their completeness and consistency), see the article Axiomatic system (logic).
Although axiomatic proof has been used since the famousAncient Greektextbook,Euclid'sElements of Geometry, in propositional logic it dates back toGottlob Frege's1879Begriffsschrift.[37][113]Frege's system used onlyimplicationandnegationas connectives.[2]It had six axioms:[113][114][115] These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic.[114] Jan Łukasiewiczshowed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentenceCCNpNqCpq{\displaystyle CCNpNqCpq}".[115]Which, taken out of Łukasiewicz'sPolish notationinto modern notation, means(¬p→¬q)→(p→q){\displaystyle (\neg p\rightarrow \neg q)\rightarrow (p\rightarrow q)}. Hence, Łukasiewicz is credited[113]with this system of three axioms: Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule.[113]The exact same system was given (with an explicit substitution rule) byAlonzo Church,[116]who referred to it as the system P2[116][117]and helped popularize it.[117] One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemata (metalogical variables that may stand for anywell-formed formulas), the axioms are given as:[37][117] The schematic version of P2is attributed toJohn von Neumann,[113]and is used in theMetamath"set.mm" formal proof database.[117]It has also been attributed toHilbert,[118]and namedH{\displaystyle {\mathcal {H}}}in this context.[118] As an example, a proof ofA→A{\displaystyle A\to A}in P2is given below. 
First, the axioms are given names: And the proof is as follows: One notable difference between propositional calculus and predicate calculus is that satisfiability of a propositional formula isdecidable.[119]: 81Deciding satisfiability of propositional logic formulas is anNP-completeproblem. However, practical methods exist (e.g.,DPLL algorithm, 1962;Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended theSAT solveralgorithms to work with propositions containingarithmetic expressions; these are theSMT solvers.
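The splitting-plus-unit-propagation idea behind DPLL can be sketched in miniature. The following is an illustrative toy, not the full 1962 algorithm (pure-literal elimination and other refinements are omitted); clauses use DIMACS-style integer literals:

```python
# A minimal DPLL-style satisfiability check for CNF formulas.
# A formula is a list of clauses; a clause is a list of int literals
# (3 means variable 3, -3 its negation).

def assign(clauses, lit):
    """Simplify the formula under the assumption that `lit` is true."""
    out = []
    for c in clauses:
        if lit in c:
            continue                     # clause satisfied: drop it
        out.append([x for x in c if x != -lit])
    return out

def dpll(clauses):
    if not clauses:
        return True                      # no clauses left: satisfiable
    if any(len(c) == 0 for c in clauses):
        return False                     # empty clause: contradiction
    for c in clauses:                    # unit propagation
        if len(c) == 1:
            return dpll(assign(clauses, c[0]))
    lit = clauses[0][0]                  # branch on a literal
    return dpll(assign(clauses, lit)) or dpll(assign(clauses, -lit))

# (p or q) and (not p or q) and (not q)  -- unsatisfiable
print(dpll([[1, 2], [-1, 2], [-2]]))     # False
# (p or q) and (not p)                   -- satisfiable with q true
print(dpll([[1, 2], [-1]]))              # True
```

Even this toy shows why the method beats a full truth table in practice: unit propagation prunes whole subtrees of assignments without enumerating them.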
https://en.wikipedia.org/wiki/Propositional_logic
InBoolean functionsandpropositional calculus, theSheffer strokedenotes alogical operationthat is equivalent to thenegationof theconjunctionoperation, expressed in ordinary language as "not both". It is also callednon-conjunction,alternative denial(since it says in effect that at least one of its operands is false), orNAND("not and").[1]Indigital electronics, it corresponds to theNAND gate. It is named afterHenry Maurice Shefferand written as∣{\displaystyle \mid }or as↑{\displaystyle \uparrow }or as∧¯{\displaystyle {\overline {\wedge }}}or asDpq{\displaystyle Dpq}inPolish notationbyŁukasiewicz(but not as ||, often used to representdisjunction). Itsdualis theNOR operator(also known as thePeirce arrow,Quine daggerorWebb operator). Like its dual, NAND can be used by itself, without any other logical operator, to constitute a logicalformal system(making NANDfunctionally complete). This property makes theNAND gatecrucial to moderndigital electronics, including its use incomputer processordesign. Thenon-conjunctionis alogical operationon twological values. It produces a value of true, if — and only if — at least one of thepropositionsis false. Thetruth tableofA↑B{\displaystyle A\uparrow B}is as follows. 
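The truth table can be reproduced directly from the "not both" description; a one-line Python definition of the operation (an illustrative sketch):

```python
def nand(a, b):
    """Sheffer stroke: true unless both operands are true ("not both")."""
    return not (a and b)

# Truth table of A ↑ B: only the row with both operands true yields False.
for a in (True, False):
    for b in (True, False):
        print(a, b, nand(a, b))
```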
The Sheffer stroke of P{\displaystyle P} and Q{\displaystyle Q} is the negation of their conjunction, ¬(P∧Q){\displaystyle \neg (P\land Q)}. By De Morgan's laws, this is also equivalent to the disjunction of the negations of P{\displaystyle P} and Q{\displaystyle Q}, ¬P∨¬Q{\displaystyle \neg P\lor \neg Q}. Peirce was the first to show the functional completeness of non-conjunction (representing this as ⋏¯{\displaystyle {\overline {\curlywedge }}}) but did not publish his result.[2][3] Peirce's editor added ⋏¯{\displaystyle {\overline {\curlywedge }}} for non-disjunction.[3] In 1911, Stamm was the first to publish a proof of the functional completeness of non-conjunction, representing it with ∼{\displaystyle \sim } (the Stamm hook),[4] and was also the first to treat non-disjunction in print, showing the functional completeness of both.[5] In 1913, Sheffer described non-disjunction using ∣{\displaystyle \mid } and showed its functional completeness. Sheffer also used ∧{\displaystyle \wedge } for non-disjunction.[4] Many people, beginning with Nicod in 1917, and followed by Whitehead, Russell and many others, mistakenly thought Sheffer had described non-conjunction using ∣{\displaystyle \mid }, naming this symbol the Sheffer stroke. In 1928, Hilbert and Ackermann described non-conjunction with the operator /{\displaystyle /}.[6][7] In 1929, Łukasiewicz used D{\displaystyle D} in Dpq{\displaystyle Dpq} for non-conjunction in his Polish notation.[8] An alternative notation for non-conjunction is ↑{\displaystyle \uparrow }. It is not clear who first introduced this notation, although the corresponding ↓{\displaystyle \downarrow } for non-disjunction was used by Quine in 1940.[9] The stroke is named after Henry Maurice Sheffer, who in 1913 published a paper in the Transactions of the American Mathematical Society[10] providing an axiomatization of Boolean algebras using the stroke, and proved its equivalence to a standard formulation thereof by Huntington employing the familiar operators of propositional logic (AND, OR, NOT).
Because of self-dualityof Boolean algebras, Sheffer's axioms are equally valid for either of the NAND or NOR operations in place of the stroke. Sheffer interpreted the stroke as a sign for nondisjunction (NOR) in his paper, mentioning non-conjunction only in a footnote and without a special sign for it. It wasJean Nicodwho first used the stroke as a sign for non-conjunction (NAND) in a paper of 1917 and which has since become current practice.[11][12]Russell and Whitehead used the Sheffer stroke in the 1927 second edition ofPrincipia Mathematicaand suggested it as a replacement for the "OR" and "NOT" operations of the first edition. Charles Sanders Peirce(1880) had discovered thefunctional completenessof NAND or NOR more than 30 years earlier, using the termampheck(for 'cutting both ways'), but he never published his finding. Two years before Sheffer,Edward Stamm[pl]also described the NAND and NOR operators and showed that the other Boolean operations could be expressed by it.[5] NAND is commutative but not associative, which means thatP↑Q↔Q↑P{\displaystyle P\uparrow Q\leftrightarrow Q\uparrow P}but(P↑Q)↑R↮P↑(Q↑R){\displaystyle (P\uparrow Q)\uparrow R\not \leftrightarrow P\uparrow (Q\uparrow R)}.[13] The Sheffer stroke, taken by itself, is afunctionally completeset of connectives.[14][15]This can be seen from the fact that NAND does not possess any of the following five properties, each of which is required to be absent from, and the absence of all of which is sufficient for, at least one member of a set offunctionally completeoperators: truth-preservation, falsity-preservation,linearity,monotonicity,self-duality. 
(An operator is truth-preserving if its value is truth whenever all of its arguments are truth, or falsity-preserving if its value is falsity whenever all of its arguments are falsity.)[16] It can also be proved by first showing, with atruth table, that¬A{\displaystyle \neg A}is truth-functionally equivalent toA↑A{\displaystyle A\uparrow A}.[17]Then, sinceA↑B{\displaystyle A\uparrow B}is truth-functionally equivalent to¬(A∧B){\displaystyle \neg (A\land B)},[17]andA∨B{\displaystyle A\lor B}is equivalent to¬(¬A∧¬B){\displaystyle \neg (\neg A\land \neg B)},[17]the Sheffer stroke suffices to define the set of connectives{∧,∨,¬}{\displaystyle \{\land ,\lor ,\neg \}},[17]which is shown to be truth-functionally complete by theDisjunctive Normal Form Theorem.[17] Expressed in terms of NAND↑{\displaystyle \uparrow }, the usual operators of propositional logic are:
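The reduction described above can be checked exhaustively. In the sketch below (the underscore-suffixed names are just to avoid Python keywords), each standard connective is defined using NAND alone and compared against its usual meaning on every row of the truth table; the non-associativity noted earlier is verified as well:

```python
from itertools import product

def nand(a, b):
    return not (a and b)

# Each connective defined from NAND alone:
def not_(a):        return nand(a, a)
def and_(a, b):     return nand(nand(a, b), nand(a, b))   # NOT of NAND
def or_(a, b):      return nand(nand(a, a), nand(b, b))   # De Morgan
def implies_(a, b): return nand(a, nand(b, b))            # not(a and not b)

for a, b in product([True, False], repeat=2):
    assert not_(a) == (not a)
    assert and_(a, b) == (a and b)
    assert or_(a, b) == (a or b)
    assert implies_(a, b) == ((not a) or b)

# Commutative but not associative:
assert nand(True, False) == nand(False, True)
assert nand(nand(True, True), False) != nand(True, nand(True, False))
print("NAND definitions verified")
```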
https://en.wikipedia.org/wiki/Sheffer_stroke
Inlogic circuits, theToffoli gate, also known as theCCNOT gate(“controlled-controlled-not”), invented byTommaso Toffoliin 1980[1]is aCNOTgate with two control bits and one target bit. That is, the target bit (third bit) will be inverted if the first and second bits are both 1. It is a universal reversible logic gate, which means that any classicalreversible circuitcan be constructed from Toffoli gates. There is also aquantum-computingversion where the bits are replaced byqubits. Thetruth tableandpermutation matrixare as follows (the permutation can be written (7,8) incycle notation): [1000000001000000001000000001000000001000000001000000000100000010]{\displaystyle {\begin{bmatrix}1&0&0&0&0&0&0&0\\0&1&0&0&0&0&0&0\\0&0&1&0&0&0&0&0\\0&0&0&1&0&0&0&0\\0&0&0&0&1&0&0&0\\0&0&0&0&0&1&0&0\\0&0&0&0&0&0&0&1\\0&0&0&0&0&0&1&0\\\end{bmatrix}}} An input-consuminglogic gateLis reversible if it meets the following conditions: (1)L(x) =yis a gate where for any outputy, there is a unique inputx; (2) The gateLis reversible if there is a gateL´(y) =xwhich mapsytox, for ally. An example of a reversible logic gate is aNOT, which can be described from its truth table below: The commonANDgate is not reversible, because the inputs 00, 01 and 10 are all mapped to the output 0. Reversible gates have been studied since the 1960s. The original motivation was that reversible gates dissipate less heat (or, in principle, no heat).[2] More recent motivation comes fromquantum computing. Inquantum mechanicsthe quantum state can evolve in two ways: bySchrödinger's equation(unitary transformations), or by theircollapse. Logic operations for quantum computers, of which the Toffoli gate is an example, are unitary transformations and therefore evolve reversibly.[3] The classical Toffoli gate implemented in the hardware description languageVerilog: Any reversible gate that consumes its inputs and allows all input computations must have no more input bits than output bits, by thepigeonhole principle. 
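The Verilog listing mentioned above is not reproduced here; as an illustration, the gate's classical behavior can instead be modeled in a few lines of Python, confirming that it is a permutation of the eight input triples and is its own inverse:

```python
from itertools import product

def toffoli(a, b, c):
    """CCNOT: invert the target bit c iff both control bits a, b are 1."""
    return (a, b, c ^ (a & b))

# The gate is a bijection on the 8 input triples (i.e. reversible) ...
outputs = [toffoli(*bits) for bits in product((0, 1), repeat=3)]
print(len(set(outputs)))     # 8 distinct outputs

# ... and it is self-inverse: applying it twice restores the input.
print(all(toffoli(*toffoli(a, b, c)) == (a, b, c)
          for a, b, c in product((0, 1), repeat=3)))   # True
```

Only the inputs 110 and 111 are exchanged, matching the transposition (7,8) in the cycle notation of the permutation matrix above.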
For one input bit, there are two possible reversible gates. One of them is NOT. The other is the identity gate, which maps its input to the output unchanged. For two input bits, the only non-trivial gate (up to symmetry) is the controlled NOT gate (CNOT), which XORs the first bit to the second bit and leaves the first bit unchanged. [1000010000010010]{\displaystyle {\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\\\end{bmatrix}}} However, these gates alone do not suffice: there are functions that cannot be computed using just them. For example, no circuit of NOT and CNOT gates can compute AND. In other words, the set consisting of NOT and XOR gates is not universal. The Toffoli gate, proposed in 1980 by Toffoli, makes it possible to compute an arbitrary function using reversible gates.[1] It can also be described as mapping bits {a, b, c} to {a, b, c XOR (a AND b)}. This can also be understood as a modulo operation on bit c: {a, b, c} → {a, b, (c + ab) mod 2}, often written as {a, b, c} → {a, b, c ⨁ ab}.[4] The Toffoli gate is universal; this means that for any Boolean function f(x1, x2, ..., xm), there is a circuit consisting of Toffoli gates that takes x1, x2, ..., xm and some extra bits set to 0 or 1 and outputs x1, x2, ..., xm, f(x1, x2, ..., xm), and some extra bits (called garbage). A NOT gate, for example, can be constructed from a Toffoli gate by setting the three input bits to {a, 1, 1}, making the third output bit (1 XOR (a AND 1)) = NOT a; (a AND b) is the third output bit from {a, b, 0}. Essentially, this means that one can use Toffoli gates to build systems that will perform any desired Boolean function computation in a reversible manner. Any reversible gate can be implemented on a quantum computer, and hence the Toffoli gate is also a quantum operator. However, the Toffoli gate cannot be used for universal quantum computation, though it does mean that a quantum computer can implement all possible classical computations.
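The NOT and AND constructions just described can be spelled out concretely (the function names below are this sketch's own):

```python
def toffoli(a, b, c):
    """CCNOT: invert the target bit c iff both control bits are 1."""
    return (a, b, c ^ (a & b))

def not_gate(a):
    # Inputs {a, 1, 1}: third output is 1 XOR (a AND 1) = NOT a
    return toffoli(a, 1, 1)[2]

def and_gate(a, b):
    # Inputs {a, b, 0}: third output is 0 XOR (a AND b) = a AND b
    return toffoli(a, b, 0)[2]

print([not_gate(a) for a in (0, 1)])                      # [1, 0]
print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 0, 0, 1]
```

Since {NOT, AND} (together with fan-out via extra bits) suffices for all Boolean functions, this fixing of control or target inputs is exactly what makes the Toffoli gate classically universal.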
The Toffoli gate has to be implemented along with some inherently quantum gate(s) in order to be universal for quantum computation. Specifically, any single-qubit gate with real coefficients that can create a nontrivial quantum state suffices.[11] A Toffoli gate based on quantum mechanics was successfully realized in January 2009 at the University of Innsbruck, Austria.[12] While a circuit-model implementation of an n-qubit Toffoli requires at least 2n{\displaystyle 2n} CNOT gates,[13] the best known upper bound stands at 6n−12{\displaystyle 6n-12} CNOT gates.[14] It has been suggested that trapped-ion quantum computers may be able to implement an n-qubit Toffoli gate directly.[15] The application of many-body interactions could be used for direct operation of the gate in trapped-ion, Rydberg-atom, and superconducting-circuit implementations.[16][17][18][19][20][21] Operating within a dark-state manifold, the Khazali–Mølmer Cn-NOT gate[17] requires only three pulses, departing from the circuit-model paradigm. The iToffoli gate was implemented in a single step using three superconducting qubits with pairwise coupling.[22]
https://en.wikipedia.org/wiki/Toffoli_gate
Stoic logic is the system of propositional logic developed by the Stoic philosophers in ancient Greece. It was one of the two great systems of logic in the classical world. It was largely built and shaped by Chrysippus, the third head of the Stoic school, in the 3rd century BCE. Chrysippus's logic differed from Aristotle's term logic because it was based on the analysis of propositions rather than terms. The smallest unit in Stoic logic is an assertible (the Stoic equivalent of a proposition), which is the content of a statement such as "it is day". Assertibles have a truth value that may depend on when they are expressed: the assertible "it is night", for example, is true only when it is in fact night.[1] In contrast, Aristotelian propositions strongly affirm or deny a predicate of a subject and seek to have their truth validated or falsified independent of context. Compound assertibles can be built up from simple ones through the use of logical connectives. The resulting syllogistic was grounded on five basic indemonstrable arguments to which all other syllogisms were claimed to be reducible.[2] The linguistic orientation of Stoic logic made it difficult for its students, even within the Stoic school.[3] Towards the end of antiquity Stoic logic was neglected in favour of Aristotle's logic, and as a result the Stoic writings on logic did not survive; the only accounts of it were incomplete reports by other writers. Knowledge about Stoic logic as a system was lost until the 20th century, when logicians familiar with the modern propositional calculus reappraised the ancient accounts of it.
Stoicismis a school of philosophy which developed in theHellenistic periodaround a generation after the time ofAristotle.[4]The Stoics believed that the universe operated according to reason,i.e.by a God which is immersed in nature itself.[4]Logic (logike) was the part of philosophy which examined reason (logos).[5]To achieve a happy life—a life worth living—requires logical thought.[4]The Stoics held that an understanding of ethics was impossible without logic.[6]In the words of Inwood, the Stoics believed that:[7] Logic helps a person see what is the case, reason effectively about practical affairs, stand his or her ground amid confusion, differentiate the certain from the probable, and so forth. Aristotle'sterm logiccan be viewed as a logic of classification.[8]It makes use of four logical terms "all", "some", "is/are", and "is/are not" and to that extent is fairly static.[8][9]The Stoics needed a logic that examines choice and consequence.[6]The Stoics therefore developed a logic ofpropositionswhich uses connectives such as "if ... then", "either ... 
or", and "not both".[10]Such connectives are part of everyday reasoning.[10]Socratesin theDialogues of Platooften asks a fellow citizenifthey believe a certain thing; when they agree, Socrates then proceeds to show how the consequences are logically false or absurd, inferring that the original belief must be wrong.[10]Similar attempts at forensic reasoning must have been used in the law-courts, and they are a fundamental part of Greek mathematics.[10]Aristotle himself was familiar with propositions, and his pupilsTheophrastusandEudemushad examinedhypothetical syllogisms, but there was no attempt by thePeripatetic schoolto develop these ideas into a system of logic.[11] The Stoic tradition of logic originated in the 4th-century BCE in a different school of philosophy known as theMegarian school.[12]It was two dialecticians of this school,Diodorus Cronusand his pupilPhilo, who developed their own theories ofmodalitiesand ofconditional propositions.[12]The founder of Stoicism,Zeno of Citium, studied under the Megarians and he was said to have been a fellow pupil with Philo.[13]However, the outstanding figure in the development of Stoic logic wasChrysippus of Soli(c. 279 – c. 
206 BCE), the third head of the Stoic school.[12]Chrysippus shaped much of Stoic logic as we know it creating a system of propositional logic.[14]As a logician Chrysippus is sometimes said to rival Aristotle in stature.[13]The logical writings by Chrysippus are, however, almost entirely lost,[12]instead his system has to be reconstructed from the partial and incomplete accounts preserved in the works of later authors such asSextus Empiricus,Diogenes Laërtius, andGalen.[13] To the Stoics, logic was a wide field of knowledge which included the study oflanguage,grammar,rhetoricandepistemology.[5]However, all of these fields were interrelated, and the Stoics developed their logic (or "dialectic") within the context of their theory of language and epistemology.[15] The Stoics held that any meaningful utterance will involve three items: the sounds uttered; the thing which is referred to or described by the utterance; and an incorporeal item—thelektón(sayable)—that which is conveyed in the language.[16]Thelektonis not a statement but the content of a statement, and it corresponds to a complete utterance.[17][18]Alektoncan be something such as a question or a command, but Stoic logic operates on thoselektawhich are called "assertibles" (axiomata), described as a proposition which is either true or false and which affirms or denies.[17][19]Examples of assertibles include "it is night", "it is raining this afternoon", and "no one is walking."[20][21]The assertibles aretruth-bearers.[22]They can never be true and false at the same time (law of noncontradiction) and they must beat leasttrue or false (law of excluded middle).[23]The Stoics catalogued these simple assertibles according to whether they are affirmative or negative, and whether they are definite or indefinite (or both).[24]The assertibles are much like modernpropositions, however their truth value can change depending onwhenthey are asserted.[1]Thus an assertible such as "it is night" will only be true when it is 
night and not when it is day.[19] Simple assertibles can be connected to each other to form compound or non-simple assertibles.[25]This is achieved through the use oflogical connectives.[25]Chrysippus seems to have been responsible for introducing the three main types of connectives: theconditional(if),conjunctive(and), anddisjunctive(or).[26]A typical conditional takes the form of "if p then q";[27]whereas a conjunction takes the form of "both p and q";[27]and a disjunction takes the form of "either p or q".[28]Theorthey used isexclusive, unlike theinclusive orgenerally used in modern formal logic.[29]These connectives are combined with the use ofnotfor negation.[30]Thus the conditional can take the following four forms:[31] Later Stoics added more connectives: the pseudo-conditional took the form of "since p then q"; and the causal assertible took the form of "because p then q".[a]There was also a comparative (or dissertive): "more/less (likely) p than q".[32] Logical connectives Assertibles can also be distinguished by theirmodal properties[b]—whether they are possible, impossible, necessary, or non-necessary.[33]In this the Stoics were building on an earlier Megarian debate initiated by Diodorus Cronus.[33]Diodorus had definedpossibilityin a way which seemed to adopt a form offatalism.[34]Diodorus definedpossibleas "that which either is or will be true".[35]Thus there are no possibilities that are forever unrealised, whatever is possible is or one day will be true.[34]His pupil Philo, rejecting this, definedpossibleas "that which is capable of being true by the proposition's own nature",[35]thus a statement like "this piece of wood can burn" ispossible, even if it spent its entire existence on the bottom of the ocean.[36]Chrysippus, on the other hand, was a causal determinist: he thought that true causes inevitably give rise to their effects and that all things arise in this way.[37]But he was not a logical determinist or fatalist: he wanted to distinguish 
between possible and necessary truths.[37]Thus he took a middle position between Diodorus and Philo, combining elements of both their modal systems.[38]Chrysippus's set of Stoic modal definitions was as follows:[39] In Stoic logic, an argument (λόγος) is defined as a compound or system of premisses (λήμματα) and a conclusion (ἐπιφορά, συμπέρασμα).[40][41]A typical Stoicsyllogismis: It has a non-simple assertible for the first premise ("If it is day, it is light") and a simple assertible for the second premise ("It is day").[41]The second premise doesn'talwayshave to be simple but it will have fewer components than the first.[41] In more formal terms this type of syllogism is:[19] As with Aristotle's term logic, Stoic logic also uses variables, but the values of the variables are propositions not terms.[42]Chrysippus listed five basic argument forms, which he regarded as true beyond dispute.[43][44][c]These five indemonstrable arguments are made up of conditional, conjunction, disjunction, and negation connectives,[45]and all other arguments are reducible to these five indemonstrable arguments.[18][46] There can be many variations of these five indemonstrable arguments.[47]For example the assertibles in the premises can be more complex, and the following syllogism is a valid example of the second indemonstrable (modus tollens):[31] Similarly one can incorporate negation into these arguments.[31]A valid example of the fourth indemonstrable (strongmodus tollendo ponensor exclusive disjunctive syllogism) is:[48] which, incorporating the principle ofdouble negation, is equivalent to:[48] Many arguments are not in the form of the five indemonstrables, and the task is to show how they can be reduced to one of the five types.[30]A simple example of Stoic reduction is reported bySextus Empiricus:[49] This can be reduced to two separate indemonstrable arguments of the second and third type:[50] The Stoics stated that complex syllogisms could be reduced to the indemonstrables 
through the use of four ground rules orthemata.[51][52]Of these fourthemata, only two have survived.[53][35]One, the so-called firstthema, was a rule of antilogism:[35] When from two [assertibles] a third follows, then from either of them together with the contradictory of the conclusion the contradictory of the other follows (Apuleius,De Interpretatione209. 9–14). In modern sequent:(p∧q)→r,p,¬r⊢¬q{\displaystyle (p\land q)\to r,\;p,\;\neg r\;\;\vdash \;\neg q}. The other, the thirdthema, was acut ruleby which chain syllogisms could be reduced to simple syllogisms.[e]The importance of these rules is not altogether clear.[54]In the 2nd-century BCEAntipater of Tarsusis said to have introduced a simpler method involving the use of fewerthemata, although few details survive concerning this.[54]In any case, thethematacannot have been a necessary part of every analysis.[55] Why should not the philosopher develop his own reason? You turn to vessels of crystal, I to the syllogism calledThe Liar; you to myrrhine glassware, I to the syllogism calledThe Denyer. 
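In modern terms, the validity of the indemonstrable forms and of the first thema can be checked semantically by truth table, taking the Stoic "or" as exclusive. A Python sketch (the encoding below is this example's own, not part of the ancient sources):

```python
from itertools import product

def xor(p, q):
    return p != q                       # the Stoic "or" is exclusive

def valid(premises, conclusion):
    """Valid iff no assignment makes all premises true and the conclusion false."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=2)
               if all(pr(*row) for pr in premises))

# Second indemonstrable (modus tollens): if p then q; not q; therefore not p.
print(valid([lambda p, q: (not p) or q, lambda p, q: not q],
            lambda p, q: not p))                                   # True

# Fourth indemonstrable: either p or q (exclusive); p; therefore not q.
print(valid([xor, lambda p, q: p], lambda p, q: not q))            # True
# With an inclusive "or" this form would be invalid:
print(valid([lambda p, q: p or q, lambda p, q: p],
            lambda p, q: not q))                                   # False

# The first thema in modern sequent form: (p and q) -> r, p, not r |- not q.
def first_thema_valid():
    for p, q, r in product([True, False], repeat=3):
        prem = ((not (p and q)) or r) and p and (not r)
        if prem and q:                  # premises true, conclusion "not q" false
            return False
    return True

print(first_thema_valid())                                         # True
```

The contrast between the exclusive and inclusive readings of "or" in the fourth form shows why the Stoic choice of connective matters to which argument forms come out valid.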
In addition to describing which inferences are valid ones, part of a Stoic's logical training was the enumeration and refutation of false arguments, including the identification of paradoxes.[56]A false argument could be one with a false premise or which is formally incorrect, however paradoxes represented a challenge to the basic logical notions of the Stoics such as truth or falsehood.[57]One famous paradox, known asThe Liar, asked "A man says he is lying; is what he says true or false?"—if the man says something true then it seems he is lying, but if he is lying then he is not saying something true, and so on.[58]Chrysippus is known to have written several books on this paradox, although it is not known what solution he offered for it.[59]Another paradox known as theSoritesor "Heap" asked "How many grains of wheat do you need before you get a heap?"[59]It was said to challenge the idea of true or false by offering up the possibility of vagueness.[59]The response of Chrysippus however was: "That doesn't harm me, for like a skilled driver I shall restrain my horses before I reach the edge ... 
In like manner I restrain myself in advance and stop replying to sophistical questions."[59] However, this mastery of logical puzzles, study of paradoxes, and dissection of arguments[60] was not an end in itself; rather, its purpose was for the Stoics to cultivate their rational powers.[61] Stoic logic was thus a method of self-discovery.[62] Its aim was to enable ethical reflection, permit secure and confident arguing, and lead the pupil to truth.[60] The end result would be thought that is consistent, clear and precise, and which exposes confusion, murkiness and inconsistency.[63] Diogenes Laërtius gives a list of dialectical virtues, which were probably invented by Chrysippus:[64] First he mentions aproptosia, which means literally 'not falling forward' and is defined as 'knowledge of when one should give assent or not' (give assent); next aneikaiotes, 'unhastiness', defined as 'strong-mindedness against the probable (or plausible), so as not to give in to it'; third, anelenxia, 'irrefutability', the definition of which is 'strength in argument, so as not to be driven by it to the contradictory'; and fourth, amataiotes, 'lack of emptyheadedness', defined as 'a disposition which refers impressions (phantasiai) to the correct logos'.[64] For around five hundred years Stoic logic was one of the two great systems of logic.[65] The logic of Chrysippus was discussed alongside that of Aristotle, and it may well have been more prominent, since Stoicism was the dominant philosophical school.[66] From a modern perspective, Aristotle's term logic and the Stoic logic of propositions appear complementary, but they were sometimes regarded as rival systems.[30] In late antiquity the Stoic school fell into decline, and the last pagan philosophical school, the Neoplatonists, adopted Aristotle's logic for their own.[67] Only elements of Stoic logic made their way into the logical writings of later commentators such as Boethius, transmitting confused parts of Stoic logic to the Middle 
Ages.[66] Propositional logic was redeveloped by Peter Abelard in the 12th century, but by the mid-15th century the only logic being studied was a simplified version of Aristotle's.[68] In the 18th century Immanuel Kant declared that "since Aristotle ... logic has not been able to advance a single step, and is thus to all appearance a closed and complete body of doctrine."[69] To 19th-century historians, who believed that Hellenistic philosophy represented a decline from that of Plato and Aristotle, Stoic logic was viewed with contempt.[70] Carl Prantl thought that Stoic logic was "dullness, triviality, and scholastic quibbling" and welcomed the fact that the works of Chrysippus were no longer extant.[71] Eduard Zeller remarked that "the whole contribution of the Stoics to the field of logic consists in their having clothed the logic of the Peripatetics with a new terminology."[72] Although developments in modern logic that parallel Stoic logic[73] began in the middle of the 19th century with the work of George Boole and Augustus De Morgan,[68] Stoic logic itself was only reappraised in the 20th century,[71] beginning with the work of the Polish logician Jan Łukasiewicz[71] and Benson Mates.[71] What we see as a result is a close similarity between [these] methods of reasoning and the behaviour of digital computers. ... The code happens to come from the nineteenth-century logician and mathematician George Boole, whose aim was to codify the relations studied much earlier by Chrysippus (albeit with greater abstraction and sophistication). Later generations built on Boole's insights ... but the logic that made it all possible was the interconnected logic of an interconnected universe, discovered by the ancient Chrysippus, who labored long ago under an old Athenian stoa.[74] a.^ The minimum requirement for a conditional is that the consequent follows from the antecedent.[27] The pseudo-conditional adds that the antecedent must also be true. 
The causal assertible adds an asymmetry rule such that if p is the cause/reason for q, then q cannot be the cause/reason for p. Bobzien 1999, p. 109. b.^ "Stoic modal logic is not a logic of modal propositions (e.g., propositions of the type 'It is possible that it is day' ...) ... instead, their modal theory was about non-modalized propositions like 'It is day', insofar as they are possible, necessary, and so forth." Bobzien 1999, p. 117. c.^ Most of these argument forms had already been discussed by Theophrastus, but: "It is plain that even if Theophrastus discussed (1)–(5), he did not anticipate Chrysippus' achievement. ... his Aristotelian approach to the study and organization of argument-forms would have given his discussion of mixed hypothetical syllogisms an utterly unStoical aspect." Barnes 1999, p. 83. d.^ These Latin names date from the Middle Ages. Shenefelt & White 2013, p. 288. e.^ For a brief summary of these themata see Susanne Bobzien's Ancient Logic article for the Stanford Encyclopedia of Philosophy. For a detailed (and technical) analysis of the themata, including a tentative reconstruction of the two lost ones, see Bobzien 1999, pp. 137–148, Long & Sedley 1987, §36 HIJ.
https://en.wikipedia.org/wiki/Stoic_logic
A syllogism (Ancient Greek: συλλογισμός, syllogismos, 'conclusion, inference') is a kind of logical argument that applies deductive reasoning to arrive at a conclusion based on two propositions that are asserted or assumed to be true. In its earliest form (defined by Aristotle in his 350 BC book Prior Analytics), a deductive syllogism arises when two true premises (propositions or statements) validly imply a conclusion, or the main point that the argument aims to get across.[1] For example, knowing that all men are mortal (major premise) and that Socrates is a man (minor premise), we may validly conclude that Socrates is mortal. Syllogistic arguments are usually represented in a three-line form: All men are mortal. Socrates is a man. Therefore, Socrates is mortal.[2] In antiquity, two rival syllogistic theories existed: Aristotelian syllogism and Stoic syllogism.[3] From the Middle Ages onwards, categorical syllogism and syllogism were usually used interchangeably. This article is concerned only with this historical use. The syllogism was at the core of historical deductive reasoning, whereby facts are determined by combining existing statements, in contrast to inductive reasoning, in which facts are predicted by repeated observations. Within some academic contexts, the syllogism has been superseded by first-order predicate logic following the work of Gottlob Frege, in particular his Begriffsschrift (Concept Script; 1879). 
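The three-line argument above can be read set-theoretically. This is a small illustrative sketch (the sets are stand-ins, not the article's notation):

```python
# "All men are mortal; Socrates is a man; therefore Socrates is mortal."
men = {"Socrates", "Plato"}
mortals = men | {"Bucephalus"}   # every man is mortal; some mortals are not men

# Major premise: All men are mortal (men is a subset of mortals).
assert men <= mortals
# Minor premise: Socrates is a man.
assert "Socrates" in men
# Conclusion follows: Socrates is mortal.
print("Socrates" in mortals)  # True
```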
Syllogism, being a method of valid logical reasoning, remains useful in most circumstances and in general-audience introductions to logic and clear thinking.[4][5] In antiquity, two rival syllogistic theories existed: Aristotelian syllogism and Stoic syllogism.[3] Aristotle defines the syllogism as "a discourse in which certain (specific) things having been supposed, something different from the things supposed results of necessity because these things are so."[6] Despite this very general definition, in Prior Analytics Aristotle limits himself to categorical syllogisms that consist of three categorical propositions, including categorical modal syllogisms.[7] The use of syllogisms as a tool for understanding can be dated back to the logical reasoning discussions of Aristotle. Before the mid-12th century, medieval logicians were familiar with only a portion of Aristotle's works, including such titles as Categories and On Interpretation, works that contributed heavily to the prevailing Old Logic, or logica vetus. The onset of a New Logic, or logica nova, arose alongside the reappearance of Prior Analytics, the work in which Aristotle developed his theory of the syllogism. Prior Analytics, upon rediscovery, was instantly regarded by logicians as "a closed and complete body of doctrine", leaving very little for thinkers of the day to debate or reorganize. Aristotle's theory of the syllogism for assertoric sentences was considered especially remarkable, with only small systematic changes occurring to the concept over time. This theory of the syllogism would not enter the context of the more comprehensive logic of consequence until logic began to be reworked in general in the mid-14th century by the likes of John Buridan. Aristotle's Prior Analytics did not, however, incorporate such a comprehensive theory of the modal syllogism—a syllogism that has at least one modalized premise, that is, a premise containing the modal words necessarily, possibly, or contingently. 
Aristotle's terminology in this aspect of his theory was deemed vague, and in many cases unclear, even contradicting some of his statements from On Interpretation. His original assertions on this specific component of the theory were left open to a considerable amount of conversation, resulting in a wide array of solutions put forth by commentators of the day. The system for modal syllogisms laid forth by Aristotle would ultimately be deemed unfit for practical use and would be replaced by new distinctions and new theories altogether. Boethius (c. 475–526) contributed an effort to make the ancient Aristotelian logic more accessible. While his Latin translation of Prior Analytics went primarily unused before the 12th century, his textbooks on the categorical syllogism were central to expanding the syllogistic discussion. Rather than in any additions that he personally made to the field, Boethius' logical legacy lies in his effective transmission of prior theories to later logicians, as well as his clear and primarily accurate presentations of Aristotle's contributions. Another of medieval logic's first contributors from the Latin West, Peter Abelard (1079–1142), gave his own thorough evaluation of the syllogism concept and accompanying theory in the Dialectica—a discussion of logic based on Boethius' commentaries and monographs. His perspective on syllogisms can be found in other works as well, such as Logica Ingredientibus. With the help of Abelard's distinction between de dicto modal sentences and de re modal sentences, medieval logicians began to shape a more coherent concept of Aristotle's modal syllogism model. The French philosopher Jean Buridan (c. 1300–1361), whom some consider the foremost logician of the later Middle Ages, contributed two significant works: Treatise on Consequence and Summulae de Dialectica, in which he discussed the concept of the syllogism, its components and distinctions, and ways to use the tool to expand its logical capability. 
For 200 years after Buridan's discussions, little was said about syllogistic logic. Historians of logic have assessed that the primary changes in the post-Middle Age era were changes in the public's awareness of original sources, a lessening of appreciation for the logic's sophistication and complexity, and an increase in logical ignorance—so that logicians of the early 20th century came to view the whole system as ridiculous.[8] The Aristotelian syllogism dominated Western philosophical thought for many centuries. Syllogism itself is about drawing valid conclusions from assumptions (axioms), rather than about verifying the assumptions. However, people over time focused on the logic aspect, forgetting the importance of verifying the assumptions. In the 17th century, Francis Bacon emphasized that experimental verification of axioms must be carried out rigorously, and that one cannot take the syllogism itself as the best way to draw conclusions in nature.[9] Bacon proposed a more inductive approach to the observation of nature, which involves experimentation and leads to discovering and building on axioms to create a more general conclusion.[9] Yet a full method of drawing conclusions in nature is not the scope of logic or syllogism, and the inductive method was covered in Aristotle's subsequent treatise, the Posterior Analytics. In the 19th century, modifications to the syllogism were incorporated to deal with disjunctive ("A or B") and conditional ("if A then B") statements. Immanuel Kant famously claimed, in Logic (1800), that logic was the one completed science, and that Aristotelian logic more or less included everything about logic that there was to know. (This work is not necessarily representative of Kant's mature philosophy, which is often regarded as an innovation to logic itself.) Kant's opinion stood unchallenged in the West until 1879, when Gottlob Frege published his Begriffsschrift (Concept Script). 
This introduced a calculus, a method of representing categorical statements (and statements that are not provided for in syllogism as well) by the use of quantifiers and variables. A noteworthy exception is the logic developed in Bernard Bolzano's work Wissenschaftslehre (Theory of Science, 1837), the principles of which were applied as a direct critique of Kant in the posthumously published work New Anti-Kant (1850). The work of Bolzano had been largely overlooked until the late 20th century, among other reasons because of the intellectual environment at the time in Bohemia, which was then part of the Austrian Empire. In the last 20 years, Bolzano's work has resurfaced and become the subject of both translation and contemporary study. This led to the rapid development of sentential logic and first-order predicate logic, subsuming syllogistic reasoning, which was therefore, after 2000 years, suddenly considered obsolete by many.[original research?] The Aristotelian system is explicated in modern fora of academia primarily in introductory material and historical study. One notable exception to this modern relegation is the continued application of Aristotelian logic by officials of the Congregation for the Doctrine of the Faith and the Apostolic Tribunal of the Roman Rota, which still requires that any arguments crafted by advocates be presented in syllogistic format. George Boole's unwavering acceptance of Aristotle's logic is emphasized by the historian of logic John Corcoran in an accessible introduction to Laws of Thought.[10][11] Corcoran also wrote a point-by-point comparison of Prior Analytics and Laws of Thought.[12] According to Corcoran, Boole fully accepted and endorsed Aristotle's logic. Boole's goals were "to go under, over, and beyond" Aristotle's logic by:[12] More specifically, Boole agreed with what Aristotle said; Boole's 'disagreements', if they might be called that, concern what Aristotle did not say. 
First, in the realm of foundations, Boole reduced Aristotle's four propositional forms to one form, the form of equations, which by itself was a revolutionary idea. Second, in the realm of logic's problems, Boole's addition of equation solving to logic—another revolutionary idea—involved Boole's doctrine that Aristotle's rules of inference (the "perfect syllogisms") must be supplemented by rules for equation solving. Third, in the realm of applications, Boole's system could handle multi-term propositions and arguments, whereas Aristotle could handle only two-termed subject-predicate propositions and arguments. For example, Aristotle's system could not deduce "No quadrangle that is a square is a rectangle that is a rhombus" from "No square that is a quadrangle is a rhombus that is a rectangle" or from "No rhombus that is a rectangle is a square that is a quadrangle." A categorical syllogism consists of three parts: a major premise, a minor premise, and a conclusion. Each part is a categorical proposition, and each categorical proposition contains two categorical terms.[13] In Aristotle, each of the premises is in the form "All S are P", "Some S are P", "No S are P" or "Some S are not P", where "S" is the subject-term and "P" is the predicate-term. More modern logicians allow some variation. Each of the premises has one term in common with the conclusion: in the major premise, this is the major term (i.e., the predicate of the conclusion); in the minor premise, this is the minor term (i.e., the subject of the conclusion). For example: Each of the three distinct terms represents a category. From the example above—humans, mortal, and Greeks—mortal is the major term and Greeks the minor term. The premises also have one term in common with each other, which is known as the middle term; in this example, humans. Both of the premises are universal, as is the conclusion. Here, the major term is die, the minor term is men, and the middle term is mortals. Again, both premises are universal, hence so is the conclusion. 
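The four sentence forms and the major/minor/middle terms can be illustrated with sets. A minimal sketch, assuming a set-membership reading of the propositions (the function and set names are illustrative, not the article's notation):

```python
# The four categorical forms read over finite sets.
def a(S, P): return S <= P        # All S are P
def e(S, P): return not (S & P)   # No S is P
def i(S, P): return bool(S & P)   # Some S is P
def o(S, P): return bool(S - P)   # Some S is not P

greeks  = {"Socrates", "Plato"}    # minor term (subject of the conclusion)
humans  = greeks | {"Cicero"}      # middle term (shared by both premises)
mortals = humans | {"Bucephalus"}  # major term (predicate of the conclusion)

print(a(humans, mortals))  # major premise: All humans are mortal -> True
print(a(greeks, humans))   # minor premise: All Greeks are humans -> True
print(a(greeks, mortals))  # conclusion:    All Greeks are mortal -> True
```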
A polysyllogism, or a sorites, is a form of argument in which a series of incomplete syllogisms is so arranged that the predicate of each premise forms the subject of the next until the subject of the first is joined with the predicate of the last in the conclusion. For example, one might argue that all lions are big cats, all big cats are predators, and all predators are carnivores. To conclude that therefore all lions are carnivores is to construct a sorites argument. There are infinitely many possible syllogisms, but only 256 logically distinct types and only 24 valid types (enumerated below). A syllogism takes the form (note: M – middle, S – subject, P – predicate): The premises and conclusion of a syllogism can be any of four types, which are labeled by letters[14] as follows. The meaning of the letters is given by the table: In Prior Analytics, Aristotle uses mostly the letters A, B, and C (Greek letters alpha, beta, and gamma) as term placeholders, rather than giving concrete examples. It is traditional to use is rather than are as the copula, hence All A is B rather than All As are Bs. It is traditional and convenient practice to use a, e, i, o as infix operators so the categorical statements can be written succinctly. The following table shows the longer form, the succinct shorthand, and equivalent expressions in predicate logic: The convention here is that the letter S is the subject of the conclusion, P is the predicate of the conclusion, and M is the middle term. The major premise links M with P and the minor premise links M with S. However, the middle term can be either the subject or the predicate of each premise where it appears. The differing positions of the major, minor, and middle terms give rise to another classification of syllogisms known as the figure. 
Given that in each case the conclusion is S-P, the four figures are: (Note, however, that, following Aristotle's treatment of the figures, some logicians—e.g., Peter Abelard and Jean Buridan—reject the fourth figure as a figure distinct from the first.) Putting it all together, there are 256 possible types of syllogisms (or 512 if the order of the major and minor premises is changed, though this makes no difference logically). Each premise and the conclusion can be of type A, E, I or O, and the syllogism can be any of the four figures. A syllogism can be described briefly by giving the letters for the premises and conclusion followed by the number for the figure. For example, the syllogism BARBARA below is AAA-1, or "A-A-A in the first figure". The vast majority of the 256 possible forms of syllogism are invalid (the conclusion does not follow logically from the premises). The table below shows the valid forms. Even some of these are sometimes considered to commit the existential fallacy, meaning they are invalid if they mention an empty category. These controversial patterns are marked in italics. All but four of the patterns in italics (felapton, darapti, fesapo and bamalip) are weakened moods, i.e. it is possible to draw a stronger conclusion from the premises. The letters A, E, I, and O have been used since the medieval Schools to form mnemonic names for the forms as follows: 'Barbara' stands for AAA, 'Celarent' for EAE, etc. Next to each premise and conclusion is a shorthand description of the sentence. So in AAI-3, the premise "All squares are rectangles" becomes "MaP"; the symbols mean that the first term ("square") is the middle term, the second term ("rectangle") is the predicate of the conclusion, and the relationship between the two terms is labeled "a" (All M are P). The following table shows all syllogisms that are essentially different. The similar syllogisms share the same premises, just written in a different way. 
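The mood-and-figure scheme lends itself to mechanical checking. As a sketch (not part of the source), one can decide validity by brute force: the four sentence forms depend only on which of the eight Venn regions over S, P, M are empty, so enumerating occupancy patterns of those regions covers every case:

```python
from itertools import product

def holds(form, X, Y, model):
    # model: list of occupied regions, each region a triple of 0/1 flags
    some_xy = any(r[X] and r[Y] for r in model)
    some_x_not_y = any(r[X] and not r[Y] for r in model)
    if form == 'a': return not some_x_not_y   # All X are Y
    if form == 'e': return not some_xy        # No X is Y
    if form == 'i': return some_xy            # Some X is Y
    if form == 'o': return some_x_not_y       # Some X is not Y

S, P, M = 0, 1, 2
regions = list(product([0, 1], repeat=3))  # the 8 Venn regions over S, P, M

def valid(major, minor, concl, figure):
    # subject/predicate positions of the premises in each figure
    pairs = {1: ((M, P), (S, M)), 2: ((P, M), (S, M)),
             3: ((M, P), (M, S)), 4: ((P, M), (M, S))}[figure]
    for occupied in product([0, 1], repeat=8):
        model = [r for r, occ in zip(regions, occupied) if occ]
        if (holds(major, *pairs[0], model) and holds(minor, *pairs[1], model)
                and not holds(concl, S, P, model)):
            return False   # found a counter-model
    return True

print(valid('a', 'a', 'a', 1))  # Barbara (AAA-1): True
print(valid('a', 'a', 'i', 1))  # Barbari (AAI-1): False without existential import
```

On this reading the universal forms carry no existential import, which is why the weakened mood Barbari fails on the empty model.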
For example, "Some pets are kittens" (SiM in Darii) could also be written as "Some kittens are pets" (MiS in Datisi). In the Venn diagrams, the black areas indicate no elements, and the red areas indicate at least one element. In the predicate logic expressions, a horizontal bar over an expression means to negate ("logical not") the result of that expression. It is also possible to use graphs (consisting of vertices and edges) to evaluate syllogisms.[15] Similar: Cesare (EAE-2). Camestres is essentially like Celarent with S and P exchanged. Similar: Calemes (AEE-4). Similar: Datisi (AII-3). Disamis is essentially like Darii with S and P exchanged. Similar: Dimatis (IAI-4). Similar: Festino (EIO-2), Ferison (EIO-3), Fresison (EIO-4). Bamalip is exactly like Barbari with S and P exchanged. Similar: Cesaro (EAO-2). Similar: Calemos (AEO-4). Similar: Fesapo (EAO-4). This table shows all 24 valid syllogisms, represented by Venn diagrams. Columns indicate similarity, and are grouped by combinations of premises. Borders correspond to conclusions. Those with an existential assumption are dashed. With Aristotle, we may distinguish singular terms, such as Socrates, and general terms, such as Greeks. Aristotle further distinguished types (a) and (b): Such a predication is known as distributive, as opposed to non-distributive, as in Greeks are numerous. It is clear that Aristotle's syllogism works only for distributive predication, since we cannot reason: All Greeks are animals, animals are numerous, therefore all Greeks are numerous. In Aristotle's view singular terms were of type (a), and general terms of type (b). Thus, Men can be predicated of Socrates but Socrates cannot be predicated of anything. Therefore, for a term to be interchangeable—to be either in the subject or predicate position of a proposition in a syllogism—the terms must be general terms, or categorical terms as they came to be called. 
Consequently, the propositions of a syllogism should be categorical propositions (both terms general), and syllogisms that employ only categorical terms came to be called categorical syllogisms. It is clear that nothing would prevent a singular term occurring in a syllogism—so long as it was always in the subject position—however, such a syllogism, even if valid, is not a categorical syllogism. An example is: Socrates is a man, all men are mortal, therefore Socrates is mortal. Intuitively this is as valid as: All Greeks are men, all men are mortal, therefore all Greeks are mortals. To argue that its validity can be explained by the theory of syllogism would require that we show that Socrates is a man is the equivalent of a categorical proposition. It can be argued that Socrates is a man is equivalent to All that are identical to Socrates are men, so our non-categorical syllogism can be justified by use of this equivalence and then citing BARBARA. If a statement includes a term such that the statement is false if the term has no instances, then the statement is said to have existential import with respect to that term. It is ambiguous whether a universal statement of the form All A is B is to be considered true, false, or even meaningless if there are no As. If it is considered false in such cases, then the statement All A is B has existential import with respect to A. It is claimed Aristotle's logic system does not cover cases where there are no instances. Aristotle's goal was to develop a logic for science. He relegates fictions, such as mermaids and unicorns, to the realms of poetry and literature. In his mind, they exist outside the ambit of science, which is why he leaves no room for such non-existent entities in his logic. This is a thoughtful choice, not an inadvertent omission. Technically, Aristotelian science is a search for definitions, where a definition is "a phrase signifying a thing's essence." 
Because non-existent entities cannot be anything, they do not, in Aristotle's mind, possess an essence. This is why he leaves no place for fictional entities like goat-stags (or unicorns).[16] However, many logic systems developed since do consider the case where there may be no instances. Medieval logicians were aware of the problem of existential import and maintained that negative propositions do not carry existential import, and that positive propositions with subjects that do not supposit are false. The following problems arise: For example, if it is accepted that AiB is false if there are no As and AaB entails AiB, then AiB has existential import with respect to A, and so does AaB. Further, if it is accepted that AiB entails BiA, then AiB and AaB have existential import with respect to B as well. Similarly, if AoB is false if there are no As, and AeB entails AoB, and AeB entails BeA (which in turn entails BoA), then both AeB and AoB have existential import with respect to both A and B. It follows immediately that all universal categorical statements have existential import with respect to both terms. If AaB and AeB is a fair representation of the use of statements in normal natural language for All A is B and No A is B respectively, then the following example consequences arise: If it is ruled that no universal statement has existential import, then the square of opposition fails in several respects (e.g. AaB does not entail AiB) and a number of syllogisms are no longer valid (e.g. BaC, AaB → AiC). These problems and paradoxes arise in both natural language statements and statements in syllogism form because of ambiguity, in particular ambiguity with respect to All. If "Fred claims all his books were Pulitzer Prize winners", is Fred claiming that he wrote any books? If not, then is what he claims true? Suppose Jane says none of her friends are poor; is that true if she has no friends? 
The first-order predicate calculus avoids such ambiguity by using formulae that carry no existential import with respect to universal statements. Existential claims must be explicitly stated. Thus, natural language statements of the forms All A is B, No A is B, Some A is B, and Some A is not B can be represented in first-order predicate calculus in which any existential import with respect to terms A and/or B is either explicit or not made at all. Consequently, the four forms AaB, AeB, AiB, and AoB can be represented in first-order predicate calculus in every combination of existential import, so it can be established which construal, if any, preserves the square of opposition and the validity of the traditionally valid syllogisms. Strawson claims such a construal is possible, but the results are such that, in his view, the answer to question (e) above is no. People often make mistakes when reasoning syllogistically.[17] For instance, from the premises some A are B, some B are C, people tend to come to the definitive conclusion that therefore some A are C.[18][19] However, this does not follow according to the rules of classical logic. For instance, while some cats (A) are black things (B), and some black things (B) are televisions (C), it does not follow from the premises that some cats (A) are televisions (C). This is because in the structure of the syllogism invoked (i.e. III-1) the middle term is not distributed in either the major premise or the minor premise, a pattern called the "fallacy of the undistributed middle". Because of this, it can be hard to follow formal logic, and a closer eye is needed in order to ensure that an argument is, in fact, valid.[20] Determining the validity of a syllogism involves determining the distribution of each term in each statement, meaning whether all members of that term are accounted for. In simple syllogistic patterns, the fallacies of invalid patterns are:
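The point about existential import can be illustrated with a small model check. A sketch under the usual first-order reading (the domain and set names are illustrative):

```python
# "All A is B" read as  forall x (A(x) -> B(x))  is true when A is empty,
# while "Some A is B" read as  exists x (A(x) and B(x))  is false:
# the universal, on this reading, carries no existential import.
domain = [1, 2, 3]
A = set()            # no As at all
B = {1, 2}

all_a_is_b = all((x not in A) or (x in B) for x in domain)
some_a_is_b = any((x in A) and (x in B) for x in domain)

print(all_a_is_b)   # True: AaB holds vacuously
print(some_a_is_b)  # False: so AaB does not entail AiB on this construal
```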
https://en.wikipedia.org/wiki/Syllogism#Other_types
This is a list of topics around Boolean algebra and propositional logic.
https://en.wikipedia.org/wiki/Boolean_algebra_topics
A Boolean-valued function (sometimes called a predicate or a proposition) is a function of the type f : X → B, where X is an arbitrary set and where B is a Boolean domain, i.e. a generic two-element set (for example B = {0, 1}), whose elements are interpreted as logical values, for example 0 = false and 1 = true, i.e., a single bit of information. In the formal sciences, mathematics, mathematical logic, statistics, and their applied disciplines, a Boolean-valued function may also be referred to as a characteristic function, indicator function, predicate, or proposition. In all of these uses, it is understood that the various terms refer to a mathematical object and not the corresponding semiotic sign or syntactic expression. In formal semantic theories of truth, a truth predicate is a predicate on the sentences of a formal language, interpreted for logic, that formalizes the intuitive concept that is normally expressed by saying that a sentence is true. A truth predicate may have additional domains beyond the formal-language domain, if that is what is required to determine a final truth value.
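As a brief illustration (the names are hypothetical, not from the source), the indicator function of a subset S of X is a typical Boolean-valued function with B = {0, 1}:

```python
# indicator(S) builds f : X -> {0, 1}, the characteristic function of S.
def indicator(S):
    return lambda x: 1 if x in S else 0

X = range(6)
is_even = indicator({0, 2, 4})   # the predicate "x is even" on X

print([is_even(x) for x in X])  # [1, 0, 1, 0, 1, 0]
```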
https://en.wikipedia.org/wiki/Boolean-valued_function
In propositional logic and Boolean algebra, there is a duality between conjunction and disjunction,[1][2][3] also called the duality principle.[4][5][6] It is the most widely known example of duality in logic.[1] The duality consists in these metalogical theorems: The connectives may be defined in terms of each other as follows: Since the Disjunctive Normal Form Theorem shows that the set of connectives {\displaystyle \{\land ,\vee ,\neg \}} is functionally complete, these results show that the sets of connectives {\displaystyle \{\land ,\neg \}} and {\displaystyle \{\vee ,\neg \}} are themselves functionally complete as well. De Morgan's laws also follow from the definitions of these connectives in terms of each other, whichever direction is taken to do it.[1] The dual of a sentence is what you get by swapping all occurrences of {\textstyle \vee } and {\textstyle \land }, while also negating all propositional constants. For example, the dual of {\textstyle (A\land B\vee C)} would be {\textstyle (\neg A\vee \neg B\land \neg C)}. The dual of a formula {\textstyle \varphi } is notated as {\textstyle \varphi ^{*}}. The Duality Principle states that in classical propositional logic, any sentence is equivalent to the negation of its dual.[4][7] Assume {\displaystyle \varphi \models \psi }. Then {\displaystyle {\overline {\varphi }}\models {\overline {\psi }}} by uniform substitution of {\displaystyle \neg P_{i}} for {\displaystyle P_{i}}. 
Hence, {\displaystyle \neg \psi \models \neg \varphi } by contraposition; so finally, {\displaystyle \psi ^{D}\models \varphi ^{D}}, by the property that {\displaystyle \varphi ^{D}} is equivalent to {\displaystyle \neg {\overline {\varphi }}}, which was just proved above.[7] And since {\displaystyle \varphi ^{DD}=\varphi }, it is also true that {\displaystyle \varphi \models \psi } if, and only if, {\displaystyle \psi ^{D}\models \varphi ^{D}}.[7] And it follows, as a corollary, that if {\displaystyle \varphi \models \neg \psi }, then {\displaystyle \varphi ^{D}\models \neg \psi ^{D}}.[7] For a formula {\displaystyle \varphi } in disjunctive normal form, the formula {\displaystyle {\overline {\varphi }}^{D}} will be in conjunctive normal form, and given the result that § Negation is semantically equivalent to dual, it will be semantically equivalent to {\displaystyle \neg \varphi }.[8][9] This provides a procedure for converting between conjunctive normal form and disjunctive normal form.[10] Since the Disjunctive Normal Form Theorem shows that every formula of propositional logic is expressible in disjunctive normal form, every formula is also expressible in conjunctive normal form by means of effecting the conversion to its dual.[9][11][12]
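The Duality Principle can be checked by truth tables. Below is a sketch with an assumed tuple encoding of formulas (not the article's notation): a formula is a variable name, `('not', f)`, `('and', f, g)`, or `('or', f, g)`.

```python
from itertools import product

def dual(f):
    if isinstance(f, str):
        return ('not', f)                    # negate propositional constants
    op = f[0]
    if op == 'not':
        return ('not', dual(f[1]))
    swapped = 'or' if op == 'and' else 'and' # swap conjunction and disjunction
    return (swapped, dual(f[1]), dual(f[2]))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'and':
        return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)

phi = ('or', ('and', 'A', 'B'), 'C')         # (A and B) or C
neg_dual = ('not', dual(phi))                # the negation of its dual
same = all(ev(phi, dict(zip('ABC', vals))) == ev(neg_dual, dict(zip('ABC', vals)))
           for vals in product([False, True], repeat=3))
print(same)  # True: phi is equivalent to the negation of its dual
```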
https://en.wikipedia.org/wiki/Conjunction/disjunction_duality
In probabilistic logic, the Fréchet inequalities, also known as the Boole–Fréchet inequalities, are rules implicit in the work of George Boole[1][2] and explicitly derived by Maurice Fréchet[3][4] that govern the combination of probabilities about logical propositions or events logically linked together in conjunctions (AND operations) or disjunctions (OR operations), as in Boolean expressions or fault or event trees common in risk assessments, engineering design and artificial intelligence. These inequalities can be considered rules about how to bound calculations involving probabilities without assuming independence or, indeed, without making any dependence assumptions whatsoever. The Fréchet inequalities are closely related to the Boole–Bonferroni–Fréchet inequalities and to Fréchet bounds. If the Ai are logical propositions or events, the Fréchet inequalities are max(0, P(A1) + ... + P(An) − (n − 1)) ≤ P(A1 & ... & An) ≤ min(P(A1), ..., P(An)) and max(P(A1), ..., P(An)) ≤ P(A1 ∨ ... ∨ An) ≤ min(1, P(A1) + ... + P(An)), where P( ) denotes the probability of an event or proposition. In the case where there are only two events, say A and B, the inequalities reduce to max(0, P(A) + P(B) − 1) ≤ P(A & B) ≤ min(P(A), P(B)) and max(P(A), P(B)) ≤ P(A ∨ B) ≤ min(1, P(A) + P(B)). The inequalities bound the probabilities of the two kinds of joint events given the probabilities of the individual events. For example, if A is "has lung cancer" and B is "has mesothelioma", then A & B is "has both lung cancer and mesothelioma", and A ∨ B is "has lung cancer or mesothelioma or both diseases", and the inequalities relate the risks of these events. Note that logical conjunctions are denoted in various ways in different fields, including AND, &, ∧ and graphical AND-gates. Logical disjunctions are likewise denoted in various ways, including OR, |, ∨, and graphical OR-gates. 
If events are taken to be sets rather than logical propositions, the set-theoretic versions of the Fréchet inequalities are

max(0, P(A1) + ... + P(An) − (n − 1)) ≤ P(A1 ∩ ... ∩ An) ≤ min(P(A1), ..., P(An))
max(P(A1), ..., P(An)) ≤ P(A1 ∪ ... ∪ An) ≤ min(1, P(A1) + ... + P(An))

If the probability of an event A is P(A) = a = 0.7, and the probability of the event B is P(B) = b = 0.8, then the probability of the conjunction, i.e., the joint event A & B, is surely in the interval

P(A ∧ B) ∈ [max(0, a + b − 1), min(a, b)]
         = [max(0, 0.7 + 0.8 − 1), min(0.7, 0.8)]
         = [0.5, 0.7].

Likewise, the probability of the disjunction A ∨ B is surely in the interval

P(A ∨ B) ∈ [max(a, b), min(1, a + b)]
         = [max(0.7, 0.8), min(1, 0.7 + 0.8)]
         = [0.8, 1].

These intervals are contrasted with the results obtained from the rules of probability assuming independence, where the probability of the conjunction is P(A & B) = a × b = 0.7 × 0.8 = 0.56, and the probability of the disjunction is P(A ∨ B) = a + b − a × b = 0.94. When the marginal probabilities are very small (or large), the Fréchet intervals are strongly asymmetric about the analogous results under independence. For example, suppose P(A) = 0.000002 = 2 × 10⁻⁶ and P(B) = 0.000003 = 3 × 10⁻⁶. Then the Fréchet inequalities say P(A & B) is in the interval [0, 2 × 10⁻⁶], and P(A ∨ B) is in the interval [3 × 10⁻⁶, 5 × 10⁻⁶]. If A and B are independent, however, the probability of A & B is 6 × 10⁻¹², which is, comparatively, very close to the lower limit (zero) of the Fréchet interval. Similarly, the probability of A ∨ B is 4.999994 × 10⁻⁶, which is very close to the upper limit of the Fréchet interval. This is what justifies the rare-event approximation[5] often used in reliability theory. The proofs are elementary. Recall that P(A ∨ B) = P(A) + P(B) − P(A & B), which implies P(A) + P(B) − P(A ∨ B) = P(A & B).
Because all probabilities are no bigger than 1, we know P(A ∨ B) ≤ 1, which implies that P(A) + P(B) − 1 ≤ P(A & B). Because all probabilities are also nonnegative, we can similarly say 0 ≤ P(A & B), so max(0, P(A) + P(B) − 1) ≤ P(A & B). This gives the lower bound on the conjunction. To get the upper bound, recall that P(A & B) = P(A|B) P(B) = P(B|A) P(A). Because P(A|B) ≤ 1 and P(B|A) ≤ 1, we know P(A & B) ≤ P(A) and P(A & B) ≤ P(B). Therefore, P(A & B) ≤ min(P(A), P(B)), which is the upper bound. The best-possible nature of these bounds follows from observing that they are realized by some dependency between the events A and B. Comparable bounds on the disjunction are similarly derived. When the input probabilities are themselves interval ranges, the Fréchet formulas still work as a probability bounds analysis. Hailperin[2] considered the problem of evaluating probabilistic Boolean expressions involving many events in complex conjunctions and disjunctions. Some[6][7] have suggested using the inequalities in various applications of artificial intelligence and have extended the rules to account for various assumptions about the dependence among the events. The inequalities can also be generalized to other logical operations, including even modus ponens.[6][8] When the input probabilities are characterized by probability distributions, analogous operations that generalize logical and arithmetic convolutions without assumptions about the dependence between the inputs can be defined based on the related notion of Fréchet bounds.[7][9][10] Similar bounds also hold in quantum mechanics in the case of separable quantum systems, and entangled states violate these bounds.[11] Consider a composite quantum system. In particular, we focus on a composite quantum system AB made of two finite subsystems denoted A and B.
Assume that we know the density matrix of the subsystem A, i.e., ρ^A, a trace-one positive semidefinite matrix in C_h^(n×n) (the space of Hermitian matrices of dimension n × n), and the density matrix of subsystem B, denoted ρ^B. We can think of ρ^A and ρ^B as the marginals of the subsystems A and B. From the knowledge of these marginals, we want to infer something about the joint ρ^AB in C_h^(nm×nm). We restrict our attention to joint ρ^AB that are separable. A density matrix on a composite system is separable if there exist p_k ≥ 0 and mixed states {ρ_1^k} and {ρ_2^k} of the respective subsystems such that

ρ^AB = Σ_k p_k ρ_1^k ⊗ ρ_2^k, where Σ_k p_k = 1.

Otherwise ρ^AB is called an entangled state. For separable density matrices ρ^AB in C_h^(nm×nm) the following Fréchet-like bounds hold:

ρ^AB ≤ ρ^A ⊗ I_m
ρ^AB ≤ I_n ⊗ ρ^B
ρ^AB ≥ ρ^A ⊗ I_m + I_n ⊗ ρ^B − I_nm
ρ^AB ≥ 0

The inequalities are matrix inequalities, ⊗ denotes the tensor product and I_x the identity matrix of dimension x. It is evident that structurally the above inequalities are analogues of the classical Fréchet bounds for the logical conjunction.
It is also worth noticing that when the matrices ρ^A, ρ^B and ρ^AB are restricted to be diagonal, we obtain the classical Fréchet bounds. The upper bound is known in quantum mechanics as the reduction criterion for density matrices; it was first proved in [12] and independently formulated in [13]. The lower bound was obtained in [11]: Theorem A.16, which provides a Bayesian interpretation of these bounds. To show that the diagonal case recovers the classical bounds, consider again the previous numerical example:

ρ^A = diag(p_a, p_ā) = diag(0.7, 0.3)
ρ^B = diag(p_b, p_b̄) = diag(0.8, 0.2)

then we have:

ρ^AB = diag(p_ab, p_ab̄, p_āb, p_āb̄) ≤ ρ^A ⊗ I_2 = diag(0.7, 0.7, 0.3, 0.3)
ρ^AB = diag(p_ab, p_ab̄, p_āb, p_āb̄) ≤ I_2 ⊗ ρ^B = diag(0.8, 0.2, 0.8, 0.2)
ρ^AB = diag(p_ab, p_ab̄, p_āb, p_āb̄) ≥ ρ^A ⊗ I_2 + I_2 ⊗ ρ^B − I_4 = diag(0.5, −0.1, 0.1, −0.5)
ρ^AB = diag(p_ab, p_ab̄, p_āb, p_āb̄) ≥ 0

which means:

0.5 ≤ p_ab ≤ 0.7
0 ≤ p_ab̄ ≤ 0.2
0.1 ≤ p_āb ≤ 0.3
0 ≤ p_āb̄ ≤ 0.2

It is worth pointing out that entangled states violate the above Fréchet bounds. Consider for instance the entangled density matrix (which is not separable):

ρ^AB = (1/2) [[1, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 1]]

which has marginals ρ^A = ρ^B = diag(1/2, 1/2). Entangled states are not separable, and it can easily be verified that

ρ^A ⊗ I_m − ρ^AB is not ≥ 0
I_n ⊗ ρ^B − ρ^AB is not ≥ 0

since the resulting matrices each have one negative eigenvalue. Another example of violation of probabilistic bounds is provided by the famous Bell's inequality: entangled states exhibit a form of stochastic dependence stronger than the strongest classical dependence, and in fact they violate Fréchet-like bounds.
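The violation can be verified numerically with plain Python (an illustrative sketch; the witness vector is chosen by hand): for the Bell state above, ρ^A ⊗ I_2 − ρ^AB admits a vector with a negative quadratic form, so it is not positive semidefinite.

```python
# Bell-state density matrix: rho_AB = 1/2 * [[1,0,0,1],[0,0,0,0],[0,0,0,0],[1,0,0,1]]
rho_ab = [[0.5, 0.0, 0.0, 0.5],
          [0.0, 0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0, 0.0],
          [0.5, 0.0, 0.0, 0.5]]

# rho_A = diag(1/2, 1/2), so rho_A tensor I_2 is simply I_4 / 2.
m = [[(0.5 if i == j else 0.0) - rho_ab[i][j] for j in range(4)]
     for i in range(4)]

# A positive semidefinite matrix satisfies v.M.v >= 0 for every vector v;
# the vector (1, 0, 0, 1) witnesses the violation.
v = [1.0, 0.0, 0.0, 1.0]
quad = sum(v[i] * m[i][j] * v[j] for i in range(4) for j in range(4))
print(quad)  # -1.0 < 0, so the upper Fréchet-like bound fails for this state
```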
https://en.wikipedia.org/wiki/Fr%C3%A9chet_inequalities
Free choice is a phenomenon in natural language where a linguistic disjunction appears to receive a logical conjunctive interpretation when it interacts with a modal operator. For example, the following English sentences can be interpreted to mean that the addressee can watch a movie and that they can also play video games, depending on their preference:[1] Free choice inferences are a major topic of research in formal semantics and philosophical logic because they are not valid in classical systems of modal logic. If they were valid, then the semantics of natural language would validate the Free Choice Principle:

◊(P ∨ Q) → (◊P ∧ ◊Q)

This formula is not valid in classical modal logic: adding this principle as an axiom to standard modal logics would allow one to conclude ◊Q from ◊P, for any P and Q. This observation is known as the Paradox of Free Choice.[1][2] To resolve this paradox, some researchers have proposed analyses of free choice within nonclassical frameworks such as dynamic semantics, linear logic, alternative semantics, and inquisitive semantics.[1][3][4] Others have proposed ways of deriving free choice inferences as scalar implicatures which arise on the basis of classical lexical entries for disjunction and modality.[1][5][6][7] Free choice inferences are most widely studied for deontic modals, but also arise with other flavors of modality as well as imperatives, conditionals, and other kinds of operators.[1][8][9][4] Indefinite noun phrases give rise to a similar inference which is also referred to as "free choice", though researchers disagree as to whether it forms a natural class with disjunctive free choice.[9][10]
https://en.wikipedia.org/wiki/Free_choice_inference
In formal semantics, a Hurford disjunction is a disjunction in which one of the disjuncts entails the other. The concept was first identified by British linguist James Hurford.[1] The sentence "Mary is in the Netherlands or she is in Amsterdam" is an example of a Hurford disjunction since one cannot be in Amsterdam without being in the Netherlands. Other examples are shown below:[2][3] As indicated by the octothorps in the above examples, Hurford disjunctions are typically infelicitous. Their infelicity has been argued to arise from them being redundant, since simply uttering the stronger of the two disjuncts would have had the same semantic effect. Thus, they have been taken as motivation for a principle such as the following:[3][4] However, some particular instances of Hurford disjunctions are felicitous.[2][5] Felicitous Hurford disjunctions have been analyzed by positing that the weaker disjunct is strengthened by an embedded scalar implicature which eliminates the entailment between the disjuncts. For instance, in the first of the felicitous examples above, the left disjunct's unenriched meaning is simply that Sofia ate a nonzero amount of pizza. This would result in a redundancy violation since eating all the pizza entails eating a nonzero amount of it. However, if an embedded scalar implicature enriches this disjunct so that it denotes the proposition that Sofia ate some but not all of the pizza, this entailment no longer goes through. Eating all of the pizza does not entail eating some but not all of it. Thus, Local Redundancy will still be satisfied.[2][5]
https://en.wikipedia.org/wiki/Hurford_disjunction
In formal semantics and philosophical logic, simplification of disjunctive antecedents (SDA) is the phenomenon whereby a disjunction in the antecedent of a conditional appears to distribute over the conditional as a whole. This inference is shown schematically below:[1][2] This inference has been argued to be valid on the basis of sentence pairs such as that below, since Sentence 1 seems to imply Sentence 2.[1][2] The SDA inference was first discussed as a potential problem for the similarity analysis of counterfactuals. In these approaches, a counterfactual (A ∨ B) > C is predicted to be true if C holds throughout the possible worlds where A ∨ B holds which are most similar to the world of evaluation. On a Boolean semantics for disjunction, A ∨ B can hold at a world simply in virtue of A being true there, meaning that the most similar (A ∨ B)-worlds could all be ones where A holds but B does not. If C is also true at these worlds but not at the closest worlds where B is true, then this approach will predict a failure of SDA: (A ∨ B) > C will be true at the world of evaluation while B > C will be false. In more intuitive terms, imagine that Yde missed the most recent party because he happened to get a flat tire while Dani missed it because she hates parties and is also deceased. In all of the closest worlds where either Yde or Dani comes to the party, it will be Yde and not Dani who attends. If Yde is a fun person to have at parties, this will mean that Sentence 1 above is predicted to be true on the similarity approach. However, if Dani tends to have the opposite effect on parties she attends, then Sentence 2 is predicted false, in violation of SDA.[3][1][2] SDA has been analyzed in a variety of ways.
One is to derive it as a semantic entailment by positing a non-classical treatment of disjunction such as that of alternative semantics or inquisitive semantics.[4][5][6][1][2] Another approach also derives it as a semantic entailment, but does so by adopting an alternative denotation for conditionals such as the strict conditional or any of the options made available in situation semantics.[1][2] Finally, some researchers have suggested that it can be analyzed as a pragmatic implicature derived on the basis of classical disjunction and a standard semantics for conditionals.[7][1][2] SDA is sometimes considered an embedded instance of the free choice inference.[8]
https://en.wikipedia.org/wiki/Simplification_of_disjunctive_antecedents
Atbash (Hebrew: אתבש; also transliterated Atbaš) is a monoalphabetic substitution cipher originally used to encrypt the Hebrew alphabet. It can be modified for use with any known writing system with a standard collating order. The Atbash cipher is a particular type of monoalphabetic cipher formed by taking the alphabet (or abjad, syllabary, etc.) and mapping it to its reverse, so that the first letter becomes the last letter, the second letter becomes the second to last letter, and so on. For example, the ISO basic Latin alphabet would work like this: Because there is only one way to perform this, the Atbash cipher provides no communications security, as it lacks any sort of key. If multiple collating orders are available, which one was used in encryption can be used as a key, but this does not provide significantly more security, considering that only a few letters can give away which one was used. The name derives from the first, last, second, and second to last Hebrew letters (Aleph–Taw–Bet–Shin). The Atbash cipher for the modern Hebrew alphabet would be: By shifting the correlation one space to the left or the right, one may derive a variant Batgash (named for Bet–Taw–Gimel–Shin) or Ashbar (for Aleph–Shin–Bet–Reish). Either alternative mapping leaves one letter unsubstituted; respectively Aleph and Taw. Several biblical words are described by commentators[n 1] as being examples of Atbash:[1][2][3] Regarding a potential Atbash switch of a single letter: The Atbash cipher can be seen as a special case of the affine cipher. Under the standard affine convention, an alphabet of m letters is mapped to the numbers 0, 1, ..., m − 1. (The Hebrew alphabet has m = 22, and the standard Latin alphabet has m = 26.) The Atbash cipher may then be enciphered and deciphered using the encryption function for an affine cipher by setting a = b = (m − 1):

E(x) = D(x) = ((m − 1)x + (m − 1)) mod m

This may be simplified to

E(x) = D(x) = (m − 1 − x) mod m

If, instead, the m letters of the alphabet are mapped to 1, 2, ..., m, then the encryption and decryption function for the Atbash cipher becomes

E(x) = D(x) = (m + 1) − x
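As an illustrative sketch (not part of the original article), Atbash over the ISO basic Latin alphabet can be implemented with a translation table:

```python
import string

# Atbash over the ISO basic Latin alphabet: each letter maps to its mirror
# (A<->Z, B<->Y, ...); other characters are left unchanged.
_table = str.maketrans(
    string.ascii_uppercase + string.ascii_lowercase,
    string.ascii_uppercase[::-1] + string.ascii_lowercase[::-1],
)

def atbash(text: str) -> str:
    return text.translate(_table)

print(atbash("Attack at dawn"))  # Zggzxp zg wzdm
# Atbash is keyless and self-inverse: applying it twice restores the text.
assert atbash(atbash("Attack at dawn")) == "Attack at dawn"
```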
https://en.wikipedia.org/wiki/Atbash
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object. In an algebraic structure such as a group, a ring, or vector space, an automorphism is simply a bijective homomorphism of an object into itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.) More generally, for an object in some category, an automorphism is a morphism of the object to itself that has an inverse morphism; that is, a morphism f: X → X is an automorphism if there is a morphism g: X → X such that g ∘ f = f ∘ g = id_X, where id_X is the identity morphism of X. For algebraic structures, the two definitions are equivalent; in this case, the identity morphism is simply the identity function, and is often called the trivial automorphism. The automorphisms of an object X form a group under composition of morphisms, which is called the automorphism group of X. This results straightforwardly from the definition of a category. The automorphism group of an object X in a category C is often denoted Aut_C(X), or simply Aut(X) if the category is clear from context. One of the earliest group automorphisms (automorphism of a group, not simply a group of automorphisms of points) was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus, where he discovered an order-two automorphism,[5] writing: so that μ is a new fifth root of unity, connected with the former fifth root λ by relations of perfect reciprocity.
In some categories—notably groups, rings, and Lie algebras—it is possible to separate automorphisms into two types, called "inner" and "outer" automorphisms. In the case of groups, the inner automorphisms are the conjugations by the elements of the group itself. For each element a of a group G, conjugation by a is the operation φ_a: G → G given by φ_a(g) = aga⁻¹ (or a⁻¹ga; usage varies). One can easily check that conjugation by a is a group automorphism. The inner automorphisms form a normal subgroup of Aut(G), denoted by Inn(G); this is called Goursat's lemma. The other automorphisms are called outer automorphisms. The quotient group Aut(G) / Inn(G) is usually denoted by Out(G); the non-trivial elements are the cosets that contain the outer automorphisms. The same definition holds in any unital ring or algebra where a is any invertible element. For Lie algebras the definition is slightly different.
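As an illustrative sketch (assuming a representation of S3 as permutation tuples over {0, 1, 2}, composed as functions), the inner automorphisms φ_a(g) = a g a⁻¹ can be enumerated directly:

```python
from itertools import permutations

def compose(p, q):          # (p o q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(3))

def inverse(p):             # the inverse permutation
    inv = [0] * 3
    for x in range(3):
        inv[p[x]] = x
    return tuple(inv)

group = sorted(permutations(range(3)))  # the six elements of S3

# Each inner automorphism is recorded as the tuple of images of all elements.
inner = {
    tuple(compose(compose(a, g), inverse(a)) for g in group)
    for a in group
}

# S3 has trivial center, so conjugation by distinct elements yields distinct
# automorphisms: Inn(S3) has order 6.
print(len(inner))  # 6
```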
https://en.wikipedia.org/wiki/Automorphism
Idempotence (UK: /ˌɪdɛmˈpoʊtəns/,[1] US: /ˈaɪdəm-/)[2] is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra (in particular, in the theory of projectors and closure operators) and functional programming (in which it is connected to the property of referential transparency). The term was introduced by the American mathematician Benjamin Peirce in 1870[3][4] in the context of elements of algebras that remain invariant when raised to a positive integer power, and literally means "(the quality of having) the same power", from idem + potence (same + power). An element x of a set S equipped with a binary operator · is said to be idempotent under · if[5][6]

x · x = x.

The binary operation · is said to be idempotent if[7][8]

x · x = x for all x ∈ S.

In the monoid (E^E, ∘) of the functions from a set E to itself (see set exponentiation) with function composition ∘, idempotent elements are the functions f: E → E such that f ∘ f = f,[a] that is, such that f(f(x)) = f(x) for all x ∈ E (in other words, the image f(x) of each element x ∈ E is a fixed point of f). If the set E has n elements, we can partition it into k chosen fixed points and n − k non-fixed points under f, and then k^(n−k) is the number of different idempotent functions with exactly those fixed points. Hence, summing over all possible choices of the fixed points,

Σ_{k=1}^{n} C(n, k) k^(n−k)

is the total number of possible idempotent functions on the set.
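This counting formula can be cross-checked by brute-force enumeration for small n (an illustrative Python sketch; `count_idempotent` is a name introduced here):

```python
from itertools import product
from math import comb

def count_idempotent(n: int) -> int:
    """Sum over k = number of fixed points: C(n, k) * k**(n - k)."""
    if n == 0:
        return 1  # the empty function is (vacuously) idempotent
    return sum(comb(n, k) * k ** (n - k) for k in range(1, n + 1))

print([count_idempotent(n) for n in range(7)])
# [1, 1, 3, 10, 41, 196, 1057]

# Cross-check against brute-force enumeration of all n**n functions,
# represented as tuples f with f[x] the image of x.
for n in range(1, 5):
    brute = sum(
        1 for f in product(range(n), repeat=n)
        if all(f[f[x]] == f[x] for x in range(n))
    )
    assert brute == count_idempotent(n)
```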
The integer sequence of the number of idempotent functions as given by the sum above for n = 0, 1, 2, 3, 4, 5, 6, 7, 8, ... starts with 1, 1, 3, 10, 41, 196, 1057, 6322, 41393, ... (sequence A000248 in the OEIS). Neither the property of being idempotent nor that of not being idempotent is preserved under function composition.[b] As an example of the former, f(x) = x mod 3 and g(x) = max(x, 5) are both idempotent, but f ∘ g is not,[c] although g ∘ f happens to be.[d] As an example of the latter, the negation function ¬ on the Boolean domain is not idempotent, but ¬ ∘ ¬ is. Similarly, unary negation −(·) of real numbers is not idempotent, but −(·) ∘ −(·) is. In both cases, the composition is simply the identity function, which is idempotent. In computer science, the term idempotence may have a different meaning depending on the context in which it is applied: This is a very useful property in many situations, as it means that an operation can be repeated or retried as often as necessary without causing unintended effects. With non-idempotent operations, the algorithm may have to keep track of whether the operation was already performed or not. A function looking up a customer's name and address in a database is typically idempotent, since this will not cause the database to change. Similarly, a request for changing a customer's address to XYZ is typically idempotent, because the final address will be the same no matter how many times the request is submitted. However, a customer request for placing an order is typically not idempotent since multiple requests will lead to multiple orders being placed. A request for canceling a particular order is idempotent because no matter how many requests are made the order remains canceled.
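The composition examples above can be checked directly (an illustrative sketch over a finite integer domain, not part of the original article):

```python
# f(x) = x mod 3 and g(x) = max(x, 5) are each idempotent; f o g is not,
# but g o f happens to be (it is the constant function 5).
f = lambda x: x % 3
g = lambda x: max(x, 5)

def is_idempotent(h, domain=range(20)):
    return all(h(h(x)) == h(x) for x in domain)

assert is_idempotent(f)
assert is_idempotent(g)
assert not is_idempotent(lambda x: f(g(x)))   # f o g
assert is_idempotent(lambda x: g(f(x)))       # g o f
print("f and g are idempotent; f o g is not, while g o f is")
```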
A sequence of idempotent subroutines where at least one subroutine is different from the others, however, is not necessarily idempotent if a later subroutine in the sequence changes a value that an earlier subroutine depends on—idempotence is not closed under sequential composition. For example, suppose the initial value of a variable is 3 and there is a subroutine sequence that reads the variable, then changes it to 5, and then reads it again. Each step in the sequence is idempotent: both steps reading the variable have no side effects and the step changing the variable to 5 will always have the same effect no matter how many times it is executed. Nonetheless, executing the entire sequence once produces the output (3, 5), but executing it a second time produces the output (5, 5), so the sequence is not idempotent. In the Hypertext Transfer Protocol (HTTP), idempotence and safety are the major attributes that separate HTTP methods. Of the major HTTP methods, GET, PUT, and DELETE should be implemented in an idempotent manner according to the standard, but POST need not be.[9] GET retrieves the state of a resource; PUT updates the state of a resource; and DELETE deletes a resource. As in the example above, reading data usually has no side effects, so it is idempotent (in fact nullipotent). Updating and deleting given data are each usually idempotent as long as the request uniquely identifies the resource and only that resource again in the future. PUT and DELETE with unique identifiers reduce to the simple case of assignment to a variable of either a value or the null-value, respectively, and are idempotent for the same reason; the end result is always the same as the result of the initial execution, even if the response differs.[10] Violation of the unique identification requirement in storage or deletion typically causes violation of idempotence.
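The read/write/read sequence described above can be sketched as follows (a minimal illustration, not from the source):

```python
# Each step is idempotent on its own, but the three-step sequence is not,
# because the final read depends on the write that precedes it.
state = {"x": 3}

def read():        # no side effects: idempotent (in fact nullipotent)
    return state["x"]

def write_five():  # same final state no matter how often it runs
    state["x"] = 5

def sequence():
    first = read()
    write_five()
    return (first, read())

print(sequence())  # (3, 5) on the first execution
print(sequence())  # (5, 5) on the second: the sequence is not idempotent
```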
For example, storing or deleting a given set of content without specifying a unique identifier: POST requests, which do not need to be idempotent, often do not contain unique identifiers, so the creation of the identifier is delegated to the receiving system, which then creates a corresponding new record. Similarly, PUT and DELETE requests with nonspecific criteria may result in different outcomes depending on the state of the system - for example, a request to delete the most recent record. In each case, subsequent executions will further modify the state of the system, so they are not idempotent. In event stream processing, idempotence refers to the ability of a system to produce the same outcome, even if the same file, event or message is received more than once. In a load–store architecture, instructions that might possibly cause a page fault are idempotent. So if a page fault occurs, the operating system can load the page from disk and then simply re-execute the faulted instruction. In a processor where such instructions are not idempotent, dealing with page faults is much more complex.[11][12] When reformatting output, pretty-printing is expected to be idempotent. In other words, if the output is already "pretty", there should be nothing for the pretty-printer to do. In service-oriented architecture (SOA), a multiple-step orchestration process composed entirely of idempotent steps can be replayed without side-effects if any part of that process fails. Many operations that are idempotent often have ways to "resume" a process if it is interrupted – ways that finish much faster than starting all over from the beginning. For example, resuming a file transfer, synchronizing files, creating a software build, installing an application and all of its dependencies with a package manager, etc.
Applied examples that many people could encounter in their day-to-day lives include elevator call buttons and crosswalk buttons.[13] The initial activation of the button moves the system into a requesting state, until the request is satisfied. Subsequent activations of the button between the initial activation and the request being satisfied have no effect, unless the system is designed to adjust the time for satisfying the request based on the number of activations. Similarly, the elevator "close" button may be pressed many times to the same effect as once, since the doors close on a fixed schedule unless the "open" button is pressed. Pressing the "open" button, by contrast, is not idempotent, because each press adds further delay.
https://en.wikipedia.org/wiki/Idempotence
ROT13 is a simple letter substitution cipher that replaces a letter with the 13th letter after it in the Latin alphabet. ROT13 is a special case of the Caesar cipher, which was developed in ancient Rome and used by Julius Caesar in the 1st century BC,[1] making it an early entry on the timeline of cryptography. ROT13 may be referred to as "Rotate13", "rotate by 13 places", hyphenated "ROT-13", or sometimes by its autonym "EBG13". Applying ROT13 to a piece of text requires examining its alphabetic characters and replacing each one by the letter 13 places further along in the alphabet, wrapping back to the beginning as necessary.[2] When encoding a message, A becomes N, B becomes O, and so on up to M, which becomes Z. Then the sequence continues at the beginning of the alphabet: N becomes A, O becomes B, and so on to Z, which becomes M. When decoding a message, the same substitution rules are applied, but this time to the ROT13-encrypted text. Other characters, such as numbers, symbols, punctuation or whitespace, are left unchanged. Because there are 26 letters in the Latin alphabet and 26 = 2 × 13, the ROT13 function is its own inverse:[2] In other words, two successive applications of ROT13 restore the original text (in mathematics, this is sometimes called an involution; in cryptography, a reciprocal cipher). The transformation can be done using a lookup table, such as the following: For example, in the following joke, the punchline has been obscured by ROT13: Transforming the entire text via ROT13, the answer to the joke is revealed: A second application of ROT13 would restore the original. ROT13 provides no real cryptographic security; it has no mathematical key, and the knowledge needed to decrypt a message is no more than the fact that ROT13 is in use.
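The shift-with-wrap-around rule described above can be sketched with direct letter arithmetic (an illustrative Python example, not from the source):

```python
# ROT13 by letter arithmetic: shift 13 places with wrap-around (mod 26),
# leaving non-alphabetic characters unchanged.
def rot13(text: str) -> str:
    out = []
    for c in text:
        if "a" <= c <= "z":
            out.append(chr((ord(c) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= c <= "Z":
            out.append(chr((ord(c) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(c)  # digits, punctuation, whitespace pass through
    return "".join(out)

assert rot13("EBG13") == "ROT13"                          # the cipher's autonym
assert rot13(rot13("Hello, World!")) == "Hello, World!"   # involution
print(rot13("Gur dhvpx oebja sbk"))                       # The quick brown fox
```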
Even if secrecy does not fail, any party capable of intercepting the message could break the code by spending enough time decoding the text through frequency analysis[2] or by finding other patterns. In the early 1980s, people used ROT13 in their messages on Usenet newsgroup servers.[3] They did this to hide potentially offensive jokes, to obscure an answer to a puzzle or other spoiler,[4] or to fool less sophisticated spam bots. ROT13 has been the subject of many jokes. The 1989 International Obfuscated C Code Contest (IOCCC) included an entry by Brian Westley. Westley's computer program can be encoded in ROT13 or reversed and still compiles correctly. Its operation, when executed, is either to perform ROT13 encoding on, or to reverse, its input.[5] In December 1999, it was found that Netscape Communicator used ROT13 as part of an insecure scheme to store email passwords.[6] In 2001, Russian programmer Dimitry Sklyarov demonstrated that an eBook vendor, New Paradigm Research Group (NPRG), used ROT13 to encrypt their documents. It has been speculated that NPRG may have mistaken the ROT13 toy example—provided with the Adobe eBook software development kit—for a serious encryption scheme.[7] Windows XP uses ROT13 on some of its registry keys.[8] ROT13 is also used in the Unix fortune program to conceal potentially offensive dicta. Johann Ernst Elias Bessler, an 18th-century clock maker and constructor of perpetual motion machines, pointed out that ROT13 encodes his surname as Orffyre. He used its latinised form, Orffyreus, as his pseudonym.[9] Because of its utter unsuitability for real secrecy, ROT13 has become a catchphrase to refer to any conspicuously weak encryption scheme; a critic might claim that "56-bit DES is little better than ROT13 these days".
In a play on real terms like "double DES", several terms with humorous intent cropped up. ROT13 jokes were popular on many newsgroup servers, like net.jokes, as early as the 1980s.[3] The newsgroup alt.folklore.urban coined a word, furrfu, that was the ROT13 encoding of the frequently encoded utterance "sheesh". "Furrfu" evolved in mid-1992 as a response to postings repeating urban myths on alt.folklore.urban, after some posters complained that "Sheesh!" as a response to newcomers was being overused.[11] Using a search engine on public social networks yields results for ROT13 in jokes to this day.[citation needed][when?] ROT13 provides an opportunity for letter games. Some words will, when transformed with ROT13, produce another word. Examples of 7-letter pairs in the English language are abjurer and nowhere, and Chechen and purpura. Other examples of words like these are shown in the table.[12] The pair gnat and tang is an example of words that are both ROT13 reciprocals and reversals. ROT5 is a practice similar to ROT13 that applies to numeric digits (0 to 9). ROT13 and ROT5 can be used together in the same message, a combination sometimes called ROT18 (18 = 13 + 5) or ROT13.5. ROT47 is a derivative of ROT13 which, in addition to scrambling the basic letters, treats numbers and common symbols. Instead of using the sequence A–Z as the alphabet, ROT47 uses a larger set of characters from the common character encoding known as ASCII. Specifically, the 7-bit printable characters excluding space, from decimal 33 '!' through 126 '~' (94 in total), taken in the order of the numerical values of their ASCII codes, are rotated by 47 positions, without special consideration of case. For example, the character A is mapped to p, while a is mapped to 2. The use of a larger alphabet produces a more thorough obfuscation than that of ROT13; for example, a telephone number such as +1-415-839-6885 is not obvious at first sight from the scrambled result Z`\c`d\gbh\eggd.
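The ROT47 rotation over the 94 printable ASCII characters can be sketched in the same style (the helper name `rot47` is ours):

```python
def rot47(text: str) -> str:
    """Rotate printable ASCII characters '!' (33) through '~' (126) by 47 places."""
    out = []
    for ch in text:
        code = ord(ch)
        if 33 <= code <= 126:
            out.append(chr((code - 33 + 47) % 94 + 33))
        else:
            out.append(ch)  # space and non-printable characters pass through
    return "".join(out)

print(rot47("A"))                # p
print(rot47("a"))                # 2
print(rot47("+1-415-839-6885"))  # Z`\c`d\gbh\eggd
# Because 94 = 2 x 47, ROT47 is also its own inverse:
print(rot47(rot47("+1-415-839-6885")))  # +1-415-839-6885
```

As with ROT13, the rotation distance is exactly half the alphabet size, which is what makes the function self-inverse.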
On the other hand, because ROT47 introduces numbers and symbols into the mix without discrimination, it is more immediately obvious that the text has been encoded. The GNU C library, a set of standard routines available for use in computer programming, contains a function, memfrob(),[13] which has a similar purpose to ROT13, although it is intended for use with arbitrary binary data. The function operates by combining each byte with the binary pattern 00101010 (42) using the exclusive or (XOR) operation. This effects a simple XOR cipher. Like ROT13, XOR (and therefore memfrob()) is self-reciprocal, and provides a similar, virtually absent, level of security. ROT13 and ROT47 are fairly easy to implement using the Unix terminal application tr, for example to encrypt the string "Pack My Box With Five Dozen Liquor Jugs" in ROT13, or the string "The Quick Brown Fox Jumps Over The Lazy Dog" in ROT47. In Emacs, one can ROT13 the buffer or a selection with the commands:[14] M-x toggle-rot13-mode, M-x rot13-other-window, or M-x rot13-region. In the Vim text editor, one can ROT13 a buffer with the command:[15] ggg?G. In Python, the codecs module provides a 'rot13' text transform.[16] Without importing any libraries, it can be done by creating a translation table manually;[a] in Python 3, one can use the method str.translate() (with str.maketrans()).
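The translation-table approach the text mentions, together with the standard codecs transform, looks like this in Python 3 (plus a one-line memfrob-style XOR with 42 to show that it, too, is self-reciprocal):

```python
import codecs
import string

# Build the ROT13 table manually with str.maketrans()/str.translate().
lower = string.ascii_lowercase
upper = string.ascii_uppercase
table = str.maketrans(lower + upper,
                      lower[13:] + lower[:13] + upper[13:] + upper[:13])

print("Pack My Box".translate(table))         # Cnpx Zl Obk
print(codecs.encode("Pack My Box", "rot13"))  # Cnpx Zl Obk (same result)

# memfrob()-style XOR with 42 on raw bytes; XORing twice restores the input.
frobbed = bytes(b ^ 42 for b in b"secret")
print(bytes(b ^ 42 for b in frobbed))         # b'secret'
```

The codecs 'rot13' transform operates on str objects (text-to-text), unlike most codecs, which convert between str and bytes.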
https://en.wikipedia.org/wiki/ROT13
In Boolean logic, logical NOR,[1] non-disjunction, or joint denial[1] is a truth-functional operator which produces a result that is the negation of logical or. That is, a sentence of the form (p NOR q) is true precisely when neither p nor q is true, i.e. when both p and q are false. It is logically equivalent to ¬(p∨q) and ¬p∧¬q, where the symbol ¬ signifies logical negation, ∨ signifies OR, and ∧ signifies AND. Non-disjunction is usually denoted ↓, or ∨ with an overbar, or X (in prefix notation), or NOR. As with its dual, the NAND operator (also known as the Sheffer stroke, symbolized as ↑, ∣, or /), NOR can be used by itself, without any other logical operator, to constitute a logical formal system (making NOR functionally complete). The computer used in the spacecraft that first carried humans to the Moon, the Apollo Guidance Computer, was constructed entirely using NOR gates with three inputs.[2] The NOR operation is a logical operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both operands are false. In other words, it produces a value of false if and only if at least one operand is true.
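The definition can be checked mechanically; a minimal sketch in Python (the helper name `nor` is ours):

```python
def nor(p: bool, q: bool) -> bool:
    """Joint denial: true exactly when both operands are false."""
    return not (p or q)

# Truth table of p NOR q, and equivalence with (not p) and (not q).
for p in (False, True):
    for q in (False, True):
        assert nor(p, q) == ((not p) and (not q))
        print(p, q, nor(p, q))
# Only the row p=False, q=False yields True.
```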
The truth table of A↓B is as follows: the logical NOR ↓ is the negation of the disjunction ∨. Peirce was the first to show the functional completeness of non-disjunction, though he did not publish his result.[3][4] Peirce used ⋏ with an overbar for non-conjunction and ⋏ for non-disjunction (in fact, Peirce himself used only ⋏ and did not introduce the barred form; his editors made that disambiguating use).[4] Peirce called ⋏ the ampheck (from Ancient Greek ἀμφήκης, amphēkēs, "cutting both ways").[4] In 1911, Stamm was the first to publish a description of both non-conjunction (using ∼, the Stamm hook) and non-disjunction (using ∗, the Stamm star), and showed their functional completeness.[5][6] Note that in logical notation ∼ is most often used for negation. In 1913, Sheffer described non-disjunction and showed its functional completeness. Sheffer used ∣ for non-conjunction and ∧ for non-disjunction. In 1935, Webb described non-disjunction for n-valued logic and used ∣ for the operator, so some people call it the Webb operator,[7] Webb operation[8] or Webb function.[9] In 1940, Quine also described non-disjunction and used ↓ for the operator,[10] so some people call the operator the Peirce arrow or Quine dagger.
In 1944, Church also described non-disjunction and used ∨ with an overbar for the operator.[11] In 1954, Bocheński used X, as in Xpq, for non-disjunction in Polish notation.[12] APL uses a glyph ⍱ that combines a ∨ with a ~.[13] NOR is commutative but not associative, which means that P↓Q ↔ Q↓P holds but (P↓Q)↓R ↔ P↓(Q↓R) does not.[14] The logical NOR, taken by itself, is a functionally complete set of connectives.[15] This can be proved by first showing, with a truth table, that ¬A is truth-functionally equivalent to A↓A.[16] Then, since A↓B is truth-functionally equivalent to ¬(A∨B),[16] and A∨B is equivalent to ¬(¬A∧¬B),[16] the logical NOR suffices to define the set of connectives {∧, ∨, ¬},[16] which is shown to be truth-functionally complete by the Disjunctive Normal Form Theorem.[16] This may also be seen from the fact that logical NOR does not possess any of the five qualities (truth-preserving, false-preserving, linear, monotonic, self-dual) required to be absent from at least one member of a set of functionally complete operators. NOR has the interesting feature that all other logical operators can be expressed by interlaced NOR operations; the logical NAND operator also has this ability. Expressed in terms of NOR ↓, each of the usual operators of propositional logic can be derived.
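The standard NOR derivations of negation, disjunction, conjunction, and implication can be verified exhaustively; a sketch in Python (the helper name `nor` is ours):

```python
from itertools import product

def nor(p, q):
    return not (p or q)

for p, q in product((False, True), repeat=2):
    # Negation: not P  =  P NOR P
    assert (not p) == nor(p, p)
    # Disjunction: P or Q  =  (P NOR Q) NOR (P NOR Q)
    assert (p or q) == nor(nor(p, q), nor(p, q))
    # Conjunction: P and Q  =  (P NOR P) NOR (Q NOR Q)
    assert (p and q) == nor(nor(p, p), nor(q, q))
    # Implication: P implies Q  =  ((P NOR P) NOR Q) NOR ((P NOR P) NOR Q)
    assert ((not p) or q) == nor(nor(nor(p, p), q), nor(nor(p, p), q))

# NOR is commutative but not associative:
assert all(nor(p, q) == nor(q, p) for p, q in product((False, True), repeat=2))
assert any(nor(nor(p, q), r) != nor(p, nor(q, r))
           for p, q, r in product((False, True), repeat=3))
print("all NOR identities verified")
```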
https://en.wikipedia.org/wiki/Logical_NOR
A conceptual graph (CG) is a formalism for knowledge representation. In the first published paper on CGs, John F. Sowa used them to represent the conceptual schemas used in database systems.[1] The first book on CGs applied them to a wide range of topics in artificial intelligence, computer science, and cognitive science.[2] Since 1984, the model has been developed along three main directions: a graphical interface for first-order logic, a diagrammatic calculus of logics, and a graph-based knowledge representation and reasoning model.[2] In the first approach, a formula in first-order logic (predicate calculus) is represented by a labeled graph. A linear notation, called the Conceptual Graph Interchange Format (CGIF), has been standardized in the ISO standard for common logic. The diagram above is an example of the display form for a conceptual graph. Each box is called a concept node, and each oval is called a relation node. In CGIF, this CG would be represented by the following statement:

[Cat Elsie] [Sitting *x] [Mat *y] (agent ?x Elsie) (location ?x ?y)

In CGIF, brackets enclose the information inside the concept nodes, and parentheses enclose the information inside the relation nodes. The letters x and y, which are called coreference labels, show how the concept and relation nodes are connected. In CLIF, those letters are mapped to variables, as in the following statement:

(exists ((x Sitting) (y Mat)) (and (Cat Elsie) (agent x Elsie) (location x y)))

As this example shows, the asterisks on the coreference labels *x and *y in CGIF map to existentially quantified variables in CLIF, and the question marks on ?x and ?y map to bound variables in CLIF. A universal quantifier, represented @every*z in CGIF, would be represented forall (z) in CLIF. Reasoning can be done by translating graphs into logical formulas, then applying a logical inference engine. Another research branch continues the work on the existential graphs of Charles Sanders Peirce, which were one of the origins of conceptual graphs as proposed by Sowa.
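The CGIF statement above decomposes mechanically into concept and relation nodes; a minimal illustrative sketch in Python (our own code, not part of any standard CGIF tooling):

```python
import re

cgif = "[Cat Elsie] [Sitting *x] [Mat *y] (agent ?x Elsie) (location ?x ?y)"

# Brackets enclose concept nodes; parentheses enclose relation nodes.
concepts = re.findall(r"\[([^\]]+)\]", cgif)
relations = re.findall(r"\(([^)]+)\)", cgif)

print(concepts)   # ['Cat Elsie', 'Sitting *x', 'Mat *y']
print(relations)  # ['agent ?x Elsie', 'location ?x ?y']

# A '*label' defines a coreference label; a '?label' refers back to it,
# mirroring the existentially quantified variables of the CLIF translation.
defined = {tok[1:] for c in concepts for tok in c.split() if tok.startswith("*")}
used = {tok[1:] for r in relations for tok in r.split() if tok.startswith("?")}
assert used <= defined  # every ?label refers to some *label
```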
In the second approach, developed in particular by Dau (Dau 2003), conceptual graphs are conceptual diagrams rather than graphs in the sense of graph theory, and reasoning operations are performed by operations on these diagrams. GBKR, the graph-based knowledge representation and reasoning model developed by Chein and Mugnier and the Montpellier group,[3] is implemented by the tools COGITANT and COGUI. COGITANT is a library of C++ classes that implement most of the GBKR notions and reasoning mechanisms. COGUI is a graphical user interface dedicated to the construction of a GBKR knowledge base (it integrates COGITANT and, among numerous functionalities, contains a translator from GBKR to RDF/S and conversely).
https://en.wikipedia.org/wiki/Conceptual_graph
Charles Sanders Peirce (/pɜːrs/[a][8] PURSS; September 10, 1839 – April 19, 1914) was an American scientist, mathematician, logician, and philosopher who is sometimes known as "the father of pragmatism".[9][10] According to philosopher Paul Weiss, Peirce was "the most original and versatile of America's philosophers and America's greatest logician".[11] Bertrand Russell wrote "he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever".[12] Educated as a chemist and employed as a scientist for thirty years, Peirce meanwhile made major contributions to logic, such as theories of relations and quantification. C. I. Lewis wrote, "The contributions of C. S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century." For Peirce, logic also encompassed much of what is now called epistemology and the philosophy of science. He saw logic as the formal branch of semiotics, or the study of signs, of which he is a founder; this foreshadowed the debate among logical positivists and proponents of philosophy of language that dominated 20th-century Western philosophy. Peirce's study of signs also included a tripartite theory of predication. Additionally, he defined the concept of abductive reasoning, as well as rigorously formulating mathematical induction and deductive reasoning. He was one of the founders of statistics. As early as 1886, he saw that logical operations could be carried out by electrical switching circuits. The same idea was used decades later to produce digital computers.[13] In metaphysics, Peirce was an "objective idealist" in the tradition of German philosopher Immanuel Kant as well as a scholastic realist about universals. He also held a commitment to the ideas of continuity and chance as real features of the universe, views he labeled synechism and tychism respectively. Peirce believed an epistemic fallibilism and anti-skepticism went along with these views.
Peirce was born at 3 Phillips Place in Cambridge, Massachusetts. He was the son of Sarah Hunt Mills and Benjamin Peirce, himself a professor of mathematics and astronomy at Harvard University.[b] At age 12, Charles read his older brother's copy of Richard Whately's Elements of Logic, then the leading English-language text on the subject. So began his lifelong fascination with logic and reasoning.[14] He suffered from his late teens onward from a nervous condition then known as "facial neuralgia", which would today be diagnosed as trigeminal neuralgia. His biographer, Joseph Brent, says that when in the throes of its pain "he was, at first, almost stupefied, and then aloof, cold, depressed, extremely suspicious, impatient of the slightest crossing, and subject to violent outbursts of temper".[15] Its consequences may have led to the social isolation of his later life. Peirce went on to earn a Bachelor of Arts degree and a Master of Arts degree (1862) from Harvard. In 1863 the Lawrence Scientific School awarded him a Bachelor of Science degree, Harvard's first summa cum laude chemistry degree.[16] His academic record was otherwise undistinguished.[17] At Harvard, he began lifelong friendships with Francis Ellingwood Abbot, Chauncey Wright, and William James.[18] One of his Harvard instructors, Charles William Eliot, formed an unfavorable opinion of Peirce.
This proved fateful, because Eliot, while President of Harvard (1869–1909, a period encompassing nearly all of Peirce's working life), repeatedly vetoed Peirce's employment at the university.[19] Between 1859 and 1891, Peirce was intermittently employed in various scientific capacities by the United States Coast Survey, which in 1878 was renamed the United States Coast and Geodetic Survey,[20] where he enjoyed his highly influential father's protection until the latter's death in 1880.[21] At the Survey, he worked mainly in geodesy and gravimetry, refining the use of pendulums to determine small local variations in the Earth's gravity.[20] This employment exempted Peirce from having to take part in the American Civil War; it would have been very awkward for him to do so, as the Boston Brahmin Peirces sympathized with the Confederacy.[22] No members of the Peirce family volunteered or enlisted. Peirce grew up in a home where white supremacy was taken for granted, and slavery was considered natural.[23] Peirce's father had described himself as a secessionist until the outbreak of the war, after which he became a Union partisan, providing donations to the Sanitary Commission, the leading Northern war charity. Peirce liked to use the following syllogism to illustrate the unreliability of traditional forms of logic (for the first premise arguably assumes the conclusion):[24]

All Men are equal in their political rights.
Negroes are Men.
Therefore, negroes are equal in political rights to whites.

He was elected a resident fellow of the American Academy of Arts and Sciences in January 1867.[25] The Survey sent him to Europe five times,[26] first in 1871 as part of a group sent to observe a solar eclipse. There, he sought out Augustus De Morgan, William Stanley Jevons, and William Kingdon Clifford,[27] British mathematicians and logicians whose turn of mind resembled his own.
From 1869 to 1872, he was employed as an assistant in Harvard's astronomical observatory, doing important work on determining the brightness of stars and the shape of the Milky Way.[28] In 1872 he founded the Metaphysical Club, a conversational philosophical club formed in January 1872 in Cambridge, Massachusetts, by Peirce, the future Supreme Court Justice Oliver Wendell Holmes Jr., and the philosopher and psychologist William James, amongst others; it dissolved in December 1872. Other members of the club included Chauncey Wright, John Fiske, Francis Ellingwood Abbot, Nicholas St. John Green, and Joseph Bangs Warner.[29] The discussions eventually birthed Peirce's notion of pragmatism. On April 20, 1877, he was elected a member of the National Academy of Sciences.[31] Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain frequency,[32] the kind of definition employed from 1960 to 1983. In 1879 Peirce developed the Peirce quincuncial projection, having been inspired by H. A. Schwarz's 1869 conformal transformation of a circle onto a polygon of n sides (known as the Schwarz–Christoffel mapping). During the 1880s, Peirce's indifference to bureaucratic detail waxed while his Survey work's quality and timeliness waned.
Peirce took years to write reports that he should have completed in months.[according to whom?] Meanwhile, he wrote entries, ultimately thousands, during 1883–1909 on philosophy, logic, science, and other subjects for the encyclopedic Century Dictionary.[33] In 1885, an investigation by the Allison Commission exonerated Peirce, but led to the dismissal of Superintendent Julius Hilgard and several other Coast Survey employees for misuse of public funds.[34] In 1891, Peirce resigned from the Coast Survey at Superintendent Thomas Corwin Mendenhall's request.[35] In 1879, Peirce was appointed lecturer in logic at Johns Hopkins University, which had strong departments in areas that interested him, such as philosophy (Royce and Dewey completed their PhDs at Hopkins), psychology (taught by G. Stanley Hall and studied by Joseph Jastrow, who coauthored a landmark empirical study with Peirce), and mathematics (taught by J. J. Sylvester, who came to admire Peirce's work on mathematics and logic). His Studies in Logic by Members of the Johns Hopkins University (1883) contained works by himself and Allan Marquand, Christine Ladd, Benjamin Ives Gilman, and Oscar Howard Mitchell,[36] several of whom were his graduate students.[7] Peirce's nontenured position at Hopkins was the only academic appointment he ever held. Brent documents something Peirce never suspected, namely that his efforts to obtain academic employment, grants, and scientific respectability were repeatedly frustrated by the covert opposition of a major Canadian-American scientist of the day, Simon Newcomb.[37] Newcomb had been a favourite student of Peirce's father; although "no doubt quite bright", "like Salieri in Peter Shaffer's Amadeus he also had just enough talent to recognize he was not a genius and just enough pettiness to resent someone who was".
Additionally "an intensely devout and literal-minded Christian of rigid moral standards", he was appalled by what he considered Peirce's personal shortcomings.[38] Peirce's efforts may also have been hampered by what Brent characterizes as "his difficult personality".[39] In contrast, Keith Devlin believes that Peirce's work was too far ahead of his time to be appreciated by the academic establishment of the day and that this played a large role in his inability to obtain a tenured position.[40] Peirce's personal life undoubtedly worked against his professional success. After his first wife, Harriet Melusina Fay ("Zina"), left him in 1875,[41] Peirce, while still legally married, became involved with Juliette, whose last name, given variously as Froissy and Pourtalai,[42] and nationality (she spoke French)[43] remain uncertain.[44] When his divorce from Zina became final in 1883, he married Juliette.[45] That year, Newcomb pointed out to a Johns Hopkins trustee that Peirce, while a Hopkins employee, had lived and traveled with a woman to whom he was not married; the ensuing scandal led to his dismissal in January 1884.[46] Over the years Peirce sought academic employment at various universities without success.[47] He had no children by either marriage.[48] In 1887, Peirce spent part of his inheritance from his parents to buy 2,000 acres (8 km2) of rural land near Milford, Pennsylvania, which never yielded an economic return.[49] There he had an 1854 farmhouse remodeled to his design.[50] The Peirces named the property "Arisbe". There they lived with few interruptions for the rest of their lives,[51] Charles writing prolifically, with much of his work remaining unpublished to this day (see Works). Living beyond their means soon led to grave financial and legal difficulties.[52] Charles spent much of his last two decades unable to afford heat in winter and subsisting on old bread donated by the local baker. Unable to afford new stationery, he wrote on the verso side of old manuscripts.
An outstanding warrant for assault and unpaid debts led to his being a fugitive in New York City for a while.[53] Several people, including his brother James Mills Peirce[54] and his neighbors, relatives of Gifford Pinchot, settled his debts and paid his property taxes and mortgage.[55] Peirce did some scientific and engineering consulting and wrote much for meager pay, mainly encyclopedic dictionary entries and reviews for The Nation (with whose editor, Wendell Phillips Garrison, he became friendly). He did translations for the Smithsonian Institution, at its director Samuel Langley's instigation. Peirce also did substantial mathematical calculations for Langley's research on powered flight. Hoping to make money, Peirce tried inventing.[56] He began but did not complete several books.[57] In 1888, President Grover Cleveland appointed him to the Assay Commission.[58] From 1890 on, he had a friend and admirer in Judge Francis C. Russell of Chicago,[59] who introduced Peirce to editor Paul Carus and owner Edward C. Hegeler of the pioneering American philosophy journal The Monist, which eventually published at least 14 articles by Peirce.[60] He wrote many texts in James Mark Baldwin's Dictionary of Philosophy and Psychology (1901–1905); half of those credited to him appear to have actually been written by Christine Ladd-Franklin under his supervision.[61] He applied in 1902 to the newly formed Carnegie Institution for a grant to write a systematic book describing his life's work.
The application was doomed; his nemesis, Newcomb, served on the Carnegie Institution executive committee, and its president had been president of Johns Hopkins at the time of Peirce's dismissal.[62] The one who did the most to help Peirce in these desperate times was his old friend William James, who dedicated his Will to Believe (1897) to Peirce and arranged for Peirce to be paid to give two series of lectures at or near Harvard (1898 and 1903).[63] Most important, each year from 1907 until James's death in 1910, James wrote to his friends in the Boston intelligentsia to request financial aid for Peirce; the fund continued even after James died. Peirce reciprocated by designating James's eldest son as his heir should Juliette predecease him.[64] It has been believed that this was also why Peirce used "Santiago" ("St. James" in English) as a middle name, but he appeared in print as early as 1890 as Charles Santiago Peirce. (See Charles Santiago Sanders Peirce for discussion and references.) Peirce died destitute in Milford, Pennsylvania, twenty years before his widow. Juliette Peirce kept the urn with Peirce's ashes at Arisbe. In 1934, Pennsylvania Governor Gifford Pinchot arranged for Juliette's burial in Milford Cemetery. The urn with Peirce's ashes was interred with Juliette.[c] Bertrand Russell (1959) wrote "Beyond doubt [...] he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever".[12] Russell and Whitehead's Principia Mathematica, published from 1910 to 1913, does not mention Peirce (Peirce's work was not widely known until later).[65] A. N. Whitehead, while reading some of Peirce's unpublished manuscripts soon after arriving at Harvard in 1924, was struck by how Peirce had anticipated his own "process" thinking. (On Peirce and process metaphysics, see Lowe 1964.[28]) Karl Popper viewed Peirce as "one of the greatest philosophers of all times".[66] Yet Peirce's achievements were not immediately recognized.
His imposing contemporaries William James and Josiah Royce[67] admired him, and Cassius Jackson Keyser at Columbia and C. K. Ogden wrote about Peirce with respect, but to no immediate effect. The first scholar to give Peirce his considered professional attention was Royce's student Morris Raphael Cohen, the editor of an anthology of Peirce's writings entitled Chance, Love, and Logic (1923) and the author of the first bibliography of Peirce's scattered writings.[68] John Dewey studied under Peirce at Johns Hopkins.[7] From 1916 onward, Dewey's writings repeatedly mention Peirce with deference. His 1938 Logic: The Theory of Inquiry is much influenced by Peirce.[69] The publication of the first six volumes of the Collected Papers (1931–1935) was the most important event to date in Peirce studies and one that Cohen made possible by raising the needed funds;[70] however, it did not prompt an outpouring of secondary studies. The editors of those volumes, Charles Hartshorne and Paul Weiss, did not become Peirce specialists. Early landmarks of the secondary literature include the monographs by Buchler (1939), Feibleman (1946), and Goudge (1950), the 1941 PhD thesis by Arthur W. Burks (who went on to edit volumes 7 and 8), and the studies edited by Wiener and Young (1952). The Charles S. Peirce Society was founded in 1946. Its Transactions, an academic quarterly specializing in Peirce's pragmatism and American philosophy, has appeared since 1965.[71] (See Phillips 2014, 62 for discussion of Peirce and Dewey relative to transactionalism.) By 1943 such was Peirce's reputation, in the US at least, that Webster's Biographical Dictionary said that Peirce was "now regarded as the most original thinker and greatest logician of his time".[72] In 1949, while doing unrelated archival work, the historian of mathematics Carolyn Eisele (1902–2000) chanced on an autograph letter by Peirce. So began her forty years of research on Peirce, "the mathematician and scientist", culminating in Eisele (1976, 1979, 1985).
In 1952, the Scottish philosopher W. B. Gallie published his book Peirce and Pragmatism,[73] which introduced the work of Peirce to an international readership. A. J. Ayer, the English philosopher, provided the editorial foreword to Gallie's book, in which he credited Peirce's philosophy as being "not only of great historical significance, as one of the original sources of American pragmatism, but also extremely important in itself". Ayer concluded: "it is clear from Professor Gallie's exposition of his doctrines that he is a philosopher from whom we still have much to learn."[74] Beginning around 1960, Max Fisch (1900–1995),[75] the philosopher and historian of ideas, emerged as an authority on Peirce (Fisch, 1986).[76] He included many of his relevant articles in a survey (Fisch 1986: 422–448) of the impact of Peirce's thought through 1983. Peirce has gained an international following, marked by university research centers devoted to Peirce studies and pragmatism in Brazil (CeneP/CIEP and Centro de Estudos de Pragmatismo), Finland (HPRC and Commens), Germany (Wirth's group, Hoffman's and Otte's group, and Deuser's and Härle's group[77]), France (L'I.R.S.C.E.), Spain (GEP), and Italy (CSP). His writings have been translated into several languages, including German, French, Finnish, Spanish, and Swedish. Since 1950, there have been French, Italian, Spanish, British, and Brazilian Peirce scholars of note. For many years, the North American philosophy department most devoted to Peirce was the University of Toronto, thanks in part to the leadership of Thomas Goudge and David Savan. In recent years, U.S. Peirce scholars have clustered at Indiana University – Purdue University Indianapolis, home of the Peirce Edition Project (PEP), and Pennsylvania State University. Currently, considerable interest is being taken in Peirce's ideas by researchers wholly outside the arena of academic philosophy.
The interest comes from industry, business, technology, intelligence organizations, and the military; it has resulted in a substantial number of agencies, institutes, businesses, and laboratories in which research into and development of Peircean concepts is vigorously undertaken. In recent years, Peirce's trichotomy of signs has been exploited by a growing number of practitioners for marketing and design tasks. John Deely writes that Peirce was the last of the "moderns" and "first of the postmoderns". He lauds Peirce's doctrine of signs as a contribution to the dawn of the Postmodern epoch. Deely additionally comments that "Peirce stands ... in a position analogous to the position occupied by Augustine as last of the Western Fathers and first of the medievals".[78] Peirce's reputation rests largely on academic papers published in American scientific and scholarly journals such as Proceedings of the American Academy of Arts and Sciences, the Journal of Speculative Philosophy, The Monist, Popular Science Monthly, the American Journal of Mathematics, Memoirs of the National Academy of Sciences, The Nation, and others. See Articles by Peirce, published in his lifetime for an extensive list with links to them online. The only full-length book (neither extract nor pamphlet) that Peirce authored and saw published in his lifetime[79] was Photometric Researches (1878), a 181-page monograph on the applications of spectrographic methods to astronomy. While at Johns Hopkins, he edited Studies in Logic (1883), containing chapters by himself and his graduate students. Besides lectures during his years (1879–1884) as lecturer in logic at Johns Hopkins, he gave at least nine series of lectures, many now published; see Lectures by Peirce. After Peirce's death, Harvard University obtained from Peirce's widow the papers found in his study, but did not microfilm them until 1964.
Only after Richard Robin (1967)[80] catalogued this Nachlass did it become clear that Peirce had left approximately 1,650 unpublished manuscripts, totaling over 100,000 pages,[81] mostly still unpublished except on microfilm. On the vicissitudes of Peirce's papers, see Houser (1989).[82] Reportedly the papers remain in unsatisfactory condition.[83] The first published anthology of Peirce's articles was the one-volume Chance, Love and Logic: Philosophical Essays, edited by Morris Raphael Cohen, 1923, still in print. Other one-volume anthologies were published in 1940, 1957, 1958, 1972, 1994, and 2009, most still in print. The main posthumous editions[84] of Peirce's works in their long trek to light, often multi-volume, and some still in print, have included:

1931–1958: Collected Papers of Charles Sanders Peirce (CP), 8 volumes, includes many published works, along with a selection of previously unpublished work and a smattering of his correspondence. This long-time standard edition drawn from Peirce's work from the 1860s to 1913 remains the most comprehensive survey of his prolific output from 1893 to 1913. It is organized thematically, but texts (including lecture series) are often split up across volumes, while texts from various stages in Peirce's development are often combined, requiring frequent visits to editors' notes.[85] Edited (1–6) by Charles Hartshorne and Paul Weiss and (7–8) by Arthur Burks, in print and online.

1975–1987: Charles Sanders Peirce: Contributions to The Nation, 4 volumes, includes Peirce's more than 300 reviews and articles published 1869–1908 in The Nation. Edited by Kenneth Laine Ketner and James Edward Cook, online.

1976: The New Elements of Mathematics by Charles S. Peirce, 4 volumes in 5, included many previously unpublished Peirce manuscripts on mathematical subjects, along with Peirce's important published mathematical articles. Edited by Carolyn Eisele, back in print.

1977: Semiotic and Significs: The Correspondence between C. S.
Peirce and Victoria Lady Welby(2nd edition 2001), included Peirce's entire correspondence (1903–1912) withVictoria, Lady Welby. Peirce's other published correspondence is largely limited to the 14 letters included in volume 8 of theCollected Papers, and the 20-odd pre-1890 items included so far in theWritings. Edited by Charles S. Hardwick with James Cook, out of print. 1982–now:Writings of Charles S. Peirce, A Chronological Edition(W), Volumes 1–6 & 8, of a projected 30. The limited coverage, and defective editing and organization, of theCollected Papersled Max Fisch and others in the 1970s to found thePeirce Edition Project(PEP), whose mission is to prepare a more complete critical chronological edition. Only seven volumes have appeared to date, but they cover the period from 1859 to 1892, when Peirce carried out much of his best-known work.Writings of Charles S. Peirce, 8 was published in November 2010; and work continues onWritings of Charles S. Peirce, 7, 9, and 11. In print and online. 1985:Historical Perspectives on Peirce's Logic of Science: A History of Science, 2 volumes. Auspitz has said,[86]"The extent of Peirce's immersion in the science of his day is evident in his reviews in theNation[...] and in his papers, grant applications, and publishers' prospectuses in the history and practice of science", referring latterly toHistorical Perspectives. Edited by Carolyn Eisele, back in print. 1992:Reasoning and the Logic of Thingscollects in one place Peirce's 1898 series of lectures invited by William James. Edited by Kenneth Laine Ketner, with commentary byHilary Putnam, in print. 1992–1998:The Essential Peirce(EP), 2 volumes, is an important recent sampler of Peirce's philosophical writings. Edited (1) by Nathan Hauser and Christian Kloesel and (2) byPeirce Edition Projecteditors, in print. 
1997: Pragmatism as a Principle and Method of Right Thinking collects Peirce's 1903 Harvard "Lectures on Pragmatism" in a study edition, including drafts, of Peirce's lecture manuscripts, which had been previously published in abridged form; the lectures now also appear in The Essential Peirce, 2. Edited by Patricia Ann Turisi, in print.

2010: Philosophy of Mathematics: Selected Writings collects important writings by Peirce on the subject, many not previously in print. Edited by Matthew E. Moore, in print.

Peirce's most important work in pure mathematics was in logical and foundational areas. He also worked on linear algebra, matrices, various geometries, topology and Listing numbers, Bell numbers, graphs, the four-color problem, and the nature of continuity. He worked on applied mathematics in economics, engineering, and map projections, and was especially active in probability and statistics.[87]

Peirce made a number of striking discoveries in formal logic and foundational mathematics, nearly all of which came to be appreciated only long after he died:

In 1860,[88] he suggested a cardinal arithmetic for infinite numbers, years before any work by Georg Cantor (who completed his dissertation in 1867) and without access to Bernard Bolzano's 1851 (posthumous) Paradoxien des Unendlichen.

In 1880–1881,[89] he showed how Boolean algebra could be done via a repeated sufficient single binary operation (logical NOR), anticipating Henry M. Sheffer by 33 years. (See also De Morgan's Laws.)

In 1881,[90] he set out the axiomatization of natural number arithmetic, a few years before Richard Dedekind and Giuseppe Peano. In the same paper Peirce gave, years before Dedekind, the first purely cardinal definition of a finite set in the sense now known as "Dedekind-finite", and implied by the same stroke an important formal definition of an infinite set (Dedekind-infinite), as a set that can be put into a one-to-one correspondence with one of its proper subsets.
In 1885,[91] he distinguished between first-order and second-order quantification.[92][d] In the same paper he set out what can be read as the first (primitive) axiomatic set theory, anticipating Zermelo by about two decades (Brady 2000,[93] pp. 132–133).

In 1886, he saw that Boolean calculations could be carried out via electrical switches,[13] anticipating Claude Shannon by more than 50 years.

By the later 1890s[94] he was devising existential graphs, a diagrammatic notation for the predicate calculus. Based on them are John F. Sowa's conceptual graphs and Sun-Joo Shin's diagrammatic reasoning.

Peirce wrote drafts for an introductory textbook, with the working title The New Elements of Mathematics, that presented mathematics from an original standpoint. Those drafts and many other of his previously unpublished mathematical manuscripts finally appeared[87] in The New Elements of Mathematics by Charles S. Peirce (1976), edited by mathematician Carolyn Eisele.

Peirce agreed with Auguste Comte in regarding mathematics as more basic than philosophy and the special sciences (of nature and mind). Peirce classified mathematics into three subareas: (1) mathematics of logic, (2) discrete series, and (3) pseudo-continua (as he called them, including the real numbers) and continua. Influenced by his father Benjamin, Peirce argued that mathematics studies purely hypothetical objects and is not just the science of quantity but is more broadly the science which draws necessary conclusions; that mathematics aids logic, not vice versa; and that logic itself is part of philosophy and is the science about drawing conclusions, necessary and otherwise.[95]

Peirce held that science achieves statistical probabilities, not certainties, and that spontaneity ("absolute chance") is real (see Tychism on his view).
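Peirce's point that a single binary operation suffices for all of Boolean algebra can be verified directly. The sketch below is a modern illustration, not Peirce's own notation (the helper names are invented for the example): it builds NOT, OR, and AND from NOR alone and checks them against Python's built-in connectives.

```python
def nor(a, b):
    """True only when both inputs are false; one operation suffices."""
    return not (a or b)

# The other Boolean connectives expressed with NOR alone:
def not_(a):
    return nor(a, a)

def or_(a, b):
    return nor(nor(a, b), nor(a, b))

def and_(a, b):
    return nor(nor(a, a), nor(b, b))

# Check against Python's built-in operators on every input pair.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
```

The same construction works with NAND (the Sheffer stroke), which is the dual single sufficient operation.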
Most of his statistical writings promote the frequency interpretation of probability (objective ratios of cases), and many of his writings express skepticism about (and criticize the use of) probability when such models are not based on objective randomization.[e] Though Peirce was largely a frequentist, his possible world semantics introduced the "propensity" theory of probability before Karl Popper.[96][97] Peirce (sometimes with Joseph Jastrow) investigated the probability judgments of experimental subjects, "perhaps the very first" elicitation and estimation of subjective probabilities in experimental psychology and (what came to be called) Bayesian statistics.[2]

Peirce formulated modern statistics in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). With a repeated measures design, Charles Sanders Peirce and Joseph Jastrow introduced blinded, controlled randomized experiments in 1884[98] (Hacking 1990:205)[1] (before Ronald A. Fisher).[2] He invented optimal design for experiments on gravity, in which he "corrected the means". He used correlation and smoothing. Peirce extended the work on outliers by Benjamin Peirce, his father.[2] He introduced the terms "confidence" and "likelihood" (before Jerzy Neyman and Fisher). (See Stephen Stigler's historical books and Ian Hacking 1990.[1])

Peirce was a working scientist for 30 years, and arguably was a professional philosopher only during the five years he lectured at Johns Hopkins. He learned philosophy mainly by reading, each day, a few pages of Immanuel Kant's Critique of Pure Reason, in the original German, while a Harvard undergraduate. His writings bear on a wide array of disciplines, including mathematics, logic, philosophy, statistics, astronomy,[28] metrology,[3] geodesy, experimental psychology,[4] economics,[5] linguistics,[6] and the history and philosophy of science.
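The core of the blinded randomized design can be sketched in miniature. The sketch below is a loose illustration, not Peirce and Jastrow's actual protocol (the function name and details are hypothetical): it randomizes, trial by trial, which of two nearly equal stimuli is presented first, so that neither subject nor experimenter can anticipate the order.

```python
import random

def randomized_trial_schedule(n_trials, seed=0):
    """Fair randomization of presentation order on each trial.

    The unpredictable order is what blinds the subject: any
    systematic guessing strategy is defeated by the coin flip.
    """
    rng = random.Random(seed)  # seeded here only for reproducibility
    return [rng.choice(["A-first", "B-first"]) for _ in range(n_trials)]

schedule = randomized_trial_schedule(10)
```

Objective randomization of this kind is exactly the condition under which, on Peirce's frequentist view, probability statements about the experiment's outcomes are warranted.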
This work has enjoyed renewed interest and approval, a revival inspired not only by his anticipations of recent scientific developments but also by his demonstration of how philosophy can be applied effectively to human problems.

Peirce's philosophy includes a pervasive three-category system; belief that truth is immutable and is both independent from actual opinion (fallibilism) and discoverable (no radical skepticism); logic as formal semiotic on signs, on arguments, and on inquiry's ways—including philosophical pragmatism (which he founded), critical common-sensism, and scientific method—and, in metaphysics: Scholastic realism, e.g. John Duns Scotus, belief in God, freedom, and at least an attenuated immortality, objective idealism, and belief in the reality of continuity and of absolute chance, mechanical necessity, and creative love.[99] In his work, fallibilism and pragmatism may seem to work somewhat like skepticism and positivism, respectively, in others' work. However, for Peirce, fallibilism is balanced by an anti-skepticism and is a basis for belief in the reality of absolute chance and of continuity,[100] and pragmatism commits one to anti-nominalist belief in the reality of the general (CP 5.453–457).

For Peirce, First Philosophy, which he also called cenoscopy, is less basic than mathematics and more basic than the special sciences (of nature and mind). It studies positive phenomena in general, phenomena available to any person at any waking moment, and does not settle questions by resorting to special experiences.[101] He divided such philosophy into (1) phenomenology (which he also called phaneroscopy or categorics), (2) normative sciences (esthetics, ethics, and logic), and (3) metaphysics; his views on them are discussed in order below.
Peirce did not write extensively in aesthetics and ethics,[102] but came by 1902 to hold that aesthetics, ethics, and logic, in that order, comprise the normative sciences.[103] He characterized aesthetics as the study of the good (grasped as the admirable), and thus of the ends governing all conduct and thought.[104]

Umberto Eco described Peirce as "undoubtedly the greatest unpublished writer of our generation",[105] and Karl Popper regarded him as "one of the greatest philosophers of all time".[106] The Internet Encyclopedia of Philosophy says of Peirce that although "long considered an eccentric figure whose contribution to pragmatism was to provide its name and whose importance was as an influence upon James and Dewey, Peirce's significance in his own right is now largely accepted."[107]

Peirce's recipe for pragmatic thinking, which he called pragmatism and, later, pragmaticism, is recapitulated in several versions of the so-called pragmatic maxim. Here is one of his more emphatic reiterations of it:

Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object.

As a movement, pragmatism began in the early 1870s in discussions among Peirce, William James, and others in the Metaphysical Club. James among others regarded some articles by Peirce such as "The Fixation of Belief" (1877) and especially "How to Make Our Ideas Clear" (1878) as foundational to pragmatism.[108] Peirce (CP 5.11–12), like James (Pragmatism: A New Name for Some Old Ways of Thinking, 1907), saw pragmatism as embodying familiar attitudes, in philosophy and elsewhere, elaborated into a new deliberate method for fruitful thinking about problems. Peirce differed from James and the early John Dewey, in some of their tangential enthusiasms, in being decidedly more rationalistic and realistic, in several senses of those terms, throughout the preponderance of his own philosophical moods.
In 1905 Peirce coined the new name pragmaticism "for the precise purpose of expressing the original definition", saying that "all went happily" with James's and F. C. S. Schiller's variant uses of the old name "pragmatism" and that he coined the new name because of the old name's growing use in "literary journals, where it gets abused". Yet he cited as causes, in a 1906 manuscript, his differences with James and Schiller and, in a 1908 publication, his differences with James as well as literary author Giovanni Papini's declaration of pragmatism's indefinability. Peirce in any case regarded his views that truth is immutable and infinity is real as being opposed by the other pragmatists, but he remained allied with them on other issues.[109]

Pragmatism begins with the idea that belief is that on which one is prepared to act. Peirce's pragmatism is a method of clarification of conceptions of objects. It equates any conception of an object to a conception of that object's effects, to a general extent of the effects' conceivable implications for informed practice. It is a method of sorting out conceptual confusions occasioned, for example, by distinctions that make (sometimes needed) formal yet not practical differences. He formulated both pragmatism and statistical principles as aspects of scientific logic, in his "Illustrations of the Logic of Science" series of articles. In the second one, "How to Make Our Ideas Clear", Peirce discussed three grades of clearness of conception:

By way of example of how to clarify conceptions, he addressed conceptions about truth and the real as questions of the presuppositions of reasoning in general. In clearness's second grade (the "nominal" grade), he defined truth as a sign's correspondence to its object, and the real as the object of such correspondence, such that truth and the real are independent of that which you or I or any actual, definite community of inquirers think.
After that needful but confined step, next in clearness's third grade (the pragmatic, practice-oriented grade) he defined truth as that opinion which would be reached, sooner or later but still inevitably, by research taken far enough, such that the real does depend on that ideal final opinion—a dependence to which he appeals in theoretical arguments elsewhere, for instance for the long-run validity of the rule of induction.[110] Peirce argued that even to argue against the independence and discoverability of truth and the real is to presuppose that there is, about that very question under argument, a truth with just such independence and discoverability.

Peirce said that a conception's meaning consists in "all general modes of rational conduct" implied by "acceptance" of the conception—that is, if one were to accept, first of all, the conception as true, then what could one conceive to be consequent general modes of rational conduct by all who accept the conception as true?—the whole of such consequent general modes is the whole meaning. His pragmatism does not equate a conception's meaning, its intellectual purport, with the conceived benefit or cost of the conception itself, like a meme (or, say, propaganda), outside the perspective of its being true, nor, since a conception is general, is its meaning equated with any definite set of actual consequences or upshots corroborating or undermining the conception or its worth. His pragmatism also bears no resemblance to "vulgar" pragmatism, which misleadingly connotes a ruthless and Machiavellian search for mercenary or political advantage.
Instead the pragmatic maxim is the heart of his pragmatism as a method of experimentational mental reflection[111] arriving at conceptions in terms of conceivable confirmatory and disconfirmatory circumstances—a method hospitable to the formation of explanatory hypotheses, and conducive to the use and improvement of verification.[112]

Peirce's pragmatism, as method and theory of definitions and conceptual clearness, is part of his theory of inquiry,[113] which he variously called speculative, general, formal or universal rhetoric, or simply methodeutic.[114] He applied his pragmatism as a method throughout his work.

In "The Fixation of Belief" (1877), Peirce gives his take on the psychological origin and aim of inquiry. On his view, individuals are motivated to inquiry by desire to escape the feelings of anxiety and unease which Peirce takes to be characteristic of the state of doubt. Doubt is described by Peirce as an "uneasy and dissatisfied state from which we struggle to free ourselves and pass into the state of belief." Peirce uses words like "irritation" to describe the experience of being in doubt and to explain why he thinks we find such experiences to be motivating. The irritating feeling of doubt is appeased, Peirce says, through our efforts to achieve a settled state of satisfaction with what we land on as our answer to the question which led to that doubt in the first place. This settled state, namely belief, is described by Peirce as "a calm and satisfactory state which we do not wish to avoid." Our efforts to achieve the satisfaction of belief, by whichever methods we may pursue, are what Peirce calls "inquiry". Four methods which Peirce describes as having been actually pursued throughout the history of thought are summarized below in the section after next.
Critical common-sensism,[115] treated by Peirce as a consequence of his pragmatism, is his combination of Thomas Reid's common-sense philosophy with a fallibilism that recognizes that propositions of our more or less vague common sense, now indubitable, may later come into question, for example because of transformations of our world through science. It includes efforts to raise genuine doubts in tests for a core group of common indubitables that change slowly, if at all.

In "The Fixation of Belief" (1877), Peirce described inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubt born of surprise, disagreement, and the like, and to reach a secure belief, belief being that on which one is prepared to act. That let Peirce frame scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal, quarrelsome, or hyperbolic doubt, which he held to be fruitless. Peirce sketched four methods of settling opinion, ordered from least to most successful:

Peirce held that, in practical affairs, slow and stumbling ratiocination is often dangerously inferior to instinct and traditional sentiment, and that the scientific method is best suited to theoretical research,[116] which in turn should not be trammeled by the other methods and practical ends; reason's "first rule"[117] is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry. Scientific method excels over the others finally by being deliberately designed to arrive—eventually—at the most secure beliefs, upon which the most successful practices can be based.
Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential conduct correctly to its given goal, and wed themselves to the scientific method.

Insofar as clarification by pragmatic reflection suits explanatory hypotheses and fosters predictions and testing, pragmatism points beyond the usual duo of foundational alternatives: deduction from self-evident truths, or rationalism; and induction from experiential phenomena, or empiricism. Based on his critique of three modes of argument and different from either foundationalism or coherentism, Peirce's approach seeks to justify claims by a three-phase dynamic of inquiry:

Thereby, Peirce devised an approach to inquiry far more solid than the flatter image of inductive generalization simpliciter, which is a mere re-labeling of phenomenological patterns. Peirce's pragmatism was the first time the scientific method was proposed as an epistemology for philosophical questions. A theory that succeeds better than its rivals in predicting and controlling our world is said to be nearer the truth. This is an operational notion of truth used by scientists.

Peirce extracted the pragmatic model or theory of inquiry from its raw materials in classical logic and refined it in parallel with the early development of symbolic logic to address problems about the nature of scientific reasoning. Abduction, deduction, and induction make incomplete sense in isolation from one another but comprise a cycle understandable as a whole insofar as they collaborate toward the common end of inquiry. In the pragmatic way of thinking about conceivable practical implications, every thing has a purpose, and, as possible, its purpose should first be denoted.
Abduction hypothesizes an explanation for deduction to clarify into implications to be tested so that induction can evaluate the hypothesis, in the struggle to move from troublesome uncertainty to more secure belief. No matter how traditional and needful it is to study the modes of inference in abstraction from one another, the integrity of inquiry strongly limits the effective modularity of its principal components.

Peirce's outline of the scientific method in §III–IV of "A Neglected Argument"[118] is summarized below (except as otherwise noted). There he also reviewed plausibility and inductive precision (issues of critique of arguments).

Peirce drew on the methodological implications of the four incapacities—no genuine introspection, no intuition in the sense of non-inferential cognition, no thought but in signs, and no conception of the absolutely incognizable—to attack philosophical Cartesianism, of which he said that:[125]

On May 14, 1867, the 27-year-old Peirce presented a paper entitled "On a New List of Categories" to the American Academy of Arts and Sciences, which published it the following year. The paper outlined a theory of predication, involving three universal categories that Peirce developed in response to reading Aristotle, Immanuel Kant, and G. W. F. Hegel, categories that Peirce applied throughout his work for the rest of his life.[20] Peirce scholars generally regard the "New List" as foundational or breaking the ground for Peirce's "architectonic", his blueprint for a pragmatic philosophy. In the categories one will discern, concentrated, the pattern that one finds formed by the three grades of clearness in "How To Make Our Ideas Clear" (1878 paper foundational to pragmatism), and in numerous other trichotomies in his work. "On a New List of Categories" is cast as a Kantian deduction; it is short but dense and difficult to summarize.
The following table is compiled from that and later works.[126] In 1893, Peirce restated most of it for a less advanced audience.[127]

*Note: An interpretant is an interpretation (human or otherwise) in the sense of the product of an interpretive process.

In 1918, the logician C. I. Lewis wrote, "The contributions of C.S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century."[134]

Beginning with his first paper on the "Logic of Relatives" (1870), Peirce extended the theory of relations pioneered by Augustus De Morgan.[h] Beginning in 1940, Alfred Tarski and his students rediscovered aspects of Peirce's larger vision of relational logic, developing the perspective of relation algebra.

Relational logic gained applications. In mathematics, it influenced the abstract analysis of E. H. Moore and the lattice theory of Garrett Birkhoff. In computer science, the relational model for databases was developed with Peircean ideas in work of Edgar F. Codd, who was a doctoral student[135] of Arthur W. Burks, a Peirce scholar. In economics, relational logic was used by Frank P. Ramsey, John von Neumann, and Paul Samuelson to study preferences and utility and by Kenneth J. Arrow in Social Choice and Individual Values, following Arrow's association with Tarski at City College of New York.

On Peirce and his contemporaries Ernst Schröder and Gottlob Frege, Hilary Putnam (1982)[92] documented that Frege's work on the logic of quantifiers had little influence on his contemporaries, although it was published four years before the work of Peirce and his student Oscar Howard Mitchell. Putnam found that mathematicians and logicians learned about the logic of quantifiers through the independent work of Peirce and Mitchell, particularly through Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation"[91] (1885), published in the premier American mathematical journal of the day, and cited by Peano and Schröder, among others, who ignored Frege.
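The relational logic at issue here centers on operations over binary relations, such as the relative product (composition) and the converse. A minimal sketch, with hypothetical helper names and a toy relation invented for illustration (the operations themselves are the standard ones of the relation algebra tradition mentioned above), representing relations as sets of ordered pairs:

```python
def compose(r, s):
    """Relative product r;s: (a, c) holds iff some b links a to c."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def converse(r):
    """Converse of r: each pair with its order reversed."""
    return {(b, a) for (a, b) in r}

# A toy 'parent' relation over a three-person family.
parent = {("alice", "bob"), ("bob", "carol")}

grandparent = compose(parent, parent)   # relative product of parent with itself
child = converse(parent)                # 'child of' is the converse of 'parent of'
```

Here `grandparent` comes out as `{("alice", "carol")}`: composition chains the two parent links, which is the kind of calculation with relatives that De Morgan and Peirce formalized.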
They also adopted and modified Peirce's notations, typographical variants of those now used. Peirce apparently was ignorant of Frege's work, despite their overlapping achievements in logic, philosophy of language, and the foundations of mathematics.

Peirce's work on formal logic had admirers besides Ernst Schröder:

A philosophy of logic, grounded in his categories and semiotic, can be extracted from Peirce's writings and, along with Peirce's logical work more generally, is exposited and defended in Hilary Putnam (1982);[92] the Introduction in Nathan Houser et al. (1997);[137] and Randall Dipert's chapter in Cheryl Misak (2004).[138]

Peirce regarded logic per se as a division of philosophy, as a normative science based on esthetics and ethics, as more basic than metaphysics,[117] and as "the art of devising methods of research".[139] More generally, as inference, "logic is rooted in the social principle", since inference depends on a standpoint that, in a sense, is unlimited.[140] Peirce called (with no sense of deprecation) "mathematics of logic" much of the kind of thing which, in current research and applications, is called simply "logic". He was productive in both (philosophical) logic and logic's mathematics, which were connected deeply in his work and thought.

Peirce argued that logic is formal semiotic: the formal study of signs in the broadest sense, not only signs that are artificial, linguistic, or symbolic, but also signs that are semblances or are indexical such as reactions. Peirce held that "all this universe is perfused with signs, if it is not composed exclusively of signs",[141] along with their representational and inferential relations. He argued that, since all thought takes time, all thought is in signs[142] and sign processes ("semiosis") such as the inquiry process.
He divided logic into: (1) speculative grammar, or stechiology, on how signs can be meaningful and, in relation to that, what kinds of signs there are, how they combine, and how some embody or incorporate others; (2) logical critic, or logic proper, on the modes of inference; and (3) speculative or universal rhetoric, or methodeutic,[114] the philosophical theory of inquiry, including pragmatism.

In his "F.R.L." [First Rule of Logic] (1899), Peirce states that the first, and "in one sense, the sole", rule of reason is that, to learn, one needs to desire to learn and desire it without resting satisfied with that which one is inclined to think.[117] So, the first rule is, to wonder. Peirce proceeds to a critical theme in research practices and the shaping of theories:

...there follows one corollary which itself deserves to be inscribed upon every wall of the city of philosophy: Do not block the way of inquiry.

Peirce adds that method and economy are best in research but no outright sin inheres in trying any theory, in the sense that the investigation via its trial adoption can proceed unimpeded and undiscouraged, and that "the one unpardonable offence" is a philosophical barricade against truth's advance, an offense to which "metaphysicians in all ages have shown themselves the most addicted". Peirce in many writings holds that logic precedes metaphysics (ontological, religious, and physical).

Peirce goes on to list four common barriers to inquiry: (1) assertion of absolute certainty; (2) maintaining that something is absolutely unknowable; (3) maintaining that something is absolutely inexplicable because absolutely basic or ultimate; (4) holding that perfect exactitude is possible, especially such as to quite preclude unusual and anomalous phenomena. To refuse absolute theoretical certainty is the heart of fallibilism, which Peirce unfolds into refusals to set up any of the listed barriers.
Peirce elsewhere argues (1897) that logic's presupposition of fallibilism leads at length to the view that chance and continuity are very real (tychism and synechism).[100]

The First Rule of Logic pertains to the mind's presuppositions in undertaking reason and logic; presuppositions, for instance, that truth and the real do not depend on your or my opinion of them but do depend on representational relation and consist in the destined end in investigation taken far enough (see below). He describes such ideas as, collectively, hopes which, in particular cases, one is unable seriously to doubt.[143]

In three articles in 1868–1869,[142][125][144] Peirce rejected mere verbal or hyperbolic doubt and first or ultimate principles, and argued that we have (as he numbered them[125]):

(The above sense of the term "intuition" is almost Kant's, said Peirce. It differs from the current looser sense that encompasses instinctive or anyway half-conscious inference.)

Peirce argued that those incapacities imply the reality of the general and of the continuous, the validity of the modes of reasoning,[144] and the falsity of philosophical Cartesianism (see below). Peirce rejected the conception (usually ascribed to Kant) of the unknowable thing-in-itself[125] and later said that to "dismiss make-believes" is a prerequisite for pragmatism.[145]

Peirce sought, through his wide-ranging studies through the decades, formal philosophical ways to articulate thought's processes, and also to explain the workings of science. These inextricably entangled questions of a dynamics of inquiry rooted in nature and nurture led him to develop his semiotic with very broadened conceptions of signs and inference, and, as its culmination, a theory of inquiry for the task of saying 'how science works' and devising research methods.
This would be logic by the medieval definition taught for centuries: art of arts, science of sciences, having the way to the principles of all methods.[139] Influences radiate from points on parallel lines of inquiry in Aristotle's work, in such loci as: the basic terminology of psychology in On the Soul; the founding description of sign relations in On Interpretation; and the differentiation of inference into three modes that are commonly translated into English as abduction, deduction, and induction, in the Prior Analytics, as well as inference by analogy (called paradeigma by Aristotle), which Peirce regarded as involving the other three modes.

Peirce began writing on semiotic in the 1860s, around the time when he devised his system of three categories. He called it both semiotic and semeiotic. Both are current in singular and plural. He based it on the conception of a triadic sign relation, and defined semiosis as "action, or influence, which is, or involves, a cooperation of three subjects, such as a sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between pairs".[146] As to signs in thought, Peirce emphasized the reverse: "To say, therefore, that thought cannot happen in an instant, but requires a time, is but another way of saying that every thought must be interpreted in another, or that all thought is in signs."[142]

Peirce held that all thought is in signs, issuing in and from interpretation, where sign is the word for the broadest variety of conceivable semblances, diagrams, metaphors, symptoms, signals, designations, symbols, texts, even mental concepts and ideas, all as determinations of a mind or quasi-mind, that which at least functions like a mind, as in the work of crystals or bees[147]—the focus is on sign action in general rather than on psychology, linguistics, or social studies (fields which he also pursued). Inquiry is a kind of inference process, a manner of thinking and semiosis.
Global divisions of ways for phenomena to stand as signs, and the subsumption of inquiry and thinking within inference as a sign process, enable the study of inquiry on semiotics' three levels:

Peirce uses examples often from common experience, but defines and discusses such things as assertion and interpretation in terms of philosophical logic. In a formal vein, Peirce said:

On the Definition of Logic. Logic is formal semiotic. A sign is something, A, which brings something, B, its interpretant sign, determined or created by it, into the same sort of correspondence (or a lower implied sort) with something, C, its object, as that in which itself stands to C. This definition no more involves any reference to human thought than does the definition of a line as the place within which a particle lies during a lapse of time. It is from this definition that I deduce the principles of logic by mathematical reasoning, and by mathematical reasoning that, I aver, will support criticism of Weierstrassian severity, and that is perfectly evident. The word "formal" in the definition is also defined.[148]

Peirce's theory of signs is known to be one of the most complex semiotic theories due to its generalistic claim. Anything is a sign—not absolutely as itself, but instead in some relation or other. The sign relation is the key. It defines three roles encompassing (1) the sign, (2) the sign's subject matter, called its object, and (3) the sign's meaning or ramification as formed into a kind of effect called its interpretant (a further sign, for example a translation). It is an irreducible triadic relation, according to Peirce. The roles are distinct even when the things that fill those roles are not. The roles are but three; a sign of an object leads to one or more interpretants, and, as signs, they lead to further interpretants.
Extension × intension = information. Two traditional approaches to sign relation, necessary though insufficient, are the way of extension (a sign's objects, also called breadth, denotation, or application) and the way of intension (the objects' characteristics, qualities, attributes referenced by the sign, also called depth, comprehension, significance, or connotation). Peirce adds a third, the way of information, including change of information, to integrate the other two approaches into a unified whole.[149] For example, because of the equation above, if a term's total amount of information stays the same, then the more that the term 'intends' or signifies about objects, the fewer are the objects to which the term 'extends' or applies.

Determination. A sign depends on its object in such a way as to represent its object—the object enables and, in a sense, determines the sign. A physically causal sense of this stands out when a sign consists in an indicative reaction. The interpretant depends likewise on both the sign and the object—an object determines a sign to determine an interpretant. But this determination is not a succession of dyadic events, like a row of toppling dominoes; sign determination is triadic. For example, an interpretant does not merely represent something which represented an object; instead an interpretant represents something as a sign representing the object. The object (be it a quality or fact or law or even fictional) determines the sign to an interpretant through one's collateral experience[150] with the object, in which the object is found or from which it is recalled, as when a sign consists in a chance semblance of an absent object.
Peirce used the word "determine" not in a strictly deterministic sense, but in a sense of "specializes", bestimmt,[151] involving variable amount, like an influence.[152] Peirce came to define representation and interpretation in terms of (triadic) determination.[153] The object determines the sign to determine another sign—the interpretant—to be related to the object as the sign is related to the object, hence the interpretant, fulfilling its function as sign of the object, determines a further interpretant sign. The process is logically structured to perpetuate itself, and is definitive of sign, object, and interpretant in general.[152]

Peirce held there are exactly three basic elements in semiosis (sign action): the sign, its object, and its interpretant. Some of the understanding needed by the mind depends on familiarity with the object. To know what a given sign denotes, the mind needs some experience of that sign's object, experience outside of, and collateral to, that sign or sign system. In that context Peirce speaks of collateral experience, collateral observation, collateral acquaintance, all in much the same terms.[150]

Among Peirce's many sign typologies, three stand out, interlocked. The first typology depends on the sign itself, the second on how the sign stands for its denoted object, and the third on how the sign stands for its object to its interpretant. Also, each of the three typologies is a three-way division, a trichotomy, via Peirce's three phenomenological categories: (1) quality of feeling, (2) reaction, resistance, and (3) representation, mediation.[157]

I. Qualisign, sinsign, legisign (also called tone, token, type, and also called potisign, actisign, famisign):[158] This typology classifies every sign according to the sign's own phenomenological category—the qualisign is a quality, a possibility, a "First"; the sinsign is a reaction or resistance, a singular object, an actual event or fact, a "Second"; and the legisign is a habit, a rule, a representational relation, a "Third".
II. Icon, index, symbol: This typology, the best known one, classifies every sign according to the category of the sign's way of denoting its object—the icon (also called semblance or likeness) by a quality of its own, the index by factual connection to its object, and the symbol by a habit or rule for its interpretant.

III. Rheme, dicisign, argument (also called sumisign, dicisign, suadisign, also seme, pheme, delome,[158] and regarded as very broadened versions of the traditional term, proposition, argument): This typology classifies every sign according to the category which the interpretant attributes to the sign's way of denoting its object—the rheme, for example a term, is a sign interpreted to represent its object in respect of quality; the dicisign, for example a proposition, is a sign interpreted to represent its object in respect of fact; and the argument is a sign interpreted to represent its object in respect of habit or law. This is the culminating typology of the three, where the sign is understood as a structural element of inference.

Every sign belongs to one class or another within (I) and within (II) and within (III). Thus each of the three typologies is a three-valued parameter for every sign. The three parameters are not independent of each other; many co-classifications are absent, for reasons pertaining to the lack of either habit-taking or singular reaction in a quality, and the lack of habit-taking in a singular reaction. The result is not 27 but instead ten classes of signs fully specified at this level of analysis.

Borrowing a brace of concepts from Aristotle, Peirce examined three basic modes of inference—abduction, deduction, and induction—in his "critique of arguments" or "logic proper". Peirce also called abduction "retroduction", "presumption", and, earliest of all, "hypothesis". He characterized it as guessing and as inference to an explanatory hypothesis.
He sometimes expounded the modes of inference by transformations of the categorical syllogism Barbara (AAA), for example in "Deduction, Induction, and Hypothesis" (1878).[159] He does this by rearranging the rule (Barbara's major premise), the case (Barbara's minor premise), and the result (Barbara's conclusion):

Deduction.
Rule: All the beans from this bag are white.
Case: These beans are beans from this bag.
∴ Result: These beans are white.

Induction.
Case: These beans are [randomly selected] from this bag.
Result: These beans are white.
∴ Rule: All the beans from this bag are white.

Hypothesis (Abduction).
Rule: All the beans from this bag are white.
Result: These beans [oddly] are white.
∴ Case: These beans are from this bag.

In 1883, in "A Theory of Probable Inference" (Studies in Logic), Peirce equated hypothetical inference with the induction of characters of objects (as he had done in effect before[125]). Eventually dissatisfied, by 1900 he distinguished them once and for all and also wrote that he now took the syllogistic forms and the doctrine of logical extension and comprehension as being less basic than he had thought. In 1903 he presented the following logical form for abductive inference:[160]

The surprising fact, C, is observed;
But if A were true, C would be a matter of course;
Hence, there is reason to suspect that A is true.

The logical form does not also cover induction, since induction neither depends on surprise nor proposes a new idea for its conclusion. Induction seeks facts to test a hypothesis; abduction seeks a hypothesis to account for facts.
"Deduction proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that something may be."[161] Peirce did not remain quite convinced that one logical form covers all abduction.[162] In his methodeutic or theory of inquiry (see below), he portrayed abduction as an economic initiative to further inference and study, and portrayed all three modes as clarified by their coordination in essential roles in inquiry: hypothetical explanation, deductive prediction, inductive testing.

Peirce divided metaphysics into (1) ontology or general metaphysics, (2) psychical or religious metaphysics, and (3) physical metaphysics. On the issue of universals, Peirce was a scholastic realist, declaring the reality of generals as early as 1868.[163] According to Peirce, the facts of his category of "thirdness", the more general facts about the world, are extra-mental realities. Regarding modalities (possibility, necessity, etc.), he came in later years to regard himself as having wavered earlier as to just how positively real the modalities are. In his 1897 "The Logic of Relatives" he wrote:

I formerly defined the possible as that which in a given state of information (real or feigned) we do not know not to be true. But this definition today seems to me only a twisted phrase which, by means of two negatives, conceals an anacoluthon. We know in advance of experience that certain things are not true, because we see they are impossible.
Peirce retained, as useful for some purposes, the definitions in terms of information states, but insisted that the pragmaticist is committed to a strong modal realism by conceiving of objects in terms of predictive general conditional propositions about how they would behave under certain circumstances.[164]

Continuity and synechism are central in Peirce's philosophy: "I did not at first suppose that it was, as I gradually came to find it, the master-Key of philosophy".[165]

From a mathematical point of view, he embraced infinitesimals and worked long on the mathematics of continua. He long held that the real numbers constitute a pseudo-continuum;[166] that a true continuum is the real subject matter of analysis situs (topology); and that a true continuum of instants exceeds—and within any lapse of time has room for—any aleph number (any infinite multitude, as he called it) of instants.[167]

In 1908 Peirce wrote that he found that a true continuum might have or lack such room. Jérôme Havenel (2008): "It is on 26 May 1908, that Peirce finally gave up his idea that in every continuum there is room for whatever collection of any multitude.
From now on, there are different kinds of continua, which have different properties."[168]

Peirce believed in God, and characterized such belief as founded in an instinct explorable in musing over the worlds of ideas, brute facts, and evolving habits—and it is a belief in God not as an actual or existent being (in Peirce's sense of those words), but all the same as a real being.[169] In "A Neglected Argument for the Reality of God" (1908),[118] Peirce sketches, for God's reality, an argument to a hypothesis of God as the Necessary Being, a hypothesis which he describes in terms of how it would tend to develop and become compelling in musement and inquiry by a normal person who is led, by the hypothesis, to consider as being purposed the features of the worlds of ideas, brute facts, and evolving habits (for example scientific progress), such that the thought of such purposefulness will "stand or fall with the hypothesis". Meanwhile, according to Peirce, the hypothesis, in supposing an "infinitely incomprehensible" being, starts off at odds with its own nature as a purportively true conception, and so, no matter how much the hypothesis grows, it both (A) inevitably regards itself as partly true, partly vague, and as continuing to define itself without limit, and (B) inevitably has God appearing likewise vague but growing, though God as the Necessary Being is not vague or growing; but the hypothesis will hold it to be more false to say the opposite, that God is purposeless. Peirce also argued that the will is free[170] and (see Synechism) that there is at least an attenuated kind of immortality.
Peirce held the view, which he called objective idealism, that "matter is effete mind, inveterate habits becoming physical laws".[171] Peirce observed that "Berkeley's metaphysical theories have at first sight an air of paradox and levity very unbecoming to a bishop".[172]

Peirce asserted the reality of (1) "absolute chance" or randomness (his tychist view), (2) "mechanical necessity" or physical laws (anancist view), and (3) what he called the "law of love" (agapist view), echoing his categories Firstness, Secondness, and Thirdness, respectively.[99] He held that fortuitous variation (which he also called "sporting"), mechanical necessity, and creative love are the three modes of evolution (modes called "tychasm", "anancasm", and "agapasm")[173] of the cosmos and its parts. He found his conception of agapasm embodied in Lamarckian evolution; the overall idea in any case is that of evolution tending toward an end or goal, and it could also be the evolution of a mind or a society; it is the kind of evolution which manifests workings of mind in some general sense. He said that overall he was a synechist, holding with reality of continuity,[99] especially of space, time, and law.[174]

Peirce outlined two fields, "Cenoscopy" and "Science of Review", both of which he called philosophy. Both included philosophy about science. In 1903 he arranged them from more to less theoretically basic.[101] Peirce placed, within Science of Review, the work and theory of classifying the sciences (including mathematics and philosophy). His classifications, on which he worked for many years, draw on argument and wide knowledge, and are of interest both as a map for navigating his philosophy and as an accomplished polymath's survey of research in his time.

Consequently, to discover is simply to expedite an event that would occur sooner or later, if we had not troubled ourselves to make the discovery. Consequently, the art of discovery is purely a question of economics.
The economics of research is, so far as logic is concerned, the leading doctrine with reference to the art of discovery. Consequently, the conduct of abduction, which is chiefly a question of heuretic and is the first question of heuretic, is to be governed by economical considerations. Thus, twenty skillful hypotheses will ascertain what 200,000 stupid ones might fail to do.

Now logical terms are of three grand classes. The first embraces those whose logical form involves only the conception of quality, and which therefore represent a thing simply as "a —." These discriminate objects in the most rudimentary way, which does not involve any consciousness of discrimination. They regard an object as it is in itself as such (quale); for example, as horse, tree, or man. These are absolute terms. (Peirce, 1870. But also see "Quale-Consciousness", 1898, in CP 6.222–237.)

... death makes the number of our risks, the number of our inferences, finite, and so makes their mean result uncertain. The very idea of probability and of reasoning rests on the assumption that this number is indefinitely great. ... logicality inexorably requires that our interests shall not be limited. ... Logic is rooted in the social principle.

I define a Sign as anything which is so determined by something else, called its Object, and so determines an effect upon a person, which effect I call its Interpretant, that the latter is thereby mediately determined by the former. My insertion of "upon a person" is a sop to Cerberus, because I despair of making my own broader conception understood. I will also take the liberty of substituting "reality" for "existence." This is perhaps overscrupulosity; but I myself always use exist in its strict philosophical sense of "react with the other like things in the environment." Of course, in that sense, it would be fetichism to say that God "exists." The word "reality," on the contrary, is used in ordinary parlance in its correct philosophical sense. [....]
I define the real as that which holds its characters on such a tenure that it makes not the slightest difference what any man or men may have thought them to be, or ever will have thought them to be, here using thought to include imagining, opining, and willing (as long as forcible means are not used); but the real thing's characters will remain absolutely untouched.
https://en.wikipedia.org/wiki/Charles_Sanders_Peirce
Rule 30 is an elementary cellular automaton introduced by Stephen Wolfram in 1983.[2] Using Wolfram's classification scheme, Rule 30 is a Class III rule, displaying aperiodic, chaotic behaviour. This rule is of particular interest because it produces complex, seemingly random patterns from simple, well-defined rules. Because of this, Wolfram believes that Rule 30, and cellular automata in general, are the key to understanding how simple rules produce complex structures and behaviour in nature. For instance, a pattern resembling Rule 30 appears on the shell of the widespread cone snail species Conus textile. Rule 30 has also been used as a random number generator in Mathematica,[3] and has also been proposed as a possible stream cipher for use in cryptography.[4][5]

Rule 30 is so named because 30 is the smallest Wolfram code which describes its rule set (as described below). The mirror image, complement, and mirror complement of Rule 30 have Wolfram codes 86, 135, and 149, respectively.

In all of Wolfram's elementary cellular automata, an infinite one-dimensional array of cellular automaton cells with only two states is considered, with each cell in some initial state. At discrete time intervals, every cell spontaneously changes state based on its current state and the state of its two neighbors. For Rule 30, the rule set which governs the next state of the automaton is:

current pattern:            111  110  101  100  011  010  001  000
new state for center cell:   0    0    0    1    1    1    1    0

If the left, center, and right cells are denoted (p, q, r), then the corresponding formula for the next state of the center cell can be expressed as p XOR (q OR r). It is called Rule 30 because in binary, 00011110₂ = 30.

The following diagram shows the pattern created, with cells colored based on the previous state of their neighborhood. Darker colors represent "1" and lighter colors represent "0". Time increases down the vertical axis. The following pattern emerges from an initial state in which a single cell with state 1 (shown as black) is surrounded by cells with state 0 (white).
Rule 30 cellular automaton

Here, the vertical axis represents time and any horizontal cross-section of the image represents the state of all the cells in the array at a specific point in the pattern's evolution. Several motifs are present in this structure, such as the frequent appearance of white triangles and a well-defined striped pattern on the left side; however the structure as a whole has no discernible pattern. The number of black cells at generation n is approximately n.[citation needed]

Rule 30 meets rigorous definitions of chaos proposed by Devaney and Knudson. In particular, according to Devaney's criteria, Rule 30 displays sensitive dependence on initial conditions (two initial configurations that differ only in a small number of cells rapidly diverge), its periodic configurations are dense in the space of all configurations, according to the Cantor topology on the space of configurations (there is a periodic configuration with any finite pattern of cells), and it is mixing (for any two finite patterns of cells, there is a configuration containing one pattern that eventually leads to a configuration containing the other pattern). According to Knudson's criteria, it displays sensitive dependence and there is a dense orbit (an initial configuration that eventually displays any finite pattern of cells). Both of these characterizations of the rule's chaotic behavior follow from a simpler and easy to verify property of Rule 30: it is left permutative, meaning that if two configurations C and D differ in the state of a single cell at position i, then after a single step the new configurations will differ at cell i + 1.[6]

As is apparent from the image above, Rule 30 generates seeming randomness despite the lack of anything that could reasonably be considered random input.
Stephen Wolfram proposed using its center column as a pseudorandom number generator (PRNG); it passes many standard tests for randomness, and Wolfram previously used this rule in the Mathematica product for creating random integers.[7]

Sipper and Tomassini have shown that as a random number generator Rule 30 exhibits poor behavior on a chi squared test when applied to all the rule columns as compared to other cellular automaton-based generators.[8] The authors also expressed their concern that "The relatively low results obtained by the rule 30 CA may be due to the fact that we considered N random sequences generated in parallel, rather than the single one considered by Wolfram."[9]

The Cambridge North railway station is decorated with architectural panels displaying the evolution of Rule 30 (or equivalently under black-white reversal, Rule 135).[10] The design was described by its architect as inspired by Conway's Game of Life, a different cellular automaton studied by Cambridge mathematician John Horton Conway, but is not actually based on Life.[11][12]

The state update can be done quickly by bitwise operations, if the cell values are represented by the bits within one (or more) computer words.
https://en.wikipedia.org/wiki/Rule_30
The Rule 110 cellular automaton (often called simply Rule 110)[a] is an elementary cellular automaton with interesting behavior on the boundary between stability and chaos. In this respect, it is similar to Conway's Game of Life. Like Life, Rule 110 with a particular repeating background pattern is known to be Turing complete.[2] This implies that, in principle, any calculation or computer program can be simulated using this automaton.

In an elementary cellular automaton, a one-dimensional pattern of 0s and 1s evolves according to a simple set of rules. Whether a point in the pattern will be 0 or 1 in the new generation depends on its current value, as well as on those of its two neighbors. The Rule 110 automaton has the following set of rules:

current pattern:            111  110  101  100  011  010  001  000
new state for center cell:   0    1    1    0    1    1    1    0

The name "Rule 110" derives from the fact that this rule can be summarized in the binary sequence 01101110; interpreted as a binary number, this corresponds to the decimal value 110. This is the Wolfram code naming scheme.

In 2004, Matthew Cook published a proof that Rule 110 with a particular repeating background pattern is Turing complete, i.e., capable of universal computation, which Stephen Wolfram had conjectured in 1985.[2] Cook presented his proof at the Santa Fe Institute conference CA98 before publication of Wolfram's book A New Kind of Science. This resulted in a legal affair based on a non-disclosure agreement with Wolfram Research.[3] Wolfram Research blocked publication of Cook's proof for several years.[4]

Among the 88 possible unique elementary cellular automata, Rule 110 is the only one for which Turing completeness has been directly proven, although proofs for several similar rules follow as simple corollaries (e.g. Rule 124, which is the horizontal reflection of Rule 110). Rule 110 is arguably the simplest known Turing complete system.[2][5]

Rule 110, like the Game of Life, exhibits what Wolfram calls "Class 4 behavior", which is neither completely stable nor completely chaotic.
Localized structures appear and interact in complex ways.[6]

Matthew Cook proved Rule 110 capable of supporting universal computation by successively emulating cyclic tag systems, then 2-tag systems, and then Turing machines. The final stage has exponential time overhead because the Turing machine's tape is encoded with a unary numeral system. Neary and Woods (2006) presented a different construction that replaces 2-tag systems with clockwise Turing machines and has polynomial overhead.[7]

Matthew Cook presented his proof of the universality of Rule 110 at a Santa Fe Institute conference, held before the publication of A New Kind of Science. Wolfram Research claimed that this presentation violated Cook's nondisclosure agreement with his employer, and obtained a court order excluding Cook's paper from the published conference proceedings. The existence of Cook's proof nevertheless became known. Interest in his proof stemmed not so much from its result as from its methods, specifically from the technical details of its construction.[8] The character of Cook's proof differs considerably from the discussion of Rule 110 in A New Kind of Science. Cook has since written a paper setting out his complete proof.[2]

Cook proved that Rule 110 was universal (or Turing complete) by showing it was possible to use the rule to emulate another computational model, the cyclic tag system, which is known to be universal. He first isolated a number of spaceships, self-perpetuating localized patterns, that could be constructed on an infinitely repeating pattern in a Rule 110 universe. He then devised a way for combinations of these structures to interact in a manner that could be exploited for computation.

The function of the universal machine in Rule 110 requires a finite number of localized patterns to be embedded within an infinitely repeating background pattern. The background pattern is fourteen cells wide and repeats itself exactly every seven iterations. The pattern is 00010011011111.
Three localized patterns are of particular importance in the Rule 110 universal machine. They are shown in the image below, surrounded by the repeating background pattern.

The leftmost structure shifts to the right two cells and repeats every three generations. It comprises the sequence 0001110111 surrounded by the background pattern given above, as well as two different evolutions of this sequence. In the figures, time elapses from top to bottom: the top line represents the initial state, and each following line the state at the next time.

The center structure shifts left eight cells and repeats every thirty generations. It comprises the sequence 1001111 surrounded by the background pattern given above, as well as twenty-nine different evolutions of this sequence.

The rightmost structure remains stationary and repeats every seven generations. It comprises the sequence 111 surrounded by the background pattern given above, as well as five different evolutions of this sequence.

Below is an image showing the first two structures passing through each other without interacting other than by translation (left), and interacting to form the third structure (right).

There are numerous other spaceships in Rule 110, but they do not feature as prominently in the universality proof.

The cyclic tag system machinery has three main components: the stationary data string, the left-moving production rules, and the right-moving clock pulses. The initial spacing between these components is of utmost importance. In order for the cellular automaton to implement the cyclic tag system, the automaton's initial conditions must be carefully selected so that the various localized structures contained therein interact in a highly ordered way.

The data string in the cyclic tag system is represented by a series of stationary repeating structures of the type shown above. Varying amounts of horizontal space between these structures serve to differentiate 1 symbols from 0 symbols.
These symbols represent the word on which the cyclic tag system is operating, and the first such symbol is destroyed upon consideration of every production rule. When this leading symbol is a 1, new symbols are added to the end of the string; when it is 0, no new symbols are added. The mechanism for achieving this is described below.

Entering from the right are a series of left-moving structures of the type shown above, separated by varying amounts of horizontal space. Large numbers of these structures are combined with different spacings to represent 0s and 1s in the cyclic tag system's production rules. Because the tag system's production rules are known at the time of creation of the program, and infinitely repeating, the patterns of 0s and 1s at the initial condition can be represented by an infinitely repeating string. Each production rule is separated from the next by another structure known as a rule separator (or block separator), which moves towards the left at the same rate as the encoding of the production rules.

When a left-moving rule separator encounters a stationary symbol in the cyclic tag system's data string, it causes the first symbol it encounters to be destroyed. However, its subsequent behavior varies depending on whether the symbol encoded by the string had been a 0 or a 1. If a 0, the rule separator changes into a new structure which blocks the incoming production rule. This new structure is destroyed when it encounters the next rule separator. If, on the other hand, the symbol in the string was a 1, the rule separator changes into a new structure which admits the incoming production rule. Although the new structure is again destroyed when it encounters the next rule separator, it first allows a series of structures to pass through towards the left. These structures are then made to append themselves to the end of the cyclic tag system's data string.
This final transformation is accomplished by means of a series of infinitely repeating, right-moving clock pulses in the right-moving pattern shown above. The clock pulses transform incoming left-moving 1 symbols from a production rule into stationary 1 symbols of the data string, and incoming 0 symbols from a production rule into stationary 0 symbols of the data string. The figure above is the schematic diagram of the reconstruction of a cyclic tag system in Rule 110.
https://en.wikipedia.org/wiki/Rule_110
Rule 184 is a one-dimensional binary cellular automaton rule, notable for solving the majority problem as well as for its ability to simultaneously describe several, seemingly quite different, particle systems. The apparent contradiction between these descriptions is resolved by different ways of associating features of the automaton's state with particles.

The name of Rule 184 is a Wolfram code that defines the evolution of its states. The earliest research on Rule 184 is by Li (1987) and Krug & Spohn (1988). In particular, Krug and Spohn already describe all three types of particle system modeled by Rule 184.[2]

A state of the Rule 184 automaton consists of a one-dimensional array of cells, each containing a binary value (0 or 1). In each step of its evolution, the Rule 184 automaton applies the following rule to each of the cells in the array, simultaneously for all cells, to determine the new state of the cell:[3]

current pattern:            111  110  101  100  011  010  001  000
new state for center cell:   1    0    1    1    1    0    0    0

An entry in this table defines the new state of each cell as a function of the previous state and the previous values of the neighboring cells on either side. The name for this rule, Rule 184, is the Wolfram code describing the state table above: the bottom row of the table, 10111000, when viewed as a binary number, is equal to the decimal number 184.[4]

The rule set for Rule 184 may also be described intuitively, in several different ways. From the descriptions of the rules above, two important properties of its dynamics may immediately be seen. First, in Rule 184, for any finite set of cells with periodic boundary conditions, the number of 1s and the number of 0s in a pattern remains invariant throughout the pattern's evolution.
Rule 184 and its reflection are the only nontrivial[7] elementary cellular automata to have this property of number conservation.[8] Similarly, if the density of 1s is well-defined for an infinite array of cells, it remains invariant as the automaton carries out its steps.[9] And second, although Rule 184 is not symmetric under left-right reversal, it does have a different symmetry: reversing left and right and at the same time swapping the roles of the 0 and 1 symbols produces a cellular automaton with the same update rule.

Patterns in Rule 184 typically quickly stabilize, either to a pattern in which the cell states move in lockstep one position leftwards at each step, or to a pattern that moves one position rightwards at each step. Specifically, if the initial density of cells with state 1 is less than 50%, the pattern stabilizes into clusters of cells in state 1, spaced two units apart, with the clusters separated by blocks of cells in state 0. Patterns of this type move rightwards. If, on the other hand, the initial density is greater than 50%, the pattern stabilizes into clusters of cells in state 0, spaced two units apart, with the clusters separated by blocks of cells in state 1, and patterns of this type move leftwards. If the density is exactly 50%, the initial pattern stabilizes (more slowly) to a pattern that can equivalently be viewed as moving either leftwards or rightwards at each step: an alternating sequence of 0s and 1s.[10]

The majority problem is the problem of constructing a cellular automaton that, when run on any finite set of cells, can compute the value held by a majority of its cells. In a sense, Rule 184 solves this problem, as follows.
If Rule 184 is run on a finite set of cells with periodic boundary conditions, with an unequal number of 0s and 1s, then each cell will eventually see two consecutive states of the majority value infinitely often, but will see two consecutive states of the minority value only finitely many times.[11] The majority problem cannot be solved perfectly if it is required that all cells eventually stabilize to the majority state,[12] but the Rule 184 solution avoids this impossibility result by relaxing the criterion by which the automaton recognizes a majority. If one interprets each 1-cell in Rule 184 as containing a particle, these particles behave in many ways like automobiles in a single lane of traffic: they move forward at a constant speed if there is open space in front of them, and otherwise they stop. Traffic models such as Rule 184 and its generalizations that discretize both space and time are commonly called particle-hopping models.[13] Although very primitive, the Rule 184 model of traffic flow already predicts some of the familiar emergent features of real traffic: clusters of freely moving cars separated by stretches of open road when traffic is light, and waves of stop-and-go traffic when it is heavy.[14] It is difficult to pinpoint the first use of Rule 184 for traffic flow simulation, in part because the focus of research in this area has been less on achieving the greatest level of mathematical abstraction and more on verisimilitude: even the earliest papers on cellular-automaton-based traffic flow simulation typically make the model more complex in order to simulate real traffic more accurately. Nevertheless, Rule 184 is fundamental to traffic simulation by cellular automata. Wang, Kwong & Hui (1998), for instance, state that "the basic cellular automaton model describing a one-dimensional traffic flow problem is rule 184." Nagel (1996) writes "Much work using CA models for traffic is based on this model."
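The majority behavior described above, where the minority pair eventually disappears while the majority pair keeps recurring, can be demonstrated with a short simulation (a sketch with our own function names; the 7-cell starting row is an arbitrary example with a majority of 1s):

```c
#include <string.h>

/* One Rule 184 step on a cyclic row of '0'/'1' characters. */
static void step(const char *cur, char *next, int n)
{
    for (int i = 0; i < n; i++) {
        int idx = ((cur[(i + n - 1) % n] - '0') << 2)
                | ((cur[i] - '0') << 1)
                |  (cur[(i + 1) % n] - '0');
        next[i] = ((184 >> idx) & 1) ? '1' : '0';
    }
    next[n] = '\0';
}

/* 1 if some cyclically adjacent pair of cells both hold v. */
static int has_pair(const char *s, int n, char v)
{
    for (int i = 0; i < n; i++)
        if (s[i] == v && s[(i + 1) % n] == v)
            return 1;
    return 0;
}

/* Evolve the row s (length n <= 63) in place for t steps. */
static void run(char *s, int n, int t)
{
    char buf[64];
    for (int k = 0; k < t; k++) {
        step(s, buf, n);
        memcpy(s, buf, (size_t)n + 1);
    }
}
```

After the row stabilizes, pairs of the minority value never occur again, which is the relaxed sense in which Rule 184 reports a majority.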
Several authors describe one-dimensional models with vehicles moving at multiple speeds; such models degenerate to Rule 184 in the single-speed case.[15] Gaylord & Nishidate (1996) extend the Rule 184 dynamics to two-lane highway traffic with lane changes; their model shares with Rule 184 the property that it is symmetric under simultaneous left–right and 0–1 reversal. Biham, Middleton & Levine (1992) describe a two-dimensional city grid model in which the dynamics of individual lanes of traffic is essentially that of Rule 184.[16] For an in-depth survey of cellular automaton traffic modeling and associated statistical mechanics, see Maerivoet & De Moor (2005) and Chowdhury, Santen & Schadschneider (2000). When viewing Rule 184 as a traffic model, it is natural to consider the average speed of the vehicles. When the density of traffic is less than 50%, this average speed is simply one unit of distance per unit of time: after the system stabilizes, no car ever slows. However, when the density is a number ρ greater than 1/2, the average speed of traffic is (1 − ρ)/ρ. Thus, the system exhibits a second-order kinetic phase transition at ρ = 1/2. When Rule 184 is interpreted as a traffic model and started from a random configuration whose density is at this critical value ρ = 1/2, the average speed approaches its stationary limit as the square root of the number of steps. In contrast, for random configurations whose density is not at the critical value, the approach to the limiting speed is exponential.[17] As shown in the figure, and as originally described by Krug & Spohn (1988),[18] Rule 184 may be used to model deposition of particles onto a surface. In this model, one has a set of particles that occupy a subset of the positions in a square lattice oriented diagonally (the darker particles in the figure).
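The average-speed formula above can be checked on a deterministic jammed lane (our own example: 75 cars on 100 cells, density ρ = 3/4, so the predicted speed is (1 − ρ)/ρ = 1/3). A car moves exactly when the cell ahead is empty, so the number of moving cars per step equals the number of cyclic "10" pairs.

```c
/* One Rule 184 step on a cyclic row of '0'/'1' characters;
   1 = car, 0 = empty road. */
static void step(const char *cur, char *next, int n)
{
    for (int i = 0; i < n; i++) {
        int idx = ((cur[(i + n - 1) % n] - '0') << 2)
                | ((cur[i] - '0') << 1)
                |  (cur[(i + 1) % n] - '0');
        next[i] = ((184 >> idx) & 1) ? '1' : '0';
    }
    next[n] = '\0';
}

/* Number of cars that will move on the next step: cars with an
   empty cell immediately ahead. */
static int moving(const char *s, int n)
{
    int m = 0;
    for (int i = 0; i < n; i++)
        m += (s[i] == '1' && s[(i + 1) % n] == '0');
    return m;
}
```

On this jammed lane every hole stays isolated, so exactly 25 of the 75 cars move at every step: an average speed of 1/3, as the formula predicts.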
If a particle is present at some position of the lattice, the lattice positions below and to the right, and below and to the left of the particle must also be filled, so the filled part of the lattice extends infinitely downward to the left and right. The boundary between filled and unfilled positions (the thin black line in the figure) is interpreted as modeling a surface, onto which more particles may be deposited. At each time step, the surface grows by the deposition of new particles in each local minimum of the surface; that is, at each position where it is possible to add one new particle that has existing particles below it on both sides (the lighter particles in the figure). To model this process by Rule 184, observe that the boundary between filled and unfilled lattice positions can be marked by a polygonal line, the segments of which separate adjacent lattice positions and have slopes +1 and −1. Model a segment with slope +1 by an automaton cell with state 0, and a segment with slope −1 by an automaton cell with state 1. The local minima of the surface are the points where a segment of slope −1 lies to the left of a segment of slope +1; that is, in the automaton, a position where a cell with state 1 lies to the left of a cell with state 0. Adding a particle to that position corresponds to changing the states of these two adjacent cells from 1,0 to 0,1, thus advancing the polygonal line. This is exactly the behavior of Rule 184.[19] Related work on this model concerns deposition in which the arrival times of additional particles are random, rather than having particles arrive at all local minima simultaneously.[20] These stochastic growth processes can be modeled as an asynchronous cellular automaton. Ballistic annihilation describes a process by which moving particles and antiparticles annihilate each other when they collide.
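The equivalence claimed above for the deposition model, that simultaneously advancing every local minimum (rewriting each adjacent 1,0 pair of cells to 0,1) is exactly one Rule 184 step, can be checked directly (a sketch with our own function names; note that two "10" pairs can never overlap, so the simultaneous rewriting is well defined):

```c
#include <string.h>

/* One Rule 184 step via the Wolfram-code table. */
static void step_table(const char *cur, char *next, int n)
{
    for (int i = 0; i < n; i++) {
        int idx = ((cur[(i + n - 1) % n] - '0') << 2)
                | ((cur[i] - '0') << 1)
                |  (cur[(i + 1) % n] - '0');
        next[i] = ((184 >> idx) & 1) ? '1' : '0';
    }
    next[n] = '\0';
}

/* One deposition step: simultaneously rewrite every cyclic pair
   "10" (a local minimum of the surface) to "01". */
static void step_deposit(const char *cur, char *next, int n)
{
    memcpy(next, cur, (size_t)n + 1);
    for (int i = 0; i < n; i++)
        if (cur[i] == '1' && cur[(i + 1) % n] == '0') {
            next[i] = '0';
            next[(i + 1) % n] = '1';
        }
}
```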
In the simplest version of this process, the system consists of a single type of particle and antiparticle, moving at equal speeds in opposite directions in a one-dimensional medium.[21] This process can be modeled by Rule 184, as follows. The particles are modeled as points that are aligned, not with the cells of the automaton, but rather with the interstices between cells. Two consecutive cells that both have state 0 model a particle at the space between these two cells that moves rightwards one cell at each time step. Symmetrically, two consecutive cells that both have state 1 model an antiparticle that moves leftwards one cell at each time step. The remaining possibility is that two consecutive cells have differing states; this is interpreted as modeling a background material without any particles in it, through which the particles move. With this interpretation, the particles and antiparticles interact by ballistic annihilation: when a rightwards-moving particle and a leftwards-moving antiparticle meet, the result is a region of background from which both particles have vanished, without any effect on any other nearby particles.[22] The behavior of certain other systems, such as one-dimensional cyclic cellular automata, can also be described in terms of ballistic annihilation.[23] There is a technical restriction on the particle positions for the ballistic annihilation view of Rule 184 that does not arise in these other systems, stemming from the alternating pattern of the background: in the particle system corresponding to a Rule 184 state, if two consecutive particles are both of the same type they must be an odd number of cells apart, while if they are of opposite types they must be an even number of cells apart. However, this parity restriction does not play a role in the statistical behavior of this system.
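This particle reading can be illustrated concretely (our own example: on a cyclic row of eight cells, "00101011" contains one "00" particle and one "11" antiparticle; after they collide, only alternating background remains):

```c
/* One Rule 184 step on a cyclic row of '0'/'1' characters. In the
   particle reading, "00" is a right-moving particle, "11" a
   left-moving antiparticle, and alternating cells are background. */
static void step(const char *cur, char *next, int n)
{
    for (int i = 0; i < n; i++) {
        int idx = ((cur[(i + n - 1) % n] - '0') << 2)
                | ((cur[i] - '0') << 1)
                |  (cur[(i + 1) % n] - '0');
        next[i] = ((184 >> idx) & 1) ? '1' : '0';
    }
    next[n] = '\0';
}
```

After three steps the particle and antiparticle have met and annihilated, leaving a purely alternating row.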
Pivato (2007) uses a similar but more complicated particle-system view of Rule 184: he not only views alternating 0–1 regions as background, but also considers regions consisting solely of a single state to be background as well. Based on this view he describes seven different particles formed by boundaries between regions, and classifies their possible interactions. See Chopard & Droz (1998, pp. 188–190) for a more general survey of the cellular automaton models of annihilation processes. In his book A New Kind of Science, Stephen Wolfram points out that Rule 184, when run on patterns with density 50%, can be interpreted as parsing the context-free language describing strings formed from nested parentheses. This interpretation is closely related to the ballistic annihilation view of Rule 184: in Wolfram's interpretation, an open parenthesis corresponds to a left-moving particle while a close parenthesis corresponds to a right-moving particle.[24]
https://en.wikipedia.org/wiki/Rule_184
In computer programming, the exclusive or swap (sometimes shortened to XOR swap) is an algorithm that uses the exclusive or bitwise operation to swap the values of two variables without using the temporary variable which is normally required. The algorithm is primarily a novelty and a way of demonstrating properties of the exclusive or operation. It is sometimes discussed as a program optimization, but there are almost no cases where swapping via exclusive or provides a benefit over the standard, obvious technique. Conventional swapping requires the use of a temporary storage variable. Using the XOR swap algorithm, however, no temporary storage is needed. The algorithm is as follows:[1][2]

X := X XOR Y
Y := Y XOR X
X := X XOR Y

Since XOR is a commutative operation, either X XOR Y or Y XOR X can be used interchangeably in any of the foregoing three lines. Note that on some architectures the first operand of the XOR instruction specifies the target location at which the result of the operation is stored, preventing this interchangeability. The algorithm typically corresponds to three machine-code instructions, represented by corresponding pseudocode and assembly instructions in the three rows of the following table:

X := X XOR Y    XR R1,R2    xor eax, ebx    xor x10, x11
Y := Y XOR X    XR R2,R1    xor ebx, eax    xor x11, x10
X := X XOR Y    XR R1,R2    xor eax, ebx    xor x10, x11

In the above System/370 assembly code sample, R1 and R2 are distinct registers, and each XR operation leaves its result in the register named in the first argument. Using x86 assembly, values X and Y are in registers eax and ebx (respectively), and xor places the result of the operation in the first register. In RISC-V assembly, values X and Y are in registers X10 and X11, and xor places the result of the operation in the first register (as in x86). However, in the pseudocode or high-level-language version or implementation, the algorithm fails if x and y use the same storage location, since the value stored in that location will be zeroed out by the first XOR instruction and then remain zero; it will not be "swapped with itself". This is not the same as if x and y have the same values.
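The three assignments can be traced in C on two distinct variables initially holding A and B; the step comments follow from XOR's associativity and self-inverse property (a minimal sketch with our own function name, valid only for non-aliased operands):

```c
/* XOR swap of two distinct (non-aliased) variables. */
void xor_swap_values(unsigned int *x, unsigned int *y)
{
    *x ^= *y;   /* x == A ^ B            */
    *y ^= *x;   /* y == (A ^ B) ^ B == A */
    *x ^= *y;   /* x == (A ^ B) ^ A == B */
}
```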
The trouble only comes when x and y use the same storage location, in which case their values must already be equal. That is, if x and y use the same storage location, then the line:

X := X XOR Y

sets x to zero (because x = y, so X XOR Y is zero) and sets y to zero (since it uses the same storage location), causing x and y to lose their original values. The binary operation XOR over bit strings of length N exhibits the following properties (where ⊕ denotes XOR):[a]

L1. Commutativity: X ⊕ Y = Y ⊕ X
L2. Associativity: (X ⊕ Y) ⊕ Z = X ⊕ (Y ⊕ Z)
L3. Identity exists: there is a bit string of length N, 0 (all zeros), such that X ⊕ 0 = X for any X
L4. Each element is its own inverse: for each X, X ⊕ X = 0

Suppose that we have two distinct registers R1 and R2, with initial values A and B respectively. We perform the operations below in sequence, and reduce our results using the properties listed above. As XOR can be interpreted as binary addition and a pair of bits can be interpreted as a vector in a two-dimensional vector space over the field with two elements, the steps in the algorithm can be interpreted as multiplication by 2×2 matrices over the field with two elements. For simplicity, assume initially that x and y are each single bits, not bit vectors. For example, the step:

X := X XOR Y

which also has the implicit:

Y := Y

corresponds to the matrix (1 1; 0 1), writing (a b; c d) for the 2×2 matrix with rows (a, b) and (c, d). The sequence of operations is then expressed as the matrix product (1 1; 0 1)(1 0; 1 1)(1 1; 0 1) = (0 1; 1 0) (working with binary values, so 1 + 1 = 0), which expresses the elementary matrix of switching two rows (or columns) in terms of the transvections (shears) of adding one element to the other. To generalize to the case where X and Y are not single bits but bit vectors of length n, these 2×2 matrices are replaced by 2n×2n block matrices such as (I_n I_n; 0 I_n). These matrices are operating on values, not on variables (with storage locations), hence this interpretation abstracts away from issues of storage location and the problem of both variables sharing the same storage location.
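The matrix identity above can be verified mechanically over GF(2) (a sketch; the type Mat2 and the function mul2 are our own names):

```c
/* 2x2 matrices over GF(2). x ^= y is (1 1; 0 1), y ^= x is
   (1 0; 1 1); composing the three steps (applied right to left)
   should give the swap matrix (0 1; 1 0). */
typedef struct { int m[2][2]; } Mat2;

/* Matrix product with entries reduced mod 2. */
static Mat2 mul2(Mat2 a, Mat2 b)
{
    Mat2 c;
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            c.m[i][j] = (a.m[i][0] * b.m[0][j] + a.m[i][1] * b.m[1][j]) & 1;
    return c;
}
```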
A C function that implements the XOR swap algorithm first checks whether the addresses are distinct, using a guard clause to exit the function early if they are equal. Without that check, if they were equal, the algorithm would fold to a triple *x ^= *x, resulting in zero. The XOR swap algorithm can also be defined with a macro. On modern CPU architectures, the XOR technique can be slower than using a temporary variable to do swapping. At least on recent x86 CPUs, both by AMD and Intel, moving between registers regularly incurs zero latency (this is called MOV-elimination). Even if there is no architectural register available to use, the XCHG instruction will be at least as fast as the three XORs taken together. Another reason is that modern CPUs strive to execute instructions in parallel via instruction pipelines. In the XOR technique, the inputs to each operation depend on the results of the previous operation, so they must be executed in strictly sequential order, negating any benefits of instruction-level parallelism.[3] The XOR swap is also complicated in practice by aliasing. If an attempt is made to XOR-swap the contents of some location with itself, the result is that the location is zeroed out and its value lost. Therefore, XOR swapping must not be used blindly in a high-level language if aliasing is possible. This issue does not apply if the technique is used in assembly to swap the contents of two registers. Similar problems occur with call by name, as in Jensen's Device, where swapping i and A[i] via a temporary variable yields incorrect results due to the arguments being related: swapping via temp = i; i = A[i]; A[i] = temp changes the value of i in the second statement, which then causes the wrong array element A[i] to be assigned in the third statement. The underlying principle of the XOR swap algorithm can be applied to any operation meeting criteria L1 through L4 above.
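The C function described at the start of this section is not reproduced in this extract; a version consistent with that description (a guard clause on equal addresses, then the three XOR steps) would be:

```c
/* XOR swap with a guard clause: if x and y alias the same storage,
   the three XORs would zero it out, so return early instead. */
void XorSwap(int *x, int *y)
{
    if (x == y)     /* guard clause: aliased operands */
        return;
    *x ^= *y;
    *y ^= *x;
    *x ^= *y;
}
```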
Replacing XOR by addition and subtraction gives various slightly different, but largely equivalent, formulations. For example:[4]

X := X + Y
Y := X − Y
X := X − Y

Unlike the XOR swap, this variation requires that the underlying processor or programming language use a method such as modular arithmetic or bignums to guarantee that the computation of X + Y cannot cause an error due to integer overflow. Therefore, it is seen even more rarely in practice than the XOR swap. However, the implementation of AddSwap above in the C programming language always works even in case of integer overflow, since, according to the C standard, addition and subtraction of unsigned integers follow the rules of modular arithmetic, i.e. are done in the cyclic group Z/2^s Z, where s is the number of bits of unsigned int. Indeed, the correctness of the algorithm follows from the fact that the formulas (x + y) − y = x and (x + y) − ((x + y) − y) = y hold in any abelian group. This generalizes the proof for the XOR swap algorithm: XOR is both the addition and the subtraction in the abelian group (Z/2Z)^s (which is the direct sum of s copies of Z/2Z). This doesn't hold when dealing with the signed int type (the default for int). Signed integer overflow is undefined behavior in C, and thus modular arithmetic is not guaranteed by the standard, which may lead to incorrect results. The sequence of operations in AddSwap can be expressed via matrix multiplication as (1 −1; 0 1)(1 0; 1 −1)(1 1; 0 1) = (0 1; 1 0), writing (a b; c d) for the 2×2 matrix with rows (a, b) and (c, d).

On architectures lacking a dedicated swap instruction, the XOR swap algorithm is required for optimal register allocation, because it avoids the need for an extra temporary register. This is particularly important for compilers using static single assignment form for register allocation; these compilers occasionally produce programs that need to swap two registers when no registers are free.
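The AddSwap implementation referenced above is likewise not reproduced in this extract; a version consistent with the description, using unsigned int so that all arithmetic wraps modulo 2^s, would be (the aliasing guard mirrors the XOR version and is our addition):

```c
/* Swap via addition and subtraction. With unsigned int every
   operation is arithmetic modulo 2^s (s = bits in unsigned int),
   so wraparound on *x + *y is harmless. */
void AddSwap(unsigned int *x, unsigned int *y)
{
    if (x == y)          /* guard clause: aliased operands */
        return;
    *x = *x + *y;        /* x == A + B (mod 2^s)       */
    *y = *x - *y;        /* y == (A + B) - B == A      */
    *x = *x - *y;        /* x == (A + B) - A == B      */
}
```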
The XOR swap algorithm avoids the need to reserve an extra register or to spill any registers to main memory.[5] The addition/subtraction variant can also be used for the same purpose.[6] This method of register allocation is particularly relevant to GPU shader compilers. On modern GPU architectures, spilling variables is expensive due to limited memory bandwidth and high memory latency, while limiting register usage can improve performance due to dynamic partitioning of the register file. The XOR swap algorithm is therefore required by some GPU compilers.[7]
https://en.wikipedia.org/wiki/XOR_swap_algorithm
Kabbalah or Qabalah (/kəˈbɑːlə, ˈkæbələ/ kə-BAH-lə, KAB-ə-lə; Hebrew: קַבָּלָה‎, romanized: Qabbālā, lit. 'reception, tradition')[1][a] is an esoteric method, discipline and school of thought in Jewish mysticism.[2] It forms the foundation of mystical religious interpretations within Judaism.[2][3] A traditional Kabbalist is called a Mekubbal (מְקֻובָּל‎, Məqubbāl, 'receiver').[2] Jewish Kabbalists originally developed transmissions of the primary texts of Kabbalah within the realm of Jewish tradition[2][3] and often use classical Jewish scriptures to explain and demonstrate its mystical teachings. Kabbalists hold these teachings to define the inner meaning of both the Hebrew Bible and traditional rabbinic literature and their formerly concealed transmitted dimension, as well as to explain the significance of Jewish religious observances.[4] Historically, Kabbalah emerged from earlier forms of Jewish mysticism, in 12th- to 13th-century al-Andalus (Spain) and in Hakhmei Provence,[2][3] and was reinterpreted during the Jewish mystical renaissance in 16th-century Ottoman Palestine.[2] The Zohar, the foundational text of Kabbalah, was authored in the late 13th century, likely by Moses de León. Isaac Luria (16th century) is considered the father of contemporary Kabbalah; Lurianic Kabbalah was popularised in the form of Hasidic Judaism from the 18th century onwards.[2] During the 20th century, academic interest in Kabbalistic texts, led primarily by the Jewish historian Gershom Scholem, has inspired the development of historical research on Kabbalah in the field of Judaic studies.[5][6] Though minor works contribute to an understanding of the Kabbalah as an evolving tradition, the primary texts are the Bahir, Zohar, Pardes Rimonim, and Etz Chayim ('Ein Sof').[7] The early Hekhalot literature is acknowledged as ancestral to the sensibilities of this later flowering of the Kabbalah,[8] and more especially the Sefer Yetzirah is acknowledged as the antecedent from which all these books draw many of their formal inspirations.
The Sefer Yetzirah is a brief document of only a few pages, written many centuries before the high and late medieval works (sometime between 200 and 600 CE); detailing an alphanumeric vision of cosmology, it may be understood as a kind of prelude to the canon of Kabbalah.[7] The history of Jewish mysticism encompasses various forms of esoteric and spiritual practices aimed at understanding the divine and the hidden aspects of existence.[9][b] This mystical tradition has evolved significantly over millennia, influencing and being influenced by different historical, cultural, and religious contexts. Among the most prominent forms of Jewish mysticism is Kabbalah, which emerged in the 12th century and has since become a central component of Jewish mystical thought. Other notable early forms include prophetic and apocalyptic mysticism, which are evident in biblical and post-biblical texts. The roots of Jewish mysticism can be traced back to the biblical era, with prophetic figures such as Elijah and Ezekiel experiencing divine visions and encounters.[10] This tradition continued into the apocalyptic period, where texts like 1 Enoch and the Book of Daniel introduced complex angelology and eschatological themes.[11] The Hekhalot and Merkabah literature, dating from the 2nd century to the early medieval period, further developed these mystical themes, focusing on visionary ascents to the heavenly palaces and the divine chariot.[12] The medieval period saw the formalization of Kabbalah, particularly in southern France and Spain. Foundational texts such as the Bahir and the Zohar were composed during this time, laying the groundwork for later developments.[13] The Kabbalistic teachings of this era delved deeply into the nature of the divine, the structure of the universe, and the process of creation. Notable Kabbalists like Moses de León played crucial roles in disseminating these teachings, which were characterized by their profound symbolic and allegorical interpretations of the Torah.
In the early modern period, Lurianic Kabbalah, founded by Isaac Luria in the 16th century, introduced new metaphysical concepts such as Tzimtzum (divine contraction) and Tikkun (cosmic repair), which have had a lasting impact on Jewish thought.[14] The 18th century saw the rise of Hasidism, a movement that integrated Kabbalistic ideas into a popular, revivalist context, emphasizing personal mystical experience and the presence of the divine in everyday life.[15] According to the Zohar, a foundational text for kabbalistic thought,[16] Torah study can proceed along four levels of interpretation (exegesis).[17][18] These four levels are called pardes from their initial letters (PRDS, פַּרדֵס‎, 'orchard'). Kabbalah is considered by its followers as a necessary part of the study of Torah – the study of Torah (the Tanakh and rabbinic literature) being an inherent duty of observant Jews.[20] Modern academic-historical study of Jewish mysticism reserves the term kabbalah to designate the particular, distinctive doctrines that textually emerged fully expressed in the Middle Ages, as distinct from the earlier Merkabah mystical concepts and methods.[21] According to this descriptive categorization, both versions of Kabbalistic theory, the medieval-Zoharic and the early-modern Lurianic Kabbalah, together comprise the Theosophical tradition in Kabbalah, while the Meditative-Ecstatic Kabbalah incorporates a parallel, inter-related Medieval tradition.
A third tradition, related but more shunned, involves the magical aims of Practical Kabbalah. Moshe Idel, for example, writes that these three basic models can be discerned operating and competing throughout the whole history of Jewish mysticism, beyond the particular Kabbalistic background of the Middle Ages.[22] They can be readily distinguished by their basic intent with respect to God.[citation needed] According to Kabbalistic belief, early kabbalistic knowledge was transmitted orally by the Patriarchs, prophets, and sages, eventually to be "interwoven" into Jewish religious writings and culture.[25] According to this view, early kabbalah was, in around the 10th century BCE, an open knowledge practiced by over a million people in ancient Israel.[26][27] Foreign conquests drove the Jewish spiritual leadership of the time (the Sanhedrin) to hide the knowledge and make it secret, fearing that it might be misused if it fell into the wrong hands.[28] It is hard to clarify with any degree of certainty the exact concepts within kabbalah. There are several different schools of thought with very different outlooks; however, all are accepted as correct.[29] Modern halakhic authorities have tried to narrow the scope and diversity within kabbalah by restricting study to certain texts, notably the Zohar and the teachings of Isaac Luria as passed down through Hayyim ben Joseph Vital.[30] However, even this qualification does little to limit the scope of understanding and expression, as included in those works are commentaries on Abulafian writings, Sefer Yetzirah, Albotonian writings, and the Berit Menuhah,[31] which is known to the kabbalistic elect and which, as described more recently by Gershom Scholem, combined ecstatic with theosophical mysticism.
It is therefore important to bear in mind when discussing things such as the sephirot and their interactions that one is dealing with highly abstract concepts that at best can only be understood intuitively.[32] From the Renaissance onwards, Jewish Kabbalah texts entered non-Jewish culture, where they were studied and translated by Christian Hebraists and Hermetic occultists.[33] The syncretic traditions of Christian Cabala and Hermetic Qabalah developed independently of Judaic Kabbalah, reading the Jewish texts as universalist ancient wisdom preserved from the Gnostic traditions of antiquity.[34] Both adapted the Jewish concepts freely from their Jewish understanding, to merge with multiple other theologies, religious traditions and magical associations. With the decline of Christian Cabala in the Age of Reason, Hermetic Qabalah continued as a central underground tradition in Western esotericism. Through these non-Jewish associations with magic, alchemy and divination, Kabbalah acquired some popular occult connotations forbidden within Judaism, where Jewish Practical Kabbalah was a minor, permitted tradition restricted to a few elite practitioners. Today, many publications on Kabbalah belong to the non-Jewish New Age and occult traditions of Cabala, rather than giving an accurate picture of Judaic Kabbalah.[35] Instead, academic and traditional Jewish publications now translate and study Judaic Kabbalah for a wide readership. The definition of Kabbalah varies according to the tradition and aims of those following it.[36] According to its earliest and original usage in ancient Hebrew, it means 'reception' or 'tradition', and in this context it tends to refer to any sacred writing composed after (or otherwise outside of) the five books of the Torah.[37] After the Talmud is written, it refers to the Oral Law (both in the sense of the 'Talmud' itself and in the sense of continuing dialog and thought devoted to the scripture in every generation).[37] In the much later writings of Eleazar of Worms (c. 1350), it refers to theurgy or the conjuring of demons and angels by the invocation of their secret names.[37] The understanding of the word Kabbalah undergoes a transformation of its meaning in medieval Judaism, in the books which are now primarily referred to as 'the Kabbalah': the Bahir, the Zohar, Etz Hayim, etc.[37] In these books the word Kabbalah is used in manifold new senses. During this major phase it refers to the continuity of revelation in every generation, on the one hand, while also suggesting the necessity of revelation to remain concealed and secret or esoteric in every period by formal requirements native to sacred truth.[37] When the term Kabbalah is used to refer to a canon of secret mystical books by medieval Jews, these aforementioned books and other works in their constellation are the books and the literary sensibility to which the term refers.[37] Even later the word is adapted or appropriated in Western esotericism (Christian Kabbalah and Hermetic Qabalah), where it influences the tenor and aesthetics of European occultism practiced by gentiles or non-Jews. But above all, Jewish Kabbalah is a set of sacred and magical teachings meant to explain the relationship between the unchanging, eternal God – the mysterious Ein Sof (אֵין סוֹף‎, 'The Infinite')[38][39] – and the mortal, finite universe (God's creation).[2][38] The nature of the divine prompted kabbalists to envision two aspects of God: (a) God in essence, absolutely transcendent, unknowable, limitless divine simplicity beyond revelation, and (b) God in manifestation, the revealed persona of God through which he creates and sustains and relates to humankind. Kabbalists speak of the first as Ein/Ayn Sof (אין סוף, "the infinite/endless", literally "there is no end"). Of the impersonal Ein Sof nothing can be grasped.
However, the second aspect, the divine emanations, accessible to human perception and dynamically interacting throughout spiritual and physical existence, reveals the divine immanently and is bound up in the life of man. Kabbalists believe that these two aspects are not contradictory but complement one another, the emanations mystically revealing the concealed mystery from within the Godhead.[citation needed] As a term describing the Infinite Godhead beyond Creation, Kabbalists viewed the Ein Sof itself as too sublime to be referred to directly in the Torah. It is not a Holy Name in Judaism, as no name could contain a revelation of the Ein Sof. Even terming it "No End" is an inadequate representation of its true nature, the description only bearing its designation in relation to Creation. However, the Torah does narrate God speaking in the first person, most memorably the first word of the Ten Commandments, a reference without any description or name to the simple Divine essence (termed also Atzmus Ein Sof, Essence of the Infinite) beyond even the duality of Infinitude/Finitude. In contrast, the term Ein Sof describes the Godhead as the Infinite lifeforce first cause, continuously keeping all Creation in existence. The Zohar reads the first words of Genesis, BeReishit Bara Elohim – In the beginning God created – as "With (the level of) Reishit (Beginning) (the Ein Sof) created Elohim (God's manifestation in creation)":[40] At the very beginning the King made engravings in the supernal purity. A spark of blackness emerged in the sealed within the sealed, from the mystery of the Ayn Sof, a mist within matter, implanted in a ring, no white, no black, no red, no yellow, no colour at all. When He measured with the standard of measure, He made colours to provide light. Within the spark, in the innermost part, emerged a source, from which the colours are painted below; it is sealed among the sealed things of the mystery of Ayn Sof. It penetrated, yet did not penetrate its air.
It was not known at all until, from the pressure of its penetration, a single point shone, sealed, supernal. Beyond this point nothing is known, so it is called reishit (beginning): the first word of all ... The structure of emanations has been described in various ways: Sephirot (divine attributes) and Partzufim (divine "faces"), Ohr (spiritual light and flow), Names of God and the supernal Torah, Olamot (spiritual worlds), a Divine Tree and Archetypal Man, Angelic Chariot and Palaces, male and female, enclothed layers of reality, inwardly holy vitality and external Kelipot shells, 613 channels ("limbs" of the King) and the divine souls of Man. These symbols are used to describe various levels and aspects of Divine manifestation, from the Pnimi (inner) dimensions to the Hitzoni (outer).[citation needed] It is solely in relation to the emanations, certainly not the Ein Sof Ground of all Being, that Kabbalah uses anthropomorphic symbolism to relate psychologically to divinity. Kabbalists debated the validity of anthropomorphic symbolism, between its disclosure as mystical allusion versus its instrumental use as allegorical metaphor; in the language of the Zohar, symbolism "touches yet does not touch" its point.[41][non-primary source needed] The Sephirot (also spelled "sefirot"; singular sefirah) are the ten emanations and attributes of God with which he continually sustains the existence of the universe. These emanations are viewed as parts of God's divine nature, which reveal themselves in different ways. The Zohar and other Kabbalistic texts elaborate on the emergence of the sephirot from a state of concealed potential in the Ein Sof until their manifestation in the mundane world.
In particular, Moses ben Jacob Cordovero (known as "the Ramak") describes how God emanated the myriad details of finite reality out of the absolute unity of Divine light via the ten sephirot, or vessels.[42] According to Lurianic cosmology, the sephirot correspond to various levels of creation (ten sephirot in each of the Four Worlds, and four worlds within each of the larger four worlds, each containing ten sephirot, which themselves contain ten sephirot, to an infinite number of possibilities),[43] and are emanated from the Creator for the purpose of creating the universe. The sephirot are considered revelations of the Creator's will (ratzon),[44] and they should not be understood as ten different "gods" but as ten different ways the one God reveals his will through the Emanations. It is not God who changes but the ability to perceive God that changes.[citation needed] Divine creation by means of the Ten Sephirot is an ethical process. They represent the different aspects of morality: loving-kindness is a possible moral justification found in Chessed, Gevurah is the moral justification of justice, and both are mediated by mercy, which is Rachamim. However, these pillars of morality become immoral once they become extremes. When loving-kindness becomes extreme, it can lead to sexual depravity and a lack of justice toward the wicked. When justice becomes extreme, it can lead to torture, the murder of innocents, and unfair punishment.[citation needed] "Righteous" humans (tzadikim, plural of tzadik) ascend these ethical qualities of the ten sephirot by doing righteous actions. If there were no righteous humans, the blessings of God would become completely hidden, and creation would cease to exist. While real human actions are the "Foundation" (Yesod) of this universe (Malchut), these actions must accompany the conscious intention of compassion.
Compassionate actions are often impossible without faith (Emunah), meaning trust that God always supports compassionate actions even when God seems hidden. Ultimately, it is necessary to show compassion toward oneself too in order to share compassion toward others. This "selfish" enjoyment of God's blessings, pursued only in order to empower oneself to assist others, is an important aspect of "Restriction", and is considered a kind of golden mean in kabbalah, corresponding to the sefirah of Adornment (Tiferet) being part of the "Middle Column".[citation needed]

Moses ben Jacob Cordovero wrote Tomer Devorah (Palm Tree of Deborah), in which he presents an ethical teaching of Judaism in the kabbalistic context of the ten sephirot. Tomer Devorah has also become a foundational Musar text.[45]

The most esoteric Idrot sections of the classic Zohar make reference to hypostatic male and female Partzufim (Divine Personas) displacing the Sephirot, manifestations of God in particular anthropomorphic symbolic personalities based on Biblical esoteric exegesis and midrashic narratives. Lurianic Kabbalah places these at the centre of our existence, rather than earlier Kabbalah's Sephirot, which Luria saw as broken in Divine crisis. Contemporary cognitive understanding of the Partzuf symbols relates them to Jungian archetypes of the collective unconscious, reflecting a psychologised progression from youth to sage in therapeutic healing back to the infinite Ein Sof/Unconscious, as Kabbalah is simultaneously both theology and psychology.[46]

Medieval Kabbalists believed that all things are linked to God through these emanations, making all levels in creation part of one great, gradually descending chain of being. Through this, any lower creation reflects its particular roots in supernal divinity. Kabbalists agreed with the divine transcendence described by Jewish philosophy, but as only referring to the Ein Sof unknowable Godhead.
They reinterpreted the theistic philosophical concept of creation from nothing, replacing God's creative act with panentheistic continual self-emanation by the mystical Ayin Nothingness/No-thing, sustaining all spiritual and physical realms as successively more corporeal garments, veils and condensations of divine immanence. The innumerable levels of descent divide into four comprehensive spiritual worlds: Atziluth ("Closeness" – Divine Wisdom), Beriah ("Creation" – Divine Understanding), Yetzirah ("Formation" – Divine Emotions), and Assiah ("Action" – Divine Activity), with a preceding fifth world, Adam Kadmon ("Primordial Man" – Divine Will), sometimes excluded due to its sublimity. Together the whole spiritual heavens form the Divine Persona/Anthropos.[citation needed]

Hasidic thought extends the divine immanence of Kabbalah by holding that God is all that really exists, all else being completely undifferentiated from God's perspective. This view can be defined as acosmic monistic panentheism. According to this philosophy, God's existence is higher than anything that this world can express, yet he includes all things of this world within his divine reality in perfect unity, so that the creation effected no change in him at all. This paradox, as seen from dual human and divine perspectives, is dealt with at length in Chabad texts.[47]

Among the problems considered in the Hebrew Kabbalah is the theological issue of the nature and origin of evil. Some Kabbalists conceive of "evil" as a "quality of God", asserting that negativity enters into the essence of the Absolute; in this view, the Absolute needs evil to "be what it is", i.e., to exist.[49] Foundational texts of Medieval Kabbalism conceived evil as a demonic parallel to the holy, called the Sitra Achra (the "Other Side"), and the qlippoth (the "shells/husks") that cover and conceal the holy, are nurtured from it, and yet also protect it by limiting its revelation.
Scholem termed this element of the Spanish Kabbalah a "Jewish gnostic" motif, in the sense of dual powers in the divine realm of manifestation. In a radical notion, the root of evil is found within the 10 holy Sephirot, through an imbalance of Gevurah, the power of "Strength/Judgement/Severity".[50] Gevurah is necessary for Creation to exist as it counterposes Chesed ("loving-kindness"), restricting the unlimited divine bounty within suitable vessels, so forming the Worlds. However, if man sins (actualising impure judgement within his soul), the supernal Judgement is reciprocally empowered over the Kindness, introducing disharmony among the Sephirot in the divine realm and exile from God throughout Creation. The demonic realm, though illusory in its holy origin, becomes the real apparent realm of impurity in lower Creation.

In the Zohar, the sin of Adam and Eve (who embodied Adam Kadmon below) took place in the spiritual realms. Their sin was that they separated the Tree of knowledge (10 sefirot within Malkuth, representing Divine immanence) from the Tree of life within it (10 sefirot within Tiferet, representing Divine transcendence). This introduced the false perception of duality into lower creation, an external Tree of Death nurtured from holiness, and an Adam Belial of impurity.[51] In Lurianic Kabbalah, evil originates from a primordial shattering of the sephirot of God's Persona before creation of the stable spiritual worlds, mystically represented by the 8 Kings of Edom (the derivative of Gevurah) "who died" before any king reigned in Israel, from Genesis 36.
In the divine view from above within Kabbalah, emphasised in Hasidic Panentheism, the appearance of duality and pluralism below dissolves into the absolute Monism of God, psychologising evil.[52] Though impure below, what appears as evil derives from a divine blessing too high to be contained openly.[53] The mystical task of the righteous in the Zohar is to reveal this concealed Divine Oneness and absolute good, to "convert bitterness into sweetness, darkness into light".[This quote needs a citation]

Kabbalistic doctrine gives man the central role in Creation, as his soul and body correspond to the supernal divine manifestations. In the Christian Kabbalah this scheme was universalised to describe harmonia mundi, the harmony of Creation within man.[54] In Judaism, it gave a profound spiritualisation of Jewish practice. While the kabbalistic scheme gave a radically innovative, though conceptually continuous, development of mainstream Midrashic and Talmudic rabbinic notions, kabbalistic thought underscored and invigorated conservative Jewish observance. The esoteric teachings of kabbalah gave the traditional mitzvot observances the central role in spiritual creation, whether the practitioner was learned in this knowledge or not. Accompanying normative Jewish observance and worship with elite mystical kavanot intentions gave them theurgic power, but sincere observance by common folk, especially in the Hasidic popularisation of kabbalah, could replace esoteric abilities. Many kabbalists were also leading legal figures in Judaism, such as Nachmanides and Joseph Karo.[citation needed]

Medieval kabbalah elaborates particular reasons for each Biblical mitzvah, and their role in harmonising the supernal divine flow, uniting masculine and feminine forces on High. With this, the feminine Divine presence in this world is drawn from exile to the Holy One Above. The 613 mitzvot are embodied in the organs and soul of man.
Lurianic Kabbalah incorporates this in the more inclusive scheme of Jewish messianic rectification of exiled divinity. In contrast to rationalist, human-centred reasons for Jewish observance grounded in Divine transcendence, Jewish mysticism gave Divine-immanent providential cosmic significance to the daily events in the worldly life of man in general, and to the spiritual role of Jewish observance in particular.[citation needed]

The Kabbalah posits that the human soul has three elements: the nefesh, ru'ach, and neshamah. The nefesh is found in all humans, and enters the physical body at birth. It is the source of one's physical and psychological nature. The next two parts of the soul are not implanted at birth, but can be developed over time; their development depends on the actions and beliefs of the individual. They are said to only fully exist in people awakened spiritually. A common way of explaining the three parts of the soul is as follows:[56]

Reincarnation, the transmigration of the soul after death, was introduced into Judaism as a central esoteric tenet of Kabbalah from the Medieval period onwards, called Gilgul neshamot ("cycles of the soul"). The concept does not appear overtly in the Hebrew Bible or classic rabbinic literature, and was rejected by various Medieval Jewish philosophers. However, the Kabbalists explained a number of scriptural passages in reference to Gilgulim. The concept became central to the later Kabbalah of Isaac Luria, who systemised it as the personal parallel to the cosmic process of rectification. Through Lurianic Kabbalah and Hasidic Judaism, reincarnation entered popular Jewish culture as a literary motif.[57]

Tzimtzum (Constriction/Concentration) is the primordial cosmic act whereby God "contracted" His infinite light, leaving a "void" into which the light of existence was poured.
This allowed the emergence of independent existence that would not become nullified by the pristine Infinite Light, reconciling the unity of the Ein Sof with the plurality of creation. This changed the first creative act into one of withdrawal/exile, the antithesis of the ultimate Divine Will. In contrast, a new emanation after the Tzimtzum shone into the vacuum to begin creation, but led to an initial instability called Tohu (Chaos), leading to a new crisis of Shevirah (Shattering) of the sephirot vessels. The shards of the broken vessels fell down into the lower realms, animated by remnants of their divine light, causing primordial exile within the Divine Persona before the creation of man. Exile and enclothement of higher divinity within lower realms throughout existence requires man to complete the Tikkun olam (Rectification) process. Rectification Above corresponds to the reorganization of the independent sephirot into relating Partzufim (Divine Personas), previously referred to obliquely in the Zohar. From the catastrophe stems the possibility of self-aware Creation, and also the Kelipot (Impure Shells) of previous Medieval kabbalah. The metaphorical anthropomorphism of the partzufim accentuates the sexual unifications of the redemption process, while Gilgul reincarnation emerges from the scheme. Uniquely, Lurianism gave formerly private mysticism the urgency of Messianic social involvement.[citation needed]

According to interpretations of Luria, the catastrophe stemmed from the "unwillingness" of the residue imprint after the Tzimtzum to relate to the new vitality that began creation. The process was arranged to shed and harmonise the Divine Infinity with the latent potential of evil.[58] The creation of Adam would have redeemed existence, but his sin caused a new shevirah of Divine vitality, requiring the Giving of the Torah to begin Messianic rectification.
Collective and individual history becomes the narrative of reclaiming exiled Divine sparks.[citation needed]

Kabbalistic thought extended Biblical and Midrashic notions that God enacted Creation through the Hebrew language and through the Torah into a full linguistic mysticism.[59] In this view, every Hebrew letter, word, number, and even accent on words of the Hebrew Bible contains Jewish mystical meanings, describing the spiritual dimensions within exoteric ideas, and Kabbalah teaches the hermeneutic methods of interpretation for ascertaining these meanings. Names of God in Judaism have further prominence, though infinite meaning turns the whole Torah into a Divine name. As the Hebrew name of things is the channel of their lifeforce, parallel to the sephirot, so concepts such as "holiness" and "mitzvot" embody ontological Divine immanence, as God can be known in manifestation as well as transcendence. The infinite potential of meaning in the Torah, as in the Ein Sof, is reflected in the symbol of the two trees of the Garden of Eden; the Torah of the Tree of Knowledge is the external, finite Halachic Torah, enclothed within which the mystics perceive the unlimited infinite plurality of meanings of the Torah of the Tree of Life. In Lurianic terms, each of the 600,000 root souls of Israel finds its own interpretation in Torah, as "God, the Torah and Israel are all One".[60][61]

The reapers of the Field are the Comrades, masters of this wisdom, because Malkhut is called the Apple Field, and She grows sprouts of secrets and new meanings of Torah. Those who constantly create new interpretations of Torah are the ones who reap Her.[61]

As early as the 1st century BCE, Jews believed that the Torah and other canonical texts contained encoded messages and hidden meanings.[62] Gematria is one method for discovering those hidden meanings. In this system, each Hebrew letter also represents a number. By converting letters to numbers, Kabbalists were able to find a hidden meaning in each word.
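As a computational illustration only, the basic letter-to-number conversion of gematria can be sketched in a few lines of Python; the values used are the conventional ones (aleph = 1 through tav = 400, with the five final letter forms counted like their ordinary forms, as in the common counting method):

```python
# Conventional gematria values: aleph=1 ... tet=9, yod=10 ... tzadi=90,
# qof=100, resh=200, shin=300, tav=400. Final forms (kaf, mem, nun, pe,
# tzadi sofit) are counted like their ordinary forms in the common method.
GEMATRIA = {
    'א': 1, 'ב': 2, 'ג': 3, 'ד': 4, 'ה': 5, 'ו': 6, 'ז': 7, 'ח': 8, 'ט': 9,
    'י': 10, 'כ': 20, 'ך': 20, 'ל': 30, 'מ': 40, 'ם': 40, 'נ': 50, 'ן': 50,
    'ס': 60, 'ע': 70, 'פ': 80, 'ף': 80, 'צ': 90, 'ץ': 90,
    'ק': 100, 'ר': 200, 'ש': 300, 'ת': 400,
}

def gematria(word: str) -> int:
    """Sum the numerical values of the Hebrew letters in a word,
    ignoring any non-letter characters such as vowel points."""
    return sum(GEMATRIA.get(letter, 0) for letter in word)

# A classic example: chai ("life") = chet (8) + yod (10) = 18.
print(gematria('חי'))  # 18
```

This sketch covers only the simple additive counting the paragraph describes; traditional gematria comprises many further counting schemes and interpretive rules beyond this.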
This method of interpretation was used extensively by various schools.[63]

In contemporary interpretation of kabbalah, Sanford Drob makes cognitive sense of this linguistic mythos by relating it to postmodern philosophical concepts described by Jacques Derrida and others, where all reality embodies narrative texts with an infinite plurality of meanings brought by the reader. In this dialogue, kabbalah survives the nihilism of Deconstruction by incorporating its own Lurianic Shevirah, and by the dialectical paradox whereby man and God imply each other.[64]

The founder of the academic study of Jewish mysticism, Gershom Scholem, privileged an intellectual view of the nature of Kabbalistic symbols as dialectic Theosophical speculation. In contrast, the contemporary scholarship of Moshe Idel and Elliot R. Wolfson has opened a phenomenological understanding of the mystical nature of Kabbalistic experience, based on a close reading of the historical texts. Wolfson has shown that among the closed elite circles of mystical activity, medieval Theosophical Kabbalists held that an intellectual view of their symbols was secondary to the experiential. In the context of medieval Jewish philosophical debates on the role of imagination in Biblical prophecy, and essentialist versus instrumental kabbalistic debates about the relation of the sephirot to God, they saw contemplation on the sephirot as a vehicle for prophecy. Judaism's ban on physical iconography, along with anthropomorphic metaphors for Divinity in the Hebrew Bible and midrash, enabled their internal visualisation of the Divine sephirot Anthropos in imagination. Disclosure of the aniconic in iconic internal psychology involved sublimatory revelation of Kabbalah's sexual unifications.
The previous academic distinction between Theosophical and Abulafian Ecstatic-Prophetic Kabbalah overstated their division of aims, which revolved around visual versus verbal/auditory views of prophecy.[65] In addition, throughout the history of Judaic Kabbalah, the greatest mystics claimed to receive new teachings from Elijah the Prophet, the souls of earlier sages (a purpose of Lurianic meditation prostrated on the graves of Talmudic Tannaim, Amoraim and Kabbalists), the soul of the mishnah, ascents during sleep, heavenly messengers, and the like. A tradition of parapsychological abilities, psychic knowledge, and theurgic intercessions in heaven for the community is recounted in the hagiographic works Praises of the Ari and Praises of the Besht, and in many other Kabbalistic and Hasidic tales. Kabbalistic and Hasidic texts are concerned to apply themselves from exegesis and theory to spiritual practice, including prophetic drawing of new mystical revelations in Torah. The mythological symbols Kabbalah uses to answer philosophical questions themselves invite mystical contemplation, intuitive apprehension and psychological engagement.[66]

In bringing Theosophical Kabbalah into contemporary intellectual understanding, using the tools of modern and postmodern philosophy and psychology, Sanford Drob shows philosophically how every symbol of the Kabbalah embodies the simultaneous dialectical paradox of mystical Coincidentia oppositorum, the conjoining of two opposite dualities.[67] Thus the Infinite Ein Sof is above the duality of Yesh/Ayin, Being/Non-Being, transcending Existence/Nothingness (Becoming into Existence through the souls of Man, who are the inner dimension of all spiritual and physical worlds, yet simultaneously the Infinite Divine generative lifesource beyond Creation that continuously keeps everything spiritual and physical in existence); the Sephirot bridge the philosophical problem of the One and the Many; Man is both Divine (Adam Kadmon) and human (invited to project human psychology onto Divinity to understand
it); Tzimtzum is both illusion and real from Divine and human perspectives; evil and good imply each other (Kelipah draws from Divinity, good arises only from overcoming evil); Existence is simultaneously partial (Tzimtzum), broken (Shevirah), and whole (Tikun) from different perspectives; God experiences Himself as Other through Man, and Man embodies and completes (Tikun) the Divine Persona Above. In Kabbalah's reciprocal Panentheism, Theism and Atheism/Humanism represent two incomplete poles of a mutual dialectic that imply and include each other's partial validity.[64] This was expressed by the Chabad Hasidic thinker Aaron of Staroselye, that the truth of any concept is revealed only in its opposite.

They wish to convey here that if arms were a disgrace to the hero, the verse would not have used them as a parable for words of Torah. Instead, they are an adornment for him, so the verse used them for its parable, saying that he should have words of Torah and wisdom in hand, like the sword on the hero's thigh, girded and accessible to him whenever he wishes to unsheathe it and use it to overpower his fellow—this is his glory and splendor. This is the idea wherever they expound a midrashic parable or allegory; they believe that both "the internal and external" are true.[68]

By expressing itself using symbols and myth that transcend single interpretations, Theosophical Kabbalah incorporates aspects of philosophy, Jewish theology, psychology and unconscious depth psychology, mysticism and meditation, Jewish exegesis, theurgy, and ethics, as well as overlapping with theory from magical elements. Its symbols can be read as questions which are their own existentialist answers (the Hebrew sephirah Chokmah-Wisdom, the beginning of Existence, is read etymologically by Kabbalists as the question "Koach Mah?", the "Power of What?").
Alternative listings of the Sephirot start with either Keter (Unconscious Will/Volition) or Chokmah (Wisdom), a philosophical duality between a Rational or Supra-Rational Creation, between whether the Mitzvot Judaic observances have reasons or transcend reasons in Divine Will, between whether study or good deeds is superior, and whether the symbols of Kabbalah should be read as primarily metaphysical intellectual cognition or Axiology values. Messianic redemption requires both ethical Tikkun olam and contemplative Kavanah. Sanford Drob sees every attempt to limit Kabbalah to one fixed dogmatic interpretation as necessarily bringing its own Deconstruction (Lurianic Kabbalah incorporates its own Shevirah self-shattering; the Ein Sof transcends all of its infinite expressions; the infinite mystical Torah of the Tree of Life has no/infinite interpretations). The infinite axiology of the Ein Sof One, expressed through the Plural Many, overcomes the dangers of nihilism, or the antinomian mystical breaking of Jewish observance alluded to throughout Kabbalistic and Hasidic mysticisms.[64]

Like the rest of the rabbinic literature, the texts of kabbalah were once part of an ongoing oral tradition, though, over the centuries, much of the oral tradition has been written down. Jewish forms of esotericism existed over 2,000 years ago. Ben Sira (born c. 170 BCE) warns against it, saying: "You shall have no business with secret things".[69] Nonetheless, mystical studies were undertaken and resulted in mystical literature, the first being the Apocalyptic literature of the second and first pre-Christian centuries, which contained elements that carried over to later kabbalah. Throughout the centuries since, many texts have been produced, among them the ancient descriptions of Sefer Yetzirah, the Heichalot mystical ascent literature, the Bahir, Sefer Raziel HaMalakh and the Zohar, the main text of Kabbalistic exegesis.
Classic mystical Bible commentaries are included in fuller versions of the Mikraot Gedolot (Main Commentators). Cordoveran systemisation is presented in Pardes Rimonim, philosophical articulation in the works of the Maharal, and Lurianic rectification in Etz Chayim. Subsequent interpretation of Lurianic Kabbalah was made in the writings of Shalom Sharabi, in Nefesh HaChaim and the 20th-century Sulam. Hasidism interpreted kabbalistic structures to their correspondence in inward perception.[70] The Hasidic development of kabbalah incorporates a successive stage of Jewish mysticism from historical kabbalistic metaphysics.[71]

The first modern-academic historians of Judaism, the "Wissenschaft des Judentums" school of the 19th century, framed Judaism in solely rational terms in the emancipatory Haskalah spirit of their age. They opposed kabbalah and restricted its significance from Jewish historiography. In the mid-20th century, it was left to Gershom Scholem to overturn their stance, establishing the flourishing present-day academic investigation of Jewish mysticism, and making Heichalot, Kabbalistic and Hasidic texts the objects of scholarly critical-historical study. In Scholem's opinion, the mythical and mystical components of Judaism were at least as important as the rational ones, and he thought that they, rather than the exoteric Halakha or intellectualist Jewish philosophy, were the living subterranean stream in historical Jewish development that periodically broke out to renew the Jewish spirit and social life of the community.
Scholem's magisterial Major Trends in Jewish Mysticism (1941), among his seminal works, remains the only academic survey studying all main historical periods of Jewish mysticism,[dubious–discuss][citation needed] though it represents scholarship and interpretations that have subsequently been challenged and revised within the field.[72]

The Hebrew University of Jerusalem has been a centre of this research, including Scholem and Isaiah Tishby, and more recently Joseph Dan, Yehuda Liebes, Rachel Elior, and Moshe Idel.[73] Scholars across the eras of Jewish mysticism in America and Britain have included Alexander Altmann, Arthur Green, Lawrence Fine, Elliot Wolfson, Daniel Matt,[74] Louis Jacobs and Ada Rapoport-Albert. Moshe Idel has opened up research on the Ecstatic Kabbalah alongside the theosophical, and has called for new multi-disciplinary approaches, beyond the philological and historical that have dominated until now, to include phenomenology, psychology, anthropology and comparative studies.[75]

Historians have noted that most claims for the authority of kabbalah involve an argument of the antiquity of authority.[76] As a result, virtually all early foundational works pseudepigraphically claim, or are ascribed, ancient authorship. For example, Sefer Raziel HaMalach, an astro-magical text partly based on a magical manual of late antiquity, Sefer ha-Razim, was, according to the kabbalists, transmitted by the angel Raziel to Adam after he was evicted from Eden. Another famous work, the early Sefer Yetzirah, is dated back to the patriarch Abraham.[77] This tendency toward pseudepigraphy has its roots in apocalyptic literature, which claims that esoteric knowledge such as magic, divination and astrology was transmitted to humans in the mythic past by the two angels, Aza and Azaz'el (in other places, Azaz'el and Uzaz'el), who fell from heaven (see Genesis 6:4).
As well as ascribing ancient origins to texts and claiming reception of Oral Torah transmission, the greatest and most innovative Kabbalists claimed mystical reception of direct personal divine revelations, by heavenly mentors such as Elijah the Prophet, the souls of Talmudic sages, prophetic revelation, soul ascents on high, and so on. On this basis Arthur Green speculates that, while the Zohar was written by a circle of Kabbalists in medieval Spain, they may have believed they were channeling the souls and direct revelations of the earlier mystic circle of Shimon bar Yochai in 2nd-century Galilee depicted in the Zohar's narrative.[78] Academics have compared the Zohar mystic circle of Spain with the romanticised wandering mystic circle of Galilee described in the text. Similarly, Isaac Luria gathered his disciples at the traditional Idra assembly location, placing each in the seat of their former reincarnations as students of Shimon bar Yochai.

One point of view is represented by the Hasidic work Tanya (1797), which argues that Jews have a different character of soul: while a non-Jew, according to the author Shneur Zalman of Liadi (1745–1812), can achieve a high level of spirituality, similar to an angel, his soul is still fundamentally different in character from a Jewish one.[79] A similar view is found in Kuzari, an early medieval philosophical book by Yehuda Halevi (1075–1141).[80] Another rabbi, Abraham Yehudah Khein (1878–1957), believed that spiritually elevated Gentiles have essentially Jewish souls, "who just lack the formal conversion to Judaism", and that unspiritual Jews are "Jewish merely by their birth documents".[81]

David Halperin argues that the collapse of Kabbalah's influence among Western European Jews over the course of the 17th and 18th centuries was a result of the cognitive dissonance they experienced between the negative perception of Gentiles found in some exponents of Kabbalah and their own positive dealings with non-Jews, which were rapidly expanding and improving during
this period due to the influence of the Enlightenment.[82] Pinchas Elijah Hurwitz, a Lithuanian-Galician Kabbalist of the 18th century and a moderate proponent of the Haskalah, called for brotherly love and solidarity between all nations, and believed that Kabbalah can empower everyone, Jews and Gentiles alike, with prophetic abilities.[83]

The works of Abraham Cohen de Herrera (1570–1635) are full of references to Gentile mystical philosophers. Such an approach was particularly common among the Renaissance and post-Renaissance Italian Jews. Late medieval and Renaissance Italian Kabbalists, such as Yohanan Alemanno, David Messer Leon and Abraham Yagel, adhered to humanistic ideals and incorporated teachings of various Christian and pagan mystics.[citation needed]

A prime representative of this humanist stream in Kabbalah was Elijah Benamozegh, who explicitly praised Christianity, Islam, Zoroastrianism, Hinduism, as well as a whole range of ancient pagan mystical systems. He believed that Kabbalah can reconcile the differences between the world's religions, which represent different facets and stages of the universal human spirituality. In his writings, Benamozegh interprets the New Testament, Hadith, Vedas, Avesta and pagan mysteries according to Kabbalistic theosophy.[84]

E. R. Wolfson provides numerous examples from the 17th to the 20th centuries which would challenge the view of Halperin, as well as the notion that "modern Judaism" has rejected or dismissed this "outdated aspect" of the religion; he argues that there are still Kabbalists today who harbor this view. He argues that, while it is accurate to say that many Jews do and would find this distinction offensive, it is inaccurate to say that the idea has been totally rejected in all circles.
As Wolfson has argued, it is an ethical demand on the part of scholars to continue to be vigilant with regard to this matter, and in this way the tradition can be refined from within.[85]

The idea that there are ten divine sephirot could evolve over time into the idea that "God is One being, yet in that One being there are Ten", which opens up a debate about what the "correct beliefs" in God should be, according to Judaism. The early Kabbalists debated the relationship of the Sephirot to God, adopting a range of essentialist versus instrumental views.[25] Modern Kabbalah, based on the 16th-century systemisations of Cordovero and Isaac Luria, takes an intermediate position: the instrumental vessels of the sephirot are created, but their inner light is from the undifferentiated Ohr Ein Sof essence.[citation needed]

Maimonides (12th century), celebrated by followers for his Jewish rationalism, rejected many of the pre-Kabbalistic Hekalot texts, particularly Shi'ur Qomah, whose starkly anthropomorphic vision of God he considered heretical.[86] Maimonides, a centrally important medieval sage of Judaism, lived at the time of the first emergence of Kabbalah. Modern scholarship views the systemisation and publication of their historic oral doctrine by Kabbalists as a move to rebut the threat to Judaic observance posed by the populace misreading Maimonides' ideal of philosophical contemplation over ritual performance in his philosophical Guide for the Perplexed. They objected to Maimonides equating the Talmudic Maaseh Breishit and Maaseh Merkavah secrets of the Torah with Aristotelean physics and metaphysics in that work and in his legal Mishneh Torah, teaching that their own Theosophy, centred on an esoteric metaphysics of traditional Jewish practice, is the Torah's true inner meaning.[citation needed]

The Kabbalist medieval rabbinic sage Nachmanides (13th century), a classic debater against Maimonidean rationalism, provides background to many kabbalistic ideas.
An entire book entitled Gevuras Aryeh was authored by Yaakov Yehuda Aryeh Leib Frenkel and originally published in 1915, specifically to explain and elaborate on the kabbalistic concepts addressed by Nachmanides in his classic commentary to the Five Books of Moses.[citation needed]

Abraham Maimonides (in the spirit of his father Maimonides, Saadiah Gaon, and other predecessors) explains at length in his Milḥamot HaShem that God is in no way literally within time or space, nor physically outside time or space, since time and space simply do not apply to his being whatsoever, emphasizing the Monotheist Oneness of Divine transcendence as unlike any worldly conception. Kabbalah's Panentheism, expressed by Moses Cordovero and Hasidic thought, agrees that God's essence transcends all expression, but holds in contrast that existence is a manifestation of God's Being, descending immanently through spiritual and physical condensations of the divine light. By incorporating the pluralist many within God, God's Oneness is deepened to exclude the true existence of anything but God. In Hasidic Panentheism, the world is acosmic from the Divine view, yet real from its own perspective.[citation needed]

Around the 1230s, Rabbi Meir ben Simon of Narbonne wrote an epistle (included in his Milḥemet Mitzvah) against his contemporaries, the early Kabbalists, characterizing them as blasphemers who even approach heresy. He particularly singled out the Sefer Bahir, rejecting the attribution of its authorship to the tanna R. Neḥunya ben ha-Kanah and describing some of its content as truly heretical.[25]

Leon of Modena, a 17th-century Venetian critic of Kabbalah, wrote that if we were to accept the Kabbalah, then the Christian trinity would be compatible with Judaism, as the Trinity seems to resemble the kabbalistic doctrine of the sephirot. This was in response to the belief that some European Jews of the period addressed individual sephirot in their prayers, although the practice was apparently uncommon.
Apologists explained that Jews may have been praying for, and not necessarily to, the aspects of Godliness represented by the sephirot. In contrast to Christianity, Kabbalists declare that one prays only "to Him (God's Essence, male solely by metaphor in Hebrew's gendered grammar), not to his attributes (sephirot or any other Divine manifestations or forms of incarnation)". Kabbalists directed their prayers to God's essence through the channels of particular sephirot, using kavanot (mystical intentions) focused on Divine names. To pray to a manifestation of God introduces false division among the sephirot, disrupting their absolute unity with, dependence on, and dissolution into the transcendent Ein Sof; the sephirot descend throughout Creation, only appearing from man's perception of God, where God manifests through any variety of numbers.[citation needed]

Yaakov Emden (1697–1776), himself an Orthodox Kabbalist who venerated the Zohar,[87] and concerned to battle Sabbatean misuse of Kabbalah, wrote the Mitpaḥath Sfarim (Veil of the Books), an astute critique of the Zohar in which he concludes that certain parts of the Zohar contain heretical teaching and therefore could not have been written by Shimon bar Yochai.[87]

The Vilna Gaon (1720–1797) held the Zohar and Luria in deep reverence, critically emending classic Judaic texts from historically accumulated errors by his acute acumen and scholarly belief in the perfect unity of Kabbalah revelation and Rabbinic Judaism. Though a Lurianic Kabbalist, his commentaries sometimes chose Zoharic interpretation over Luria when he felt the matter lent itself to a more exoteric view.
Although proficient in mathematics and sciences, and recommending their necessity for understanding Talmud, he had no use for canonical medieval Jewish philosophy, declaring that Maimonides had been "misled by the accursed philosophy" in denying belief in the external occult matters of demons, incantations and amulets.[88]

Views of Kabbalists regarding Jewish philosophy varied from those who appreciated Maimonidean and other classic medieval philosophical works, integrating them with Kabbalah and seeing profound human philosophical and Divine kabbalistic wisdoms as compatible, to those who polemicised against religious philosophy during times when it became overly rationalist and dogmatic. A dictum commonly cited by Kabbalists, "Kabbalah begins where Philosophy ends", can be read as either appreciation or polemic. Moses of Burgos (late 13th century) declared, "these philosophers whose wisdom you are praising end where we begin".[89] Moses Cordovero appreciated the influence of Maimonides in his quasi-rational systemisation.[90] From its inception, the Theosophical Kabbalah became permeated by terminology adapted from philosophy and given new mystical meanings, such as its early integration with the Neoplatonism of Ibn Gabirol and its use of the Aristotelian terms of Form over Matter.[citation needed]

Pinchas Giller and Adin Steinsaltz write that Kabbalah is best described as the inner part of traditional Jewish religion, the official metaphysics of Judaism, which was essential to normative Judaism until fairly recently.[91][92] With the decline of Jewish life in medieval Spain, it displaced rationalist Jewish philosophy until the modern rise of Haskalah enlightenment, receiving a revival in our postmodern age. While Judaism always maintained a minority tradition of religious rationalist criticism of Kabbalah, Gershom Scholem writes that Lurianic Kabbalah was the last theology that was nearly predominant in Jewish life.
While Lurianism represented the elite of esoteric Kabbalism, its mythic-messianic divine drama and personalisation of reincarnation captured the popular imagination in Jewish folklore and in the Sabbatean and Hasidic social movements.[93] Giller notes that the earlier Zoharic-Cordoverian classic Kabbalah represented a common exoteric popular view of Kabbalah, as depicted in early modern Musar literature.[94]

In contemporary Orthodox Judaism there is dispute as to the status of the Zohar's and Isaac Luria's (the Arizal's) Kabbalistic teachings. While a portion of Modern Orthodox Jews, followers of the Dor De'ah movement, and many students of the Rambam reject the Arizal's Kabbalistic teachings, and deny that the Zohar is authoritative or from Shimon bar Yohai, all three of these groups accept the existence and validity of the Talmudic Maaseh Breishit and Maaseh Merkavah mysticism. Their disagreement concerns whether the Kabbalistic teachings promulgated today are accurate representations of those esoteric teachings to which the Talmud refers. The mainstream Haredi (Hasidic, Lithuanian, Oriental) and Religious Zionist Jewish movements revere Luria and the Kabbalah, but one can find both rabbis who sympathize with such a view while disagreeing with it,[95] as well as rabbis who consider such a view heresy. The Haredi Eliyahu Dessler and Gedaliah Nadel maintained that it is acceptable to believe that the Zohar was not written by Shimon bar Yochai and that it had a late authorship.[96] Yechiel Yaakov Weinberg mentioned the possibility of Christian influence in the Kabbalah, with the "Kabbalistic vision of the Messiah as the redeemer of all mankind" being "the Jewish counterpart to Christ."[97]

Modern Orthodox Judaism, representing an inclination to rationalism, an embrace of academic scholarship, and the individual's autonomy to define Judaism, embodies a diversity of views regarding Kabbalah, from a Neo-Hasidic spirituality to Maimonist anti-Kabbalism.
In a book that seeks to define central theological issues in Modern Orthodoxy, Michael J. Harris writes that the relationship between Modern Orthodoxy and mysticism has been under-discussed. He sees a deficiency of spirituality in Modern Orthodoxy, as well as dangers in a fundamentalist adoption of Kabbalah. He suggests the development of neo-Kabbalistic adaptations of Jewish mysticism compatible with rationalism, offering a variety of precedent models from past thinkers, ranging from the mystical inclusivism of Abraham Isaac Kook to a compartmentalisation between Halakha and mysticism.[98]

Yiḥyeh Qafeḥ, a 20th-century Yemenite Jewish leader and Chief Rabbi of Yemen, spearheaded the Dor De'ah ("generation of knowledge") movement[99] to counteract the influence of the Zohar and modern Kabbalah.[100] He authored critiques of mysticism in general and Lurianic Kabbalah in particular; his magnum opus was Milḥamoth ha-Shem (Wars of Hashem),[101] directed against what he perceived as neo-Platonic and gnostic influences on Judaism that had accompanied the publication and distribution of the Zohar since the 13th century. Rabbi Yiḥyah founded yeshivot, rabbinical schools, and synagogues that featured a rationalist approach to Judaism based on the Talmud and the works of Saadia Gaon and Maimonides (Rambam). In recent years, rationalists holding views similar to those of the Dor De'ah movement have described themselves as "talmide ha-Rambam" (disciples of Maimonides) rather than as being aligned with Dor De'ah, and are more theologically aligned with the rationalism of Modern Orthodox Judaism than with Orthodox Ḥasidic or Ḥaredi communities.[102]

Yeshayahu Leibowitz (1903–1994), an ultra-rationalist Modern Orthodox philosopher, referred to Kabbalah as "a collection of pagan superstitions" and "idol worship" in remarks given in 1990.[103]

Kabbalah tended to be rejected by most Jews in the Conservative and Reform movements, though its influences were not eliminated.
While it was generally not studied as a discipline, the Kabbalistic Kabbalat Shabbat service remained part of liberal liturgy, as did the Yedid Nefesh prayer. Nevertheless, in the 1960s, Saul Lieberman of the Jewish Theological Seminary of America is reputed to have introduced a lecture by Scholem on Kabbalah with a statement that Kabbalah itself was "nonsense", but the academic study of Kabbalah was "scholarship". This view became popular among many Jews, who viewed the subject as worthy of study but who did not accept Kabbalah as teaching literal truths.[citation needed]

According to Bradley Shavit Artson (Dean of the Conservative Ziegler School of Rabbinic Studies):

Many western Jews insisted that their future and their freedom required shedding what they perceived as parochial orientalism. They fashioned a Judaism that was decorous and strictly rational (according to 19th-century European standards), denigrating Kabbalah as backward, superstitious, and marginal.[104]

However, in the late 20th century and early 21st century there has been a revival of interest in Kabbalah in all branches of liberal Judaism. The Kabbalistic 12th-century prayer Anim Zemirot was restored to the new Conservative Sim Shalom siddur, as were the B'rikh Shmeh passage from the Zohar and the mystical Ushpizin service welcoming to the Sukkah the spirits of Jewish forebears. Anim Zemirot and the 16th-century mystical poem Lekhah Dodi reappeared in the Reform siddur Gates of Prayer in 1975. All rabbinical seminaries now teach several courses in Kabbalah. In Conservative Judaism, both the Jewish Theological Seminary of America and the Ziegler School of Rabbinic Studies of the American Jewish University in Los Angeles have full-time instructors in Kabbalah and Hasidut, Eitan Fishbane and Pinchas Giller, respectively. In Reform Judaism, Sharon Koren teaches at the Hebrew Union College-Jewish Institute of Religion. Reform rabbis like Herbert Weiner and Lawrence Kushner have renewed interest in Kabbalah among Reform Jews.
At the Reconstructionist Rabbinical College, Joel Hecker is the full-time instructor teaching courses in Kabbalah and Hasidut.[citation needed]

According to Artson:

Ours is an age hungry for meaning, for a sense of belonging, for holiness. In that search, we have returned to the very Kabbalah our predecessors scorned. The stone that the builders rejected has become the head cornerstone (Psalm 118:22)... Kabbalah was the last universal theology adopted by the entire Jewish people, hence faithfulness to our commitment to positive-historical Judaism mandates a reverent receptivity to Kabbalah.[105]

The Reconstructionist movement, under the leadership of Arthur Green in the 1980s and 1990s, and with the influence of Zalman Schachter-Shalomi, brought a strong openness to Kabbalah and hasidic elements that then came to play prominent roles in the Kol ha-Neshamah siddur series.[citation needed]

Antinomian strands of Kabbalah reject or invert normal religious principles as a way of attempting purification. In these frameworks, transgression or sin itself is viewed as a spiritual necessity, capable of unleashing hidden divine sparks trapped in impure realms. The most prominent antinomian movements within Judaism were the Sabbateans and the Frankists.[106][107] Followers of Sabbatai Zevi believed that the coming of the messiah rendered Jewish commandments obsolete, with some sects engaging in ritualistic violations of the Law.[108][109] Many of his adherents continued to view his actions as part of a hidden divine plan.
In the 18th century, Jacob Frank pushed this theology further, advocating explicitly for "redemption through sin", such as ritualized orgies and incest.[110][111][112] Eventually, the Frankists were encouraged to convert en masse to Catholicism.[113][114] These movements were widely condemned as heretical, but they demonstrate the extent to which mystical ideas could support radical or subversive reinterpretations of Jewish life.[109]

The teaching of classic esoteric kabbalah texts and practice remained traditional until recent times, passed on in Judaism from master to disciple or studied by leading rabbinic scholars. This changed in the 20th century, through conscious reform and the secular openness of knowledge. In contemporary times kabbalah is studied in four very different, though sometimes overlapping, ways.

The traditional method, employed among Jews since the 16th century, continues in learned study circles. Its prerequisite is either to be born Jewish or to be a convert, and to join a group of kabbalists under the tutelage of a rabbi, since the 18th century more likely a Hasidic one, though others exist among Sephardi-Mizrachi and Lithuanian rabbinic scholars. Beyond elite, historical esoteric kabbalah, the publicly and communally studied texts of Hasidic thought explain kabbalistic concepts for wide spiritual application, through their own concern with the popular psychological perception of Divine Panentheism.[50]

A second, new universalist form is the method of modern-style Jewish organisations and writers who seek to disseminate kabbalah to every man, woman and child regardless of race or class, especially since the Western interest in mysticism from the 1960s. These derive from various cross-denominational Jewish interests in kabbalah, and range from considered theology to popularised forms that often adopt New Age terminology and beliefs for wider communication.
These groups highlight or interpret kabbalah through its non-particularist, universalist aspects.[115]

A third way is that of non-Jewish organisations, mystery schools, initiation bodies, fraternities and secret societies, the most popular of which are Freemasonry, Rosicrucianism and the Golden Dawn, although hundreds of similar societies claim a kabbalistic lineage. These derive from syncretic combinations of Jewish kabbalah with Christian, occultist or contemporary New Age spirituality. As a separate spiritual tradition in Western esotericism since the Renaissance, with different aims from its Jewish origin, the non-Jewish traditions differ significantly and do not give an accurate representation of the Jewish spiritual understanding (or vice versa).[116]

Fourthly, since the mid-20th century, historical-critical scholarly investigation of all eras of Jewish mysticism has flourished into an established department of university Jewish studies. Where the first academic historians of Judaism in the 19th century opposed and marginalised kabbalah, Gershom Scholem and his successors repositioned the historiography of Jewish mysticism as a central, vital component of Judaic renewal through history. Cross-disciplinary academic revisions of Scholem's and others' theories are regularly published for a wide readership.[117]

In recent decades, Kabbalah has seen a resurgence of interest, with several modern groups and individuals exploring its teachings. These contemporary interpretations of Kabbalah offer a fresh perspective on this ancient mystical tradition, often bridging the gap between traditional wisdom and modern thought. Some of these interpretations emphasize universalist and philosophical approaches, seeking to enrich secular disciplines through the lens of Kabbalistic insights. Others have gained attention for their blends of spirituality and popular culture, attracting followers from diverse backgrounds.
These modern expressions of Kabbalah showcase its enduring appeal and relevance in today's world.[citation needed]

Bnei Baruch is a group of Kabbalah students based in Israel. Study materials are available in over 25 languages, free online or at printing cost. Michael Laitman established Bnei Baruch in 1991, following the passing of his teacher, Ashlag's son Rav Baruch Ashlag. Laitman named his group Bnei Baruch (sons of Baruch) to commemorate the memory of his mentor. The teaching strongly recommends restricting one's studies to "authentic sources": kabbalists of the direct lineage of master to disciple.[118][119]

The Kabbalah Centre was founded in the United States in 1965 as The National Research Institute of Kabbalah by Philip Berg and Rav Yehuda Tzvi Brandwein, a disciple of Yehuda Ashlag. Later, Philip Berg and his wife re-established the organisation as the worldwide Kabbalah Centre.[120][failed verification] The organization's leaders "vehemently reject" Orthodox Jewish identity.[121]

The Kabbalah Society, run by Warren Kenton, is an organisation based instead on pre-Lurianic medieval Kabbalah presented in a universalist style. In contrast, traditional kabbalists read earlier kabbalah through later Lurianism and the systemisations of 16th-century Safed.[citation needed]

The New Kabbalah, a website and series of books by Sanford L. Drob, is a scholarly intellectual investigation of Lurianic symbolism in the perspective of modern and postmodern intellectual thought. It seeks a "new kabbalah" rooted in the historical tradition through its academic study, but universalised through dialogue with modern philosophy and psychology. This approach seeks to enrich the secular disciplines, while uncovering intellectual insights formerly implicit in kabbalah's essential myth:[122]

By being equipped with the nonlinear concepts of dialectical, psychoanalytic, and deconstructive thought we can begin to make sense of the kabbalistic symbols in our own time.
So equipped, we are today probably in a better position to understand the philosophical aspects of the kabbalah than were the kabbalists themselves.[123]

The Kabbalah of Information is described in the 2018 book From Infinity to Man: The Fundamental Ideas of Kabbalah Within the Framework of Information Theory and Quantum Physics, written by Ukrainian-born professor and businessman Eduard Shyfrin. The main tenet of the teaching is "In the beginning He created information", rephrasing the famous saying of Nahmanides, "In the beginning He created primordial matter and He didn't create anything else, just shaped it and formed it."[124]

Since the 18th century, Jewish mystical development has continued in Hasidic Judaism, turning kabbalah into a social revival with texts that internalise mystical thought. Among its different schools, Chabad-Lubavitch and Breslav, with related organisations, give outward-looking spiritual resources and textual learning for secular Jews. The Intellectual Hasidism of Chabad most emphasises the spread and understanding of kabbalah through its explanation in Hasidic thought, articulating the Divine meaning within kabbalah through human rational analogies, uniting the spiritual and material, esoteric and exoteric, in their Divine source:

Hasidic thought instructs in the predominance of spiritual form over physical matter, the advantage of matter when it is purified, and the advantage of form when integrated with matter. The two are to be unified so one cannot detect where either begins or ends, for "the Divine beginning is implanted in the end and the end in the beginning" (Sefer Yetzira 1:7). The One God created both for one purpose – to reveal the holy light of His hidden power.
Only both united complete the perfection desired by the Creator.[125]

From the early 20th century, Neo-Hasidism expressed a modernist or non-Orthodox Jewish interest in Jewish mysticism, becoming influential among Modern Orthodox, Conservative, Reform and Reconstructionist Jewish denominations from the 1960s, and organised through the Jewish Renewal and Chavurah movements. The writings and teachings of Zalman Schachter-Shalomi, Arthur Green, Lawrence Kushner, Herbert Weiner and others have sought a critically selective, non-fundamentalist neo-Kabbalistic and Hasidic study and mystical spirituality among modernist Jews. The contemporary proliferation of scholarship by Jewish mysticism academia has contributed to critical adaptations of Jewish mysticism. Arthur Green's translations from the religious writings of Hillel Zeitlin conceive the latter to be a precursor of contemporary Neo-Hasidism. Reform rabbi Herbert Weiner's Nine and a Half Mystics: The Kabbala Today (1969), a travelogue among Kabbalists and Hasidim, brought perceptive insights into Jewish mysticism to many Reform Jews. Leading Reform philosopher Eugene Borowitz described the Orthodox Hasidic Adin Steinsaltz (The Thirteen Petalled Rose) and Aryeh Kaplan as major presenters of Kabbalistic spirituality for modernists today.[126]

The writings of Abraham Isaac Kook (1864–1935), first chief rabbi of Mandate Palestine and a visionary, incorporate kabbalistic themes through his own poetic language and concern with human and divine unity. His influence is in the Religious Zionist community, who follow his aim that the legal and imaginative aspects of Judaism should interfuse:

Due to the alienation from the "secret of God" [i.e. Kabbalah], the higher qualities of the depths of Godly life are reduced to trivia that do not penetrate the depth of the soul. When this happens, the most mighty force is missing from the soul of nation and individual, and Exile finds favor essentially...
We should not negate any conception based on rectitude and awe of Heaven of any form—only the aspect of such an approach that desires to negate the mysteries and their great influence on the spirit of the nation. This is a tragedy that we must combat with counsel and understanding, with holiness and courage.[127]

In several important areas of his history of the Kabbalah, Gershom Scholem investigates and considers the evidence of an interactivity of influence between the medieval Kabbalists of Provence and the Cathar heresy, which was also prevalent in the region at the same time that the earliest works of medieval Kabbalah were written.[128] In Jewish Influence on Christian Reform Movements, Louis I. Newman concluded, "Point by point, parallels can be found between Catharist views and the Kabbalah, and it may well be that at times there was an exchange of opinions between Jewish and Gentile mystics."[129] Earlier in the same book, Newman observed:

…that the powerful Jewish culture in Languedoc, which had acquired sufficient strength to assume an aggressive, propagandist policy, created a milieu wherefrom movements of religious independence arose readily and spontaneously. Contact and association between Christian princes and their Jewish officials and friends stimulated the state of mind which facilitated the banishment of orthodoxy, the clearing away of the debris of Catholic theology. Unwilling to receive Jewish thought, the princes and laity turned towards Catharism, then being preached in their domains.[129]

Nathaniel Deutsch writes:

Initially, these interactions [between Mandaeans and Jewish mystics in Babylonia from Late Antiquity to the medieval period] resulted in shared magical and angelological traditions. During this phase the parallels which exist between Mandaeism and Hekhalot mysticism would have developed.
At some point, both Mandaeans and Jews living in Babylonia began to develop similar cosmogonic and theosophic traditions involving an analogous set of terms, concepts, and images. At present it is impossible to say whether these parallels resulted primarily from Jewish influence on Mandaeans, Mandaean influence on Jews, or from cross-fertilization. Whatever their original source, these traditions eventually made their way into the priestly – that is, esoteric – Mandaean texts ... and into the Kabbalah.[130]: 222

R. J. Zwi Werblowsky suggests that Mandaeism has more in common with Kabbalah than with Merkabah mysticism, such as its cosmogony and sexual imagery. The Thousand and Twelve Questions, Scroll of Exalted Kingship, and Alma Rišaia Rba link the alphabet with the creation of the world, a concept found in Sefer Yetzirah and the Bahir.[130]: 217 Mandaean names for uthras (angels or guardians) have been found in Jewish magical texts. Abatur appears to be inscribed inside a Jewish magic bowl in a corrupted form as "Abiṭur". Ptahil is found in Sefer HaRazim listed among other angels who stand on the ninth step of the second firmament.[131]: 210–211

This article incorporates text from a publication now in the public domain: Singer, Isidore; et al., eds. (1901–1906). "Kabbalah". The Jewish Encyclopedia. New York: Funk & Wagnalls.
https://en.wikipedia.org/wiki/Kabbalah
In religion, mythology, and fiction, a prophecy is a message that has been communicated to a person (typically called a prophet) by a supernatural entity. Prophecies are a feature of many cultures and belief systems and usually contain divine will or law, or preternatural knowledge, for example of future events. They can be revealed to the prophet in various ways depending on the religion and the story, such as visions or direct interaction with divine beings in physical form. Stories of prophetic deeds sometimes receive considerable attention, and some have been known to survive for centuries through oral tradition or as religious texts.

The English noun "prophecy", in the sense of "function of a prophet", appeared from about 1225, from Old French profecie (12th century), and from prophetia, Greek propheteia, "gift of interpreting the will of God", from Greek prophetes (see prophet). The related meaning, "thing spoken or written by a prophet", dates from c. 1300, while the verb "to prophesy" is recorded by 1377.[1]

In 1863, Bahá'u'lláh, the founder of the Baháʼí Faith, claimed to be the promised messianic figure of all previous religions, and a Manifestation of God,[10] a type of prophet in the Baháʼí writings that serves as an intermediary between the divine and humanity and who speaks with the voice of God.[11] Bahá'u'lláh claimed that, while imprisoned in the Siyah-Chal in Iran, he underwent a series of mystical experiences, including a vision of the Maid of Heaven, who told him of his divine mission and the promise of divine assistance.[12] In Baháʼí belief, the Maid of Heaven is a representation of the divine.[13]

The Haedong Kosung-jon (Biographies of High Monks) records that King Beopheung of Silla desired to promulgate Buddhism as the state religion. However, officials in his court opposed him. In the fourteenth year of his reign, Beopheung's "Grand Secretary", Ichadon, devised a strategy to overcome court opposition.
Ichadon schemed with the king, convincing him to make a proclamation granting Buddhism official state sanction using the royal seal. Ichadon told the king to deny having made such a proclamation when the opposing officials received it and demanded an explanation. Instead, Ichadon would confess and accept the punishment of execution for what would quickly be seen as a forgery. Ichadon prophesied to the king that at his execution a wonderful miracle would convince the opposing court faction of Buddhism's power. Ichadon's scheme went as planned, and the opposing officials took the bait. When Ichadon was executed on the 15th day of the 9th month in 527, his prophecy was fulfilled: the earth shook, the sun was darkened, beautiful flowers rained from the sky, his severed head flew to the sacred Geumgang Mountains, and milk instead of blood sprayed 100 feet in the air from his beheaded corpse. The omen was accepted by the opposing court officials as a manifestation of heaven's approval, and Buddhism was made the state religion in 527.[14]

According to Walter Brueggemann, the task of prophetic (Christian) ministry is to nurture, nourish and evoke a consciousness and perception alternative to the consciousness and perception of the dominant culture.[15] A recognized form of Christian prophecy is the "prophetic drama", which Frederick Dillistone describes as a "metaphorical conjunction between present situations and future events".[16]

In his Dialogue with Trypho, Justin Martyr argued that prophets were no longer among Israel but were in the Church.[17] The Shepherd of Hermas, written around the mid-2nd century, describes the way prophecy was being used within the church of that time. Irenaeus confirms the existence of such spiritual gifts in his Against Heresies.
Although some modern commentators claim that Montanus was rejected because he claimed to be a prophet, a careful examination of history shows that the gift of prophecy was still acknowledged during the time of Montanus, and that he was controversial because of the manner in which he prophesied and the doctrines he propagated.[18]

Prophecy and other spiritual gifts were somewhat rarely acknowledged throughout church history, and there are few examples of the prophetic and certain other gifts until the Scottish Covenanters like Prophet Peden and George Wishart.[citation needed] From 1904 to 1906, the Azusa Street Revival occurred in Los Angeles, California, and is sometimes considered the birthplace of Pentecostalism. This revival is well known for the "speaking in tongues" that occurred there. Some participants of the Azusa Street Revival are claimed to have prophesied. Pentecostals believe prophecy and certain other gifts are once again being given to Christians. The Charismatic Movement also accepts spiritual gifts like speaking in tongues and prophecy.

The Seventh-day Adventist Church is a denomination that traces its history to the Millerite Movement and the Great Disappointment. Seventh-day Adventists "accept the biblical teaching of spiritual gifts and believe that the gift of prophecy is one of the identifying marks of the remnant church." The church also believes Ellen G. White to be a prophet and her writings to be divinely inspired.

Since 1972, the neo-Pentecostal Church of God Ministry of Jesus Christ International has expressed a belief in prophecy. The church claims this gift is manifested by one person (the prophesier) laying their hands on another person, who receives an individual message spoken by the prophesier. Prophesiers are believed to be used by the Holy Ghost as instruments through whom their God expresses his promises, advice and commandments.
The church claims people receive messages about their future, in the form of promises given by their God and expected to be fulfilled by divine action.[19]

In the Apostolic-Prophetic Movement, a prophecy is simply a word delivered under the inspiration of the Holy Spirit that accurately communicates God's "thoughts and intention".[20]

The Apostolic Council of Prophetic Elders was a council of prophetic elders co-convened by C. Peter Wagner and Cindy Jacobs that included: Beth Alves, Jim Gool, Chuck Pierce, Mike and Cindy Jacobs, Bart Pierces, John and Paula Sanford, Dutch Sheets, Tommy Tenny, Heckor Torres, Barbara Wentroble, Mike Bickle, Paul Cain, Emanuele Cannistraci, Bill Hamon, Kingsley Fletcher, Ernest Gentile, Jim Laffoon, James Ryle, and Gwen Shaw.[21]

The Latter Day Saint movement maintains that its first prophet, Joseph Smith, was visited by God and Jesus Christ in 1820. Latter Day Saints further claim that God communicated directly with Joseph Smith on many subsequent occasions, and that following the death of Joseph Smith, God has continued to speak through subsequent prophets. Joseph Smith claimed to have been led by an angel to a large hill in upstate New York, where he was shown an ancient manuscript engraved on plates of gold metal. Joseph Smith claimed to have translated this manuscript into modern English under divine inspiration by the gift and power of God, and the publication of this translation is known as the Book of Mormon. Following Smith's murder, there was a succession crisis that resulted in a great schism: the majority of Latter-day Saints believed Brigham Young to be the next prophet and followed him out to Utah, while a minority returned to Missouri with Emma Smith, believing Joseph Smith's son, Joseph Smith III, to be the next legitimate prophet (forming the Reorganized Church of Jesus Christ of Latter Day Saints, now the Community of Christ).
Since even before the death of Joseph Smith in 1844, there have been numerous separatist Latter Day Saint sects that have splintered from the Church of Jesus Christ of Latter Day Saints. To this day, there are an unknown number of organizations within the Latter Day Saint movement, each with its own proposed prophet.

The Church of Jesus Christ of Latter-day Saints (LDS Church) is the largest Latter Day Saint body. The current Prophet/President of the LDS Church is Russell M. Nelson. The church has, since Joseph Smith's death on June 27, 1844, held the belief that the president of the church is also a literal prophet of God. The church also maintains that further revelations claimed to have been given through Joseph Smith are published in the Doctrine and Covenants, one of the Standard Works. Additional revelations and prophecies outside the Standard Works, such as Joseph Smith's "White Horse Prophecy" concerning a great and final war in the United States before the Second Coming of Jesus Christ, can be found in other church-published works.

The Arabic term for prophecy, nubū'ah (Arabic: نُبُوْءَة), stems from the term for prophets, nabī (Arabic: نَبِي; pl. anbiyāʼ, from nabā, "tidings, announcement"), who are lawbringers that Muslims believe were sent by God to every person, bringing God's message in a language they can understand.[22][23] But there is also the term rasūl (Arabic: رسول, "messenger, apostle") to classify those who bring a divine revelation (Arabic: رسالة, risālah, "message") via an angel.[22][24] Knowledge of the Islamic prophets is one of the six articles of the Islamic faith,[25] and is specifically mentioned in the Quran.[26] Along with Muhammad, many of the prophets in Judaism (such as Noah, Abraham, Moses, Aaron, Elijah, etc.)
and prophets of Christianity (Adam, Zechariah the priest, John the Baptist, Jesus Christ) are mentioned by name in the Quran.[22]

In the sense of predicting events, the Quran contains verses believed to have predicted many events years before they happened, and such prophecies are held to be proof of the divine origin of the Qur'an. The Qur'an itself states, "Every ˹destined˺ matter has a ˹set˺ time to transpire. And you will soon come to know."[Quran 6:67] Muslims also recognize the validity of some prophecies in other sacred texts, like the Bible; however, they believe that, unlike the Qur'an, some parts of the Bible have been corrupted over the years, and as a result not all of the prophecies and verses in the Bible are accurate.[27]

The Hebrew term for prophet, Navi (נביא), literally means "spokesperson"; a prophet speaks to the people as a mouthpiece of their God, and to their god on behalf of the people. "The name prophet, from the Greek meaning "forespeaker" (πρὸ being used in the original local sense), is an equivalent of the Hebrew Navi, which signifies properly a delegate or mouthpiece of another."[28]

Sigmund Mowinckel's account of prophecy in ancient Israel distinguishes seers and prophets, both in their origins and in their functions:

According to Mowinckel, the early seer and the ecstatic prophet derived from two distinctly different social and institutional backgrounds. The seer belonged to the earliest stratum of Israelite society and was related to the priest who 'was not originally in the first instance a sacrificer, but as with the old Arabs, custodian of the sanctuary, oracle priest, "seer" and holder of the effective future-creating and future-interpreting word of power, the blessing and the curse.' [...] Ecstatic prophecy – nebiism – and temple priests were indigenous to Canaanite culture and represented elements adopted by the Israelites.
With the fusion of the functions of the seer-priest with the functions of the temple-sacrificial priests and ecstatic prophets, two main groups developed: the priests occupied with cult and sacrifice [...] and the 'prophets' who 'continued the more "pneumatic" aspect of the character and work of the old "seers"' and 'were mediums of the divinely inspired "word" which was "whispered to" them, or "came to them"' [...] The prophets retained, in guild fashion, the old seer relationship to the cult [...].[29]

According to Judaism, authentic Nevuah (נבואה, "prophecy") was withdrawn from the world after the destruction of the First Temple in Jerusalem.[30] Malachi is acknowledged to have been the last authentic prophet, if one accepts the opinion that Nechemyah died in Babylon before 9 Tevet 3448 (313 BCE).[31]

The Torah contains laws concerning the false prophet (Deuteronomy 13:2–6, 18:20–22). Prophets in Islam, like Lot, for example, are false prophets according to Jewish standards. In the Torah, prophecy often consisted of a conditioned warning by their God of the consequences should the society, specific communities, or their leaders not adhere to the Torah's instructions in the time contemporary with the prophet's life. Prophecies sometimes included conditioned promises of blessing for obeying their god and returning to behaviors and laws as written in the Torah. Conditioned-warning prophecies feature in all Jewish works of the Tanakh.
Notably, Maimonides (1138–1204) suggested that there were once many levels of prophecy, from the highest (such as those experienced by Moses) to the lowest (where the individuals were able to apprehend the Divine Will but not respond to, or even describe, this experience to others), citing for example Shem, Eber and, most notably, Noah, who in the biblical narrative does not issue prophetic declarations.[32] Maimonides, in his philosophical work The Guide for the Perplexed, outlines twelve modes of prophecy,[33] from lesser to greater degree of clarity.

The Tanakh contains prophecies from various Hebrew prophets (55 in total) who communicated messages from God to the nation of Israel, and later to the population of Judea and elsewhere. Experience of prophecy in the Torah and the rest of the Tanakh was not restricted to Jews, nor was the prophetic experience restricted to the Hebrew language.

There is a problem in verifying most Native American prophecy: it remains primarily an oral tradition, so there is no way to cite references to writings committed to paper. In that system, the best reference is an Elder, who acts as a repository of the accumulated wisdom of the tradition. In another type of example, it is recorded that three Dogrib prophets claimed to have been divinely inspired to bring the message of Christianity's God to their people.[34] This prophecy among the Dogrib involves elements such as dances and trance-like states.[35]

In ancient China, prophetic texts were known as Chen (谶). The most famous Chinese prophecy is the Tui bei tu (推背圖).

Esoteric prophecy has been claimed for, but not by, Michel de Nostredame (1503–1566), popularly referred to as Nostradamus, who claimed to be a converted Christian. It is known that he suffered several tragedies in his life and was persecuted to some degree for his cryptic esoteric writings about the future, reportedly derived through the use of a crystal ball.
Nostradamus was a French apothecary and reputed seer who published collections of foreknowledge of future events. He is best known for his book Les Propheties ("The Prophecies"), the first edition of which appeared in 1555. Since Les Propheties was published, Nostradamus has attracted an esoteric following that, along with the popular press, credits him with foreseeing world events. His cryptic esoteric foreseeings have in some cases been assimilated to the results of applying the alleged Bible code, as well as to other purported pseudo-prophetic works. Most reliable academic sources maintain that the associations made between world events and Nostradamus's quatrains are largely the result of misinterpretations or mistranslations (sometimes deliberate) or else are so tenuous as to render them useless as evidence of any genuine predictive power. Moreover, none of the sources listed offers any evidence that anyone has ever interpreted any of Nostradamus's pseudo-prophetic works specifically enough to allow a clear identification of any event in advance.[36]

According to skeptics, many apparently fulfilled prophecies can be explained as coincidences, possibly aided by the prophecy's own vagueness, and others may have been invented after the fact to match the circumstances of a past event (an act termed "postdiction").[37][38][39]

Bill Whitcomb in The Magician's Companion observes:

One point to remember is that the probability of an event changes as soon as a prophecy (or divination) exists. ... The accuracy or outcome of any prophecy is altered by the desires and attachments of the seer and those who hear the prophecy.[40]

Many prophets make a large number of prophecies, which makes the chance of at least one prophecy being correct much higher by sheer weight of numbers.[41]

The phenomenon of prophecy is not well understood in the psychology research literature.
Psychiatrist and neurologist Arthur Deikman describes the phenomenon as an "intuitive knowing, a type of perception that bypasses the usual sensory channels and rational intellect."[42] "(P)rophecy can be likened to a bridge between the individual 'mystical self' and the communal 'mystical body'," writes religious sociologist Margaret Poloma.[43] Prophecy seems to involve "the free association that occurred through the workings of the right brain."[44]

Psychologist Julian Jaynes proposed that this is a temporary accessing of the bicameral mind; that is, a temporary separating of functions, such that the authoritarian part of the mind seems to literally be speaking to the person as if a separate (and external) voice. Jaynes posits that the gods heard as voices in the head were and are organizations of the central nervous system. God speaking through man, according to Jaynes, is a more recent vestige of God speaking to man, the product of a more integrated higher self. When the bicameral mind speaks, there is no introspection. In earlier times, posits Jaynes, there was additionally a visual component, now lost.[45]

Child development and consciousness author Joseph Chilton Pearce remarked that revelation typically appears in symbolic form and "in a single flash of insight."[46] He used the metaphor of lightning striking and suggested that the revelation is "a result of a buildup of resonant potential."[47] Pearce compared it to the earth asking a question and the sky answering it.
Focus, he said, feeds into "a unified field of like resonance (and becomes) capable of attracting and receiving the field's answer when it does form."[48]

Some cite aspects of cognitive psychology, such as pattern forming and attention, in the formation of prophecy in modern-day society, as well as the declining influence of religion in daily life.[49]

For the ancient Greeks, prediction, prophecy, and poetry were often intertwined.[50] Prophecies were given in verse, and a Latin word for poet, vates, also means prophet.[50] Both poets and oracles claimed to be inspired by forces outside themselves. In ancient China, divination is regarded as the oldest form of occult inquiry and was often expressed in verse.[51] In contemporary Western cultures, theological revelation and poetry are typically seen as distinct, and often even as opposed to each other; yet the two are still often understood together as symbiotic in their origins, aims, and purposes.[52]

Middle English poems of a political nature are linked with Latin and vernacular prophecies. Prophecies in this sense are predictions concerning kingdoms or peoples, and these predictions are often eschatological or apocalyptic.[53] The prophetic tradition in English derives from Geoffrey of Monmouth's History of the Kings of Britain (1136), otherwise called the "Prophecies of Merlin"; this work is the prelude to numerous books devoted to King Arthur. In 18th-century England, prophecy as poetry was revived by William Blake,[54] who wrote America: A Prophecy (1793) and Europe: A Prophecy (1794).[53]

Contemporary American poetry is also rich in lyrics about prophecy, including poems entitled "Prophecy" by Dana Gioia[55] and Eileen Myles. In 1962, Robert Frost published "The Prophets Really Prophesy as Mystics the Commentators Merely by Statistics".[56] Other modern poets who write on prophets or prophecy include Carl Dennis, Richard Wilbur,[57] and Derek Walcott.[58]
https://en.wikipedia.org/wiki/Prophecy
The Vaticinia Michaelis Nostradami de Futuri Christi Vicarii ad Cesarem Filium D. I. A. Interprete (The Prophecies of Michel Nostradamus on the Future Vicars of Christ to Cesar His Son, As Expounded by Lord Abbot Joachim), or Vaticinia Nostradami (The Prophecies of Nostradamus) for short, is a collection of eighty watercolor images compiled as an illustrated codex.[1] A version of the well-known Vaticinia de Summis Pontificibus of the 13th–14th century,[2] it was discovered in 1994 by the Italian journalists Enza Massa and Roberto Pinotti in the Biblioteca Nazionale Centrale di Roma (Central National Library) in Rome, Italy.[3] The document can be found in the library under the title Fondo Vittorio Emanuele 307.

A postscript by Carthusian librarians states that the book had been presented by one Brother Beroaldus to Cardinal Maffeo Barberini, who would later become Pope Urban VIII (1623–1644). A further covering note suggests that the images were by the French seer Nostradamus (1503–1566) and had been sent to Rome by his son César de Nostredame as a gift. There is, however, absolutely no contemporary evidence that Nostradamus himself was either a painter or the author of the work, whose contents in fact date from several centuries before his time, nor, indeed, that he had ever heard of it, given that it did not finally appear in print until after his death.[2] The postscript is in fact dated '1629', and the covering note (not in Nostradamus's hand) from which the Nostradamian title derives cannot, on the basis of its contents, date from earlier than 1689, though an internal note does refer to a source dated 1343.[4] Nevertheless, the highly speculative Italian writer Ottavio Cesare Ramotti,[5] together with the History Channel's The Lost Book of Nostradamus (October 2007), has still made much of the book's supposedly 'Nostradamian' origin.
There is a letter by César de Nostredame (Michel's first son), written to the French scientist Fabri de Peiresc, in which mention is made of several miniatures painted by César, and of a booklet that was destined as a gift to King Louis XIII in 1629,[6] but there is no evidence whatsoever of any connection between these and the Vaticinia.[2]

The images contain symbolic objects, letters, animals, crossings of banners, bugles, crosses, candles, three writing styles, etc., some of which seem to some to form figures similar to Roman numerals, or veiled references to personal names. As suggested by the various added inscriptions, they are supposed to have been inspired by the celebrated papal prophecies of Abbot Joachim of Fiore, a 12th-century Cistercian monk from Calabria.[7]

The origin of the work is clearly the fourteenth-century Vaticinia de Summis Pontificibus, in which most of the images (including Image 23 opposite) are to be found. By way of example, its Image 12 corresponds to the latter's Image 9, Image 18 to 15, Image 23 to 20, Image 24 to 21 and Image 29 to 26 (note, too, the similarity of sequence).[2] A work similar to this is Marston MS 225, which can be found in the manuscript and rare-book library of Yale University in New Haven, Connecticut, United States.[8] This manuscript comes from the German areas of Bavaria and Bohemia, probably from within the courts of Emperor Frederick III and Maximilian I.
https://en.wikipedia.org/wiki/Vaticinia_Nostradami
The Vigenère cipher (French pronunciation: [viʒnɛːʁ]) is a method of encrypting alphabetic text where each letter of the plaintext is encoded with a different Caesar cipher, whose increment is determined by the corresponding letter of another text, the key. For example, if the plaintext is attacking tonight and the key is oculorhinolaryngology, then the first letter of the plaintext is shifted by the first letter of the key, the second letter by the second, and so on. Traditionally, spaces and punctuation are removed prior to encryption[1] and reintroduced afterwards. If the recipient of the message knows the key, they can recover the plaintext by reversing this process. The Vigenère cipher is therefore a special case of a polyalphabetic substitution.[2][3]

First described by Giovan Battista Bellaso in 1553, the cipher is easy to understand and implement, but it resisted all attempts to break it until 1863, three centuries later. This earned it the description le chiffrage indéchiffrable (French for 'the indecipherable cipher'). Many people have tried to implement encryption schemes that are essentially Vigenère ciphers.[4] In 1863, Friedrich Kasiski was the first to publish a general method of deciphering Vigenère ciphers. In the 19th century, the scheme was misattributed to Blaise de Vigenère (1523–1596) and so acquired its present name.[5]

The very first well-documented description of a polyalphabetic cipher was by Leon Battista Alberti around 1467; it used a metal cipher disk to switch between cipher alphabets. Alberti's system only switched alphabets after several words, and switches were indicated by writing the letter of the corresponding alphabet in the ciphertext.
Later, Johannes Trithemius, in his work Polygraphia (which was completed in manuscript form in 1508 but first published in 1518),[6] invented the tabula recta, a critical component of the Vigenère cipher.[7] The Trithemius cipher, however, provided a progressive, rather rigid and predictable system for switching between cipher alphabets.[note 1]

In 1586 Blaise de Vigenère published a type of polyalphabetic cipher called an autokey cipher (because its key is based on the original plaintext) before the court of Henry III of France.[8] The cipher now known as the Vigenère cipher, however, is based on the one originally described by Giovan Battista Bellaso in his 1553 book La cifra del Sig. Giovan Battista Bellaso.[9] He built upon the tabula recta of Trithemius but added a repeating "countersign" (a key) to switch cipher alphabets every letter. Whereas Alberti and Trithemius used a fixed pattern of substitutions, Bellaso's scheme meant the pattern of substitutions could be easily changed, simply by selecting a new key. Keys were typically single words or short phrases, known to both parties in advance, or transmitted "out of band" along with the message. Bellaso's method thus required strong security for only the key. As it is relatively easy to secure a short key phrase, such as by a previous private conversation, Bellaso's system was considerably more secure.[citation needed]

Note, however, that as opposed to the modern Vigenère cipher, Bellaso's cipher didn't have 26 different "shifts" (different Caesar ciphers) for every letter, instead having 13 shifts for pairs of letters. In the 19th century, the invention of this cipher, essentially designed by Bellaso, was misattributed to Vigenère.
David Kahn, in his book The Codebreakers, lamented this misattribution, saying that history had "ignored this important contribution and instead named a regressive and elementary cipher for him [Vigenère] though he had nothing to do with it".[10]

The Vigenère cipher gained a reputation for being exceptionally strong. Noted author and mathematician Charles Lutwidge Dodgson (Lewis Carroll) called the Vigenère cipher unbreakable in his 1868 piece "The Alphabet Cipher" in a children's magazine. In 1917, Scientific American described the Vigenère cipher as "impossible of translation".[11][12] That reputation was not deserved. Charles Babbage is known to have broken a variant of the cipher as early as 1854 but did not publish his work.[13] Kasiski entirely broke the cipher and published the technique in the 19th century, but even in the 16th century, some skilled cryptanalysts could occasionally break the cipher.[10]

The Vigenère cipher is simple enough to be a field cipher if it is used in conjunction with cipher disks.[14] The Confederate States of America, for example, used a brass cipher disk to implement the Vigenère cipher during the American Civil War. The Confederacy's messages were far from secret, and the Union regularly cracked them. Throughout the war, the Confederate leadership primarily relied upon three key phrases: "Manchester Bluff", "Complete Victory" and, as the war came to a close, "Come Retribution".[15]

A Vigenère cipher with a completely random (and non-reusable) key which is as long as the message becomes a one-time pad, a theoretically unbreakable cipher.[16] Gilbert Vernam tried to repair the broken cipher (creating the Vernam–Vigenère cipher in 1918), but the technology he used was so cumbersome as to be impracticable.[17]

In a Caesar cipher, each letter of the alphabet is shifted along some number of places. For example, in a Caesar cipher of shift 3, a would become D, b would become E, y would become B, and so on.
The Vigenère cipher has several Caesar ciphers in sequence with different shift values. To encrypt, a table of alphabets can be used, termed a tabula recta, Vigenère square or Vigenère table. It has the alphabet written out 26 times in different rows, each alphabet shifted cyclically to the left compared to the previous alphabet, corresponding to the 26 possible Caesar ciphers. At different points in the encryption process, the cipher uses a different alphabet from one of the rows. The alphabet used at each point depends on a repeating keyword.[citation needed]

For example, suppose that the plaintext to be encrypted is attackatdawn. The person sending the message chooses a keyword and repeats it until it matches the length of the plaintext; for example, the keyword "LEMON" becomes LEMONLEMONLE.

Each row starts with a key letter. The rest of the row holds the letters A to Z (in shifted order). Although there are 26 key rows shown, a code will use only as many keys (different alphabets) as there are unique letters in the key string, here just 5 keys: {L, E, M, O, N}. For successive letters of the message, successive letters of the key string are taken, and each message letter is enciphered by using its corresponding key row. When a new character of the message is selected, the next letter of the key is chosen, and the row corresponding to that key letter is traversed to find the column heading that matches the message character. The letter at the intersection of [key-row, msg-col] is the enciphered letter.

For example, the first letter of the plaintext, a, is paired with L, the first letter of the key. Therefore, row L and column A of the Vigenère square are used, namely L. Similarly, for the second letter of the plaintext, the second letter of the key is used. The letter at row E and column T is X.
The rest of the plaintext is enciphered in a similar fashion: attackatdawn becomes LXFOPVEFRNHR.

Decryption is performed by going to the row in the table corresponding to the key, finding the position of the ciphertext letter in that row and then using the column's label as the plaintext. For example, in row L (from LEMON), the ciphertext L appears in column A, so a is the first plaintext letter. Next, in row E (from LEMON), the ciphertext X is located in column T. Thus t is the second plaintext letter.

Vigenère can also be described algebraically. If the letters A–Z are taken to be the numbers 0–25 (A ≙ 0, B ≙ 1, etc.), and addition is performed modulo 26, Vigenère encryption E using the key K can be written as

    Ci = EK(Mi) = (Mi + Ki) mod 26

and decryption D using the key K as

    Mi = DK(Ci) = (Ci − Ki) mod 26,

in which M = M1 … Mn is the message, C = C1 … Cn is the ciphertext and K = K1 … Kn is the key obtained by repeating the keyword ⌈n/m⌉ times, in which m is the keyword length.

Thus, by using the previous example, to encrypt A ≙ 0 with key letter L ≙ 11, the calculation (0 + 11) mod 26 = 11 yields L. Likewise, to decrypt R ≙ 17 with key letter E ≙ 4, the calculation (17 − 4) mod 26 = 13 yields N.

In general, if Σ is the alphabet of length ℓ, and m is the length of the key, Vigenère encryption and decryption can be written:

    Ci = (Mi + K(i mod m)) mod ℓ
    Mi = (Ci − K(i mod m)) mod ℓ

Mi denotes the offset of the i-th character of the plaintext M in the alphabet Σ.
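The algebraic description above can be sketched in a few lines of Python. This is only an illustrative sketch (the helper name `vigenere` is chosen here, not taken from the article), but it reproduces the worked LEMON example:

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each A-Z letter by the offset of the corresponding key letter (mod 26)."""
    out = []
    for i, ch in enumerate(text.upper()):
        m = ord(ch) - ord('A')                          # offset of message letter
        k = ord(key.upper()[i % len(key)]) - ord('A')   # offset of key letter
        out.append(chr((m - k if decrypt else m + k) % 26 + ord('A')))
    return ''.join(out)

# The worked example from the text: plaintext "attackatdawn", key "LEMON".
ct = vigenere("attackatdawn", "LEMON")
print(ct)                                   # LXFOPVEFRNHR
print(vigenere(ct, "LEMON", decrypt=True))  # ATTACKATDAWN
```

Decryption is simply the same loop with the key offset subtracted instead of added, mirroring the two algebraic formulas.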
For example, by taking the 26 English characters as the alphabet Σ = (A, B, C, …, X, Y, Z), the offset of A is 0, the offset of B is 1, etc. Ci and Ki are defined similarly.

The idea behind the Vigenère cipher, like all other polyalphabetic ciphers, is to disguise the plaintext letter frequency to interfere with a straightforward application of frequency analysis. For instance, if P is the most frequent letter in a ciphertext whose plaintext is in English, one might suspect that P corresponds to e since e is the most frequently used letter in English. However, by using the Vigenère cipher, e can be enciphered as different ciphertext letters at different points in the message, which defeats simple frequency analysis.

The primary weakness of the Vigenère cipher is the repeating nature of its key. If a cryptanalyst correctly guesses the key's length n, the ciphertext can be treated as n interleaved Caesar ciphers, which can easily be broken individually. The key length may be discovered by brute force, testing each possible value of n, or the Kasiski examination and the Friedman test can help to determine the key length (see below: § Kasiski examination and § Friedman test).

In 1863, Friedrich Kasiski was the first to publish a successful general attack on the Vigenère cipher.[18] Earlier attacks relied on knowledge of the plaintext or the use of a recognizable word as a key. Kasiski's method had no such dependencies. Although Kasiski was the first to publish an account of the attack, it is clear that others had been aware of it.
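The claim that one plaintext letter is enciphered to several different ciphertext letters is easy to check directly. The following sketch uses an arbitrary sample text and key of my own choosing (`DEFENDTHEEASTWALLOFTHECASTLE`, `FORTIFY`), not examples from the article:

```python
def vigenere_encrypt(text: str, key: str) -> str:
    """Classic Vigenère over A-Z: add key-letter offsets modulo 26."""
    return ''.join(chr((ord(c) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
                   for i, c in enumerate(text.upper()))

plain = "DEFENDTHEEASTWALLOFTHECASTLE"
cipher = vigenere_encrypt(plain, "FORTIFY")

# Collect the distinct ciphertext letters that stand for plaintext 'E':
images = {cipher[i] for i, c in enumerate(plain) if c == "E"}
print(len(images) > 1)  # True: 'E' is enciphered as several different letters
```

With a monoalphabetic cipher the set `images` would always have exactly one element; the polyalphabetic substitution is what spreads a single plaintext letter across multiple ciphertext letters.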
In 1854, Charles Babbage was goaded into breaking the Vigenère cipher when John Hall Brock Thwaites submitted a "new" cipher to the Journal of the Society of the Arts.[19][20] When Babbage showed that Thwaites' cipher was essentially just another recreation of the Vigenère cipher, Thwaites presented a challenge to Babbage: given an original text (from Shakespeare's The Tempest: Act 1, Scene 2) and its enciphered version, he was to find the key words that Thwaites had used to encipher the original text. Babbage soon found the key words: "two" and "combined". Babbage then enciphered the same passage from Shakespeare using different key words and challenged Thwaites to find Babbage's key words.[21] Babbage never explained the method that he used. Studies of Babbage's notes reveal that he had used the method later published by Kasiski and suggest that he had been using the method as early as 1846.[22]

The Kasiski examination, also called the Kasiski test, takes advantage of the fact that repeated words are, by chance, sometimes encrypted using the same key letters, leading to repeated groups in the ciphertext. For example, consider the following encryption using the keyword ABCD:

There is an easily noticed repetition in the ciphertext, and so the Kasiski test will be effective. The distance between the repetitions of CSASTP is 16. If it is assumed that the repeated segments represent the same plaintext segments, that implies that the key is 16, 8, 4, 2, or 1 characters long. (All factors of the distance are possible key lengths; a key of length one is just a simple Caesar cipher, and its cryptanalysis is much easier.) Since key lengths 2 and 1 are unrealistically short, one needs to try only lengths 16, 8, and 4. Longer messages make the test more accurate because they usually contain more repeated ciphertext segments. The following ciphertext has two segments that are repeated: The distance between the repetitions of VHVS is 18.
If it is assumed that the repeated segments represent the same plaintext segments, that implies that the key is 18, 9, 6, 3, 2, or 1 characters long. The distance between the repetitions of QUCE is 30 characters, so the key length could be 30, 15, 10, 6, 5, 3, 2, or 1 characters long. By taking the intersection of those sets, one can safely conclude that the most likely key length is 6, since 3, 2, and 1 are unrealistically short.

The Friedman test (sometimes known as the kappa test) was invented during the 1920s by William F. Friedman, who used the index of coincidence, which measures the unevenness of the cipher letter frequencies, to break the cipher. By knowing the probability κp that any two randomly chosen source-language letters are the same (around 0.067 for case-insensitive English) and the probability of a coincidence for a uniform random selection from the alphabet κr (1⁄26 ≈ 0.0385 for English), the key length can be estimated from the observed coincidence rate

    κo = Σi ni(ni − 1) / (N(N − 1))

as approximately

    (κp − κr) / (κo − κr),

in which c is the size of the alphabet (26 for English), N is the length of the text, n1 to nc are the observed ciphertext letter frequencies, as integers, and the sum runs over i = 1 … c.

That is, however, only an approximation; its accuracy increases with the length of the text. It would, in practice, be necessary to try various key lengths that are close to the estimate.[23] A better approach for repeating-key ciphers is to copy the ciphertext into rows of a matrix with as many columns as an assumed key length and then to compute the average index of coincidence with each column considered separately. When that is done for each possible key length, the highest average index of coincidence then corresponds to the most likely key length.[24] Such tests may be supplemented by information from the Kasiski examination.
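Both key-length tests can be sketched compactly in Python. This is an illustrative sketch only: the function names are mine, and the constants 0.067 and 1/26 are the English values quoted in the text:

```python
from collections import Counter
from functools import reduce
from math import gcd

def kasiski_distances(ciphertext: str, n: int = 4) -> list[int]:
    """Distances between successive repeats of each n-gram;
    the key length should divide most of them."""
    pos: dict[str, list[int]] = {}
    for i in range(len(ciphertext) - n + 1):
        pos.setdefault(ciphertext[i:i + n], []).append(i)
    return [p[j + 1] - p[j] for p in pos.values() if len(p) > 1
            for j in range(len(p) - 1)]

def friedman_key_length(ciphertext: str, kp: float = 0.067, kr: float = 1 / 26) -> float:
    """Friedman estimate (kp - kr) / (ko - kr), where ko is the
    observed coincidence rate of the ciphertext."""
    n = len(ciphertext)
    ko = sum(f * (f - 1) for f in Counter(ciphertext).values()) / (n * (n - 1))
    return (kp - kr) / (ko - kr)

# The repeat distances quoted in the text, 18 and 30, share the divisor 6:
print(reduce(gcd, [18, 30]))              # 6, the most likely key length
print(kasiski_distances("ABCDXXXXABCD"))  # [8]: 'ABCD' repeats 8 positions apart
```

In practice one would collect all Kasiski distances, tally the small factors they share, and cross-check the winner against the Friedman estimate rather than relying on either test alone.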
Once the length of the key is known, the ciphertext can be rewritten into that many columns, with each column corresponding to a single letter of the key. Each column consists of plaintext that has been encrypted by a single Caesar cipher; the Caesar key (shift) is just the letter of the Vigenère key that was used for that column. Using methods similar to those used to break the Caesar cipher, the letters in the ciphertext can be discovered.

An improvement to the Kasiski examination, known as Kerckhoffs' method, matches each column's letter frequencies to shifted plaintext frequencies to discover the key letter (Caesar shift) for that column. Once every letter in the key is known, all the cryptanalyst has to do is to decrypt the ciphertext and reveal the plaintext.[25] Kerckhoffs' method is not applicable if the Vigenère table has been scrambled rather than using normal alphabetic sequences, but the Kasiski examination and coincidence tests can still be used to determine key length.

The Vigenère cipher, with normal alphabets, essentially uses modulo arithmetic, which is commutative. Therefore, if the key length is known (or guessed), subtracting the ciphertext from itself, offset by the key length, will produce the plaintext subtracted from itself, also offset by the key length. If any "probable word" in the plaintext is known or can be guessed, its self-subtraction can be recognized, which allows recovery of the key by subtracting the known plaintext from the ciphertext. Key elimination is especially useful against short messages. For example, using LION as the key below:

Then subtract the ciphertext from itself with a shift of the key length, 4 for LION, which is nearly equivalent to subtracting the plaintext from itself by the same shift. Algebraically, for i ∈ [1, n − m]:

    Ci − Ci+m ≡ Mi − Mi+m (mod 26)

In this example, the words brownfox are known. This result omaz corresponds with the 9th through 12th letters in the result of the larger examples above.
The known section and its location are verified. Subtract brow from that range of the ciphertext. This produces the final result, the reveal of the key LION.

The running key variant of the Vigenère cipher was also considered unbreakable at one time. For the key, this version uses a block of text as long as the plaintext. Since the key is as long as the message, the Friedman and Kasiski tests no longer work, as the key is not repeated.

If multiple keys are used, the effective key length is the least common multiple of the lengths of the individual keys. For example, using the two keys GO and CAT, whose lengths are 2 and 3, one obtains an effective key length of 6 (the least common multiple of 2 and 3). This can be understood as the point where both keys line up. Encrypting twice, first with the key GO and then with the key CAT, is the same as encrypting once with a key produced by encrypting one key with the other. This is demonstrated by encrypting attackatdawn with IOZQGH, to produce the same ciphertext as in the original example.

If key lengths are relatively prime, the effective key length is the product of the key lengths, and hence grows quickly as the individual key lengths are increased. For example, while the effective length of combined key lengths of 10, 12, and 15 characters is only 60 (2 × 2 × 3 × 5), that of key lengths of 8, 11, and 15 characters is 1320 (8 × 11 × 15). If this effective key length is longer than the ciphertext, it achieves the same immunity to the Friedman and Kasiski tests as the running key variant.

If one uses a key that is truly random, is at least as long as the encrypted message, and is used only once, the Vigenère cipher is theoretically unbreakable. However, in that case, the key, not the cipher, provides cryptographic strength, and such systems are properly referred to collectively as one-time pad systems, irrespective of the ciphers employed.
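The combined-key arithmetic above is easy to verify in Python. A quick illustrative sketch (`vigenere_encrypt` is a hypothetical helper for the standard A–Z cipher, not code from the article):

```python
from math import lcm

def vigenere_encrypt(text: str, key: str) -> str:
    """Standard Vigenère over A-Z: add key-letter offsets modulo 26."""
    return ''.join(chr((ord(c) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
                   for i, c in enumerate(text))

# Effective key length is the least common multiple of the key lengths:
print(lcm(2, 3))        # 6    (GO and CAT)
print(lcm(10, 12, 15))  # 60
print(lcm(8, 11, 15))   # 1320

# Encrypting with GO and then CAT equals one pass with the combined key:
# the combined key is GO repeated to length 6 and enciphered with CAT.
combined = vigenere_encrypt("GOGOGO", "CAT")
print(combined)  # IOZQGH
twice = vigenere_encrypt(vigenere_encrypt("ATTACKATDAWN", "GO"), "CAT")
print(twice == vigenere_encrypt("ATTACKATDAWN", combined))  # True
```

Note that `math.lcm` with more than two arguments requires Python 3.9 or later; on older versions the pairwise form can be folded with `functools.reduce`.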
A simple variant is to encrypt by using the Vigenère decryption method and to decrypt by using Vigenère encryption. That method is sometimes referred to as "Variant Beaufort". It is different from the Beaufort cipher, created by Francis Beaufort, which is similar to Vigenère but uses a slightly modified enciphering mechanism and tableau. The Beaufort cipher is a reciprocal cipher.

Despite the Vigenère cipher's apparent strength, it never became widely used throughout Europe. The Gronsfeld cipher is a variant attributed by Gaspar Schott to Count Gronsfeld (Josse Maximilaan van Gronsveld, né van Bronckhorst) but was actually used much earlier by an ambassador of the Duke of Mantua in the 1560s–1570s. It is identical to the Vigenère cipher except that it uses a cipher alphabet of just 10 characters, corresponding to the digits 0 to 9: a Gronsfeld key of 0123 is the same as a Vigenère key of ABCD. The Gronsfeld cipher is strengthened because its key is not a word, but it is weakened because it has just a cipher alphabet of 10 characters. It is Gronsfeld's cipher that became widely used throughout Germany and Europe, despite its weaknesses.

Vigenère actually invented a stronger cipher, an autokey cipher. The name "Vigenère cipher" became associated with a simpler polyalphabetic cipher instead. In fact, the two ciphers were often confused, and both were sometimes called le chiffre indéchiffrable. Babbage actually broke the much stronger autokey cipher, but Kasiski is generally credited with the first published solution to the fixed-key polyalphabetic ciphers.
https://en.wikipedia.org/wiki/Vigen%C3%A8re_cypher
An electronic health record (EHR) is the systematized collection of electronically stored patient and population health information in a digital format.[1] These records can be shared across different health care settings. Records are shared through network-connected, enterprise-wide information systems or other information networks and exchanges. EHRs may include a range of data, including demographics, medical history, medication and allergies, immunization status, laboratory test results, radiology images, vital signs, personal statistics like age and weight, and billing information.[2]

For several decades, EHRs have been touted as key to increasing the quality of care.[3] EHRs combine all patients' demographics into a large pool, which assists providers in the creation of "new treatments or innovation in healthcare delivery" to improve quality outcomes in healthcare.[4] Combining multiple types of clinical data from the system's health records has helped clinicians identify and stratify chronically ill patients. EHRs can also improve quality of care through the use of data and analytics to prevent hospitalizations among high-risk patients.

EHR systems are designed to store data accurately and to capture a patient's state across time. They eliminate the need to track down a patient's previous paper medical records and assist in ensuring that data are up-to-date,[5] accurate, and legible. They also allow open communication between the patient and the provider while providing "privacy and security."[5] EHRs are cost-efficient, decrease the risk of lost paperwork, and can reduce the risk of data replication, as there is only one modifiable file, which means the file is more likely to be up to date.[5] Because the digital information is searchable and held in a single file, EMRs (electronic medical records) are more effective when extracting medical data to examine possible trends and long-term changes in a patient.
The widespread adoption of EHRs and EMRs may also facilitate population-based studies of medical records.

The terms EHR, electronic patient record (EPR), and electronic medical record (EMR) have often been used interchangeably, but subtle differences exist.[6] The electronic health record (EHR) is a more longitudinal collection of the electronic health information of individual patients or populations. The EMR, in contrast, is the patient record created by providers for specific encounters in hospitals and ambulatory environments and can serve as a data source for an EHR.[7][8] EMRs are essentially digital versions of the paper documents used in a clinician's office, typically functioning as an internal system within a practice. An EMR includes the medical and treatment history of patients treated by that specific practice.[9] In contrast, a personal health record (PHR) is an electronic application for recording individual medical data that the individual patient controls and may make available to health providers.[10]

While there is still considerable debate around the superiority of electronic health records over paper records, the research literature paints a more realistic picture of the benefits and downsides.[11] The increased transparency, portability, and accessibility acquired by the adoption of electronic medical records may increase the ease with which they can be accessed by healthcare professionals, but can also increase the amount of information stolen by unauthorized persons or unscrupulous users relative to paper medical records, as acknowledged by the increased security requirements for electronic medical records included in the Health Insurance Portability and Accountability Act (HIPAA) and by large-scale breaches in confidential records reported by EMR users.[12][13] Concerns about security contribute to the resistance shown to their adoption.

Handwritten paper medical records may be poorly legible, which can contribute to medical
errors.[14] Pre-printed forms, standardization of abbreviations, and standards for penmanship were encouraged to improve the reliability of paper medical records. An example of possible medical errors is the administration of medication. Medication is an intervention that can turn a person's status from stable to unstable very quickly. With paper documentation it is easy to fail to record the administration of medication or the time given, or to make errors such as giving the "wrong drug, dose, form, or not checking for allergies," which could affect the patient negatively. It has been reported that these errors have been reduced by "55–83%" because records are now online and require specific steps to avoid them.[15]

Electronic records may help with the standardization of forms, terminology, and data input.[16][17] Digitization of forms facilitates the collection of data for epidemiology and clinical studies.[18][19] However, standardization may create challenges for local practice.[11] Overall, those with EMRs that have automated notes and records, order entry, and clinical decision support had fewer complications, lower mortality rates, and lower costs.[20]

EMRs can be continuously updated (within certain legal limitations: see below). If the ability to exchange records between different EMR systems were perfected ("interoperability"[21]), it would facilitate the coordination of health care delivery in non-affiliated health care facilities.
In addition, data from an electronic system can be used anonymously for statistical reporting in matters such as quality improvement, resource management, and public health communicable disease surveillance.[22] However, it is difficult to remove data from its context.[11]

Providing patients with information is central to patient-centered health care and has been shown to positively affect health outcomes.[23] Providing patients access to their health records, including medical histories and test results via an EHR, is a legal right in some parts of the world.[23] There is evidence that patient access may help patients understand their conditions and actively involve them in their management. For example, granting people who have type 2 diabetes access to their electronic health records may help these people to reduce their blood sugar levels.[24][25][26]

Challenges with sharing the electronic health record with patients include a risk of increased confusion or anxiety if a person does not understand or cannot contextualize the testing results.[23] In addition, many EHRs are not designed for people of all educational levels and do not consider the needs of those with a lower level of education or those who are not fluent in the language.[23] Accessing the EHR requires a level of proficiency with electronic devices, which adds to a disparity for those without access or for those who have a mental or physical illness that restricts their access to the electronic system.[23]

Electronic medical records could also be studied to quantify disease burdens – such as the number of deaths from antimicrobial resistance[27] – or help identify causes of, factors of, links between,[28][29] and contributors to diseases,[30][31][32] especially when combined with genome-wide association studies.[33][34] This may enable increased flexibility, improved disease surveillance, better medical product safety surveillance,[35] better public health monitoring (such as for evaluation of health
policy effectiveness),[36][37] increased quality of care (via guidelines[38] and improved medical history sharing[39][40]), and novel life-saving treatments.

Privacy: For such purposes, electronic medical records could potentially be made available in securely anonymized or pseudonymized[41] forms to ensure patients' privacy is maintained,[42][34][43][44] even if data breaches occur. There are concerns about the efficacy of some currently applied pseudonymization and data protection techniques, including the applied encryption.[45][39]

Documentation burden: While such records could enable avoiding duplication of work via records-sharing,[39][40] documentation burdens for medical facility personnel can be a further issue with EHRs. This burden could be reduced via voice recognition, optical character recognition, other technologies, physician involvement in software changes, and other means,[40][46][47][48] which could possibly reduce the documentation burden to below that of paper-based documentation.
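One commonly described pseudonymization approach is a keyed hash: a direct identifier is replaced with a token that is stable enough for record linkage but reversible only by the key holder. The sketch below is illustrative only (the function name and key handling are not drawn from any specific EHR product):

```python
import hashlib
import hmac

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    # HMAC-SHA256 keyed hash: the same patient always maps to the same token,
    # so records stay linkable, but recovering the identity requires the
    # secret key (unlike a plain unsalted hash, which invites guessing attacks).
    digest = hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

key = b"facility-held-secret"  # illustrative key, kept separate from the shared dataset
token_a = pseudonymize("patient-12345", key)
token_b = pseudonymize("patient-12345", key)
assert token_a == token_b      # stable: records for one patient remain linkable
```

Even a scheme like this can be weakened if the key leaks or the identifiers are guessable, which is one reason the literature cited above questions some currently applied techniques.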
Theoretically, free software such as GNU Health and other open-source health software could be used or modified for various purposes that use electronic medical records, e.g., via securely sharing anonymized patient treatments, medical history, and individual outcomes (including by common primary care physicians).[49]

Ambulance services in Australia, the United States, and the United Kingdom have introduced EMR systems.[62][63] EMS encounters in the United States are recorded using various platforms and vendors in compliance with the NEMSIS (National EMS Information System) standard.[64] The benefits of electronic records in ambulances include patient data sharing, injury/illness prevention, better training for paramedics, review of clinical standards, better research options for pre-hospital care and design of future treatment options, data-based outcome improvement, and clinical decision support.[65]

EHRs enable health information to be used and shared over secure networks. Using an EMR to read and write a patient's record is not only possible through a workstation but, depending on the type of system and health care settings, may also be possible through mobile devices that are handwriting capable,[67] such as tablets and smartphones. Electronic medical records may include access to personal health records (PHR), which makes individual notes from an EMR readily visible and accessible to consumers.

Some EMR systems automatically monitor clinical events by analyzing patient data from an electronic health record to predict, detect, and potentially prevent adverse events. This can include discharge/transfer orders, pharmacy orders, radiology results, laboratory results, and any other data from ancillary services or provider notes.[68] This type of event monitoring has been implemented using the Louisiana Public Health Information Exchange, which links statewide public health with electronic medical records.
This system alerted medical providers when a patient with HIV/AIDS had not received care in over twelve months, and it greatly reduced the number of missed critical opportunities.[69]

Within a meta-narrative systematic review of research in the field, various different philosophical approaches to the EHR exist.[11] The health information systems literature has seen the EHR as a container holding information about the patient and a tool for aggregating clinical data for secondary uses (billing, audit, etc.). However, other research traditions see the EHR as a contextualized artifact within a socio-technical system. For example, actor-network theory would see the EHR as an actant in a network,[70] and research in computer-supported cooperative work (CSCW) sees the EHR as a tool supporting particular work.

Several possible advantages to EHRs over paper records have been proposed, but there is debate about the degree to which these are achieved in practice.[71] Several studies call into question whether EHRs improve the quality of care.[11][72][73][74][75] However, one 2011 study in diabetes care, published in the New England Journal of Medicine, found evidence that practices with EHR provided better quality care.[76]

EMRs may eventually help improve care coordination.
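The twelve-month gap-of-care alert described above reduces to a simple date comparison over each patient's most recent encounter; a minimal sketch (the patient identifiers and roster structure are hypothetical, not the Louisiana exchange's actual data model):

```python
from datetime import date, timedelta

def overdue_for_care(last_encounter: date, today: date, max_gap_days: int = 365) -> bool:
    """Flag a patient whose most recent recorded encounter exceeds the allowed gap."""
    return (today - last_encounter) > timedelta(days=max_gap_days)

# Hypothetical roster: patient id -> date of last recorded encounter.
roster = {
    "patient-a": date(2022, 1, 10),
    "patient-b": date(2023, 3, 5),
}
today = date(2023, 6, 1)
alerts = [pid for pid, last in roster.items() if overdue_for_care(last, today)]
# Only patient-a's last visit falls outside the twelve-month window.
assert alerts == ["patient-a"]
```

A production system would of course pull encounter dates from the exchange itself and route the alert to the care team, but the triggering logic is of this shape.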
An article in a trade journal suggests that since anyone using an EMR can view the patient's full chart, it cuts down on guessing histories and seeing multiple specialists, smooths transitions between care settings, and may allow better care in emergency situations.[77] EHRs may also improve prevention by providing doctors and patients better access to test results, identifying missing patient information, and offering evidence-based recommendations for preventive services.[78]

The steep price of adoption and provider uncertainty regarding the value they will derive from it in the form of return on investment significantly influence EHR adoption.[79] In a project initiated by the Office of the National Coordinator for Health Information, surveyors found that hospital administrators and physicians who had adopted EHR noted that any gains in efficiency were offset by reduced productivity as the technology was implemented, as well as by the need to increase information technology staff to maintain the system.[79]

The U.S. Congressional Budget Office concluded that the cost savings may occur only in large integrated institutions like Kaiser Permanente and not in small physician offices. They challenged the Rand Corporation's estimates of savings: "Office-based physicians in particular may see no benefit if they purchase such a product—and may even suffer financial harm. Even though the use of health IT could generate cost savings for the health system at large that might offset the EHR's cost, many physicians might not be able to reduce their office expenses or increase their revenue sufficiently to pay for it. For example, the use of health IT could reduce the number of duplicated diagnostic tests.
However, that improvement in efficiency would be unlikely to increase the income of many physicians."[80] One CEO of an EHR company has argued that if a physician performs tests in the office, it might reduce his or her income.[81] Doubts have been raised about cost savings from EHRs by researchers at Harvard University, the Wharton School of the University of Pennsylvania, Stanford University, and others.[75][82][83]

In 2022, the chief executive of Guy's and St Thomas' NHS Foundation Trust, one of the biggest NHS organisations, said that the £450 million cost over 15 years to install the Epic Systems electronic patient record across its six hospitals, which will reduce more than 100 different IT systems down to just a handful, was "chicken feed" when compared to the NHS's overall budget.[84]

The implementation of EMR can potentially decrease the time needed to identify patients upon hospital admission. Research published in the Annals of Internal Medicine showed that since the adoption of EMR, a relative decrease in time of 65% has been recorded (from 130 to 46 hours).[85]

The Healthcare Information and Management Systems Society, a very large U.S. healthcare IT industry trade group, observed in 2009 that EHR adoption rates "have been slower than expected in the United States, especially compared to other industry sectors and other developed countries. Aside from initial costs and lost productivity during EMR implementation, one key reason is lack of efficiency and usability of EMRs currently available."[86][87] The U.S. National Institute of Standards and Technology of the Department of Commerce studied usability in 2011 and lists a number of specific issues that have been reported by health care workers.[88] The U.S.
military's EHR, AHLTA, was reported to have significant usability issues.[89] Furthermore, studies such as one conducted in BMC Medical Informatics and Decision Making showed that although the implementation of electronic medical record systems has been a great assistance to general practitioners, there is still much room for revision in the overall framework and in the amount of training provided.[90] It was observed that efforts to improve EHR usability should be placed in the context of physician-patient communication.[91]

However, physicians are embracing mobile technologies such as smartphones and tablets at a rapid pace. According to a 2012 survey by Physicians Practice, 62.6 percent of respondents (1,369 physicians, practice managers, and other healthcare providers) said they use mobile devices in the performance of their job. Mobile devices are increasingly able to sync up with electronic health record systems, allowing physicians to access patient records from remote locations. Most devices are extensions of desktop EHR systems, using a variety of software to communicate and access files remotely. The advantages of instant access to patient records at any time and place are clear, but raise security concerns.
As mobile systems become more prevalent, practices will need comprehensive policies that govern security measures and patient privacy regulations.[92]

Other advanced computational techniques allow EHRs to be evaluated at a much quicker rate. Natural language processing is increasingly used to search EMRs, especially through searching and analyzing notes and text that would otherwise be inaccessible for study when seeking to improve care.[93] One study found that several machine learning methods could be used to predict a patient's mortality rate with moderate success, with the most successful approach combining a convolutional neural network and a heterogeneous graph model.[94]

When a health facility has documented its workflow and chosen its software solution, it must consider the hardware and supporting device infrastructure for the end users. Staff and patients must engage with various devices throughout a patient's stay and charting workflow. Computers, laptops, all-in-one computers, tablets, mice, keyboards, and monitors are all hardware devices that may be utilized. Other considerations include supporting work surfaces and equipment, such as wall desks or articulating arms for end users to work on. Another important factor is how all these devices will be physically secured and how they will be charged so that staff can always utilize them for EHR charting when needed.

The success of eHealth interventions largely depends on the adopter's ability to fully understand workflow and anticipate potential clinical processes prior to implementation.
Failure to do so can create costly and time-consuming interruptions to service delivery.[95]

Per empirical research in social informatics, information and communications technology (ICT) use can lead to both intended and unintended consequences.[96][97][98] A 2008 Sentinel Event Alert from the U.S. Joint Commission, the organization that accredits American hospitals to provide healthcare services, states: "As health information technology (HIT) and 'converging technologies'—the interrelationship between medical devices and HIT—are increasingly adopted by health care organizations, users must be mindful of the safety risks and preventable adverse events that these implementations can create or perpetuate. Technology-related adverse events can be associated with all components of a comprehensive technology system and may involve errors of either commission or omission. These unintended adverse events typically stem from human-machine interfaces or organization/system design."[99] The Joint Commission cites as an example the United States Pharmacopeia MEDMARX database,[100] where of 176,409 medication error records for 2006, approximately 25 percent (43,372) involved some aspect of computer technology as at least one cause of the error.

The British National Health Service (NHS) reports specific examples of potential and actual EHR-caused unintended consequences in its 2009 document on the management of clinical risk relating to the deployment and use of health software.[101]

In February 2010, an American Food and Drug Administration (FDA) memorandum noted that EHR unintended consequences include EHR-related medical errors from (1) errors of commission (EOC), (2) errors of omission or transmission (EOT), (3) errors in data analysis (EDA), and (4) incompatibility between multi-vendor software applications or systems (ISMA), citing various examples.
The FDA also noted that the "absence of mandatory reporting enforcement of H-IT safety issues limits the numbers of medical device reports (MDRs) and impedes a more comprehensive understanding of the actual problems and implications."[102][103]

A 2010 Board Position Paper by the American Medical Informatics Association (AMIA) contains recommendations on EHR-related patient safety, transparency, ethics education for purchasers and users, adoption of best practices, and re-examination of regulation of electronic health applications.[104] Beyond concrete issues such as conflicts of interest and privacy concerns, questions have been raised about how the physician-patient relationship would be affected by an electronic intermediary.[105][106]

During the implementation phase, cognitive workload for healthcare professionals may be significantly increased as they familiarize themselves with a new system.[107]

EHRs are frequently detrimental to physician productivity, whether the data is entered during the encounter or sometime thereafter.[108] An EHR could in principle increase physician productivity[109] by providing a fast and intuitive interface for viewing and understanding patient clinical data and by minimizing the number of clinically irrelevant questions, but this is rarely achieved in practice. Another way to mitigate the detriment to physician productivity is to hire scribes to work alongside medical practitioners, though this is rarely financially viable.

As a result, many have conducted studies like the one discussed in the Journal of the American Medical Informatics Association, "The Extent And Importance of Unintended Consequences Related To Computerized Provider Order Entry," which seeks to understand the degree and significance of unplanned adverse consequences related to computerized physician order entry, how to interpret such adverse events, and the importance of their management for the
overall success of computerized provider order entry.[110]

In the United States, Great Britain, and Germany, the concept of a national centralized server model of healthcare data has been poorly received.[111] Concerns include issues of privacy and security.[112][113] In the European Union (EU), a new directly binding instrument, a regulation of the European Parliament and of the Council, was passed in 2016 to go into effect in 2018 to protect the processing of personal data, including that for purposes of health care: the General Data Protection Regulation.

Threats to health care information can be categorized under three headings. These threats can be internal or external, intentional or unintentional. Health information systems professionals consider these particular threats when discussing ways to protect patients' health information. It has been found that there is a lack of security awareness among health care professionals in countries such as Spain.[114] The Health Insurance Portability and Accountability Act (HIPAA) has developed a framework to mitigate the harm of these threats that is comprehensive but not so specific as to limit the options of healthcare professionals who may have access to different technology.[115] With the increase of clinical notes being shared electronically due to the 21st Century Cures Act, sensitive terms in the records of all patients, including minors, are increasingly shared among care teams, complicating efforts to maintain privacy.[116]

The Personal Information Protection and Electronic Documents Act (PIPEDA) was given Royal Assent in Canada on 13 April 2000 to establish rules on the use, disclosure, and collection of personal information. The personal information includes both non-digital and electronic forms.
In 2002, PIPEDA extended to the health sector in Stage 2 of the law's implementation.[117] There are four provinces where this law does not apply because their privacy laws were considered similar to PIPEDA: Alberta, British Columbia, Ontario, and Quebec.

The COVID-19 pandemic in the United Kingdom led to radical changes. NHS Digital and NHSX made changes, said to be only for the duration of the crisis, to the information sharing system GP Connect across England, meaning that patient records are shared across primary care. Only patients who have specifically opted out are excluded.[118]

Legal liability in all aspects of health care was an increasing problem in the 1990s and 2000s. The surge in the per capita number of attorneys in the USA[119] and changes in the tort system caused an increase in the cost of every aspect of health care, and health care technology was no exception.[120] Failure or damages caused during installation or utilization of an EHR system has been feared as a threat in lawsuits.[121] Similarly, the implementation of electronic health records can carry significant legal risks.[122]

Liability is of special concern for small EHR system makers, which may be forced to abandon markets based on the regional liability climate.[123] Larger EHR providers (or government-sponsored providers of EHRs) are better able to withstand legal challenges.

Electronic documentation of patient visits and data could open physicians to an increased incidence of malpractice suits. Disabling physician alerts, selecting from dropdown menus, and using templates can encourage physicians to skip a complete review of past patient history and medications and thus miss important data. Another potential problem is electronic time stamps. Many physicians are unaware that EHR systems produce an electronic time stamp every time the patient record is updated.
If a malpractice claim goes to court, the prosecution can request a detailed record of all entries made in a patient's electronic record. Waiting to chart patient notes until the end of the day and making addendums to records well after the patient visit can be problematic, in that this practice could result in less than accurate patient data or indicate possible intent to illegally alter the patient's record.[124]

In some communities, hospitals attempt to standardize EHR systems by providing discounted versions of the hospital's software to local healthcare providers. A challenge to this practice has been raised as being a violation of Stark rules that prohibit hospitals from preferentially assisting community healthcare providers.[125] In 2006, however, exceptions to the Stark rule were enacted to allow hospitals to furnish software and training to community providers, mostly removing this legal obstacle.[126][127]

In cross-border use cases of EHR implementations, the additional issue of legal interoperability arises. Different countries may have diverging legal requirements for the content or usage of electronic health records, which can require radical changes to the technical makeup of the EHR implementation in question, especially when fundamental legal incompatibilities are involved. Exploring these issues is therefore often necessary when implementing cross-border EHR solutions.[128]

The United Nations World Health Organization (WHO) administration intentionally does not contribute to an internationally standardized view of medical records nor to personal health records. However, the WHO contributes to minimum requirements definitions for developing countries.[129] The United Nations-accredited standardization body, the International Organization for Standardization (ISO), has however reviewed and adopted certain standards in the scope of the HL7 platform for health care informatics.
Respective standards are available with ISO/HL7 10781:2009 Electronic Health Record-System Functional Model, Release 1.1[130] and a subsequent set of detailing standards.[131]

The majority of the countries in Europe have made a strategy for the development and implementation of electronic health record systems. This would mean greater access to health records by numerous stakeholders, even from countries with lower levels of privacy protection. The implementation of the Cross-Border Health Directive and the European Commission's plans to centralize all health records are of prime concern to the EU public, who believe that health care organizations and governments cannot be trusted to manage their data electronically, and that doing so exposes them to more threats.

The idea of a centralized electronic health record system was poorly received by the public, who are wary that governments may use the system beyond its intended purpose. There is also the risk of privacy breaches that could allow sensitive health care information to fall into the wrong hands. Some countries have enacted laws requiring safeguards to be put in place to protect the security and confidentiality of medical information. These safeguards add protection for records that are shared electronically and give patients some important rights to monitor their medical records and receive notification of loss and unauthorized acquisition of health information. The United States and the EU have imposed mandatory medical data breach notifications.[132]

The purpose of a personal data breach notification is to protect individuals so that they can take all the necessary actions to limit the undesirable effects of the breach, and to motivate the organization to improve the security of the infrastructure to protect the confidentiality of the data. U.S.
law requires entities to inform individuals in the event of a breach, while the EU Directive currently requires breach notification only when the breach is likely to adversely affect the privacy of the individual. Personal health data is valuable to individuals, and it is therefore difficult to assess whether a breach will cause reputational or financial harm or adversely affect one's privacy. The breach notification law in the EU provides better privacy safeguards with fewer exemptions, unlike the US law, which exempts unintentional acquisition, access, or use of protected health information and inadvertent disclosure under a good faith belief.[132]

The U.S. federal government has issued new rules on electronic health records.[133]

A common data model (CDM) is a specification that describes how data from multiple sources (e.g., multiple EHR systems) can be combined. Many CDMs use a relational model (e.g., the OMOP CDM). A relational CDM defines names of tables and table columns and restricts what values are valid.

Each health care environment functions differently, often in significant ways. It is difficult to create a "one-size-fits-all" EHR system. Many first-generation EHRs were designed to fit the needs of primary care physicians, leaving certain specialties significantly less satisfied with their EHR system. An ideal EHR system will have record standardization but also interfaces that can be customized to each provider environment. Modularity in an EHR system facilitates this.
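A much-simplified relational CDM can be sketched with SQLite. The table and column names below loosely echo the OMOP CDM but are illustrative only; the real model defines many more tables, standardized concept vocabularies, and constraints:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    year_of_birth INTEGER NOT NULL,
    gender_concept_id INTEGER NOT NULL
);
CREATE TABLE condition_occurrence (
    condition_occurrence_id INTEGER PRIMARY KEY,
    person_id INTEGER NOT NULL REFERENCES person(person_id),
    condition_concept_id INTEGER NOT NULL,
    condition_start_date TEXT NOT NULL  -- ISO 8601 date string
);
""")

# Rows extracted from different source EHRs land in one shared shape, so the
# same analytic query works regardless of which system produced the data.
conn.execute("INSERT INTO person VALUES (1, 1980, 8507)")
conn.execute("INSERT INTO condition_occurrence VALUES (1, 1, 201826, '2023-01-15')")
rows = conn.execute(
    "SELECT p.person_id, c.condition_concept_id "
    "FROM person p JOIN condition_occurrence c ON p.person_id = c.person_id"
).fetchall()
assert rows == [(1, 201826)]
```

The point of the restriction on table names, column names, and valid values is exactly what the query above relies on: an analysis written once against the CDM runs unchanged over data pooled from any compliant source.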
Many EHR companies employ vendors to provide customization, which can often be done so that a physician's input interface closely mimics previously utilized paper forms.[135] Providers have reported negative effects on communication, increased overtime, and missing records when a non-customized EMR system was utilized.[136] Customizing the software when it is released yields the highest benefits because it is adapted for the users and tailored to workflows specific to the institution.[137]

However, customization can have its disadvantages. Implementing a customized system may incur higher initial costs, as more time must be spent by both the implementation team and the healthcare provider to understand the workflow needs. Development and maintenance of these interfaces and customizations can also lead to higher software implementation and maintenance costs.[138][139]

An important consideration when developing electronic health records is to plan for the long-term preservation and storage of these records. The field will need to come to a consensus on the length of time to store EHRs, on methods to ensure the future accessibility and compatibility of archived data with yet-to-be-developed retrieval systems, and on how to ensure the physical and virtual security of the archives.

Additionally, considerations about the long-term storage of electronic health records are complicated by the possibility that the records might one day be used longitudinally and integrated across sites of care. Records have the potential to be created, used, edited, and viewed by multiple independent entities. These entities include, but are not limited to, primary care physicians, hospitals, insurance companies, and patients. Mandl et al.
have noted that "choices about the structure and ownership of these records will have profound impact on the accessibility and privacy of patient information."[140]

The required length of storage of an individual electronic health record will depend on national and state regulations, which are subject to change over time.[141] Ruotsalainen and Manning have found that the typical preservation time of patient data varies between 20 and 100 years. In one example of how an EHR archive might function, their research "describes a co-operative trusted notary archive (TNA) which receives health data from different EHR-systems, stores data together with associated meta-information for long periods and distributes EHR-data objects. TNA can store objects in XML-format and prove the integrity of stored data with the help of event records, timestamps and archive e-signatures."[142] In addition to the TNA archive described by Ruotsalainen and Manning, other combinations of EHR systems and archive systems are possible. Again, overall requirements for the design and security of the system and its archive will vary and must function under ethical and legal principles specific to the time and place.

While it is currently unknown precisely how long EHRs will be preserved, it is certain that the length of time will exceed the average shelf-life of paper records. The evolution of technology is such that the programs and systems used to input information will likely not be available to a user who desires to examine archived data. One proposed solution to the challenge of long-term accessibility and usability of data by future systems is to standardize information fields in a time-invariant way, such as with XML. Olhede and Peterson report that "the basic XML-format has undergone preliminary testing in Europe by a Spri project and been found suitable for EU purposes.
Spri has advised the Swedish National Board of Health and Welfare and the Swedish National Archive to issue directives concerning the use of XML as the archive-format for EHCR (Electronic Health Care Record) information."[143] When care is provided at two different facilities, it may be difficult to update records at both locations in a coordinated fashion. Two models have been used to address this problem: a centralized data server solution and a peer-to-peer file synchronization program (as has been developed for other peer-to-peer networks). However, synchronization programs for distributed storage models are only useful once record standardization has occurred. Merging of already existing public health care databases is a common software challenge. The ability of electronic health record systems to provide this function is a key benefit and can improve health care delivery.[144][145][146] The sharing of patient information between health care organizations and IT systems is changing from a "point to point" model to a "many to many" one. The European Commission is supporting moves to facilitate cross-border interoperability of e-health systems and to remove potential legal hurdles. To allow for global shared workflow, studies will be locked when they are being read and then unlocked and updated once reading is complete. This enables radiologists to serve multiple health care facilities and read and report across large geographical areas, thus balancing workloads. The biggest challenges will relate to interoperability and legal clarity. In some countries, practicing teleradiology is all but forbidden. The variety of languages spoken is a problem, and multilingual reporting templates for all anatomical regions are not yet available.
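The archiving ideas described above, time-invariant XML serialization of record fields combined with TNA-style storage of data alongside integrity meta-information (digests and timestamps), can be sketched as follows. This is an illustrative sketch only: the element names, digest scheme, and functions are hypothetical and not part of any actual EHCR standard or the TNA design.

```python
import hashlib
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def archive_record(patient_id, entries):
    """Serialize a patient record as simple, time-invariant XML and
    compute integrity meta-information (hash + archive timestamp).
    Element names are hypothetical, not a real EHCR schema."""
    root = ET.Element("record", attrib={"patient": patient_id})
    for name, value in entries:
        ET.SubElement(root, "entry", attrib={"name": name}).text = value
    xml_bytes = ET.tostring(root, encoding="utf-8")
    meta = {
        "sha256": hashlib.sha256(xml_bytes).hexdigest(),
        "archived_at": datetime.now(timezone.utc).isoformat(),
    }
    return xml_bytes, meta

def verify_record(xml_bytes, meta):
    """Prove integrity of a stored object by recomputing its digest
    and comparing it with the digest recorded at archive time."""
    return hashlib.sha256(xml_bytes).hexdigest() == meta["sha256"]

data, meta = archive_record("p-001", [("allergy", "penicillin")])
assert verify_record(data, meta)          # untouched object verifies
assert not verify_record(data + b"x", meta)  # any alteration is detected
```

Because the stored object is plain XML plus a digest recorded at archive time, a future retrieval system needs only an XML parser and the hash algorithm to both read the data and check that it has not been altered, which is the spirit of the event records and archive e-signatures in the TNA description.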
However, the market for e-health and teleradiology is evolving more rapidly than any laws or regulations.[147] See Electronic health records in the United States. In 2011, Moscow's government launched a major project known as UMIAS as part of its electronic healthcare initiative. UMIAS, the Unified Medical Information and Analytical System, connects more than 660 clinics and over 23,600 medical practitioners in Moscow. UMIAS covers 9.5 million patients, contains more than 359 million patient records, and supports more than 500,000 different transactions daily. Approximately 700,000 Muscovites use remote links to make appointments every week.[148][149] The European Commission wants to boost the digital economy by enabling all Europeans to have access to online medical records anywhere in Europe. With the new European Health Data Space (EHDS) Regulation, steps are being taken toward a centralized European health record system. However, the concept of a centralized supranational server raises concern about storing electronic medical records in a central location. The privacy threat posed by a supranational network is a key concern. Cross-border and interoperable electronic health record systems make confidential data more easily and rapidly accessible to a wider audience and increase the risk that personal health data could be accidentally exposed or easily distributed to unauthorized parties, as they enable greater access to a compilation of personal health data from different sources and throughout a lifetime.[150] The Lloyd George envelope digitisation project aims to have all paper copies of all historic patient data transferred onto computer systems. As part of the rollout, new patients will no longer be given a transit label to register when moving practices.
Not only is the project a step closer to a digital NHS; it also reduces the movement of records between practices, freeing up space in practices used to store records, with the added benefit of being more environmentally friendly.[151] Lyniate was selected to provide data integration technologies for Health and Social Care (Northern Ireland) in 2022. Epic Systems will supply integrated electronic health records with a single digital record for every citizen. Lyniate Rhapsody, already used in 79 NHS Trusts, will be used to integrate the multiple health and social care systems.[152] In UK veterinary practice, the replacement of paper recording systems with electronic methods of storing animal patient information escalated from the 1980s, and the majority of clinics now use electronic medical records. In a sample of 129 veterinary practices, 89% used a Practice Management System (PMS) for data recording.[153] There are more than ten PMS providers currently in the UK. Collecting data directly from PMSs for epidemiological analysis eliminates the need for veterinarians to manually submit individual reports per animal visit and therefore increases the reporting rate.[154] Veterinary electronic medical record data are being used to investigate antimicrobial efficacy, risk factors for canine cancer, and inherited diseases in dogs and cats in the small animal disease surveillance project 'VetCOMPASS' (Veterinary Companion Animal Surveillance System) at the Royal Veterinary College, London, in collaboration with the University of Sydney (the VetCOMPASS project was formerly known as VEctAR).[155][156] A letter published in Communications of the ACM[157] describes the concept of generating synthetic patient populations and proposes a variation of the Turing test to assess the difference between synthetic and real patients.
The letter states: "In the EHR context, though a human physician can readily distinguish between synthetically generated and real live human patients, could a machine be given the intelligence to make such a determination on its own?" Further, the letter states: "Before synthetic patient identities become a public health problem, the legitimate EHR market might benefit from applying Turing Test-like techniques to ensure greater data reliability and diagnostic value. Any new techniques must thus consider patients' heterogeneity and are likely to have greater complexity than the Allen eighth-grade-science-test is able to grade."[158]
https://en.wikipedia.org/wiki/Electronic_medical_record
TheElectronic Signatures in Global and National Commerce Act(ESIGN,Pub. L.106–229 (text)(PDF), 114Stat.464, enactedJune 30, 2000,15 U.S.C.ch. 96) is aUnited States federal law, passed by theU.S. Congressto facilitate the use ofelectronic recordsandelectronic signaturesininterstateand foreign commerce. This is done by ensuring the validity and legal effect ofcontractsentered into electronically;[1]the Act was signed into law by President Bill Clinton on June 30, 2000, and took effect on October 1, 2000.[2] Although every state has at least one law pertaining toelectronic signatures, it is the federal law that lays out the guidelines forinterstate commerce. The general intent of the ESIGN Act is spelled out in the first section (101.a), that a contract or signature “may not be denied legal effect, validity, or enforceability solely because it is in electronic form”. This simple statement provides that electronic signatures and records are just as good as their paper equivalents, and therefore subject to the same legal scrutiny ofauthenticitythat applies to paper documents.[3] Sec 106 of the ESIGN Act defines:[4] Section 101 of the ESIGN Act, sub-section (b), preserves the rights of individuals to NOT USE electronic signatures. Here the law provides that individuals reserve the right to use a paper signature. Sub-section (c) is in direct support of (b) by requiring a "Consumer Disclosure" that the signatory has consented to use an electronic format.[5] The consumer must provideaffirmative consent, meaning that it cannot be assumed that a consumer has given consent simply because he/she has not chosen the option to deny consent, or has not responded to an option to grant consent. The first public implementation of Section 106 of the ESIGN Act came nine months prior to its approval, when in October 1999,SaveDaily.comfounderEric Solis,[6]used an electronic signature to establish paperless brokerage accounts. 
Solis overcame the requirements of section 101(c)(1)(C) by causing the consumer to agree in advance via Consumer Disclosures that all communications, including signatures would be executed and delivered electronically.[citation needed] Section 101(d) provides that if a law requires that a business retain a record of a transaction, the business satisfies the requirement by retaining an electronic record, as long as the record 1) "accurately reflects" the substance of the original record in an unalterable format, 2) is "accessible" to people who are entitled to access it, 3) is "in a form that is capable of being accurately reproduced for later reference, whether by transmission, printing or otherwise", and 4) is retained for the legally required period of time.
https://en.wikipedia.org/wiki/Electronic_Signatures_in_Global_and_National_Commerce_Act
TheUnited States of America(USA), also known as theUnited States(U.S.) orAmerica, is a country located primarily inNorth America. It is afederal republicof 50statesandthe federal capital districtofWashington, D.C.The 48contiguous statesborderCanadato the north andMexicoto the south, with the state ofAlaskaforming asemi-exclavein the northwest and the state ofHawaiispanning anarchipelagoinOceania.Indian countryincludes 574federally recognized tribesand 326Indian reservationswithtribal sovereignty rights. The U.S. asserts sovereignty over fivemajor island territoriesandvarious uninhabited islandsin thePacific Oceanand theCaribbean. It is an ecologicallymegadiverse country, with the world'sthird-largest land area[c]andthird-largest population, exceeding 340 million.[j] Paleo-Indiansmigrated to North America across theBering land bridgemore than 12,000 years ago, and formedvarious cultures.Spanish Florida, the firstEuropean colonyin what is now the continental U.S., was established in 1513, and laterBritish colonizationled to the first settlement of theThirteen ColoniesinVirginiain 1607. Intensive agriculture in the rapidly expandingSouthern Coloniesencouraged theenslavement of Africans. Clashes with the British Crown over taxation and political representation sparked theAmerican Revolution, with theSecond Continental Congressformally declaring independence on July 4, 1776. The U.S. emerged victorious from theAmerican Revolutionary Warof 1775 to 1783 andexpanded westwardacross North America, dispossessingNative Americansduring theIndian Wars. TheLouisiana Purchasein 1803 and the end of theMexican–American Warin 1848 saw significant territorial acquisition. As more states were admitted, a North–South division over slavery led to the secession of theConfederate States of America, which foughtthe Unionin theAmerican Civil Warof 1861 to 1865. With the Union's victory, slavery was abolished nationally. In the late 19th and early 20th centuries, the U.S. 
established itself as agreat powerfollowing theSpanish–American WarandWorld War I. AfterJapan'sattack on Pearl Harborin 1941, the U.S. enteredWorld War II; its aftermath left the U.S. and theSoviet Unionas the world'ssuperpowers. During theCold War, both countries struggled forideological dominanceandinternational influence. The Soviet Union's collapse and theend of the Cold Warin 1991 left the U.S. as the world's sole superpower. TheU.S. national government, as established by theConstitutionin 1789, is apresidentialrepublic andliberal democracywith a separation of powers into three branches:legislative,executive, andjudicial. TheU.S. Congress, the national legislature, is composed of theHouse of Representatives(a lower house based on population) and theSenate(an upper house based on equal representation for each state). TheU.S. federal systemprovides substantial autonomy to the 50 states, each with its own constitution and laws. The American political tradition is rooted inEnlightenmentideals of liberty, equality, individual rights, and the rule of law. Since the 1850s, theDemocraticandRepublicanparties have dominatedAmerican politics. Adeveloped country, theU.S. rankshigh ineconomic competitiveness,productivity, innovation, and higher education. The U.S. accounted forover a quarterof nominal global economic output in 2024, andits economyhas been the world's largest by nominal GDPsince about 1890. It possesses themost wealth of any countryand has thehighest disposable household income per capitaamongOECDcountries, thoughU.S. wealth inequalityis one of the most pronounced in those countries. Amelting potofmany ethnicities and customs, theculture of the U.S.has been shaped by centuries of immigration, andits soft power influencehas a global reach. The U.S. is a member ofmultiple international organizationsand plays a leading role in global political, cultural, economic, and military affairs. 
Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day,Stephen Moylan, aContinental Armyaide to GeneralGeorge Washington, wrote a letter toJoseph Reed, Washington'saide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in theRevolutionary Wareffort.[21][22]The first known public usage is ananonymous essaypublished in theWilliamsburgnewspaperThe Virginia Gazetteon April 6, 1776.[21]Sometime on or after June 11, 1776,Thomas Jeffersonwrote "United States of America" in a rough draft of theDeclaration of Independence,[21]which was adopted by theSecond Continental Congresson July 4, 1776.[23] The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common.[24]"United States" and "U.S." are the established terms throughout theU.S. federal government, with prescribed rules.[k]"The States" is an established colloquial shortening of the name, used particularly from abroad;[26]"stateside" is the corresponding adjective or adverb.[27] "America" is the feminine form of the first word ofAmericus Vesputius, the Latinized name of Italian explorerAmerigo Vespucci(1454–1512); it was first used as a place name by the German cartographersMartin WaldseemüllerandMatthias Ringmannin 1507.[28][l]Vespucci first proposed that theWest Indiesdiscovered byChristopher Columbusin 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia.[29][30][31]In English,the term "America"rarely refers to topics unrelated to the United States, despite the usage of "theAmericas" to describe the totality ofNorthandSouth America.[32] Thefirst inhabitants of North Americamigrated fromSiberiaover 12,000 years ago, either across theBering land bridgeor along thenow-submerged Ice Age coastline.[34][35]TheClovis culture, which appeared around 11,000 
BC, is believed to be the first widespread culture in the Americas.[36][37]Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as theMississippian culture, developedagriculture,architecture, andcomplex societies.[38]In thepost-archaic period, the Mississippian cultures were located in themidwestern,eastern, andsouthernregions, and theAlgonquianin theGreat Lakes regionand along theEastern Seaboard, while theHohokam cultureandAncestral Puebloansinhabited thesouthwest.[39]Native population estimatesof what is now the United States before the arrival of European immigrants range from around 500,000[40][41]to nearly 10 million.[41][42] Christopher Columbusbegan exploring theCaribbeanfor Spain in 1492, leading toSpanish-speaking settlements and missionsfromPuerto RicoandFloridatoNew MexicoandCalifornia. The first Spanish colony in what is now the continental United States wasSpanish Florida, chartered in 1513.[43][44][45][46]After several settlements failed there due to hunger and disease, Spain's first permanent town,Saint Augustine, was founded in 1565.[47]France established its own settlements inFrench Floridain 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565);permanent French settlementswould be founded much later along theGreat Lakes(Fort Detroit, 1701), theMississippi River(Saint Louis, 1764) and especially theGulf of Mexico(New Orleans, 1718).[48]Early European colonies also included the thriving Dutch colony ofNew Nederland(settled 1626, present-day New York) and the small Swedish colony ofNew Sweden(settled 1638 in what is now Delaware).British colonizationof theEast Coastbegan with theVirginia Colony(1607) and thePlymouth Colony(Massachusetts, 1620).[49][50]TheMayflower Compactin Massachusetts and theFundamental Orders of Connecticutestablished precedents for representativeself-governanceandconstitutionalismthat would develop throughout the American 
colonies.[51][52]While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[53][m]Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity.[57][58]Along the eastern seaboard, settlerstrafficked African slavesthrough theAtlantic slave trade.[59] The originalThirteen Colonies[n]that would later found the United States were administered as possessions ofGreat Britain,[60]and hadlocal governments with elections open to most white male property owners.[61][62]The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations;[63]by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas.[64]The colonies' distance from Britain allowed for the development of self-governance,[65]and theFirst Great Awakening, a series ofChristian revivals, fueled colonial interest inreligious liberty.[66] Following their victory in the French and Indian War, Britain began to assert greater control over local colonial affairs, resulting incolonial political resistance; one of the primary colonial grievances was a denial of theirrights as Englishmen, particularly the right torepresentation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, theFirst Continental Congressmet in 1774 and passed theContinental Association, a colonial boycott of British goods that proved effective. The British attempt to then disarm the colonists resulted in the 1775Battles of Lexington and Concord, igniting theAmerican Revolutionary War. 
At theSecond Continental Congress, the colonies appointedGeorge Washingtoncommander-in-chief of theContinental Army, and createda committeethat namedThomas Jeffersonto draft theDeclaration of Independence. Two days after passing theLee Resolutionto create an independent nation the Declaration was adopted on July 4, 1776.[67]Thepolitical values of the American Revolutionincludedliberty,inalienable individual rights; and thesovereignty of the people;[68]supportingrepublicanismand rejectingmonarchy,aristocracy, and all hereditary political power;civic virtue; and vilification ofpolitical corruption.[69]TheFounding Fathers of the United States, who included Washington, Jefferson,John Adams,Benjamin Franklin,Alexander Hamilton,John Jay,James Madison,Thomas Paine, and many others, were inspired byGreco-Roman,Renaissance, andEnlightenmentphilosophies and ideas.[70][71] TheArticles of Confederationand Perpetual Unionwere ratified in 1781 and established a decentralized government that operated until 1789.[67]After the British surrender at thesiege of Yorktownin 1781, American sovereignty was internationally recognized by theTreaty of Paris(1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south toSpanish Florida.[72]TheNorthwest Ordinance(1787) established the precedent by which the country's territory would expand with theadmission of new states, rather than the expansion of existing states.[73]TheU.S. Constitutionwas drafted at the 1787Constitutional Conventionto overcome the limitations of the Articles. 
It went into effect in 1789, creating afederal republicgoverned bythree separate branchesthat together ensured a system ofchecks and balances.[74]George Washingtonwas electedthe country's first president under the Constitution, and theBill of Rightswas adopted in 1791 to allay skeptics' concerns about the power of the more centralized government.[75]His resignation as commander-in-chiefafter the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and thepeaceful transfer of power.[76] In the late 18th century, American settlers began toexpand westwardin larger numbers, many with a sense ofmanifest destiny.[77][78]TheLouisiana Purchaseof 1803 from France nearly doubled the territory of the United States.[79][80]Lingering issues with Britain remained, leading to theWar of 1812, which was fought to a draw.[81][82]Spain ceded Floridaand its Gulf Coast territory in 1819.[83]TheMissouri Compromiseof 1820, which admittedMissourias aslave stateandMaineas a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. The compromise further prohibited slavery in all other lands of the Louisiana Purchase north of the36°30′ parallel.[84]As Americans expanded further into land inhabited by Native Americans, the federal government often appliedpoliciesofIndian removalorassimilation.[85][86]The most significant such legislation was theIndian Removal Act of 1830, a key policy of PresidentAndrew Jackson. 
It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march.[87] Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi.[88][89] The United States annexed the Republic of Texas in 1845,[90] and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest.[91] Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado, and Utah.[77][92] The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s.[93] Additional western territories and states were created.[94] During the colonial period, slavery had been legal in the American colonies, becoming the main labor force in the agriculture-intensive Southern Colonies from Maryland to Georgia.
The practice began to be significantly questioned during the American Revolution,[95]and spurred by an activeabolitionist movementthat had reemerged in the 1830s, states inthe Northenacted anti-slavery laws in those states.[96]At the same time, support for slavery had strengthened inSouthern states, with widespread use of inventions such as thecotton gin(1793) having made slavery immensely profitable forSouthern elites.[97][98][99]Throughout the 1850s, thissectional conflict regarding slaverywas further inflamed by national legislation in Congress and decisions of the Supreme Court: TheFugitive Slave Act of 1850mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states. TheKansas–Nebraska Actof 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise.[100]Finally, in itsDred Scott decisionof 1857, the Supreme Court ruled against a slave brought into non-slave territory and declared the entire Missouri Compromise to be unconstitutional. Theseevents exacerbated tensions between North and Souththat would culminate in theAmerican Civil War(1861–1865).[101][102] Beginning withSouth Carolina, eleven slave statessecededfrom the United States in 1861 to declare theConfederate States of America. All other states remained inthe Union.[o][103][104]War broke out in April 1861 after the Confederacybombarded Fort Sumter.[105][106]After the January 1863Emancipation Proclamation, many freed slaves joined theUnion army.[107]The warbegan to turn in the Union's favorfollowing the 1863Siege of VicksburgandBattle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in theBattle of Appomattox Court House.[108]TheReconstruction erafollowed the war. 
Afterthe assassinationof PresidentAbraham Lincoln,Reconstruction Amendmentswere passed toprotect the rights of African Americans, abolishing slavery and involuntary servitude, establishing equal protection of the laws for all persons, and prohibiting discrimination in citizens' voting rights on the basis of race or previous enslavement.[109][110][111] National infrastructure, includingtranscontinental telegraphandrailroads, spurred growth in theAmerican frontier.[114]From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe.[115]Most came through theport of New York City, and New York City and other large cities on theEast Coastbecame home to largeJewish,Irish, andItalianpopulations, while manyGermansand Central Europeans moved to theMidwest. At the same time, about one millionFrench Canadiansmigrated fromQuebectoNew England.[116]During theGreat Migration, millions of African Americansleft the rural Southfor urban areas in the North.[117]Alaska was purchasedfromRussiain 1867.[118] TheCompromise of 1877effectively ended Reconstruction andwhite supremacists took local control of Southern politics.[119][120]African Americans endured a period of heightened, overt racism following Reconstruction, a time often called thenadir of American race relations.[121][122]A series of Supreme Court decisions, includingPlessy v. 
Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowingJim Crow lawsin the South to remain unchecked,sundown townsin the Midwest, andsegregation in communities across the country, which would be reinforced by the policy ofredlininglater adopted by the federalHome Owners' Loan Corporation.[123] An explosion of technological advancementaccompanied by the exploitation of cheap immigrant labor[124]led torapid economic expansion during the late 19th and early 20th centuries, allowing the United States to outpace the economies of England, France, and Germany combined.[125][126]This fostered the amassing of power bya few prominent industrialists, largely by their formation oftrustsandmonopoliesto prevent competition.[127]Tycoonsled the nation's expansion in therailroad,petroleum, andsteelindustries. The United States emerged as a pioneer of theautomotive industry.[128]These changes were accompanied by significant increases ineconomic inequality,slum conditions, andsocial unrest, creating the environment forlabor unionsandsocialist movementsto begin to flourish.[129][130][131]This period eventually ended with the advent of theProgressive Era, which was characterized by significant reforms.[132][133] Pro-American elements in Hawaiioverthrew the Hawaiian monarchy; the islandswere annexedin 1898. That same year,Puerto Rico,the Philippines, andGuamwere ceded to the U.S. by Spain after the latter's defeat in theSpanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.)[134]American Samoawas acquired by the United States in 1900 after theSecond Samoan Civil War.[135]TheU.S. 
Virgin Islands were purchased from Denmark in 1917.[136] The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers.[137] In 1920, a constitutional amendment granted nationwide women's suffrage.[138] During the 1920s and 1930s, radio for mass communication and early television transformed communications nationwide.[139] The Wall Street Crash of 1929 triggered the Great Depression, which President Franklin D. Roosevelt responded to with the New Deal, a series of sweeping programs and public works projects combined with financial reforms and regulations, all intended to protect against future economic depressions.[140][141] Initially neutral during World War II, the U.S. began supplying war materiel to the Allies of World War II in March 1941 and entered the war in December after the Empire of Japan's attack on Pearl Harbor.[142][143] The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war.[144][145] The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, Soviet Union, and China.[146][147] The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence.[148] The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War.[149][150][151] The U.S.
utilized the policy ofcontainmentto limit the USSR's sphere of influence, engaged inregime changeagainst governments perceived to be aligned with Moscow, and prevailed in theSpace Race, which culminated with thefirst crewed Moon landingin 1969.[152][153]Domestically, the U.S.experienced economic growth,urbanization, andpopulation growth following World War II.[154]Thecivil rights movementemerged, withMartin Luther King Jr.becoming a prominent leader in the early 1960s.[155]TheGreat Societyplan of PresidentLyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingeringinstitutional racism.[156] Thecounterculture movementin the U.S. brought significant social changes, including the liberalization of attitudes towardrecreational drug useandsexuality.[157][158]It also encouragedopen defiance of the military draft(leading to theend of conscriptionin 1973) andwide oppositiontoU.S. intervention in Vietnam(with the U.S. totally withdrawing in 1975).[159]A societal shift in the roles of womenwas significantly responsible for the large increase in female paid labor participation during the 1970s, and by 1985 the majority of American women aged 16 and older were employed.[160]Thefall of communismand thecollapse of the Soviet Unionfrom 1989 to 1991 marked the end of the Cold War andleft the United States as the world's sole superpower.[161][162][163][164]This cemented the United States' global influence, reinforcing the concept of the "American Century" as it dominated international political, economic, and military affairs.[165][166] The 1990s saw thelongest recorded economic expansion in American history, a dramaticdecline in U.S. crime rates, andadvances in technology. 
Throughout this decade, technological innovations such as theWorld Wide Web, the evolution of thePentium microprocessorin accordance withMoore's law, rechargeablelithium-ion batteries, the firstgene therapytrial, andcloningeither emerged in the U.S. or were improved upon there. TheHuman Genome Projectwas formally launched in 1990, whileNasdaqbecame the first stock market in the United States to trade online in 1998.[167] In theGulf Warof 1991, anAmerican-led international coalition of statesexpelled anIraqiinvasion force that had occupied neighboringKuwait.[168]TheSeptember 11 attackson the United States in 2001 by thepan-Islamistmilitant organizational-Qaedaled to thewar on terror, and subsequentmilitary interventions in AfghanistanandIraq.[169][170] TheU.S. housing bubbleculminated in 2007 with theGreat Recession, the largest economic contraction since the Great Depression.[171]Coming to a head in the 2010s,political polarization in the countryincreased between liberal and conservative factions.[172][173][174]This polarization was capitalized upon in theJanuary 2021 Capitol attack,[175]when a mob of insurrectionists[176]entered theU.S. Capitoland sought to prevent the peaceful transfer of power[177]in anattempted self-coup d'état.[178]The2021 Taliban offensive(May–August) ended theWar in Afghanistanone year after theU.S. 
signed a peace agreement with the Taliban.[179]

The United States is the world's third-largest country by total area, behind Russia and Canada.[c][180][181] The 48 contiguous states and the District of Columbia occupy a combined area of 3,119,885 square miles (8,080,470 km²).[12][182] The coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region.[183] The Appalachian Mountains and the Adirondack massif separate the East Coast from the Great Lakes and the grasslands of the Midwest.[184] The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the heart of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast.[184]

The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado.[185] Farther west are the rocky Great Basin and the Chihuahua, Sonoran, and Mojave deserts.[186] In the northwest corner of Arizona, carved by the Colorado River over millions of years, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Sierra Nevada and Cascade mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California,[187] about 84 miles (135 km) apart.[188] At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali is the highest peak in the country and continent.[189] Active volcanoes are common throughout Alaska's Alexander and Aleutian Islands.
Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania.[190] The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature.[191] In 2021, the United States had 8% of global permanent meadows and pastures and 10% of cropland.[192]

With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south.[193] The western Great Plains are semi-arid.[194] Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical.[195]

The United States receives more high-impact extreme weather incidents than any other country.[196][197] States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley.[198] Extreme weather became more frequent in the U.S. in the 21st century, with three times the number of reported heat waves as in the 1960s. In the American Southwest, droughts became more persistent and more severe.[199] The regions considered the most attractive to the population are the most vulnerable.[200]

The U.S.
is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland.[202] The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians,[203] and around 91,000 insect species.[204]

There are 63 national parks, and hundreds of other federally managed parks, forests, and wilderness areas, managed by the National Park Service and other agencies.[205] About 28% of the country's land is publicly owned and federally managed,[206] primarily in the Western States.[207] Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes.[208][209]

Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation,[210][211] and climate change.[212][213] The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues.[214] The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act.[215] The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act.[216] In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index.[217]

The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. also asserts sovereignty over five unincorporated territories and several uninhabited island possessions.[218][219] The U.S.
is the world's oldest surviving federation,[220] and its presidential system of national government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization.[221] The Constitution of the United States serves as the country's supreme legal document.[222] Most scholars describe the United States as a liberal democracy.[223][p]

Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. It is regulated by a strong system of checks and balances.[234] The three-branch system is known as the presidential system, in contrast to the parliamentary system, where the executive is part of the legislative body. Many countries around the world imitated this aspect of the 1789 Constitution of the United States, especially in the Americas.[244]

In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the national government, the states, and Indian tribes.[245][246] The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands.[218] Residents of the states are represented by their elected state and local governments, which are administrative divisions of the states.[247] States are subdivided into counties or county equivalents, and further divided into municipalities. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C.[248] The federal district is an administrative division of the federal government.[249]

Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights.[246][245][250][251]

In addition to the five major territories, the U.S.
also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean.[218] The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed.[218]

The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties.[252] Since then, the United States has operated as a de facto two-party system, though the parties in that system have been different at different times.[253] The two main national parties are presently the Democratic and the Republican. The former is perceived as relatively liberal in its political platform, while the latter is perceived as relatively conservative.[254]

The United States has an established structure of foreign relations, and as of 2024 it has the world's second-largest diplomatic corps. It is a permanent member of the United Nations Security Council,[255] and home to the United Nations headquarters.[256] The United States is a member of the G7,[257] G20,[258] and OECD intergovernmental organizations.[259] Almost all countries have embassies and many have consulates (official representatives) in the country.
Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran,[260] North Korea,[261] and Bhutan.[262] Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations.[263] The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression.[264] Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan.[265]

The United States has a "Special Relationship" with the United Kingdom[266] and strong ties with Canada,[267] Australia,[268] New Zealand,[269] the Philippines,[270] Japan,[271] South Korea,[272] Israel,[273] and several European Union countries (France, Italy, Germany, Spain, and Poland).[274] The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Free Trade Agreement. In South America, Colombia is traditionally considered to be the closest ally of the United States.[275] The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association.[242] It has increasingly conducted strategic cooperation with India,[276] while its ties with China have steadily deteriorated.[277][278] Since 2014, the U.S. has become a key ally of Ukraine.[279][280]

The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, which is headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S.
Army, Marine Corps, Navy, Air Force, and Space Force.[281] The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime.[282]

The United States spent $916 billion on its military in 2023, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP.[283][284] The U.S. possesses 42% of the world's nuclear weapons, the second-largest stockpile after that of Russia.[285]

The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and the Indian Armed Forces.[286] The military operates about 800 bases and facilities abroad,[287] and maintains deployments of more than 100 active-duty personnel in 25 foreign countries.[288] The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era.[289]

State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor.[290][291][292] They are distinct from the state's National Guard units in that they cannot become federalized entities. A state's National Guard personnel, however, may be federalized under the National Defense Act Amendments of 1933, which created the Guard and provides for the integration of Army National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.[293]

There are about 18,000 police agencies in the United States, from the local to the national level.[294] Law in the United States is mainly enforced by local police departments and sheriff departments in their municipal or county jurisdictions. The state police departments have authority in their respective state, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S.
Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, and enforcing U.S. federal courts' rulings and federal laws.[295] State courts conduct most civil and criminal trials,[296] and federal courts handle designated crimes and appeals of state court decisions.[297]

There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories."[298] Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities.[299] Federal prisons are run by the Federal Bureau of Prisons and hold people who have been convicted of federal crimes, including pretrial detainees.[299] State prisons, run by the official department of correction of each state, hold sentenced people serving prison time (usually longer than one year) for felony offenses.[299] Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year).[299] Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined.[300]

In January 2023, the United States had the sixth-highest per capita incarceration rate in the world (531 people per 100,000 inhabitants) and the largest prison and jail population in the world, with more than 1.9 million people incarcerated.[298][301][302] An
analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher".[303]

The U.S. economy has been the world's largest nominally since about 1890.[305] The 2024 U.S. gross domestic product (GDP) of more than $29 trillion[306][e] was the highest in the world, constituting over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7.[307] The country ranks first in the world by nominal GDP,[308] second when adjusted for purchasing power parity (PPP),[15] and ninth by PPP-adjusted GDP per capita.[15] It has the highest disposable household income per capita among OECD countries.[309] In February 2024, the total U.S. federal government debt was $34.4 trillion.[310]

Of the world's 500 largest companies by revenue, 136 were headquartered in the U.S. in 2023,[312] which is the highest number of any country.[313] The U.S. dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. treasuries market, and its linked eurodollar market.[304] Several countries use it as their official currency, and in others it is the de facto currency.[314][315] The U.S. has free trade agreements with several countries, including the USMCA.[316] It ranked second in the Global Competitiveness Report in 2019, after Singapore.[317] Although the United States has reached a post-industrial level of development[318] and is often described as having a service economy,[318][319] it remains a major industrial power.[320] In 2021, the U.S.
manufacturing sector was the world's second-largest after China's.[321]

New York City is the world's principal financial center[323][324] and the epicenter of the world's largest metropolitan economy.[325] The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume.[326][327] The United States is at or near the forefront of technological advancement and innovation[328] in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace, and military equipment.[180] The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity.[329] The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan.[330] The United States is the world's largest importer and second-largest exporter.[r] It is by far the world's largest exporter of services.[333]

Americans have the highest average household[334] and employee income among OECD member states, and the fourth-highest median household income in 2023,[335] up from sixth-highest in 2013.[336] With personal consumption expenditures of over $18.5 trillion in 2023,[337] the U.S. has a heavily consumer-driven economy and is the world's largest consumer market.[338] The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires.[339]

Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%.[340] U.S. wealth inequality increased substantially since the late 1980s,[341] and income inequality in the U.S. reached a record high in 2019.[342] By 2024, the country had some of the highest wealth and income inequality among OECD countries.[343] Since the 1970s, there has been a decoupling of U.S.
wage gains from worker productivity.[344] In 2016, the top fifth of earners took home more than half of all income,[345] giving the U.S. one of the widest income distributions among OECD countries.[346][344] There were about 771,480 homeless persons in the U.S. in 2024.[347] In 2022, 6.4 million children experienced food insecurity.[348] Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where they will get their next meal or when.[349] Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty.[350]

The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries.[351][352] It is the only advanced economy that does not guarantee its workers paid vacation nationally[353] and is one of a few countries in the world without federal paid family leave as a legal right.[354] The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers.[355]

The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century.[356] Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century.[357] By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production.[358]

In the 21st century, the United States continues to be one of the world's foremost scientific powers,[359] though China has emerged as a major competitor in many fields.[360] The U.S.
has the highest total research and development expenditure of any country[361] and ranks ninth as a percentage of GDP.[362] In 2022, the United States was (after China) the country with the second-highest number of published scientific papers.[363] In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators.[364] In 2023 and 2024, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index.[365][366] The United States is considered to be the leading country in the development of artificial intelligence technology.[367] In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine.[368]

The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958.[369][370] NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones.[371][372] Other major endeavors by NASA include the Space Shuttle program (1981–2011),[373] the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively),[374][375] and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance).[376] NASA is one of five agencies collaborating on the International Space Station (ISS);[377] U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support.[378]

The United States private sector dominates the global commercial spaceflight industry.[379] Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX.
NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight.[380]

In 2023, the United States received approximately 84% of its energy from fossil fuels; the largest source of the country's energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%).[381][382] In 2022, the United States constituted only about 4% of the world's population but consumed around 16% of the world's energy.[383] The U.S. ranks as the second-highest emitter of greenhouse gases, behind China.[384]

The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity.[385] It also has the highest number of nuclear power reactors of any country.[386] As of 2024, the U.S. plans to triple its nuclear power capacity by 2050.[387]

The automotive industry in the United States is the second-largest by motor vehicle manufacturing output, having dominated the world market for much of the twentieth century. Detroit, Michigan, is still referred to as "Motor City" because of its historical significance as the center of the American automobile industry, having long been home to America's "Big Three" car manufacturers. The U.S. is in the top ten countries for highest vehicle ownership per capita, with 850 vehicles per 1,000 people in 2022. The 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world.[388][389] The extensive Interstate Highway System connects all major cities and is funded mostly by the federal government but maintained by state departments of transportation, supplemented by state expressways and some private toll roads. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks.
About 11% use some form of public transportation.[390][391] Public transportation in the United States is well developed in the largest urban areas, notably New York City, Chicago, Boston, Philadelphia, and Portland, Oregon; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities.[392] Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, Illinois, and the West Coast.

The United States has an extensive air transportation network, and the country accounted for just over half of the world's aerospace production in 2016.[395] U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways.[396] Among the busiest 50 airports in the world, 16 are in the United States, as well as five of the top 10.[397] The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia.[393][397] In 2022, most of the 19,969 U.S. airports[398] were owned and operated by local government authorities, and there are also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration has provided security at most major airports since 2001.

The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km),[399] handles mostly freight[400][401] (in contrast to more passenger-centered rail in Europe[402]). Because they are often privately owned operations as well, U.S.
railroads lag behind those of the rest of the world in terms of electrification.[403] Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the U.S. being the Port of Los Angeles.[404] The country's inland waterways are the world's fifth-longest, totaling 41,009 km (25,482 mi).[405] They are used extensively for freight, recreation, and a small amount of passenger traffic. Miami is a major international hub for cruise ship and airline passengers visiting the Caribbean.

Transportation in Alaska relies more on airplanes, ferries, all-terrain vehicles, and snowmobiles because many settlements are not connected to the contiguous North American road network. Long distances and the requirements of the Jones Act result in higher transportation costs to Hawaii and the insular areas from the rest of the United States.

The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[s][407] making the United States the third-most-populous country in the world, after China and India.[180] The Census Bureau's official 2024 population estimate was 340,110,988, an increase of 2.6% since the 2020 census.[13] According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. population had a net gain of one person every 16 seconds, or about 5,400 people per day.[408] In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married.[409] In 2023, the total fertility rate for the U.S.
stood at 1.6 children per woman,[410] and, at 23%, it had the world's highest rate of children living in single-parent households in 2019.[411]

The United States has a diverse population; 37 ancestry groups have more than one million members.[412] White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population.[413][414] Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population.[412] Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%,[412] and some 574 native tribes are recognized by the federal government.[415] In 2022, the median age of the United States population was 38.9 years.[416]

While many languages are spoken in the United States, English is by far the most commonly spoken and written.[417] English was made the official language of the United States by Executive Order 14224 in 2025.[4] However, Congress has never passed a bill to designate English as the official language of all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have declared English their sole official language; 19 states and the District of Columbia have no official language.[418] Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian),[419] Alaska (twenty Native languages),[t][420] South Dakota (Sioux),[421] American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro).
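The Population Clock figure quoted earlier (a net gain of one person every 16 seconds, or about 5,400 people per day) is a straightforward rate conversion; a minimal sketch, taking the Bureau's 16-second interval as given:

```python
# Convert the Census Bureau's reported net-gain interval into a daily figure.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds in a day
NET_GAIN_INTERVAL = 16           # one net new person every 16 seconds (reported rate)

people_per_day = SECONDS_PER_DAY / NET_GAIN_INTERVAL
print(int(people_per_day))       # 5400, matching the "about 5,400 people per day" figure
```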
In total, 169 Native American languages are spoken in the United States.[422] In Puerto Rico, Spanish is more widely spoken than English.[423]

According to the American Community Survey (2020),[424] some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020.[425]

America's immigrant population is by far the world's largest in absolute terms.[426][427] In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population.[428] In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.[429] In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%).[430] In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence.[431] In fiscal year 2024 alone, according to the Migration Policy Institute, the United States resettled 100,034 refugees, which "re-cements the United States' role as the top global resettlement destination, far surpassing other major resettlement countries in Europe and Canada".[432]

The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment.[433][434] Religious practice is widespread, among the most diverse in the
world,[435] and profoundly vibrant.[436] The country has the world's largest Christian population. It has the fourth-largest population of Roman Catholics;[437] Pope Leo XIV, of Chicago, Illinois, is their current head. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, many New Age movements, and Native American religions.[438] Religious practice varies significantly by region.[439] "Ceremonial deism" is common in American culture.[440]

The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual.[441][442] In the "Bible Belt", located within the Southern United States, evangelical Protestantism plays a significant role culturally, whereas New England and the Western United States tend to be more secular.[439][443] Mormonism, a Restorationist movement founded in the U.S. in 1830,[444] is the predominant religion in the state of Utah and a major religion in Idaho.

About 82% of Americans live in urban areas, including suburbs;[180] about half of those reside in cities with populations over 50,000.[445] In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities (New York City, Los Angeles, Chicago, and Houston) had populations exceeding two million.[446] Many U.S. metropolitan populations are growing rapidly, particularly in the South and West.[447]

According to the Centers for Disease Control and Prevention (CDC), average American life expectancy at birth was 78.4 years in 2023 (75.8 years for men and 81.1 years for women). This was a gain of 0.9 year from 77.5 years in 2022, and the CDC noted that the new average was largely driven by "decreases in mortality due to COVID-19, heart disease, unintentional injuries, cancer and diabetes".[452] Starting in 1998, life expectancy in the U.S.
fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since.[453] The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries.[454] Approximately one-third of the U.S. adult population is obese and another third is overweight.[455] The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated.[456] The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance.[457] Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications. In 2010, President Obama signed the Patient Protection and Affordable Care Act into law.[u][458] Abortion in the United States is not federally protected and is illegal or restricted in 17 states.[459]

American primary and secondary education (known in the U.S. as K–12, "kindergarten through 12th grade") is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17.[461] The U.S.
spends more on education per student than any other country,[462] an average of $18,614 per year per public elementary and secondary school student in 2020–2021.[463] Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree.[464] The U.S. literacy rate is near-universal.[180][465] The country has the most Nobel Prize winners of any nation, with 411 laureates (who have won 413 awards).[466][467]

U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25.[468][469] American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer coursework and degree programs covering the first two years of college study. They often have more open admission policies, shorter academic programs, and lower tuition.[470]

As for public expenditures on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than all other nations in combined public and private spending.[471] Colleges and universities directly funded by the federal government do not charge tuition and are limited to military personnel and government employees; they include the U.S. service academies, the Naval Postgraduate School, and military staff colleges.
Despite some student loan forgiveness programs in place,[472] student loan debt increased by 102% between 2010 and 2020,[473] and exceeded $1.7 trillion in 2022.[474]

Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government.[476][477] Culturally, the country has been described as having the values of individualism and personal autonomy,[478][479] as well as a strong work ethic,[480] competitiveness,[481] and voluntary altruism towards others.[482][483][484] According to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin.[485] The United States is home to a wide variety of ethnic groups, traditions, and values.[486][487] The country has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food.[488][489] The influence that the United States exerts on other countries through soft power is referred to as Americanization.[490]

Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries.[491] Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa.[492] More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture.
The American Dream, or the perception that Americans enjoy high social mobility, plays a key role in attracting immigrants.[493][494] Whether this perception is accurate has been a topic of debate.[495][496][497] While mainstream culture holds that the United States is a classless society,[498] scholars identify significant differences between the country's social classes, affecting socialization, language, and values.[499][500] Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well.[501]

The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States."[502] It is composed of four sub-agencies.

The United States is considered to have the strongest protections of free speech of any country under the First Amendment,[503] which protects flag desecration, hate speech, blasphemy, and lese-majesty as forms of protected expression.[504][505][506] A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured.[507] They are also the "most supportive of freedom of the press and the right to use the Internet without government censorship".[508] The U.S. is a socially progressive country[509] with permissive attitudes surrounding human sexuality.[510] LGBT rights in the United States are advanced by global standards.[510][511][512]

Colonial American authors were influenced by John Locke and various other Enlightenment philosophers.[514][515] The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson.
Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature.[516][517] An early novel is William Hill Brown's The Power of Sympathy, published in 1791. Writer and critic John Neal in the early- to mid-nineteenth century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe,[518] who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement;[519][520] Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). Major American poets of the nineteenth-century American Renaissance include Walt Whitman, Melville, and Emily Dickinson.[521][522] Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881).

As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor.[523][524] Naturalism, regionalism, and realism were the major literary movements of the period.[525][526]

While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures.[527] Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture.
An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora.[528][529] In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel,[530] while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society.[531][532] Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language.[533] Twelve American laureates have won the Nobel Prize in Literature.[534]

Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States.[503] The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (FOX). The four major broadcast television networks are all commercial entities. Cable television offers hundreds of channels catering to a variety of niches.[535] In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts.[536] In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC).[537] Much of the public radio broadcasting is supplied by NPR, incorporated in February 1970 under the Public Broadcasting Act of 1967.[538] U.S.
newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today.[539] About 800 publications are produced in Spanish.[540][541] With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most popular websites used in the U.S. are Google, YouTube, Amazon, Yahoo, and Facebook—all of them American-owned.[542]

In 2022, the video game market of the United States was the world's largest by revenue.[543] There are 444 publishers, developers, and hardware companies in California alone.[544]

The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater.[545] By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show.[546] The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway.[547]

Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater also has an active community theater culture.[548]

The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances. One is also given for regional theater.
Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award.[549]

Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers.[551] Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have arrived early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe.[503]

The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene.[552]

American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States.
Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks.[553]

The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry.[554] The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States[555] and the fourth-largest in the world.[556]

American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa.[557] The rhythmic and lyrical styles of African-American music in particular have influenced American music.[558] Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century.[559][560] The electric guitar, first invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll.[561] The synthesizer, turntablism, and electronic music were also largely developed in the U.S.

Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W. C.
Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century.[562] Country music developed in the 1920s,[563] rock and roll in the 1930s,[561] and bluegrass[564] and rhythm and blues in the 1940s.[565] In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters.[566] The musical forms of punk and hip hop both originated in the United States in the 1970s.[567]

The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022.[568] Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA).[569] Mid-20th-century American pop stars such as Frank Sinatra[570] and Elvis Presley[571] became global celebrities and best-selling music artists,[562] as have artists of the late 20th century, such as Michael Jackson,[572] Madonna,[573] Whitney Houston,[574] and Mariah Carey,[575] and of the early 21st century, such as Eminem,[576] Britney Spears,[577] Lady Gaga,[577] Katy Perry,[577] Taylor Swift, and Beyoncé.[578]

The United States has the world's largest apparel market by revenue.[579] Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles.[580] New York, with its fashion week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since its inception in the early 20th century.[581] The headquarters of many designer labels reside in Manhattan. Labels cater to niche markets, such as preteens.
New York Fashion Week is one of the most influential fashion weeks in the world and occurs twice a year;[582] the annual Met Gala in Manhattan is commonly known as the fashion world's "biggest night".[583][584]

The U.S. film industry has a worldwide influence and following. Hollywood, a district in northern Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry.[585][586][587] The major film studios of the United States are the primary source of the most commercially successful and most ticket-selling movies in the world.[588][589] Since the early 20th century, the U.S. film industry has largely been based in and around Hollywood, although in the 21st century an increasing number of films are not made there, and film companies have been subject to the forces of globalization.[590] The Academy Awards, popularly known as the Oscars, have been held annually by the Academy of Motion Picture Arts and Sciences since 1929,[591] and the Golden Globe Awards have been held annually since January 1944.[592]

The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s,[593] with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures.[594][595] In the 1970s, "New Hollywood", or the "Hollywood Renaissance",[596] was defined by grittier films influenced by French and Italian realist pictures of the post-war period.[597] The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema.[598][599]

Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash.
Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour,[600] beef, and milk, to create a distinctive American cuisine.[601][602] New World crops, especially pumpkin, corn, potatoes, and turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion.[603]

Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups.[604][605][606][607] Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed.[608] American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. This would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers.[609][610]

The United States restaurant industry was projected at $899 billion in sales for 2020,[611][612] and employed more than 15 million people, representing 10% of the nation's workforce directly.[611] It is the country's second-largest private employer and the third-largest employer overall.[613][614] The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City alone.[615] Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628.[616][617][618] In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine.
With more than 1,100,000 acres (4,500 km²) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France.[619][620]

The American fast-food industry developed alongside the nation's car culture.[621] American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s.[622][623] American fast-food restaurant chains, such as McDonald's, Kentucky Fried Chicken, Dunkin' Donuts, and many others, have numerous outlets around the world.[624]

The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey.[625] While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide.[626] Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact.[627] The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined.[628]

American football is by several measures the most popular spectator sport in the United States;[629] the National Football League has the highest average attendance of any sports league in the world, and the Super Bowl is watched by tens of millions globally.[630] However, baseball has been regarded as the U.S. "national sport" since the late 19th century. After American football, the next four most popular professional team sports are basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. The most-watched individual sports in the U.S.
are golf and auto racing, particularly NASCAR and IndyCar.[631][632]

On the collegiate level, earnings for the member institutions exceed $1 billion annually,[633] and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most-watched national sporting events.[634] In the U.S., the intercollegiate sports level serves as a feeder system for professional sports. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function.[635]

Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe.[636] The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country.[637][638][639]

In international professional competition, the U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and the Olympic soccer tournament four times each.[640] The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.[641] The 1999 FIFA Women's World Cup was also hosted by the United States. Its final match was watched by 90,185 spectators, setting the world record for the most-attended women's sporting event at the time.[642]
https://en.wikipedia.org/wiki/United_States
Abstract Syntax Notation One (ASN.1) is a standard interface description language (IDL) for defining data structures that can be serialized and deserialized in a cross-platform way. It is broadly used in telecommunications and computer networking, and especially in cryptography.[1]

Protocol developers define data structures in ASN.1 modules, which are generally a section of a broader standards document written in the ASN.1 language. The advantage is that the ASN.1 description of the data encoding is independent of a particular computer or programming language. Because ASN.1 is both human-readable and machine-readable, an ASN.1 compiler can compile modules into libraries of code, codecs, that decode or encode the data structures. Some ASN.1 compilers can produce code to encode or decode several encodings, e.g. packed, BER, or XML.

ASN.1 is a joint standard of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) in ITU-T Study Group 17 and the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC), originally defined in 1984 as part of CCITT X.409:1984.[2] In 1988, ASN.1 moved to its own standard, X.208, due to wide applicability. The substantially revised 1995 version is covered by the X.680 series.[3] The latest revision of the X.680 series of recommendations is the 6.0 Edition, published in 2021.[4]

ASN.1 is a data type declaration notation. It does not define how to manipulate a variable of such a type. Manipulation of variables is defined in other languages such as SDL (Specification and Description Language) for executable modeling or TTCN-3 (Testing and Test Control Notation) for conformance testing. Both of these languages natively support ASN.1 declarations. It is possible to import an ASN.1 module and declare a variable of any of the ASN.1 types declared in the module.

ASN.1 is used to define a large number of protocols. Its most extensive uses continue to be telecommunications, cryptography, and biometrics.
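To make the compiler idea concrete: an ASN.1 compiler targeting Python might turn a small SEQUENCE type into a native record plus codec stubs. This is a loose sketch, not the output of any real tool, and the module, type, and field names below are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class Question:
    """Mirrors a hypothetical ASN.1 type:

        Question ::= SEQUENCE {
            trackingNumber  INTEGER,
            question        IA5String
        }

    A real generated codec would pair this record with encode/decode
    routines for whichever encoding rules the protocol specifies.
    """
    tracking_number: int
    question: str

q = Question(tracking_number=5, question="hello")
```

Application code then works with `q` as an ordinary typed object, while the generated (here, omitted) codec handles the wire representation.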
ASN.1 is closely associated with a set of encoding rules that specify how to represent a data structure as a series of bytes. The standard ASN.1 encoding rules include the Basic Encoding Rules (BER), the Canonical and Distinguished Encoding Rules (CER and DER), the Packed Encoding Rules (PER), and the XML Encoding Rules (XER). ASN.1 recommendations provide a number of predefined encoding rules. If none of the existing encoding rules are suitable, the Encoding Control Notation (ECN) provides a way for users to define their own customized encoding rules.

Privacy-Enhanced Mail (PEM) encoding is entirely unrelated to ASN.1 and its codecs, but encoded ASN.1 data, which is often binary, is often PEM-encoded so that it can be transmitted as textual data, e.g. over SMTP relays, or through copy/paste buffers.

An ASN.1 module defining the messages (data structures) of a fictitious Foo Protocol could be a specification published by the creators of the protocol. Conversation flows, transaction interchanges, and states are not defined in ASN.1, but are left to other notations and textual description of the protocol. A message that complies with the Foo Protocol and is sent to the receiving party, known as a protocol data unit (PDU), is written as a value of one of the module's types.

ASN.1 supports constraints on values and sizes, and extensibility. Such a specification can, for example, constrain trackingNumbers to have a value between 0 and 199 inclusive, and questionNumbers to have a value between 10 and 20 inclusive. The size of the questions array can be between 0 and 10 elements, with the answers array between 1 and 10 elements, and an anArray field can be declared as a fixed-length 100-element array of integers that must be in the range 0 to 1000. The '...' extensibility marker means that the FooHistory message specification may have additional fields in future versions of the specification; systems compliant with one version should be able to receive and transmit transactions from a later version, though able to process only the fields specified in the earlier version. Good ASN.1 compilers will generate (in C, C++, Java, etc.)
source code that will automatically check that transactions fall within these constraints. Transactions that violate the constraints should not be accepted from, or presented to, the application. Constraint management in this layer significantly simplifies protocol specification because the applications will be protected from constraint violations, reducing risk and cost.

To send the myQuestion message through the network, the message is serialized (encoded) as a series of bytes using one of the encoding rules. The Foo Protocol specification should explicitly name one set of encoding rules to use, so that users of the Foo Protocol know which one they should use and expect.

When the myQuestion data structure is encoded in DER format, the result is conventionally written out in hexadecimal. DER is a type–length–value encoding, so the resulting byte sequence can be interpreted with reference to the standard SEQUENCE, INTEGER, and IA5String types. Alternatively, it is possible to encode the same ASN.1 data structure with XML Encoding Rules (XER) to achieve greater human readability "over the wire"; it then occupies 108 octets (a count that includes the spaces used for indentation). Alternatively, if Packed Encoding Rules are employed, only 122 bits are produced (16 octets amount to 128 bits, but here only 122 bits carry information and the last 6 bits are merely padding). In this format, type tags for the required elements are not encoded, so the data cannot be parsed without knowing the expected schemas used to encode it. Additionally, the bytes for the value of the IA5String are packed using 7-bit units instead of 8-bit units, because the encoder knows that encoding an IA5String byte value requires only 7 bits.
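The 7-bit packing just described can be sketched in a few lines of Python. This is a loose illustration of the idea (packing ASCII characters into consecutive 7-bit units and padding the final octet with zero bits), not a conformant PER implementation:

```python
def pack_7bit(text: str) -> bytes:
    # Concatenate the low 7 bits of each ASCII character into one bit
    # string, then pad the tail with zero bits up to whole octets.
    bits = "".join(format(ord(c) & 0x7F, "07b") for c in text)
    bits += "0" * (-len(bits) % 8)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# "hi" is 2 characters = 14 bits, padded with 2 zero bits to 2 octets,
# versus the 2 full octets plus tag and length that BER/DER would need.
packed = pack_7bit("hi")
# packed.hex() == "d1a4"
```

The saving is small for two characters but grows with string length, which is the point of PER's schema-aware packing.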
However, the length bytes are still encoded here, even for the first integer tag 01 (but a PER packer could also omit it if it knows that the allowed value range fits in 8 bits, and it could even compact the single value byte 05 into fewer than 8 bits if it knows that the allowed values can only fit in a smaller range). The last 6 bits in the encoded PER are padded with null bits in the 6 least significant bits of the last byte c0; these extra bits may not be transmitted or used for encoding something else if this sequence is inserted as part of a longer unaligned PER sequence. This means that unaligned PER data is essentially an ordered stream of bits, not an ordered stream of bytes as with aligned PER, and that it will be somewhat more complex to decode by software on usual processors because it requires additional contextual bit-shifting and masking rather than direct byte addressing (though the same remark would be true of modern processors and memory/storage units whose minimum addressable unit is larger than one octet). However, modern processors and signal processors include hardware support for fast internal decoding of bit streams with automatic handling of computing units that cross the boundaries of addressable storage units (this is needed for efficient processing in data codecs for compression/decompression or with some encryption/decryption algorithms). If alignment on octet boundaries were required, an aligned PER encoder would instead pad each octet individually with null bits in its unused most significant bits.

Most of the tools supporting ASN.1 parse ASN.1 modules and generate code for the equivalent data structures together with their encoders and decoders. A list of tools supporting ASN.1 can be found on the ITU-T Tool web page.

ASN.1 is similar in purpose and use to Google Protocol Buffers and Apache Thrift, which are also interface description languages for cross-platform data serialization.
Like those languages, it has a schema (in ASN.1, called a "module"), and a set of encodings, typically type–length–value encodings. Unlike them, ASN.1 does not provide a single and readily usable open-source implementation, and is published as a specification to be implemented by third-party vendors. However, ASN.1, defined in 1984, predates them by many years. It also includes a wider variety of basic data types, some of which are obsolete, and has more options for extensibility. A single ASN.1 message can include data from multiple modules defined in multiple standards, even standards defined years apart. ASN.1 also includes built-in support for constraints on values and sizes. For instance, a module can specify an integer field that must be in the range 0 to 100. The length of a sequence of values (an array) can also be specified, either as a fixed length or a range of permitted lengths. Constraints can also be specified as logical combinations of sets of basic constraints. Values used as constraints can either be literals used in the PDU specification, or ASN.1 values specified elsewhere in the schema file. Some ASN.1 tools will make these ASN.1 values available to programmers in the generated source code. Developers can then use these as constants in the protocol's logic implementation. Thus all the PDUs and protocol constants can be defined in the schema, and all implementations of the protocol in any supported language draw upon those values. This avoids the need for developers to hand-code protocol constants in their implementation's source code. This significantly aids protocol development; the protocol's constants can be altered in the ASN.1 schema and all implementations are updated simply by recompiling, promoting a rapid and low-risk development cycle. If the ASN.1 tools properly implement constraints checking in the generated source code, this acts to automatically validate protocol data during program operation.
Generally, ASN.1 tools will include constraints checking in the generated serialization/deserialization routines, raising errors or exceptions if out-of-bounds data is encountered. It is complex to implement all aspects of ASN.1 constraints in an ASN.1 compiler, and not all tools support the full range of possible constraint expressions. XML Schema and JSON Schema both support similar constraint concepts. Tool support for constraints varies; Microsoft's xsd.exe compiler, for example, ignores them. ASN.1 is visually similar to Augmented Backus–Naur Form (ABNF), which is used to define many Internet protocols like HTTP and SMTP. However, in practice they are quite different: ASN.1 defines a data structure, which can be encoded in various ways (e.g. JSON, XML, binary). ABNF, on the other hand, defines the encoding ("syntax") at the same time it defines the data structure ("semantics"). ABNF tends to be used more frequently for defining textual, human-readable protocols, and generally is not used to define type–length–value encodings. Many programming languages define language-specific serialization formats, for instance Python's "pickle" module and Ruby's "Marshal" module. These formats are generally language-specific. They also don't require a schema, which makes them easier to use in ad hoc storage scenarios, but inappropriate for communications protocols. JSON and XML similarly do not require a schema, making them easy to use. They are also both cross-platform standards that are broadly popular for communications protocols, particularly when combined with a JSON Schema or XML Schema. Some ASN.1 tools are able to translate between ASN.1 and XML Schema (XSD). The translation is standardised by the ITU. This makes it possible for a protocol to be defined in ASN.1, and also automatically in XSD. Thus it is possible (though perhaps ill-advised) to have in a project an XSD schema being compiled by ASN.1 tools producing source code that serializes objects to/from JSON wire format.
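What generated constraint checking might look like can be sketched as follows. This is a hypothetical hand-written stand-in for code an ASN.1 compiler could emit for a field declared as INTEGER (0..100); the names are illustrative, not from any real tool:

```python
class ConstraintViolation(ValueError):
    """Raised when a value falls outside its schema-declared constraint."""

def check_range(value, lo, hi, field):
    """Range check of the kind a tool might generate for INTEGER (lo..hi)."""
    if not (lo <= value <= hi):
        raise ConstraintViolation(f"{field}={value} outside [{lo}, {hi}]")
    return value

def serialize_score(score):
    """Hypothetical generated serializer: validate the constraint, then
    encode the value (here trivially, as one big-endian byte)."""
    check_range(score, 0, 100, "score")
    return score.to_bytes(1, "big")

print(serialize_score(42).hex())  # 2a
```

The point is that out-of-range data is rejected at the serialization boundary, so application logic never sees a value that violates the schema.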
A more practical use is to permit other sub-projects to consume an XSD schema instead of an ASN.1 schema, perhaps better suiting the tool availability for the sub-project's language of choice, with XER used as the protocol wire format. For more detail, see Comparison of data serialization formats.
https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One
A certificate policy (CP) is a document that states the different entities of a public key infrastructure (PKI), their roles, and their duties. This document is published in the PKI perimeter. When used with X.509 certificates, a specific field can be set to include a link to the associated certificate policy. Thus, during an exchange, any relying party has access to the assurance level associated with the certificate, and can decide on the level of trust to put in the certificate. The reference document for writing a certificate policy is, as of December 2010, RFC 3647. The RFC proposes a framework for the writing of certificate policies and Certification Practice Statements (CPS). The points described below are based on the framework presented in the RFC. The document should describe the general architecture of the related PKI, present the different entities of the PKI, and any exchange based on certificates issued by this very same PKI. An important point of the certificate policy is the description of the authorized and prohibited certificate uses. When a certificate is issued, it can be stated in its attributes what use cases it is intended to fulfill. For example, a certificate can be issued for digital signature of e-mail (aka S/MIME), encryption of data, authentication (e.g. of a Web server, as when one uses HTTPS), or further issuance of certificates (delegation of authority). Prohibited uses are specified in the same way. The document also describes how certificate names are to be chosen, as well as the associated needs for identification and authentication. When a certification application is filed, the certification authority (or, by delegation, the registration authority) is in charge of checking the information provided by the applicant, such as the applicant's identity. This is to make sure that the CA does not take part in identity theft.
The different procedures for certificate application, issuance, acceptance, renewal, re-key, modification and revocation are a large part of the document. These procedures describe how each actor of the PKI has to act in order for the whole assurance level to be accepted. Then, a chapter is found regarding physical and procedural controls, audit and logging procedures involved in the PKI to ensure data integrity, availability and confidentiality. This part describes the technical requirements regarding key sizes, protection of private keys (by use of key escrow) and various types of controls regarding the technical environment (computers, network). Certificate revocation lists are a vital part of any public key infrastructure, and as such, a specific chapter is dedicated to the description of the management associated with these lists, to ensure consistency between certificate status and the content of the list. The PKI needs to be audited to ensure it complies with the rules stated in its documents, such as the certificate policy. The procedures used to assess such compliance are described here. This last chapter tackles all remaining points, for example all the PKI-associated legal matters.
https://en.wikipedia.org/wiki/Certificate_policy
Code Access Security (CAS), in the Microsoft .NET framework, is Microsoft's solution to prevent untrusted code from performing privileged actions. When the CLR loads an assembly it will obtain evidence for the assembly and use this to identify the code group that the assembly belongs to. A code group contains a permission set (one or more permissions). Code that performs a privileged action will perform a code access demand, which will cause the CLR to walk up the call stack and examine the permission set granted to the assembly of each method in the call stack. The code groups and permission sets are determined by the administrator of the machine, who defines the security policy. Microsoft considers CAS obsolete and discourages its use.[1] It is also not available in .NET Core and .NET. Evidence can be any information associated with an assembly. The default evidences that are used by .NET code access security are: A developer can use custom evidence (so-called assembly evidence), but this requires writing a security assembly, and in version 1.1 of .NET this facility does not work. Evidence based on a hash of the assembly is easily obtained in code; in C#, for example, it can be obtained by constructing a Hash object for the assembly. A policy is a set of expressions that uses evidence to determine a code group membership. A code group gives a permission set for the assemblies within that group. There are four policies in .NET: The first three policies are stored in XML files and are administered through the .NET Configuration Tool 1.1 (mscorcfg.msc). The final policy is administered through code for the current application domain. Code access security will present an assembly's evidence to each policy and will then take the intersection (that is, the permissions common to all the generated permission sets) as the permissions granted to the assembly.
By default, the Enterprise, User, and AppDomain policies give full trust (that is, they allow all assemblies to have all permissions) and the Machine policy is more restrictive. Since the intersection is taken, this means that the final permission set is determined by the Machine policy. Note that the policy system has been eliminated in .NET Framework 4.0.[2] Code groups associate a piece of evidence with a named permission set. The administrator uses the .NET Configuration Tool to specify a particular type of evidence (for example, Site) and a particular value for that evidence (for example, www.mysite.com) and then identifies the permission set that the code group will be granted. Code that performs some privileged action will make a demand for one or more permissions. The demand makes the CLR walk the call stack, and for each method the CLR will ensure that the demanded permissions are in the method's assembly's granted permissions. If the permission is not granted then a security exception is thrown. This prevents downloaded code from performing privileged actions. For example, if an assembly is downloaded from an untrusted site, the assembly will not have any file IO permissions, and so if this assembly attempts to access a file, a security exception will be thrown, preventing the call.
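The policy intersection and stack-walking demand described above can be modeled with a toy sketch. This is a simplification, not the actual CLR algorithm, and the assembly and permission names are hypothetical:

```python
# Toy model of CAS: every frame on the call stack must have been granted
# the demanded permission, otherwise a security exception is raised.
def effective_grants(policy_grant_sets):
    """Intersect the permission sets produced by each policy level
    (Enterprise, Machine, User, AppDomain in the .NET model)."""
    return set.intersection(*policy_grant_sets)

def demand(call_stack, permission):
    """Walk the stack; raise if any frame's assembly lacks the permission."""
    for frame in call_stack:
        if permission not in frame["grants"]:
            raise PermissionError(
                f"{frame['assembly']} was not granted {permission}")

app = {"assembly": "App.exe", "grants": {"FileIO", "UI"}}
downloaded = {"assembly": "Untrusted.dll", "grants": {"UI"}}

demand([app], "FileIO")                  # succeeds: every frame has FileIO
try:
    demand([app, downloaded], "FileIO")  # fails at the untrusted frame
except PermissionError as e:
    print(e)
```

Because the walk covers every frame, trusted code cannot be tricked into performing a privileged action on behalf of an untrusted caller further up the stack.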
https://en.wikipedia.org/wiki/Code_Access_Security
Communications security is the discipline of preventing unauthorized interceptors from accessing telecommunications[1] in an intelligible form, while still delivering content to the intended recipients. In North Atlantic Treaty Organization culture, including United States Department of Defense culture, it is often referred to by the abbreviation COMSEC. The field includes cryptographic security, transmission security, emissions security and physical security of COMSEC equipment and associated keying material. COMSEC is used to protect both classified and unclassified traffic on military communications networks, including voice, video, and data. It is used for both analog and digital applications, and both wired and wireless links. Voice over secure internet protocol (VoSIP) has become the de facto standard for securing voice communication, replacing the need for Secure Terminal Equipment (STE) in much of NATO, including the U.S.; USCENTCOM moved entirely to VoSIP in 2008.[2] Types of COMSEC equipment: The Electronic Key Management System (EKMS) is a United States Department of Defense (DoD) key management, COMSEC material distribution, and logistics support system. The National Security Agency (NSA) established the EKMS program to supply electronic keys to COMSEC devices in a secure and timely manner, and to provide COMSEC managers with an automated system capable of ordering, generation, production, distribution, storage, security accounting, and access control. The Army's platform in the four-tiered EKMS, AKMS, automates frequency management and COMSEC management operations. It eliminates paper keying material, hardcopy Signal operating instructions (SOI), and saves the time and resources required for courier distribution.
It has four components: KMI is intended to replace the legacy Electronic Key Management System to provide a means for securely ordering, generating, producing, distributing, managing, and auditing cryptographic products (e.g., asymmetric keys, symmetric keys, manual cryptographic systems, and cryptographic applications).[4] This system is currently being fielded by Major Commands, and variants will be required for non-DoD agencies with a COMSEC mission.[5]
https://en.wikipedia.org/wiki/Communications_security
ISO/IEC JTC 1, entitled "Information technology", is a joint technical committee (JTC) of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Its purpose is to develop, maintain and promote standards in the fields of information and communications technology (ICT). JTC 1 has been responsible for many critical IT standards, ranging from the Joint Photographic Experts Group (JPEG) image formats and Moving Picture Experts Group (MPEG) audio and video formats[a] to the C and C++ programming languages.[b] ISO/IEC JTC 1 was formed in 1987 as a merger between ISO/TC 97 (Information Technology) and IEC/TC 83, with IEC/SC 47B joining later. The intent was to bring together, in a single committee, the IT standardization activities of the two parent organizations in order to avoid duplicative or possibly incompatible standards. At the time of its formation, the mandate of JTC 1 was to develop base standards in information technology upon which other technical committees could build. This would allow for the development of domain- and application-specific standards that could be applicable to specific business domains while also ensuring the interoperation and function of the standards on a consistent base.[2] In its first 15 years, JTC 1 brought about many standards in the information technology sector, including standards in the fields of multimedia (such as MPEG), IC cards (or "smart cards"), ICT security, programming languages, and character sets (such as the Universal Character Set).[2][3] In the early 2000s, the organization expanded its standards development into fields such as security and authentication, bandwidth/connection management, storage and data management, software and systems engineering, service protocols, portable computing devices, and certain societal aspects such as data protection and cultural and linguistic adaptability.
For more than 25 years, JTC 1 has provided a standards development environment where experts come together to develop worldwide Information and Communication Technology (ICT) standards for business and consumer applications. JTC 1 also addresses such critical areas as teleconferencing and e-meetings, cloud data management interface, biometrics in identity management, sensor networks for smart grid systems, and corporate governance of ICT implementation. As technologies converge, JTC 1 acts as a system integrator, especially in areas of standardization in which many consortia and forums are active. JTC 1 provides the standards approval environment for integrating diverse and complex ICT technologies. These standards rely upon the core infrastructure technologies developed by JTC 1 centers of expertise, complemented by specifications developed in other organizations.[4][5] There are over 2,800 published JTC 1 standards developed by about 2,100 technical experts from around the world, some of which are freely available for download while others are available for a fee.[6][7] In 2008, Ms. Karen Higginbottom of HP was elected as chair.[8] In a 2013 interview, she described priorities including cloud computing standards and adaptations of existing standards.[9] After Higginbottom's nine-year term expired in 2017, Mr. Phil Wennblom of Intel was elected as chair at the JTC 1 Plenary meeting in Vladivostok, Russia. JTC 1 has implemented a process to transpose "publicly available specifications" (PAS) into international ISO/IEC standards. The PAS transposition process allows a PAS to be approved as an ISO/IEC standard in less than a year, as opposed to a full-length process that can take up to 4 years.
Consortia such as OASIS, Trusted Computing Group (TCG), The Open Group, Object Management Group (OMG), W3C, Distributed Management Task Force (DMTF), Storage Networking Industry Association (SNIA), Open Geospatial Consortium (OGC), GS1, Spice User Group, Open Connectivity Foundation (OCF), NESMA, Society of Motion Picture and Television Engineers (SMPTE), Khronos Group, or Joint Development Foundation use this process to transpose their specifications in an efficient manner into ISO/IEC standards.[10] The scope of ISO/IEC JTC 1 is "International standardization in the field of information technology". Its official mandate is to develop, maintain, promote and facilitate IT standards required by global markets meeting business and user requirements concerning: JTC 1 has a number of principles that guide standards development within the organization, which include:[11] Like its ISO and IEC parent organizations, members of JTC 1 are national standards bodies. One national standards body represents each member country, and the members are referred to within JTC 1 as "national bodies" (NBs). A member can have either participating (P-member) or observing (O-member) status, with the main differences being the ability to participate at the working group level in the drafting of standards and to vote on proposed standards (although O-members may submit comments). As of May 2021, JTC 1 has 35 P-members and 65 O-members, and thus 100 member NBs.[12] The secretariat of JTC 1 is the American National Standards Institute (ANSI), which is the national standards body for the United States member NB. Other organizations can participate as Liaison Members, some of which are internal to ISO/IEC and some of which are external. Liaison relationships can be established at different levels within JTC 1 – i.e., at the JTC 1 level, the subcommittee level, or at the level of a specific working group within a subcommittee.
Altogether, as of May 2021, there are about 120 external organizations in liaison with JTC 1 at one level or another.[13] The liaison relationships established directly at the JTC 1 level are: Most work on the development of standards is done by subcommittees (SCs), each of which deals with a particular field. Most of these subcommittees have several working groups (WGs). Subcommittees, working groups, special working groups (SWGs), and study groups (SGs) within JTC 1 are:[14] Each subcommittee can have subgroups created for specific purposes: Subcommittees can be created to deal with new situations (SC 37 was established in 2002; SC 38 in 2009; SC 39 in 2012; and SC 40 in 2013) or disbanded if the area of work is no longer relevant. There is no requirement for any member body to maintain status on any or all of the subcommittees.
https://en.wikipedia.org/wiki/ISO/IEC_JTC_1
PKI Resource Query Protocol (PRQP) is an Internet protocol used for obtaining information about services associated with an X.509 certificate authority. It was proposed within the IETF PKIX working group as an Internet-Draft. PRQP aims to address interoperability and usability issues among PKIs, helping to find services and data repositories associated with a CA. Messages communicated via PRQP are encoded in ASN.1 and are usually communicated over HTTP. At present, ever more services and protocols are being defined to address different needs of users and administrators in PKIs. With the deployment of new applications and services, the need to access PKI resources provided by different organizations is critical. Each application needs to be told how to find these services for each new certificate it encounters. Therefore, each application needs to be properly configured by filling in complex configuration options whose meaning is mostly unknown to the average user (and likely to the administrator as well). In PKIs there are three other primary methods for clients to obtain pointers to PKI data: adopting specific certificate extensions; looking at easily accessible repositories (e.g. DNS, local database, etc.); and adapting existing protocols (e.g. Web Services). To provide pointers to published data, a CA could use the Authority Information Access (AIA) and Subject Information Access (SIA) extensions as detailed in RFC 3280. The former can provide information about the issuer of the certificate, while the latter carries information (inside CA certificates) about offered services. The Subject Information Access extension can carry a URI to point to certificate repositories and timestamping services, hence allowing services to be accessed via several different protocols (e.g. HTTP, FTP, LDAP or SMTP). Although encouraged, usage of the AIA and SIA extensions is still not widely deployed. There are two main reasons for this.
The first is the lack of support for such extensions in available clients. The second reason is that extensions are static, i.e. not modifiable. Indeed, to modify or add new extensions, in order to make users and applications aware of new services or their dismissal, the certificate must be re-issued. This would not be feasible for End Entity (EE) certificates, except during periodic reissuing, but it would be feasible for the CA certificate itself. The CA could retain the same public key and name and just add new values to the AIA extension in the new certificate. If users fetched the CA certificate regularly, rather than caching it, this would enable them to become aware of the new services. Although this is possible, almost all available clients do not look for CA certificates if they are already stored in the clients' local database. In any case, since URLs tend to change quite often while certificates persist for longer time frames, experience suggests that these extensions invariably point to URLs that no longer exist. Moreover, considering that the entity that issues the certificates and the one that runs the services may not be the same, it is infeasible for the issuing CA to reissue all of its certificates whenever a server's URL changes. Therefore, it is not wise to depend on the usage of AIA or SIA extensions for looking up available services and repositories. The SRV record, or DNS Service record, technique is intended to provide pointers to servers directly in the DNS (RFC 1035). As defined in RFC 2782, the introduction of this type of record allows administrators to perform operations rather similar to the ones needed to solve the problem PRQP addresses, i.e. an easily configurable PKI discovery service. The basic idea is to have the client query the DNS for a specific SRV record.
For example, if an SRV-aware LDAP client wants to discover an LDAP server for a certain domain, it performs a DNS lookup for _ldap._tcp.example.com (the _tcp label means the client is requesting a TCP-enabled LDAP server). The returned record contains information on the priority, the weight, the port and the target for the service in that domain. The problem in the adoption of this mechanism is that in PKIs (unlike DNS) there is usually no fixed requirement for the name space used. Most of the time, there is no correspondence between the DNS structure and the data contained in the certificates. The only exception is when the Domain Component (DC) attributes are used in the certificate's Subject. The DC attributes are used to specify domain components of a DNS name; for example, the domain name example.com could be represented by using the dc=com, dc=example format. If the CA's subject field made use of such a format, the Issuer field would allow client applications to perform DNS lookups for the provided domain, where the information about repositories and services could be stored. However, current practice is very different. In fact, it is extremely difficult for a client to map digital certificates to DNS records because the DC format is not widely adopted by existing CAs. For example, only one certificate from the IE7/Outlook certificate store uses the domain components to provide a mapping between the certificate and an Internet domain.
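The DC-to-domain mapping described above can be sketched as follows. This is a simplified parser that assumes a plain comma-separated DN string; a real implementation would need full RFC 4514 DN parsing (escaping, multi-valued RDNs, and so on):

```python
def dn_dc_to_domain(dn):
    """Collect the domainComponent (dc=) attributes of a DN string and
    join them into a DNS name. The text's example lists the most
    significant label first (dc=com, dc=example), so the components are
    reversed to obtain the DNS order."""
    parts = [p.strip() for p in dn.split(",")]
    dcs = [p.split("=", 1)[1] for p in parts if p.lower().startswith("dc=")]
    return ".".join(reversed(dcs))

print(dn_dc_to_domain("dc=com, dc=example"))                # example.com
print(dn_dc_to_domain("cn=Example CA, dc=com, dc=example")) # example.com
```

A client with such a mapping could then build an SRV query name (e.g. prefixing _ldap._tcp.) for the derived domain, which is exactly the discovery step the text says is impractical while the DC format remains rare.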
https://en.wikipedia.org/wiki/PKI_Resource_Query_Protocol
A public key infrastructure (PKI) is a set of roles, policies, hardware, software and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. The purpose of a PKI is to facilitate the secure electronic transfer of information for a range of network activities such as e-commerce, internet banking and confidential email. It is required for activities where simple passwords are an inadequate authentication method and more rigorous proof is required to confirm the identity of the parties involved in the communication and to validate the information being transferred. In cryptography, a PKI is an arrangement that binds public keys with respective identities of entities (like people and organizations).[1][2] The binding is established through a process of registration and issuance of certificates at and by a certificate authority (CA). Depending on the assurance level of the binding, this may be carried out by an automated process or under human supervision. When done over a network, this requires using a secure certificate enrollment or certificate management protocol such as CMP. The PKI role that may be delegated by a CA to assure valid and correct registration is called a registration authority (RA). An RA is responsible for accepting requests for digital certificates and authenticating the entity making the request.[3] The Internet Engineering Task Force's RFC 3647 defines an RA as "An entity that is responsible for one or more of the following functions: the identification and authentication of certificate applicants, the approval or rejection of certificate applications, initiating certificate revocations or suspensions under certain circumstances, processing subscriber requests to revoke or suspend their certificates, and approving or rejecting requests by subscribers to renew or re-key their certificates.
RAs, however, do not sign or issue certificates (i.e., an RA is delegated certain tasks on behalf of a CA)."[4] While Microsoft may have referred to a subordinate CA as an RA,[5] this is incorrect according to the X.509 PKI standards. RAs do not have the signing authority of a CA and only manage the vetting and provisioning of certificates. So in the Microsoft PKI case, the RA functionality is provided either by the Microsoft Certificate Services web site or through Active Directory Certificate Services, which enforces Microsoft Enterprise CA and certificate policy through certificate templates and manages certificate enrollment (manual or auto-enrollment). In the case of Microsoft Standalone CAs, the function of RA does not exist, since all of the procedures controlling the CA are based on the administration and access procedure associated with the system hosting the CA and the CA itself rather than Active Directory. Most non-Microsoft commercial PKI solutions offer a stand-alone RA component. An entity must be uniquely identifiable within each CA domain on the basis of information about that entity. A third-party validation authority (VA) can provide this entity information on behalf of the CA. The X.509 standard defines the most commonly used format for public key certificates.[6] PKI provides "trust services": in plain terms, trusting the actions or outputs of entities, be they people or computers. Trust service objectives respect one or more of the following capabilities: confidentiality, integrity and authenticity (CIA). Confidentiality: Assurance that no entity can maliciously or unwittingly view a payload in clear text. Data is encrypted to make it secret, such that even if it is read, it appears as gibberish. Perhaps the most common use of PKI for confidentiality purposes is in the context of Transport Layer Security (TLS). TLS is a capability underpinning the security of data in transit, i.e. during transmission.
A classic example of TLS for confidentiality is when using a web browser to log on to a service hosted on an internet-based web site by entering a password. Integrity: Assurance that if an entity changed (tampered with) transmitted data in the slightest way, it would be obvious that this had happened, as the data's integrity would have been compromised. Often it is not of utmost importance to prevent the integrity being compromised (tamper-proof); however, it is of utmost importance that if integrity is compromised there is clear evidence of it having been so (tamper-evident). Authenticity: Assurance that every entity has certainty of what it is connecting to, or can evidence its legitimacy when connecting to a protected service. The former is termed server-side authentication, typically used when authenticating to a web server using a password. The latter is termed client-side authentication, sometimes used when authenticating using a smart card (hosting a digital certificate and private key). Public-key cryptography is a cryptographic technique that enables entities to securely communicate on an insecure public network, and reliably verify the identity of an entity via digital signatures.[7] A public key infrastructure (PKI) is a system for the creation, storage, and distribution of digital certificates, which are used to verify that a particular public key belongs to a certain entity. The PKI creates digital certificates that map public keys to entities, securely stores these certificates in a central repository and revokes them if needed.[8][9][10] A PKI consists of:[9][11][12] The primary role of the CA is to digitally sign and publish the public key bound to a given user. This is done using the CA's own private key, so that trust in the user key relies on one's trust in the validity of the CA's key.
When the CA is a third party separate from the user and the system, then it is called the Registration Authority (RA), which may or may not be separate from the CA.[13] The key-to-user binding is established, depending on the level of assurance the binding has, by software or under human supervision. The term trusted third party (TTP) may also be used for certificate authority (CA). Moreover, PKI is itself often used as a synonym for a CA implementation.[14] A certificate may be revoked before it expires, which signals that it is no longer valid. Without revocation, an attacker would be able to exploit such a compromised or mis-issued certificate until expiry.[15] Hence, revocation is an important part of a public key infrastructure.[16] Revocation is performed by the issuing certificate authority, which produces a cryptographically authenticated statement of revocation.[17] For distributing revocation information to clients, timeliness of the discovery of revocation (and hence the window for an attacker to exploit a compromised certificate) trades off against resource usage in querying revocation statuses and privacy concerns.[18] If revocation information is unavailable (either due to accident or an attack), clients must decide whether to fail-hard and treat a certificate as if it is revoked (and so degrade availability) or to fail-soft and treat it as unrevoked (and allow attackers to sidestep revocation).[19] Due to the cost of revocation checks and the availability impact from potentially unreliable remote services, web browsers limit the revocation checks they will perform, and will fail-soft where they do.[20] Certificate revocation lists are too bandwidth-costly for routine use, and the Online Certificate Status Protocol presents connection latency and privacy issues.
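The fail-hard versus fail-soft choice can be sketched as a toy policy function (the callback interface here is hypothetical, purely for illustration):

```python
def accept_certificate(cert, fetch_revocation_status, fail_hard=True):
    """Decide whether to treat a certificate as valid.
    fetch_revocation_status is assumed to return True if the certificate
    is revoked, False if not, and to raise OSError when the revocation
    service is unreachable (a hypothetical interface)."""
    try:
        revoked = fetch_revocation_status(cert)
    except OSError:
        # Service unreachable: fail-hard rejects (degrading availability),
        # fail-soft accepts (letting attackers sidestep revocation).
        return not fail_hard
    return not revoked

def unreachable(cert):
    raise OSError("revocation service unreachable")

print(accept_certificate("cert", unreachable, fail_hard=True))   # False
print(accept_certificate("cert", unreachable, fail_hard=False))  # True
```

Browsers, as the text notes, generally take the fail-soft branch to avoid making page loads depend on a potentially unreliable revocation service.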
Other schemes have been proposed but have not yet been successfully deployed to enable fail-hard checking.[16] In this model of trust relationships, a CA is a trusted third party, trusted both by the subject (owner) of the certificate and by the party relying upon the certificate. A 2015 report by Netcraft,[21] the industry standard for monitoring active Transport Layer Security (TLS) certificates, states that "Although the global [TLS] ecosystem is competitive, it is dominated by a handful of major CAs — three certificate authorities (Symantec, Sectigo, GoDaddy) account for three-quarters of all issued [TLS] certificates on public-facing web servers. The top spot has been held by Symantec (or VeriSign before it was purchased by Symantec) ever since [our] survey began, with it currently accounting for just under a third of all certificates. To illustrate the effect of differing methodologies, amongst the million busiest sites Symantec issued 44% of the valid, trusted certificates in use — significantly more than its overall market share." Following major issues in how certificate issuing was managed, all major players gradually distrusted Symantec-issued certificates, starting in 2017 and completing in 2021.[22][23][24][25] This approach involves a server that acts as an offline certificate authority within a single sign-on system. A single sign-on server issues digital certificates into the client system but never stores them. Users can execute programs and so on with the temporary certificate. It is common to find this solution variety with X.509-based certificates.[26] Since September 2020, the maximum validity of TLS certificates has been reduced to 13 months. An alternative approach to the problem of public authentication of public-key information is the web-of-trust scheme, which uses self-signed certificates and third-party attestations of those certificates.
The singular term "web of trust" does not imply the existence of a single web of trust, or a common point of trust, but rather any number of potentially disjoint "webs of trust". Examples of implementations of this approach are PGP (Pretty Good Privacy) and GnuPG (an implementation of OpenPGP, the standardized specification of PGP). Because PGP and its implementations allow the use of e-mail digital signatures for self-publication of public-key information, it is relatively easy to implement one's own web of trust. One of the benefits of the web of trust, such as in PGP, is that it can interoperate with a PKI CA fully trusted by all parties in a domain (such as an internal CA in a company) that is willing to guarantee certificates, as a trusted introducer. If the "web of trust" is completely trusted then, because of the nature of a web of trust, trusting one certificate is granting trust to all the certificates in that web. A PKI is only as valuable as the standards and practices that control the issuance of certificates, and including PGP or a personally instituted web of trust could significantly degrade the trustworthiness of that enterprise's or domain's implementation of PKI.[27] The web of trust concept was first put forth by PGP creator Phil Zimmermann in 1992 in the manual for PGP version 2.0: As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.
Another alternative, which does not deal with public authentication of public-key information, is the simple public key infrastructure (SPKI), which grew out of three independent efforts to overcome the complexities of X.509 and PGP's web of trust. SPKI does not associate users with persons, since the key is what is trusted, rather than the person. SPKI does not use any notion of trust, as the verifier is also the issuer. This is called an "authorization loop" in SPKI terminology, where authorization is integral to its design.[28] This type of PKI is especially useful for making integrations of PKI that do not rely on third parties for certificate authorization, certificate information, and so on; a good example of this is an air-gapped network in an office. Decentralized identifiers (DIDs) eliminate dependence on centralized registries for identifiers as well as centralized certificate authorities for key management, which is the standard in hierarchical PKI. In cases where the DID registry is a distributed ledger, each entity can serve as its own root authority. This architecture is referred to as decentralized PKI (DPKI).[29][30] Developments in PKI occurred in the early 1970s at the British intelligence agency GCHQ, where James Ellis, Clifford Cocks and others made important discoveries related to encryption algorithms and key distribution.[31] Because developments at GCHQ are highly classified, the results of this work were kept secret and not publicly acknowledged until the mid-1990s. The public disclosure of both secure key exchange and asymmetric key algorithms in 1976 by Diffie, Hellman, Rivest, Shamir, and Adleman changed secure communications entirely. With the further development of high-speed digital electronic communications (the Internet and its predecessors), a need became evident for ways in which users could securely communicate with each other, and as a further consequence of that, for ways in which users could be sure with whom they were actually interacting.
Assorted cryptographic protocols were invented and analyzed within which the new cryptographic primitives could be effectively used. With the invention of the World Wide Web and its rapid spread, the need for authentication and secure communication became still more acute. Commercial reasons alone (e.g., e-commerce, online access to proprietary databases from web browsers) were sufficient. Taher Elgamal and others at Netscape developed the SSL protocol ('https' in Web URLs); it included key establishment, server authentication (prior to v3, one-way only), and so on.[32] A PKI structure was thus created for Web users/sites wishing secure communications. Vendors and entrepreneurs saw the possibility of a large market, started companies (or new projects at existing companies), and began to agitate for legal recognition and protection from liability. An American Bar Association technology project published an extensive analysis of some of the foreseeable legal aspects of PKI operations (see ABA digital signature guidelines), and shortly thereafter, several U.S. states (Utah being the first, in 1995) and other jurisdictions throughout the world began to enact laws and adopt regulations. Consumer groups raised questions about privacy, access, and liability considerations, which were taken into consideration more in some jurisdictions than in others.[33] The enacted laws and regulations differed, there were technical and operational problems in converting PKI schemes into successful commercial operation, and progress was much slower than pioneers had imagined it would be. By the first few years of the 21st century, it was clear that the underlying cryptographic engineering was not easy to deploy correctly. Operating procedures (manual or automatic) were not easy to design correctly (nor, even if so designed, to execute perfectly, as the engineering required). The standards that existed were insufficient.
PKI vendors have found a market, but it is not quite the market envisioned in the mid-1990s, and it has grown both more slowly and in somewhat different ways than were anticipated.[34] PKIs have not solved some of the problems they were expected to, and several major vendors have gone out of business or been acquired by others. PKI has had the most success in government implementations; the largest PKI implementation to date is the Defense Information Systems Agency (DISA) PKI infrastructure for the Common Access Cards program. PKIs of one type or another, and from any of several vendors, have many uses, including providing public keys and bindings to user identities, which are used for: Some argue that purchasing certificates for securing websites by SSL/TLS and securing software by code signing is a costly venture for small businesses.[41] However, the emergence of free alternatives, such as Let's Encrypt, has changed this. HTTP/2, a newer version of the HTTP protocol, allows unsecured connections in theory; in practice, major browser companies have made it clear that they would support this protocol only over a PKI-secured TLS connection.[42] Web browser implementations of HTTP/2, including Chrome, Firefox, Opera, and Edge, support HTTP/2 only over TLS, using the ALPN extension of the TLS protocol. This would mean that, to get the speed benefits of HTTP/2, website owners would be forced to purchase SSL/TLS certificates controlled by corporations. Currently the majority of web browsers ship with pre-installed intermediate certificates issued and signed by a certificate authority, whose public keys are certified by so-called root certificates. This means browsers need to carry a large number of different certificate providers, increasing the risk of a key compromise.[43] When a key is known to be compromised, the problem can be fixed by revoking the certificate, but such a compromise is not easily detectable and can be a huge security breach.
Browsers have to issue a security patch to revoke intermediary certificates issued by a compromised root certificate authority.[44]
https://en.wikipedia.org/wiki/Public_Key_Infrastructure
The Time-Stamp Protocol, or TSP, is a cryptographic protocol for certifying timestamps using X.509 certificates and public key infrastructure. The timestamp is the signer's assertion that a piece of electronic data existed at or before a particular time. The protocol is defined in RFC 3161. One application of the protocol is to show that a digital signature was issued before a point in time, for example before the corresponding certificate was revoked. The TSP protocol is an example of trusted timestamping. It has been extended to create the ANSI ASC X9.95 Standard. In the protocol, a Time Stamp Authority (TSA) is a trusted third party that can provide a timestamp to be associated with a hashed version of some data. It is a request-response protocol, where the request contains a hash of the data to be signed. This is sent to the TSA, and the response contains a Time Stamp Token (TST), which itself includes the hash of the data, a unique serial number, a timestamp, and a digital signature. The signature is generated using the private key of the TSA. The protocol can operate over a number of different transports, including email, TCP sockets, or HTTP. When presented with a TST, someone may verify that the data existed at the timestamp in the TST by verifying the signature using the public key of the TSA and checking that the hash of the data matches the hash included in the TST.
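The request-response flow above can be sketched as a toy TSA. This is a minimal sketch, not RFC 3161: the token format is a made-up dictionary, and an HMAC with a shared key stands in for the TSA's X.509 private-key signature:

```python
import hashlib
import hmac
import json
import time

TSA_KEY = b"tsa-demo-key"  # stand-in for the TSA's private key (assumption)

def issue_timestamp_token(data_hash: str, serial: int) -> dict:
    # Toy Time Stamp Token: the hash from the request, a unique serial
    # number, a timestamp, and a "signature" covering all three fields.
    token = {"hash": data_hash, "serial": serial, "time": int(time.time())}
    payload = json.dumps(token, sort_keys=True).encode()
    token["sig"] = hmac.new(TSA_KEY, payload, hashlib.sha256).hexdigest()
    return token

def verify_timestamp_token(data: bytes, token: dict) -> bool:
    # Verification mirrors the text: (1) the hash of the data matches the
    # hash in the TST, and (2) the TSA's signature over the token checks out.
    if hashlib.sha256(data).hexdigest() != token["hash"]:
        return False
    unsigned = {k: v for k, v in token.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(TSA_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token["sig"], expected)

document = b"contract v1"
tst = issue_timestamp_token(hashlib.sha256(document).hexdigest(), serial=1)
assert verify_timestamp_token(document, tst)       # data existed at that time
assert not verify_timestamp_token(b"contract v2", tst)  # different data fails
```

A real deployment would use ASN.1-encoded TimeStampToken structures and an asymmetric signature, so that verifiers need only the TSA's public certificate.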
https://en.wikipedia.org/wiki/Time_stamp_protocol
A qualified electronic signature is an electronic signature that is compliant with EU Regulation No 910/2014 (eIDAS Regulation) for electronic transactions within the internal European market.[1] It makes it possible to verify the authorship of a declaration in electronic data exchange over long periods of time. Qualified electronic signatures can be considered a digital equivalent to handwritten signatures.[2] The purpose of eIDAS was to create a set of standards to ensure that electronic signatures could be used in a secure manner while conducting business online or official business across borders between EU member states. The qualified electronic signature is one such standard outlined under eIDAS.[3][4] A qualified electronic signature is an advanced electronic signature with a qualified digital certificate that has been created by a qualified signature creation device (QSCD). For an electronic signature to be considered a qualified electronic signature, it must meet three main requirements: first, the signatory must be uniquely identified and linked to the signature. Second, the data used to create the signature must be under the sole control of the signatory. And last, it must be possible to identify whether the data that accompanies the signature has been tampered with since the signing of the message.[1] It is important to note that creating a qualified electronic signature involves more than merely adding a qualified certificate to an advanced electronic signature. The signature must also be created using a qualified signature creation device (QSCD). This device is responsible for qualifying digital signatures by using specific hardware and software that ensures that only the signatory has control of their private key. In addition, a qualified trust service provider manages the signature creation data that is produced.
The signature creation data must remain unique, confidential, and protected from forgery.[3] Qualified electronic signatures that comply with eIDAS may be technically implemented through three specific digital signature standards that were developed by the European Telecommunications Standards Institute (ETSI), which then need to be complemented with a qualified digital certificate through the procedures described above.[1] The qualified trust service provider has a crucial role in the process of qualified electronic signing. A trust service provider must receive qualified status from a supervisory governmental body that allows the entity to provide qualified trust services to be used in creating qualified electronic signatures. As regulated in eIDAS, the European Union publishes an EU Trust List with constitutive effect, meaning that a provider or service will only be qualified if it appears in the Trusted List.[5] Qualified trust service providers are required to abide by the strict guidelines outlined under the eIDAS Regulation, which include as part of the certificate creation process: Under eIDAS, the intent of the implementation of qualified electronic signatures is to serve several purposes, such as the facilitation of business and public-services processes, including those that cross borders. These processes can be safely expedited using electronic signing. Under eIDAS, EU member states have been charged with establishing "points of single contact" (PSCs) for trust services to ensure that electronic ID schemes may be used in cross-border public-sector transactions, such as exchanging and accessing healthcare information across borders.[4] Previously, a signatory would sign a document or message and then return it to the intended recipient via the postal service, facsimile service, by hand, or by scanning it and attaching it to an email. The issue with these methods is that they are not always secure or timely.
Delays in delivery could occur, and there exists the possibility that signatures could be forged or the enclosed documents altered. The risk increases when multiple signatures are required from different people who may be located in different places. These problems are alleviated by using qualified electronic signatures, which save time, are legally binding, and provide a higher level of technical security.[1] The increased transparency in the electronic signing and transaction process and the enhanced interoperability are expected to spur innovation in the European internal market.[6] eIDAS requires that no electronic signature should be denied legal effect or admissibility as evidence solely on the grounds that it is in an electronic form or that it does not meet the requirements for qualified electronic signatures.[7] The qualified electronic signature shall have the equivalent legal effect of a handwritten signature. Its evidentiary value depends on the circumstances, but will normally be considered very high.[8] All EU member states are required to recognize a qualified electronic signature as valid, as long as it has been created with a qualified certificate that has been issued by another member state. Under eIDAS Regulation, Article 27, "Electronic signatures in public services", member states are prohibited from requesting signatures of a higher level than the qualified electronic signature. Article 25(2) of eIDAS allows a qualified electronic signature to carry the same legal weight as a handwritten signature.[1][3][9]
https://en.wikipedia.org/wiki/Qualified_electronic_signature
In cryptography, the dining cryptographers problem studies how to perform a secure multi-party computation of the boolean XOR function. David Chaum first proposed this problem in the early 1980s and used it as an illustrative example to show that it was possible to send anonymous messages with unconditional sender and recipient untraceability. Anonymous communication networks based on this problem are often referred to as DC-nets (where DC stands for "dining cryptographers").[1] Despite the word dining, the dining cryptographers problem is unrelated to the dining philosophers problem. Three cryptographers gather around a table for dinner. The waiter informs them that the meal has been paid for by someone, who could be one of the cryptographers or the National Security Agency (NSA). The cryptographers respect each other's right to make an anonymous payment, but want to find out whether the NSA paid. So they decide to execute a two-stage protocol. In the first stage, every pair of cryptographers establishes a shared one-bit secret, say by tossing a coin behind a menu so that only those two cryptographers see the outcome, in turn for each pair. Suppose, for example, that after the coin tossing, cryptographers A and B share the secret bit 1, A and C share 0, and B and C share 1. In the second stage, each cryptographer publicly announces a bit, which is: Supposing none of the cryptographers paid, then A announces 1 ⊕ 0 = 1, B announces 1 ⊕ 1 = 0, and C announces 0 ⊕ 1 = 1. On the other hand, if A paid, she announces ¬(1 ⊕ 0) = 0. The three public announcements combined reveal the answer to their question. One simply computes the XOR of the three bits announced. If the result is 0, it implies that none of the cryptographers paid (so the NSA must have paid the bill).
Otherwise, one of the cryptographers paid, but their identity remains unknown to the other cryptographers. David Chaum coined the term dining cryptographers network, or DC-net, for this protocol. The DC-net protocol is simple and elegant. It has several limitations, however, some solutions to which have been explored in follow-up research (see the References section below). A related anonymous veto network algorithm computes the logical OR of several users' inputs, rather than a logical XOR as in DC-nets, which may be useful in applications to which a logical OR combining operation is naturally suited. David Chaum first thought about this problem in the early 1980s. The first publication that outlines the basic underlying ideas is his.[3] The journal version appeared in the very first issue of the Journal of Cryptology.[4] DC-nets are readily generalized to allow for transmissions of more than one bit per round, for groups larger than three participants, and for arbitrary "alphabets" other than the binary digits 0 and 1, as described below. To enable an anonymous sender to transmit more than one bit of information per DC-net round, the group of cryptographers can simply repeat the protocol as many times as desired to create the desired number of bits' worth of transmission bandwidth. These repetitions need not be performed serially. In practical DC-net systems, it is typical for pairs of participants to agree up front on a single shared "master" secret, using Diffie–Hellman key exchange for example. Each participant then locally feeds this shared master secret into a pseudorandom number generator to produce as many shared "coin flips" as desired, allowing an anonymous sender to transmit multiple bits of information. The protocol can be generalized to a group of n participants, each with a shared secret key in common with each other participant.
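The n-participant generalization can be sketched directly: every pair shares a random bit, each participant announces the XOR of their shared bits (flipped if they are the sender), and the XOR of all announcements reveals whether anyone transmitted while every pairwise secret cancels out. A minimal sketch:

```python
import secrets

def dc_net_round(n, payer=None):
    """One round of an n-party DC-net over the bit alphabet.

    payer is the index of the participant transmitting a 1, or None.
    Returns the XOR of all public announcements: 1 iff someone transmitted.
    """
    # Stage 1: every pair (i, j) with i < j shares a secret coin flip.
    shared = {(i, j): secrets.randbits(1)
              for i in range(n) for j in range(i + 1, n)}

    # Stage 2: each participant announces the XOR of all their shared bits,
    # inverted if they are the anonymous sender.
    announcements = []
    for i in range(n):
        bit = 0
        for j in range(n):
            if i != j:
                bit ^= shared[(min(i, j), max(i, j))]
        if i == payer:
            bit ^= 1
        announcements.append(bit)

    # Each shared secret appears in exactly two announcements, so it
    # cancels in the combined XOR; only the sender's inversion survives.
    result = 0
    for b in announcements:
        result ^= b
    return result

assert dc_net_round(3, payer=None) == 0   # nobody paid: NSA paid the bill
assert dc_net_round(3, payer=0) == 1      # someone paid, identity hidden
assert dc_net_round(10, payer=7) == 1     # works for any group size
```

Note that the announcements alone do not identify the payer: without knowledge of the pairwise secrets, every participant's announced bit is uniformly random.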
In each round of the protocol, if a participant wants to transmit an untraceable message to the group, they invert their publicly announced bit. The participants can be visualized as a fully connected graph with the vertices representing the participants and the edges representing their shared secret keys. The protocol may be run with less than fully connected secret-sharing graphs, which can improve the performance and scalability of practical DC-net implementations, at the potential risk of reducing anonymity if colluding participants can split the secret-sharing graph into separate connected components. Consider, for example, an intuitively appealing but less secure generalization to n > 3 participants using a ring topology, where each cryptographer sitting around a table shares a secret only with the cryptographers to their immediate left and right, and not with every other cryptographer. Such a topology is appealing because each cryptographer needs to coordinate only two coin flips per round, rather than n. However, if Adam and Charlie are actually NSA agents sitting immediately to the left and right of Bob, an innocent victim, and if Adam and Charlie secretly collude to reveal their secrets to each other, then they can determine with certainty whether or not Bob was the sender of a 1 bit in a DC-net run, regardless of how many participants there are in total. This is because the colluding participants Adam and Charlie effectively "split" the secret-sharing graph into two separate disconnected components, one containing only Bob, the other containing all the other honest participants. Another compromise secret-sharing DC-net topology, employed in the Dissent system for scalability,[5] may be described as a client/server or user/trustee topology.
In this variant, we assume there are two types of participants playing different roles: a potentially large number n of users who desire anonymity, and a much smaller number m of trustees whose role is to help the users obtain that anonymity. In this topology, each of the n users shares a secret with each of the m trustees, but users share no secrets directly with other users, and trustees share no secrets directly with other trustees, resulting in an n × m secret-sharing matrix. If the number of trustees m is small, then each user needs to manage only a few shared secrets, improving efficiency for users in the same way the ring topology does. However, as long as at least one trustee behaves honestly and does not leak his or her secrets or collude with other participants, that honest trustee forms a "hub" connecting all honest users into a single fully connected component, regardless of which or how many other users and/or trustees might be dishonestly colluding. Users need not know or guess which trustee is honest; their security depends only on the existence of at least one honest, non-colluding trustee. Though the simple DC-net protocol uses binary digits as its transmission alphabet and uses the XOR operator to combine ciphertexts, the basic protocol generalizes to any alphabet and combining operator suitable for one-time pad encryption. This flexibility arises naturally from the fact that the secrets shared between the many pairs of participants are, in effect, merely one-time pads combined symmetrically within a single DC-net round. One useful alternative choice of DC-net alphabet and combining operator is to use a finite group suitable for public-key cryptography as the alphabet, such as a Schnorr group or elliptic curve, and to use the associated group operator as the DC-net combining operator.
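The user/trustee topology can be sketched the same way as the fully connected case, replacing the pairwise secret graph with an n × m secret matrix. Every matrix entry enters the combined XOR exactly twice (once via its user, once via its trustee), so all secrets cancel. The function name and structure here are illustrative, not Dissent's actual code:

```python
import secrets

def dc_net_round_user_trustee(n_users, m_trustees, payer=None):
    """One DC-net round in the user/trustee topology.

    Each user shares one secret bit with each trustee (an n-by-m matrix);
    users share nothing with each other, trustees share nothing with
    each other. Returns 1 iff some user transmitted.
    """
    shared = [[secrets.randbits(1) for _ in range(m_trustees)]
              for _ in range(n_users)]

    # Each user announces the XOR of their row, inverted if transmitting.
    user_bits = []
    for u in range(n_users):
        bit = 0
        for t in range(m_trustees):
            bit ^= shared[u][t]
        if u == payer:
            bit ^= 1
        user_bits.append(bit)

    # Each trustee announces the XOR of their column.
    trustee_bits = []
    for t in range(m_trustees):
        bit = 0
        for u in range(n_users):
            bit ^= shared[u][t]
        trustee_bits.append(bit)

    # Every matrix entry appears once on each side, so all secrets cancel.
    result = 0
    for b in user_bits + trustee_bits:
        result ^= b
    return result

assert dc_net_round_user_trustee(100, 3, payer=None) == 0
assert dc_net_round_user_trustee(100, 3, payer=42) == 1
```

Each user manages only m secrets instead of n - 1, which is the efficiency gain the text describes; anonymity holds as long as one trustee keeps its column secret.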
Such a choice of alphabet and operator makes it possible for clients to use zero-knowledge proof techniques to prove correctness properties about the DC-net ciphertexts they produce, such as that the participant is not "jamming" the transmission channel, without compromising the anonymity offered by the DC-net. This technique was first suggested by Golle and Juels,[6] further developed by Franck,[7] and later implemented in Verdict, a cryptographically verifiable implementation of the Dissent system.[8] The measure originally suggested by David Chaum to avoid collisions is to retransmit the message once a collision is detected, but the paper does not explain exactly how to arrange the retransmission. Dissent avoids the possibility of unintentional collisions by using a verifiable shuffle to establish a DC-net transmission schedule, such that each participant knows exactly which bits in the schedule correspond to his own transmission slot, but does not know who owns the other transmission slots.[9] Herbivore divides a large anonymity network into smaller DC-net groups, enabling participants to evade disruption attempts by leaving a disrupted group and joining another, until the participant finds a group free of disruptors.[10] This evasion approach introduces the risk that an adversary who owns many nodes could selectively disrupt only groups the adversary has not completely compromised, thereby "herding" participants toward groups that may be functional precisely because they are completely compromised.[11] Dissent implements several schemes to counter disruption. The original protocol[9] used a verifiable cryptographic shuffle to form a DC-net transmission schedule and distribute "transmission assignments", allowing the correctness of subsequent DC-net ciphertexts to be verified with a simple cryptographic hash check. This technique required a fresh verifiable shuffle before every DC-net round, however, leading to high latencies.
A later, more efficient scheme allows a series of DC-net rounds to proceed without intervening shuffles in the absence of disruption, but in response to a disruption event uses a shuffle to distribute anonymous accusations, enabling a disruption victim to expose and prove the identity of the perpetrator.[5] Finally, more recent versions support fully verifiable DC-nets (at substantial cost in computational efficiency due to the use of public-key cryptography in the DC-net) as well as a hybrid mode that uses efficient XOR-based DC-nets in the normal case and verifiable DC-nets only upon disruption, to distribute accusations more quickly than is feasible using verifiable shuffles.[8]
https://en.wikipedia.org/wiki/Dining_cryptographers_protocol
Digital currency (digital money, electronic money or electronic currency) is any currency, money, or money-like asset that is primarily managed, stored or exchanged on digital computer systems, especially over the internet. Types of digital currencies include cryptocurrency, virtual currency and central bank digital currency. Digital currency may be recorded on a distributed database on the internet, a centralized electronic computer database owned by a company or bank, within digital files or even on a stored-value card.[1] Digital currencies exhibit properties similar to traditional currencies, but generally do not have the classical physical form of historical fiat currency that can be held in the hand, such as currencies with printed banknotes or minted coins. However, they do have a physical form in an unclassical sense, arising from computer-to-computer and computer-to-human interactions and the information and processing power of the servers that store and keep track of the money. This unclassical physical form allows nearly instantaneous transactions over the internet and vastly lowers the cost associated with distributing notes and coins: for example, of the money in the UK economy, 3% is notes and coins and 79% is electronic money (in the form of bank deposits).[2] Usually not issued by a governmental body, virtual currencies are not considered legal tender and they enable ownership transfer across governmental borders.[3] This type of currency may be used to buy physical goods and services, but may also be restricted to certain communities, such as for use inside an online game.[4] Digital money can either be centralized, where there is a central point of control over the money supply (for instance, a bank), or decentralized, where control over the money supply is predetermined or agreed upon democratically.
Precursory ideas for digital currencies were presented in electronic payment methods such as the Sabre travel reservation system.[5] In 1983, a research paper titled "Blind Signatures for Untraceable Payments" by David Chaum introduced the idea of digital cash.[6][7] In 1989, he founded DigiCash, an electronic cash company, in Amsterdam to commercialize the ideas in his research.[8] It filed for bankruptcy in 1998.[8][9] e-gold was the first widely used Internet money, introduced in 1996, and grew to several million users before the US government shut it down in 2008. e-gold has been referred to as "digital currency" by both US officials and academia.[10][11][12][13][14] In 1997, Coca-Cola offered buying from vending machines using mobile payments.[15] PayPal launched its USD-denominated service in 1998. In 2009, bitcoin was launched, which marked the start of decentralized blockchain-based digital currencies with no central server and no tangible assets held in reserve. Also known as cryptocurrencies, blockchain-based digital currencies proved resistant to attempts by governments to regulate them, because there was no central organization or person with the power to turn them off.[16] Origins of digital currencies date back to the 1990s dot-com bubble. Another known digital currency service was Liberty Reserve, founded in 2006; it let users convert dollars or euros to Liberty Reserve Dollars or Euros, and exchange them freely with one another at a 1% fee. Several digital currency operations were reputed to be used for Ponzi schemes and money laundering, and were prosecuted by the U.S. government for operating without MSB licenses.[17] Q coins, or QQ coins, were used as a type of commodity-based digital currency on Tencent QQ's messaging platform and emerged in early 2005.
Q coins were so effective in China that they were said to have had a destabilizing effect on the Chinese yuan due to speculation.[18] Recent interest in cryptocurrencies has prompted renewed interest in digital currencies, with bitcoin, introduced in 2008, becoming the most widely used and accepted digital currency. Digital currency is a term that refers to a specific type of electronic currency with specific properties, but it is also used for the meta-group of sub-types of digital currency; the specific meaning can only be determined within a specific legal or contextual case. Legally and technically, there are already a myriad of legal definitions of digital currency and of its many sub-types. Combining different possible properties, there exists an extensive number of implementations creating many and varied sub-types of digital currency. Many governmental jurisdictions have implemented their own unique definitions for digital currency, virtual currency, cryptocurrency, e-money, network money, e-cash, and other types of digital currency. Within any specific government jurisdiction, different agencies and regulators define different and often conflicting meanings for the different types of digital currency, based on the specific properties of a particular currency type or sub-type.
A virtual currency was defined in 2012 by the European Central Bank as "a type of unregulated, digital money, which is issued and usually controlled by its developers, and used and accepted among the members of a specific virtual community".[19] The US Department of the Treasury in 2013 defined it more tersely as "a medium of exchange that operates like a currency in some environments, but does not have all the attributes of real currency".[20] The US Department of the Treasury also stated that "Virtual currency does not have legal-tender status in any jurisdiction."[20] According to the European Central Bank's 2015 report "Virtual currency schemes – a further analysis", virtual currency is a digital representation of value, not issued by a central bank, credit institution or e-money institution, which, in some circumstances, can be used as an alternative to money.[21] In its previous report of October 2012, virtual currency was defined as a type of unregulated, digital money, which is issued and usually controlled by its developers, and used and accepted among the members of a specific virtual community.[19] According to the Bank for International Settlements' November 2015 report "Digital currencies", it is an asset represented in digital form and having some monetary characteristics.[22] Digital currency can be denominated in a sovereign currency and issued by an issuer responsible for redeeming the digital money for cash. In that case, digital currency represents electronic money (e-money). Digital currency denominated in its own units of value, or with decentralized or automatic issuance, is considered a virtual currency. As such, bitcoin is a digital currency but also a type of virtual currency. Bitcoin and its alternatives are based on cryptographic algorithms, so these kinds of virtual currencies are also called cryptocurrencies.
Cryptocurrency is a sub-type of digital currency and a digital asset that relies on cryptography to chain together digital signatures of asset transfers, on peer-to-peer networking, and on decentralization. In some cases a proof-of-work or proof-of-stake scheme is used to create and manage the currency.[23][24][25][26] Cryptocurrencies can allow electronic money systems to be decentralized. When implemented with a blockchain, the digital ledger or record-keeping system uses cryptography to edit separate shards of database entries that are distributed across many separate servers. The first and most popular such system is bitcoin, a peer-to-peer electronic monetary system based on cryptography. Most of the traditional money supply is bank money held on computers, which is considered digital currency in some cases. One could argue that our increasingly cashless society means that all currencies are becoming digital currencies, but they are not presented to us as such.[27] Currency can be exchanged electronically using debit cards and credit cards via electronic funds transfer at point of sale. A number of electronic money systems use contactless payment transfer in order to facilitate easy payment and to give the payee more confidence in not letting go of their electronic wallet during the transaction. A central bank digital currency (CBDC) is a form of universally accessible digital money in a nation that holds the same value as the country's paper currency. Like a cryptocurrency, a CBDC is held in the form of tokens. CBDCs differ from ordinary digital money such as online bank account balances in that they are issued by a country's central bank, with liabilities held by the government rather than by a commercial bank.[36] Approximately nine countries have already[when?] established a CBDC, with interest in the system increasing rapidly throughout the world. 
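The proof-of-work idea mentioned above can be illustrated with a minimal sketch (hypothetical and heavily simplified; real systems such as bitcoin use a structured block format, Merkle trees, and dynamic difficulty adjustment): a block is accepted only when the SHA-256 hash of its contents plus a nonce meets a difficulty target, and each block chains to the previous one by including its hash.

```python
import hashlib

def block_hash(prev_hash: str, data: str, nonce: int) -> str:
    # Hash the previous block's hash together with the payload and nonce,
    # chaining blocks so that changing any earlier block invalidates later hashes.
    return hashlib.sha256(f"{prev_hash}|{data}|{nonce}".encode()).hexdigest()

def mine(prev_hash: str, data: str, difficulty: int = 4) -> tuple[int, str]:
    # Brute-force a nonce until the hash starts with `difficulty` zero hex digits.
    nonce = 0
    while True:
        h = block_hash(prev_hash, data, nonce)
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

genesis = "0" * 64  # placeholder hash for the first block
nonce, h = mine(genesis, "alice pays bob 1 coin", difficulty=4)
print(nonce, h)
```

The work lies entirely in the brute-force search: verifying a found nonce takes a single hash, which is the asymmetry proof-of-work schemes rely on.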
In these nations, CBDCs have been used as a form of exchange and as a way for governments to try to prevent risks from arising within their financial systems.[37] A major design problem with central bank digital currencies is deciding whether the currency should be easily trackable: if it is traceable, the government gains more control than it currently has. There is also a technical aspect to consider: whether CBDCs should be based on tokens or on accounts, and how much anonymity users should have.[38] Digital currency has in some cases been implemented as a decentralized system of any combination of currency issuance, ownership record, ownership-transfer authorization and validation, and currency storage. Per the Bank for International Settlements (BIS), "These schemes do not distinguish between users based on location, and therefore allow value to be transferred between users across borders. Moreover, the speed of a transaction is not conditional on the location of the payer and payee."[3] Since 2001, the European Union has implemented the E-Money Directive "on the taking up, pursuit and prudential supervision of the business of electronic money institutions", last amended in 2009.[39] In the United States, electronic money is governed by Article 4A of the Uniform Commercial Code for wholesale transactions and by the Electronic Fund Transfer Act for consumer transactions. The provider's responsibility and the consumer's liability are regulated under Regulation E.[40][41] Virtual currencies pose challenges for central banks, financial regulators, departments or ministries of finance, and fiscal and statistical authorities. As of 2016, over 24 countries were investing in distributed ledger technologies (DLT), with $1.4bn in investments. 
In addition, over 90 central banks are engaged in DLT discussions, including the implications of a central bank issued digital currency.[42] In March 2018, the Marshall Islands became the first country to issue its own cryptocurrency and certify it as legal tender; the currency is called the "sovereign".[48] In 2015, the US Commodity Futures Trading Commission (CFTC) determined that virtual currencies are properly defined as commodities.[49] The CFTC has warned investors against pump and dump schemes that use virtual currencies.[50] The US Internal Revenue Service (IRS) ruling Notice 2014-21[51] defines any virtual currency, cryptocurrency or digital currency as property; gains and losses are taxable under standard property rules. On 20 March 2013, the Financial Crimes Enforcement Network issued guidance to clarify how the US Bank Secrecy Act applies to persons creating, exchanging, and transmitting virtual currencies.[52] In May 2014 the US Securities and Exchange Commission (SEC) "warned about the hazards of bitcoin and other virtual currencies".[53][54] In July 2014, the New York State Department of Financial Services proposed the most comprehensive regulation of virtual currencies to date, commonly called BitLicense. To refine the rules, it gathered input from bitcoin supporters and the financial industry through public hearings and a comment period that ran until 21 October 2014. The proposal, per the NY DFS press release, "sought to strike an appropriate balance that helps protect consumers and root out illegal activity".[55] It has been criticized by smaller companies for favoring established institutions, and Chinese bitcoin exchanges have complained that the rules are "overly broad in its application outside the United States".[56] The Bank of Canada has explored the possibility of creating a version of its currency on the blockchain.[57] The Bank of Canada teamed up with the nation's five largest banks – and the blockchain consulting firm R3 – for what was known as Project Jasper. 
In a simulation run in 2016, the central bank issued CAD-Coins onto a blockchain similar to Ethereum.[58] The banks used the CAD-Coins to exchange money the way they do at the end of each day to settle their master accounts.[58] In 2016, Fan Yifei, a deputy governor of China's central bank, the People's Bank of China (PBOC), wrote that "the conditions are ripe for digital currencies, which can reduce operating costs, increase efficiency and enable a wide range of new applications".[58] According to Fan Yifei, the best way to take advantage of the situation is for central banks to take the lead, both in supervising private digital currencies and in developing digital legal tender of their own.[59] In October 2019, the PBOC announced that a digital renminbi would be released after years of preparation.[60] This version of the currency, known as DCEP (Digital Currency Electronic Payment),[61] is based on cryptocurrency that can be "decoupled" from the banking system.[62] The announcement received a variety of responses: some believe it is more about domestic control and surveillance.[63] In December 2020, the PBOC distributed CN¥20 million worth of digital renminbi to the residents of Suzhou through a lottery program to further promote the government-backed digital currency. Recipients of the currency could make both offline and online purchases, expanding on an earlier trial that had not included online stores in the program. Around 20,000 transactions were reported by the e-commerce company JD.com in the first 24 hours of the trial. 
Contrary to other online payment platforms such as Alipay or WeChat Pay, the digital currency does not have transaction fees.[64] The Danish government proposed abolishing the obligation for selected retailers to accept payment in cash, moving the country closer to a "cashless" economy.[65] The Danish Chamber of Commerce is backing the move.[66] Nearly a third of the Danish population uses MobilePay, a smartphone application for transferring money.[65] A law passed by the National Assembly of Ecuador gives the government permission to make payments in electronic currency and proposes the creation of a national digital currency. "Electronic money will stimulate the economy; it will be possible to attract more Ecuadorian citizens, especially those who do not have checking or savings accounts and credit cards alone. The electronic currency will be backed by the assets of the Central Bank of Ecuador", the National Assembly said in a statement.[67] In December 2015, Sistema de Dinero Electrónico ("electronic money system") was launched, making Ecuador the first country with a state-run electronic payment system.[68] On 9 June 2021, the Legislative Assembly of El Salvador voted to make El Salvador the first country in the world to officially classify bitcoin as legal currency. Starting 90 days after approval, every business must accept bitcoin as legal tender for goods or services, unless it is unable to provide the technology needed to do the transaction.[69] The Dutch central bank is experimenting with a blockchain-based virtual currency called "DNBCoin".[58][70] The Unified Payments Interface (UPI) is a real-time payment system for instant money transfers between any two bank accounts held in participating banks in India. The interface was developed by the National Payments Corporation of India and is regulated by the Reserve Bank of India. This digital payment system is available 24 hours a day, every day of the year. 
UPI is agnostic to the type of user and is used for person-to-person, person-to-business, business-to-person and business-to-business transactions. Transactions can be initiated by either the payer or the payee. To identify a bank account it uses a unique Virtual Payment Address (VPA) of the form 'accountID@bankID'. The VPA can be assigned by the bank, but can also be self-specified, just like an email address; the simplest and most common form is 'mobilenumber@upi'. Money can be transferred from one VPA to another, or from a VPA to any bank account in a participating bank using the account number and bank branch details. Transfers can be inter-bank or intra-bank. UPI has no intermediate holding pond for money: whenever a transaction is requested, it withdraws funds directly from the sender's bank account and deposits them directly into the recipient's bank account. A sender can initiate and authorise a transfer using a two-step secure process: log in using a passcode → initiate → verify using a passcode. A receiver can initiate a payment request on the system, either by sending the payer a notification or by presenting a QR code. On receiving the request, the payer can decline or confirm the payment using the same two-step process: log in → confirm → verify. The system is simple enough to use that even first-time and barely literate users have adopted it in large numbers. Government-controlled Sberbank of Russia owns YooMoney, an electronic payment service and digital currency of the same name.[71] Sweden is in the process of replacing all of its physical banknotes, and most of its coins, by mid-2017.[needs update] However, the new banknotes and coins of the Swedish krona will probably be circulating at about half the 2007 peak of 12,494 kronor per capita. 
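The 'accountID@bankID' address format described above can be sketched in a few lines (a hypothetical, simplified check only; the actual UPI specification defines the exact allowed grammar for each part):

```python
import re

# Hypothetical, simplified VPA shape; the real UPI spec is stricter and richer.
VPA_PATTERN = re.compile(r"^[A-Za-z0-9.\-]+@[A-Za-z]+$")

def parse_vpa(vpa: str) -> tuple[str, str]:
    """Split an 'accountID@bankID' address into its two components."""
    if not VPA_PATTERN.match(vpa):
        raise ValueError(f"not a well-formed VPA: {vpa!r}")
    account_id, bank_id = vpa.rsplit("@", 1)
    return account_id, bank_id

print(parse_vpa("9876543210@upi"))  # the common mobile-number form
```

Note that, unlike an email address, the part after the '@' names a payment service provider handle rather than a mail domain, so no DNS lookup is implied.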
The Riksbank is planning to begin discussions of an electronic currency issued by the central bank, which "is not to replace cash, but to act as complement to it".[72] Deputy Governor Cecilia Skingsley states that cash will continue to spiral out of use in Sweden, and that while it is currently fairly easy to get cash in Sweden, it is often very difficult to deposit it into bank accounts, especially in rural areas. No decision has yet been made about whether to create an "e-krona". In her speech,[when?] Skingsley states: "The first question is whether e-krona should be booked in accounts or whether the ekrona should be some form of a digitally transferable unit that does not need an underlying account structure, roughly like cash." Skingsley also states: "Another important question is whether the Riksbank should issue e-krona directly to the general public or go via the banks, as we do now with banknotes and coins." Other questions remain to be addressed, such as whether interest rates on the e-krona should be positive, negative, or zero.[citation needed] In 2016, a city government first accepted digital currency in payment of city fees. Zug, Switzerland, added bitcoin as a means of paying small amounts, up to SFr 200, in a test and an attempt to advance Zug as a region that is advancing future technologies. In order to reduce risk, Zug immediately converts any bitcoin received into the Swiss currency.[73] Swiss Federal Railways, the government-owned railway company of Switzerland, sells bitcoins at its ticket machines.[74] In 2016, the UK's chief scientific adviser, Sir Mark Walport, advised the government to consider using a blockchain-based digital currency.[75] The chief economist of the Bank of England, the central bank of the United Kingdom, proposed the abolition of paper currency. 
The Bank has also taken an interest in blockchain.[58][76] In 2016 it embarked on a multi-year research programme to explore the implications of a central bank issued digital currency.[42] The Bank of England has produced several research papers on the topic. One suggests that the economic benefits of issuing a digital currency on a distributed ledger could add as much as 3 percent to a country's economic output.[58] The Bank said that it wanted the next version of its basic software infrastructure to be compatible with distributed ledgers.[58] Government attitudes reinforce the tendency of established, heavyweight financial actors to be risk-averse and conservative; for a long time none of these actors offered services around cryptocurrencies, and much of the criticism of cryptocurrencies came from them. "The first mover among these has been Fidelity Investments; Boston-based Fidelity Digital Assets LLC will provide enterprise-grade custody solutions, a cryptocurrency trading execution platform and institutional advising services 24 hours a day, seven days a week designed to align with blockchain's always-on trading cycle".[77] It will work with bitcoin and Ethereum, with general availability scheduled for 2019.[needs update] Hard electronic currency does not have the ability to be disputed or reversed when used: it is nearly impossible to reverse a transaction, justified or not, making it very similar to cash. By contrast, soft electronic currency payments can be reversed; usually, when a payment is reversed there is a "clearing time". A hard currency can be "softened" with a third-party service. Many existing digital currencies have not yet seen widespread usage, and may not be easily used or exchanged. 
Banks generally do not accept or offer services for them.[78] There are concerns that cryptocurrencies are extremely risky due to their very high volatility[79] and their potential for pump and dump schemes.[80] Regulators in several countries have warned against their use, and some have taken concrete regulatory measures to dissuade users.[81] The non-cryptocurrencies are all centralized; as such, they may be shut down or seized by a government at any time.[82] The more anonymous a currency is, the more attractive it is to criminals, regardless of the intentions of its creators.[82] Bitcoin has also been criticised for its energy-inefficient SHA-256-based proof of work.[83] According to Barry Eichengreen, an economist known for his work on monetary and financial economics, "cryptocurrencies like bitcoin are too volatile to possess the essential attributes of money. Stablecoins have fragile currency pegs that diminish their utility in transactions. And central bank digital currencies are a solution in search of a problem."[84]
https://en.wikipedia.org/wiki/Electronic_money
XML Signature (also called XMLDSig, XML-DSig, or XML-Sig) defines an XML syntax for digital signatures and is defined in the W3C recommendation XML Signature Syntax and Processing. Functionally, it has much in common with PKCS #7 but is more extensible and geared towards signing XML documents. It is used by various Web technologies such as SOAP, SAML, and others. XML signatures can be used to sign data – a resource – of any type, typically XML documents, but anything that is accessible via a URL can be signed. An XML signature used to sign a resource outside its containing XML document is called a detached signature; if it is used to sign some part of its containing document, it is called an enveloped signature;[1] if it contains the signed data within itself, it is called an enveloping signature.[2] An XML Signature consists of a Signature element in the http://www.w3.org/2000/09/xmldsig# namespace. Its basic structure is a SignedInfo element (containing a CanonicalizationMethod, a SignatureMethod, and one or more Reference elements, each with a DigestMethod and DigestValue), followed by a SignatureValue and an optional KeyInfo element. When validating an XML Signature, a procedure called Core Validation is followed. This procedure establishes whether the resources were really signed by the alleged party. However, because of the extensibility of the canonicalization and transform methods, the verifying party must also make sure that what was actually signed or digested is really what was present in the original data; in other words, that the algorithms used there can be trusted not to change the meaning of the signed data. Because the signed document's structure can be tampered with, leading to "signature wrapping" attacks, the validation process should also cover the XML document structure. The signed element and the signature element should be selected using an absolute XPath expression, not getElementByName methods.[4] The creation of XML Signatures is substantially more complex than the creation of an ordinary digital signature, because a given XML document (an "Infoset", in common usage among XML developers) may have more than one legal serialized representation. 
For example, whitespace inside an XML element is not syntactically significant, so that <Elem > is syntactically identical to <Elem>. Since the digital signature ensures data integrity, a single-byte difference would cause the signature to vary. Moreover, if an XML document is transferred from computer to computer, the line terminator may be changed from CR to LF to CR LF, etc. A program that digests and validates an XML document may later render the XML document in a different way, e.g. adding excess space between attribute definitions within an element definition, or using relative (vs. absolute) URLs, or by reordering namespace definitions. Canonical XML is especially important when an XML Signature refers to a remote document, which may be rendered in time-varying ways by an errant remote server. To avoid these problems and guarantee that logically identical XML documents give identical digital signatures, an XML canonicalization transform (frequently abbreviated C14n) is employed when signing XML documents (for signing the SignedInfo, a canonicalization is mandatory). These algorithms guarantee that semantically identical documents produce exactly identical serialized representations. Another complication arises because of the way that the default canonicalization algorithm handles namespace declarations: frequently a signed XML document needs to be embedded in another document, in which case the original canonicalization algorithm will not yield the same result as if the document were treated alone. For this reason, the so-called Exclusive Canonicalization, which serializes XML namespace declarations independently of the surrounding XML, was created. 
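The problem canonicalization solves can be seen with a minimal sketch (a digest comparison only, not real C14n; the `toy_normalize` helper below is an invented stand-in for the algorithm defined in the W3C Canonical XML specification): two serializations of the same logical element produce different SHA-256 digests, so both signer and verifier must agree on one canonical byte form before digesting.

```python
import hashlib

def digest(xml_bytes: bytes) -> str:
    # Digest the raw bytes, as a signature Reference does after its transforms.
    return hashlib.sha256(xml_bytes).hexdigest()

# Logically identical XML, different byte-level serializations.
a = b'<Elem attr="1"/>'
b = b'<Elem  attr="1" />'      # extra whitespace inside the tag
c = b'<Elem attr="1"/>\r\n'    # different line terminator appended

print(digest(a) == digest(b))  # False: whitespace changes the digest
print(digest(a) == digest(c))  # False: line endings change it too

def toy_normalize(xml_bytes: bytes) -> bytes:
    # Invented stand-in for canonicalization: collapse whitespace runs.
    return b" ".join(xml_bytes.split()).replace(b" />", b"/>")

# After both sides apply the same transform, the digests agree.
print(digest(toy_normalize(a)) == digest(toy_normalize(b)))  # True
```

Real C14n is far more involved (attribute ordering, namespace propagation, character references), which is precisely why the specification pins down one algorithm rather than leaving normalization to each implementation.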
XML Signature is more flexible than other forms of digital signatures such as Pretty Good Privacy and Cryptographic Message Syntax, because it does not operate on binary data but on the XML Infoset, allowing it to work on subsets of the data (this is also possible with binary data in non-standard ways, for example by encoding blocks of binary data in base64 ASCII), to bind the signature and signed information in various ways, and to perform transformations. Another core concept is canonicalization, that is, signing only the "essence" of a document, eliminating meaningless differences like whitespace and line endings. There are criticisms directed at the architecture of XML security in general,[5] and at the suitability of XML canonicalization in particular as a front end to signing and encrypting XML data, due to its complexity, inherent processing requirements, and poor performance characteristics.[6][7][8] The argument is that performing XML canonicalization causes excessive latency that is simply too much to overcome for transactional, performance-sensitive SOA applications. These issues are being addressed in the XML Security Working Group.[9][10] Without proper policy and implementation,[4] the use of XML DSig in SOAP and WS-Security can lead to vulnerabilities,[11] such as XML signature wrapping.[12]
https://en.wikipedia.org/wiki/XML_Signature
DigiDoc (Digital Document) is a family of digital signature and cryptographic computing file formats utilizing a public key infrastructure. It currently has three generations of sub-formats: DDOC, a later binary-based BDOC, and the currently used ASiC-E format, which is intended to replace the previous generations. DigiDoc was created and is developed and maintained by RIA[1] (Riigi Infosüsteemi Amet, the Information System Authority of Estonia). The format is used to legally sign, and optionally encrypt, files such as text documents as part of electronic transactions. All operations are done using a national ID card, a hardware token whose chip carries digital PKI certificates used to verify a person's signature mathematically. The signed file is a container holding the actual signed, unmodified files, so the operation does not require any support from the software that created those files. A container and its signatures can be created using an application like qDigiDoc, or via a web service in the user's web browser with a signing extension. When an application is used, the container is typically exchanged between the signing parties as an email attachment until everyone has signed it and has their own complete copy. Web services also utilize identity cards for session authentication, using an authentication certificate that is likewise stored on the ID card. A DigiDoc container contains the actual files and metadata, including a hash that represents those files. When signing, the software sends the content hash over the standardised PKCS #11 interface to the user's ID card. After verifying the user's PIN, the ID card signs the hash internally and returns a signature, which is then stored in the DigiDoc container. During signing, the certificate validity of each signing party is checked, and a signed timestamp is retrieved, using an OCSP service. 
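The hash-then-sign flow described above can be sketched as follows. This is a toy illustration only: an HMAC stands in for the ID card's internal signature operation, the hard-coded PIN and secret are invented, and the container is a plain dict rather than a real DDOC/BDOC/ASiC-E structure.

```python
import hashlib
import hmac

CARD_SECRET = b"stand-in for the key that never leaves the chip"

def card_sign(content_hash: bytes, pin: str) -> bytes:
    # Stand-in for the ID card: verify the PIN, then sign the hash internally.
    # The key material never leaves the "card"; only the signature is returned.
    if pin != "1234":  # toy PIN check
        raise PermissionError("wrong PIN")
    return hmac.new(CARD_SECRET, content_hash, hashlib.sha256).digest()

def build_container(files: dict[str, bytes], pin: str) -> dict:
    # Hash the file contents; the files themselves stay unmodified ...
    content_hash = hashlib.sha256(b"".join(files.values())).digest()
    # ... send only the hash to the "card", and store the returned signature
    # alongside the originals, as the DigiDoc container does.
    signature = card_sign(content_hash, pin)
    return {"files": files, "hash": content_hash, "signature": signature}

container = build_container({"contract.txt": b"terms ..."}, pin="1234")
print(len(container["signature"]))  # 32: an HMAC-SHA256 stand-in signature
```

The key property the sketch preserves is that only a fixed-size hash crosses the PKCS #11 boundary, so arbitrarily large files can be signed by a smart-card chip with very limited memory.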
The signed timestamp makes it possible to prove later at what time a document was signed (as the timestamp is derived from the document hash) and that each signing certificate was not on a certificate revocation list at the time of signing. Signatures made prior to a revocation remain valid, so documents do not have to be re-signed when the user receives new certificates. ASiC-E (Associated Signature Containers) and its extended variant is the latest DigiDoc container format; files use the .asice file extension. BDOC (Binary Document), of which the latest version is 2.1, is based on ETSI's ASiC signature container standards and is the official Estonian national standard EVS 821:2014.[2] Files use the .bdoc file extension. DDOC (Digital Document) is the first-generation DigiDoc format; files use the .ddoc file extension. The most widely used application is the qDigiDoc graphical desktop software, which runs on Microsoft Windows, Apple Mac OS X and various Linux distributions. qDigiDoc is open-source software that can be freely downloaded and installed. Applications also exist for Apple iPad tablet devices and Windows phones. Currently, Estonian- and Finnish-government-issued cards work with qDigiDoc 3.x and later versions. Multiple programming languages are supported for creating applications and services utilizing the DigiDoc format, including C++, C, Java, .NET,
https://en.wikipedia.org/wiki/DigiDoc
An electronic lab notebook (also known as electronic laboratory notebook, or ELN) is a computer program designed to replace paper laboratory notebooks. Lab notebooks in general are used by scientists, engineers, and technicians to document research, experiments, and procedures performed in a laboratory. A lab notebook is often maintained as a legal document and may be used in a court of law as evidence. Similar to an inventor's notebook, the lab notebook is also often referred to in patent prosecution and intellectual property litigation. Electronic lab notebooks are a fairly new technology and offer many benefits to the user as well as to organizations. For example, electronic lab notebooks are easier to search, simplify data copying and backups, and support collaboration among many users.[1] ELNs can have fine-grained access controls, and can be more secure than their paper counterparts.[2] They also allow the direct incorporation of data from instruments, replacing the practice of printing out data to be stapled into a paper notebook.[3] This is a list of ELN software packages. It is incomplete, as a recent review listed 96 active and 76 inactive (172 total) ELN products.[4] Notably, this review and other lists of ELN software often do not include widely used generic note-taking software like OneNote, Notion, Jupyter, etc., due to their lack of nominal ELN features like time-stamping and append-only editing. Some ELNs are web-based; others are used on premises, and a few are available for both environments.
https://en.wikipedia.org/wiki/List_of_ELN_software_packages
Data management comprises all disciplines related to handling data as a valuable resource; it is the practice of managing an organization's data so it can be analyzed for decision making.[1] The concept of data management emerged alongside the evolution of computing technology. In the 1950s, as computers became more prevalent, organizations began to grapple with the challenge of organizing and storing data efficiently. Early methods relied on punch cards and manual sorting, which were labor-intensive and prone to errors. The introduction of database management systems in the 1970s marked a significant milestone, enabling structured storage and retrieval of data. By the 1980s, relational database models revolutionized data management, emphasizing the importance of data as an asset and fostering a data-centric mindset in business. This era also saw the rise of data governance practices, which prioritized the organization and regulation of data to ensure quality and compliance. Over time, advancements in technology, such as cloud computing and big data analytics, have further refined data management, making it a cornerstone of modern business operations. As of 2025[update], data management encompasses a wide range of practices, from data storage and security to analytics and decision-making, reflecting its critical role in driving innovation and efficiency across industries.[2] The Data Management Body of Knowledge (DMBoK), developed by the Data Management Association (DAMA), outlines key knowledge areas that serve as the foundation for modern data management practices, suggesting a framework for organizations to manage data as a strategic asset. Data governance involves setting policies, procedures, and accountability frameworks to ensure that data is accurate, secure, and used responsibly throughout the organization. Data architecture focuses on designing the overall structure of data systems; it ensures that data flows are efficient and that systems are scalable, adaptable, and aligned with business needs. 
Data modeling centers on creating models that logically represent data relationships; it is essential both for designing databases and for ensuring that data is structured in a way that facilitates analysis and reporting. Data storage and operations deals with the physical storage of data and its day-to-day management, covering everything from traditional data centers to cloud-based storage solutions and efficient data processing. Data integration ensures that data from various sources can be seamlessly shared and combined across multiple systems, which is critical for comprehensive analytics and decision-making. Document and content management focuses on managing unstructured data such as documents, multimedia, and other content, ensuring that it is stored, categorized, and easily retrievable. Data warehousing involves consolidating data into repositories that support analytics, reporting, and business insights. Metadata management deals with data about data, including definitions, origin, and usage, to enhance the understanding and usability of the organization's data assets. Data quality is dedicated to ensuring that data remains accurate, complete, and reliable, with an emphasis on continuous monitoring and improvement practices. Reference data comprises standardized codes and values for consistent interpretation across systems. Master data management (MDM) governs and centralizes an organization's critical data, ensuring a unified, reliable information source that supports effective decision-making and operational efficiency. Data security refers to a comprehensive set of practices and technologies designed to protect digital information and systems from unauthorized access, use, disclosure, modification, or destruction. It encompasses encryption, access controls, monitoring, and risk assessments to maintain data integrity, confidentiality, and availability. Data privacy involves safeguarding individuals' personal information by ensuring its collection, storage, and use comply with consent, legal standards, and confidentiality principles. 
It emphasizes protecting sensitive data from misuse or unauthorized access while respecting users' rights. The distinction between data and derived value is illustrated by the "information ladder" or the DIKAR model. The DIKAR model stands for Data, Information, Knowledge, Action, and Result. It is a framework used to bridge the gap between raw data and actionable outcomes. The model emphasizes the transformation of data into information, which is then interpreted to create knowledge. This knowledge guides actions that lead to measurable results. DIKAR is widely applied in organizational strategies, helping businesses align their data management processes with decision-making and performance goals. By focusing on each stage, the model ensures that data is effectively utilized to drive informed decisions and achieve desired outcomes. It is particularly valuable in technology-driven environments.[3] The "information ladder" illustrates the progression from data (raw facts) to information (processed data), knowledge (interpreted information), and ultimately wisdom (applied knowledge). Each step adds value and context, enabling better decision-making. It emphasizes the transformation of unstructured inputs into meaningful insights for practical use.[4] In research, data management refers to the systematic process of handling data throughout its lifecycle. This includes activities such as collecting, organizing, storing, analyzing, and sharing data to ensure its accuracy, accessibility, and security. Effective data management also involves creating a data management plan (DMP) addressing issues like ethical considerations, compliance with regulatory standards, and long-term preservation. Proper management enhances research transparency, reproducibility, and the efficient use of resources, ultimately contributing to the credibility and impact of research findings. 
It is a critical practice across disciplines to ensure data integrity and usability both during and after a research project.[5] Big data refers to the collection and analysis of massive data sets. While big data is a recent phenomenon, the requirement for data to aid decision-making traces back to the early 1970s with the emergence of decision support systems (DSS). These systems can be considered the initial iteration of data management for decision support.[6] Studies indicate that customer transactions account for a 40% annual increase in the data collected, which means that financial data has a considerable impact on business decisions. Therefore, modern organizations are using big data analytics to identify 5 to 10 new data sources that can help them collect and analyze data for improved decision-making. Jonsen (2013) explains that organizations using average analytics technologies are 20% more likely to gain higher returns than their competitors who have not introduced any analytics capabilities. IRI has also reported that the retail industry could experience an increase of more than $10 billion each year from the implementation of modern analytics technologies. The following hypothesis can therefore be proposed: economic and financial outcomes can impact how organizations use data analytics tools.
https://en.wikipedia.org/wiki/Data_management
Laboratory informatics is the specialized application of information technology aimed at optimizing and extending laboratory operations.[1] It encompasses data acquisition (e.g. through sensors and hardware[2] or voice[3][4][5]), instrument interfacing, laboratory networking, data processing, specialized data management systems (such as a chromatography data system), a laboratory information management system, scientific data management (including data mining and data warehousing), and knowledge management (including the use of an electronic lab notebook). It has become more prevalent with the rise of other "informatics" disciplines such as bioinformatics, cheminformatics and health informatics. Several graduate programs are focused on some form of laboratory informatics, often with a clinical emphasis.[6] A closely related field - which some consider to subsume laboratory informatics - is laboratory automation. In the context of public health laboratories, the Association of Public Health Laboratories has identified 19 areas for self-assessment of laboratory informatics in their Laboratories Efficiencies Initiative.[7] These include the following capability areas.
https://en.wikipedia.org/wiki/Laboratory_informatics
Project Jupyter (pronounced "Jupiter") is a project to develop open-source software, open standards, and services for interactive computing across multiple programming languages. It was spun off from IPython in 2014 by Fernando Pérez and Brian Granger. Project Jupyter's name is a reference to the three core programming languages supported by Jupyter, which are Julia, Python and R. Its name and logo are an homage to Galileo's discovery of the moons of Jupiter, as documented in notebooks attributed to Galileo. Jupyter is financially sponsored by the Jupyter Foundation.[1] The first version of Notebooks for IPython was released in 2011 by a team including Fernando Pérez, Brian Granger, and Min Ragan-Kelley.[2] In 2014, Pérez announced a spin-off project from IPython called Project Jupyter.[3] IPython continues to exist as a Python shell and a kernel for Jupyter, while the notebook and other language-agnostic parts of IPython moved under the Jupyter name.[4][5] Jupyter supports execution environments (called "kernels") in several dozen languages, including Julia, R, Haskell, Ruby, and Python (via the IPython kernel). In 2015, about 200,000 Jupyter notebooks were available on GitHub. By 2018, about 2.5 million were available.[6] In January 2021, nearly 10 million were available, including notebooks about the first observation of gravitational waves[7] and about the 2019 discovery of a supermassive black hole.[8] Major cloud computing providers have adopted the Jupyter Notebook or derivative tools as a frontend interface for cloud users. Examples include Amazon SageMaker Notebooks,[9] Google's Colab,[10][11] and Microsoft's Azure Notebook.[12] Visual Studio Code supports local development of Jupyter notebooks.
As of July 2022, the Jupyter extension for VS Code has been downloaded over 40 million times, making it the second-most popular extension in the VS Code Marketplace.[13] The steering committee of Project Jupyter received the 2017 ACM Software System Award, an annual award that honors people or an organization "for developing a software system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both".[14] The Atlantic published an article entitled "The Scientific Paper Is Obsolete" in 2018, discussing the role of Jupyter Notebook and the Mathematica notebook in the future of scientific publishing.[15] Economist Paul Romer, in response, published a blog post in which he reflected on his experiences using Mathematica and Jupyter for research, concluding in part that Jupyter "does a better job of delivering what Theodore Gray had in mind when he designed the Mathematica notebook."[16] In 2021, Nature named Jupyter as one of ten computing projects that transformed science.[8] Jupyter Notebook can colloquially refer to two different concepts: either the user-facing application to edit code and text, or the underlying file format, which is interoperable across many implementations. Jupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents. Jupyter Notebook is built using several open-source libraries, including IPython, ZeroMQ, Tornado, jQuery, Bootstrap, and MathJax. A Jupyter Notebook application is a browser-based REPL containing an ordered list of input/output cells which can contain code, text (using GitHub Flavored Markdown), mathematics, plots and rich media. Jupyter Notebook is similar to the notebook interface of other programs such as Maple, Mathematica, and SageMath, a computational interface style that originated with Mathematica in the 1980s.
Jupyter interest overtook the popularity of the Mathematica notebook interface in early 2018.[15] JupyterLab is a newer user interface for Project Jupyter, offering a flexible user interface and more features than the classic notebook UI. The first stable release was announced on February 20, 2018.[19][20] In 2015, a joint $6 million grant from The Leona M. and Harry B. Helmsley Charitable Trust, The Gordon and Betty Moore Foundation, and The Alfred P. Sloan Foundation funded work that led to expanded capabilities of the core Jupyter tools, as well as to the creation of JupyterLab.[21] GitHub announced in November 2022 that JupyterLab would be available in its online coding platform, Codespaces.[22] In August 2023, Jupyter AI, a Jupyter extension, was released. This extension incorporates generative artificial intelligence into Jupyter notebooks, enabling users to explain and generate code, rectify errors, summarize content, ask questions about their local files, and generate complete notebooks from natural language prompts.[23] JupyterHub is a multi-user server for Jupyter Notebooks. It is designed to support many users by spawning, managing, and proxying many single-user Jupyter Notebook servers.[24] A Jupyter Notebook document is a JSON file, following a versioned schema, usually ending with the ".ipynb" extension. The main parts of a Jupyter Notebook document are the metadata, the notebook format, and the list of cells. The metadata is a dictionary of definitions used to set up and display the notebook. The notebook format is the version number of the schema. The list of cells contains cells of different types: Markdown cells (for display), code cells (to execute), and the output of code cells.[25] While JSON is the most common format, it is possible to forgo some features (like storing images and metadata) and save notebooks as Markdown documents using extensions like Jupytext.[26] Jupytext is often used in conjunction with version control to make diffing and merging of notebooks simpler.
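The .ipynb layout described above can be sketched as a plain JSON document. This is a minimal illustration, not a complete notebook: the field names follow the nbformat 4 schema, while the cell contents are made up for the example.

```python
import json

# A minimal notebook document: top-level metadata, schema version
# numbers ("nbformat"), and a list of cells.
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"language_info": {"name": "python"}},
    "cells": [
        # A Markdown (display) cell
        {"cell_type": "markdown", "metadata": {}, "source": ["# A heading\n"]},
        # A code cell with its captured output
        {"cell_type": "code", "metadata": {}, "execution_count": None,
         "source": ["print('hello')\n"],
         "outputs": [{"output_type": "stream", "name": "stdout",
                      "text": ["hello\n"]}]},
    ],
}

# On disk, an .ipynb file is just this structure serialized as JSON.
text = json.dumps(nb, indent=1)
doc = json.loads(text)
print(doc["nbformat"], len(doc["cells"]))  # 4 2
```

Because the format is plain JSON, generic tools can read it, which is also why line-oriented alternatives like Jupytext exist for friendlier diffs.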
https://en.wikipedia.org/wiki/Jupyter
DigiLocker is an Indian state-owned secure cloud-based digitization service provided by the Indian Ministry of Electronics and Information Technology (MeitY) under its Digital India initiative. DigiLocker allows access to digital versions of various documents including driver's licenses, vehicle registration certificates and academic mark sheets.[3] It also provides 1 GB of storage space to each account to upload scanned copies of legacy documents. Users need to possess an Aadhaar number to use DigiLocker. During registration, user identity is verified using a one-time password (OTP) sent to the linked mobile number.[4] The beta version of the service was rolled out in February 2015,[5] and was launched to the public by Prime Minister Narendra Modi on 1 July 2015.[6][7] Storage space for uploaded legacy documents was initially 100 MB.[8] Individual files are limited to 10 MB. In July 2016, DigiLocker recorded 2.013 million users with a repository of 2.413 million documents. The number of users saw a large jump of 753,000 new users in April, when the central government urged municipal bodies to use DigiLocker to make their administration paperless.[9] From 2017, the facility was extended to allow students of the ICSE board to store their class X and XII certificates in DigiLocker and share them as required.[10] In February 2017, Kotak Mahindra Bank started providing access to documents in DigiLocker from within its net-banking application, allowing users to electronically sign and share them.[11] In May 2017, over 108 hospitals, including the Tata Memorial Hospital, were planning to launch the use of DigiLocker for storing cancer patients' medical documents and test reports. According to a UIDAI architect, patients would be provided a number key, which they could share with other hospitals to grant them access to their test reports.[12] As of December 2019, DigiLocker provides access to over 372 crore authentic documents from 149 issuers.
Over 3.3 crore users are registered on the platform and 43 requester organisations are accepting documents from DigiLocker.[13] In 2023, the Government of India integrated the passport application form with DigiLocker.[14] As of December 2024, the DigiLocker platform had facilitated 9.4 billion document issuances to 43.49 crore users.[15] There is also an associated facility for e-signing documents. The service is intended to minimise the use of physical documents and reduce administrative expense, while proving the authenticity of the documents, providing secure access to government-issued documents and making it easy for residents to receive services. Each user's digital locker has the following sections.[16] DigiLocker is not merely a technical platform. The Ministry of Electronics and IT has notified rules concerning the service.[17] Amendments made to the Information Technology Act, 2000 in February 2017 state that the documents provided and shared through DigiLocker are at par with the corresponding physical certificates.[18] According to this rule: (1) Issuers may start issuing and requesters may start accepting digitally (or electronically) signed certificates or documents shared from subscribers' Digital Locker accounts at par with the physical documents in accordance with the provisions of the Act and rules made thereunder. (2) When such certificate or document mentioned in sub-rule (1) has been issued or pushed in the Digital Locker System by an issuer and subsequently accessed or accepted by a requester through the URI, it shall be deemed to have been shared by the issuer directly in electronic form.[19] The following are the security measures[21] used in the system:
https://en.wikipedia.org/wiki/DigiLocker
CAdES (CMS Advanced Electronic Signatures) is a set of extensions to Cryptographic Message Syntax (CMS) signed data making it suitable for advanced electronic signatures.[1] CMS is a general framework for electronic signatures for various kinds of transactions, such as purchase requisitions, contracts or invoices.[2] CAdES specifies precise profiles of CMS signed data making it compliant with the European eIDAS regulation (Regulation on electronic identification and trust services for electronic transactions in the internal market). The eIDAS regulation enhances and repeals the Electronic Signatures Directive 1999/93/EC.[3][4] eIDAS is legally binding in all EU member states since July 2014. An electronic signature that has been created in compliance with eIDAS has the same legal value as a handwritten signature.[3] An electronic signature technically implemented based on CAdES has the status of an advanced electronic signature.[2] A resulting property of CAdES is that electronically signed documents can remain valid for long periods, even if the signer or verifying party later attempts to deny the validity of the signature. A CAdES-based electronic signature is accepted in a court proceeding as evidence, as advanced electronic signatures are legally binding.[5] But it gets higher probative value when enhanced to a qualified electronic signature. To receive that legal standing, it needs to be furnished with a qualified digital certificate and created by a secure signature creation device ("qualified electronic signature").[4][6] The authorship of a statement with a qualified electronic signature cannot be challenged - the statement is non-repudiable. The main document describing the format is ETSI TS 101 733 Electronic Signature and Infrastructure (ESI) – CMS Advanced Electronic Signature (CAdES).[2]
ETSI TS 101 733 was first issued as V1.2.2 (2000-12). The current release has version number V2.2.1 (2013-04). ETSI is working on a new draft of CAdES. All drafts and released documents are publicly accessible.[1] ETSI TS V1.7.4 (2008-07) is technically equivalent to RFC 5126. The RFC 5126 document builds on existing standards that are widely adopted. ETSI TS 101 733 specifies formats for Advanced Electronic Signatures built on CMS (CAdES). It defines a number of signed and unsigned optional signature properties, resulting in support for a number of variations in the signature contents and processing requirements. In order to maximize interoperability in communities applying CAdES to particular environments, it was necessary to identify a common set of options appropriate to each environment. Such a selection is commonly called a profile. ETSI TS 103 173[7] describes profiles for CAdES signatures, in particular their use in the context of the EU Services Directive, "Directive 2006/123/EC of the European Parliament and of the Council of 12 December 2006 on services in the internal market". There are four profiles available:
https://en.wikipedia.org/wiki/CAdES_(computing)
In cryptography, PKCS #7 ("PKCS #7: Cryptographic Message Syntax", "CMS") is a standard syntax for storing signed and/or encrypted data. PKCS #7 is one of the family of standards called Public-Key Cryptography Standards (PKCS) created by RSA Laboratories. The latest version, 1.5, is available as RFC 2315.[1] An update to PKCS #7 is described in RFC 2630,[2] which was replaced in turn by RFC 3369,[3] RFC 3852[4] and then by RFC 5652.[5] PKCS #7 files may be stored both in raw DER format and in PEM format. PEM format is the same as DER format but wrapped inside Base64 encoding and sandwiched between "-----BEGIN PKCS7-----" and "-----END PKCS7-----". Windows uses the .p7b file name extension[6] for both these encodings. A typical use of a PKCS #7 file would be to store certificates and/or certificate revocation lists (CRLs). Here's an example of how to first obtain a certificate, then wrap it inside a PKCS #7 archive and then read from that archive:
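The promised example can be sketched with standard OpenSSL commands. For a self-contained illustration, a locally generated self-signed certificate stands in for one downloaded from a server (with a live host you would typically obtain it via `openssl s_client` instead); the file names are arbitrary.

```shell
# Obtain a certificate; here a self-signed one stands in for a downloaded one
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -subj "/CN=example.test" -days 1

# Wrap the certificate inside a PKCS #7 (.p7b) archive; -nocrl omits the CRL
openssl crl2pkcs7 -nocrl -certfile cert.pem -out cert.p7b

# Read the certificates back out of the archive
openssl pkcs7 -in cert.p7b -print_certs
```

The resulting cert.p7b is PEM-encoded by default; adding `-outform DER` to the `crl2pkcs7` step produces the raw DER encoding instead.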
https://en.wikipedia.org/wiki/PKCS_7
Communications interception can mean:
https://en.wikipedia.org/wiki/Communications_interception_(disambiguation)
Indiscriminate monitoring is the mass monitoring of individuals or groups without careful judgement of wrongdoing.[1] This form of monitoring may be done by government agencies, employers, and retailers. Indiscriminate monitoring uses tools such as email monitoring, telephone tapping, geo-location, and health monitoring to monitor private lives. Organizations that conduct indiscriminate monitoring may also use surveillance technologies to collect large amounts of data that could violate privacy laws or regulations. These practices can affect individuals emotionally, mentally, and globally.[2] Governments have also issued various protections against indiscriminate monitoring.[3] Indiscriminate monitoring can occur through electronic employee monitoring, social networking, targeted advertising, and geological health monitoring. All of these tools are used to monitor individuals without their direct knowledge. Electronic employee monitoring is the use of electronic devices to collect data on an employee's performance or general well-being.[4] The justifications offered for such monitoring include, but are not limited to, the following. Electronic employee monitoring uses many tools to monitor employees.
One of the most common tools of electronic employee monitoring is monitoring technology.[6] Email monitoring involves employers using employee monitoring software to collect data every time an employee comes into contact with technology in the workplace.[7] The software may also monitor passwords, websites, social media, email, screenshots, and other computer actions.[8] In most jurisdictions, employers are permitted to use monitoring to protect the company's assets, increase productivity, or protect themselves from liability.[9] However, the impact on privacy can affect employee contentment and well-being at the company.[10] Social media monitoring is the use of social media measurement and other technologies to capture the data individuals share via these networks.[11] Social networks may allow third parties to obtain the personal information of individuals through terms of agreement.[12] In addition to social media networks collecting information for analytics, government agencies also use social media monitoring for public issues and other matters. Governments use the often-public data of social media to conduct data collection on individuals or groups of people.[13] Targeted advertising is a method used by companies to monitor customer tastes and preferences in order to create personalized advertising.[14] Companies conduct mass surveillance by monitoring user activity and IP activity.[12] Many companies justify targeted advertising by its social and economic implications. However, the indiscriminate privacy violations involved in producing targeted advertisements cause consumers great concern.[15] Geological health monitoring is the monitoring of an individual's location and/or health through tools that collect personal information. Geological health monitoring can be conducted through smart toys, home surveillance systems, and fitness watches or applications.[16] Technological devices such as fitness watches can serve as great tools.
However, they do have privacy implications that could risk health data exposure.[17][18] The right to privacy is most explicitly reflected in Amendment I, Amendment III, and Amendment IV of the U.S. Constitution, which cover the privacy of belief, the privacy of the home, and the privacy of the person and possessions.[19] In 2007, the Bush Administration announced that NSA surveillance of citizens, previously conducted without warrants, would require warrants. This announcement provided further protection against indiscriminate monitoring because it prevented individuals from being monitored without just cause.[3] FISA amendments were passed to promote national security and privacy. These amendments require the NSA to complete certification annually. Furthermore, they state that the use of mass surveillance information for any reason other than national security is prohibited.[3] In 2020, Proposition 24, the Privacy Rights and Enforcement Act Initiative, appeared as a California ballot proposition. This act states that consumers can prevent companies from sharing their personal information. It can also prevent companies from retaining the personal information of individuals for long periods of time.[20] There may be emotional and mental considerations with regard to indiscriminate monitoring. When individuals know they are monitored, it can produce stress, frustration, and a negative attitude. Individuals may feel degraded if their privacy is infringed upon. For example, in the workplace, if employees know that their emails and other communications are being monitored, this can stir up distrust and increase job dissatisfaction.[2] Recently, researchers have been discussing the implications of indiscriminate monitoring, the public space, and the government's role.
One argument states that indiscriminate government monitoring infringes on the right to privacy and results in harm to citizens.[21]
https://en.wikipedia.org/wiki/Indiscriminate_monitoring
The bit is the most basic unit of information in computing and digital communication. The name is a portmanteau of binary digit.[1] The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, on/off, or +/− are also widely used. The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device. A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined.[2] Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is usually a nibble. In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability,[3] or the information that is gained when the value of such a variable becomes known.[4][5] As a unit of information or negentropy, the bit is also known as a shannon,[6] named after Claude E. Shannon. As a measure of the length of a digital string that is encoded as symbols over a 0-1 (binary) alphabet, the bit has been called a binit,[7] but this usage is now rare.[8] In data compression, the goal is to find a shorter representation for a string, so that it requires fewer bits when stored or transmitted; the string would be "compressed" into the shorter representation before doing so, and then "decompressed" into its original form when read from storage or received. The field of algorithmic information theory is devoted to the study of the "irreducible information content" of a string (i.e.
its shortest-possible representation length, in bits), under the assumption that the receiver has minimal a priori knowledge of the method used to compress the string. In error detection and correction, the goal is to add redundant data to a string, to enable the detection and/or correction of errors during storage or transmission; the redundant data would be computed before doing so, and stored or transmitted, and then "checked" or "corrected" when the data is read or received. The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte. Ralph Hartley suggested the use of a logarithmic measure of information in 1928.[9] Claude E. Shannon first used the word "bit" in his seminal 1948 paper "A Mathematical Theory of Communication".[10][11][12] He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit".[10] A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double-stranded DNA, etc. Perhaps the earliest example of a binary storage device was the punched card invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM. A variant of that idea was the perforated paper tape.
In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870). The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". These relays functioned as mechanical switches, physically toggling between states to represent binary data, forming the fundamental building blocks of early computing and control systems. When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques. In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards. In modern semiconductor memory, such as dynamic random-access memory or a solid-state drive, the two values of a bit are represented by two levels of electric charge stored in a capacitor or a floating-gate MOSFET. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit.
In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes and two-dimensional QR codes, bits are encoded as lines or squares which may be either black or white. In modern digital computing, bits are transformed in Boolean logic gates. Bits are transmitted one at a time in serial transmission. By contrast, multiple bits are transmitted simultaneously in a parallel transmission. A serial computer processes information in either a bit-serial or a byte-serial fashion. From the standpoint of data communications, a byte-serial transmission is an 8-way parallel transmission with binary signalling. In programming languages such as C, a bitwise operation operates on binary strings as though they are vectors of bits, rather than interpreting them as binary numbers. Data transfer rates are usually measured in decimal SI multiples. For example, a channel capacity may be specified as 8 kbit/s = 1 kB/s. File sizes are often measured in (binary) IEC multiples of bytes, for example 1 KiB = 1024 bytes = 8192 bits. Confusion may arise in cases where (for historic reasons) file sizes are specified with binary multipliers using the ambiguous prefixes K, M, and G rather than the IEC standard prefixes Ki, Mi, and Gi.[13] Mass storage devices are usually measured in decimal SI multiples, for example 1 TB = 10^12 bytes. Confusingly, the storage capacity of a directly addressable memory device, such as a DRAM chip, or an assemblage of such chips on a memory module, is specified as a binary multiple - using the ambiguous prefix G rather than the IEC recommended Gi prefix. For example, a DRAM chip that is specified (and advertised) as having "1 GB" of capacity has 2^30 bytes of capacity.
As of 2022, the difference between the popular understanding of a memory system with "8 GB" of capacity and the SI-correct meaning of "8 GB" was still causing difficulty to software designers.[14] The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this should be used in all multiples, such as 'kbit', for kilobit.[15] However, the lower-case letter 'b' is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the upper-case letter 'B' is the standard and customary symbol for byte. Multiple bits may be expressed and represented in several ways. For convenience of representing commonly recurring groups of bits in information technology, several units of information have traditionally been used. The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer[2][16][17][18][19] and for this reason it was used as the basic addressable element in many computer architectures. By 1993, the trend in hardware design had converged on the 8-bit byte.[20] However, because of the ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits. Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the early 21st century, retail personal or server computers have a word size of 32 or 64 bits. The International System of Units defines a series of decimal prefixes for multiples of standardized units which are commonly also used with the bit and the byte.
The prefixes kilo (10^3) through yotta (10^24) increment by multiples of one thousand, and the corresponding units are the kilobit (kbit) through the yottabit (Ybit).
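Several of the points above - bitwise operations as vector-of-bits manipulation, the one-bit entropy of a fair binary variable, and the gap between decimal (SI) and binary (IEC) prefixes - can be sketched in a few lines of Python (the article's example language is C, but the operators are the same):

```python
import math

# Bitwise operations treat an integer as a vector of bits, not a number
flags = 0b1010
print(flags | 0b0001)  # OR: set the lowest bit  -> 0b1011 = 11
print(flags & 0b0010)  # AND: test the second bit -> 0b0010 = 2
print(flags ^ 0b1111)  # XOR: flip all four bits  -> 0b0101 = 5

# Information entropy of a binary variable with probability p of being 1;
# it is exactly 1 bit (one shannon) when p = 0.5
def entropy_bits(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(entropy_bits(0.5))  # 1.0

# Decimal (SI) vs binary (IEC) prefixes
print(10**9)   # 1 GB  (SI)  = 1,000,000,000 bytes
print(2**30)   # 1 GiB (IEC) = 1,073,741,824 bytes
```

The last two lines show the roughly 7% discrepancy behind the "1 GB" DRAM-chip confusion described above.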
https://en.wikipedia.org/wiki/Bit
The decibel (symbol: dB) is a relative unit of measurement equal to one tenth of a bel (B). It expresses the ratio of two values of a power or root-power quantity on a logarithmic scale. Two signals whose levels differ by one decibel have a power ratio of 10^(1/10) (approximately 1.26) or a root-power ratio of 10^(1/20) (approximately 1.12).[1][2] The unit fundamentally expresses a relative change but may also be used to express an absolute value as the ratio of a value to a fixed reference value; when used in this way, the unit symbol is often suffixed with letter codes that indicate the reference value. For example, for the reference value of 1 volt, a common suffix is "V" (e.g., "20 dBV").[3][4] Two principal types of scaling of the decibel are in common use. When expressing a power ratio, it is defined as ten times the logarithm with base 10.[5] That is, a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing root-power quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level. The two decibel scalings differ by a factor of two, so that the related power and root-power levels change by the same value in linear systems, where power is proportional to the square of amplitude. The definition of the decibel originated in the measurement of transmission loss and power in telephony of the early 20th century in the Bell System in the United States. The bel was named in honor of Alexander Graham Bell, but the bel is seldom used. Instead, the decibel is used for a wide variety of measurements in science and engineering, most prominently for sound power in acoustics, in electronics and control theory. In electronics, the gains of amplifiers, attenuation of signals, and signal-to-noise ratios are often expressed in decibels. The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. Until the mid-1920s, the unit for loss was miles of standard cable (MSC).
1 MSC corresponded to the loss of power over one mile (approximately 1.6 km) of standard telephone cable at a frequency of 5000 radians per second (795.8 Hz), and matched closely the smallest attenuation detectable to a listener. A standard telephone cable was "a cable having uniformly distributed resistance of 88 ohms per loop-mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile" (approximately corresponding to 19 gauge wire).[6] In 1924, Bell Telephone Laboratories received a favorable response to a new unit definition among members of the International Advisory Committee on Long Distance Telephony in Europe and replaced the MSC with the Transmission Unit (TU). 1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power.[7] The definition was conveniently chosen such that 1 TU approximated 1 MSC; specifically, 1 MSC was 1.056 TU. In 1928, the Bell system renamed the TU the decibel,[8] being one tenth of a newly defined unit for the base-10 logarithm of the power ratio. It was named the bel, in honor of the telecommunications pioneer Alexander Graham Bell.[9] The bel is seldom used, as the decibel was the proposed working unit.[10] The naming and early definition of the decibel is described in the NBS Standard's Yearbook of 1931:[11] Since the earliest days of the telephone, the need for a unit in which to measure the transmission efficiency of telephone facilities has been recognized. The introduction of cable in 1896 afforded a stable basis for a convenient unit and the "mile of standard" cable came into general use shortly thereafter. This unit was employed up to 1923 when a new unit was adopted as being more suitable for modern telephone work. The new transmission unit is widely used among the foreign telephone organizations and recently it was termed the "decibel" at the suggestion of the International Advisory Committee on Long Distance Telephony.
The decibel may be defined by the statement that two amounts of power differ by 1 decibel when they are in the ratio of 10^0.1, and any two amounts of power differ by N decibels when they are in the ratio of 10^(0.1N). The number of transmission units expressing the ratio of any two powers is therefore ten times the common logarithm of that ratio. This method of designating the gain or loss of power in telephone circuits permits direct addition or subtraction of the units expressing the efficiency of different parts of the circuit ...

In 1954, J. W. Horton argued that the use of the decibel as a unit for quantities other than transmission loss led to confusion, and suggested the name logit for "standard magnitudes which combine by multiplication", to contrast with the name unit for "standard magnitudes which combine by addition".[12][clarification needed]

In April 2003, the International Committee for Weights and Measures (CIPM) considered a recommendation for the inclusion of the decibel in the International System of Units (SI), but decided against the proposal.[13] However, the decibel is recognized by other international bodies such as the International Electrotechnical Commission (IEC) and the International Organization for Standardization (ISO).[14] The IEC permits the use of the decibel with root-power quantities as well as power, and this recommendation is followed by many national standards bodies, such as NIST, which justifies the use of the decibel for voltage ratios.[15] In spite of their widespread use, suffixes (such as in dBA or dBV) are not recognized by the IEC or ISO.

The IEC Standard 60027-3:2002 defines the following quantities. The decibel (dB) is one-tenth of a bel: 1 dB = 0.1 B. The bel (B) is (1/2) ln(10) nepers: 1 B = (1/2) ln(10) Np. The neper is the change in the level of a root-power quantity when the root-power quantity changes by a factor of e, that is 1 Np = ln(e) = 1, thereby relating all of the units as nondimensional natural logarithms of root-power-quantity ratios: 1 dB = 0.11513... Np. Finally, the level of a quantity is the logarithm of the ratio of the value of that quantity to a reference value of the same kind of quantity.

Therefore, the bel represents the logarithm of a ratio between two power quantities of 10:1, or the logarithm of a ratio between two root-power quantities of √10:1.[16] Two signals whose levels differ by one decibel have a power ratio of 10^(1/10), which is approximately 1.25893, and an amplitude (root-power quantity) ratio of 10^(1/20) (approximately 1.12202).[1][2]

The bel is rarely used either without a prefix or with SI unit prefixes other than deci; it is customary, for example, to use hundredths of a decibel rather than millibels. Thus, five one-thousandths of a bel would normally be written 0.05 dB, and not 5 mB.[17]

The method of expressing a ratio as a level in decibels depends on whether the measured property is a power quantity or a root-power quantity; see Power, root-power, and field quantities for details. When referring to measurements of power quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to the reference value. Thus, the ratio of P (measured power) to P0 (reference power) is represented by LP, that ratio expressed in decibels,[18] which is calculated using the formula:[19]

LP = 10 log10(P/P0) dB

The base-10 logarithm of the ratio of the two power quantities is the number of bels. The number of decibels is ten times the number of bels (equivalently, a decibel is one-tenth of a bel). P and P0 must measure the same type of quantity, and have the same units before calculating the ratio. If P = P0 in the above equation, then LP = 0. If P is greater than P0 then LP is positive; if P is less than P0 then LP is negative.

Rearranging the above equation gives the following formula for P in terms of P0 and LP:

P = P0 · 10^(LP/10)

When referring to measurements of root-power quantities, it is usual to consider the ratio of the squares of F (measured) and F0 (reference).
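The power-level formula, its inverse, and the decibel-to-neper conversion can be sketched as follows (an illustrative sketch; helper names are my own, and the 0.11513 Np-per-dB factor is ln(10)/20):

```python
import math

def power_level_db(p: float, p0: float) -> float:
    """Level of power p relative to reference p0, in decibels: 10*log10(P/P0)."""
    return 10 * math.log10(p / p0)

def power_from_level(level_db: float, p0: float) -> float:
    """Invert the level formula: P = P0 * 10**(L/10)."""
    return p0 * 10 ** (level_db / 10)

def db_to_neper(level_db: float) -> float:
    """1 dB = ln(10)/20 Np ≈ 0.11513 Np."""
    return level_db * math.log(10) / 20
```

For example, a power equal to its reference is at 0 dB, ten times the reference is at +10 dB, and a 30 dB level over a 1 mW reference recovers 1 W.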
This is because the definitions were originally formulated to give the same value for relative ratios for both power and root-power quantities. Thus, the following definition is used:

LF = 20 log10(F/F0) dB

The formula may be rearranged to give

F = F0 · 10^(LF/20)

Similarly, in electrical circuits, dissipated power is typically proportional to the square of voltage or current when the impedance is constant. Taking voltage as an example, this leads to the equation for power gain level LG:

LG = 20 log10(Vout/Vin) dB

where Vout is the root-mean-square (rms) output voltage, and Vin is the rms input voltage. A similar formula holds for current.

The term root-power quantity is introduced by ISO Standard 80000-1:2009 as a substitute for field quantity. The term field quantity is deprecated by that standard, and root-power is used throughout this article. Although power and root-power quantities are different quantities, their respective levels are historically measured in the same units, typically decibels. A factor of 2 is introduced to make changes in the respective levels match under restricted conditions, such as when the medium is linear and the same waveform is under consideration with changes in amplitude, or the medium impedance is linear and independent of both frequency and time. This relies on the relationship

P/P0 = (F/F0)^2

holding.[20] In a nonlinear system, this relationship does not hold by the definition of linearity. However, even in a linear system in which the power quantity is the product of two linearly related quantities (e.g. voltage and current), if the impedance is frequency- or time-dependent, this relationship does not hold in general, for example if the energy spectrum of the waveform changes. For differences in level, the required relationship is relaxed from that above to one of proportionality (i.e., the reference quantities P0 and F0 need not be related), or equivalently,

P2/P1 = (F2/F1)^2

must hold to allow the power level difference to be equal to the root-power level difference from power P1 and F1 to P2 and F2.
An example might be anamplifierwith unity voltage gain independent of load and frequency driving a load with a frequency-dependent impedance: the relative voltage gain of the amplifier is always 0 dB, but the power gain depends on the changing spectral composition of the waveform being amplified. Frequency-dependent impedances may be analyzed by considering the quantitiespower spectral densityand the associated root-power quantities via theFourier transform, which allows elimination of the frequency dependence in the analysis by analyzing the system at each frequency independently. Since logarithm differences measured in these units often represent power ratios and root-power ratios, values for both are shown below. The bel is traditionally used as a unit of logarithmic power ratio, while the neper is used for logarithmic root-power (amplitude) ratio. The unit dBW is often used to denote a ratio for which the reference is 1 W, and similarlydBmfor a1 mWreference point. (31.62 V / 1 V)2≈ 1 kW / 1 W, illustrating the consequence from the definitions above thatLGhas the same value, 30 dB, regardless of whether it is obtained from powers or from amplitudes, provided that in the specific system being considered power ratios are equal to amplitude ratios squared. A change in power ratio by a factor of 10 corresponds to a change in level of10 dB. A change in power ratio by a factor of 2 or⁠1/2⁠is approximately achange of 3 dB. More precisely, the change is ±3.0103dB, but this is almost universally rounded to 3 dB in technical writing.[citation needed]This implies an increase in voltage by a factor of√2≈1.4142. Likewise, a doubling or halving of the voltage, corresponding to a quadrupling or quartering of the power, is commonly described as 6 dB rather than ±6.0206dB. Should it be necessary to make the distinction, the number of decibels is written with additionalsignificant figures. 
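The ±3.0103 dB and ±6.0206 dB figures mentioned above can be checked directly (an illustrative sketch):

```python
import math

# Exact level change for a power ratio of 2: 10*log10(2) ≈ 3.0103 dB.
three_db_exact = 10 * math.log10(2)

# Exact level change for a power ratio of 4 (voltage ratio of 2): twice that.
six_db_exact = 10 * math.log10(4)

# Voltage ratio implied by a power doubling: sqrt(2) ≈ 1.4142.
voltage_for_power_doubling = math.sqrt(2)
```

Rounding 3.0103 dB to "3 dB" is what makes the familiar rules of thumb slightly inexact.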
3.000 dB corresponds to a power ratio of 10^(3/10), or 1.9953, about 0.24% different from exactly 2, and a voltage ratio of 1.4125, about 0.12% different from exactly √2. Similarly, an increase of 6.000 dB corresponds to a power ratio of 10^(6/10) ≈ 3.9811, about 0.5% different from 4.

The decibel is useful for representing large ratios and for simplifying representation of multiplicative effects, such as attenuation from multiple sources along a signal chain. Its application in systems with additive effects is less intuitive, such as in the combined sound pressure level of two machines operating together. Care is also necessary with decibels directly in fractions and with the units of multiplicative operations.

The logarithmic scale nature of the decibel means that a very large range of ratios can be represented by a convenient number, in a manner similar to scientific notation. This allows one to clearly visualize huge changes of some quantity. See Bode plot and Semi-log plot. For example, 120 dB SPL may be clearer than "a trillion times more intense than the threshold of hearing".[citation needed]

Level values in decibels can be added instead of multiplying the underlying power values, which means that the overall gain of a multi-component system, such as a series of amplifier stages, can be calculated by summing the gains in decibels of the individual components, rather than multiplying the amplification factors; that is, log(A×B×C) = log(A) + log(B) + log(C). Practically, this means that, armed only with the knowledge that 1 dB is a power gain of approximately 26%, 3 dB is approximately a 2× power gain, and 10 dB is a 10× power gain, it is possible to determine the power ratio of a system from the gain in dB with only simple addition and multiplication.
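That additivity can be sketched for a chain of stages (an illustrative sketch with hypothetical gain values):

```python
import math

# Gains of three cascaded stages, in dB (hypothetical values).
stage_gains_db = [10.0, 3.0, 1.0]

# The overall gain is the sum of the per-stage decibel values...
total_db = sum(stage_gains_db)

# ...which equals the product of the corresponding linear power ratios:
# 10 dB is 10x, 3 dB ≈ 2x, 1 dB ≈ 1.26x, so roughly 10 * 2 * 1.26 ≈ 25x.
total_ratio = math.prod(10 ** (g / 10) for g in stage_gains_db)
```

Summing 14 dB and converting once gives the same result as converting each stage and multiplying.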
However, according to its critics, the decibel creates confusion, obscures reasoning, is more related to the era of slide rules than to modern digital processing, and is cumbersome and difficult to interpret.[21][22] Quantities in decibels are not necessarily additive,[23][24] thus being "of unacceptable form for use in dimensional analysis".[25] Thus, units require special care in decibel operations. Take, for example, the carrier-to-noise-density ratio C/N0 (in hertz), involving carrier power C (in watts) and noise power spectral density N0 (in W/Hz). Expressed in decibels, this ratio would be a subtraction: (C/N0) dB = C dB − N0 dB. However, the linear-scale units still simplify in the implied fraction, so that the results would be expressed in dB-Hz.

According to Mitschke,[26] "The advantage of using a logarithmic measure is that in a transmission chain, there are many elements concatenated, and each has its own gain or attenuation. To obtain the total, addition of decibel values is much more convenient than multiplication of the individual factors." However, for the same reason that humans excel at additive operation over multiplication, decibels are awkward in inherently additive operations:[27] if two machines each individually produce a sound pressure level of, say, 90 dB at a certain point, then when both are operating together we should expect the combined sound pressure level to increase to 93 dB, but certainly not to 180 dB!; suppose that the noise from a machine is measured (including the contribution of background noise) and found to be 87 dBA, but when the machine is switched off the background noise alone is measured as 83 dBA. [...] the machine noise [level (alone)] may be obtained by 'subtracting' the 83 dBA background noise from the combined level of 87 dBA; i.e., 84.8 dBA; in order to find a representative value of the sound level in a room, a number of measurements are taken at different positions within the room, and an average value is calculated. [...]
Compare the logarithmic and arithmetic averages of [...] 70 dB and 90 dB: logarithmic average = 87 dB; arithmetic average = 80 dB.

Addition on a logarithmic scale is called logarithmic addition, and can be defined by taking exponentials to convert to a linear scale, adding there, and then taking logarithms to return: operations on decibels are logarithmic addition/subtraction and logarithmic multiplication/division, while operations on the linear scale are the usual operations. The logarithmic mean is obtained from the logarithmic sum by subtracting 10 log10(2), since logarithmic division is linear subtraction.

Attenuation constants, in topics such as optical fiber communication and radio propagation path loss, are often expressed as a fraction or ratio to distance of transmission. In this case, dB/m represents decibel per meter, and dB/mi represents decibel per mile, for example. These quantities are to be manipulated obeying the rules of dimensional analysis; e.g., a 100-meter run with a 3.5 dB/km fiber yields a loss of 0.35 dB = 3.5 dB/km × 0.1 km.

The human perception of the intensity of sound and light more nearly approximates the logarithm of intensity than a linear relationship (see Weber–Fechner law), making the dB scale a useful measure.[28][29][30][31][32][33]

The decibel is commonly used in acoustics as a unit of sound power level or sound pressure level. The reference pressure for sound in air is set at the typical threshold of perception of an average human, and there are common comparisons used to illustrate different levels of sound pressure.
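The machine-noise examples above can be reproduced with small logarithmic-addition helpers (an illustrative sketch; function names are my own):

```python
import math

def db_sum(*levels_db: float) -> float:
    """Logarithmic addition: convert to linear power, add, convert back."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

def db_subtract(total_db: float, part_db: float) -> float:
    """Logarithmic subtraction, e.g. removing background noise from a combined level."""
    return 10 * math.log10(10 ** (total_db / 10) - 10 ** (part_db / 10))

def db_mean(*levels_db: float) -> float:
    """Logarithmic mean: the logarithmic sum minus 10*log10(n)."""
    return db_sum(*levels_db) - 10 * math.log10(len(levels_db))

# Two 90 dB machines together: ~93 dB, not 180 dB.
# 87 dBA combined minus 83 dBA background: ~84.8 dBA for the machine alone.
# Logarithmic average of 70 dB and 90 dB: ~87 dB (arithmetic average: 80 dB).
```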
As sound pressure is a root-power quantity, the appropriate version of the unit definition is used:

Lp = 20 log10(prms/pref) dB

where prms is the root mean square of the measured sound pressure and pref is the standard reference sound pressure of 20 micropascals in air or 1 micropascal in water.[34] Use of the decibel in underwater acoustics leads to confusion, in part because of this difference in reference value.[35][36]

Sound intensity is proportional to the square of sound pressure. Therefore, the sound intensity level can also be defined as:

LI = 10 log10(I/Iref) dB

The human ear has a large dynamic range in sound reception. The ratio of the sound intensity that causes permanent damage during short exposure to that of the quietest sound that the ear can hear is equal to or greater than 1 trillion (10^12).[37] Such large measurement ranges are conveniently expressed in logarithmic scale: the base-10 logarithm of 10^12 is 12, which is expressed as a sound intensity level of 120 dB re 1 pW/m2. The reference values of I and p in air have been chosen such that this corresponds approximately to a sound pressure level of 120 dB re 20 μPa.

Since the human ear is not equally sensitive to all sound frequencies, the acoustic power spectrum is modified by frequency weighting (A-weighting being the most common standard) to get the weighted acoustic power before converting to a sound level or noise level in decibels.[38]

The decibel is used in telephony and audio. Similarly to the use in acoustics, a frequency-weighted power is often used. For audio noise measurements in electrical circuits, the weightings are called psophometric weightings.[39]

In electronics, the decibel is often used to express power or amplitude ratios (as for gains) in preference to arithmetic ratios or percentages. One advantage is that the total decibel gain of a series of components (such as amplifiers and attenuators) can be calculated simply by summing the decibel gains of the individual components.
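The sound pressure level computation described earlier in this section, with the 20 μPa air reference, can be sketched numerically (an illustrative sketch; the function name is my own):

```python
import math

P_REF_AIR = 20e-6  # reference sound pressure in air: 20 micropascals

def sound_pressure_level(p_rms: float, p_ref: float = P_REF_AIR) -> float:
    """Sound pressure is a root-power quantity, so the factor is 20, not 10."""
    return 20 * math.log10(p_rms / p_ref)

# The reference pressure itself sits at 0 dB SPL;
# 1 Pa rms in air works out to roughly 94 dB SPL.
```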
Similarly, in telecommunications, decibels denote signal gain or loss from a transmitter to a receiver through some medium (free space,waveguide,coaxial cable,fiber optics, etc.) using alink budget. The decibel unit can also be combined with a reference level, often indicated via a suffix, to create an absolute unit of electric power. For example, it can be combined withmformilliwattto produce thedBm. A power level of 0 dBm corresponds to one milliwatt, and 1 dBm is one decibel greater (about 1.259 mW). In professional audio specifications, a popular unit is thedBu. This is relative to the root mean square voltage which delivers 1 mW (0 dBm) into a 600-ohm resistor, or√1 mW × 600 Ω≈ 0.775 VRMS. When used in a 600-ohm circuit (historically, the standard reference impedance in telephone circuits), dBu and dBm areidentical. In anoptical link, if a known amount ofopticalpower, indBm(referenced to 1 mW), is launched into a fiber, and the losses, in dB (decibels), of each component (e.g., connectors, splices, and lengths of fiber) are known, the overall link loss may be quickly calculated by addition and subtraction of decibel quantities.[40] In spectrometry and optics, theblocking unitused to measureoptical densityis equivalent to −1 B. 
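The dBm and dBu references described above can be sketched as simple conversions (an illustrative sketch; helper names are my own):

```python
import math

def watts_to_dbm(p_watts: float) -> float:
    """Power level relative to 1 mW, in dBm."""
    return 10 * math.log10(p_watts / 1e-3)

def volts_to_dbu(v_rms: float) -> float:
    """Voltage level relative to sqrt(1 mW * 600 ohm) ≈ 0.775 V rms, in dBu."""
    v_ref = math.sqrt(1e-3 * 600)
    return 20 * math.log10(v_rms / v_ref)

# 1 mW -> 0 dBm; 1 W -> 30 dBm; 0.775 V rms -> 0 dBu.
# Into a 600-ohm load, the dBu and dBm figures coincide.
```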
In connection with video and digitalimage sensors, decibels generally represent ratios of video voltages or digitized light intensities, using 20 log of the ratio, even when the represented intensity (optical power) is directly proportional to the voltage generated by the sensor, not to its square, as in aCCD imagerwhere response voltage is linear in intensity.[41]Thus, a camerasignal-to-noise ratioor dynamic range quoted as 40 dB represents a ratio of 100:1 between optical signal intensity and optical-equivalent dark-noise intensity, not a 10,000:1 intensity (power) ratio as 40 dB might suggest.[42]Sometimes the 20 log ratio definition is applied to electron counts or photon counts directly, which are proportional to sensor signal amplitude without the need to consider whether the voltage response to intensity is linear.[43] However, as mentioned above, the 10 log intensity convention prevails more generally in physical optics, including fiber optics, so the terminology can become murky between the conventions of digital photographic technology and physics. Most commonly, quantities calleddynamic rangeorsignal-to-noise(of the camera) would be specified in20 log dB, but in related contexts (e.g. attenuation, gain, intensifier SNR, or rejection ratio) the term should be interpreted cautiously, as confusion of the two units can result in very large misunderstandings of the value. Photographers typically use an alternative base-2 log unit, thestop, to describe light intensity ratios or dynamic range. Suffixes are commonly attached to the basic dB unit in order to indicate the reference value by which the ratio is calculated. For example, dBm indicates power measurement relative to 1 milliwatt. In cases where the unit value of the reference is stated, the decibel value is known as "absolute". If the unit value of the reference is not explicitly stated, as in the dB gain of an amplifier, then the decibel value is considered relative. 
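The gap between the two conventions for a quoted 40 dB camera figure can be made concrete (an illustrative sketch):

```python
# A camera SNR or dynamic range quoted as 40 dB uses the 20-log convention,
# so it denotes a 100:1 ratio of sensor signal amplitude (and hence of the
# directly proportional optical intensity):
amplitude_ratio = 10 ** (40 / 20)  # 100.0

# Read naively under the 10-log power convention common in physical optics,
# the same 40 dB would suggest a 10000:1 intensity ratio:
power_ratio = 10 ** (40 / 10)      # 10000.0
```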
This form of attaching suffixes to dB is widespread in practice, albeit being against the rules promulgated by standards bodies (ISO and IEC),[15] given the "unacceptability of attaching information to units"[a] and the "unacceptability of mixing information with units".[b] The IEC 60027-3 standard recommends the following format:[14] Lx (re xref) or Lx/xref, where x is the quantity symbol and xref is the value of the reference quantity, e.g., LE (re 1 μV/m) = 20 dB or LE/(1 μV/m) = 20 dB for the electric field strength E relative to a 1 μV/m reference value. If the measurement result 20 dB is presented separately, it can be specified using the information in parentheses, which is then part of the surrounding text and not a part of the unit: 20 dB (re 1 μV/m) or 20 dB (1 μV/m).

Outside of documents adhering to SI units, the practice is very common, as illustrated by the following examples. There is no general rule, with various discipline-specific practices. Sometimes the suffix is a unit symbol ("W", "K", "m"), sometimes it is a transliteration of a unit symbol ("uV" instead of μV for microvolt), sometimes it is an acronym for the unit's name ("sm" for square meter, "m" for milliwatt), other times it is a mnemonic for the type of quantity being calculated ("i" for antenna gain with respect to an isotropic antenna, "λ" for anything normalized by the EM wavelength), or otherwise a general attribute or identifier about the nature of the quantity ("A" for A-weighted sound pressure level). The suffix is often connected with a hyphen, as in "dB‑Hz", or with a space, as in "dB HL", or enclosed in parentheses, as in "dB(HL)", or with no intervening character, as in "dBm" (which is non-compliant with international standards). Since the decibel is defined with respect to power, not amplitude, conversions of voltage ratios to decibels must square the amplitude, or use the factor of 20 instead of 10, as discussed above.
Probably the most common usage of "decibels" in reference to sound level is dB SPL, sound pressure level referenced to the nominal threshold of human hearing.[49] The measures of pressure (a root-power quantity) use the factor of 20, and the measures of power (e.g. dB SIL and dB SWL) use the factor of 10. See also dBV and dBu above.
https://en.wikipedia.org/wiki/Decibel
The Voynich manuscript is an illustrated codex, hand-written in an unknown script referred to as Voynichese.[18] The vellum on which it is written has been carbon-dated to the early 15th century (1404–1438). Stylistic analysis has indicated the manuscript may have been composed in Italy during the Italian Renaissance.[1][2] The origins, authorship, and purpose of the manuscript are still debated, but scholars currently lack the translation(s) and context needed either to properly entertain or to eliminate any of the possibilities. Hypotheses range from a script for a natural language or constructed language, to an unread code, cypher, or other form of cryptography, to a hoax, a reference work (i.e. a folkloric index or compendium), glossolalia,[19] or a work of fiction (e.g. science fantasy or mythopoeia, metafiction, speculative fiction).

The first confirmed owner was Georg Baresch, a 17th-century alchemist from Prague.[9][20][21] The manuscript is named after Wilfrid Voynich, a Polish book dealer who purchased it in 1912.[22] The manuscript consists of around 240 pages, but there is evidence that pages are missing. The text is written from left to right, and some pages are foldable sheets of varying sizes. Most of the pages have fantastical illustrations and diagrams, some crudely coloured, with sections of the manuscript showing people, unidentified plants and astrological symbols.
Since 1969, it has been held inYale University'sBeinecke Rare Book and Manuscript Library.[23][12][24]In 2020, Yale University published the manuscript online in its entirety in theirdigital library.[25] The Voynich manuscript has been studied by both professional and amateurcryptographers, including American and Britishcodebreakersfrom bothWorld War IandWorld War II.[26]CodebreakersPrescott Currier,William Friedman,Elizebeth Friedman, andJohn Tiltmanwere unsuccessful.[27] The manuscript has never been demonstrably deciphered, and none of the proposed hypotheses have been independently verified.[28]The mystery of its meaning and origin has excited speculation and provoked study. Thecodicology, or physical characteristics of the manuscript, has been studied by researchers. The manuscript measures 23.5 by 16.2 by 5 cm (9.3 by 6.4 by 2.0 in), with hundreds ofvellumpages collected into 18quires. The total number of pages is around 240, but the exact number depends on how the manuscript's unusual foldouts are counted.[12]The quires have been numbered from 1 to 20 in various locations, using a style of numerals consistent with those used in the 15th century, and the top righthand corner of eachrecto(righthand) page has been numbered from 1 to 116, using a style of numerals that originated at a later date. From the various numbering gaps in the quires and pages, it seems likely that in the past, the manuscript had at least 272 pages in 20 quires, some of which were already missing when Wilfrid Voynich acquired the manuscript in 1912. There is strong evidence that many of the book'sbifolioswere reordered at various points in the book's history, and that its pages were originally in a different order than the order they are in today.[13][10] Samples from various parts of the manuscript wereradiocarbon datedat theUniversity of Arizonain 2009. 
The results were consistent for all samples tested and indicated a date for the parchment between 1404 and 1438.[29]Protein testing in 2014 revealed that the parchment was made from calfskin, and multispectral analysis showed that it had not been written on before the manuscript was created (i.e., it is not apalimpsest). The quality of the parchment is average and has deficiencies, such as holes and tears, common in parchment codices, but was also prepared with so much care that the skin side is largely indistinguishable from the flesh side.[29]The parchment is prepared from "at least fourteen or fifteen entire calfskins".[30] Somefolios(such as 42 and 47) are thicker than the usual parchment.[31] The goat skin binding and covers[32]are not original to the book, but date to its possession by theCollegio Romano.[12]Insect holes are present on the first and last folios of the manuscript in the current order and suggest that a wooden cover was present before the later covers. Discolouring on the edges points to a tanned leather inside cover.[29] Many pages contain substantial drawings or charts which are coloured with paint. Based on modern analysis usingpolarized light microscopy(PLM), it has been determined that aquillpen andiron gall inkwere used for the text and figure outlines. The ink of the drawings, text, and page and quire numbers have similar microscopic characteristics. In 2009,energy-dispersive X-ray spectroscopy(EDS) revealed that the inks contained major amounts of carbon, iron,sulphur,potassiumand calcium withtrace amountsof copper and occasionally zinc. EDS did not show the presence of lead, whileX-ray diffraction(XRD) identified potassiumlead oxide, potassium hydrogen sulphate, andsyngenitein one of the samples tested. The similarity between the drawing inks and text inks suggested a contemporaneous origin.[13] Coloured paint was applied (somewhat crudely) to the ink-outlined figures, possibly at a later date. 
The blue, white, red-brown, and green paints of the manuscript have been analysed using PLM, XRD, EDS, andscanning electron microscopy(SEM). The pigments used were deemed inexpensive.[29] Computer scientistJorge Stolfiof theUniversity of Campinashighlighted that parts of the text and drawings have been modified, using darker ink over a fainter, earlier script. Evidence for this is visible in various folios, for examplef1r,f3v,f26v,f57v,f67r2,f71r,f72v1,f72v3andf73r.[33] Every page in the manuscript contains text, mostly in an unidentified language, but some have extraneous writing inLatin script. The bulk of the text in the 240-page manuscript is written in an unknown script, running left to right. Most of the characters are composed of one or two simple pen strokes. There exists some dispute as to whether certain characters are distinct, but a script of 20–25 characters would account for virtually all of the text; the exceptions are a few dozen rarer characters that occur only once or twice each. There is no obviouspunctuation.[4] Much of the text is written in a single column in the body of a page, with a slightly ragged right margin and paragraph divisions and sometimes with stars in the left margin.[12]Other text occurs in charts or as labels associated with illustrations. Theductusflows smoothly, giving the impression that the symbols were notenciphered; there is no delay between characters, as would normally be expected in written encoded text. Only a few of the words in the manuscript are thought to have not been written in the unknown script:[17] Various transcription alphabets have been created to encode Voynich characters as Latin characters, to help with cryptanalysis,[37]such as the Extensible (originally: European) Voynich Alphabet (EVA).[38]The first major one was created by the "First Study Group", led by cryptographerWilliam F. 
Friedmanin the 1940s, where each line of the manuscript was transcribed to an IBMpunch cardto make itmachine readable.[39][40] The text consists of over 170,000 characters,[14]with spaces dividing the text into about 35,000 groups of varying length, usually referred to as "words" or "word tokens" (37,919); 8,114 of those wordsare considered unique"word types".[41]The structure of these words seems to followphonologicalororthographiclaws of some sort; for example, certain characters must appear in each word (like Englishvowels), some characters never follow others, or some may be doubled or tripled, but others may not. The distribution of letters within words is also rather peculiar: Some characters occur only at the beginning of a word, some only at the end (like Greekς), and some always in the middle section.[42] Many researchers have commented upon the highly regular structure of the words.[43]Professor Gonzalo Rubio, an expert in ancient languages atPennsylvania State University, stated: The things we know asgrammatical markers– things that occur commonly at the beginning or end of words, such as 's' or 'd' in our language, and that are used to express grammar, never appear in the middle of 'words' in the Voynich manuscript. 
That's unheard of for any Indo-European, Hungarian, or Finnish language.[44] Stephan Vonfelt studied statistical properties of the distribution of letters and their correlations (properties which can be vaguely characterised as rhythmic resonance, alliteration, or assonance) and found that under that respect Voynichese is more similar to theMandarin Chinesepinyintext of theRecords of the Grand Historianthan to the text of works from European languages, although the numerical differences between Voynichese and Mandarin Chinese pinyin look larger than those between Mandarin Chinese pinyin and European languages.[45][better source needed] Practically no words have fewer than two letters or more than ten.[14]Some words occur in only certain sections, or in only a few pages; others occur throughout the manuscript. Few repetitions occur among the thousand or so labels attached to the illustrations. There are instances where the same common word appears up to five times in a row[14](seeZipf's law). Words that differ by only one letter also repeat with unusual frequency, causing single-substitution alphabet decipherings to yield babble-like text. In 1962,cryptanalystElizebeth Friedmandescribed such statistical analyses as "doomed to utter frustration".[46] In 2014, a team led by Diego Amancio of theUniversity of São Paulopublished a study using statistical methods to analyse the relationships of the words in the text. Instead of trying to find the meaning, Amancio's team looked for connections and clusters of words. By measuring the frequency and intermittence of words, Amancio claimed to identify the text'skeywordsand produced three-dimensional models of the text's structure and word frequencies. 
The team concluded that, in 90% of cases, the Voynich systems are similar to those of other known books, indicating that the text is in an actual language, not randomgibberish.[47] The use of the framework was exemplified with the analysis of the Voynich manuscript, with the final conclusion that it differs from a random sequence of words, being compatible with natural languages. Even though our approach is not aimed at deciphering Voynich, it was capable of providing keywords that could be helpful for decipherers in the future.[47] LinguistsClaire Bowernand Luke Lindemann have applied statistical methods to the Voynich manuscript, comparing it to other languages and encodings of languages, and have found both similarities and differences in statistical properties. Character sequences in languages are measured using a metric called h2, or second-order conditional entropy. Natural languages tend to have an h2 between 3 and 4, but Voynichese has much more predictable character sequences, and an h2 around 2. However, at higher levels of organisation, the Voynich manuscript displays properties similar to those of natural languages. Based on this, Bowern dismisses theories that the manuscript is gibberish.[48]It is likely to be an encoded natural language or a constructed language. Bowern also concludes that the statistical properties of the Voynich manuscript are not consistent with the use of asubstitution cipherorpolyalphabetic cipher.[49] As noted in Bowern's review, multiple scribes or "hands" may have written the manuscript, possibly using two methods of encoding at least one natural language.[49][50][51][52]The "language" Voynich A appears in the herbal and pharmaceutical parts of the manuscript. The "language" known as Voynich B appears in thebalneologicalsection, some parts of the medicinal and herbal sections, and the astrological section. 
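The h2 statistic discussed above — the conditional entropy of a character given the preceding character — can be sketched for any text sample. This is an illustrative implementation, not the code used in the cited studies:

```python
import math
from collections import Counter

def h2(text: str) -> float:
    """Second-order conditional entropy H(X_n | X_{n-1}), in bits per character."""
    pairs = Counter(zip(text, text[1:]))  # bigram counts
    singles = Counter(text[:-1])          # counts of the conditioning character
    n = sum(pairs.values())
    total = 0.0
    for (a, b), count in pairs.items():
        p_pair = count / n            # P(a, b)
        p_cond = count / singles[a]   # P(b | a)
        total -= p_pair * math.log2(p_cond)
    return total

# A fully predictable alternation like "ababab..." has h2 = 0.
# Voynichese sits near 2 bits per character, while natural languages
# typically fall between 3 and 4.
```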
The most common vocabulary items of Voynich A and Voynich B are substantially different. Topic modeling of the manuscript suggests that pages identified as written by a particular scribe may relate to a different topic.[49]

In terms of morphology, if visual spaces in the manuscript are assumed to indicate word breaks, there are consistent patterns that suggest a three-part word structure of prefix, root or midfix, and suffix. Certain characters and character combinations are more likely to appear in particular fields. There are minor variations between Voynich A and Voynich B. The predictability of certain letters in a relatively small number of combinations in certain parts of words appears to explain the low entropy (h2) of Voynichese. In the absence of obvious punctuation, some variants of the same word appear to be specific to typographical positions, such as the beginning of a paragraph, line, or sentence.[49]

The Voynich word frequencies of both variants appear to conform to a Zipfian distribution, supporting the idea that the text has linguistic meaning. This has implications for the encoding methods most likely to have been used, since some forms of encoding interfere with the Zipfian distribution. Measures of the proportional frequency of the ten most common words are similar to those of the Semitic, Iranian, and Germanic languages. Another measure of morphological complexity, the Moving-Average Type–Token Ratio (MATTR) index, is similar to that of the Iranian, Germanic, and Romance languages.[49]

Because the text cannot be read, the manuscript is conventionally divided into sections based on its illustrations. Most of the manuscript forms six different sections, each typified by illustrations with different styles and supposed subject matter,[14] except for the last section, in which the only drawings are small stars in the margin.
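The MATTR index mentioned above averages the type-token ratio (distinct word types divided by total tokens) over a sliding window, which makes texts of different lengths comparable. A minimal sketch, with the window size as a free parameter (the published study's settings may differ):

```python
def mattr(tokens, window=50):
    """Moving-Average Type-Token Ratio: mean proportion of distinct
    word types within each sliding window of fixed length."""
    if len(tokens) <= window:
        return len(set(tokens)) / len(tokens)
    starts = range(len(tokens) - window + 1)
    return sum(len(set(tokens[i:i + window])) / window
               for i in starts) / len(starts)

# All-identical tokens give the minimum ratio (1/window, here ~0.1);
# all-distinct tokens give 1.0.
print(mattr(["daiin"] * 100, window=10))
print(mattr([str(i) for i in range(100)], window=10))  # 1.0
```

A repetitive text like Voynichese drags the ratio down; morphologically rich languages push it up.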
The conventional sections are:

Five folios contain only text, and at least 14 folios (28 pages) are missing from the manuscript.[53]

The overall impression given by the surviving leaves of the manuscript is that it was meant to serve as a pharmacopoeia or to address topics in medieval or early modern medicine. However, the puzzling details of the illustrations have fuelled many theories about the book's origin, the contents of its text, and the purpose for which it was intended.[14]

The first section of the book is almost certainly herbal, but attempts have failed to identify the plants, either with actual specimens or with the stylised drawings of contemporaneous herbals.[56] Only a few of the plant drawings can be identified with reasonable certainty, such as a wild pansy and the maidenhair fern. The herbal pictures that match pharmacological sketches appear to be clean copies of them, except that missing parts were completed with improbable details. In fact, many of the plant drawings in the herbal section seem to be composite: the roots of one species have been fastened to the leaves of another, with flowers from a third.[56]

Astrological considerations frequently played a prominent role in herb gathering, bloodletting, and other medical procedures common during the likeliest dates of the manuscript. However, interpretation remains speculative, apart from the obvious Zodiac symbols and one diagram possibly showing the classical planets.[14]

Much of the book's early provenance is unknown,[57] though the text and illustrations are all characteristically European. In 2009, University of Arizona researchers radiocarbon dated the manuscript's vellum to between 1404 and 1438.[2][58][59] In addition, McCrone Associates in Westmont, Illinois, found that the paints in the manuscript were of materials to be expected from that period of European history.
There have been erroneous reports that McCrone Associates indicated that much of the ink was added not long after the creation of the parchment, but their official report contains no such statement.[13]

The first confirmed owner was Georg Baresch, a 17th-century alchemist from Prague. Baresch was apparently puzzled about this "Sphynx" that had been "taking up space uselessly in his library" for many years.[9] He learned that Jesuit scholar Athanasius Kircher from the Collegio Romano had published a Coptic (Egyptian) dictionary and claimed to have deciphered the Egyptian hieroglyphs; Baresch twice sent a sample copy of the script to Kircher in Rome, asking for clues. The 1639 letter from Baresch to Kircher is the earliest confirmed mention of the manuscript.[16]

Whether Kircher answered the request or not is not known, but he was apparently interested enough to try to acquire the book, which Baresch refused to yield.[16] Upon Baresch's death, the manuscript passed to his friend Jan Marek Marci (also known as Johannes Marcus Marci), then rector of Charles University in Prague. A few years later, Marci sent the book to Kircher, his longtime friend and correspondent.[16]

Marci also sent Kircher a cover letter (in Latin, dated 19 August 1665 or 1666) that was still attached to the book when Voynich acquired it:[9][60][61][62][63][64][65]

Reverend and Distinguished Sir, Father in Christ: This book, bequeathed to me by an intimate friend, I destined for you, my very dear Athanasius, as soon as it came into my possession, for I was convinced that it could be read by no one except yourself. The former owner of this book asked your opinion by letter, copying and sending you a portion of the book from which he believed you would be able to read the remainder, but he at that time refused to send the book itself. To its deciphering he devoted unflagging toil, as is apparent from attempts of his which I send you herewith, and he relinquished hope only with his life.
But his toil was in vain, for such Sphinxes as these obey no one but their master, Kircher. Accept now this token, such as it is and long overdue though it be, of my affection for you, and burst through its bars, if there are any, with your wonted success. Dr. Raphael, a tutor in the Bohemian language to Ferdinand III, then King of Bohemia, told me the said book belonged to the Emperor Rudolf and that he presented to the bearer who brought him the book 600 ducats. He believed the author was Roger Bacon, the Englishman. On this point I suspend judgement; it is your place to define for us what view we should take thereon, to whose favor and kindness I unreservedly commit myself and remain

At the command of your Reverence, Joannes Marcus Marci of Cronland

Prague, 19th August, 1665 [or 1666]

The "Dr. Raphael" is believed to be Raphael Sobiehrd-Mnishovsky,[4] and the sum of 600 ducats is 67.5 ozt (2.10 kg) of actual gold weight. The only matching transaction in Rudolf's records is the 1599 purchase of "a couple of remarkable/rare books" from Carl Widemann for the sum of 600 florins.[66] Widemann was a prolific collector of esoteric and alchemical manuscripts, so his ownership of the manuscript is plausible, but unproven.[66]

While Wilfrid Voynich took Raphael's claims at face value, the Bacon authorship theory has been largely discredited.[17] However, a piece of evidence supporting Rudolf's ownership is the now almost invisible name or signature, on the first page of the book, of Jacobus Horcicky de Tepenecz, the head of Rudolf's botanical gardens in Prague.
Rudolf died still owing money to de Tepenecz, and it is possible that de Tepenecz may have been given the book (or simply taken it) in partial payment of that debt.[57]

No records of the book for the next 200 years have been found, but in all likelihood, it was stored with the rest of Kircher's correspondence in the library of the Collegio Romano (now the Pontifical Gregorian University).[16] It probably remained there until the troops of Victor Emmanuel II of Italy captured the city in 1870 and annexed the Papal States. The new Italian government decided to confiscate many properties of the Church, including the library of the Collegio.[16] According to investigations by Xavier Ceccaldi and others, many books of the university's library were hastily transferred to the personal libraries of its faculty just before this happened, and those books were exempt from confiscation.[16] Kircher's correspondence was among those books, and so, apparently, was the Voynich manuscript, as it still bears the ex libris of Petrus Beckx, head of the Jesuit order and the university's rector at the time.[12][16]

Beckx's private library was moved to the Villa Mondragone, Frascati, a large country palace near Rome that had been bought by the Society of Jesus in 1866 and housed the headquarters of the Jesuits' Ghislieri College.[16]

In 1903, the Society of Jesus (Collegio Romano) was short of money and decided to sell some of its holdings discreetly to the Vatican Library. The sale took place in 1912, but not all of the manuscripts listed for sale ended up going to the Vatican.[67] Wilfrid Voynich acquired 30 of these manuscripts, among them the one which now bears his name.[16] He spent the next seven years attempting to interest scholars in deciphering the script, while he worked to determine the origins of the manuscript.[4]

After Wilfrid's death in 1930, the manuscript was inherited by his widow Ethel Voynich, author of the novel The Gadfly and daughter of mathematician George Boole.
She died in 1960 and left the manuscript to her close friend Anne Nill. In 1961, Nill sold the book to antique book dealer Hans P. Kraus. Kraus was unable to find a buyer and donated the manuscript to Yale University in 1969, where it was catalogued as "MS 408",[17] sometimes also referred to as "Beinecke MS 408".[12]

The timeline of ownership of the Voynich manuscript is given below. The time when it was possibly created is shown in green (early 1400s), based on carbon dating of the vellum.[57] Periods of unknown ownership are indicated in white. The commonly accepted owners of the 17th century are shown in orange; the long period of storage in the Collegio Romano is yellow. The location where Wilfrid Voynich allegedly acquired the manuscript (Frascati) is shown in green (late 1800s); Voynich's ownership is shown in red, and modern owners are highlighted in blue.

Many people have been proposed as possible authors of the Voynich manuscript, among them Roger Bacon, John Dee or Edward Kelley, Giovanni Fontana, and Voynich himself.

Marci's 1665/1666 cover letter to Kircher says that, according to his friend the late Raphael Mnishovsky, the book had once been bought by Rudolf II, Holy Roman Emperor and King of Bohemia, for 600 ducats, 67.5 ozt (2.10 kg) of actual gold weight. (Mnishovsky had died in 1644, more than 20 years earlier, and the deal must have occurred before Rudolf's abdication in 1611, at least 55 years before Marci's letter. However, Karl Widemann sold books to Rudolf II in March 1599.)

According to the letter, Mnishovsky (but not necessarily Rudolf) speculated that the author was the 13th-century Franciscan friar and polymath Roger Bacon.[6] Marci said that he was suspending judgment about this claim, but it was taken quite seriously by Wilfrid Voynich, who did his best to confirm it.[16] Voynich contemplated the possibility that the author was Albertus Magnus, if not Roger Bacon.[68]

The assumption that Bacon was the author led Voynich to conclude that John Dee sold the manuscript to Rudolf.
Dee was a mathematician and astrologer at the court of Queen Elizabeth I of England who was known to have owned a large collection of Bacon's manuscripts. Dee and his scryer (spirit medium) Edward Kelley lived in Bohemia for several years, where they had hoped to sell their services to the emperor. However, this sale seems quite unlikely, according to John Schuster, because Dee's meticulously kept diaries do not mention it.[16]

If Bacon did not create the Voynich manuscript, a supposed connection to Dee is much weakened. It was thought possible, prior to the carbon dating of the manuscript, that Dee or Kelley might have written it and spread the rumour that it was originally a work of Bacon's in the hopes of later selling it.[69]: 249

Some suspect Voynich of having fabricated the manuscript himself.[7] As an antique book dealer, he probably had the necessary knowledge and means, and a lost book by Roger Bacon would have been worth a fortune. Furthermore, Baresch's letter and Marci's letter only establish the existence of a manuscript, not that the Voynich manuscript is the same one mentioned. These letters could possibly have been the motivation for Voynich to fabricate the manuscript, assuming that he was aware of them.
However, many consider the expert internal dating of the manuscript and the June 1999[57] discovery of Baresch's letter to Kircher to have eliminated this possibility.[7][16]

Eamon Duffy says that the radiocarbon dating of the parchment (or, more accurately, vellum) "effectively rules out any possibility that the manuscript is a post-medieval forgery", as the consistency of the pages indicates origin from a single source, and "it is inconceivable" that a quantity of unused parchment comprising "at least fourteen or fifteen entire calfskins" could have survived from the early 15th century.[30]

It has been suggested that some illustrations in the books of an Italian engineer, Giovanni Fontana, slightly resemble Voynich illustrations.[70] Fontana was familiar with cryptography and used it in his books, although he did not use the Voynich script but a simple substitution cipher. In the book Secretum de thesauro experimentorum ymaginationis hominum (Secret of the treasure-room of experiments in man's imagination), written c. 1430, Fontana described mnemonic machines, written in his cipher.[71] That book and his Bellicorum instrumentorum liber both used a cryptographic system, described as a simple, rational cipher, based on signs without letters or numbers.[72]

Sometime before 1921, Voynich was able to read a name faintly written at the foot of the manuscript's first page: "Jacobj à Tepenecz". This is taken to be a reference to Jakub Hořčický of Tepenec, also known by his Latin name Jacobus Sinapius. Rudolf II had ennobled him in 1607, had appointed him his Imperial Distiller, and had made him curator of his botanical gardens as well as one of his personal physicians. Voynich (and many other people after him) concluded that Jacobus owned the Voynich manuscript prior to Baresch, and he drew a link from that to Rudolf's court, in confirmation of Mnishovsky's story. Jacobus's name has faded further since Voynich saw it, but is still legible under ultraviolet light.
It does not match the copy of his signature in a document located by Jan Hurych in 2003.[1][8] As a result, it has been suggested that the signature was added later, possibly even fraudulently by Voynich himself.[1]

Baresch's letter bears some resemblance to a hoax that orientalist Andreas Müller once played on Athanasius Kircher. Müller sent some unintelligible text to Kircher with a note explaining that it had come from Egypt, and asking him for a translation. Kircher reportedly solved it.[73] It has been speculated that these were both cryptographic tricks played on Kircher to make him look foolish.[73]

Raphael Mnishovsky, the friend of Marci who was the reputed source of the Bacon story, was himself a cryptographer and apparently invented a cipher which he claimed was uncrackable (c. 1618).[74] This has led to the speculation that Mnishovsky might have produced the Voynich manuscript as a practical demonstration of his cipher and made Baresch his unwitting test subject. Indeed, the disclaimer in the Voynich manuscript cover letter could mean that Marci suspected some kind of deception.[74]

In his 2006 book, Nick Pelling proposed that the Voynich manuscript was written by the 15th-century North Italian architect Antonio Averlino (also known as "Filarete"), a theory broadly consistent with the radiocarbon dating.[10]

Jules Janick and Arthur O. Tucker, based on plant and animal identification and the kabbalah map of central Mexico (folio 86v), argued that it was composed in Mexico between 1562 and 1572.[75]

Many hypotheses have been developed about the Voynich manuscript's "language", called Voynichese:

According to the "letter-based cipher" theory, the Voynich manuscript contains a meaningful text in some European language that was intentionally rendered obscure by mapping it to the Voynich manuscript "alphabet" through a cipher of some sort—an algorithm that operated on individual letters.
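A letter-by-letter cipher of the kind this theory envisions can be sketched as a simple substitution over an invented glyph inventory. The mapping below is entirely made up for illustration, not a proposed Voynich alphabet:

```python
import string

# Hypothetical one-to-one mapping from Latin letters to made-up "glyph" names.
PLAIN = string.ascii_lowercase
GLYPHS = [f"g{i:02d}" for i in range(26)]  # placeholder glyph inventory
ENC = dict(zip(PLAIN, GLYPHS))
DEC = {glyph: letter for letter, glyph in ENC.items()}

def encipher(text):
    """Replace each letter with its glyph; ignore non-letters."""
    return " ".join(ENC[ch] for ch in text if ch in ENC)

def decipher(glyph_text):
    return "".join(DEC[g] for g in glyph_text.split())

ciphertext = encipher("herbal")
print(ciphertext)            # g07 g04 g17 g01 g00 g11
print(decipher(ciphertext))  # herbal
```

Because such a mapping preserves letter frequencies one-for-one, the mismatch between Voynichese letter statistics and those of known languages is precisely what rules simple substitution out, as the next passage explains.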
This was the working hypothesis for most 20th-century deciphering attempts, including those of an informal team of NSA cryptographers led by William F. Friedman in the early 1950s.[40]

The counterargument is that almost all cipher systems consistent with that era fail to match what is seen in the Voynich manuscript. For example, simple substitution ciphers would be excluded because the distribution of letter frequencies does not resemble that of any known language, while the small number of different letter shapes used implies that nomenclator and homophonic ciphers should be ruled out, because these typically employ larger cipher alphabets. Polyalphabetic ciphers were invented by Alberti in the 1460s and included the later Vigenère cipher, but they usually yield ciphertexts where all cipher shapes occur with roughly equal probability, quite unlike the language-like letter distribution which the Voynich manuscript appears to have.

However, the presence of many tightly grouped shapes in the Voynich manuscript (such as "or", "ar", "ol", "al", "an", "ain", "aiin", "air", "aiir", "am", "ee", "eee", among others) does suggest that its cipher system may make use of a "verbose cipher", where single letters in a plaintext get enciphered into groups of fake letters. For example, the first two lines of page f15v contain "oror or" and "or or oro r", which strongly resemble how Roman numerals such as "CCC" or "XXXX" would look if verbosely enciphered.[76]

In 1943, Joseph Martin Feely suggested that the manuscript might be a scientific diary, written in a private shorthand system, using abbreviations, for a language such as Latin. However, according to Mary D'Imperio, "other scholars ...
unanimously rejected" Feely's proposed readings of the text, and the hypothesis that the text was composed using a "system of abbreviated forms" was "not considered acceptable".[17]

This theory holds that the text of the Voynich manuscript is mostly meaningless, but contains meaningful information hidden in inconspicuous details—e.g., the second letter of every word, or the number of letters in each line. This technique, called steganography, is very old and was described by Johannes Trithemius in 1499. Though the plain text was speculated to have been extracted by a Cardan grille (an overlay with cut-outs for the meaningful text) of some sort, this seems somewhat unlikely because the words and letters are not arranged on anything like a regular grid. Still, steganographic claims are hard to prove or disprove, because stegotexts can be arbitrarily hard to find. It has been suggested that the meaningful text could be encoded in the length or shape of certain pen strokes.[77][78]

Statistical analysis of the text reveals patterns similar to those of natural languages.[49] For instance, the word entropy (about 10 bits per word) is similar to that of English or Latin texts.[3] Amancio et al. (2013)[47] argued that the Voynich manuscript "is mostly compatible with natural languages and incompatible with random texts".[47]

The linguist Jacques Guy once suggested that the Voynich manuscript text could be some little-known natural language, written in plaintext with an invented alphabet. He suggested Chinese in jest, but later comparison of word-length statistics with Vietnamese and Chinese made him take that hypothesis seriously.[79] In many language families of East and Central Asia, mainly Sino-Tibetan (Chinese, Tibetan, and Burmese), Austroasiatic (Vietnamese, Khmer, etc.)
and possibly Tai (Thai, Lao, etc.), morphemes generally have only one syllable.[80]

Child (1976),[81] a linguist of Indo-European languages for the U.S. National Security Agency, proposed that the manuscript was written in a "hitherto unknown North Germanic dialect".[81] He identified in the manuscript a "skeletal syntax several elements of which are reminiscent of certain Germanic languages", while the content is expressed using "a great deal of obscurity".[82]

In January 2014, Professor Stephen Bax of the University of Bedfordshire made public his research into using a "bottom up" methodology to understand the manuscript. His method involved looking for and translating proper nouns, in association with relevant illustrations, in the context of other languages of the same time period. A paper he posted online offers tentative translations of 14 characters and 10 words.[83][84][85][86] He suggested the text is a treatise on nature written in a natural language, rather than a code,[49] but no further work has been done since Bax's death in 2017.[87]

Deep learning has been proposed as a computing method for analysing the language families to which the manuscript's alphabet could be related. In 2023, researchers used deep learning algorithms to demonstrate that, out of a representative sample of seven ancient Indian scripts, the strongest resemblance was to Khojki.[88]

Tucker & Talbert (2014)[89] published a paper claiming a positive identification of 37 plants, 6 animals, and one mineral referenced in the manuscript with plant drawings in the Libellus de Medicinalibus Indorum Herbis, or Badianus manuscript, an Aztec herbal written in 1552.[89] Together with the presence of atacamite in the paint, they argue that the plants were from colonial New Spain and the text represented Nahuatl, the language of the Aztecs. They date the manuscript to between 1521 (the date of the Spanish conquest of the Aztec Empire) and circa 1576.
These dates contradict the earlier radiocarbon date of the vellum and other elements of the manuscript. However, they argued that the vellum could have been stored and used at a later date. The analysis has been criticised by other Voynich manuscript researchers,[90] who argued that a skilled forger could construct plants that coincidentally have a passing resemblance to theretofore undiscovered existing plants.[91] Nahuatl specialist M.P. Hansen has rejected their proposed readings as pure nonsense.[92]

The peculiar internal structure of Voynich manuscript words led William F. Friedman to conjecture that the text could be a constructed language. In 1950, Friedman asked the British army officer John Tiltman to analyse a few pages of the text, but Tiltman did not share this conclusion. In a paper in 1967, Brigadier Tiltman said:

After reading my report, Mr. Friedman disclosed to me his belief that the basis of the script was a very primitive form of synthetic universal language such as was developed in the form of a philosophical classification of ideas by Bishop Wilkins in 1667 and Dalgarno a little later. It was clear that the productions of these two men were much too systematic, and anything of the kind would have been almost instantly recognisable. My analysis seemed to me to reveal a cumbersome mixture of different kinds of substitution.[4]

The concept of a constructed language is quite old, as attested by John Wilkins's Philosophical Language (1668), but still postdates the generally accepted origin of the Voynich manuscript by two centuries. In most known examples, categories are subdivided by adding suffixes (fusional languages); as a consequence, a text on a particular subject would have many words with similar prefixes—for example, all plant names would begin with similar letters, and likewise for all diseases, etc. This feature could then explain the repetitive nature of the Voynich text.
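The category-based vocabulary scheme described above can be sketched as follows; the category prefixes and word shapes are entirely invented for illustration, not drawn from Wilkins or from the manuscript:

```python
# Sketch of a Wilkins-style categorical vocabulary: the first syllable
# encodes the semantic category, the rest identifies the item.
CATEGORY = {"plant": "bo", "disease": "mo", "metal": "go"}

def word_for(category, item_index):
    """Build a word whose prefix encodes its category."""
    return CATEGORY[category] + f"ki{item_index:02d}"

print(word_for("plant", 1), word_for("plant", 2), word_for("disease", 1))
# All plant names share the prefix "bo"; all disease names share "mo".
```

In such a system a herbal chapter would be full of near-identical words differing only in their final syllables, which is the repetitiveness the constructed-language theory points to.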
However, no one has yet been able to assign a plausible meaning to any prefix or suffix in the Voynich manuscript.[5]

The fact that the manuscript has defied decipherment thus far has led various scholars to propose that the text does not contain meaningful content in the first place, implying that it may be a medieval hoax. In 2003, computer scientist Gordon Rugg showed that text with characteristics similar to the Voynich manuscript could have been produced using a table of word prefixes, stems, and suffixes, which would have been selected and combined by means of a perforated paper overlay.[93][94] The latter device, known as a Cardan grille, was invented around 1550 as an encryption tool, more than 100 years after the estimated creation date of the Voynich manuscript. Some maintain that the similarity between the pseudo-texts generated in Gordon Rugg's experiments and the Voynich manuscript is superficial, and that the grille method could be used to emulate any language to a certain degree.[95]

In April 2007, a study by Austrian researcher Andreas Schinner published in Cryptologia supported the hoax hypothesis.[18] Schinner posited that the statistical properties of the manuscript's text were more consistent with meaningless gibberish produced using a quasi-stochastic method, such as the one described by Rugg, than with Latin and medieval German texts.[18]

Some scholars have claimed that the manuscript's text appears too sophisticated to be a hoax.
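Rugg's table-and-grille idea can be sketched as random composition of prefix, stem, and suffix syllables. The tables below are invented stand-ins that loosely echo common Voynichese shapes; Rugg's actual tables and the grille selection mechanism were more elaborate:

```python
import random

# Invented syllable tables (illustrative only).
PREFIXES = ["qo", "o", "ch", "d", ""]
STEMS = ["ke", "te", "ai", "ol", "she"]
SUFFIXES = ["dy", "in", "iin", "y", ""]

def pseudo_text(n_words, seed=0):
    """Generate meaningless but Voynich-flavoured words by combining
    one entry from each table, mimicking a grille sweeping over tables."""
    rng = random.Random(seed)
    return " ".join(rng.choice(PREFIXES) + rng.choice(STEMS) + rng.choice(SUFFIXES)
                    for _ in range(n_words))

print(pseudo_text(8))
```

The output is structured gibberish: every word follows the same prefix-stem-suffix template, reproducing the repetitive, low-entropy feel of Voynichese without encoding anything.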
In 2013, Marcelo Montemurro, a theoretical physicist from the University of Manchester, published findings claiming that semantic networks exist in the text of the manuscript, such as content-bearing words occurring in a clustered pattern, or new words being used when there was a shift in topic.[96] With this evidence, he believes it unlikely that these features were intentionally "incorporated" into the text to make a hoax more realistic, as most of the required academic knowledge of these structures did not exist at the time the Voynich manuscript would have been written.[97] In 2021, researchers at Yale University, using tf–idf analysis, further investigated the relation between clusters of subjects in the text and topics as they could be identified by illustrations and paleography analysis. Their conclusion is that clusters derived by computation match the topics of the illustrations to some degree, thus providing evidence that the Voynich manuscript contains meaningful text.[98]

However, other scholars have argued that such sophisticated patterns could also appear in hoaxed documents. In 2016, Gordon Rugg and Gavin Taylor published another article in Cryptologia demonstrating that the grille method could reproduce many larger-scale features of the text.[99] In 2019, Torsten Timm and Andreas Schinner published a paper arguing that the text was produced by a process of "self-citation" in which scribes copied and modified meaningless words from earlier in the text. Using a computer simulation of this process, they demonstrated that it could reproduce many of the statistical characteristics of the Voynich manuscript.[100] In 2022, Yale University researchers Daniel Gaskell and Claire Bowern published the results of an experiment in which human participants intentionally tried to write meaningless text.
They found that the resulting text was often highly non-random and exhibited many of the same unusual statistical properties as the Voynich manuscript, supporting the idea that some features of the text could have been produced in a hoax.[101]

In their 2004 book, Gerry Kennedy and Rob Churchill suggest the possibility that the Voynich manuscript may be a case of glossolalia (speaking in tongues), channelling, or outsider art.[15] If so, the author felt compelled to write large amounts of text in a manner which resembles stream of consciousness, either because of voices heard or because of an urge. In glossolalia, this often takes place in an invented language, usually made up of fragments of the author's own language, although invented scripts for this purpose are rare.

Kennedy and Churchill use Hildegard von Bingen's works to point out similarities between the Voynich manuscript and the illustrations that she drew when she was suffering from severe bouts of migraine, which can induce a trance-like state prone to glossolalia. Prominent features found in both are abundant "streams of stars" and the repetitive nature of the "nymphs" in the balneological section.[102]

The theory is controversial,[103] and it is virtually impossible to prove or disprove, short of deciphering the text. Kennedy and Churchill are themselves not convinced of the hypothesis, but consider it plausible. In the culminating chapter of their work, Kennedy states his belief that it is a hoax or forgery. Churchill acknowledges as the preeminent theory the possibility that the manuscript is either a synthetic forgotten language (as advanced by Friedman) or a forgery. However, he concludes that, if the manuscript is a genuine creation, mental illness or delusion seems to have affected the author.[15]

Since the manuscript's modern rediscovery in 1912, there have been a number of claimed decipherments.
One of the earliest efforts to decode the book was made in 1921 by William Romaine Newbold of the University of Pennsylvania. His singular hypothesis held that the visible text is meaningless, but that each apparent "letter" is in fact constructed of a series of tiny markings discernible only under magnification. These markings were supposed to be based on ancient Greek shorthand, forming a second level of script that held the real content of the writing. Newbold claimed to have used this knowledge to work out entire paragraphs proving the authorship of Bacon and recording his use of a compound microscope four hundred years before van Leeuwenhoek. A circular drawing in the astronomical section depicts an irregularly shaped object with four curved arms, which Newbold interpreted as a picture of a galaxy, which could be obtained only with a telescope.[4]

Newbold's analysis has since been dismissed as overly speculative[104] after John Matthews Manly of the University of Chicago pointed out serious flaws in his theory. For example, each shorthand character was assumed to have multiple interpretations, with no reliable way to determine which was intended in any given case. Newbold's method also required rearranging letters at will until intelligible Latin was produced. These factors alone ensure the system enough flexibility that nearly anything at all could be discerned from the microscopic markings. Although evidence of micrography using the Hebrew language can be traced as far back as the ninth century, it is nowhere near as compact or complex as the shapes Newbold made out. Close study of the manuscript revealed the markings to be artefacts caused by the way ink cracks as it dries on rough vellum. Perceiving significance in these artefacts can be attributed to pareidolia.
Thanks to Manly's thorough refutation, the micrography theory is now generally disregarded.[105]

In 1943, Joseph Martin Feely published Roger Bacon's Cipher: The Right Key Found, in which he claimed that the book was a scientific diary written by Roger Bacon. Feely's method posited that the text was a highly abbreviated medieval Latin written in a simple substitution cipher.[17]

Leonell C. Strong, a cancer research scientist and amateur cryptographer, claimed that the solution to the Voynich manuscript was a "peculiar double system of arithmetical progressions of a multiple alphabet". Strong published a translation of two pages in 1947 and claimed that the plaintext revealed the Voynich manuscript to have been written by the 16th-century English author Anthony Ascham, whose works include A Little Herbal, published in 1550. Notes released after his death reveal that the last stages of his analysis, in which he selected words to combine into phrases, were questionably subjective.[69]: 252

In 1978, Robert Brumbaugh, a professor of classical and medieval philosophy at Yale University, claimed that the manuscript was a forgery intended to fool Emperor Rudolf II into purchasing it, and that the text is Latin enciphered with a complex, two-step method.[17]

In 1978, John Stojko published Letters to God's Eye,[106] in which he claimed that the Voynich manuscript was a series of letters written in vowelless Ukrainian.[68] The theory caused some sensation among the Ukrainian diaspora at the time, and then in independent Ukraine after 1991.[107] However, the date Stojko gives for the letters, the lack of relation between the text and the images, and the general looseness of the method of decryption have all been criticised.[68]

In January 2014, applied linguistics professor Stephen Bax self-published a paper proposing a "provisional, partial decoding" of the Voynich manuscript, proposing a translation for ten proper nouns and fourteen letters from the manuscript using techniques similar to those
used to successfully translate Egyptian hieroglyphs.[108] He claimed the manuscript to be a treatise on nature, in a Near Eastern or Asian language, but no full translation was made before Bax's death in November 2017.[87]

Greg Kondrak, a professor of natural language processing at the University of Alberta, and his graduate student Bradley Hauer used computational linguistics in an attempt to decode the manuscript.[109] Their findings were presented at the Annual Meeting of the Association for Computational Linguistics in August 2017 in the form of an article suggesting that the manuscript's language is most likely Hebrew, but encoded using alphagrams, i.e. alphabetically ordered anagrams. However, the team admitted that experts in medieval manuscripts who reviewed the work were not convinced.[110][111][112]

In September 2017, television writer Nicholas Gibbs claimed to have decoded the manuscript as idiosyncratically abbreviated Latin.[113] He declared the manuscript to be a mostly plagiarised guide to women's health.[22] Scholars would go on to criticise Gibbs for patching together already-existing scholarship with a highly speculative and incorrect translation; Lisa Fagin Davis, director of the Medieval Academy of America, stated that Gibbs' decipherment "doesn't result in Latin that makes sense."[114] Davis added that she was "surprised the TLS published it."[115] Other researchers concurred.[27]

In February 2018, Ahmet Ardıç, a Turkish Canadian electrical engineer with an interest in Turkic linguistics, claimed that the manuscript is written in a language variety derived from Old Turkic.[116][117][118] The text was written using "phonemic orthography", meaning the manuscript's author spelled out words as they were heard phonetically.[119][120] Ardıç began deciphering the manuscript four years earlier with the help of his two sons. "Ahmet noticed that the words in the book appeared to be built of repetitive roots with prefixes and suffixes added.
It reminded him of his native Turkish [...] At first he found seven characters that were the same as Old Turkic, and slowly the language revealed itself [...] Other illustrations allowed them to match up Old Turkic words with the images pictured", reported The Canadian Press.[116] Ardıç's team claimed to have deciphered and translated at least 300 words "and is confident there is now sufficient vocabulary to read at least 30% of the manuscript."[117][119] His submission to the journal Digital Philology: A Journal of Medieval Cultures was rejected in 2019.[121]

In May 2019, Gerard Cheshire, a biology research assistant at the University of Bristol, claimed that the manuscript is written in a "calligraphic proto-Romance" language. He claimed to have deciphered the manuscript in two weeks using a combination of "lateral thinking and ingenuity."[122][123] Cheshire has suggested that the manuscript is "a compendium of information on herbal remedies, therapeutic bathing, and astrological readings"; that it contains numerous descriptions of medicinal plants[124][125][126][127] and passages that focus on female physical and mental health, reproduction, and parenting; and that the manuscript is the only known text written in proto-Romance.[128] He further claimed: "The manuscript was compiled by Dominican nuns as a source of reference for Maria of Castile, Queen of Aragon."[129]

In June 2023, Cheshire published his translation of the foldout illustration on page 158.[130] He claims that it depicts a volcano, and theorises that it places the manuscript's creators near the island of Vulcano, which was an active volcano during the 15th century.[131] However, experts in medieval documents disputed this interpretation vigorously.[132] Approached for comment, Lisa Fagin Davis gave this explanation:

As with most would-be Voynich interpreters, the logic of this proposal is circular and aspirational: he starts with a theory about what a particular series of glyphs might mean, usually because of the
word's proximity to an image that he believes he can interpret. He then investigates any number of medieval Romance-language dictionaries until he finds a word that seems to suit his theory. Then he argues that because he has found a Romance-language word that fits his hypothesis, his hypothesis must be right. His "translations" from what is essentially gibberish, an amalgam of multiple languages, are themselves aspirational rather than being actual translations.

The University of Bristol subsequently removed a reference to Cheshire's claims from its website,[133] referring, in a statement, to concerns about the validity of the research and stating: "This research was entirely the author's own work and is not affiliated with the University of Bristol, the School of Arts nor the Centre for Medieval Studies".[134][135]

Many books and articles have been written about the manuscript. Copies of the manuscript pages were made by alchemist Georg Baresch in 1637 and sent to Athanasius Kircher, and later by Wilfrid Voynich.[136]

In 2004, the Beinecke Rare Book and Manuscript Library made high-resolution digital scans publicly available online, and several printed facsimiles appeared. In 2016, the Beinecke Library and Yale University Press co-published a facsimile, The Voynich Manuscript, with scholarly essays.[137] The Beinecke Library also authorised the production of a print run of 898 replicas by the Spanish publisher Siloé in 2017.[138][139]

In September 2024, multispectral scans of ten selected pages were made public, revealing details unseen with visible light.[140]

The manuscript has inspired various works of fiction, including:
https://en.wikipedia.org/wiki/Voynich_manuscript
Grammar-based codes or grammar-based compression are compression algorithms based on the idea of constructing a context-free grammar (CFG) for the string to be compressed. Examples include universal lossless data compression algorithms.[1] To compress a data sequence x = x_1 ⋯ x_n, a grammar-based code transforms x into a context-free grammar G. The problem of finding a smallest grammar for an input sequence (the smallest grammar problem) is known to be NP-hard,[2] so many grammar-transform algorithms have been proposed from theoretical and practical viewpoints. Generally, the produced grammar G is further compressed by statistical encoders like arithmetic coding.

The class of grammar-based codes is very broad. It includes block codes, the multilevel pattern matching (MPM) algorithm,[3] variations of the incremental parsing Lempel-Ziv code,[4] and many other new universal lossless compression algorithms. Grammar-based codes are universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source with a finite alphabet.
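To make the grammar-transform idea concrete, here is a minimal Re-Pair-style sketch in Python. It is an illustrative toy, not any of the published algorithms: it repeatedly replaces the most frequent adjacent pair of symbols with a fresh nonterminal, recording one CFG rule per replacement, so that expanding the final start sequence reproduces the input. The names `build_grammar` and `expand` are inventions for this sketch.

```python
from collections import Counter

def build_grammar(text):
    """Toy Re-Pair-style grammar transform (illustrative sketch).

    Repeatedly replaces the most frequent adjacent pair of symbols with a
    fresh nonterminal, recording one CFG rule per replacement.  Returns
    the final start sequence and the rule dictionary.
    """
    seq = list(text)
    rules = {}
    counter = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))   # note: counts overlapping pairs
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = ("N", counter)                  # fresh nonterminal symbol
        counter += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):                  # greedy left-to-right rewrite
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(symbol, rules):
    """Expand a symbol back into the list of terminals it derives."""
    if symbol in rules:
        left, right = rules[symbol]
        return expand(left, rules) + expand(right, rules)
    return [symbol]
```

A real grammar-based compressor would then feed the final sequence and rules to a statistical encoder such as an arithmetic coder, as described above.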
https://en.wikipedia.org/wiki/Grammar-based_code
In information theory, an entropy coding (or entropy encoding) is any lossless data compression method that attempts to approach the lower bound declared by Shannon's source coding theorem, which states that any lossless data compression method must have an expected code length greater than or equal to the entropy of the source.[1]

More precisely, the source coding theorem states that for any source distribution, the expected code length satisfies

E_{x∼P}[ℓ(d(x))] ≥ E_{x∼P}[−log_b(P(x))],

where ℓ is the function specifying the number of symbols in a code word, d is the coding function, b is the number of symbols used to make output codes, and P is the probability of the source symbol. An entropy coding attempts to approach this lower bound.

Two of the most common entropy coding techniques are Huffman coding and arithmetic coding.[2] If the approximate entropy characteristics of a data stream are known in advance (especially for signal compression), a simpler static code may be useful. These static codes include universal codes (such as Elias gamma coding or Fibonacci coding) and Golomb codes (such as unary coding or Rice coding). Since 2014, data compressors have started using the asymmetric numeral systems family of entropy coding techniques, which allows combination of the compression ratio of arithmetic coding with a processing cost similar to Huffman coding.

Besides using entropy coding as a way to compress digital data, an entropy encoder can also be used to measure the amount of similarity between streams of data and already existing classes of data. This is done by generating an entropy coder/compressor for each class of data; unknown data is then classified by feeding the uncompressed data to each compressor and seeing which compressor yields the highest compression.
The coder with the best compression is probably the coder trained on the data that was most similar to the unknown data.
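The classification-by-compression idea can be sketched with an off-the-shelf compressor standing in for a per-class coder. This is an illustrative choice: zlib is a dictionary coder with a Huffman back end, not a purpose-trained entropy coder, and the function names below are inventions for this sketch. For each class, we measure how many extra bytes the unknown data costs when appended to that class's corpus; the smallest increase wins.

```python
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def classify(unknown: bytes, classes: dict) -> str:
    """Assign `unknown` to the class whose corpus compresses it best.

    A small size increase when `unknown` is appended to a corpus means
    the corpus's statistics (here, zlib's match model) fit it well.
    """
    costs = {
        label: compressed_size(corpus + unknown) - compressed_size(corpus)
        for label, corpus in classes.items()
    }
    return min(costs, key=costs.get)
```

For example, a snippet of English text should cost fewer extra bytes against an English corpus than against a corpus of digit strings, and vice versa.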
https://en.wikipedia.org/wiki/Entropy_encoding
In cryptanalysis, Kasiski examination (also known as Kasiski's test or Kasiski's method) is a method of attacking polyalphabetic substitution ciphers, such as the Vigenère cipher.[1][2] It was first published by Friedrich Kasiski in 1863,[3] but seems to have been independently discovered by Charles Babbage as early as 1846.[4][5]

In polyalphabetic substitution ciphers where the substitution alphabets are chosen by the use of a keyword, the Kasiski examination allows a cryptanalyst to deduce the length of the keyword. Once the length of the keyword is discovered, the cryptanalyst lines up the ciphertext in n columns, where n is the length of the keyword. Then each column can be treated as the ciphertext of a monoalphabetic substitution cipher. As such, each column can be attacked with frequency analysis.[6] Similarly, where a rotor stream cipher machine has been used, this method may allow the deduction of the length of individual rotors.

The Kasiski examination involves looking for strings of characters that are repeated in the ciphertext. The strings should be three characters long or more for the examination to be successful. Then, the distances between consecutive occurrences of the strings are likely to be multiples of the length of the keyword. Thus finding more repeated strings narrows down the possible lengths of the keyword, since we can take the greatest common divisor of all the distances.[7]

The reason this test works is that if a repeated string occurs in the plaintext, and the distance between corresponding characters is a multiple of the keyword length, the keyword letters will line up in the same way with both occurrences of the string. For example, consider the plaintext:

The word "the" is a repeated string, appearing multiple times. If we line up the plaintext with a 5-character keyword "beads":

The word "the" is sometimes mapped to "bea", sometimes to "sbe" and other times to "ead".
However, it is mapped to "sbe" twice, and in a long enough text, it would likely be mapped multiple times to each of these possibilities. Kasiski observed that the distance between such repeated appearances must be a multiple of the encryption period.[7] In this example, the period is 5, and the distance between the two occurrences of "sbe" is 30, which is 6 times the period. Therefore, the greatest common divisor of the distances between repeated sequences will reveal the key length or a multiple of it.

The difficulty of using the Kasiski examination lies in finding repeated strings. This is a very hard task to perform manually, but computers can make it much easier. However, care is still required, since some repeated strings may just be coincidence, so that some of the repeat distances are misleading. The cryptanalyst has to rule out the coincidences to find the correct length. Then, of course, the monoalphabetic ciphertexts that result must be cryptanalyzed.

Kasiski actually used "superimposition" to solve the Vigenère cipher. He started by finding the key length, as above. Then he took multiple copies of the message and laid them one above another, each one shifted left by the length of the key. Kasiski then observed that each column was made up of letters encrypted with a single alphabet. His method was equivalent to the one described above, but is perhaps easier to picture.

Modern attacks on polyalphabetic ciphers are essentially identical to that described above, with the one improvement of coincidence counting. Instead of looking for repeating groups, a modern analyst would take two copies of the message and lay one above another. Modern analysts use computers, but this description illustrates the principle that the computer algorithms implement. The generalized method:
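The basic repeated-string test described above is easy to automate. The sketch below (an illustrative implementation using trigrams; the function name is an invention) collects the distances between consecutive occurrences of every repeated n-gram and returns their greatest common divisor, which by Kasiski's observation is the keyword length or a multiple of it:

```python
from collections import defaultdict
from functools import reduce
from math import gcd

def kasiski_estimate(ciphertext: str, n: int = 3) -> int:
    """Return the GCD of distances between repeated n-grams (0 if none).

    By Kasiski's observation this GCD is the keyword length or a
    multiple of it; coincidental repeats can disturb it in practice.
    """
    positions = defaultdict(list)
    for i in range(len(ciphertext) - n + 1):
        positions[ciphertext[i:i + n]].append(i)
    distances = [
        later - earlier
        for occurrences in positions.values()
        for earlier, later in zip(occurrences, occurrences[1:])
    ]
    return reduce(gcd, distances) if distances else 0
```

For instance, Vigenère-encrypting the 15-letter plaintext "sendmoregoldnow" three times in a row with the key "abcde" gives the ciphertext "sfpgqosgjslepra" repeated three times; the estimate on that ciphertext is 15, a multiple of the true key length 5, which the analyst would then test by factoring.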
https://en.wikipedia.org/wiki/Kasiski_examination
The Riverbank Publications is a series of pamphlets written by the people who worked for millionaire George Fabyan in the multi-discipline research facility he built in the early 20th century near Chicago. They were published by Fabyan, often without author credit. The publications on cryptanalysis,[1] mostly written by William Friedman, with contributions from Elizebeth Smith Friedman and others, are considered seminal in the field.[2] In particular, Publication 22 introduced the Index of Coincidence, a powerful statistical tool for cryptanalysis.

The Riverbank Publications dealt with many subjects investigated at the laboratories. The ones dealing with cryptography[3] began with number 15[4]: p. 374 ff and consist of:[5][6]

Except as noted, the above publications were written by William F. Friedman and were published by George Fabyan's Riverbank Laboratories in Geneva, Illinois.
https://en.wikipedia.org/wiki/Riverbank_Publications
In Internet culture, the 1% rule is a general rule of thumb pertaining to participation in an Internet community, stating that only 1% of the users of a website actively create new content, while the other 99% of the participants only lurk. Variants include the 1–9–90 rule (sometimes 90–9–1 principle or the 89:10:1 ratio),[1] which states that in a collaborative website such as a wiki, 90% of the participants of a community only consume content, 9% of the participants change or update content, and 1% of the participants add content.

Similar rules are known in information science; for instance, the 80/20 rule known as the Pareto principle states that 20 percent of a group will produce 80 percent of the activity, regardless of how the activity is defined.

According to the 1% rule, about 1% of Internet users create content, while 99% are just consumers of that content. For example, for every person who posts on a forum, generally about 99 other people view that forum but do not post. The term was coined by authors and bloggers Ben McConnell and Jackie Huba,[2] although there were earlier references to this concept[3] that did not use the name. The terms lurk and lurking, in reference to online activity, are used to refer to online observation without engaging others in the Internet community.[4]

A 2007 study of radical jihadist Internet forums found 87% of users had never posted on the forums, 13% had posted at least once, 5% had posted 50 or more times, and only 1% had posted 500 or more times.[5]

A 2014 peer-reviewed paper entitled "The 1% Rule in Four Digital Health Social Networks: An Observational Study" empirically examined the 1% rule in health-oriented online forums.
The paper concluded that the 1% rule was consistent across the four support groups, with a handful of "Superusers" generating the vast majority of content.[6] A study later that year, from a separate group of researchers, replicated the 2014 van Mierlo study in an online forum for depression.[7] Results indicated that the frequency distribution of participation followed Zipf's law, which is a specific type of power law.

The "90–9–1" version of this rule states that for websites where users can both create and edit content, 1% of people create content, 9% edit or modify that content, and 90% view the content without contributing. However, the actual percentage is likely to vary depending upon the subject. For example, if a forum requires content submissions as a condition of entry, the percentage of people who participate will probably be significantly higher than 1%, but the content producers will still be a minority of users. This is validated in a study conducted by Michael Wu, who uses economics techniques to analyze participation inequality across hundreds of communities segmented by industry, audience type, and community focus.[8]

The 1% rule is often misunderstood to apply to the Internet in general, but it applies more specifically to any given Internet community. It is for this reason that one can see evidence for the 1% principle on many websites, but aggregated together one can see a different distribution. This latter distribution is still unknown and likely to shift, but various researchers and pundits have speculated on how to characterize the sum total of participation.
Research in late 2012 suggested that only 23% of the population (rather than 90%) could properly be classified as lurkers, while 17% of the population could be classified as intense contributors of content.[9] Several years prior, results were reported on a sample of students from Chicago where 60% of the sample created content in some form.[10]

A similar concept was introduced by Will Hill of AT&T Laboratories[11] and later cited by Jakob Nielsen; this was the earliest known reference to the term "participation inequality" in an online context.[12] The term regained public attention in 2006 when it was used in a strictly quantitative context within a blog entry on the topic of marketing.[2]
https://en.wikipedia.org/wiki/1%25_rule_(Internet_culture)
Benford's law, also known as the Newcomb–Benford law, the law of anomalous numbers, or the first-digit law, is an observation that in many real-life sets of numerical data, the leading digit is likely to be small.[1] In sets that obey the law, the number 1 appears as the leading significant digit about 30% of the time, while 9 appears as the leading significant digit less than 5% of the time. Uniformly distributed digits would each occur about 11.1% of the time.[2] Benford's law also makes predictions about the distribution of second digits, third digits, digit combinations, and so on.

Benford's law may be derived by assuming the dataset values are uniformly distributed on a logarithmic scale. The graph to the right shows Benford's law for base 10. Although a decimal base is most common, the result generalizes to any integer base greater than 2. Further generalizations published in 1995[3] included analogous statements for both the nth leading digit and the joint distribution of the leading n digits, the latter of which leads to a corollary wherein the significant digits are shown to be a statistically dependent quantity.

It has been shown that this result applies to a wide variety of data sets, including electricity bills, street addresses, stock prices, house prices, population numbers, death rates, lengths of rivers, and physical and mathematical constants.[4] Like other general principles about natural data—for example, the fact that many data sets are well approximated by a normal distribution—there are illustrative examples and explanations that cover many of the cases where Benford's law applies, though there are many other cases where Benford's law applies that resist simple explanations.[5][6] Benford's law tends to be most accurate when values are distributed across multiple orders of magnitude, especially if the process generating the numbers is described by a power law (which is common in nature).
The law is named after physicist Frank Benford, who stated it in 1938 in an article titled "The Law of Anomalous Numbers",[7] although it had been previously stated by Simon Newcomb in 1881.[8][9] The law is similar in concept, though not identical in distribution, to Zipf's law.

A set of numbers is said to satisfy Benford's law if the leading digit d (d ∈ {1, ..., 9}) occurs with probability[10]

P(d) = log_10(1 + 1/d).

The leading digits in such a set thus have the following distribution:

The quantity P(d) is proportional to the space between d and d + 1 on a logarithmic scale. Therefore, this is the distribution expected if the logarithms of the numbers (but not the numbers themselves) are uniformly and randomly distributed. For example, a number x, constrained to lie between 1 and 10, starts with the digit 1 if 1 ≤ x < 2, and starts with the digit 9 if 9 ≤ x < 10. Therefore, x starts with the digit 1 if log 1 ≤ log x < log 2, or starts with 9 if log 9 ≤ log x < log 10. The interval [log 1, log 2] is much wider than the interval [log 9, log 10] (0.30 and 0.05 respectively); therefore if log x is uniformly and randomly distributed, it is much more likely to fall into the wider interval than the narrower one, i.e. more likely to start with 1 than with 9; the probabilities are proportional to the interval widths, giving the equation above (as well as the generalization to other bases besides decimal).

Benford's law is sometimes stated in a stronger form, asserting that the fractional part of the logarithm of data is typically close to uniformly distributed between 0 and 1; from this, the main claim about the distribution of first digits can be derived.[5]

An extension of Benford's law predicts the distribution of first digits in other bases besides decimal; in fact, any base b ≥ 2. The general form is[12]

P(d) = log_b(1 + 1/d).

For b = 2 and b = 1 (the binary and unary number systems), Benford's law is true but trivial: all binary and unary numbers (except for 0 or the empty set) start with the digit 1.
(On the other hand, the generalization of Benford's law to second and later digits is not trivial, even for binary numbers.[13])

Examining a list of the heights of the 58 tallest structures in the world by category shows that 1 is by far the most common leading digit, irrespective of the unit of measurement (see "scale invariance" below).

Another example is the leading digit of 2^n. The sequence of the first 96 leading digits (1, 2, 4, 8, 1, 3, 6, 1, 2, 5, 1, 2, 4, 8, 1, 3, 6, 1, ... (sequence A008952 in the OEIS)) exhibits closer adherence to Benford's law than is expected for random sequences of the same length, because it is derived from a geometric sequence.[14]

The discovery of Benford's law goes back to 1881, when the Canadian-American astronomer Simon Newcomb noticed that in logarithm tables the earlier pages (that started with 1) were much more worn than the other pages.[8] Newcomb's published result is the first known instance of this observation and includes a distribution on the second digit as well. Newcomb proposed a law that the probability of a single number N being the first digit of a number was equal to log(N + 1) − log(N).

The phenomenon was again noted in 1938 by the physicist Frank Benford,[7] who tested it on data from 20 different domains and was credited for it. His data set included the surface areas of 335 rivers, the sizes of 3259 US populations, 104 physical constants, 1800 molecular weights, 5000 entries from a mathematical handbook, 308 numbers contained in an issue of Reader's Digest, the street addresses of the first 342 persons listed in American Men of Science and 418 death rates. The total number of observations used in the paper was 20,229. This discovery was later named after Benford (making it an example of Stigler's law).

In 1995, Ted Hill proved the result about mixed distributions mentioned below.[15][16]

Benford's law tends to apply most accurately to data that span several orders of magnitude.
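Both the first-digit probabilities and the 2^n example above are easy to check numerically. The following is a small illustrative script (the function names and the 3% tolerance are choices made for this sketch, not part of any standard formulation):

```python
import math

def benford_pmf(base: int = 10) -> dict:
    """First-digit probabilities P(d) = log_base(1 + 1/d), d = 1..base-1."""
    return {d: math.log(1 + 1 / d, base) for d in range(1, base)}

def leading_digit(x: int) -> int:
    return int(str(x)[0])

pmf = benford_pmf()

# Empirical first-digit frequencies of 2**n for n = 1..96.
counts = {d: 0 for d in range(1, 10)}
for n in range(1, 97):
    counts[leading_digit(2 ** n)] += 1
freqs = {d: counts[d] / 96 for d in range(1, 10)}
```

The probabilities telescope to 1 (log 2/1 + log 3/2 + ... + log 10/9 = log 10), and the empirical frequencies of the powers of two track P(d) closely, as the text notes.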
As a rule of thumb, the more orders of magnitude that the data evenly covers, the more accurately Benford's law applies. For instance, one can expect that Benford's law would apply to a list of numbers representing the populations of United Kingdom settlements. But if a "settlement" is defined as a village with population between 300 and 999, then Benford's law will not apply.[17][18]

Consider the probability distributions shown below, referenced to a log scale. In each case, the total area in red is the relative probability that the first digit is 1, and the total area in blue is the relative probability that the first digit is 8. For the first distribution, the sizes of the red and blue areas are approximately proportional to the widths of each red and blue bar. Therefore, the numbers drawn from this distribution will approximately follow Benford's law. On the other hand, for the second distribution, the ratio of the areas of red and blue is very different from the ratio of the widths of each red and blue bar. Rather, the relative areas of red and blue are determined more by the heights of the bars than the widths. Accordingly, the first digits in this distribution do not satisfy Benford's law at all.[18]

Thus, real-world distributions that span several orders of magnitude rather uniformly (e.g., stock-market prices and populations of villages, towns, and cities) are likely to satisfy Benford's law very accurately. On the other hand, a distribution mostly or entirely within one order of magnitude (e.g., IQ scores or heights of human adults) is unlikely to satisfy Benford's law very accurately, if at all.[17][18] However, the difference between applicable and inapplicable regimes is not a sharp cut-off: as the distribution gets narrower, the deviations from Benford's law increase gradually.
(This discussion is not a full explanation of Benford's law, because it has not explained why data sets are so often encountered that, when plotted as a probability distribution of the logarithm of the variable, are relatively uniform over several orders of magnitude.[19])

In 1970 Wolfgang Krieger proved what is now called the Krieger generator theorem.[20][21] The Krieger generator theorem might be viewed as a justification for the assumption in the Kafri ball-and-box model that, in a given base B with a fixed number of digits 0, 1, ..., n, ..., B − 1, digit n is equivalent to a Kafri box containing n non-interacting balls. Other scientists and statisticians have suggested entropy-related explanations[which?] for Benford's law.[22][23][10][24]

Many real-world examples of Benford's law arise from multiplicative fluctuations.[25] For example, if a stock price starts at $100, and then each day it gets multiplied by a randomly chosen factor between 0.99 and 1.01, then over an extended period the probability distribution of its price satisfies Benford's law with higher and higher accuracy. The reason is that the logarithm of the stock price is undergoing a random walk, so over time its probability distribution will get more and more broad and smooth (see above).[25] (More technically, the central limit theorem says that multiplying more and more random variables will create a log-normal distribution with larger and larger variance, so eventually it covers many orders of magnitude almost uniformly.) To be sure of approximate agreement with Benford's law, the distribution has to be approximately invariant when scaled up by any factor up to 10; a log-normally distributed data set with wide dispersion would have this approximate property.

Unlike multiplicative fluctuations, additive fluctuations do not lead to Benford's law: they lead instead to normal probability distributions (again by the central limit theorem), which do not satisfy Benford's law.
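The multiplicative random-walk mechanism can be demonstrated with a quick Monte Carlo sketch. Note one deliberate deviation from the text's example: the daily factor here is drawn from U(0.8, 1.25) rather than U(0.99, 1.01), so that the prices spread over several orders of magnitude within a few thousand steps rather than hundreds of thousands; the seed, trial count, and 5% tolerance are arbitrary choices for this sketch.

```python
import math
import random

random.seed(42)

def leading_digit(x: float) -> int:
    return int(f"{x:e}"[0])      # first digit of the scientific notation

benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Run many independent multiplicative random walks and tally the
# first digit of each final price.
trials, days = 1500, 2000
counts = {d: 0 for d in range(1, 10)}
for _ in range(trials):
    price = 100.0
    for _ in range(days):
        price *= random.uniform(0.8, 1.25)   # multiplicative fluctuation
    counts[leading_digit(price)] += 1

freqs = {d: counts[d] / trials for d in range(1, 10)}
```

After enough steps the log-price is spread over several orders of magnitude, and the first-digit frequencies land close to the Benford probabilities; replacing the multiplication with addition of small random increments would instead produce a narrow normal distribution that fails the law, as described above.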
By contrast, that hypothetical stock price described above can be written as the product of many random variables (i.e. the price change factor for each day), so is likely to follow Benford's law quite well.

Anton Formann provided an alternative explanation by directing attention to the interrelation between the distribution of the significant digits and the distribution of the observed variable. He showed in a simulation study that long-right-tailed distributions of a random variable are compatible with the Newcomb–Benford law, and that for distributions of the ratio of two random variables the fit generally improves.[26] For numbers drawn from certain distributions (IQ scores, human heights) Benford's law fails to hold because these variates obey a normal distribution, which is known not to satisfy Benford's law,[9] since normal distributions cannot span several orders of magnitude and the fractional parts of their logarithms will not be (even approximately) uniformly distributed. However, if one "mixes" numbers from those distributions, for example, by taking numbers from newspaper articles, Benford's law reappears. This can also be proven mathematically: if one repeatedly "randomly" chooses a probability distribution (from an uncorrelated set) and then randomly chooses a number according to that distribution, the resulting list of numbers will obey Benford's law.[15][27] A similar probabilistic explanation for the appearance of Benford's law in everyday-life numbers has been advanced by showing that it arises naturally when one considers mixtures of uniform distributions.[28]

In a list of lengths, the distribution of first digits of numbers in the list may be generally similar regardless of whether all the lengths are expressed in metres, yards, feet, inches, etc. The same applies to monetary units. This is not always the case.
For example, the height of adult humans almost always starts with a 1 or 2 when measured in metres, and almost always starts with 4, 5, 6, or 7 when measured in feet. But in a list of lengths spread evenly over many orders of magnitude—for example, a list of 1000 lengths mentioned in scientific papers that includes the measurements of molecules, bacteria, plants, and galaxies—it is reasonable to expect the distribution of first digits to be the same no matter whether the lengths are written in metres or in feet. When the distribution of the first digits of a data set is scale-invariant (independent of the units that the data are expressed in), it is always given by Benford's law.[29][30]

For example, the first (non-zero) digit on the aforementioned list of lengths should have the same distribution whether the unit of measurement is feet or yards. But there are three feet in a yard, so the probability that the first digit of a length in yards is 1 must be the same as the probability that the first digit of a length in feet is 3, 4, or 5; similarly, the probability that the first digit of a length in yards is 2 must be the same as the probability that the first digit of a length in feet is 6, 7, or 8. Applying this to all possible measurement scales gives the logarithmic distribution of Benford's law.

Benford's law for first digits is base invariant for number systems. There are conditions and proofs of sum invariance, inverse invariance, and addition and subtraction invariance.[31][32]

In 1972, Hal Varian suggested that the law could be used to detect possible fraud in lists of socio-economic data submitted in support of public planning decisions.
Based on the plausible assumption that people who fabricate figures tend to distribute their digits fairly uniformly, a simple comparison of the first-digit frequency distribution from the data with the expected distribution according to Benford's law ought to show up any anomalous results.[33] In the United States, evidence based on Benford's law has been admitted in criminal cases at the federal, state, and local levels.[34]

Walter Mebane, a political scientist and statistician at the University of Michigan, was the first to apply the second-digit Benford's law test (2BL test) in election forensics.[35] Such analysis is considered a simple, though not foolproof, method of identifying irregularities in election results.[36] Scientific consensus to support the applicability of Benford's law to elections has not been reached in the literature. A 2011 study by the political scientists Joseph Deckert, Mikhail Myagkov, and Peter C. Ordeshook argued that Benford's law is problematic and misleading as a statistical indicator of election fraud.[37] Their method was criticized by Mebane in a response, though he agreed that there are many caveats to the application of Benford's law to election data.[38]

Benford's law has been used as evidence of fraud in the 2009 Iranian elections.[39] An analysis by Mebane found that the second digits in vote counts for President Mahmoud Ahmadinejad, the winner of the election, tended to differ significantly from the expectations of Benford's law, and that the ballot boxes with very few invalid ballots had a greater influence on the results, suggesting widespread ballot stuffing.[40] Another study used bootstrap simulations to find that the candidate Mehdi Karroubi received almost twice as many vote counts beginning with the digit 7 as would be expected according to Benford's law,[41] while an analysis from Columbia University concluded that the probability that a fair election would produce both too few non-adjacent digits and the suspicious deviations in
last-digit frequencies as found in the 2009 Iranian presidential election is less than 0.5 percent.[42] Benford's law has also been applied for forensic auditing and fraud detection on data from the 2003 California gubernatorial election,[43] the 2000 and 2004 United States presidential elections,[44] and the 2009 German federal election.[45] The Benford's Law Test was found to be "worth taking seriously as a statistical test for fraud," although "the test is not sensitive to distortions we know significantly affected many votes. In particular, the test does not indicate problems for Florida in 2000."[44] Benford's law has also been misapplied to claim election fraud. When applying the law to Joe Biden's election returns for Chicago, Milwaukee, and other localities in the 2020 United States presidential election, the distribution of the first digit did not follow Benford's law. The misapplication was a result of looking at data that was tightly bound in range, which violates the assumption inherent in Benford's law that the range of the data be large. The first-digit test was applied to precinct-level data, but because precincts rarely receive more than a few thousand votes or fewer than several dozen, Benford's law cannot be expected to apply. According to Mebane, "It is widely understood that the first digits of precinct vote counts are not useful for trying to diagnose election frauds."[46][47] Similarly, the macroeconomic data the Greek government reported to the European Union before entering the eurozone was shown to be probably fraudulent using Benford's law, albeit years after the country joined.[48][49] Researchers have used Benford's law to detect psychological pricing patterns, in a Europe-wide study of consumer product prices before and after the euro was introduced in 2002.[50] The idea was that, without psychological pricing, the first two or three digits of the price of items should follow Benford's law.
Consequently, if the distribution of digits deviates from Benford's law (such as having a lot of 9's), it means merchants may have used psychological pricing. When the euro replaced local currencies in 2002, for a brief period of time, the price of goods in euros was simply converted from the price of goods in local currencies before the replacement. As it is essentially impossible to use psychological pricing simultaneously on both the price in euros and the price in local currency, during the transition period psychological pricing would be disrupted even if it used to be present. It could only be re-established once consumers had gotten used to prices in a single currency again, this time in euros. As the researchers expected, the distribution of the first price digit followed Benford's law, but the distribution of the second and third digits deviated significantly from Benford's law before the introduction, then deviated less during the introduction, then deviated more again after the introduction. The number of open reading frames and their relationship to genome size differ between eukaryotes and prokaryotes, with the former showing a log-linear relationship and the latter a linear relationship. Benford's law has been used to test this observation with an excellent fit to the data in both cases.[51] A test of regression coefficients in published papers showed agreement with Benford's law.[52] As a comparison group, subjects were asked to fabricate statistical estimates. The fabricated results conformed to Benford's law on first digits, but failed to obey Benford's law on second digits. Testing the number of published scientific papers of all registered researchers in Slovenia's national database was shown to strongly conform to Benford's law.[53] Moreover, the authors were grouped by scientific field, and tests indicate natural sciences exhibit greater conformity than social sciences.
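Conformity checks like those in the studies above compare observed first-digit frequencies with Benford's expectation. A minimal chi-squared sketch (the function names are illustrative; the first 100 powers of 2 serve as known-conforming sample data, and a repeated uniform digit list as clearly non-conforming data):

```python
import math
from collections import Counter

# Benford's expected first-digit probabilities
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """Leading significant digit of a positive number."""
    return int(f"{x:.15e}"[0])  # scientific notation: first char is the digit

def benford_chi_squared(numbers):
    """Pearson chi-squared statistic of the observed first digits
    against Benford's expected counts (8 degrees of freedom)."""
    observed = Counter(first_digit(x) for x in numbers)
    n = len(numbers)
    return sum((observed.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in BENFORD.items())

conforming = benford_chi_squared([2 ** k for k in range(1, 101)])  # small
uniform = benford_chi_squared(list(range(1, 10)) * 100)            # large
```

At 8 degrees of freedom the 5% critical value is about 15.5, so the powers-of-2 sample passes while the uniform digits fail decisively.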
A 2025 PLOS ONE journal article argues that the Benford probability distribution of species in certain ecological systems can detect impending transitions of the system.[54] Although the chi-squared test has been used to test for compliance with Benford's law, it has low statistical power when used with small samples. The Kolmogorov–Smirnov test and the Kuiper test are more powerful when the sample size is small, particularly when Stephens's corrective factor is used.[55] These tests may be unduly conservative when applied to discrete distributions. Values for the Benford test have been generated by Morrow.[56] The critical values of the test statistics are shown below: These critical values provide the minimum test statistic values required to reject the hypothesis of compliance with Benford's law at the given significance levels. Two alternative tests specific to this law have been published: First, the max (m) statistic[57] is given by m = √N · max_{d=1,…,9} |Pr(X has FSD = d) − log10(1 + 1/d)|. The leading factor √N does not appear in the original formula by Leemis;[57] it was added by Morrow in a later paper.[56] Secondly, the distance (d) statistic[58] is given by d* = √N · [Σ_{d=1}^{9} (Pr(X has FSD = d) − log10(1 + 1/d))²]^{1/2}, where FSD is the first significant digit and N is the sample size. Morrow has determined the critical values for both these statistics, which are shown below:[56] Morrow has also shown that for any random variable X (with a continuous PDF) divided by its standard deviation (σ), some value A can be found such that the distribution of the first significant digit of the random variable |X/σ|^A will differ from Benford's law by less than ε > 0.[56] The value of A depends on the value of ε and the distribution of the random variable. A method of accounting fraud detection based on bootstrapping and regression has been proposed.[59] If the goal is to conclude agreement with Benford's law rather than disagreement, then the goodness-of-fit tests mentioned above are inappropriate.
In this case the specific tests for equivalence should be applied. An empirical distribution is called equivalent to Benford's law if a distance (for example total variation distance or the usual Euclidean distance) between the probability mass functions is sufficiently small. This method of testing with application to Benford's law is described in Ostrovski.[60] Some well-known infinite integer sequences provably satisfy Benford's law exactly (in the asymptotic limit as more and more terms of the sequence are included). Among these are the Fibonacci numbers,[61][62] the factorials,[63] the powers of 2,[64][14] and the powers of almost any other number.[64] Likewise, some continuous processes satisfy Benford's law exactly (in the asymptotic limit as the process continues through time). One is an exponential growth or decay process: if a quantity is exponentially increasing or decreasing in time, then the percentage of time that it has each first digit satisfies Benford's law asymptotically (i.e. with increasing accuracy as the process continues through time). The square roots and reciprocals of successive natural numbers do not obey this law.[65] Prime numbers in a finite range follow a generalized Benford's law that approaches uniformity as the size of the range approaches infinity.[66] Lists of local telephone numbers violate Benford's law.[67] Benford's law is violated by the populations of all places with a population of at least 2500 individuals from five US states according to the 1960 and 1970 censuses, where only 19% began with digit 1 but 20% began with digit 2, because truncation at 2500 introduces statistical bias.[65] The terminal digits in pathology reports violate Benford's law due to rounding.[68] Distributions that do not span several orders of magnitude will not follow Benford's law.
Examples include height, weight, and IQ scores.[9][69] A number of criteria, applicable particularly to accounting data, have been suggested where Benford's law can be expected to apply.[70] Mathematically, Benford's law applies if the distribution being tested fits the "Benford's law compliance theorem".[17] The derivation says that Benford's law is followed if the Fourier transform of the logarithm of the probability density function is zero for all integer values. Most notably, this is satisfied if the Fourier transform is zero (or negligible) for n ≥ 1. This is satisfied if the distribution is wide (since a wide distribution implies a narrow Fourier transform). Smith summarizes thus (p. 716): Benford's law is followed by distributions that are wide compared with unit distance along the logarithmic scale. Likewise, the law is not followed by distributions that are narrow compared with unit distance … If the distribution is wide compared with unit distance on the log axis, it means that the spread in the set of numbers being examined is much greater than ten. In short, Benford's law requires that the numbers in the distribution being measured have a spread across at least an order of magnitude. Benford's law was empirically tested against the numbers (up to the 10th digit) generated by a number of important distributions, including the uniform distribution, the exponential distribution, the normal distribution, and others.[9] The uniform distribution, as might be expected, does not obey Benford's law. In contrast, the ratio distribution of two uniform distributions is well-described by Benford's law. Neither the normal distribution nor the ratio distribution of two normal distributions (the Cauchy distribution) obeys Benford's law. Although the half-normal distribution does not obey Benford's law, the ratio distribution of two half-normal distributions does.
Neither the right-truncated normal distribution nor the ratio distribution of two right-truncated normal distributions is well described by Benford's law. This is not surprising, as this distribution is weighted towards larger numbers. Benford's law also describes the exponential distribution and the ratio distribution of two exponential distributions well. The fit of the chi-squared distribution depends on the degrees of freedom (df), with good agreement with df = 1 and decreasing agreement as the df increases. The F-distribution is fitted well for low degrees of freedom. With increasing dfs the fit decreases, but much more slowly than for the chi-squared distribution. The fit of the log-normal distribution depends on the mean and the variance of the distribution. The variance has a much greater effect on the fit than does the mean. Larger values of both parameters result in better agreement with the law. The ratio of two log-normal distributions is itself log-normal, so this distribution was not examined. Other distributions that have been examined include the Muth distribution, Gompertz distribution, Weibull distribution, gamma distribution, log-logistic distribution and the exponential power distribution, all of which show reasonable agreement with the law.[57][71] The Gumbel distribution – whose density increases with increasing value of the random variable – does not show agreement with this law.[71] It is possible to extend the law to digits beyond the first.[72] In particular, for any given number of digits, the probability of encountering a number starting with the string of digits n of that length – discarding leading zeros – is given by log10(1 + 1/n). Thus, the probability that a number starts with the digits 3, 1, 4 (some examples are 3.14, 3.142, π, 314280.7, and 0.00314005) is log10(1 + 1/314) ≈ 0.00138, as in the box with the log-log graph on the right. This result can be used to find the probability that a particular digit occurs at a given position within a number.
For instance, the probability that a "2" is encountered as the second digit is[72] Σ_{k=1}^{9} log10(1 + 1/(10k + 2)) ≈ 0.109. The probability that d (d = 0, 1, ..., 9) is encountered as the n-th (n > 1) digit is Σ_{k=10^{n−2}}^{10^{n−1}−1} log10(1 + 1/(10k + d)). The distribution of the n-th digit, as n increases, rapidly approaches a uniform distribution with 10% for each of the ten digits, as shown below.[72] Four digits is often enough to assume a uniform distribution of 10%, as "0" appears 10.0176% of the time in the fourth digit, while "9" appears 9.9824% of the time. Average and moments of random variables for the digits 1 to 9 following this law have been calculated:[73] For the two-digit distribution according to Benford's law these values are also known:[74] A table of the exact probabilities for the joint occurrence of the first two digits according to Benford's law is available,[74] as is the population correlation between the first and second digits:[74] ρ = 0.0561. Benford's law has appeared as a plot device in some twenty-first century popular entertainment.
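The digit-string and digit-position generalizations described above can be sketched as follows (function names are illustrative):

```python
import math

def p_leading_string(digits):
    """Benford probability that a number starts with the given digit
    string, leading zeros discarded (e.g. '314')."""
    n = int(digits)
    return math.log10(1 + 1 / n)

def p_digit_at_position(d, pos):
    """Probability that digit d (0-9) occurs at position pos (1-based).

    For pos > 1 this sums the leading-string probabilities over every
    possible prefix of length pos - 1."""
    if pos == 1:
        return math.log10(1 + 1 / d)  # here d must be 1-9
    prefixes = range(10 ** (pos - 2), 10 ** (pos - 1))
    return sum(math.log10(1 + 1 / (10 * k + d)) for k in prefixes)
```

`p_digit_at_position(0, 4)` reproduces the 10.0176% fourth-digit figure quoted above, and the ten position-n probabilities sum to 1 as required.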
https://en.wikipedia.org/wiki/Benford%27s_law
Bradford's law is a pattern first described by Samuel C. Bradford in 1934 that estimates the exponentially diminishing returns of searching for references in science journals. One formulation is that if journals in a field are sorted by number of articles into three groups, each with about one-third of all articles, then the number of journals in each group will be proportional to 1:n:n².[1] There are a number of related formulations of the principle. In many disciplines, this pattern is called a Pareto distribution. As a practical example, suppose that a researcher has five core scientific journals for his or her subject. Suppose that in a month there are 12 articles of interest in those journals. Suppose further that in order to find another dozen articles of interest, the researcher would have to go to an additional 10 journals. Then that researcher's Bradford multiplier bm is 2 (i.e. 10/5). For each new dozen articles, that researcher will need to look in bm times as many journals. After looking in 5, 10, 20, 40, etc. journals, most researchers quickly realize that there is little point in looking further. Different researchers have different numbers of core journals, and different Bradford multipliers. But the pattern holds quite well across many subjects, and may well be a general pattern for human interactions in social systems. Like Zipf's law, to which it is related, we do not have a good explanation for why it works, but knowing that it does is very useful for librarians. What it means is that for each specialty, it is sufficient to identify the "core publications" for that field and only stock those; very rarely will researchers need to go outside that set.[verification needed] However, its impact has been far greater than that. Armed with this idea and inspired by Vannevar Bush's famous article As We May Think, Eugene Garfield at the Institute for Scientific Information in the 1960s developed a comprehensive index of how scientific thinking propagates.
His Science Citation Index (SCI) had the effect of making it easy to identify exactly which scientists did science that had an impact, and which journals that science appeared in. It also caused the discovery, which some did not expect, that a few journals, such as Nature and Science, were core for all of hard science. The same pattern does not happen with the humanities or the social sciences. The result of this is pressure on scientists to publish in the best journals, and pressure on universities to ensure access to that core set of journals. On the other hand, the set of "core journals" may vary more or less strongly with the individual researchers, and even more strongly along schools-of-thought divides. There is also a danger of over-representing majority views if journals are selected in this fashion. Bradford's law is also known as Bradford's law of scattering or the Bradford distribution, as it describes how the articles on a particular subject are scattered throughout the mass of periodicals.[2] Another more general term that has come into use since 2006 is information scattering, an often observed phenomenon related to information collections where there are a few sources that have many items of relevant information about a topic, while most sources have only a few.[3] This law of distribution in bibliometrics can be applied to the World Wide Web as well.[4] Hjørland and Nicolaisen identified three kinds of scattering:[5] They found that the literature of Bradford's law (including Bradford's own papers) is unclear in relation to which kind of scattering is actually being measured. The interpretation of Bradford's law in terms of a geometric progression was suggested by V. Yatsko,[6] who introduced an additional constant and demonstrated that the Bradford distribution can be applied to a variety of objects, not only to the distribution of articles or citations across journals. V.
Yatsko's interpretation (Y-interpretation) can be effectively used to compute threshold values when it is necessary to distinguish subsets within a set of objects (successful/unsuccessful applicants, developed/underdeveloped regions, etc.).
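The zone arithmetic from the practical example above (5 core journals, then 10 more, then 20, each zone yielding about the same number of relevant articles) can be sketched as follows (the function name is illustrative):

```python
def bradford_zones(core_journals, multiplier, n_zones):
    """Journals needed in each successive Bradford zone.

    Each zone yields roughly the same number of relevant articles,
    so journal counts grow geometrically by the Bradford multiplier."""
    return [core_journals * multiplier ** k for k in range(n_zones)]

# The researcher in the example: five core journals, bm = 2.
zones = bradford_zones(5, 2, 4)
# Bradford's 1:n:n^2 formulation is the three-zone case with a
# single "unit" of core journals:
proportions = bradford_zones(1, 3, 3)
```

The rapidly growing journal counts per fixed batch of articles are exactly the diminishing returns the law describes.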
https://en.wikipedia.org/wiki/Bradford%27s_law
In linguistics, the brevity law (also called Zipf's law of abbreviation) is a linguistic law that qualitatively states that the more frequently a word is used, the shorter that word tends to be, and vice versa; the less frequently a word is used, the longer it tends to be.[1] This is a statistical regularity found in natural languages and other natural systems, and it is claimed to be a general rule. The brevity law was originally formulated by the linguist George Kingsley Zipf in 1945 as a negative correlation between the frequency of a word and its size. He analyzed a written corpus in American English and showed that the average lengths, in terms of the average number of phonemes, fell as the frequency of occurrence increased. Similarly, in a Latin corpus, he found a negative correlation between the number of syllables in a word and the frequency of its appearance. This observation says that the most frequent words in a language are the shortest, e.g. the most common words in English are: the, be (in different forms), to, of, and, a – all containing 1 to 3 phonemes. He claimed that this law of abbreviation is a universal structural property of language, hypothesizing that it arises as a result of individuals optimising form-meaning mappings under competing pressures to communicate accurately but also efficiently.[2][3] Since then, the law has been empirically verified for almost a thousand languages of 80 different linguistic families for the relationship between the number of letters in a written word and its frequency in text.[4] The brevity law appears universal and has also been observed acoustically when word size is measured in terms of word duration.[5] Evidence from 2016 suggests it holds in the acoustic communication of other primates.[6] The origin of this statistical pattern seems to be related to optimization principles and derived from a mediation between two major constraints: the pressure to reduce the cost of production and the pressure to maximize transmission success.
This idea is closely related to the principle of least effort, which postulates that efficiency selects a path of least resistance or "effort". This principle of reducing the cost of production might also be related to principles of optimal data compression in information theory.[7]
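The negative correlation Zipf measured can be illustrated on a toy corpus. A minimal sketch (the function name and the sample text are illustrative, not from Zipf's corpora; real studies use phoneme counts or word durations rather than letters):

```python
import math
from collections import Counter

def length_frequency_correlation(text):
    """Pearson correlation between word length and log frequency.

    Zipf's law of abbreviation predicts a negative value: the more
    frequent a word, the shorter it tends to be."""
    counts = Counter(text.lower().split())
    xs = [len(word) for word in counts]           # word lengths
    ys = [math.log(c) for c in counts.values()]   # log frequencies
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Frequent short words, rare long words: correlation comes out negative.
r = length_frequency_correlation(
    "a a a a the the the to to of of and extraordinary")
```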
https://en.wikipedia.org/wiki/Brevity_law
Demographic gravitation is a concept of "social physics",[1] introduced by Princeton University astrophysicist John Quincy Stewart[2] in 1947.[3] It is an attempt to use equations and notions of classical physics, such as gravity, to seek simplified insights and even laws of demographic behaviour for large numbers of human beings. A basic conception within it is that large numbers of people, in a city for example, actually behave as an attractive force for other people to migrate there. It has been related[4][5] to W. J. Reilly's law of retail gravitation,[6][7] George Kingsley Zipf's demographic energy,[8] and to the theory of trip distribution through gravity models. Writing in the journal Sociometry, Stewart set out an "agenda for social physics." Comparing the microscopic versus macroscopic viewpoints in the methodology of formulating physical laws, he made an analogy with the social sciences: Fortunately for physics, the macroscopic approach was the commonsense one, and the early investigators – Boyle, Charles, Gay-Lussac – were able to establish the laws of gases. The situation with respect to "social physics" is reversed... If Robert Boyle had taken the attitude of many social scientists, he would not have been willing to measure the pressure and volume of a sample of air until an encyclopedic history of its molecules had been compiled.
Boyle did not even know that air contained argon and helium but he found a very important law.[3] Stewart proceeded to apply Newtonian formulae of gravitation to "the average interrelations of people" on a wide geographic scale, elucidating such notions as "the demographic force of attraction," demographic energy, force, potential and gradient.[3] The following are some of the key equations (with plain-English paraphrases) from his article in Sociometry:

Demographic force = (population 1 multiplied by population 2) divided by (distance squared)
Demographic energy = (population 1 multiplied by population 2) divided by distance; this is also Zipf's determinant
Demographic potential of population at point 1 = population at point 2, divided by distance
Demographic potential in general = population divided by distance, in persons per mile
Demographic gradient = persons per square mile

The potential of population at any point is equivalent to the measure of proximity of people at that point (this also has relevance to Georgist economic rent theory; see land rent). For comparison, Reilly's retail gravity equilibrium (or balance/break point) is paraphrased as: population 1 divided by (distance to balance, squared) = population 2 divided by (distance to balance, squared). Recently, a stochastic version has been proposed,[9] according to which the probability p_j of a site j to become urban is given by an expression in which w_k = 1 for urban sites and w_k = 0 otherwise, d_{j,k} is the distance between sites j and k, and C controls the overall growth rate. The parameter γ determines the degree of compactness.
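The paraphrased equations above translate directly into code. A minimal sketch (function names are illustrative; populations in persons and distances in miles, following Stewart's conventions):

```python
import math

def demographic_force(pop1, pop2, distance):
    """Demographic force = P1 * P2 / d**2, by analogy with gravity."""
    return pop1 * pop2 / distance ** 2

def demographic_energy(pop1, pop2, distance):
    """Demographic energy (Zipf's determinant) = P1 * P2 / d."""
    return pop1 * pop2 / distance

def demographic_potential(pop, distance):
    """Demographic potential = P / d, in persons per mile."""
    return pop / distance

def reilly_breakpoint(pop1, pop2, total_distance):
    """Distance x from city 1 at which P1 / x**2 == P2 / (D - x)**2,
    i.e. Reilly's retail gravity balance point."""
    return total_distance / (1 + math.sqrt(pop2 / pop1))
```

For equal populations the breakpoint falls at the midpoint, as the balance equation requires; a larger city 1 pushes the breakpoint toward city 2.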
https://en.wikipedia.org/wiki/Demographic_gravitation
A word list is a list of words in a lexicon, generally sorted by frequency of occurrence (either by graded levels, or as a ranked list). A word list is compiled by lexical frequency analysis within a given text corpus, and is used in corpus linguistics to investigate genealogies and evolution of languages and texts. A word which appears only once in the corpus is called a hapax legomenon. In pedagogy, word lists are used in curriculum design for vocabulary acquisition. A lexicon sorted by frequency "provides a rational basis for making sure that learners get the best return for their vocabulary learning effort" (Nation 1997), but is mainly intended for course writers, not directly for learners. Frequency lists are also made for lexicographical purposes, serving as a sort of checklist to ensure that common words are not left out. Some major pitfalls are the corpus content, the corpus register, and the definition of "word". While word counting is a thousand years old, with gigantic analyses still done by hand in the mid-20th century, natural language electronic processing of large corpora such as movie subtitles (SUBTLEX megastudy) has accelerated the research field. In computational linguistics, a frequency list is a sorted list of words (word types) together with their frequency, where frequency here usually means the number of occurrences in a given corpus, from which the rank can be derived as the position in the list. Nation (Nation 1997) noted the incredible help provided by computing capabilities, making corpus analysis much easier. He cited several key issues which influence the construction of frequency lists: Most currently available studies are based on written text corpora, which are more easily available and easier to process. However, New et al.
2007 proposed to tap into the large number of subtitles available online to analyse large numbers of speeches. Brysbaert & New 2009 made a long critical evaluation of the traditional textual analysis approach, and support a move toward speech analysis and analysis of film subtitles available online. The initial research saw a handful of follow-up studies,[1] providing valuable frequency count analysis for various languages. In-depth SUBTLEX studies over cleaned-up open subtitles were produced for French (New et al. 2007), American English (Brysbaert & New 2009; Brysbaert, New & Keuleers 2012), Dutch (Keuleers & New 2010), Chinese (Cai & Brysbaert 2010), Spanish (Cuetos et al. 2011), Greek (Dimitropoulou et al. 2010), Vietnamese (Pham, Bolger & Baayen 2011), Brazilian Portuguese (Tang 2012) and European Portuguese (Soares et al. 2015), Albanian (Avdyli & Cuetos 2013), Polish (Mandera et al. 2014), Catalan (2019[2]), and Welsh (Van Veuhen et al. 2024[3]). SUBTLEX-IT (2015) provides raw data only.[4] In any case, the basic "word" unit should be defined. For Latin scripts, words are usually one or several characters separated either by spaces or punctuation. But exceptions can arise: English "can't" and French "aujourd'hui" include punctuation, while French "château d'eau" denotes a concept different from the simple addition of its components while including a space. It may also be preferable to group words of a word family under the representation of its base word. Thus, possible, impossible, possibility are words of the same word family, represented by the base word *possib*. For statistical purposes, all these words are summed up under the base word form *possib*, allowing the ranking of a concept and form occurrence. Moreover, other languages may present specific difficulties.
Such is the case of Chinese, which does not use spaces between words, and where a specified chain of several characters can be interpreted as either a phrase of unique-character words, or as a multi-character word. It seems that Zipf's law holds for frequency lists drawn from longer texts of any natural language. Frequency lists are a useful tool when building an electronic dictionary, which is a prerequisite for a wide range of applications in computational linguistics. German linguists define the Häufigkeitsklasse (frequency class) N of an item in the list using the base-2 logarithm of the ratio between its frequency and the frequency of the most frequent item: N = ⌊0.5 − log₂(item frequency / frequency of the most frequent item)⌋, where ⌊…⌋ is the floor function. The most common item belongs to frequency class 0 (zero) and any item that is approximately half as frequent belongs in class 1. In the example list above, the misspelled word outragious has a ratio of 76/3789654 and belongs in class 16. Frequency lists, together with semantic networks, are used to identify the least common, specialized terms to be replaced by their hypernyms in a process of semantic compression. Those lists are not intended to be given directly to students, but rather to serve as a guideline for teachers and textbook authors (Nation 1997). Paul Nation's modern language teaching summary encourages first to "move from high frequency vocabulary and special purposes [thematic] vocabulary to low frequency vocabulary, then to teach learners strategies to sustain autonomous vocabulary expansion" (Nation 2006). Word frequency is known to have various effects (Brysbaert et al. 2011; Rudell 1993). Memorization is positively affected by higher word frequency, likely because the learner is subject to more exposures (Laufer 1997). Lexical access is positively influenced by high word frequency, a phenomenon called the word frequency effect (Segui et al.).
The effect of word frequency is related to the effect of age of acquisition, the age at which the word was learned. Below is a review of available resources. Word counting is an ancient field,[5] with known discussion back to Hellenistic times. In 1944, Edward Thorndike, Irvin Lorge and colleagues[6] hand-counted 18,000,000 running words to provide the first large-scale English language frequency list, before modern computers made such projects far easier (Nation 1997). The 20th century's works all suffer from their age. In particular, words relating to technology, such as "blog," which, in 2014, was #7665 in frequency[7] in the Corpus of Contemporary American English,[8] was first attested in 1999,[9][10][11] and does not appear in any of these three lists. The Teacher Word Book contains 30,000 lemmas or ~13,000 word families (Goulden, Nation and Read, 1990). A corpus of 18 million written words was hand-analysed. The size of its source corpus increased its usefulness, but its age, and language changes, have reduced its applicability (Nation 1997). The General Service List contains 2,000 headwords divided into two sets of 1,000 words. A corpus of 5 million written words was analyzed in the 1940s. The rate of occurrence (%) for different meanings, and parts of speech, of the headword are provided. Various criteria, other than frequency and range, were carefully applied to the corpus. Thus, despite its age, some errors, and its corpus being entirely written text, it is still an excellent database of word frequency, frequency of meanings, and reduction of noise (Nation 1997). This list was updated in 2013 by Dr. Charles Browne, Dr. Brent Culligan and Joseph Phillips as the New General Service List. A corpus of 5 million running words, from written texts used in United States schools (various grades, various subject areas).
Its value is in its focus on school teaching materials, and its tagging of words by the frequency of each word in each of the school grades and in each of the subject areas (Nation 1997). These now contain 1 million words from a written corpus representing different dialects of English. These sources are used to produce frequency lists (Nation 1997). A review has been made by New & Pallier. An attempt was made in the 1950s–60s with the Français fondamental. It includes the F.F.1 list with 1,500 high-frequency words, completed by a later F.F.2 list with 1,700 mid-frequency words, and the most used syntax rules.[12] It is claimed that 70 grammatical words constitute 50% of communicative sentences,[13][14] while 3,680 words make up about 95–98% of coverage.[15] A list of 3,000 frequent words is available.[16] The French Ministry of Education also provides a ranked list of the 1,500 most frequent word families, provided by the lexicologist Étienne Brunet.[17] Jean Baudot made a study on the model of the American Brown study, entitled "Fréquences d'utilisation des mots en français écrit contemporain".[18] More recently, the project Lexique3 provides 142,000 French words, with orthography, phonetics, syllabification, part of speech, gender, number of occurrences in the source corpus, frequency rank, associated lexemes, etc., available under the open license CC-BY-SA 4.0.[19] Lexique3 is a continuing study from which the SUBTLEX movement cited above originates. New et al. 2007 made a completely new count based on online film subtitles. There have been several studies of Spanish word frequency (Cuetos et al. 2011).[20] Chinese corpora have long been studied from the perspective of frequency lists. The historical way to learn Chinese vocabulary is based on character frequency (Allanic 2003). American sinologist John DeFrancis mentioned its importance for Chinese as a foreign language learning and teaching in Why Johnny Can't Read Chinese (DeFrancis 1966).
As a frequency toolkit, Da (Da 1998) and the Taiwanese Ministry of Education (TME 1997) provided large databases with frequency ranks for characters and words. The HSK list of 8,848 high- and medium-frequency words in the People's Republic of China, and the Republic of China (Taiwan)'s TOP list of about 8,600 common traditional Chinese words are two other lists displaying common Chinese words and characters. Following the SUBTLEX movement, Cai & Brysbaert 2010 recently made a rich study of Chinese word and character frequencies. Wiktionary contains frequency lists in more languages,[21] as do collections of the most frequently used words in different languages based on Wikipedia or combined corpora.[22]
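A minimal sketch of building a ranked frequency list and computing the frequency class (Häufigkeitsklasse) defined earlier; the function names are illustrative, and the 0.5 rounding offset is chosen to be consistent with the class-16 example for outragious (76/3789654):

```python
import math
from collections import Counter

def frequency_list(text):
    """Word types with their counts, most frequent first."""
    return Counter(text.lower().split()).most_common()

def frequency_class(count, max_count):
    """Haeufigkeitsklasse: 0 for the most frequent item, 1 for an item
    about half as frequent, and so on (base-2 log of the frequency
    ratio, rounded to the nearest integer via the 0.5 offset)."""
    return math.floor(0.5 - math.log2(count / max_count))
```

Real lists add the normalization steps discussed above (lemmatization into word families, script-aware tokenization) before counting.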
https://en.wikipedia.org/wiki/Frequency_list
Gibrat's law, sometimes called Gibrat's rule of proportionate growth or the law of proportionate effect,[1] is a rule defined by Robert Gibrat (1904–1980) in 1931 stating that the proportional rate of growth of a firm is independent of its absolute size.[2][3] The law of proportionate growth gives rise to a firm size distribution that is log-normal.[4] Gibrat's law is also applied to city sizes and growth rates,[5] where a proportionate growth process may give rise to a distribution of city sizes that is log-normal, as predicted by Gibrat's law. While the city size distribution is often associated with Zipf's law, this holds only in the upper tail. When considering the entire size distribution, not just the largest cities, the city size distribution is log-normal.[6] The log-normality of the distribution reconciles Gibrat's law for cities as well: the law of proportionate effect implies that the logarithms of the variable are normally distributed, i.e., the variable itself follows a log-normal distribution.[2] In isolation, the upper tail (fewer than 1,000 out of 24,000 cities) fits both the log-normal and the Pareto distribution: the uniformly most powerful unbiased test comparing the log-normal to the power law shows that the largest 1,000 cities are distinctly in the power-law regime.[7] However, it has been argued that it is problematic to define cities through their fairly arbitrary legal boundaries (the places method treats Cambridge and Boston, Massachusetts, as two separate units). A clustering method that constructs cities from the bottom up, by clustering populated areas obtained from high-resolution data, finds a power-law distribution of city sizes consistent with Zipf's law in almost the entire range of sizes.[8] Note that the populated areas are still aggregated rather than individual-based. A newer method based on individual street nodes for the clustering process leads to the concept of natural cities.
It has been found that natural cities exhibit a striking Zipf's law.[9] Furthermore, the clustering method allows for a direct assessment of Gibrat's law. The growth of agglomerations is found to be inconsistent with Gibrat's law: the mean and standard deviation of the growth rates of cities follow a power law with city size.[10] In general, processes characterized by Gibrat's law converge to a limiting distribution, often proposed to be the log-normal or a power law, depending on more specific assumptions about the stochastic growth process. However, the tail of the log-normal may fall off too quickly, and its PDF is not monotonic, but rather has a y-intercept of zero probability at the origin. The typical power law is the Pareto I, which has a tail that cannot model fall-off at large outcome sizes, and which does not extend down to zero but must be truncated at some positive minimum value. More recently, the Weibull distribution has been derived as the limiting distribution for Gibrat processes, by recognizing that (a) the increments of the growth process are not independent but correlated in magnitude, and (b) the increment magnitudes typically have monotonic PDFs.[11] The Weibull PDF can appear essentially log-log linear over orders of magnitude ranging from zero, while eventually falling off at unreasonably large outcome sizes. In the study of firms (business), scholars do not agree that the foundation and the outcome of Gibrat's law are empirically correct.[citation needed][12]
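The mechanism behind Gibrat's law and its log-normal limit can be sketched with a short simulation. The snippet below is a minimal illustration under assumed parameters (Gaussian log-shocks with σ = 0.05 over 200 periods), not an empirical model of firms: because each period multiplies size by a factor drawn independently of current size, log size is a sum of i.i.d. shocks and is therefore approximately normal.

```python
import math
import random
import statistics

def gibrat_growth(rng, size0=1.0, steps=200, sigma=0.05):
    """Grow one firm: each period multiplies size by a random factor
    whose distribution does not depend on the current size (Gibrat's law)."""
    size = size0
    for _ in range(steps):
        size *= math.exp(rng.gauss(0.0, sigma))  # proportionate shock
    return size

rng = random.Random(42)
sizes = [gibrat_growth(rng) for _ in range(10_000)]

# Log sizes are a sum of i.i.d. shocks, hence approximately normal,
# so the sizes themselves are approximately log-normal.
logs = [math.log(s) for s in sizes]
print(f"mean(log size) = {statistics.mean(logs):.3f}, "
      f"sd(log size) = {statistics.stdev(logs):.3f}")  # sd should be near 0.05*sqrt(200) ~ 0.71
```

Replacing the size-independent shock with one whose mean or variance depends on current size breaks the argument, which is exactly the deviation reported for city agglomerations above.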
https://en.wikipedia.org/wiki/Gibrat%27s_law
In linguistics, Heaps' law (also called Herdan's law) is an empirical law which describes the number of distinct words in a document (or set of documents) as a function of the document length (the so-called type–token relation). It can be formulated as

V_R(n) = K · n^β

where V_R is the number of distinct words in an instance text of size n, and K and β are free parameters determined empirically. With English text corpora, K is typically between 10 and 100, and β between 0.4 and 0.6. The law is frequently attributed to Harold Stanley Heaps, but was originally discovered by Gustav Herdan (1960).[1] Under mild assumptions, the Herdan–Heaps law is asymptotically equivalent to Zipf's law concerning the frequencies of individual words within a text.[2] This is a consequence of the fact that the type–token relation (in general) of a homogeneous text can be derived from the distribution of its types.[3] Empirically, Heaps' law is preserved even when the document is randomly shuffled,[4] meaning that it depends not on the ordering of words but only on their frequencies.[5] This is used as evidence for deriving Heaps' law from Zipf's law.[4] Heaps' law means that as more instance text is gathered, there will be diminishing returns in terms of discovering the full vocabulary from which the distinct terms are drawn. Deviations from Heaps' law, as typically observed in English text corpora, have been identified in corpora generated with large language models.[6] Heaps' law also applies to situations in which the "vocabulary" is just some set of distinct types which are attributes of some collection of objects. For example, the objects could be people, and the types could be the country of origin of the person.
If persons are selected randomly (that is, not selected based on country of origin), then Heaps' law says we will quickly have representatives from most countries (in proportion to their population), but it will become increasingly difficult to cover the entire set of countries by continuing this method of sampling. Heaps' law has also been observed in single-cell transcriptomes,[7] considering genes as the distinct objects in the "vocabulary".
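The type–token relation can be sketched directly. The snippet below is a minimal illustration under assumed parameters: it draws tokens from a hypothetical Zipf-like rank distribution (exponent 2, chosen so that the theoretical Heaps exponent β = 1/2 is easy to check) and estimates β from two points on the resulting vocabulary-growth curve.

```python
import math
import random

def heaps_curve(tokens):
    """Running count of distinct types V(n) after each of the first n tokens."""
    seen, curve = set(), []
    for tok in tokens:
        seen.add(tok)
        curve.append(len(seen))
    return curve

# Draw tokens from a Zipf-like rank distribution (illustrative parameters).
rng = random.Random(0)
vocab_size = 50_000
weights = [1.0 / r**2 for r in range(1, vocab_size + 1)]
tokens = rng.choices(range(vocab_size), weights=weights, k=20_000)
curve = heaps_curve(tokens)

# Two-point estimate of beta from V(n) ~ K * n**beta.
n1, n2 = 2_000, 20_000
beta = math.log(curve[n2 - 1] / curve[n1 - 1]) / math.log(n2 / n1)
print(f"V({n2}) = {curve[-1]}, beta estimate = {beta:.2f}")  # beta < 1: diminishing returns
```

Because β < 1, tenfold more text yields far less than tenfold more vocabulary, which is the diminishing-returns behavior described above.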
https://en.wikipedia.org/wiki/Heaps%27_law