In mathematics, and especially in category theory, a commutative diagram is a diagram such that all directed paths in the diagram with the same start and endpoints lead to the same result.[1] It is said that commutative diagrams play the role in category theory that equations play in algebra.[2]
A commutative diagram often consists of three parts:
In algebra texts, the type of morphism can be denoted with different arrow usages:
The meanings of different arrows are not entirely standardized: the arrows used for monomorphisms, epimorphisms, and isomorphisms are also used for injections, surjections, and bijections, as well as for the cofibrations, fibrations, and weak equivalences in a model category.
Commutativity makes sense for a polygon of any finite number of sides (including just 1 or 2), and a diagram is commutative if every polygonal subdiagram is commutative.
Note that a diagram may be non-commutative, i.e., the composition of different paths in the diagram may not give the same result.
In the left diagram, which expresses the first isomorphism theorem, commutativity of the triangle means that f=f~∘π{\displaystyle f={\tilde {f}}\circ \pi }. In the right diagram, commutativity of the square means h∘f=k∘g{\displaystyle h\circ f=k\circ g}.
In order for the diagram below to commute, three equalities must be satisfied:
Here, since the first equality follows from the last two, it suffices to show that (2) and (3) hold in order for the diagram to commute. However, since equality (3) does not generally follow from the other two, having only equalities (1) and (2) is generally not sufficient to show that the diagram commutes.
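As an illustrative sketch, commutativity of a square diagram can be checked computationally by comparing the two composite paths on sample elements. The maps below are arbitrary, hypothetical stand-ins for the morphisms of a square, chosen only so that the square happens to commute:

```python
# Hypothetical sketch: checking that a square diagram commutes by comparing
# the two composite paths h∘f and k∘g on sample elements of the domain.
def compose(*fs):
    """Right-to-left composition: compose(h, f)(x) == h(f(x))."""
    def composed(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return composed

# Example square: A --f--> B, A --g--> C, B --h--> D, C --k--> D
f = lambda n: n + 1        # A -> B
g = lambda n: 2 * n        # A -> C
h = lambda n: 2 * n        # B -> D
k = lambda n: n + 2        # C -> D

# h(f(n)) = 2(n+1) = 2n+2 and k(g(n)) = 2n+2, so this square commutes.
assert all(compose(h, f)(n) == compose(k, g)(n) for n in range(100))
```

Replacing, say, `k` with `lambda n: n + 3` breaks the equality, giving a non-commutative square.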
Diagram chasing (also called diagrammatic search) is a method of mathematical proof used especially in homological algebra, where one establishes a property of some morphism by tracing the elements of a commutative diagram. A proof by diagram chasing typically involves the formal use of the properties of the diagram, such as injective or surjective maps, or exact sequences.[5] A syllogism is constructed, for which the graphical display of the diagram is just a visual aid. One thus ends up "chasing" elements around the diagram, until the desired element or result is constructed or verified.
Examples of proofs by diagram chasing include those typically given for the five lemma, the snake lemma, the zig-zag lemma, and the nine lemma.
In higher category theory, one considers not only objects and arrows, but arrows between the arrows, arrows between arrows between arrows, and so on ad infinitum. For example, the category of small categories Cat is naturally a 2-category, with functors as its arrows and natural transformations as the arrows between functors. In this setting, commutative diagrams may include these higher arrows as well, which are often depicted in the following style: ⇒{\displaystyle \Rightarrow }. For example, the following (somewhat trivial) diagram depicts two categories C and D, together with two functors F, G : C → D and a natural transformation α : F ⇒ G:
There are two kinds of composition in a 2-category (called vertical composition and horizontal composition), and they may also be depicted via pasting diagrams (see 2-category#Definitions for examples).
A commutative diagram in a category C can be interpreted as a functor from an index category J to C; one calls the functor a diagram.
More formally, a commutative diagram is a visualization of a diagram indexed by a poset category. Such a diagram typically includes:
Conversely, given a commutative diagram, it defines a poset category, where:
However, not every diagram commutes (the notion of diagram strictly generalizes commutative diagram). As a simple example, the diagram of a single object with an endomorphism (f:X→X{\displaystyle f\colon X\to X}), or with two parallel arrows (∙⇉∙{\displaystyle \bullet \rightrightarrows \bullet }, that is, f,g:X→Y{\displaystyle f,g\colon X\to Y}, sometimes called the free quiver), as used in the definition of equalizer, need not commute. Further, diagrams may be messy or impossible to draw when the number of objects or morphisms is large (or even infinite).
|
https://en.wikipedia.org/wiki/Commutative_diagram
|
In neurophysiology, commutation refers to the question of whether the brain's neural circuits carry out commutative or non-commutative operations.
Physiologist Douglas B. Tweed and coworkers have considered whether certain neural circuits in the brain exhibit noncommutativity, and state:
In noncommutative algebra, order makes a difference to multiplication, so that a×b≠b×a{\displaystyle a\times b\neq b\times a}. This feature is necessary for computing rotary motion, because order makes a difference to the combined effect of two rotations. It has therefore been proposed that there are non-commutative operators in the brain circuits that deal with rotations, including motor system circuits that steer the eyes, head and limbs, and sensory system circuits that handle spatial information. This idea is controversial: studies of eye and head control have revealed behaviours that are consistent with non-commutativity in the brain, but none that clearly rules out all commutative models.
Tweed goes on to demonstrate non-commutative computation in the vestibulo-ocular reflex by showing that subjects rotated in darkness can hold their gaze points stable in space – correctly computing different final eye-position commands when put through the same two rotations in different orders, in a way that is unattainable by any commutative system.[1]
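The underlying mathematical point, that three-dimensional rotations do not commute, can be sketched with plain rotation matrices. This is a generic illustration, not Tweed's model; the quarter-turn angles and axes are arbitrary choices:

```python
# Sketch: 3D rotations applied in different orders give different results.
import math

def matmul(A, B):
    # Product of two 3x3 matrices stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_x(t):
    # Rotation by angle t about the x-axis.
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):
    # Rotation by angle t about the z-axis.
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

a = math.pi / 2
xz = matmul(rot_x(a), rot_z(a))   # rotate about z, then about x
zx = matmul(rot_z(a), rot_x(a))   # rotate about x, then about z
# The two orderings give visibly different matrices: rotations do not commute.
assert any(abs(xz[i][j] - zx[i][j]) > 1e-9 for i in range(3) for j in range(3))
```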
|
https://en.wikipedia.org/wiki/Commutative_(neurophysiology)
|
In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory.
The commutator of two elements, g and h, of a group G, is the element [g,h]=g−1h−1gh{\displaystyle [g,h]=g^{-1}h^{-1}gh}.
This element is equal to the group's identity if and only if g and h commute (that is, if and only if gh = hg).
The set of all commutators of a group is not in general closed under the group operation, but the subgroup of G generated by all commutators is closed and is called the derived group or the commutator subgroup of G. Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group.
The definition of the commutator above is used throughout this article, but many group theorists define the commutator as ghg−1h−1{\displaystyle ghg^{-1}h^{-1}}.
Using the first definition, this can be expressed as [g−1, h−1].
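A minimal sketch of the group commutator can be given with permutations composed as functions. The permutations chosen are arbitrary examples, and products are read as function composition (gh means "apply h, then g"):

```python
# Sketch: the group commutator [g, h] = g⁻¹h⁻¹gh for permutations of {0..n-1},
# represented as tuples mapping i -> p[i].
def compose(p, q):
    # Function composition: (p∘q)[i] = p[q[i]], i.e. apply q first.
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def commutator(g, h):
    # [g, h] = g⁻¹ h⁻¹ g h, with the product read as composition.
    return compose(compose(inverse(g), inverse(h)), compose(g, h))

identity = (0, 1, 2)
g = (1, 0, 2)   # transposition swapping 0 and 1
h = (0, 2, 1)   # transposition swapping 1 and 2
assert commutator(g, h) != identity   # these transpositions do not commute
assert commutator(g, g) == identity   # every element commutes with itself
```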
Commutator identities are an important tool in group theory.[3] The expression a^x denotes the conjugate of a by x, defined as x−1ax.
Identity (5) is also known as the Hall–Witt identity, after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section).
N.B., the above definition of the conjugate of a by x is used by some group theorists.[4] Many other group theorists define the conjugate of a by x as xax−1.[5] This is often written xa{\displaystyle {}^{x}a}. Similar identities hold for these conventions.
Many identities that are true modulo certain subgroups are also used. These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well:
If the derived subgroup is central, then (xy)n=xnyn[y,x]n(n−1)/2{\displaystyle (xy)^{n}=x^{n}y^{n}[y,x]^{n(n-1)/2}}.
Rings often do not support division. Thus, the commutator of two elements a and b of a ring (or any associative algebra) is defined differently, by [a,b]=ab−ba{\displaystyle [a,b]=ab-ba}.
The commutator is zero if and only if a and b commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra.
The anticommutator of two elements a and b of a ring or associative algebra is defined by {a,b}=ab+ba{\displaystyle \{a,b\}=ab+ba}.
Sometimes [a,b]+{\displaystyle [a,b]_{+}} is used to denote the anticommutator, while [a,b]−{\displaystyle [a,b]_{-}} is then used for the commutator.[6] The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras, and in the derivation of the Dirac equation in particle physics.
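Both ring operations can be sketched for 2×2 integer matrices; the example matrices are arbitrary:

```python
# Sketch: commutator [a, b] = ab − ba and anticommutator {a, b} = ab + ba
# for 2x2 integer matrices stored as nested lists.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def anticommutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(2)] for i in range(2)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
zero = [[0, 0], [0, 0]]
assert commutator(A, B) != zero    # A and B do not commute
assert commutator(A, A) == zero    # every element commutes with itself
assert anticommutator(A, B) == [[3, 2], [2, 3]]
```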
The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation.[7] In phase space, equivalent commutators of function star-products are called Moyal brackets and are completely isomorphic to the Hilbert space commutator structures mentioned.
The commutator has the following properties:
Relation (3) is called anticommutativity, while (4) is the Jacobi identity.
If A is a fixed element of a ring R, identity (1) can be interpreted as a Leibniz rule for the map adA:R→R{\displaystyle \operatorname {ad} _{A}:R\rightarrow R} given by adA(B)=[A,B]{\displaystyle \operatorname {ad} _{A}(B)=[A,B]}. In other words, the map adA defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity.
From identity (9), one finds that the commutator of integer powers of ring elements is:
Some of the above identities can be extended to the anticommutator using the above ± subscript notation.[8]For example:
Consider a ring or algebra in which the exponential eA=exp(A)=1+A+12!A2+⋯{\displaystyle e^{A}=\exp(A)=1+A+{\tfrac {1}{2!}}A^{2}+\cdots } can be meaningfully defined, such as a Banach algebra or a ring of formal power series.
In such a ring, Hadamard's lemma applied to nested commutators gives: eABe−A=B+[A,B]+12![A,[A,B]]+13![A,[A,[A,B]]]+⋯=eadA(B).{\textstyle e^{A}Be^{-A}\ =\ B+[A,B]+{\frac {1}{2!}}[A,[A,B]]+{\frac {1}{3!}}[A,[A,[A,B]]]+\cdots \ =\ e^{\operatorname {ad} _{A}}(B).} (For the last expression, see Adjoint derivation below.) This formula underlies the Baker–Campbell–Hausdorff expansion of log(exp(A) exp(B)).
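Hadamard's lemma can be checked exactly for a nilpotent A, since then both the exponential and the nested-commutator series terminate. The following is a minimal sketch with hypothetical 2×2 matrices, where A² = 0 so e^A = I + A exactly:

```python
# Exact numeric sketch of Hadamard's lemma e^A B e^{-A} = B + [A,B] + (1/2!)[A,[A,B]] + ...
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B, s=1):
    # Entrywise A + s*B.
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):
    return add(matmul(A, B), matmul(B, A), -1)

A = [[0, 1], [0, 0]]                 # A² = 0, hence e^A = I + A exactly
B = [[1, 2], [3, 4]]
I = [[1, 0], [0, 1]]
expA, expmA = add(I, A), add(I, A, -1)

lhs = matmul(matmul(expA, B), expmA)
C1 = comm(A, B)
C2 = comm(A, C1)
# The series terminates here: [A,[A,[A,B]]] = 0 when A² = 0.
rhs = add(add(B, C1), [[C2[i][j] / 2 for j in range(2)] for i in range(2)])
assert lhs == rhs
```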
A similar expansion expresses the group commutator of expressions eA{\displaystyle e^{A}} (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets), eAeBe−Ae−B=exp([A,B]+12![A+B,[A,B]]+13!(12[A,[B,[B,A]]]+[A+B,[A+B,[A,B]]])+⋯).{\displaystyle e^{A}e^{B}e^{-A}e^{-B}=\exp \!\left([A,B]+{\frac {1}{2!}}[A{+}B,[A,B]]+{\frac {1}{3!}}\left({\frac {1}{2}}[A,[B,[B,A]]]+[A{+}B,[A{+}B,[A,B]]]\right)+\cdots \right).}
When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as [a,b]=ab−(−1)|a||b|ba{\displaystyle [a,b]=ab-(-1)^{|a||b|}ba}.
Especially when one deals with multiple commutators in a ring R, another notation turns out to be useful. For an element x∈R{\displaystyle x\in R}, we define the adjoint mapping adx:R→R{\displaystyle \mathrm {ad} _{x}:R\to R} by adx(y)=[x,y]{\displaystyle \mathrm {ad} _{x}(y)=[x,y]}.
This mapping is a derivation on the ring R:
By theJacobi identity, it is also a derivation over the commutation operation:
Composing such mappings, we get for example adxady(z)=[x,[y,z]]{\displaystyle \operatorname {ad} _{x}\operatorname {ad} _{y}(z)=[x,[y,z]\,]} and adx2(z)=adx(adx(z))=[x,[x,z]].{\displaystyle \operatorname {ad} _{x}^{2}\!(z)\ =\ \operatorname {ad} _{x}\!(\operatorname {ad} _{x}\!(z))\ =\ [x,[x,z]\,].} We may consider ad{\displaystyle \mathrm {ad} } itself as a mapping, ad:R→End(R){\displaystyle \mathrm {ad} :R\to \mathrm {End} (R)}, where End(R){\displaystyle \mathrm {End} (R)} is the ring of mappings from R to itself with composition as the multiplication operation. Then ad{\displaystyle \mathrm {ad} } is a Lie algebra homomorphism, preserving the commutator:
By contrast, it is not always a ring homomorphism: usually adxy≠adxady{\displaystyle \operatorname {ad} _{xy}\,\neq \,\operatorname {ad} _{x}\operatorname {ad} _{y}}.
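The homomorphism property, ad of a commutator equals the commutator of the adjoint maps, is equivalent to the Jacobi identity and can be spot-checked on random integer matrices. This is a sketch, not a proof:

```python
# Sketch: checking that ad is a Lie algebra homomorphism,
# ad_{[x,y]}(z) = ad_x(ad_y(z)) − ad_y(ad_x(z)), on random 2x2 integer matrices.
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):
    return sub(matmul(A, B), matmul(B, A))

def ad(x):
    # The adjoint mapping ad_x : y -> [x, y].
    return lambda y: comm(x, y)

random.seed(0)
rand = lambda: [[random.randint(-5, 5) for _ in range(2)] for _ in range(2)]
for _ in range(20):
    x, y, z = rand(), rand(), rand()
    # Equivalent to the Jacobi identity [[x,y],z] = [x,[y,z]] − [y,[x,z]].
    assert ad(comm(x, y))(z) == sub(ad(x)(ad(y)(z)), ad(y)(ad(x)(z)))
```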
The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation:
Replacing x{\displaystyle x} by the differentiation operator ∂{\displaystyle \partial }, and y{\displaystyle y} by the multiplication operator mf:g↦fg{\displaystyle m_{f}:g\mapsto fg}, we get ad(∂)(mf)=m∂(f){\displaystyle \operatorname {ad} (\partial )(m_{f})=m_{\partial (f)}}, and applying both sides to a function g, the identity becomes the usual Leibniz rule for the nth derivative ∂n(fg){\displaystyle \partial ^{n}\!(fg)}.
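The identity ad(∂)(m_f) = m_{∂(f)} can be sketched concretely with polynomials stored as coefficient lists, a toy representation chosen for illustration; the polynomials f and g are arbitrary examples:

```python
# Sketch: [∂, m_f](g) = ∂(f·g) − f·∂(g) equals f'·g, i.e. ad(∂)(m_f) = m_{∂(f)},
# with a polynomial stored as a coefficient list [c0, c1, c2, ...] meaning
# c0 + c1·x + c2·x² + ...
def deriv(p):
    return [i * p[i] for i in range(1, len(p))] or [0]

def polymul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def trim(p):
    # Drop trailing zeros so equal polynomials compare equal as lists.
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

f = [1, 0, 3]                     # f(x) = 1 + 3x²
g = [2, 1]                        # g(x) = 2 + x

fg_prime = deriv(polymul(f, g))                    # ∂(f·g)
f_gprime = polymul(f, deriv(g)) + [0] * 10         # f·∂(g), zero-padded
lhs = trim([a - b for a, b in zip(fg_prime, f_gprime)])
rhs = trim(polymul(deriv(f), g))                   # f'·g
assert lhs == rhs
```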
|
https://en.wikipedia.org/wiki/Commutator
|
Particle statistics is a particular description of multiple particles in statistical mechanics. A key prerequisite concept is that of a statistical ensemble (an idealization comprising the state space of possible states of a system, each labeled with a probability) that emphasizes properties of a large system as a whole at the expense of knowledge about parameters of separate particles. When an ensemble describes a system of particles with similar properties, their number is called the particle number and usually denoted by N.
In classical mechanics, all particles (fundamental and composite particles, atoms, molecules, electrons, etc.) in the system are considered distinguishable. This means that individual particles in a system can be tracked. As a consequence, switching the positions of any pair of particles in the system leads to a different configuration of the system. Furthermore, there is no restriction on placing more than one particle in any given state accessible to the system. These characteristics of classical particles are called Maxwell–Boltzmann statistics.
The fundamental feature of quantum mechanics that distinguishes it from classical mechanics is that particles of a particular type are indistinguishable from one another. This means that in an ensemble of similar particles, interchanging any two particles does not lead to a new configuration of the system. In the language of quantum mechanics this means that the wave function of the system is invariant up to a phase with respect to the interchange of the constituent particles. In the case of a system consisting of particles of different kinds (for example, electrons and protons), the wave function of the system is invariant up to a phase separately for both assemblies of particles.
The applicable definition of a particle does not require it to be elementary or even "microscopic", but it requires that all its degrees of freedom (or internal states) that are relevant to the physical problem considered shall be known. All quantum particles, such as leptons and baryons, in the universe have three translational motion degrees of freedom (represented with the wave function) and one discrete degree of freedom, known as spin. Progressively more "complex" particles obtain progressively more internal freedoms (such as various quantum numbers in an atom), and when the number of internal states that "identical" particles in an ensemble can occupy dwarfs their count (the particle number), the effects of quantum statistics become negligible. That is why quantum statistics is useful when one considers, say, helium liquid or ammonia gas (whose molecules have a large but still manageable number of internal states), but is useless applied to systems constructed of macromolecules.
While this difference between classical and quantum descriptions of systems is fundamental to all of quantum statistics, quantum particles are divided into two further classes on the basis of the symmetry of the system. The spin–statistics theorem binds two particular kinds of combinatorial symmetry with two particular kinds of spin symmetry, namely bosons and fermions.
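The statistics above can be contrasted by counting the configurations of two particles over three single-particle states. This is a toy count; it additionally assumes the Pauli exclusion of double occupancy for fermions, which the text does not spell out:

```python
# Toy count: configurations of 2 particles over 3 single-particle states.
from itertools import product, combinations_with_replacement, combinations

states = range(3)

# Distinguishable particles (Maxwell–Boltzmann): ordered pairs of states.
maxwell_boltzmann = list(product(states, repeat=2))
# Indistinguishable particles (bosons): unordered pairs, repeats allowed.
bose_einstein = list(combinations_with_replacement(states, 2))
# Indistinguishable particles with no double occupancy (fermions).
fermi_dirac = list(combinations(states, 2))

assert len(maxwell_boltzmann) == 9
assert len(bose_einstein) == 6
assert len(fermi_dirac) == 3
```

Interchanging the two particles merges configuration pairs in the quantum counts, which is exactly the indistinguishability described above.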
|
https://en.wikipedia.org/wiki/Particle_statistics
|
Physics is the scientific study of matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force.[1] It is one of the most fundamental scientific disciplines.[2][3][4] A scientist who specializes in the field of physics is called a physicist.
Physics is one of the oldest academic disciplines.[5] Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century, these natural sciences branched into separate research endeavors. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences[2] and suggest new avenues of research in these and other academic disciplines such as mathematics and philosophy.
Advances in physics often enable new technologies. For example, advances in the understanding of electromagnetism, solid-state physics, and nuclear physics led directly to the development of technologies that have transformed modern society, such as television, computers, domestic appliances, and nuclear weapons;[2] advances in thermodynamics led to the development of industrialization; and advances in mechanics inspired the development of calculus.
The word physics comes from the Latin physica ('study of nature'), which itself is a borrowing of the Greek φυσική (phusikḗ 'natural science'), a term derived from φύσις (phúsis 'origin, nature, property').[6][7][8]
Astronomy is one of the oldest natural sciences. Early civilizations dating before 3000 BCE, such as the Sumerians, ancient Egyptians, and the Indus Valley Civilisation, had a predictive knowledge and a basic awareness of the motions of the Sun, Moon, and stars. The stars and planets, believed to represent gods, were often worshipped. While the explanations for the observed positions of the stars were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy, as the stars were found to traverse great circles across the sky,[5] which, however, could not explain the positions of the planets.
According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy.[9] Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies,[10] while Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey; later Greek astronomers provided names, which are still used today, for most constellations visible from the Northern Hemisphere.[11]
Natural philosophy has its origins in Greece during the Archaic period (650 BCE – 480 BCE), when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause.[12] They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment;[13] for example, atomism was found to be correct approximately 2000 years after it was proposed by Leucippus and his pupil Democritus.[14]
During the classical period in Greece (6th, 5th and 4th centuries BCE) and in Hellenistic times, natural philosophy developed along many lines of inquiry. Aristotle (Greek: Ἀριστοτέλης, Aristotélēs) (384–322 BCE), a student of Plato, wrote on many subjects, including a substantial treatise on physics, in the 4th century BCE. Aristotelian physics was influential for about two millennia. His approach mixed some limited observation with logical deductive arguments, but did not rely on experimental verification of deduced statements. Aristotle's foundational work in physics, though very imperfect, formed a framework against which later thinkers further developed the field. His approach is entirely superseded today.
He explained ideas such as motion (and gravity) with the theory of four elements.
Aristotle believed that each of the four classical elements (air, fire, water, earth) had its own natural place.[15] Because of their differing densities, each element will revert to its own specific place in the atmosphere.[16] So, because of their weights, fire would be at the top, air underneath fire, then water, then lastly earth. He also stated that when a small amount of one element enters the natural place of another, the less abundant element will automatically go towards its own natural place. For example, if there is a fire on the ground, the flames go up into the air in an attempt to return to their natural place. His laws of motion included: that heavier objects fall faster, the speed being proportional to the weight; and that the speed of a falling object depends inversely on the density of the medium through which it falls (e.g. the density of air).[17] He also stated that, when it comes to violent motion (motion of an object when a force is applied to it by a second object), the object moves only as fast or as far as the measure of force applied to it.[17] The problem of motion and its causes was studied carefully, leading to the philosophical notion of a "prime mover" as the ultimate source of all motion in the world (Book 8 of his treatise Physics).
The Western Roman Empire fell to invaders and internal decay in the fifth century, resulting in a decline in intellectual pursuits in western Europe. By contrast, the Eastern Roman Empire (usually known as the Byzantine Empire) resisted the attacks from invaders and continued to advance various fields of learning, including physics.[19] In the sixth century, John Philoponus challenged the dominant Aristotelian approach to science, although much of his work was focused on Christian theology.[20]
In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works, which were copied in the Archimedes Palimpsest. Islamic scholarship inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method.
The most notable innovations under Islamic scholarship were in the field of optics and vision,[21] which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics (also known as Kitāb al-Manāẓir), written by Ibn al-Haytham, in which he presented an alternative to the ancient Greek idea about vision.[22] He discussed his experiments with the camera obscura, showing that light travels in a straight line; he encouraged readers to reproduce his experiments, making him one of the originators of the scientific method.[23][24]
Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics.[25]
Major developments in this period include the replacement of the geocentric model of the Solar System with the heliocentric Copernican model, the laws governing the motion of planetary bodies (determined by Johannes Kepler between 1609 and 1619), Galileo's pioneering work on telescopes and observational astronomy in the 16th and 17th centuries, and Isaac Newton's discovery and unification of the laws of motion and universal gravitation (which would come to bear his name).[26] Newton, and separately Gottfried Wilhelm Leibniz, developed calculus,[27] the mathematical study of continuous change, and Newton applied it to solve physical problems.[28]
The discovery of laws in thermodynamics, chemistry, and electromagnetics resulted from research efforts during the Industrial Revolution as energy needs increased.[30] By the end of the 19th century, theories of thermodynamics, mechanics, and electromagnetics matched a wide variety of observations. Taken together, these theories became the basis for what would later be called classical physics.[31]: 2
A few experimental results remained inexplicable. Classical electromagnetism presumed a medium, a luminiferous aether, to support the propagation of waves, but this medium could not be detected. The intensity of light from hot glowing blackbody objects did not match the predictions of thermodynamics and electromagnetism. The character of electron emission from illuminated metals differed from predictions. These failures, seemingly insignificant in the big picture, would upset the physics world in the first two decades of the 20th century.[31]
Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity. Both of these theories came about due to inaccuracies in classical mechanics in certain situations. Classical mechanics predicted that the speed of light depends on the motion of the observer, which could not be resolved with the constant speed predicted by Maxwell's equations of electromagnetism. This discrepancy was corrected by Einstein's theory of special relativity, which replaced classical mechanics for fast-moving bodies and allowed for a constant speed of light.[35] Black-body radiation provided another problem for classical physics, which was corrected when Planck proposed that the excitation of material oscillators is possible only in discrete steps proportional to their frequency. This, along with the photoelectric effect and a complete theory predicting discrete energy levels of electron orbitals, led to the theory of quantum mechanics improving on classical physics at very small scales.[36]
Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac.[36] From this early work, and work in related fields, the Standard Model of particle physics was derived.[37] Following the discovery of a particle with properties consistent with the Higgs boson at CERN in 2012,[38] all fundamental particles predicted by the standard model, and no others, appear to exist; however, physics beyond the Standard Model, with theories such as supersymmetry, is an active area of research.[39] Areas of mathematics in general are important to this field, such as the study of probabilities and groups.
Physics deals with a wide variety of systems, although certain theories are used by all physicists. Each of these theories was experimentally tested numerous times and found to be an adequate approximation of nature.
These central theories are important tools for research into more specialized topics, and any physicist, regardless of their specialization, is expected to be literate in them. These include classical mechanics, quantum mechanics, thermodynamics and statistical mechanics, electromagnetism, and special relativity.
In the first decades of the 20th century physics was revolutionized by the discoveries of quantum mechanics and relativity. The changes were so fundamental that these new concepts became the foundation of "modern physics", with other topics becoming "classical physics". The majority of applications of physics are essentially classical.[40]: xxxi The laws of classical physics accurately describe systems whose important length scales are greater than the atomic scale and whose motions are much slower than the speed of light.[40]: xxxii Outside of this domain, observations do not match predictions provided by classical mechanics.[31]: 6
Classical physics includes the traditional branches and topics that were recognized and well-developed before the beginning of the 20th century: classical mechanics, thermodynamics, and electromagnetism.[31]: 2 Classical mechanics is concerned with bodies acted on by forces and bodies in motion and may be divided into statics (study of the forces on a body or bodies not subject to an acceleration), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics (known together as continuum mechanics), the latter including such branches as hydrostatics, hydrodynamics and pneumatics. Acoustics is the study of how sound is produced, controlled, transmitted and received.[41] Important modern branches of acoustics include ultrasonics, the study of sound waves of very high frequency beyond the range of human hearing; bioacoustics, the physics of animal calls and hearing;[42] and electroacoustics, the manipulation of audible sound waves using electronics.[43]
Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion, and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy. Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th century; an electric current gives rise to a magnetic field, and a changing magnetic field induces an electric current. Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest.
The discovery of relativity and of quantum mechanics in the first decades of the 20th century transformed the conceptual basis of physics without reducing the practical value of most of the physical theories developed up to that time. Consequently, the topics of physics have come to be divided into "classical physics" and "modern physics", with the latter category including effects related to quantum mechanics and relativity.[31]: 2 Classical physics is generally concerned with matter and energy on the normal scale of observation, while much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on a very large or very small scale. For example, atomic and nuclear physics study matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale since it is concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in particle accelerators. On this scale, ordinary, commonsensical notions of space, time, matter, and energy are no longer valid.[44]
The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. Classical mechanics approximates nature as continuous, while quantum theory is concerned with the discrete nature of many phenomena at the atomic and subatomic level and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with motion in the absence of gravitational fields and the general theory of relativity with motion and its connection with gravitation. Both quantum theory and the theory of relativity find applications in many areas of modern physics.[45]
Fundamental concepts in modern physics include:
Physicists use the scientific method to test the validity of a physical theory. By using a methodical approach to compare the implications of a theory with the conclusions drawn from its related experiments and observations, physicists are better able to test the validity of a theory in a logical, unbiased, and repeatable way. To that end, experiments are performed and observations are made in order to determine the validity or invalidity of a theory.[46]
A scientific law is a concise verbal or mathematical statement of a relation that expresses a fundamental principle of some theory, such as Newton's law of universal gravitation.[47]
Theorists seek to develop mathematical models that both agree with existing experiments and successfully predict future experimental results, while experimentalists devise and perform experiments to test theoretical predictions and explore new phenomena. Although theory and experiment are developed separately, they strongly affect and depend upon each other. Progress in physics frequently comes about when experimental results defy explanation by existing theories, prompting intense focus on applicable modelling, and when new theories generate experimentally testable predictions, which inspire the development of new experiments (and often related equipment).[48]
Physicistswho work at the interplay of theory and experiment are calledphenomenologists, who study complex phenomena observed in experiment and work to relate them to afundamental theory.[49]
Theoretical physics has historically taken inspiration from philosophy; electromagnetism was unified this way.[a]Beyond the known universe, the field of theoretical physics also deals with hypothetical issues,[b]such asparallel universes, amultiverse, andhigher dimensions. Theorists invoke these ideas in hopes of solving particular problems with existing theories; they then explore the consequences of these ideas and work toward making testable predictions.
Experimental physics expands, and is expanded by, engineering and technology. Experimental physicists who are involved inbasic researchdesign and perform experiments with equipment such as particle accelerators andlasers, whereas those involved inapplied researchoften work in industry, developing technologies such asmagnetic resonance imaging(MRI) andtransistors.Feynmanhas noted that experimentalists may seek areas that have not been explored well by theorists.[50]
Physics covers a wide range ofphenomena, fromelementary particles(such asquarks,neutrinos, andelectrons) to the largestsuperclustersof galaxies. Included in these phenomena are the most basic objects composing all other things. Therefore, physics is sometimes called the "fundamental science".[51]Physics aims to describe the various phenomena that occur in nature in terms of simpler phenomena. Thus, physics aims to both connect the things observable to humans to root causes, and then connect these causes together.
For example, theancient Chineseobserved that certain rocks (lodestoneandmagnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, theancient Greeksknew of other objects such asamber, that when rubbed with fur would cause a similar invisible attraction between the two.[52]This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force—electromagnetism. This process of "unifying" forces continues today, and electromagnetism and theweak nuclear forceare now considered to be two aspects of theelectroweak interaction. Physics hopes to find an ultimate reason (theory of everything) for why nature is as it is (see sectionCurrent researchbelow for more information).[53]
Research in physics is continually progressing on a large number of fronts.
In condensed matter physics, an important unsolved theoretical problem is that of high-temperature superconductivity.[54] Many condensed matter experiments aim to fabricate workable spintronics and quantum computers.[55][56]
In particle physics, the first pieces of experimental evidence for physics beyond the Standard Model have begun to appear. Foremost among these are indications that neutrinos have non-zero mass. These experimental results appear to have solved the long-standing solar neutrino problem, and the physics of massive neutrinos remains an area of active theoretical and experimental research. The Large Hadron Collider has already found the Higgs boson, but future research aims to prove or disprove supersymmetry, which extends the Standard Model of particle physics. Research on the nature of the major mysteries of dark matter and dark energy is also currently ongoing.[57]
Although much progress has been made in high-energy, quantum, and astronomical physics, many everyday phenomena involving complexity,[58] chaos,[59] or turbulence[60] are still poorly understood. Complex problems that seem like they could be solved by a clever application of dynamics and mechanics remain unsolved; examples include the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface tension catastrophes, and self-sorting in shaken heterogeneous collections.[c][61]
These complex phenomena have received growing attention since the 1970s for several reasons, including the availability of modern mathematical methods and computers, which enabled complex systems to be modeled in new ways. Complex physics has become part of increasingly interdisciplinary research, as exemplified by the study of turbulence in aerodynamics and the observation of pattern formation in biological systems. In the 1932 Annual Review of Fluid Mechanics, Horace Lamb said:[62]
I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic.
The major fields of physics, along with their subfields and the theories and concepts they employ, are shown in the following table.
Since the 20th century, the individual fields of physics have become increasingly specialised, and today most physicists work in a single field for their entire careers. "Universalists" such as Einstein (1879–1955) and Lev Landau (1908–1968), who worked in multiple fields of physics, are now very rare.[d]
Contemporary research in physics can be broadly divided into nuclear and particle physics; condensed matter physics; atomic, molecular, and optical physics; astrophysics; and applied physics. Some physics departments also support physics education research and physics outreach.[63]
Particle physics is the study of the elementary constituents of matter and energy and the interactions between them.[64] In addition, particle physicists design and develop the high-energy accelerators,[65] detectors,[66] and computer programs[67] necessary for this research. The field is also called "high-energy physics" because many elementary particles do not occur naturally but are created only during high-energy collisions of other particles.[68]
Currently, the interactions of elementary particles and fields are described by the Standard Model.[69] The model accounts for the 12 known particles of matter (quarks and leptons) that interact via the strong, weak, and electromagnetic fundamental forces.[69] Dynamics are described in terms of matter particles exchanging gauge bosons (gluons, W and Z bosons, and photons, respectively).[70] The Standard Model also predicts a particle known as the Higgs boson.[69] In July 2012 CERN, the European laboratory for particle physics, announced the detection of a particle consistent with the Higgs boson,[71] an integral part of the Higgs mechanism.
Nuclear physics is the field of physics that studies the constituents and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power generation and nuclear weapons technology, but the research has provided application in many fields, including those in nuclear medicine and magnetic resonance imaging, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology.
Atomic, molecular, and optical physics (AMO) is the study of matter–matter and light–matter interactions on the scale of single atoms and molecules. The three areas are grouped together because of their interrelationships, the similarity of methods used, and the commonality of their relevant energy scales. All three areas include both classical, semi-classical, and quantum treatments; they can treat their subject from a microscopic view (in contrast to a macroscopic view).
Atomic physics studies the electron shells of atoms. Current research focuses on activities in quantum control, cooling and trapping of atoms and ions,[72][73][74] low-temperature collision dynamics, and the effects of electron correlation on structure and dynamics. Atomic physics is influenced by the nucleus (see hyperfine splitting), but intra-nuclear phenomena such as fission and fusion are considered part of nuclear physics.
Molecular physics focuses on multi-atomic structures and their internal and external interactions with matter and light. Optical physics is distinct from optics in that it tends to focus not on the control of classical light fields by macroscopic objects but on the fundamental properties of optical fields and their interactions with matter in the microscopic realm.
Condensed matter physics is the field of physics that deals with the macroscopic physical properties of matter.[75][76] In particular, it is concerned with the "condensed" phases that appear whenever the number of particles in a system is extremely large and the interactions between them are strong.[55]
The most familiar examples of condensed phases are solids and liquids, which arise from the bonding by way of the electromagnetic force between atoms.[77] More exotic condensed phases include the superfluid[78] and the Bose–Einstein condensate[79] found in certain atomic systems at very low temperature, the superconducting phase exhibited by conduction electrons in certain materials,[80] and the ferromagnetic and antiferromagnetic phases of spins on atomic lattices.[81]
Condensed matter physics is the largest field of contemporary physics. Historically, condensed matter physics grew out of solid-state physics, which is now considered one of its main subfields.[82] The term condensed matter physics was apparently coined by Philip Anderson when he renamed his research group (previously solid-state theory) in 1967.[83] In 1978, the Division of Solid State Physics of the American Physical Society was renamed the Division of Condensed Matter Physics.[82] Condensed matter physics has a large overlap with chemistry, materials science, nanotechnology, and engineering.[55]
Astrophysics and astronomy are the application of the theories and methods of physics to the study of stellar structure, stellar evolution, the origin of the Solar System, and related problems of cosmology. Because astrophysics is a broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.[84]
The discovery by Karl Jansky in 1931 that radio signals were emitted by celestial bodies initiated the science of radio astronomy. Most recently, the frontiers of astronomy have been expanded by space exploration. Perturbations and interference from the Earth's atmosphere make space-based observations necessary for infrared, ultraviolet, gamma-ray, and X-ray astronomy.
Physical cosmology is the study of the formation and evolution of the universe on its largest scales. Albert Einstein's theory of relativity plays a central role in all modern cosmological theories. In the early 20th century, Hubble's discovery that the universe is expanding, as shown by the Hubble diagram, prompted rival explanations known as the steady state universe and the Big Bang.
The Big Bang was confirmed by the success of Big Bang nucleosynthesis and the discovery of the cosmic microwave background in 1964. The Big Bang model rests on two theoretical pillars: Albert Einstein's general relativity and the cosmological principle. Cosmologists have recently established the ΛCDM model of the evolution of the universe, which includes cosmic inflation, dark energy, and dark matter.
A physicist is a scientist who specializes in the field of physics, which encompasses the interactions of matter and energy at all length and time scales in the physical universe.[85][86] Physicists generally are interested in the root or ultimate causes of phenomena, and usually frame their understanding in mathematical terms. They work across a wide range of research fields, spanning all length scales: from sub-atomic and particle physics, through biological physics, to cosmological length scales encompassing the universe as a whole. The field generally includes two types of physicists: experimental physicists, who specialize in the observation of natural phenomena and the development and analysis of experiments, and theoretical physicists, who specialize in mathematical modeling of physical systems to rationalize, explain, and predict natural phenomena.[85]
Physics, as with the rest of science, relies on the philosophy of science and its "scientific method" to advance knowledge of the physical world.[90] The scientific method employs a priori and a posteriori reasoning as well as the use of Bayesian inference to measure the validity of a given theory.[91] Study of the philosophical issues surrounding physics, the philosophy of physics, involves issues such as the nature of space and time, determinism, and metaphysical outlooks such as empiricism, naturalism, and realism.[92]
Many physicists have written about the philosophical implications of their work, for instance Laplace, who championed causal determinism,[93] and Erwin Schrödinger, who wrote on quantum mechanics.[94][95] The mathematical physicist Roger Penrose has been called a Platonist by Stephen Hawking,[96] a view Penrose discusses in his book, The Road to Reality.[97] Hawking referred to himself as an "unashamed reductionist" and took issue with Penrose's views.[98]
Mathematics provides a compact and exact language used to describe the order in nature. This was noted and advocated by Pythagoras,[99] Plato,[100] Galileo,[101] and Newton. Some theorists, like Hilary Putnam and Penelope Maddy, hold that logical truths, and therefore mathematical reasoning, depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world, which may explain the peculiar relation between these fields.
Physics uses mathematics[102] to organise and formulate experimental results. From those results, precise or estimated quantitative solutions are obtained, from which new predictions can be made and experimentally confirmed or negated. The results from physics experiments are numerical data, with their units of measure and estimates of the errors in the measurements. Technologies based on mathematics, like computation, have made computational physics an active area of research.
Ontology is a prerequisite for physics, but not for mathematics. This means physics is ultimately concerned with descriptions of the real world, while mathematics is concerned with abstract patterns, even beyond the real world. Thus physics statements are synthetic, while mathematical statements are analytic. Mathematics contains hypotheses, while physics contains theories. Mathematical statements have to be only logically true, while predictions of physics statements must match observed and experimental data.
The distinction is clear-cut, but not always obvious. For example, mathematical physics is the application of mathematics in physics. Its methods are mathematical, but its subject is physical.[103] The problems in this field start with a "mathematical model of a physical situation" (system) and a "mathematical description of a physical law" that will be applied to that system. Every mathematical statement used for solving has a hard-to-find physical meaning. The final mathematical solution has an easier-to-find meaning, because it is what the solver is looking for.
Physics is a branch of fundamental science (also called basic science). Physics is also called "the fundamental science" because all branches of natural science, including chemistry, astronomy, geology, and biology, are constrained by laws of physics.[51] Similarly, chemistry is often called the central science because of its role in linking the physical sciences. For example, chemistry studies properties, structures, and reactions of matter (chemistry's focus on the molecular and atomic scale distinguishes it from physics). Structures are formed because particles exert electrical forces on each other, properties include physical characteristics of given substances, and reactions are bound by laws of physics, like conservation of energy, mass, and charge. Fundamental physics seeks to better explain and understand phenomena in all spheres, without a specific practical application as a goal, other than the deeper insight into the phenomena themselves.
Applied physics is a general term for physics research and development that is intended for a particular use. An applied physics curriculum usually contains a few classes in an applied discipline, like geology or electrical engineering. It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving a problem.
The approach is similar to that of applied mathematics. Applied physicists use physics in scientific research. For instance, people working on accelerator physics might seek to build better particle detectors for research in theoretical physics.
Physics is used heavily in engineering. For example, statics, a subfield of mechanics, is used in the building of bridges and other static structures. The understanding and use of acoustics results in sound control and better concert halls; similarly, the use of optics creates better optical devices. An understanding of physics makes for more realistic flight simulators, video games, and movies, and is often critical in forensic investigations.
With the standard consensus that the laws of physics are universal and do not change with time, physics can be used to study things that would ordinarily be mired in uncertainty. For example, in the study of the origin of the Earth, a physicist can reasonably model Earth's mass, temperature, and rate of rotation as functions of time, allowing extrapolation forward or backward in time and so predicting future or prior events. It also allows for simulations in engineering that speed up the development of a new technology.
There is also considerable interdisciplinarity, so many other important fields are influenced by physics (e.g., the fields of econophysics and sociophysics).
Source: https://en.wikipedia.org/wiki/Physics
In mathematics, the quasi-commutative property is an extension or generalization of the general commutative property. This property is used in specific applications with various definitions.
Two matrices p{\displaystyle p} and q{\displaystyle q} are said to have the commutative property whenever pq=qp{\displaystyle pq=qp}
The quasi-commutative property in matrices is defined[1] as follows. Two non-commutable matrices x{\displaystyle x} and y{\displaystyle y}, with xy−yx=z{\displaystyle xy-yx=z},
satisfy the quasi-commutative property whenever z{\displaystyle z} satisfies the following properties: xz=zx, yz=zy{\displaystyle {\begin{aligned}xz&=zx\\yz&=zy\end{aligned}}}
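As a concrete sketch (my own illustration, not from the article), the quasi-commutative condition can be checked numerically. The 3×3 upper-triangular "Heisenberg-style" matrices below are an assumed example: x and y do not commute, yet their commutator z = xy − yx commutes with both of them.

```python
# Minimal sketch of matrix quasi-commutativity with pure-Python 3x3 matrices.
# The specific matrices are illustrative assumptions, not from the article.

def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matsub(A, B):
    """Entrywise difference of two matrices."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Heisenberg-style strictly upper-triangular matrices: x = E12, y = E23
x = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
y = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]

z = matsub(matmul(x, y), matmul(y, x))   # z = xy - yx = E13, nonzero

# x and y do not commute ...
assert matmul(x, y) != matmul(y, x)
# ... but z commutes with both, so the pair is quasi-commutative
assert matmul(x, z) == matmul(z, x)
assert matmul(y, z) == matmul(z, y)
```

This mirrors the Heisenberg example below, where the commutator of the (infinite) momentum and position matrices is a multiple of the identity and therefore commutes with everything.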
An example is found in the matrix mechanics introduced by Heisenberg as a version of quantum mechanics. In this mechanics, p and q are infinite matrices corresponding respectively to the momentum and position variables of a particle.[1] These matrices are written out at Matrix mechanics § Harmonic oscillator, and z = iħ times the infinite unit matrix, where ħ is the reduced Planck constant.
A function f:X×Y→X{\displaystyle f:X\times Y\to X} is said to be quasi-commutative[2] if f(f(x,y1),y2)=f(f(x,y2),y1) for all x∈X, y1,y2∈Y.{\displaystyle f\left(f\left(x,y_{1}\right),y_{2}\right)=f\left(f\left(x,y_{2}\right),y_{1}\right)\qquad {\text{ for all }}x\in X,\;y_{1},y_{2}\in Y.}
If f(x,y){\displaystyle f(x,y)} is instead denoted by x∗y{\displaystyle x\ast y}, then this can be rewritten as: (x∗y)∗y2=(x∗y2)∗y for all x∈X, y,y2∈Y.{\displaystyle (x\ast y)\ast y_{2}=\left(x\ast y_{2}\right)\ast y\qquad {\text{ for all }}x\in X,\;y,y_{2}\in Y.}
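A small hypothetical illustration (my own, not from the article): taking f(x, y) = x ** y on positive integers gives a quasi-commutative function that is not commutative, since (x^(y1))^(y2) = x^(y1·y2) = (x^(y2))^(y1).

```python
# Sketch: exponentiation is quasi-commutative in the sense above,
# even though x ** y != y ** x in general. Ranges are illustrative.

def f(x, y):
    return x ** y

for x in range(1, 6):
    for y1 in range(1, 6):
        for y2 in range(1, 6):
            # f(f(x, y1), y2) == x ** (y1 * y2) == f(f(x, y2), y1)
            assert f(f(x, y1), y2) == f(f(x, y2), y1)
```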
Source: https://en.wikipedia.org/wiki/Quasi-commutative_property
In computer science, a trace is an equivalence class of strings, wherein certain letters in the string are allowed to commute, but others are not. Traces generalize the concept of strings by relaxing the requirement for all the letters to have a definite order, instead allowing for indefinite orderings in which certain reshufflings could take place. In an opposite way, traces generalize the concept of sets with multiplicities by allowing for specifying some incomplete ordering of the letters rather than requiring complete equivalence under all reorderings. The trace monoid or free partially commutative monoid is a monoid of traces.
Traces were introduced by Pierre Cartier and Dominique Foata in 1969 to give a combinatorial proof of MacMahon's master theorem. Traces are used in theories of concurrent computation, where commuting letters stand for portions of a job that can execute independently of one another, while non-commuting letters stand for locks, synchronization points, or thread joins.[1]
The trace monoid is constructed from the free monoid (the set of all strings of finite length) as follows. First, sets of commuting letters are given by an independency relation. These induce an equivalence relation of equivalent strings; the elements of the equivalence classes are the traces. The equivalence relation then partitions the elements of the free monoid into a set of equivalence classes; the result is still a monoid, a quotient monoid, now called the trace monoid. The trace monoid is universal, in that all dependency-homomorphic (see below) monoids are in fact isomorphic.
Trace monoids are commonly used to model concurrent computation, forming the foundation for process calculi. They are the object of study in trace theory. The utility of trace monoids comes from the fact that they are isomorphic to the monoid of dependency graphs, thus allowing algebraic techniques to be applied to graphs, and vice versa. They are also isomorphic to history monoids, which model the history of computation of individual processes in the context of all scheduled processes on one or more computers.
Let Σ∗{\displaystyle \Sigma ^{*}} denote the free monoid on a set of generators Σ{\displaystyle \Sigma }, that is, the set of all strings written in the alphabet Σ{\displaystyle \Sigma }. The asterisk is a standard notation for the Kleene star. An independency relation I{\displaystyle I} on the alphabet Σ{\displaystyle \Sigma } then induces a symmetric binary relation ∼{\displaystyle \sim } on the set of strings Σ∗{\displaystyle \Sigma ^{*}}: two strings u,v{\displaystyle u,v} are related, u∼v,{\displaystyle u\sim v,} if and only if there exist x,y∈Σ∗{\displaystyle x,y\in \Sigma ^{*}} and a pair (a,b)∈I{\displaystyle (a,b)\in I} such that u=xaby{\displaystyle u=xaby} and v=xbay{\displaystyle v=xbay}. Here, u,v,x{\displaystyle u,v,x} and y{\displaystyle y} are understood to be strings (elements of Σ∗{\displaystyle \Sigma ^{*}}), while a{\displaystyle a} and b{\displaystyle b} are letters (elements of Σ{\displaystyle \Sigma }).
The trace is defined as the reflexive transitive closure of ∼{\displaystyle \sim }. The trace is thus an equivalence relation on Σ∗{\displaystyle \Sigma ^{*}} and is denoted by ≡D{\displaystyle \equiv _{D}}, where D{\displaystyle D} is the dependency relation corresponding to I:{\displaystyle I.} D=(Σ×Σ)∖I{\displaystyle D=(\Sigma \times \Sigma )\setminus I} and I=(Σ×Σ)∖D.{\displaystyle I=(\Sigma \times \Sigma )\setminus D.} Different independencies or dependencies will give different equivalence relations.
The transitive closure implies that u≡Dv{\displaystyle u\equiv _{D}v} if and only if there exists a sequence of strings (w0,w1,⋯,wn){\displaystyle (w_{0},w_{1},\cdots ,w_{n})} such that u∼w0,{\displaystyle u\sim w_{0},} v∼wn,{\displaystyle v\sim w_{n},} and wi∼wi+1{\displaystyle w_{i}\sim w_{i+1}} for all 0≤i<n{\displaystyle 0\leq i<n}. The trace is stable under the monoid operation on Σ∗{\displaystyle \Sigma ^{*}}, i.e., concatenation, and ≡D{\displaystyle \equiv _{D}} is therefore a congruence relation on Σ∗.{\displaystyle \Sigma ^{*}.}
The trace monoid, commonly denoted as M(D){\displaystyle \mathbb {M} (D)}, is defined as the quotient monoid
The homomorphism
is commonly referred to as the natural homomorphism or canonical homomorphism. That the terms natural or canonical are deserved follows from the fact that this morphism embodies a universal property, as discussed in a later section.
One will also find the trace monoid denoted as M(Σ,I){\displaystyle M(\Sigma ,I)}, where I{\displaystyle I} is the independency relation. One can also find the commutation relation used instead of the independency relation; it differs from the independency relation by also including all the diagonal elements of Σ×Σ{\textstyle \Sigma \times \Sigma }, since letters "commute with themselves" in a free monoid of strings of those letters.
Consider the alphabet Σ={a,b,c}{\displaystyle \Sigma =\{a,b,c\}}. A possible dependency relation is
The corresponding independency is
Therefore, the letters b,c{\displaystyle b,c} commute. Thus, for example, a trace equivalence class for the string abababbca{\displaystyle abababbca} would be
and the equivalence class [abababbca]D{\displaystyle [abababbca]_{D}} would be an element of the trace monoid.
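The equivalence class in this example can be enumerated mechanically. The sketch below (my own illustration, with a hypothetical helper name) applies the relation ∼ by swapping adjacent independent letters and then takes the reflexive transitive closure with a breadth-first search:

```python
from collections import deque

def trace_class(word, independency):
    """Enumerate the trace equivalence class of `word`: repeatedly swap
    adjacent independent letters (the relation ~), closing transitively
    via breadth-first search over the reachable strings."""
    seen = {word}
    queue = deque([word])
    while queue:
        w = queue.popleft()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in independency:
                swapped = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    queue.append(swapped)
    return seen

# The example dependency above: only b and c are independent (commute)
I = {("b", "c"), ("c", "b")}
cls = trace_class("abababbca", I)
print(sorted(cls))  # → ['abababbca', 'abababcba', 'ababacbba']
```

The letter c can drift left past the adjacent b's but cannot cross any a, so the class contains exactly three strings.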
The cancellation property states that equivalence is maintained under right cancellation. That is, if w≡v{\displaystyle w\equiv v}, then (w÷a)≡(v÷a){\displaystyle (w\div a)\equiv (v\div a)}. Here, the notation w÷a{\displaystyle w\div a} denotes right cancellation, the removal of the first occurrence of the letter a from the string w, starting from the right-hand side. Equivalence is also maintained by left-cancellation. Several corollaries follow:
A strong form of Levi's lemma holds for traces. Specifically, if uv≡xy{\displaystyle uv\equiv xy} for strings u, v, x, y, then there exist strings z1,z2,z3{\displaystyle z_{1},z_{2},z_{3}} and z4{\displaystyle z_{4}} such that (w2,w3)∈ID{\displaystyle (w_{2},w_{3})\in I_{D}} for all letters w2∈Σ{\displaystyle w_{2}\in \Sigma } and w3∈Σ{\displaystyle w_{3}\in \Sigma } such that w2{\displaystyle w_{2}} occurs in z2{\displaystyle z_{2}} and w3{\displaystyle w_{3}} occurs in z3{\displaystyle z_{3}}, and
A dependency morphism (with respect to a dependency D) is a morphism
to some monoid M, such that the "usual" trace properties hold, namely:
Dependency morphisms are universal, in the sense that for a given, fixed dependency D, if ψ:Σ∗→M{\displaystyle \psi :\Sigma ^{*}\to M} is a dependency morphism to a monoid M, then M is isomorphic to the trace monoid M(D){\displaystyle \mathbb {M} (D)}. In particular, the natural homomorphism is a dependency morphism.
There are two well-known normal forms for words in trace monoids. One is the lexicographic normal form, due to Anatolij V. Anisimov and Donald Knuth, and the other is the Foata normal form, due to Pierre Cartier and Dominique Foata, who studied the trace monoid for its combinatorics in the 1960s.[3]
Unicode's Normalization Form Canonical Decomposition (NFD) is an example of a lexicographic normal form: the ordering is to sort consecutive characters with non-zero canonical combining class by that class.
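This can be observed with Python's standard unicodedata module. The example below (my own, with assumed sample characters) shows NFD mapping two spellings of the same text to one canonical form, with combining marks of different non-zero combining classes sorted by class, just as a lexicographic normal form picks one canonical word per trace:

```python
import unicodedata

# "é" as a single precomposed code point vs. "e" + combining acute accent
precomposed = "\u00e9"   # é (U+00E9)
decomposed = "e\u0301"   # e + COMBINING ACUTE ACCENT

# NFD maps both spellings to the same normal-form string
assert unicodedata.normalize("NFD", precomposed) == decomposed

# Combining marks with different non-zero combining classes commute and
# are sorted by class: dot below (class 220) precedes acute (class 230)
s1 = "e\u0301\u0323"  # acute, then dot below
s2 = "e\u0323\u0301"  # dot below, then acute
assert unicodedata.normalize("NFD", s1) == unicodedata.normalize("NFD", s2) == s2
```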
Just as a formal language can be regarded as a subset of Σ∗{\displaystyle \Sigma ^{*}}, the set of all possible strings, so a trace language is defined as a subset of M(D){\displaystyle \mathbb {M} (D)}, the set of all possible traces.
Alternatively, but equivalently, a language L⊆Σ∗{\displaystyle L\subseteq \Sigma ^{*}} is a trace language, or is said to be consistent with dependency D, if
where
is the trace closure of a set of strings.
Source: https://en.wikipedia.org/wiki/Trace_monoid
First-order logic, also called predicate logic, predicate calculus, or quantificational logic, is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables. Rather than propositions such as "all men are mortal", in first-order logic one can have expressions in the form "for all x, if x is a man, then x is mortal", where "for all x" is a quantifier, x is a variable, and "... is a man" and "... is mortal" are predicates.[1] This distinguishes it from propositional logic, which does not use quantifiers or relations;[2]: 161 in this sense, propositional logic is the foundation of first-order logic.
A theory about a topic, such as set theory, a theory for groups,[3] or a formal theory of arithmetic, is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic.
The term "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, is permitted.[4]: 56 In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets.
There are many deductive systems for first-order logic which are both sound, i.e., all provable statements are true in all models, and complete, i.e., all statements which are true in all models are provable. Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem.
First-order logic is the standard for the formalization of mathematics into axioms, and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures, i.e., categorical axiom systems, can be obtained in stronger logics such as second-order logic.
The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce.[5] For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001).
While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate evaluates to true or false for an entity or entities in the domain of discourse.
Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such as p and q. They are not viewed as an application of a predicate, such as isPhil{\displaystyle {\text{isPhil}}}, to any particular objects in the domain of discourse, instead viewing them as purely an utterance which is either true or false.[6] However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common form isPhil(x){\displaystyle {\text{isPhil}}(x)} for some individual x{\displaystyle x}; in the first sentence the value of the variable x is "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic.[7]: 29–30
The truth of a formula such as "x is a philosopher" depends on which object is denoted by x and on the interpretation of the predicate "is a philosopher". Consequently, "x is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment.[8] Relationships between predicates can be stated using logical connectives. For example, the first-order formula "if x is a philosopher, then x is a scholar" is a conditional statement with "x is a philosopher" as its hypothesis, and "x is a scholar" as its conclusion, which again needs specification of x in order to have a definite truth value.
Quantifiers can be applied to variables in a formula. The variable x in the previous formula can be universally quantified, for instance, with the first-order sentence "For every x, if x is a philosopher, then x is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if x is a philosopher, then x is a scholar" holds for all choices of x.
The negation of the sentence "For every x, if x is a philosopher, then x is a scholar" is logically equivalent to the sentence "There exists x such that x is a philosopher and x is not a scholar". The existential quantifier "there exists" expresses the idea that the claim "x is a philosopher and x is not a scholar" holds for some choice of x.
The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables.
An interpretation (or model) of a first-order formula specifies what each predicate means and which entities can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. For example, consider the sentence "There exists x such that x is a philosopher". This sentence is seen as being true in an interpretation such that the domain of discourse consists of all human beings and the predicate "is a philosopher" is understood as "was the author of the Republic". It is then true, as witnessed by Plato.
There are two key parts of first-order logic. The syntax determines which finite sequences of symbols are well-formed expressions in first-order logic, while the semantics determines the meanings behind these expressions.
Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is well formed. There are two key types of well-formed expressions: terms, which intuitively represent objects, and formulas, which intuitively express statements that can be true or false. The terms and formulas of first-order logic are strings of symbols, where all the symbols together form the alphabet of the language.
As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols.
It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation.[9] For example, the logical symbol ∧ always represents "and"; it is never interpreted as "or", which is represented by the logical symbol ∨. However, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate, depending on the interpretation at hand.
Logical symbols are a set of characters that vary by author, but usually include the following:[10]
Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices.
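For instance, taking negation, conjunction, and the universal quantifier as primitives, the remaining connectives and the existential quantifier can be introduced as standard abbreviations (these particular definitions are one common choice, not the only one):

```latex
\varphi \lor \psi \;:\equiv\; \lnot(\lnot\varphi \land \lnot\psi), \qquad
\varphi \rightarrow \psi \;:\equiv\; \lnot(\varphi \land \lnot\psi), \qquad
\exists x\,\varphi \;:\equiv\; \lnot\,\forall x\,\lnot\varphi .
```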
Other logical symbols include the following:
Non-logical symbols represent predicates (relations), functions, and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes:
When the arity of a predicate symbol or function symbol is clear from context, the superscript n is often omitted.
In this traditional approach, there is only one language of first-order logic.[13] This approach is still common, especially in philosophically oriented books.
A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature.[14]
Typical signatures in mathematics are {1, ×} or just {×} for groups,[3] or {0, 1, +, ×, <} for ordered fields. There are no restrictions on the number of non-logical symbols: the signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur, for example, in modern proofs of the Löwenheim–Skolem theorem.
Though signatures might in some cases imply how non-logical symbols are to be interpreted, interpretation of the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics.
In this approach, every non-logical symbol is of one of the following types:
The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols.
The formation rules define the terms and formulas of first-order logic.[16] When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms.
The set of terms is inductively defined by the following rules:[17]
Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term.
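The inductive definition of terms can be sketched as a recursive check. The signature below (a constant "0", a unary "s", a binary "+") and the tuple encoding are assumptions of this sketch, not notation from the article:

```python
# Terms are nested tuples: a variable is ("var", name); a function symbol
# applied to arguments is (symbol, arg1, ..., argn). The signature is a
# hypothetical example with constant "0", unary "s", and binary "+".

ARITIES = {"0": 0, "s": 1, "+": 2}  # assumed signature for illustration

def is_term(expr):
    """Check the two formation rules: variables are terms (rule 1), and a
    function symbol applied to the right number of terms is a term (rule 2)."""
    if not isinstance(expr, tuple) or not expr:
        return False
    head = expr[0]
    if head == "var":                       # rule 1: any variable is a term
        return len(expr) == 2 and isinstance(expr[1], str)
    arity = ARITIES.get(head)               # rule 2: f(t1, ..., tn)
    if arity is None or len(expr) - 1 != arity:
        return False
    return all(is_term(arg) for arg in expr[1:])

# s(s(0)) + x is a term; an expression headed by a symbol outside the
# signature (e.g. a predicate symbol P) is not.
print(is_term(("+", ("s", ("s", ("0",))), ("var", "x"))))  # True
print(is_term(("P", ("var", "x"))))                        # False
```

Only finitely many applications of the two rules are ever checked, mirroring the closure condition stated above.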
The set of formulas (also called well-formed formulas[18] or WFFs) is inductively defined by the following rules:
Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas.
For example:
is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. However, ∀x x → is not a formula, although it is a string of symbols from the alphabet.
The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way, by following the inductive definition (i.e., there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability.
For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is:
Moreover, extra punctuation not required by the definition may be inserted to make formulas easier to read. Thus the formula:
might be written as:
In a formula, a variable may occur free or bound (or both). One formalization of this notion is due to Quine: first the concept of a variable occurrence is defined, then whether a variable occurrence is free or bound, and finally whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol x, each occurrence of a variable symbol x in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol x appears.[8]: 297 Then, an occurrence of x is said to be bound if that occurrence of x lies within the scope of at least one of either ∃x or ∀x. Finally, x is bound in φ if all occurrences of x in φ are bound.[8]: 142–143
Intuitively, a variable symbol is free in a formula if at no point is it quantified:[8]: 142–143 in ∀y P(x, y), the sole occurrence of variable x is free while that of y is bound. The free and bound variable occurrences in a formula are defined inductively as follows.
For example, in ∀x ∀y (P(x) → Q(x, f(x), z)), x and y occur only bound,[19] z occurs only free, and w is neither because it does not occur in the formula.
Free and bound variables of a formula need not be disjoint sets: in the formula P(x) → ∀x Q(x), the first occurrence of x, as argument of P, is free, while the second one, as argument of Q, is bound.
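The inductive clauses for free variables can be sketched as a short recursive function. The tuple encoding of formulas (quantifiers as ("forall", x, body), connectives and atoms as tagged tuples) is an assumption of this sketch:

```python
def free_vars(phi):
    """Free variables of a formula or term, by the usual induction:
    a variable is free in itself; a quantifier removes its own variable;
    everything else passes the question down to its sub-parts."""
    op = phi[0]
    if op == "var":
        return {phi[1]}
    if op in ("forall", "exists"):          # quantifier binds phi[1] in phi[2]
        return free_vars(phi[2]) - {phi[1]}
    # connective, atomic formula, or function term: union over arguments
    return set().union(*(free_vars(part) for part in phi[1:]))

# ∀x ∀y (P(x) → Q(x, f(x), z)): only z is free.
phi = ("forall", "x", ("forall", "y",
       ("imp", ("P", ("var", "x")),
               ("Q", ("var", "x"), ("f", ("var", "x")), ("var", "z")))))
print(free_vars(phi))  # {'z'}
```

On P(x) → ∀x Q(x) the function returns {"x"}: the free first occurrence keeps x in the set even though the second occurrence is bound, matching the example above.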
A formula in first-order logic with no free variable occurrences is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence ∃x Phil(x) will be either true or false in a given interpretation.
In mathematics, the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then:
The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written (∀x)(∀y)[x + y = y + x].
An interpretation of a first-order language assigns a denotation to each non-logical symbol (predicate symbol, function symbol, or constant symbol) in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, each predicate is assigned a property of objects, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms, predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic, but aside from requiring the axiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.)
The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model; see below). The structure consists of a domain of discourse D and an interpretation function I mapping non-logical symbols to predicates, functions, and constants.
The domain of discourse D is a nonempty set of "objects" of some kind. Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example, ∃x P(x) states the existence of some object in D for which the predicate P is true (or, more precisely, for which the predicate assigned to the predicate symbol P by the interpretation is true). For example, one can take D to be the set of integers.
Non-logical symbols are interpreted as follows:
A formula evaluates to true or false given an interpretation and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as y = x. The truth value of this formula changes depending on the values that x and y denote.
First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment:
Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema.
If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and μ if and only if it is true according to M and every other variable assignment μ′.
There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol c_d is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows:
This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments.
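The T-schema clauses can be sketched as a recursive evaluator over a small finite structure. The domain {0, 1, 2}, the relation symbol "le" (interpreted as ≤), and the tuple encoding of formulas are all assumptions of this sketch, not notation from the article:

```python
# A hypothetical finite structure: domain {0, 1, 2}, one binary relation le.
DOMAIN = {0, 1, 2}
RELS = {"le": {(a, b) for a in DOMAIN for b in DOMAIN if a <= b}}

def holds(phi, mu):
    """Tarskian truth: mu is the variable assignment (name -> element).
    Atomic formulas look up the interpreted relation; quantifiers range
    over the domain by extending mu, as in the T-schema."""
    op = phi[0]
    if op == "le":                          # atomic formula le(x, y)
        return (mu[phi[1]], mu[phi[2]]) in RELS["le"]
    if op == "not":
        return not holds(phi[1], mu)
    if op == "and":
        return holds(phi[1], mu) and holds(phi[2], mu)
    if op == "imp":
        return (not holds(phi[1], mu)) or holds(phi[2], mu)
    if op == "forall":                      # true under every x := d
        return all(holds(phi[2], {**mu, phi[1]: d}) for d in DOMAIN)
    if op == "exists":                      # true under some x := d
        return any(holds(phi[2], {**mu, phi[1]: d}) for d in DOMAIN)
    raise ValueError(f"unknown operator: {op}")

# ∃x ∀y le(x, y): "there is a least element" — true here, witnessed by 0.
print(holds(("exists", "x", ("forall", "y", ("le", "x", "y"))), {}))  # True
```

Since the formula above is a sentence, the empty initial assignment {} suffices, illustrating the remark that a sentence's truth value does not depend on the variable assignment.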
If a sentence φ evaluates to true under a given interpretation M, one says that M satisfies φ; this is denoted[20] M ⊨ φ. A sentence is satisfiable if there is some interpretation under which it is true. This is a bit different from the use of the symbol ⊨ in model theory, where M ⊨ φ denotes satisfiability in a model, i.e. "there is a suitable assignment of values in M's domain to the variable symbols of φ".[21]
Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variables x1, ..., xn is said to be satisfied by an interpretation if φ remains true regardless of which individuals from the domain of discourse are assigned to its free variables x1, ..., xn. This has the same effect as saying that φ is satisfied if and only if its universal closure ∀x1 … ∀xn φ(x1, …, xn) is satisfied.
A formula is logically valid (or simply valid) if it is true in every interpretation.[22] These formulas play a role similar to tautologies in propositional logic.
A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.
An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable-binding term operators:
These algebras are all lattices that properly extend the two-element Boolean algebra.
Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra.[23]: 32–33 This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical Zermelo–Fraenkel set theory (ZFC). They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered-pair projection functions.[24]: 803
A first-order theory of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective. Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory, and from them other sentences that hold within the theory can be derived.
A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory.
Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models.
A theory is consistent (within a deductive system) if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete.
The definition above requires that the domain of discourse of any interpretation must be nonempty. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class.
There are several difficulties with empty domains, however:
Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition.
A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs, but are completely formalized, unlike natural-language mathematical proofs.
A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective.
A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area.
In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B, then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B.
A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion.
For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φ[t/x] from φ, provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.)
To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by ∃x (x = y), in the signature (0, 1, +, ×, =) of arithmetic. If t is the term "x + 1", the formula φ[t/y] is ∃x (x = x + 1), which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is ∃z (z = x + 1), which is again logically valid.
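Capture-avoiding substitution, including the renaming step just described, can be sketched in Python. The tuple encoding of formulas and terms, and the fresh-name scheme, are assumptions of this sketch:

```python
import itertools

def free_vars(e):
    """Free variables of a term or formula in the tuple encoding."""
    op = e[0]
    if op == "var":
        return {e[1]}
    if op in ("forall", "exists"):
        return free_vars(e[2]) - {e[1]}
    return set().union(*(free_vars(part) for part in e[1:]))

def subst(phi, x, t):
    """phi[t/x]: replace free occurrences of x by the term t, renaming a
    bound variable whenever it would capture a free variable of t."""
    op = phi[0]
    if op == "var":
        return t if phi[1] == x else phi
    if op in ("forall", "exists"):
        v, body = phi[1], phi[2]
        if v == x:                          # x is bound here: nothing to replace
            return phi
        if v in free_vars(t):               # capture! rename v to a fresh name
            taken = free_vars(body) | free_vars(t)
            fresh = next(n for n in (f"{v}_{i}" for i in itertools.count())
                         if n not in taken)
            body = subst(body, v, ("var", fresh))
            v = fresh
        return (op, v, subst(body, x, t))
    # connective, atom, or function term: substitute in each argument
    return (op,) + tuple(subst(part, x, t) for part in phi[1:])

# The example from the text: (∃x (x = y))[x+1 / y] must rename the bound x.
phi = ("exists", "x", ("eq", ("var", "x"), ("var", "y")))
t = ("+", ("var", "x"), ("1",))
print(subst(phi, "y", t))
# ('exists', 'x_0', ('eq', ('var', 'x_0'), ('+', ('var', 'x'), ('1',))))
```

The renamed result corresponds to the formula ∃z (z = x + 1) in the text, with the fresh name x_0 playing the role of z.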
The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule.
A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand, or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference.
Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof.
The sequent calculus was developed to study the properties of natural deduction systems.[25] Instead of working with one formula at a time, it uses sequents, which are expressions of the form:
where A1, ..., An, B1, ..., Bk are formulas and the turnstile symbol ⊢ is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that (A1 ∧ ⋯ ∧ An) implies (B1 ∨ ⋯ ∨ Bk).
Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has ¬A at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that C ∨ D is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent C ∨ D and children C and D.
The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving.
The resolution method works only with formulas that are disjunctions of atomic formulas or their negations; arbitrary formulas must first be converted to this form through Skolemization. The resolution rule states that from the hypotheses A1 ∨ ⋯ ∨ Ak ∨ C and B1 ∨ ⋯ ∨ Bl ∨ ¬C, the conclusion A1 ∨ ⋯ ∨ Ak ∨ B1 ∨ ⋯ ∨ Bl can be obtained.
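The resolution rule just stated can be sketched for the propositional (ground) case; full first-order resolution additionally unifies the complementary literals, which this sketch omits. The clause encoding, with a literal as a (polarity, atom) pair, is an assumption of this sketch:

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses.

    A clause is a frozenset of literals; a literal is (polarity, atom),
    e.g. (False, "C") for the negation of atom C. For every complementary
    pair C in c1 and ¬C in c2, the resolvent merges the remaining literals.
    """
    resolvents = set()
    for (polarity, atom) in c1:
        if (not polarity, atom) in c2:
            resolvents.add((c1 - {(polarity, atom)})
                           | (c2 - {(not polarity, atom)}))
    return resolvents

# From A1 ∨ C and B1 ∨ ¬C, resolution yields A1 ∨ B1:
c1 = frozenset({(True, "A1"), (True, "C")})
c2 = frozenset({(True, "B1"), (False, "C")})
print(resolve(c1, c2) == {frozenset({(True, "A1"), (True, "B1")})})  # True
```

Deriving the empty clause frozenset() from {C} and {¬C} is how a resolution refutation signals that the negated input formula is unsatisfiable.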
Many identities can be proved which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives, and are useful for putting formulas in prenex normal form. Some provable identities include:
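A few standard identities of this kind (well-known facts, listed here for concreteness rather than taken from any particular source) are:

```latex
\lnot\forall x\,\varphi \equiv \exists x\,\lnot\varphi
\qquad\qquad
\lnot\exists x\,\varphi \equiv \forall x\,\lnot\varphi
\\
\forall x\,(\varphi \land \psi) \equiv (\forall x\,\varphi) \land (\forall x\,\psi)
\qquad
\exists x\,(\varphi \lor \psi) \equiv (\exists x\,\varphi) \lor (\exists x\,\psi)
\\
(\forall x\,\varphi) \lor \psi \equiv \forall x\,(\varphi \lor \psi)
\qquad
(\exists x\,\varphi) \land \psi \equiv \exists x\,(\varphi \land \psi)
\quad\text{(when $x$ is not free in $\psi$)}
```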
There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are:[26]: 198–200
These are axiom schemas, each of which specifies an infinite set of axioms. The third schema is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbol f, is (equivalent to) a special case of the third schema, using the formula:
Then, since x = y is given and f(..., x, ...) = f(..., x, ...) is true by reflexivity, we have f(..., x, ...) = f(..., y, ...).
Many other properties of equality are consequences of the axioms above, for example:
An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation.
When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered.
First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted.
If a theory has a binary formula A(x, y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument.
Some theories allow other ad hoc definitions of equality:
One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories.
Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence is semidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ, ψ) such that ψ is a logical consequence of φ.
Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem.
There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. Other logics with no function symbols which are decidable are the guarded fragment of first-order logic, as well as two-variable logic. The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics.
The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language with a countable signature. That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable).
The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox.
Thecompactness theoremstates that a set of first-order sentences has a model if and only if every finite subset of it has a model.[29]This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models.
The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finitegraphsis not an elementary class (the same holds for many other algebraic structures).
There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as adirected graphof states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus, one seeks to determine if the good and bad states are in differentconnected componentsof the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ(x,y) of first-order logic, in thelogic of graphs, that expresses the idea that there is a path fromxtoy. Connectedness can be expressed insecond-order logic, however, but not with only existential set quantifiers, asΣ11{\displaystyle \Sigma _{1}^{1}}also enjoys compactness.
Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type:
Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe.
For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C2: first-order logic with two variables and the counting quantifiers ∃≥n and ∃≤n.[30]
The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical. Thus, there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order.
First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis for knowledge representation languages, such as FO(.).
Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic".[31]
There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity.
First-order logic can be studied in languages with fewer logical symbols than were described above:
Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system.
It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include apairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns anordered paircontaining them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied.
Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, and the sorts called types (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic.[33]
When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic.[34]: 296–299 One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols P1(x) and P2(x) and the axiom:
∀x (P1(x) ∨ P2(x)) ∧ ∀x ¬(P1(x) ∧ P2(x)).
Then the elements satisfying P1 are thought of as elements of the first sort, and elements satisfying P2 as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ(x), one writes:
∃x (P1(x) ∧ φ(x)).
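The relativization scheme above can be illustrated with a small sketch. The domain, the two sort predicates, and the helper names below are hypothetical choices for illustration only:

```python
# Single-sorted domain with unary predicates P1, P2 marking the two sorts.
domain = [0, 1, 2, 3]
P1 = {0, 1}   # elements of the first sort
P2 = {2, 3}   # elements of the second sort

def exists_sort1(phi):
    """'There is an element of the first sort satisfying phi':
    rendered as  exists x (P1(x) and phi(x))."""
    return any(x in P1 and phi(x) for x in domain)

def forall_sort1(phi):
    """'Every element of the first sort satisfies phi':
    rendered as  forall x (P1(x) -> phi(x))."""
    return all((x not in P1) or phi(x) for x in domain)

# The partition axiom holds for this interpretation:
partition_ok = all((x in P1) != (x in P2) for x in domain)
```

Note how the existential quantifier relativizes with a conjunction while the universal quantifier relativizes with an implication.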
Additional quantifiers can be added to first-order logic.
Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory.
Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed.
The most commonly studied infinitary logics are denoted Lαβ, where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is Lωω.
In the logic L∞ω, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with fewer than κ constituents is known as Lκω. For example, Lω1ω permits countable conjunctions and disjunctions.
The set of free variables in a formula of Lκω can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another.[35] In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in Lκ∞, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic Lκλ permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ.
Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators.[36]
The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus, for example,
∃x Phil(x)
is a legal first-order formula, but a formula such as
∃P P(x)
is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified.
Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics.
Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics.
Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems.[37] Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search.[38]
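As a toy illustration of derivation search, here is a minimal propositional resolution prover. It saturates a clause set rather than using the heuristics real provers rely on, and the encoding (literals as nonzero integers, a negative integer negating the positive one) is an illustrative assumption, not any particular prover's algorithm:

```python
def resolve(c1, c2):
    """All resolvents of two clauses (literals are nonzero ints; -p negates p)."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append((c1 - {lit}) | (c2 - {-lit}))
    return out

def unsatisfiable(clauses):
    """Saturation-based resolution: derive the empty clause if possible."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True        # empty clause: contradiction found
                    new.add(frozenset(r))
        if new <= clauses:
            return False                   # saturated without a contradiction
        clauses |= new

# {p, p -> q, not q} is unsatisfiable; p -> q is encoded as the clause {-1, 2}:
cnf = [{1}, {-1, 2}, {-2}]
```

Because the set of clauses over finitely many literals is finite, the saturation loop always terminates, but the blowup in clause count is exactly the search-space problem the surrounding text describes.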
The related area of automatedproof verificationuses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct.
Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write,[39] results are often formalized as a series of lemmas, for which derivations can be constructed separately.
Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification. Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences.
For the problem of model checking, efficient algorithms are known to decide whether an input finite structure satisfies a first-order formula, in addition to computational complexity bounds: see Model checking § First-order logic.
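A direct model-checking algorithm simply recurses on the formula, trying every domain element at each quantifier. The nested-tuple encoding of formulas below is an assumption made for illustration; real model checkers use far more efficient representations:

```python
def holds(formula, structure, env=None):
    """Recursively evaluate a first-order formula on a finite structure.
    Formulas are nested tuples: ('E', 'x', 'y'), ('not', f), ('and', f, g),
    ('exists', 'x', f), ('forall', 'x', f)."""
    env = env or {}
    op = formula[0]
    if op == 'E':                          # binary edge relation
        return (env[formula[1]], env[formula[2]]) in structure['E']
    if op == 'not':
        return not holds(formula[1], structure, env)
    if op == 'and':
        return holds(formula[1], structure, env) and holds(formula[2], structure, env)
    if op == 'exists':
        return any(holds(formula[2], structure, {**env, formula[1]: d})
                   for d in structure['dom'])
    if op == 'forall':
        return all(holds(formula[2], structure, {**env, formula[1]: d})
                   for d in structure['dom'])
    raise ValueError(op)

# A directed 3-cycle: every node has an outgoing edge, but no self-loops.
G = {'dom': {0, 1, 2}, 'E': {(0, 1), (1, 2), (2, 0)}}
total = ('forall', 'x', ('exists', 'y', ('E', 'x', 'y')))
```

Each quantifier multiplies the work by the domain size, which is the source of the complexity bounds the text alludes to.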
https://en.wikipedia.org/wiki/First-order_logic
In logic, a set of symbols is commonly used to express logical representation. The following table lists many common symbols, together with their name, how they should be read out loud, and the related field of mathematics. Additionally, the subsequent columns contain an informal explanation, a short example, the Unicode location, the name for use in HTML documents,[1] and the LaTeX symbol.
⇒, →, ⊃ : material implication ("implies")
⇔, ↔, ≡ : material equivalence ("if and only if")
¬, ˜, !, ′ : negation ("not"); the prime symbol is placed after the negated proposition, e.g. p′ [2]
∧, ·, & : logical conjunction ("and")
∨, +, ∥ : logical disjunction ("or")
⊕, ⊻, ≢ : exclusive disjunction ("xor")
⊤ : tautology (true)
⊥ : contradiction (false)
∀ : universal quantification ("for all")
∃ : existential quantification ("there exists")
∃! : uniqueness quantification ("there exists exactly one")
( ) : grouping of subexpressions
𝔻 : domain of discourse
⊢ : turnstile ("proves")
⊨ : double turnstile ("models", semantic entailment)
≡, ⇔ : logical equivalence
≔, =def (LaTeX: \stackrel{\scriptscriptstyle\mathrm{def}}{=}) : definition ("is defined as")
The following symbols are either advanced and context-sensitive or very rarely used:
⌐ (LaTeX: \urcorner) : it may also denote a negation (used primarily in electronics).
https://en.wikipedia.org/wiki/List_of_logic_symbols
In mathematical logic, abstract algebraic logic is the study of the algebraization of deductive systems arising as an abstraction of the well-known Lindenbaum–Tarski algebra, and how the resulting algebras are related to logical systems.[1]
The archetypal association of this kind, one fundamental to the historical origins of algebraic logic and lying at the heart of all subsequently developed subtheories, is the association between the class of Boolean algebras and classical propositional calculus. This association was discovered by George Boole in the 1850s, and then further developed and refined by others, especially C. S. Peirce and Ernst Schröder, from the 1870s to the 1890s. This work culminated in Lindenbaum–Tarski algebras, devised by Alfred Tarski and his student Adolf Lindenbaum in the 1930s. Later, Tarski and his American students (whose ranks include Don Pigozzi) went on to discover cylindric algebra, whose representable instances algebraize all of classical first-order logic, and revived relation algebra, whose models include all well-known axiomatic set theories.
Classical algebraic logic, which comprises all work in algebraic logic until about 1960, studied the properties of specific classes of algebras used to "algebraize" specific logical systems of particular interest to specific logical investigations. Generally, the algebra associated with a logical system was found to be a type of lattice, possibly enriched with one or more unary operations other than lattice complementation.
Abstract algebraic logic is a modern subarea of algebraic logic that emerged in Poland during the 1950s and 60s with the work of Helena Rasiowa, Roman Sikorski, Jerzy Łoś, and Roman Suszko (to name but a few). It reached maturity in the 1980s with the seminal publications of the Polish logician Janusz Czelakowski, the Dutch logician Wim Blok and the American logician Don Pigozzi. The focus of abstract algebraic logic shifted from the study of specific classes of algebras associated with specific logical systems (the focus of classical algebraic logic), to the study of:
The passage from classical algebraic logic to abstract algebraic logic may be compared to the passage from "modern" or abstract algebra (i.e., the study of groups, rings, modules, fields, etc.) to universal algebra (the study of classes of algebras of arbitrary similarity types (algebraic signatures) satisfying specific abstract properties).
The two main motivations for the development of abstract algebraic logic are closely connected to (1) and (3) above. With respect to (1), a critical step in the transition was initiated by the work of Rasiowa. Her goal was to abstract results and methods known to hold for the classical propositional calculus and Boolean algebras and some other closely related logical systems, in such a way that these results and methods could be applied to a much wider variety of propositional logics.
(3) owes much to the joint work of Blok and Pigozzi exploring the different forms that the well-known deduction theorem of classical propositional calculus and first-order logic takes on in a wide variety of logical systems. They related these various forms of the deduction theorem to the properties of the algebraic counterparts of these logical systems.
Abstract algebraic logic has become a well-established subfield of algebraic logic, with many deep and interesting results. These results explain many properties of different classes of logical systems previously explained only on a case-by-case basis or shrouded in mystery. Perhaps the most important achievement of abstract algebraic logic has been the classification of propositional logics in a hierarchy, called the abstract algebraic hierarchy or Leibniz hierarchy, whose different levels roughly reflect the strength of the ties between a logic at a particular level and its associated class of algebras. The position of a logic in this hierarchy determines the extent to which that logic may be studied using known algebraic methods and techniques. Once a logic is assigned to a level of this hierarchy, one may draw on the powerful arsenal of results, accumulated over the past 30-odd years, governing the algebras situated at the same level of the hierarchy.
The similar terms 'general algebraic logic' and 'universal algebraic logic' refer to the approach of the Hungarian School, including Hajnal Andréka, István Németi and others.
https://en.wikipedia.org/wiki/Abstract_algebraic_logic
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted by 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations.
Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847),[1] and set forth more fully in his An Investigation of the Laws of Thought (1854).[2] According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913,[3] although Charles Sanders Peirce gave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880.[4] Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.[5]
A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of the algebra of concepts.[6] Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets.[7]
Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields.[8] In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure.[8] For example, the empirical observation that one can manipulate expressions in the algebra of sets, by translating them into expressions in Boole's algebra, is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.[9][10]
In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting,[11] and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.[12][13][14]
Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.[15]
Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.[16][17][18] Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic.
Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics.[8] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.
Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra, they denote the truth values false and true. These values are represented with the bits 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive-or) definable as x + y − xy and negation ¬x as 1 − x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented).
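These arithmetic encodings can be checked exhaustively over {0, 1}. The function names below are illustrative; the formulas are the ones stated above (¬x = 1 − x, x ∧ y = xy, x ∨ y = x + y − xy, and XOR as addition modulo 2):

```python
# Boolean operations written with ordinary integer arithmetic on {0, 1}:
def NOT(x):    return 1 - x
def AND(x, y): return x * y
def XOR(x, y): return (x + y) % 2          # addition in GF(2)
def OR(x, y):  return x + y - x * y        # inclusive-or

# Exhaustive check against the intended truth-value behavior:
checks = all(
    OR(x, y) == (x or y) and AND(x, y) == (x and y) and NOT(x) == (not x)
    for x in (0, 1) for y in (0, 1)
)
```

Since there are only four input pairs, the exhaustive check is a complete proof of the identities on {0, 1}.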
Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can define the indicator function that takes the value 1 on F, and 0 outside F. The most general example is the set of elements of a Boolean algebra, with all of the foregoing being instances thereof.
As with elementary algebra, the purely equational part of the theory may be developed, without considering explicit values for the variables.[19]
While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND (∧) and OR (∨) and the unary operator NOT (¬), collectively referred to as Boolean operators.[20] Variables in Boolean algebra that store the logical values 0 and 1 are called Boolean variables. They are used to store either true or false values.[21] The basic operations on Boolean variables x and y are defined as follows:
Alternatively, the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables as follows:[22]
When used in expressions, the operators are applied according to the precedence rules. As with elementary algebra, expressions in parentheses are evaluated first, following the precedence rules.[23]
If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions:
One might consider that only negation and one of the two other operations are basic, because of the following identities that allow one to define conjunction in terms of negation and disjunction, and vice versa (De Morgan's laws):[24]
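One direction of this definability claim can be verified over {0, 1}. A minimal sketch (function names are illustrative), defining conjunction from negation and disjunction via De Morgan's law x ∧ y = ¬(¬x ∨ ¬y):

```python
def NOT(x):   return 1 - x
def OR(x, y): return max(x, y)

def AND(x, y):
    """Conjunction defined from negation and disjunction (De Morgan):
    x AND y  =  NOT(NOT x OR NOT y)."""
    return NOT(OR(NOT(x), NOT(y)))

# The derived AND agrees with min, the usual Boolean conjunction, everywhere:
demorgan_ok = all(AND(x, y) == min(x, y) for x in (0, 1) for y in (0, 1))
```

The symmetric identity x ∨ y = ¬(¬x ∧ ¬y) can be checked the same way.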
Operations composed from the basic operations include, among others, the following:
These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs.
A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in § Axiomatizing Boolean algebra).
Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra:[25][26]
The following laws hold in Boolean algebra, but not in ordinary algebra:
Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1 (and so on).
All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to bemonotone. Thus the axioms thus far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows.[5]
The complement operation is defined by the following two laws.
All properties of negation including the laws below follow from the above two laws alone.[5]
In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called involution law)
But whereas ordinary algebra satisfies the two laws
Boolean algebra satisfies De Morgan's laws:
The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms as treated in § Boolean algebras.
Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras.
This axiomatization is by no means the only one, or even necessarily the most natural, given that attention was not paid as to whether some of the axioms followed from others; there was simply a choice to stop when enough laws had been noticed, as treated further in § Axiomatizing Boolean algebra. Alternatively, the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1.[27][28] All these definitions of Boolean algebra can be shown to be equivalent.
Principle: If {X, R} is a partially ordered set, then {X, R(inverse)} is also a partially ordered set.
There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences.
But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used.
But if in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for x ∧ y and x ∨ y in the truth tables have changed places, but that switch is immaterial.
When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged.
One change not needed as part of this interchange was to complement. Complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t.
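The self-duality of the ternary operation (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x) can be verified exhaustively over all eight inputs. A small sketch (function names are illustrative):

```python
def f(x, y, z):
    """The majority-style operation (x and y) or (y and z) or (z and x)."""
    return (x & y) | (y & z) | (z & x)

def dual_equals_f(x, y, z):
    """Self-duality: complementing every input complements the output."""
    return f(1 - x, 1 - y, 1 - z) == 1 - f(x, y, z)

self_dual = all(dual_equals_f(x, y, z)
                for x in (0, 1) for y in (0, 1) for z in (0, 1))
```

Running the same check on a binary operation such as ∧ fails, consistent with the claim that no self-dual binary operation depends on both its arguments.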
The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality.[5]: 21–22
A Venn diagram[29] can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x correspond respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention).
The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x.
For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations.
The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle.
While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation.
Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram, because interchanging x and y would have the effect of reflecting the diagram horizontally, and any failure of commutativity would then appear as a failure of symmetry.
Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨.
To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x ∧ y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle.
The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle.
To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes.
The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged.
The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle.
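Since every variable ranges over only two values, each of these laws can also be checked mechanically by evaluating both sides over all combinations of 0 and 1. A minimal sketch in Python (the names are illustrative, not from the text):

```python
from itertools import product

# Boolean operations on {0, 1}
NOT = lambda x: 1 - x
AND = lambda x, y: x & y
OR  = lambda x, y: x | y

laws = {
    "absorption 1": lambda x, y: AND(x, OR(x, y)) == x,
    "absorption 2": lambda x, y: OR(x, AND(x, y)) == x,
    "De Morgan 1":  lambda x, y: AND(NOT(x), NOT(y)) == NOT(OR(x, y)),
    "De Morgan 2":  lambda x, y: OR(NOT(x), NOT(y)) == NOT(AND(x, y)),
    "complement 1": lambda x, y: AND(x, NOT(x)) == 0,
    "complement 2": lambda x, y: OR(x, NOT(x)) == 1,
}

# Each law holds at all four truth assignments, hence identically.
for name, law in laws.items():
    assert all(law(x, y) for x, y in product((0, 1), repeat=2)), name
```

This exhaustive check is exactly the truth-table method: a law with k variables holds in the two-element algebra just when it holds at all 2^k assignments.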
Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND gates), disjunction (OR gates), and complement (inverters) are as follows:[30]
The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground", while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports.
Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port.
The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter, however, leaves the operation unchanged.
More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y.
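The claim that port complementation on an AND gate yields exactly the eight operations with an odd number of 1s in their truth table can be verified by enumeration. A sketch in Python (the encoding of gates as closures is my own):

```python
from itertools import product

inputs = list(product((0, 1), repeat=2))   # the four rows of a binary truth table

# A binary Boolean operation is a 4-bit truth table, so there are 16 in all;
# split them by the parity of the number of 1s.
odd  = {t for t in product((0, 1), repeat=4) if sum(t) % 2 == 1}
even = {t for t in product((0, 1), repeat=4) if sum(t) % 2 == 0}
assert len(odd) == 8 and len(even) == 8

AND = lambda x, y: x & y

def gate(op, neg_x, neg_y, neg_out):
    """An op gate with each of its three ports optionally complemented."""
    def g(x, y):
        out = op(x ^ neg_x, y ^ neg_y)
        return 1 - out if neg_out else out
    return g

# Complement each of the 2^3 = 8 subsets of the AND gate's ports and
# collect the resulting truth tables.
tables = set()
for nx, ny, no in product((0, 1), repeat=3):
    g = gate(AND, nx, ny, no)
    tables.add(tuple(g(x, y) for x, y in inputs))

assert tables == odd   # exactly the eight odd-parity operations
```

Complementing an input permutes the rows of the truth table and complementing the output flips its bits, neither of which changes the parity of the number of 1s, which is why only odd-parity tables can arise from the odd-parity AND.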
The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion.
A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X.[5]
(Historically X itself was required to be nonempty as well, to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations, since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations — 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.)
Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable.
Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide.
Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers.
Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves.
A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0, 1, 2, ..., 31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if X = {a, b, c} where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]).
From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010 ∧ 0110 = 0010, 1010 ∨ 0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively.
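The identification of subsets with bit vectors, and of the set operations with bitwise operations, can be sketched in Python; the universe, its four-bit width, and the helper name are illustrative choices, and here bit i (from the low end) stands for the i-th element of the universe:

```python
WIDTH = 4  # bit vectors of length 4, i.e. subsets of a 4-element index set

def to_mask(subset, universe):
    """Encode a subset as a bit vector: bit i is 1 iff universe[i] is a member."""
    return sum(1 << i for i, x in enumerate(universe) if x in subset)

universe = ["a", "b", "c", "d"]
A = to_mask({"a", "c"}, universe)        # 0b0101
B = to_mask({"b", "c"}, universe)        # 0b0110

union        = A | B                     # bitwise ∨ realizes set union
intersection = A & B                     # bitwise ∧ realizes set intersection
complement   = ~A & ((1 << WIDTH) - 1)   # bitwise ¬, masked to the 4-bit width

assert union == 0b0111          # {a, b, c}
assert intersection == 0b0100   # {c}
assert complement == 0b1010     # {b, d}
```

The mask on the complement reflects that complement is always taken relative to the index set X; without it, Python's unbounded integers would produce infinitely many leading 1s.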
The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation.
This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one since it is concrete. Conversely any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector.
The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete.
The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra.
Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition.
For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field, etc. characteristic of modern or abstract algebra.
Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition.
The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition.
Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x) can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions.
However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This example is an instance of the following notion.
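Both the divisor algebra and its isomorphism with sets of prime factors can be checked by brute force. A sketch in Python for n = 30 (the operation names are my own):

```python
from itertools import product
from math import gcd

n = 30  # square-free: 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]  # 1, 2, 3, 5, 6, 10, 15, 30

AND = gcd                               # x ∧ y = gcd(x, y)
OR  = lambda x, y: x * y // gcd(x, y)   # x ∨ y = lcm(x, y)
NOT = lambda x: n // x                  # ¬x    = n / x

# Complement laws: 1 plays the role of 0 (bottom) and n of 1 (top).
for x in divisors:
    assert AND(x, NOT(x)) == 1 and OR(x, NOT(x)) == n

# A sample law with two variables: De Morgan, ¬(x ∨ y) = ¬x ∧ ¬y.
for x, y in product(divisors, repeat=2):
    assert NOT(OR(x, y)) == AND(NOT(x), NOT(y))

# The isomorphism: represent each divisor by its set of prime factors.
primes = lambda d: frozenset(p for p in (2, 3, 5) if d % p == 0)
for x, y in product(divisors, repeat=2):
    assert primes(OR(x, y)) == primes(x) | primes(y)    # lcm  <-> union
    assert primes(AND(x, y)) == primes(x) & primes(y)   # gcd  <-> intersection
```

Square-freeness is what makes gcd(x, n/x) = 1 for every divisor x; for n = 12 the check would fail at x = 2, since gcd(2, 6) = 2.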
The next question is answered positively as follows.
That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice. This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability.
It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here; for example, a relation algebra is a Boolean algebra with additional structure, but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras.
The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold.
In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based.
Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law—one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice.
By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, the single axiom ((a ∣ b) ∣ c) ∣ (a ∣ ((a ∣ c) ∣ a)) = c is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra.[32]
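That this single NAND-based axiom holds in the two-element Boolean algebra can be confirmed by exhaustive evaluation over the eight truth assignments; a quick sketch in Python:

```python
from itertools import product

def stroke(x, y):
    """Sheffer stroke (NAND): 0 only when both arguments are 1."""
    return 1 - (x & y)

# The single axiom ((a|b)|c) | (a|((a|c)|a)) = c, checked on all 2^3 assignments.
for a, b, c in product((0, 1), repeat=3):
    lhs = stroke(stroke(stroke(a, b), c), stroke(a, stroke(stroke(a, c), a)))
    assert lhs == c
```

Of course, passing this check only shows the axiom is sound for {0, 1}; that it suffices to derive all the Boolean laws is a separate (and much harder) proof-theoretic fact.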
Propositional logic is a logical system that is intimately connected to Boolean algebra.[5] Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in such a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra.
Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ...; Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or ⊤. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions.
The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two-element Boolean algebra).
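For the two-element algebra, checking tautologyhood is just an exhaustive evaluation over truth assignments. A minimal sketch in Python, representing a formula as a function of its variables (this representation and the names are my own choices):

```python
from itertools import product

def is_tautology(formula, nvars):
    """True iff the formula evaluates to 1 under every 0/1 truth assignment."""
    return all(formula(*vals) == 1 for vals in product((0, 1), repeat=nvars))

IMPLIES = lambda p, q: (1 - p) | q   # p → q  =  ¬p ∨ q

assert is_tautology(lambda p: IMPLIES(p, p), 1)                  # P → P
assert is_tautology(lambda p, q: IMPLIES(p, IMPLIES(q, p)), 2)   # P → (Q → P)
assert not is_tautology(lambda p, q: IMPLIES(p, q), 2)           # P → Q is not
```

The equivalence noted in the text is what licenses checking only the two-element algebra: a formula valid under every two-valued assignment is valid in every Boolean algebra.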
These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used.
One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language.[33] Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese". The generic or abstract form of this tautology is "if P, then P", or in the language of Boolean algebra, P → P.
Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4.
Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P).
(The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.)
An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions, each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.[34]
Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A ∨ B, A ∧ C, ... ⊢ A, B → C, .... The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent.
Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∨ y = y. This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus.[35]
Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.[5]
In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved that such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits.
Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. (Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.)
Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low.
Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base-two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.
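The distinction between logical and numeric operations is visible directly at the bit level: bitwise operations treat each bit position independently, while addition propagates a carry between positions. A short illustration in Python:

```python
a, b = 0b0110, 0b0011

# Logical (bitwise) operations: each bit position is handled independently.
assert a | b == 0b0111   # disjunction
assert a & b == 0b0010   # conjunction
assert a ^ b == 0b0101   # exclusive or

# Numeric operation: the 1s in the two low positions generate carries that
# ripple into higher positions, producing a bit pattern no bitwise
# position-by-position combination of a and b could yield.
assert a + b == 0b1001   # 6 + 3 = 9
```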
Other areas where two values is a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer—is the defendant guilty or not guilty, is the proposition true or false—and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right.
A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low.
Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory.
Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0, 1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.
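Spelling out the De Morgan construction, disjunction becomes x ∨ y = ¬(¬x ∧ ¬y) = 1 − (1 − x)(1 − y) = x + y − xy. A brief sketch in Python:

```python
NOT = lambda x: 1 - x
AND = lambda x, y: x * y
OR  = lambda x, y: NOT(AND(NOT(x), NOT(y)))  # 1 - (1-x)(1-y) = x + y - xy

# On {0, 1} these agree with classical two-valued logic ...
assert OR(0, 0) == 0 and OR(0, 1) == OR(1, 0) == OR(1, 1) == 1

# ... while intermediate values behave like probabilities of
# independent events.
assert AND(0.5, 0.5) == 0.25   # both of two independent coin flips
assert OR(0.5, 0.5) == 0.75    # at least one of two independent coin flips
```

Note that on [0, 1] some Boolean laws fail, e.g. idempotence: AND(0.5, 0.5) is 0.25, not 0.5, which is one way these logics genuinely differ from Boolean algebra.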
The original application for Boolean operations was mathematical logic, where it combines the truth values, true or false, of individual formulas.
Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, the meanings of these logical connectives often have the meaning of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity; for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union, while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea, which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility. "Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P", the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them.
Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0, 1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements.
Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier, this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors and so on.
The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 2^(2^3) = 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. The constants SRC = 0xaa or 0b10101010, DST = 0xcc or 0b11001100, and MSK = 0xf0 or 0b11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time, 0x60 in the (SRC^DST)&MSK example, 0x66 if just SRC^DST, etc. At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression.
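The byte encoding a ternary raster operation is simply the expression evaluated on the three constants: bit i of each constant holds that operand's value in the i-th of the eight possible input combinations, so evaluating any Boolean expression on the constants yields its 8-bit truth table. A sketch in Python (the NOR example is my own addition):

```python
SRC, DST, MSK = 0xaa, 0xcc, 0xf0   # 0b10101010, 0b11001100, 0b11110000

# Evaluating a Boolean expression on the constants produces the one-byte
# truth table of the corresponding ternary raster operation.
assert (SRC ^ DST) & MSK == 0x60
assert SRC ^ DST == 0x66
assert ~(SRC | DST) & 0xff == 0x11   # NOR of source and destination
```

The `& 0xff` in the last line confines Python's unbounded-integer complement to one byte, mirroring the 8-bit parameter the hardware actually takes.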
Solid modeling systems for computer aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y or x − y, which in set theory is set difference, removing the elements of y from those of x. Thus given two shapes, one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference.
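Treating shapes as sets of voxel coordinates makes the machining operation a one-line set difference. A toy sketch in Python (the block and "drill hole" shapes are purely illustrative):

```python
from itertools import product

# A 4x4x4 solid block of voxels, and a 2x2x4 square hole through its middle.
block = set(product(range(4), range(4), range(4)))
hole  = set(product(range(1, 3), range(1, 3), range(4)))

machined = block - hole   # x ∧ ¬y: remove the hole's material from the block

assert len(block) == 64 and len(hole) == 16 and len(machined) == 48
assert machined == {v for v in block if v not in hole}
```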
Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set". The following examples use a syntax supported by Google.[NB 1]
https://en.wikipedia.org/wiki/Boolean_algebra_(logic)
Inmathematicsandabstract algebra, aBoolean domainis asetconsisting of exactly two elements whose interpretations includefalseandtrue. Inlogic, mathematics andtheoretical computer science, a Boolean domain is usually written as {0, 1},[1][2][3][4][5]orB.{\displaystyle \mathbb {B} .}[6][7]
Thealgebraic structurethat naturally builds on a Boolean domain is theBoolean algebra with two elements. Theinitial objectin thecategoryofbounded latticesis a Boolean domain.
Incomputer science, a Boolean variable is avariablethat takes values in some Boolean domain. Someprogramming languagesfeaturereserved wordsor symbols for the elements of the Boolean domain, for examplefalseandtrue. However, many programming languages do not have aBoolean data typein the strict sense. InCorBASIC, for example, falsity is represented by the number 0 and truth is represented by the number 1 or −1, and all variables that can take these values can also take any other numerical values.
The Boolean domain {0, 1} can be replaced by the unit interval [0, 1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law to be 1 − (1 − x)(1 − y) = x + y − xy.
Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.
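The unit-interval generalization above can be sketched in a few lines (the function names are illustrative, not from any particular fuzzy-logic library):

```python
# Sketch of the unit-interval ("fuzzy") versions of NOT, AND, OR.

def not_(x):      # negation: 1 - x
    return 1.0 - x

def and_(x, y):   # conjunction: multiplication
    return x * y

def or_(x, y):    # disjunction via De Morgan: 1 - (1-x)(1-y) = x + y - xy
    return x + y - x * y
```

At the endpoints 0 and 1 these agree with classical logic; at intermediate values, e.g. or_(0.5, 0.5) = 0.75, the result can be read as the probability that at least one of two independent events of probability 0.5 occurs.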
|
https://en.wikipedia.org/wiki/Boolean_domain
|
In mathematics, a Boolean function is a function whose arguments and result assume values from a two-element set (usually {true, false}, {0, 1} or {−1, 1}).[1][2] Alternative names are switching function, used especially in older computer science literature,[3][4] and truth function (or logical function), used in logic. Boolean functions are the subject of Boolean algebra and switching theory.[5]
A Boolean function takes the form f : {0,1}^k → {0,1}, where {0,1} is known as the Boolean domain and k is a non-negative integer called the arity of the function. In the case where k = 0, the function is a constant element of {0,1}. A Boolean function with multiple outputs, f : {0,1}^k → {0,1}^m with m > 1, is a vectorial or vector-valued Boolean function (an S-box in symmetric cryptography).[6]
There are 2^(2^k) different Boolean functions with k arguments, equal to the number of different truth tables with 2^k entries.
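This count can be verified by enumerating truth tables directly; a short sketch (the helper name is illustrative):

```python
# Count the Boolean functions of arity k by enumerating truth tables,
# each represented as a tuple of 2**k output bits.
from itertools import product

def boolean_functions(k):
    """Return every k-ary Boolean function as a truth-table tuple."""
    return list(product((0, 1), repeat=2 ** k))

# 2**(2**k): 4 unary functions, 16 binary functions, 256 ternary functions.
counts = [len(boolean_functions(k)) for k in (1, 2, 3)]
```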
Every k-ary Boolean function can be expressed as a propositional formula in k variables x1, ..., xk, and two propositional formulas are logically equivalent if and only if they express the same Boolean function.
The rudimentary symmetric Boolean functions (logical connectives or logic gates) are:
An example of a more complicated function is the majority function (of an odd number of inputs).
A Boolean function may be specified in a variety of ways:
Algebraically, as a propositional formula using rudimentary Boolean functions:
Boolean formulas can also be displayed as a graph:
In order to optimize electronic circuits, Boolean formulas can be minimized using the Quine–McCluskey algorithm or Karnaugh maps.
A Boolean function can have a variety of properties:[7]
Circuit complexity attempts to classify Boolean functions with respect to the size or depth of circuits that can compute them.
A Boolean function may be decomposed using Boole's expansion theorem in positive and negative Shannon cofactors (Shannon expansion), which are the (k−1)-ary functions resulting from fixing one of the arguments (to 0 or 1). The general k-ary functions obtained by imposing a linear constraint on a set of inputs (a linear subspace) are known as subfunctions.[8]
The Boolean derivative of the function with respect to one of the arguments is a (k−1)-ary function that is true when the output of the function is sensitive to the chosen input variable; it is the XOR of the two corresponding cofactors. A derivative and a cofactor are used in a Reed–Muller expansion. The concept can be generalized as a k-ary derivative in the direction dx, obtained as the difference (XOR) of the function at x and x + dx.[8]
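A sketch of cofactors and the Boolean derivative, for functions represented as Python callables over bits (the helper names cofactor and derivative are illustrative):

```python
# Shannon cofactors and the Boolean derivative (XOR of the two cofactors).

def cofactor(f, i, b):
    """(k-1)-ary function obtained by fixing argument i of f to bit b."""
    return lambda *args: f(*args[:i], b, *args[i:])

def derivative(f, i):
    """Boolean derivative of f with respect to argument i."""
    f0, f1 = cofactor(f, i, 0), cofactor(f, i, 1)
    return lambda *args: f0(*args) ^ f1(*args)

# Example: f(x, y) = x AND y. Its derivative with respect to x is y itself:
# the output is sensitive to x exactly when y = 1.
f = lambda x, y: x & y
df_dx = derivative(f, 0)
```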
The Möbius transform (or Boole–Möbius transform) of a Boolean function is the set of coefficients of its polynomial (algebraic normal form), as a function of the monomial exponent vectors. It is a self-inverse transform. It can be calculated efficiently using a butterfly algorithm ("fast Möbius transform"), analogous to the fast Fourier transform.[9] Coincident Boolean functions are equal to their Möbius transform, i.e. their truth table (minterm) values equal their algebraic (monomial) coefficients.[10] There are 2^(2^(k−1)) coincident functions of k arguments.[11]
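The butterfly-style fast Möbius transform can be sketched as follows, assuming the truth table is given in minterm order (index bit i holds the value of variable i):

```python
# Fast Möbius transform: truth table -> algebraic normal form coefficients.

def mobius_transform(tt):
    """Return ANF coefficients of a Boolean function given as a truth table."""
    a = list(tt)
    n = len(a)
    step = 1
    while step < n:
        for i in range(n):
            if i & step:
                a[i] ^= a[i ^ step]   # XOR in the entry with this bit cleared
        step <<= 1
    return a

# XOR of two variables has ANF x0 + x1, so its coefficient vector is [0,1,1,0]:
anf = mobius_transform([0, 1, 1, 0])
```

Applying the transform twice returns the original truth table, illustrating that it is self-inverse.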
The Walsh transform of a Boolean function is a k-ary integer-valued function giving the coefficients of a decomposition into linear functions (Walsh functions), analogous to the decomposition of real-valued functions into harmonics by the Fourier transform. Its square is the power spectrum or Walsh spectrum. The Walsh coefficient of a single bit vector is a measure for the correlation of that bit with the output of the Boolean function. The maximum (in absolute value) Walsh coefficient is known as the linearity of the function.[8] The highest number of bits (order) for which all Walsh coefficients are 0 (i.e. the subfunctions are balanced) is known as resiliency, and the function is said to be correlation immune to that order.[8] The Walsh coefficients play a key role in linear cryptanalysis.
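A direct (non-fast) sketch of the Walsh transform, computing W(w) = Σ_x (−1)^(f(x) ⊕ w·x) from a truth table:

```python
# Walsh spectrum of a Boolean function given as a truth table of 2**k bits.

def walsh_transform(tt):
    n = len(tt)
    spectrum = []
    for w in range(n):
        total = 0
        for x in range(n):
            dot = bin(w & x).count("1") & 1   # parity of the inner product w.x
            total += (-1) ** (tt[x] ^ dot)
        spectrum.append(total)
    return spectrum

# A linear function (here f = x0, truth table [0, 1, 0, 1]) has a single
# coefficient of magnitude 2**k, the maximum possible linearity.
spec = walsh_transform([0, 1, 0, 1])
```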
The autocorrelation of a Boolean function is a k-ary integer-valued function giving the correlation between a certain set of changes in the inputs and the function output. For a given bit vector it is related to the Hamming weight of the derivative in that direction. The maximal autocorrelation coefficient (in absolute value) is known as the absolute indicator.[7][8] If all autocorrelation coefficients are 0 (i.e. the derivatives are balanced) for a certain number of bits, then the function is said to satisfy the propagation criterion to that order; if they are all zero, then the function is a bent function.[12] The autocorrelation coefficients play a key role in differential cryptanalysis.
The Walsh coefficients of a Boolean function and its autocorrelation coefficients are related by the equivalent of the Wiener–Khinchin theorem, which states that the autocorrelation and the power spectrum are a Walsh transform pair.[8]
These concepts can be extended naturally to vectorial Boolean functions by considering their output bits (coordinates) individually, or more thoroughly, by looking at the set of all linear functions of output bits, known as its components.[6] The set of Walsh transforms of the components is known as a linear approximation table (LAT)[13][14] or correlation matrix;[15][16] it describes the correlation between different linear combinations of input and output bits. The set of autocorrelation coefficients of the components is the autocorrelation table,[14] related by a Walsh transform of the components[17] to the more widely used difference distribution table (DDT),[13][14] which lists the correlations between differences in input and output bits (see also: S-box).
Any Boolean function f(x) : {0,1}^n → {0,1} can be uniquely extended (interpolated) to the real domain by a multilinear polynomial in R^n, constructed by summing the truth table values multiplied by indicator polynomials:

f*(x) = Σ_{a ∈ {0,1}^n} f(a) Π_{i : a_i = 1} x_i Π_{i : a_i = 0} (1 − x_i)

For example, the extension of the binary XOR function x ⊕ y is

0(1 − x)(1 − y) + 1·x(1 − y) + 1·(1 − x)y + 0·xy,

which equals x + y − 2xy. Some other examples are negation (1 − x), AND (xy) and OR (x + y − xy). When all operands are independent (share no variables), a function's polynomial form can be found by repeatedly applying the polynomials of the operators in a Boolean formula. When the coefficients are calculated modulo 2, one obtains the algebraic normal form (Zhegalkin polynomial).
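The interpolation formula above can be sketched directly (the helper name multilinear_extension is illustrative):

```python
# Multilinear (real) extension of a Boolean function from its truth table:
# f*(x) = sum_a f(a) * prod_{a_i=1} x_i * prod_{a_i=0} (1 - x_i).
from itertools import product

def multilinear_extension(f, n):
    def fstar(*x):
        total = 0.0
        for a in product((0, 1), repeat=n):
            term = float(f(*a))
            for ai, xi in zip(a, x):
                term *= xi if ai == 1 else (1.0 - xi)
            total += term
        return total
    return fstar

# XOR extends to x + y - 2xy:
xor_star = multilinear_extension(lambda x, y: x ^ y, 2)
```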
Direct expressions for the coefficients of the polynomial can be derived by taking an appropriate derivative:

f*(00) = (f*)(00) = f(00)
f*(01) = (∂1 f*)(00) = −f(00) + f(01)
f*(10) = (∂2 f*)(00) = −f(00) + f(10)
f*(11) = (∂1 ∂2 f*)(00) = f(00) − f(01) − f(10) + f(11)

This generalizes as the Möbius inversion of the partially ordered set of bit vectors:

f*(m) = Σ_{a ⊆ m} (−1)^(|a| + |m|) f(a),

where |a| denotes the weight of the bit vector a. Taken modulo 2, this is the Boolean Möbius transform, giving the algebraic normal form coefficients:

f̂(m) = ⊕_{a ⊆ m} f(a).

In both cases, the sum is taken over all bit vectors a covered by m, i.e. the "one" bits of a form a subset of the one bits of m.
When the domain is restricted to the n-dimensional hypercube [0,1]^n, the polynomial f*(x) : [0,1]^n → [0,1] gives the probability of a positive outcome when the Boolean function f is applied to n independent random (Bernoulli) variables with individual probabilities x. A special case of this fact is the piling-up lemma for parity functions. The polynomial form of a Boolean function can also be used as its natural extension to fuzzy logic.
Often, the Boolean domain is taken as {−1, 1}, with false ("0") mapping to 1 and true ("1") to −1 (see Analysis of Boolean functions). The polynomial corresponding to g(x) : {−1,1}^n → {−1,1} is then given by:

g*(x) = Σ_{a ∈ {−1,1}^n} g(a) Π_{i : a_i = −1} (1 − x_i)/2 Π_{i : a_i = 1} (1 + x_i)/2

Using the symmetric Boolean domain simplifies certain aspects of the analysis, since negation corresponds to multiplying by −1 and linear functions are monomials (XOR is multiplication). This polynomial form thus corresponds to the Walsh transform (in this context also known as the Fourier transform) of the function (see above). The polynomial also has the same statistical interpretation as the one in the standard Boolean domain, except that it now deals with the expected values E(X) = P(X = 1) − P(X = −1) ∈ [−1, 1] (see the piling-up lemma for an example).
Boolean functions play a basic role in questions of complexity theory as well as the design of processors for digital computers, where they are implemented in electronic circuits using logic gates.
The properties of Boolean functions are critical in cryptography, particularly in the design of symmetric key algorithms (see substitution box).
In cooperative game theory, monotone Boolean functions are called simple games (voting games); this notion is applied to solve problems in social choice theory.
|
https://en.wikipedia.org/wiki/Boolean_function
|
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted by 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations.
Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847),[1] and set forth more fully in his An Investigation of the Laws of Thought (1854).[2] According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913,[3] although Charles Sanders Peirce gave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880.[4] Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.[5]
A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of the algebra of concepts.[6] Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets.[7]
Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is, however, seen as connected to the origins of both fields.[8] In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure.[8] For example, the empirical observation that one can manipulate expressions in the algebra of sets by translating them into expressions in Boole's algebra is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.[9][10]
In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting,[11] and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, so he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, so "switching algebra" and "Boolean algebra" are often used interchangeably.[12][13][14]
Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.[15]
Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.[16][17][18] Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic.
Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics.[8] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.
Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra they denote the truth values false and true. These values are represented with the bits 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive or) definable as x + y − xy and negation ¬x as 1 − x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented).
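These arithmetic encodings can be written out directly; a minimal sketch over the bits 0 and 1 (function names are illustrative):

```python
# The Boolean operations expressed in ordinary integer arithmetic,
# identifying the bits 0 and 1 with elements of GF(2).

def xor(x, y):    # addition mod 2
    return (x + y) % 2

def conj(x, y):   # AND is multiplication
    return x * y

def disj(x, y):   # x OR y = x + y - xy
    return x + y - x * y

def neg(x):       # NOT x = 1 - x
    return 1 - x
```

Over the two bit values these agree exactly with Python's built-in bitwise operators ^, &, and |.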
Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can define the indicator function that takes the value 1 on F, and 0 outside F. The most general example is the set of elements of a Boolean algebra, with all of the foregoing being instances thereof.
As with elementary algebra, the purely equational part of the theory may be developed, without considering explicit values for the variables.[19]
While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND (∧) and OR (∨) and the unary operator NOT (¬), collectively referred to as Boolean operators.[20] Variables in Boolean algebra that store the logical values 0 and 1 are called Boolean variables. They are used to store either true or false values.[21] The basic operations on Boolean variables x and y are defined as follows:
Alternatively, the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables as follows:[22]
When used in expressions, the operators are applied according to precedence rules. As with elementary algebra, expressions in parentheses are evaluated first.[23]
If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions:
One might consider that only negation and one of the two other operations are basic, because of the following identities that allow one to define conjunction in terms of negation and disjunction, and vice versa (De Morgan's laws):[24]
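These two identities can be checked exhaustively over the two-element domain; a short sketch (the function names are illustrative):

```python
# Defining AND from NOT and OR, and OR from NOT and AND (De Morgan's laws),
# then verifying both against Python's bitwise operators on all four inputs.

def and_from_or(x, y):
    return 1 - ((1 - x) | (1 - y))   # x AND y = NOT(NOT x OR NOT y)

def or_from_and(x, y):
    return 1 - ((1 - x) & (1 - y))   # x OR y = NOT(NOT x AND NOT y)

bits = (0, 1)
checks = all(and_from_or(x, y) == (x & y) and or_from_and(x, y) == (x | y)
             for x in bits for y in bits)
```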
Operations composed from the basic operations include, among others, the following:
These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs.
A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old, as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in § Axiomatizing Boolean algebra).
Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular, the following laws are common to both kinds of algebra:[25][26]
The following laws hold in Boolean algebra, but not in ordinary algebra:
Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left-hand side would be 1(1 + 1) = 2, while the right-hand side would be 1 (and so on).
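The contrast can be checked mechanically; a sketch using min/max for the Boolean reading of ∧/∨ and ordinary arithmetic for the other reading:

```python
# Absorption law x AND (x OR y) = x: holds over {0, 1} with min/max,
# but its arithmetic reading x * (x + y) = x fails over the integers,
# as the text's x = y = 1 counterexample shows.

bits = (0, 1)
absorption_boolean = all(min(x, max(x, y)) == x for x in bits for y in bits)

x, y = 1, 1
absorption_arithmetic = (x * (x + y) == x)   # 1 * (1 + 1) = 2, not 1
```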
All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged or changes the output in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to be monotone. Thus the axioms so far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬, as follows.[5]
The complement operation is defined by the following two laws.
All properties of negation including the laws below follow from the above two laws alone.[5]
In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called involution law)
But whereas ordinary algebra satisfies the two laws
Boolean algebra satisfies De Morgan's laws:
The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms, as treated in § Boolean algebras.
Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras.
This axiomatization is by no means the only one, or even necessarily the most natural, given that attention was not paid as to whether some of the axioms followed from others; there was simply a choice to stop when enough laws had been noticed, as treated further in § Axiomatizing Boolean algebra. Alternatively, the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1.[27][28] All these definitions of Boolean algebra can be shown to be equivalent.
Principle: If {X, R} is a partially ordered set, then {X, R(inverse)} is also a partially ordered set.
There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences.
But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used.
But if, in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for x ∧ y and x ∨ y in the truth tables have changed places, but that switch is immaterial.
When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged.
One change that did not need to be made as part of this interchange was complement. Complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t.
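Self-duality of the ternary example above (the majority function) can be checked exhaustively; an operation f is self-dual when f(x, y, z) = ¬f(¬x, ¬y, ¬z) for all inputs:

```python
# Checking self-duality of (x AND y) OR (y AND z) OR (z AND x).

def maj(x, y, z):
    return (x & y) | (y & z) | (z & x)

bits = (0, 1)
self_dual = all(maj(x, y, z) == 1 - maj(1 - x, 1 - y, 1 - z)
                for x in bits for y in bits for z in bits)
```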
The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality.[5]: 21–22
A Venn diagram[29] can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x correspond respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention).
The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x.
For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations.
The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle.
While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation.
Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram, because interchanging x and y would have the effect of reflecting the diagram horizontally, and any failure of commutativity would then appear as a failure of symmetry.
Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨.
To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x ∧ y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle.
The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle.
To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right-hand side of the law describes. The result is the same as if we shaded the region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left-hand side of the law describes.
The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged.
The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle.
Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND gates), disjunction (OR gates), and complement (inverters) are as follows:[30]
The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground", while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports.
Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port.
The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter, however, leaves the operation unchanged.
More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd bit out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y.
The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion.
A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X.[5]
(Historically, X itself was required to be nonempty as well, to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations, since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations; 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.)
Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable.
Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide.
Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union, with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers.
Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set, obtained as the union of the empty set of regions, and X, obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again we have finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves.
A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0, 1, 2, ..., 31}, with 0 and 31 indexing the low- and high-order bits respectively. For a smaller example, if X = {a, b, c}, where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b, c}, {a}, {a, c}, {a, b}, and {a, b, c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0, 1] are packed too densely to be written conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0, 1] either black or white independently; the black points then form an arbitrary subset of [0, 1]).
From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010 ∧ 0110 = 0010, 1010 ∨ 0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively.
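These bitwise operations are directly expressible in most programming languages. The following minimal sketch (an illustration, not part of the original text) realizes a concrete Boolean algebra as Python integers of a fixed 4-bit width, with complement taken relative to the top element:

```python
# Bit vectors of width 4 as a concrete Boolean algebra.
WIDTH = 4
MASK = (1 << WIDTH) - 1  # 0b1111, the top element X

def meet(x, y): return x & y      # bitwise AND: intersection
def join(x, y): return x | y      # bitwise OR: union
def comp(x):    return ~x & MASK  # complement relative to X

# The examples from the text: 1010 ∧ 0110, 1010 ∨ 0110, ¬1010
print(format(meet(0b1010, 0b0110), "04b"))  # 0010
print(format(join(0b1010, 0b0110), "04b"))  # 1110
print(format(comp(0b1010), "04b"))          # 0101
```

Masking with `MASK` after `~x` is what makes the complement relative to X rather than to the infinite two's-complement integer.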
The set {0, 1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation: a law is satisfied by all nondegenerate concrete Boolean algebras exactly when it is satisfied by the prototypical one.
This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one since it is concrete. Conversely any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector.
The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete.
The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra.
Instead of showing that the Boolean laws are satisfied, we can postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition.
For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field, etc. characteristic of modern or abstract algebra.
Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition.
The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition.
Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x) can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions.
However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This example is an instance of the following notion.
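The divisor algebra can be checked mechanically. The sketch below (an illustration added here, not from the original text) takes n = 30 and spot-checks the complement laws and distributivity over all divisors:

```python
from math import gcd

# Divisors of a square-free n form a Boolean algebra under
# gcd (meet), lcm (join), and x -> n // x (complement).
n = 30
divisors = [d for d in range(1, n + 1) if n % d == 0]

def lcm(x, y): return x * y // gcd(x, y)

for x in divisors:
    assert gcd(x, n // x) == 1   # x ∧ ¬x = 0 (the bottom element is 1)
    assert lcm(x, n // x) == n   # x ∨ ¬x = 1 (the top element is n)
    for y in divisors:
        for z in divisors:
            # distributivity of meet over join
            assert gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z))

print("Boolean laws checked for the divisors of", n)  # 1,2,3,5,6,10,15,30
```

The same check fails for a non-square-free n such as 12, where gcd(2, 12/2) = 2 ≠ 1, which is why square-freeness is required.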
The next question, whether every abstract Boolean algebra is isomorphic to a concrete one, is answered positively as follows.
That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on theBoolean prime ideal theorem, a choice principle slightly weaker than theaxiom of choice. This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability.
It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here; for example, a relation algebra is a Boolean algebra with additional structure, but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras.
The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold.
In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based.
Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law; one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice.
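Since the two-element algebra is finite, this reduced axiom set can be verified exhaustively. A minimal brute-force sketch (added here for illustration) checks every axiom of the complemented distributive lattice over {0, 1}:

```python
from itertools import product

# Exhaustive check of the complemented-distributive-lattice axioms
# over the two-element Boolean algebra {0, 1}.
AND = lambda a, b: a & b
OR  = lambda a, b: a | b
NOT = lambda a: 1 - a

for a, b, c in product((0, 1), repeat=3):
    assert AND(a, AND(b, c)) == AND(AND(a, b), c)        # associativity
    assert OR(a, OR(b, c)) == OR(OR(a, b), c)
    assert AND(a, b) == AND(b, a)                        # commutativity
    assert OR(a, b) == OR(b, a)
    assert AND(a, OR(a, b)) == a                         # absorption
    assert OR(a, AND(a, b)) == a
    assert AND(a, OR(b, c)) == OR(AND(a, b), AND(a, c))  # distributivity
    assert AND(a, NOT(a)) == 0                           # complement laws
    assert OR(a, NOT(a)) == 1

print("all axioms hold over {0, 1}")
```

Eight triples suffice because every axiom involves at most three variables.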
By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, the single axiom ((a ∣ b) ∣ c) ∣ (a ∣ ((a ∣ c) ∣ a)) = c is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra.[32]
Propositional logic is a logical system that is intimately connected to Boolean algebra.[5] Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in such a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra.
Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ...; Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or ⊤. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions.
The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two-element Boolean algebra).
These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used.
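The classical case, where every assignment ranges over the two-element algebra, makes tautology-checking a finite computation. A small sketch (an illustration added here; the helper names are of course hypothetical) enumerates all assignments over {0, 1}:

```python
from itertools import product

# A formula is a classical tautology iff every truth assignment over
# the two-element Boolean algebra gives it the value 1.
def is_tautology(formula, variables):
    return all(formula(*vals)
               for vals in product((0, 1), repeat=len(variables)))

# Implication via the Boolean identity  x -> y  =  ¬x ∨ y
implies = lambda x, y: (1 - x) | y

print(is_tautology(lambda p: implies(p, p), "P"))      # True:  P -> P
print(is_tautology(lambda p, q: implies(p, q), "PQ"))  # False: P -> Q
```

With n variables this checks 2^n assignments, which is why "all Boolean laws" is decidable even though the set of laws is infinite.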
One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language.[33] Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese". The generic or abstract form of this tautology is "if P, then P", or in the language of Boolean algebra, P → P.
Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4.
Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P).
(The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.)
An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions, each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.[34]
Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts: propositions as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A ∨ B, A ∧ C, ... ⊢ A, B → C, .... The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent.
Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∨ y = y. This ability to mix external implication ⊢ and internal implication → in one logic is among the essential differences between sequent calculus and propositional calculus.[35]
Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.[5]
In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved that such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits.
Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. (Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.)
Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low.
Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base-two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.
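The carry distinction can be seen directly by applying both families of operations to the same bit sequences; a short sketch (added for illustration):

```python
# Numeric vs. logical operations on the same bit sequences:
# arithmetic addition propagates a carry between positions,
# bitwise (Boolean) operations act position by position.
a, b = 0b0110, 0b0011

print(format(a + b, "04b"))  # 1001: addition, with carries
print(format(a | b, "04b"))  # 0111: bitwise OR, no interaction between positions
print(format(a & b, "04b"))  # 0010: bitwise AND
print(format(a ^ b, "04b"))  # 0101: XOR, i.e. addition with carries discarded
```

XOR is exactly per-position addition modulo 2, which is why the sum 1001 and the XOR 0101 differ precisely where a carry occurred.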
Other areas where two values is a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer (is the defendant guilty or not guilty, is the proposition true or false) and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes-or-no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right.
A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low.
Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory.
Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0, 1], in which case, rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth: to what extent a proposition is true, or the probability that the proposition is true.
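These three substitutions are easy to state in code. A minimal sketch (an illustration added here, using the definitions above) of the operations on [0, 1]:

```python
# Fuzzy-logic operations on the unit interval [0, 1]:
# NOT is 1 - x, AND is multiplication, OR is derived via De Morgan's law.
def f_not(x): return 1 - x
def f_and(x, y): return x * y
def f_or(x, y): return f_not(f_and(f_not(x), f_not(y)))  # 1 - (1-x)(1-y)

# At the endpoints 0 and 1 this degenerates to classical two-valued logic:
assert f_not(0) == 1 and f_and(1, 1) == 1 and f_or(0, 1) == 1

# Intermediate degrees of truth:
print(f_and(0.5, 0.5))  # 0.25
print(f_or(0.5, 0.5))   # 0.75
```

Note that some Boolean laws fail on the interior of the interval; for example f_or(0.5, f_not(0.5)) = 0.75 ≠ 1, so excluded middle holds only in degree.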
The original application for Boolean operations was mathematical logic, where it combines the truth values, true or false, of individual formulas.
Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, the meanings of these logical connectives often match those of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity; for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union, while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea, which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility. "Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P", the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them.
Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0, 1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements.
Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier, this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors, and so on.
The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 2^(2^3) = 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. The constants SRC = 0xaa or 0b10101010, DST = 0xcc or 0b11001100, and MSK = 0xf0 or 0b11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time: 0x60 in the (SRC^DST)&MSK example, 0x66 if just SRC^DST, etc. At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression.
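The operation byte can be computed directly from the three constants, since each constant is a generator's full truth table packed into eight bits; a quick sketch (added for illustration):

```python
# Each generator's truth table over the 8 input combinations is packed
# into one byte, so evaluating any ternary Boolean expression on these
# constants yields the byte identifying that raster operation.
SRC, DST, MSK = 0xAA, 0xCC, 0xF0

rop = (SRC ^ DST) & MSK
print(hex(rop))        # 0x60: the raster-op byte for (SRC^DST)&MSK
print(hex(SRC ^ DST))  # 0x66: the byte for plain SRC^DST
```

In effect the three constants are the bit-vector realizations of the three projections, so ordinary bitwise evaluation of the expression is exactly evaluation in the free Boolean algebra on three generators.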
Solid modeling systems for computer-aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting, understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y or x − y, which in set theory is set difference: remove the elements of y from those of x. Thus, given two shapes, one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference.
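A toy sketch (an illustration added here, with made-up shapes) of machining as set difference, with shapes as sets of voxel coordinates:

```python
# Shapes as subsets of a voxel space; machining is x ∧ ¬y, i.e. set difference.
block = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}
drill = {(1, 1, z) for z in range(3)}  # a hole down the middle column

machined = block - drill  # remove the drilled material from the block

print(len(block), len(drill), len(machined))  # 27 3 24
print((1, 1, 0) in machined)                  # False: inside the hole
print((0, 0, 0) in machined)                  # True: untouched corner
```

Union and intersection of shapes work the same way with `|` and `&` on the voxel sets.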
Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set". The following examples use a syntax supported by Google.
Source: https://en.wikipedia.org/wiki/Boolean_logic
Causality is an influence by which one event, process, state, or object (a cause) contributes to the production of another event, process, state, or object (an effect), where the cause is at least partly responsible for the effect, and the effect is at least partly dependent on the cause.[1] The cause of something may also be described as the reason for the event or process.[2]
In general, a process can have multiple causes,[1] which are also said to be causal factors for it, and all lie in its past. An effect can in turn be a cause of, or causal factor for, many other effects, which all lie in its future. Some writers have held that causality is metaphysically prior to notions of time and space.[3][4][5] Causality is an abstraction that indicates how the world progresses.[6] As such it is a basic concept; it is more apt to be an explanation of other concepts of progression than something to be explained by other, more fundamental concepts. The concept is like those of agency and efficacy. For this reason, a leap of intuition may be needed to grasp it.[7][8] Accordingly, causality is implicit in the structure of ordinary language,[9] as well as explicit in the language of scientific causal notation.
In English studies of Aristotelian philosophy, the word "cause" is used as a specialized technical term, the translation of Aristotle's term αἰτία, by which Aristotle meant "explanation" or "answer to a 'why' question". Aristotle categorized the four types of answers as material, formal, efficient, and final "causes". In this case, the "cause" is the explanans for the explanandum, and failure to recognize that different kinds of "cause" are being considered can lead to futile debate. Of Aristotle's four explanatory modes, the one nearest to the concerns of the present article is the "efficient" one.
David Hume, as part of his opposition to rationalism, argued that pure reason alone cannot prove the reality of efficient causality; instead, he appealed to custom and mental habit, observing that all human knowledge derives solely from experience.
The topic of causality remains a staple in contemporary philosophy.
The nature of cause and effect is a concern of the subject known as metaphysics. Kant thought that time and space were notions prior to human understanding of the progress or evolution of the world, and he also recognized the priority of causality. But he did not have the understanding that came with knowledge of Minkowski geometry and the special theory of relativity, that the notion of causality can be used as a prior foundation from which to construct notions of time and space.[3][4][5]
A general metaphysical question about cause and effect is: "what kind of entity can be a cause, and what kind of entity can be an effect?"
One viewpoint on this question is that cause and effect are of one and the same kind of entity, causality being an asymmetric relation between them. That is to say, it would make good sense grammatically to say either "A is the cause and B the effect" or "B is the cause and A the effect", though only one of those two can be actually true. In this view, one opinion, proposed as a metaphysical principle in process philosophy, is that every cause and every effect is respectively some process, event, becoming, or happening.[4] An example is "his tripping over the step was the cause, and his breaking his ankle the effect". Another view is that causes and effects are "states of affairs", with the exact natures of those entities being more loosely defined than in process philosophy.[10]
Another viewpoint on this question is the more classical one, that a cause and its effect can be of different kinds of entity. For example, in Aristotle's efficient causal explanation, an action can be a cause while an enduring object is its effect. Thus the generative actions of his parents can be regarded as the efficient cause, with Socrates being the effect; Socrates here is regarded as an enduring object, in philosophical tradition called a "substance", as distinct from an action.
Since causality is a subtle metaphysical notion, considerable intellectual effort, along with exhibition of evidence, is needed to establish knowledge of it in particular empirical circumstances. According to David Hume, the human mind is unable to perceive causal relations directly. On this ground, he distinguished between the regularity view of causality and the counterfactual notion.[11] According to the counterfactual view, X causes Y if and only if, without X, Y would not exist. Hume interpreted the latter as an ontological view, i.e., as a description of the nature of causality, but, given the limitations of the human mind, advised using the former (stating, roughly, that X causes Y if and only if the two events are spatiotemporally conjoined, and X precedes Y) as an epistemic definition of causality. We need an epistemic concept of causality in order to distinguish between causal and noncausal relations. The contemporary philosophical literature on causality can be divided into five big approaches: the regularity, probabilistic, counterfactual, mechanistic, and manipulationist views. The five approaches can be shown to be reductive, i.e., to define causality in terms of relations of other types.[12] According to this reading, they define causality in terms of, respectively, empirical regularities (constant conjunctions of events), changes in conditional probabilities, counterfactual conditions, mechanisms underlying causal relations, and invariance under intervention.
Causality has the properties of antecedence and contiguity.[13][14] These are topological, and are ingredients for space-time geometry. As developed by Alfred Robb, these properties allow the derivation of the notions of time and space.[15] Max Jammer writes "the Einstein postulate ... opens the way to a straightforward construction of the causal topology ... of Minkowski space."[16] Causal efficacy propagates no faster than light.[17]
Thus, the notion of causality is metaphysically prior to the notions of time and space. In practical terms, this is because use of the relation of causality is necessary for the interpretation of empirical experiments. Interpretation of experiments is needed to establish the physical and geometrical notions of time and space.
The deterministic world-view holds that the history of the universe can be exhaustively represented as a progression of events following one after the other as cause and effect.[14] Incompatibilism holds that determinism is incompatible with free will, so if determinism is true, "free will" does not exist. Compatibilism, on the other hand, holds that determinism is compatible with, or even necessary for, free will.[18]
Causes may sometimes be distinguished into two types: necessary and sufficient.[19]A third type of causation, which requires neither necessity nor sufficiency, but which contributes to the effect, is called a "contributory cause".
J. L. Mackie argues that usual talk of "cause" in fact refers to INUS conditions (Insufficient but Non-redundant parts of a condition which is itself Unnecessary but Sufficient for the occurrence of the effect).[22] An example is a short circuit as a cause for a house burning down. Consider the collection of events: the short circuit, the proximity of flammable material, and the absence of firefighters. Together these are unnecessary but sufficient for the house's burning down (since many other collections of events certainly could have led to the house burning down, for example shooting the house with a flamethrower in the presence of oxygen, and so forth). Within this collection, the short circuit is an insufficient (since the short circuit by itself would not have caused the fire) but non-redundant (because the fire would not have happened without it, everything else being equal) part of a condition which is itself unnecessary but sufficient for the occurrence of the effect. So the short circuit is an INUS condition for the occurrence of the house burning down.
Conditional statements are not statements of causality. An important distinction is that statements of causality require the antecedent to precede or coincide with the consequent in time, whereas conditional statements do not require this temporal order. Confusion commonly arises since many different statements in English may be presented using "If ..., then ..." form (and, arguably, because this form is far more commonly used to make a statement of causality). The two types of statements are distinct, however.
For example, all of the following statements are true when interpreting "If ..., then ..." as the material conditional:
The first is true since both the antecedent and the consequent are true. The second is true in sentential logic and indeterminate in natural language, regardless of the consequent statement that follows, because the antecedent is false.
The ordinary indicative conditional has somewhat more structure than the material conditional. For instance, although the first is the closest, neither of the preceding two statements seems true as an ordinary indicative reading. But the sentence:
intuitively seems to be true, even though there is no straightforward causal relation in this hypothetical situation between Shakespeare's not writing Macbeth and someone else's actually writing it.
Another sort of conditional, the counterfactual conditional, has a stronger connection with causality, yet even counterfactual statements are not all examples of causality. Consider the following two statements:
In the first case, it would be incorrect to say that A's being a triangle caused it to have three sides, since the relationship between triangularity and three-sidedness is that of definition. The property of having three sides actually determines A's state as a triangle. Nonetheless, even when interpreted counterfactually, the first statement is true. An early version of Aristotle's "four cause" theory is described as recognizing "essential cause". In this version of the theory, that the closed polygon has three sides is said to be the "essential cause" of its being a triangle.[23] This use of the word 'cause' is now long obsolete. Nevertheless, it is within the scope of ordinary language to say that it is essential to a triangle that it has three sides.
A full grasp of the concept of conditionals is important to understanding the literature on causality. In everyday language, loose conditional statements are often made, and they need to be interpreted carefully.
Fallacies of questionable cause, also known as causal fallacies, non-causa pro causa (Latin for "non-cause for cause"), or false cause, are informal fallacies where a cause is incorrectly identified.
Counterfactual theories define causation in terms of a counterfactual relation, and can often be seen as "floating" their account of causality on top of an account of the logic of counterfactual conditionals. Counterfactual theories reduce facts about causation to facts about what would have been true under counterfactual circumstances.[24] The idea is that causal relations can be framed in the form "Had C not occurred, E would not have occurred." This approach can be traced back to David Hume's definition of the causal relation as that "where, if the first object had not been, the second never had existed."[25] More full-fledged analyses of causation in terms of counterfactual conditionals came only in the 20th century, after the development of the possible world semantics for the evaluation of counterfactual conditionals. In his 1973 paper "Causation," David Lewis proposed the following definition of the notion of causal dependence:[26]
Causation is then analyzed in terms of counterfactual dependence. That is, C causes E if and only if there exists a sequence of events C, D1, D2, ..., Dk, E such that each event in the sequence counterfactually depends on the previous one. This chain of causal dependence may be called a mechanism.
Note that the analysis does not purport to explain how we make causal judgements or how we reason about causation, but rather to give a metaphysical account of what it is for there to be a causal relation between some pair of events. If correct, the analysis has the power to explain certain features of causation. Knowing that causation is a matter of counterfactual dependence, we may reflect on the nature of counterfactual dependence to account for the nature of causation. For example, in his paper "Counterfactual Dependence and Time's Arrow," Lewis sought to account for the time-directedness of counterfactual dependence in terms of the semantics of the counterfactual conditional.[27] If correct, this theory can serve to explain a fundamental part of our experience, which is that we can causally affect the future but not the past.
One challenge for the counterfactual account is overdetermination, whereby an effect has multiple causes. For instance, suppose Alice and Bob both throw bricks at a window and it breaks. If Alice hadn't thrown the brick, then it still would have broken, suggesting that Alice wasn't a cause; however, intuitively, Alice did cause the window to break. The Halpern-Pearl definitions of causality take account of examples like these.[28] The first and third Halpern-Pearl conditions are easiest to understand: AC1 requires that Alice threw the brick and the window broke in the actual world. AC3 requires that Alice throwing the brick is a minimal cause (cf. blowing a kiss and throwing a brick). Taking the "updated" version of AC2(a), the basic idea is that we have to find a set of variables and settings thereof such that preventing Alice from throwing a brick also stops the window from breaking. One way to do this is to stop Bob from throwing the brick. Finally, for AC2(b), we have to hold things as per AC2(a) and show that Alice throwing the brick breaks the window. (The full definition is a little more involved, involving checking all subsets of variables.)
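The window example can be written as a tiny structural-equation sketch. This is an illustrative assumption, not the full Halpern-Pearl formalism: simple counterfactual dependence on Alice fails, but holding Bob fixed at "does not throw", in the spirit of AC2, recovers Alice as a cause.

```python
def window_broken(alice_throws: bool, bob_throws: bool) -> bool:
    # Illustrative structural equation: either brick suffices to break the window.
    return alice_throws or bob_throws

# AC1: in the actual world, both throw and the window breaks.
actual = window_broken(True, True)

# Naive counterfactual: deleting only Alice's throw leaves the window broken,
# so plain counterfactual dependence misses Alice as a cause.
naive_dependence = not window_broken(False, True)

# AC2-style contingency: hold Bob to not throwing; now Alice's throw makes
# the difference in both directions.
ac2a = not window_broken(False, False)   # without Alice, no breakage
ac2b = window_broken(True, False)        # with Alice restored, breakage returns
```

The full definitions quantify over all variable subsets; this sketch only shows the single contingency that matters here.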
Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer or emphysema. As a result, many turn to a notion of probabilistic causation. Informally, A ("The person is a smoker") probabilistically causes B ("The person has now or will have cancer at some time in the future") if the information that A occurred increases the likelihood of B's occurrence. Formally, P{B|A} ≥ P{B}, where P{B|A} is the conditional probability that B will occur given the information that A occurred, and P{B} is the probability that B will occur having no knowledge of whether A did or did not occur. This intuitive condition is not adequate as a definition for probabilistic causation because it is too general and thus does not meet our intuitive notion of cause and effect. For example, if A denotes the event "The person is a smoker," B denotes the event "The person now has or will have cancer at some time in the future" and C denotes the event "The person now has or will have emphysema some time in the future," then the following three relationships hold: P{B|A} ≥ P{B}, P{C|A} ≥ P{C} and P{B|C} ≥ P{B}. The last relationship states that knowing that the person has emphysema increases the likelihood that he will have cancer. The reason for this is that having the information that the person has emphysema increases the likelihood that the person is a smoker, thus indirectly increasing the likelihood that the person will have cancer. However, we would not want to conclude that having emphysema causes cancer. Thus, we need additional conditions, such as a temporal relationship of A to B and a rational explanation as to the mechanism of action. It is hard to quantify this last requirement, and thus different authors prefer somewhat different definitions.[citation needed]
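A quick simulation illustrates why P{B|A} ≥ P{B} alone is too weak. Under assumed (invented) rates in which smoking raises both cancer and emphysema risk, conditioning on emphysema also raises the estimated probability of cancer, even though emphysema causes no cancer in the model:

```python
import random

random.seed(0)
N = 200_000
people = []
for _ in range(N):
    smoker = random.random() < 0.3
    # Hypothetical rates: smoking raises both risks; emphysema does not
    # cause cancer anywhere in this model.
    cancer = random.random() < (0.15 if smoker else 0.03)
    emphysema = random.random() < (0.20 if smoker else 0.02)
    people.append((smoker, cancer, emphysema))

def prob(event, given=lambda person: True):
    """Empirical conditional probability over the simulated population."""
    pool = [p for p in people if given(p)]
    return sum(1 for p in pool if event(p)) / len(pool)

p_cancer = prob(lambda p: p[1])
p_cancer_given_smoker = prob(lambda p: p[1], given=lambda p: p[0])
p_cancer_given_emphysema = prob(lambda p: p[1], given=lambda p: p[2])
```

Both conditionals exceed the marginal probability, yet only smoking is a cause in the model; the emphysema association is routed entirely through smoking.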
When experimental interventions are infeasible or illegal, the derivation of a cause-and-effect relationship from observational studies must rest on some qualitative theoretical assumptions, for example, that symptoms do not cause diseases, usually expressed in the form of missing arrows in causal graphs such as Bayesian networks or path diagrams. The theory underlying these derivations relies on the distinction between conditional probabilities, as in P(cancer | smoking), and interventional probabilities, as in P(cancer | do(smoking)). The former reads: "the probability of finding cancer in a person known to smoke, having started, unforced by the experimenter, to do so at an unspecified time in the past", while the latter reads: "the probability of finding cancer in a person forced by the experimenter to smoke at a specified time in the past". The former is a statistical notion that can be estimated by observation with negligible intervention by the experimenter, while the latter is a causal notion which is estimated in an experiment with an important controlled randomized intervention. It is specifically characteristic of quantal phenomena that observations defined by incompatible variables always involve important intervention by the experimenter, as described quantitatively by the observer effect.[vague] In classical thermodynamics, processes are initiated by interventions called thermodynamic operations. In other branches of science, for example astronomy, the experimenter can often observe with negligible intervention.
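The difference between the two readings can be made concrete with a small simulation under an assumed confounding structure (a hypothetical "gene" that raises both the propensity to smoke and the cancer risk; all rates are invented): P(cancer | smoking) is estimated by filtering observations, while P(cancer | do(smoking)) is estimated by overriding the smoking mechanism.

```python
import random

random.seed(1)

def draw(do_smoke=None):
    gene = random.random() < 0.2                 # hypothetical common cause
    if do_smoke is None:                         # observational regime
        smoke = random.random() < (0.6 if gene else 0.2)
    else:                                        # interventional regime: do(smoke)
        smoke = do_smoke
    cancer = random.random() < 0.05 + 0.10 * smoke + 0.10 * gene
    return smoke, cancer

N = 200_000
observed = [draw() for _ in range(N)]
smokers = [c for s, c in observed if s]
p_see = sum(smokers) / len(smokers)              # estimates P(cancer | smoking)

forced = [draw(do_smoke=True) for _ in range(N)]
p_do = sum(c for _, c in forced) / N             # estimates P(cancer | do(smoking))
```

Here p_see exceeds p_do because observed smokers are disproportionately gene carriers; forcing everyone to smoke breaks that back-door path.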
The theory of "causal calculus"[29] (also known as do-calculus, Judea Pearl's causal calculus, or the calculus of actions) permits one to infer interventional probabilities from conditional probabilities in causal Bayesian networks with unmeasured variables. One very practical result of this theory is the characterization of confounding variables, namely, a sufficient set of variables that, if adjusted for, would yield the correct causal effect between variables of interest. It can be shown that a sufficient set for estimating the causal effect of X on Y is any set of non-descendants of X that d-separate X from Y after removing all arrows emanating from X. This criterion, called "backdoor", provides a mathematical definition of "confounding" and helps researchers identify accessible sets of variables worthy of measurement.
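The backdoor adjustment can be shown exactly on a three-variable discrete model (the probability tables are invented for illustration) with Z → X, Z → Y and X → Y, where Z satisfies the backdoor criterion for the effect of X on Y:

```python
# Hypothetical discrete model: Z (confounder) -> X, Z -> Y, X -> Y.
P_z = {0: 0.8, 1: 0.2}                            # P(Z = z)
P_x_given_z = {0: 0.2, 1: 0.6}                    # P(X = 1 | Z = z)
P_y_given_xz = {(0, 0): 0.05, (1, 0): 0.15,
                (0, 1): 0.15, (1, 1): 0.25}       # P(Y = 1 | X = x, Z = z)

# Observational conditional: P(Y=1 | X=1) weights Z by P(z | X=1).
p_x1 = sum(P_x_given_z[z] * P_z[z] for z in (0, 1))
p_y_given_x1 = sum(P_y_given_xz[(1, z)] * P_x_given_z[z] * P_z[z]
                   for z in (0, 1)) / p_x1

# Backdoor adjustment: P(Y=1 | do(X=1)) weights Z by its marginal P(z).
p_y_do_x1 = sum(P_y_given_xz[(1, z)] * P_z[z] for z in (0, 1))
```

Adjusting by P(z) rather than P(z | x) removes the confounding, so p_y_do_x1 (0.17 in this model) differs from the naive conditional p_y_given_x1.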
While derivations in causal calculus rely on the structure of the causal graph, parts of the causal structure can, under certain assumptions, be learned from statistical data. The basic idea goes back to Sewall Wright's 1921 work[30] on path analysis. A "recovery" algorithm was developed by Rebane and Pearl (1987)[31] which rests on Wright's distinction between the three possible types of causal substructures allowed in a directed acyclic graph (DAG): type 1, the chain X → Y → Z; type 2, the fork X ← Y → Z; and type 3, the collider X → Y ← Z.
Type 1 and type 2 represent the same statistical dependencies (i.e., X and Z are independent given Y) and are, therefore, indistinguishable within purely cross-sectional data. Type 3, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common ancestors, except that one must first condition on those ancestors. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independencies observed.[29][32][33][34]
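These dependence patterns are easy to verify by simulation. The sketch below (with arbitrary illustrative probabilities) builds data from a chain X → Y → Z, a fork X ← Y → Z, and a collider X → Y ← Z, and measures how much X tells us about Z before and after conditioning on Y:

```python
import random

random.seed(2)
N = 100_000
bern = lambda p: random.random() < p   # Bernoulli draw

chain, fork, collider = [], [], []
for _ in range(N):
    # Type 1, chain: X -> Y -> Z
    x = bern(0.5); y = bern(0.8 if x else 0.2); z = bern(0.8 if y else 0.2)
    chain.append((x, y, z))
    # Type 2, fork: X <- Y -> Z
    y = bern(0.5); x = bern(0.8 if y else 0.2); z = bern(0.8 if y else 0.2)
    fork.append((x, y, z))
    # Type 3, collider: X -> Y <- Z
    x = bern(0.5); z = bern(0.5); y = bern(0.8 if (x and z) else 0.2)
    collider.append((x, y, z))

def dependence(rows, cond=lambda r: True):
    """|P(Z | X=1, cond) - P(Z | X=0, cond)| as a crude dependence measure."""
    on = [r for r in rows if cond(r) and r[0]]
    off = [r for r in rows if cond(r) and not r[0]]
    p = lambda rs: sum(r[2] for r in rs) / len(rs)
    return abs(p(on) - p(off))
```

The chain and the fork show the same signature (X and Z dependent marginally, independent given Y), while the collider shows the reverse, which is exactly what makes type 3 identifiable.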
Alternative methods of structure learning search through the many possible causal structures among the variables, and remove those that are strongly incompatible with the observed correlations. In general this leaves a set of possible causal relations, which should then be tested by analyzing time series data or, preferably, by designing appropriately controlled experiments. In contrast with Bayesian networks, path analysis (and its generalization, structural equation modeling) serves better to estimate a known causal effect or to test a causal model than to generate causal hypotheses.
For nonexperimental data, causal direction can often be inferred if information about time is available. This is because (according to many, though not all, theories) causes must precede their effects temporally. This can be determined by statistical time series models, for instance, or with a statistical test based on the idea of Granger causality, or by direct experimental manipulation. The use of temporal data can permit statistical tests of a pre-existing theory of causal direction. For instance, our degree of confidence in the direction and nature of causality is much greater when supported by cross-correlations, ARIMA models, or cross-spectral analysis using vector time series data than by cross-sectional data.
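Granger's idea, that x helps predict y beyond y's own past, can be sketched with ordinary least squares on simulated series. The coefficients 0.5 and 0.4 below are invented for the demonstration, and a single lag is used for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    # x's past feeds into y; x itself evolves independently of y.
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.5 * rng.normal()

def rss(target, lagged):
    """Residual sum of squares of target[t] regressed on the lagged series."""
    X = np.column_stack([np.ones(T - 1)] + [s[:-1] for s in lagged])
    beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    resid = target[1:] - X @ beta
    return resid @ resid

# Relative improvement from adding the other series' lag:
gain_x_to_y = 1 - rss(y, [y, x]) / rss(y, [y])   # substantial
gain_y_to_x = 1 - rss(x, [x, y]) / rss(x, [x])   # negligible
```

A full Granger test wraps this nested-model comparison in an F-statistic over several lags; the asymmetry of the gains here mirrors the inferred causal direction.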
Nobel laureate Herbert A. Simon and philosopher Nicholas Rescher[35] claim that the asymmetry of the causal relation is unrelated to the asymmetry of any mode of implication that contraposes. Rather, a causal relation is not a relation between values of variables, but a function of one variable (the cause) onto another (the effect). So, given a system of equations and a set of variables appearing in these equations, we can introduce an asymmetric relation among individual equations and variables that corresponds perfectly to our commonsense notion of a causal ordering. The system of equations must have certain properties; most importantly, if some values are chosen arbitrarily, the remaining values will be determined uniquely through a path of serial discovery that is perfectly causal. They postulate that the inherent serialization of such a system of equations may correctly capture causation in all empirical fields, including physics and economics.
Some theorists have equated causality with manipulability.[36][37][38][39] Under these theories, x causes y only in the case that one can change x in order to change y. This coincides with commonsense notions of causation, since often we ask causal questions in order to change some feature of the world. For instance, we are interested in knowing the causes of crime so that we might find ways of reducing it.
These theories have been criticized on two primary grounds. First, theorists complain that these accounts are circular. Attempting to reduce causal claims to manipulation requires that manipulation be more basic than causal interaction. But describing manipulations in non-causal terms has proved substantially difficult.
The second criticism centers around concerns of anthropocentrism. It seems to many people that causality is some existing relationship in the world that we can harness for our desires. If causality is identified with our manipulation, then this intuition is lost. In this sense, it makes humans overly central to interactions in the world.
Some recent accounts defend manipulability theories without claiming to reduce causality to manipulation. These accounts use manipulation as a sign or feature of causation without claiming that manipulation is more fundamental than causation.[29][40]
Some theorists are interested in distinguishing between causal processes and non-causal processes (Russell 1948; Salmon 1984).[41][42] These theorists often want to distinguish between a process and a pseudo-process. As an example, a ball moving through the air (a process) is contrasted with the motion of a shadow (a pseudo-process). The former is causal in nature while the latter is not.
Salmon (1984)[41]claims that causal processes can be identified by their ability to transmit an alteration over space and time. An alteration of the ball (a mark by a pen, perhaps) is carried with it as the ball goes through the air. On the other hand, an alteration of the shadow (insofar as it is possible) will not be transmitted by the shadow as it moves along.
These theorists claim that the important concept for understanding causality is not causal relationships or causal interactions, but rather identifying causal processes. The former notions can then be defined in terms of causal processes.
A subgroup of the process theories is the mechanistic view of causality. It states that causal relations supervene on mechanisms. While the notion of mechanism is understood in different ways, the definition put forward by the group of philosophers referred to as the 'New Mechanists' dominates the literature.[43]
For the scientific investigation of efficient causality, the cause and effect are each best conceived of as temporally transient processes.
Within the conceptual frame of the scientific method, an investigator sets up several distinct and contrasting temporally transient material processes that have the structure of experiments, and records candidate material responses, normally intending to determine causality in the physical world.[44] For instance, one may want to know whether a high intake of carrots causes humans to develop the bubonic plague. The quantity of carrot intake is a process that is varied from occasion to occasion. The occurrence or non-occurrence of subsequent bubonic plague is recorded. To establish causality, the experiment must fulfill certain criteria, only one example of which is mentioned here. For example, instances of the hypothesized cause must be set up to occur at a time when the hypothesized effect is relatively unlikely in the absence of the hypothesized cause; such unlikelihood is to be established by empirical evidence. A mere observation of a correlation is not nearly adequate to establish causality. In nearly all cases, establishment of causality relies on repetition of experiments and probabilistic reasoning. Hardly ever is causality established more firmly than as more or less probable. It is most convenient for the establishment of causality if the contrasting material states of affairs are precisely matched, except for only one variable factor, perhaps measured by a real number.
One has to be careful in the use of the word cause in physics. Properly speaking, the hypothesized cause and the hypothesized effect are each temporally transient processes. For example, force is a useful concept for the explanation of acceleration, but force is not by itself a cause. More is needed. For example, a temporally transient process might be characterized by a definite change of force at a definite time. Such a process can be regarded as a cause. Causality is not inherently implied in equations of motion, but postulated as an additional constraint that needs to be satisfied (i.e. a cause always precedes its effect). This constraint has mathematical implications[45] such as the Kramers-Kronig relations.
Causality is one of the most fundamental and essential notions of physics.[46] Causal efficacy cannot 'propagate' faster than light. Otherwise, reference coordinate systems could be constructed (using the Lorentz transform of special relativity) in which an observer would see an effect precede its cause (i.e. the postulate of causality would be violated).
Causal notions appear in the context of the flow of mass-energy. Any actual process has causal efficacy that can propagate no faster than light. In contrast, an abstraction has no causal efficacy. Its mathematical expression does not propagate in the ordinary sense of the word, though it may refer to virtual or nominal 'velocities' with magnitudes greater than that of light. For example, wave packets are mathematical objects that have group velocity and phase velocity. The energy of a wave packet travels at the group velocity (under normal circumstances); since energy has causal efficacy, the group velocity cannot be faster than the speed of light. The phase of a wave packet travels at the phase velocity; since phase is not causal, the phase velocity of a wave packet can be faster than light.[47]
Causal notions are important in general relativity to the extent that the existence of an arrow of time demands that the universe's semi-Riemannian manifold be orientable, so that "future" and "past" are globally definable quantities.
A causal system is a system whose output and internal states depend only on the current and previous input values. A system that has some dependence on input values from the future (in addition to possible past or current input values) is termed an acausal system, and a system that depends solely on future input values is an anticausal system. Acausal filters, for example, can only exist as postprocessing filters, because these filters can extract future values from a memory buffer or a file.
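A minimal sketch of the distinction, with arbitrary window lengths: a causal moving average uses only current and past samples, while a zero-phase (acausal) average also reads the next sample, so it is realizable only offline over a buffered signal.

```python
def causal_avg(signal, k=3):
    """y[n] averages x[n-k+1..n]: only current and past inputs."""
    out = []
    for n in range(len(signal)):
        window = signal[max(0, n - k + 1):n + 1]
        out.append(sum(window) / len(window))
    return out

def acausal_avg(signal, k=1):
    """y[n] averages x[n-k..n+k]: peeks k samples into the future."""
    out = []
    for n in range(len(signal)):
        window = signal[max(0, n - k):n + k + 1]
        out.append(sum(window) / len(window))
    return out

step = [0, 0, 0, 1, 1, 1]   # a unit step arriving at index 3
```

At index 2 the causal filter still outputs 0, while the acausal one already responds to the upcoming step.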
One has to be careful with causality in physics and engineering. Cellier, Elmqvist, and Otter[48] describe the idea that causality forms the basis of physics as a misconception, because physics is essentially acausal. In their article they cite a simple example: "The relationship between voltage across and current through an electrical resistor can be described by Ohm's law: V = IR, yet, whether it is the current flowing through the resistor that causes a voltage drop, or whether it is the difference between the electrical potentials on the two wires that causes current to flow is, from a physical perspective, a meaningless question". In fact, if we explain cause and effect using the law, we need two explanations to describe an electrical resistor: as a voltage-drop causer or as a current-flow causer. There is no physical experiment in the world that can distinguish between action and reaction.
Austin Bradford Hill built upon the work of Hume and Popper and suggested in his paper "The Environment and Disease: Association or Causation?" that aspects of an association such as strength, consistency, specificity, and temporality be considered in attempting to distinguish causal from noncausal associations in the epidemiological situation. (See Bradford Hill criteria.) He did not note, however, that temporality is the only necessary criterion among those aspects. Directed acyclic graphs (DAGs) are increasingly used in epidemiology to help clarify causal thinking.[49]
Psychologists take an empirical approach to causality, investigating how people and non-human animals detect or infer causation from sensory information, prior experience and innate knowledge.
Attribution: Attribution theory is the theory concerning how people explain individual occurrences of causation. Attribution can be external (assigning causality to an outside agent or force—claiming that some outside thing motivated the event) or internal (assigning causality to factors within the person—taking personal responsibility or accountability for one's actions and claiming that the person was directly responsible for the event). Taking causation one step further, the type of attribution a person provides influences their future behavior.
The intention behind the cause or the effect can be covered by the subject of action. See also accident; blame; intent; and responsibility.
Whereas David Hume argued that causes are inferred from non-causal observations, Immanuel Kant claimed that people have innate assumptions about causes. Within psychology, Patricia Cheng[8] attempted to reconcile the Humean and Kantian views. According to her power PC theory, people filter observations of events through an intuition that causes have the power to generate (or prevent) their effects, thereby inferring specific cause-effect relations.
Our view of causation depends on what we consider to be the relevant events. Another way to view the statement, "Lightning causes thunder" is to see both lightning and thunder as two perceptions of the same event, viz., an electric discharge that we perceive first visually and then aurally.
David Sobel and Alison Gopnik from the Psychology Department of UC Berkeley designed a device known as the blicket detector, which would turn on when an object was placed on it. Their research suggests that "even young children will easily and swiftly learn about a new causal power of an object and spontaneously use that information in classifying and naming the object."[50]
Some researchers such as Anjan Chatterjee at the University of Pennsylvania and Jonathan Fugelsang at the University of Waterloo are using neuroscience techniques to investigate the neural and psychological underpinnings of causal launching events in which one object causes another object to move. Both temporal and spatial factors can be manipulated.[51]
See Causal Reasoning (Psychology) for more information.
Statistics and economics usually employ pre-existing data or experimental data to infer causality by regression methods. The body of statistical techniques involves substantial use of regression analysis. Typically a linear relationship such as

y_i = a_1 x_{1,i} + a_2 x_{2,i} + ... + a_k x_{k,i} + e_i
is postulated, in which y_i is the ith observation of the dependent variable (hypothesized to be the caused variable), x_{j,i} for j = 1, ..., k is the ith observation on the jth independent variable (hypothesized to be a causative variable), and e_i is the error term for the ith observation (containing the combined effects of all other causative variables, which must be uncorrelated with the included independent variables). If there is reason to believe that none of the x_j's is caused by y, then estimates of the coefficients a_j are obtained. If the null hypothesis that a_j = 0 is rejected, then the alternative hypothesis that a_j ≠ 0, and equivalently that x_j causes y, cannot be rejected. On the other hand, if the null hypothesis that a_j = 0 cannot be rejected, then equivalently the hypothesis of no causal effect of x_j on y cannot be rejected. Here the notion of causality is one of contributory causality as discussed above: if the true value a_j ≠ 0, then a change in x_j will result in a change in y unless some other causative variable(s), either included in the regression or implicit in the error term, change in such a way as to exactly offset its effect; thus a change in x_j is not sufficient to change y. Likewise, a change in x_j is not necessary to change y, because a change in y could be caused by something implicit in the error term (or by some other causative explanatory variable included in the model).
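The test of a_j = 0 can be sketched with ordinary least squares. The data-generating coefficients (an intercept of 2.0 and a slope of 1.5) are invented for the demonstration, and x2 is constructed to have no effect on y:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                        # truly non-causative regressor
y = 2.0 + 1.5 * x1 + rng.normal(size=n)        # error term e_i ~ N(0, 1)

X = np.column_stack([np.ones(n), x1, x2])      # design matrix with intercept
beta = np.linalg.solve(X.T @ X, X.T @ y)       # OLS estimates of the a_j
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])      # error-variance estimate
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta / se                            # t-statistic for each a_j = 0
```

Here the null a_1 = 0 is rejected decisively while a_2 = 0 is not; as the text notes, failing to reject is not evidence of no effect, only an absence of evidence for one.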
The above way of testing for causality requires belief that there is no reverse causation, in which y would cause x_j. This belief can be established in one of several ways. First, the variable x_j may be a non-economic variable: for example, if rainfall amount x_j is hypothesized to affect the futures price y of some agricultural commodity, it is impossible that in fact the futures price affects rainfall amount (provided that cloud seeding is never attempted). Second, the instrumental variables technique may be employed to remove any reverse causation by introducing a role for other variables (instruments) that are known to be unaffected by the dependent variable. Third, the principle that effects cannot precede causes can be invoked, by including on the right side of the regression only variables that precede the dependent variable in time; this principle is invoked, for example, in testing for Granger causality and in its multivariate analog, vector autoregression, both of which control for lagged values of the dependent variable while testing for causal effects of lagged independent variables.
Regression analysis controls for other relevant variables by including them as regressors (explanatory variables). This helps to avoid false inferences of causality due to the presence of a third, underlying, variable that influences both the potentially causative variable and the potentially caused variable: its effect on the potentially caused variable is captured by directly including it in the regression, so that effect will not be picked up as an indirect effect through the potentially causative variable of interest. Given the above procedures, coincidental (as opposed to causal) correlation can be probabilistically rejected if data samples are large and if regression results pass cross-validation tests showing that the correlations hold even for data that were not used in the regression. Asserting with certitude that a common cause is absent and that the regression represents the true causal structure is in principle impossible.[52]
The problem of omitted-variable bias, however, has to be balanced against the risk of inserting causal colliders, in which the addition of a new variable x_{j+1} induces a correlation between x_j and y via Berkson's paradox.[29]
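Collider bias is easy to demonstrate: generate two independent variables and condition on a function of both (the selection threshold below is arbitrary). The selected subsample shows a spurious negative correlation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
x = rng.normal(size=n)
y = rng.normal(size=n)            # independent of x by construction
selected = (x + y) > 1.0          # conditioning on a collider of x and y

corr_all = np.corrcoef(x, y)[0, 1]                         # ~ 0
corr_selected = np.corrcoef(x[selected], y[selected])[0, 1]  # clearly negative
```

Conditioning on the collider, as in Berkson's paradox, manufactures dependence between causes that share an effect, which is why adding regressors is not automatically safe.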
Apart from constructing statistical models of observational and experimental data, economists use axiomatic (mathematical) models to infer and represent causal mechanisms. Highly abstract theoretical models that isolate and idealize one mechanism dominate microeconomics. In macroeconomics, economists use broad mathematical models that are calibrated on historical data. A subgroup of calibrated models, dynamic stochastic general equilibrium (DSGE) models, are employed to represent (in a simplified way) the whole economy and simulate changes in fiscal and monetary policy.[53]
For quality control in manufacturing in the 1960s, Kaoru Ishikawa developed a cause-and-effect diagram, known as an Ishikawa diagram or fishbone diagram. The diagram categorizes causes, such as into the six main categories shown here. These categories are then sub-divided. Ishikawa's method identifies "causes" in brainstorming sessions conducted among various groups involved in the manufacturing process. These groups can then be labeled as categories in the diagrams. The use of these diagrams has now spread beyond quality control, and they are used in other areas of management and in design and engineering. Ishikawa diagrams have been criticized for failing to make the distinction between necessary conditions and sufficient conditions. It seems that Ishikawa was not even aware of this distinction.[54]
In the discussion of history, events are sometimes considered as if in some way being agents that can then bring about other historical events. Thus, the combination of poor harvests, the hardships of the peasants, high taxes, lack of representation of the people, and kingly ineptitude are among the causes of the French Revolution. This is a somewhat Platonic and Hegelian view that reifies causes as ontological entities. In Aristotelian terminology, this use approximates to the case of the efficient cause.
Some philosophers of history such as Arthur Danto have claimed that "explanations in history and elsewhere" describe "not simply an event—something that happens—but a change".[55] Like many practicing historians, they treat causes as intersecting actions and sets of actions which bring about "larger changes", in Danto's words: to decide "what are the elements which persist through a change" is "rather simple" when treating an individual's "shift in attitude", but "it is considerably more complex and metaphysically challenging when we are interested in such a change as, say, the break-up of feudalism or the emergence of nationalism".[56]
Much of the historical debate about causes has focused on the relationship between communicative and other actions, between singular and repeated ones, and between actions, structures of action or group and institutional contexts and wider sets of conditions.[57] John Gaddis has distinguished between exceptional and general causes (following Marc Bloch) and between "routine" and "distinctive links" in causal relationships: "in accounting for what happened at Hiroshima on August 6, 1945, we attach greater importance to the fact that President Truman ordered the dropping of an atomic bomb than to the decision of the Army Air Force to carry out his orders."[58] He has also pointed to the difference between immediate, intermediate and distant causes.[59] For his part, Christopher Lloyd puts forward four "general concepts of causation" used in history: the "metaphysical idealist concept, which asserts that the phenomena of the universe are products of or emanations from an omnipotent being or such final cause"; "the empiricist (or Humean) regularity concept, which is based on the idea of causation being a matter of constant conjunctions of events"; "the functional/teleological/consequential concept", which is "goal-directed, so that goals are causes"; and the "realist, structurist and dispositional approach, which sees relational structures and internal dispositions as the causes of phenomena".[60]
According to law and jurisprudence, legal cause must be demonstrated to hold a defendant liable for a crime or a tort (i.e. a civil wrong such as negligence or trespass). It must be proven that causality, or a "sufficient causal link", relates the defendant's actions to the criminal event or damage in question. Causation is also an essential legal element that must be proven to qualify for remedy measures under international trade law.[61]
The doctrine of karma has its origins in the literature of the Vedic period (c. 1750–500 BCE).[62] Karma is the belief, held by Sanatana Dharma and major religions, that a person's actions cause certain effects in the current life and/or in future lives, positively or negatively. The various philosophical schools (darshanas) provide different accounts of the subject. The doctrine of satkaryavada affirms that the effect inheres in the cause in some way; the effect is thus either a real or apparent modification of the cause. The doctrine of asatkaryavada affirms that the effect does not inhere in the cause, but is a new arising. See Nyaya for some details of the theory of causation in the Nyaya school. In Brahma Samhita, Brahma describes Krishna as the prime cause of all causes.[63]
Bhagavad-gītā 18.14 identifies five causes for any action (knowing which it can be perfected): the body, the individual soul, the senses, the efforts and the supersoul.
According to Monier-Williams, in the Nyāya causation theory from Sutra I.2.I,2 in the Vaisheshika philosophy, causal non-existence implies effectual non-existence, but effectual non-existence does not imply causal non-existence. A cause precedes an effect. Using the metaphor of threads and cloth, three causes are distinguished:
Monier-Williams also proposed that in both Aristotle's and the Nyaya's accounts, causes are considered conditional aggregates necessary to man's productive work.[64]
Karma is the causality principle focusing on (1) causes, (2) actions and (3) effects, where it is the mind's phenomena that guide the actions that the actor performs. Buddhism trains the actor's actions for continued and uncontrived virtuous outcomes aimed at reducing suffering. This follows the subject–verb–object structure.[citation needed]
The general or universal definition of pratityasamutpada (or "dependent origination" or "dependent arising" or "interdependent co-arising") is that everything arises in dependence upon multiple causes and conditions; nothing exists as a singular, independent entity. A traditional example in Buddhist texts is of three sticks standing upright and leaning against each other and supporting each other. If one stick is taken away, the other two will fall to the ground.[65][66]
Causality in the Chittamatrin Buddhist school approach, Asanga's (c. 400 CE) mind-only Buddhist school, asserts that objects cause consciousness in the mind's image. Because causes precede effects, and cause and effect must be different entities, subject and object are different. For this school, there are no objects which are entities external to a perceiving consciousness. The Chittamatrin and the Yogachara Svatantrika schools accept that there are no objects external to the observer's causality. This largely follows the Nikayas approach.[67][68][69][70]
The Vaibhashika (c. 500 CE) is an early Buddhist school which favors direct object contact and accepts simultaneous causes and effects. This is based on the consciousness example, which says that intentions and feelings are mutually accompanying mental factors that support each other like poles in a tripod. In contrast, those who reject simultaneous cause and effect say that if the effect already exists, then it cannot be effected again in the same way. How past, present and future are accepted is a basis for various Buddhist schools' causality viewpoints.[71][72][73]
All the classic Buddhist schools teach karma. "The law of karma is a special instance of the law of cause and effect, according to which all our actions of body, speech, and mind are causes and all our experiences are their effects."[74]
Aristotle identified four kinds of answer or explanatory mode to various "Why?" questions. He thought that, for any given topic, all four kinds of explanatory mode were important, each in its own right. As a result of traditional specialized philosophical peculiarities of language, with translations between ancient Greek, Latin, and English, the word 'cause' is nowadays in specialized philosophical writings used to label Aristotle's four kinds.[23][75] In ordinary language, the word 'cause' has a variety of meanings, the most common of which refers to efficient causation, which is the topic of the present article.
Of Aristotle's four kinds or explanatory modes, only one, the 'efficient cause', is a cause as defined in the leading paragraph of this present article. The other three explanatory modes might be rendered as material composition, structure and dynamics, and, again, criterion of completion. The word that Aristotle used was αἰτία. For the present purpose, that Greek word would be better translated as "explanation" than as "cause" as those words are most often used in current English. Another translation of Aristotle is that he meant "the four Becauses" as four kinds of answer to "why" questions.[23]
Aristotle assumed efficient causality as referring to a basic fact of experience, not explicable by, or reducible to, anything more fundamental or basic.
In some works of Aristotle, the four causes are listed as (1) the essential cause, (2) the logical ground, (3) the moving cause, and (4) the final cause. In this listing, a statement of essential cause is a demonstration that an indicated object conforms to a definition of the word that refers to it. A statement of logical ground is an argument as to why an object statement is true. These are further examples of the idea that a "cause" in general in the context of Aristotle's usage is an "explanation".[23]
The word "efficient" used here can also be translated from Aristotle as "moving" or "initiating".[23]
Efficient causation was connected with Aristotelian physics, which recognized the four elements (earth, air, fire, water) and added the fifth element (aether). Water and earth, by their intrinsic property gravitas or heaviness, intrinsically fall toward Earth's center—the motionless center of the universe—whereas air and fire, by their intrinsic property levitas or lightness, intrinsically rise away from it, in a straight line while accelerating during the substance's approach to its natural place.
As air remained on Earth, however, and did not escape Earth while eventually achieving infinite speed—an absurdity—Aristotle inferred that the universe is finite in size and contains an invisible substance that holds planet Earth and its atmosphere, the sublunary sphere, centered in the universe. And since celestial bodies exhibit perpetual, unaccelerated motion orbiting planet Earth in unchanging relations, Aristotle inferred that the fifth element, aither, that fills space and composes celestial bodies intrinsically moves in perpetual circles, the only constant motion between two points. (An object traveling a straight line from point A to B and back must stop at either point before returning to the other.)
Left to itself, a thing exhibits natural motion, but can—according to Aristotelian metaphysics—exhibit enforced motion imparted by an efficient cause. The form of plants endows plants with the processes of nutrition and reproduction, the form of animals adds locomotion, and the form of humankind adds reason atop these. A rock normally exhibits natural motion—explained by the rock's material cause of being composed of the element earth—but a living thing can lift the rock, an enforced motion diverting the rock from its natural place and natural motion. As a further kind of explanation, Aristotle identified the final cause, specifying a purpose or criterion of completion in light of which something should be understood.
Aristotle himself explained,
Cause means
(a) in one sense, that as the result of whose presence something comes into being—e.g., the bronze of a statue and the silver of a cup, and the classes which contain these [i.e., the material cause];
(b) in another sense, the form or pattern; that is, the essential formula and the classes which contain it—e.g. the ratio 2:1 and number in general is the cause of the octave—and the parts of the formula [i.e., the formal cause].
(c) The source of the first beginning of change or rest; e.g. the man who plans is a cause, and the father is the cause of the child, and in general that which produces is the cause of that which is produced, and that which changes of that which is changed [i.e., the efficient cause].
(d) The same as "end"; i.e. the final cause; e.g., as the "end" of walking is health. For why does a man walk? "To be healthy", we say, and by saying this we consider that we have supplied the cause [the final cause].
(e) All those means towards the end which arise at the instigation of something else, as, e.g., fat-reducing, purging, drugs, and instruments are causes of health; for they all have the end as their object, although they differ from each other as being some instruments, others actions [i.e., necessary conditions].
Aristotle further discerned two modes of causation: proper (prior) causation and accidental (chance) causation. All causes, proper and accidental, can be spoken of as potential or as actual, particular or generic. The same language refers to the effects of causes, so that generic effects are assigned to generic causes, particular effects to particular causes, and actual effects to operating causes.
Averting infinite regress, Aristotle inferred the first mover—an unmoved mover. The first mover's motion, too, must have been caused, but, being an unmoved mover, must have moved only toward a particular goal or desire.
While the plausibility of causality was accepted in Pyrrhonism,[79] it was equally accepted that it was plausible that nothing was the cause of anything.[80]
In line with Aristotelian cosmology, Thomas Aquinas posed a hierarchy prioritizing Aristotle's four causes: "final > efficient > material > formal".[81] Aquinas sought to identify the first efficient cause—now simply first cause—as everyone would agree, said Aquinas, to call it God. Later in the Middle Ages, many scholars conceded that the first cause was God, but explained that many earthly events occur within God's design or plan, and thereby scholars sought freedom to investigate the numerous secondary causes.[82]
For Aristotelian philosophy before Aquinas, the word cause had a broad meaning. It meant 'answer to a why question' or 'explanation', and Aristotelian scholars recognized four kinds of such answers. With the end of the Middle Ages, in many philosophical usages, the meaning of the word 'cause' narrowed. It often lost that broad meaning, and was restricted to just one of the four kinds. For authors such as Niccolò Machiavelli, in the field of political thinking, and Francis Bacon, concerning science more generally, Aristotle's moving cause was the focus of their interest. A widely used modern definition of causality in this newly narrowed sense was assumed by David Hume.[81] He undertook an epistemological and metaphysical investigation of the notion of moving cause. He denied that we can ever perceive cause and effect, except by developing a habit or custom of mind where we come to associate two types of object or event, always contiguous and occurring one after the other.[11] In Part III, section XV of his book A Treatise of Human Nature, Hume expanded this to a list of eight ways of judging whether two things might be cause and effect. The first three:
And then additionally there are three connected criteria which come from our experience and which are "the source of most of our philosophical reasonings":
And then two more:
In 1949, physicist Max Born distinguished determination from causality. For him, determination meant that actual events are so linked by laws of nature that certainly reliable predictions and retrodictions can be made from sufficient present data about them. He describes two kinds of causation: nomic or generic causation and singular causation. Nomic causality means that cause and effect are linked by more or less certain or probabilistic general laws covering many possible or potential instances; this can be recognized as a probabilized version of Hume's criterion 3. An occasion of singular causation is a particular occurrence of a definite complex of events that are physically linked by antecedence and contiguity, which may be recognized as criteria 1 and 2.[13]
https://en.wikipedia.org/wiki/Causality
Deductive reasoning is the process of drawing valid inferences. An inference is valid if its conclusion follows logically from its premises, meaning that it is impossible for the premises to be true and the conclusion to be false. For example, the inference from the premises "all men are mortal" and "Socrates is a man" to the conclusion "Socrates is mortal" is deductively valid. An argument is sound if it is valid and all its premises are true. One approach defines deduction in terms of the intentions of the author: they have to intend for the premises to offer deductive support to the conclusion. With the help of this modification, it is possible to distinguish valid from invalid deductive reasoning: it is invalid if the author's belief about the deductive support is false, but even invalid deductive reasoning is a form of deductive reasoning.
Deductive logic studies under what conditions an argument is valid. According to the semantic approach, an argument is valid if there is no possible interpretation of the argument whereby its premises are true and its conclusion is false. The syntactic approach, by contrast, focuses on rules of inference, that is, schemas of drawing a conclusion from a set of premises based only on their logical form. There are various rules of inference, such as modus ponens and modus tollens. Invalid deductive arguments, which do not follow a rule of inference, are called formal fallacies. Rules of inference are definitory rules and contrast with strategic rules, which specify what inferences one needs to draw in order to arrive at an intended conclusion.
Deductive reasoning contrasts with non-deductive or ampliative reasoning. For ampliative arguments, such as inductive or abductive arguments, the premises offer weaker support to their conclusion: they indicate that it is most likely, but they do not guarantee its truth. They make up for this drawback with their ability to provide genuinely new information (that is, information not already found in the premises), unlike deductive arguments.
Cognitive psychology investigates the mental processes responsible for deductive reasoning. One of its topics concerns the factors determining whether people draw valid or invalid deductive inferences. One such factor is the form of the argument: for example, people draw valid inferences more successfully for arguments of the form modus ponens than of the form modus tollens. Another factor is the content of the arguments: people are more likely to believe that an argument is valid if the claim made in its conclusion is plausible. A general finding is that people tend to perform better for realistic and concrete cases than for abstract cases. Psychological theories of deductive reasoning aim to explain these findings by providing an account of the underlying psychological processes. Mental logic theories hold that deductive reasoning is a language-like process that happens through the manipulation of representations using rules of inference. Mental model theories, on the other hand, claim that deductive reasoning involves models of possible states of the world without the medium of language or rules of inference. According to dual-process theories of reasoning, there are two qualitatively different cognitive systems responsible for reasoning.
The problem of deduction is relevant to various fields and issues. Epistemology tries to understand how justification is transferred from the belief in the premises to the belief in the conclusion in the process of deductive reasoning. Probability logic studies how the probability of the premises of an inference affects the probability of its conclusion. The controversial thesis of deductivism denies that there are other correct forms of inference besides deduction. Natural deduction is a type of proof system based on simple and self-evident rules of inference. In philosophy, the geometrical method is a way of philosophizing that starts from a small set of self-evident axioms and tries to build a comprehensive logical system using deductive reasoning.
Deductive reasoning is the psychological process of drawing deductive inferences. An inference is a set of premises together with a conclusion. This psychological process starts from the premises and reasons to a conclusion based on and supported by these premises. If the reasoning was done correctly, it results in a valid deduction: the truth of the premises ensures the truth of the conclusion.[1][2][3][4] For example, in the syllogistic argument "all frogs are amphibians; no cats are amphibians; therefore, no cats are frogs" the conclusion is true because its two premises are true. But even arguments with wrong premises can be deductively valid if they obey this principle, as in "all frogs are mammals; no cats are mammals; therefore, no cats are frogs". If the premises of a valid argument are true, then it is called a sound argument.[5]
The relation between the premises and the conclusion of a deductive argument is usually referred to as "logical consequence". According to Alfred Tarski, logical consequence has three essential features: it is necessary, formal, and knowable a priori.[6][7] It is necessary in the sense that the premises of valid deductive arguments necessitate the conclusion: it is impossible for the premises to be true and the conclusion to be false, independent of any other circumstances.[6][7] Logical consequence is formal in the sense that it depends only on the form or the syntax of the premises and the conclusion. This means that the validity of a particular argument does not depend on the specific contents of this argument. If it is valid, then any argument with the same logical form is also valid, no matter how different it is on the level of its contents.[6][7] Logical consequence is knowable a priori in the sense that no empirical knowledge of the world is necessary to determine whether a deduction is valid. So it is not necessary to engage in any form of empirical investigation.[6][7] Some logicians define deduction in terms of possible worlds: a deductive inference is valid if and only if there is no possible world in which its conclusion is false while its premises are true. This means that there are no counterexamples: the conclusion is true in all such cases, not just in most cases.[1]
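For propositional arguments, the possible-worlds definition can be checked mechanically: treating each "possible world" as a truth assignment to the atomic sentences, validity amounts to the absence of a counterexample. The following is a minimal illustrative sketch, not a standard library or notation; the function name and the encoding of premises as predicates over assignments are assumptions made for the example:

```python
from itertools import product

def is_valid(premises, conclusion, atoms):
    """An argument is valid iff no truth assignment (a "possible world"
    over the atoms) makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False  # found a counterexample
    return True

# "If it rains then the street is wet; it rains; therefore the street is wet."
rains_implies_wet = lambda w: (not w["rain"]) or w["wet"]
assert is_valid([rains_implies_wet, lambda w: w["rain"]],
                lambda w: w["wet"], ["rain", "wet"])
```

Swapping the second premise and the conclusion (premises "if it rains the street is wet" and "the street is wet", conclusion "it rains") makes the search find the counterexample world where it did not rain but the street is wet, so `is_valid` returns `False`.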
It has been argued against this and similar definitions that they fail to distinguish between valid and invalid deductive reasoning, i.e. they leave it open whether there are invalid deductive inferences and how to define them.[8][9] Some authors define deductive reasoning in psychological terms in order to avoid this problem. According to Mark Vorobey, whether an argument is deductive depends on the psychological state of the person making the argument: "An argument is deductive if, and only if, the author of the argument believes that the truth of the premises necessitates (guarantees) the truth of the conclusion".[8] A similar formulation holds that the speaker claims or intends that the premises offer deductive support for their conclusion.[10][11] This is sometimes categorized as a speaker-determined definition of deduction since it depends also on the speaker whether the argument in question is deductive or not. For speakerless definitions, on the other hand, only the argument itself matters independent of the speaker.[9] One advantage of this type of formulation is that it makes it possible to distinguish between good or valid and bad or invalid deductive arguments: the argument is good if the author's belief concerning the relation between the premises and the conclusion is true, otherwise it is bad.[8] One consequence of this approach is that deductive arguments cannot be identified by the law of inference they use. For example, an argument of the form modus ponens may be non-deductive if the author's beliefs are sufficiently confused. That brings with it an important drawback of this definition: it is difficult to apply to concrete cases since the intentions of the author are usually not explicitly stated.[8]
Deductive reasoning is studied in logic, psychology, and the cognitive sciences.[3][1] Some theorists emphasize in their definition the difference between these fields. On this view, psychology studies deductive reasoning as an empirical mental process, i.e. what happens when humans engage in reasoning.[3][1] But the descriptive question of how actual reasoning happens is different from the normative question of how it should happen or what constitutes correct deductive reasoning, which is studied by logic.[3][12][6] This is sometimes expressed by stating that, strictly speaking, logic does not study deductive reasoning but the deductive relation between premises and a conclusion known as logical consequence. But this distinction is not always precisely observed in the academic literature.[3] One important aspect of this difference is that logic is not interested in whether the conclusion of an argument is sensible.[1] So from the premise "the printer has ink" one may draw the unhelpful conclusion "the printer has ink and the printer has ink and the printer has ink", which has little relevance from a psychological point of view. Instead, actual reasoners usually try to remove redundant or irrelevant information and make the relevant information more explicit.[1] The psychological study of deductive reasoning is also concerned with how good people are at drawing deductive inferences and with the factors determining their performance.[3][5] Deductive inferences are found both in natural language and in formal logical systems, such as propositional logic.[1][13]
Deductive arguments differ from non-deductive arguments in that the truth of their premises ensures the truth of their conclusion.[14][15][6] There are two important conceptions of what this exactly means. They are referred to as the syntactic and the semantic approach.[13][6][5] According to the syntactic approach, whether an argument is deductively valid depends only on its form, syntax, or structure. Two arguments have the same form if they use the same logical vocabulary in the same arrangement, even if their contents differ.[13][6][5] For example, the arguments "if it rains then the street will be wet; it rains; therefore, the street will be wet" and "if the meat is not cooled then it will spoil; the meat is not cooled; therefore, it will spoil" have the same logical form: they follow the modus ponens. Their form can be expressed more abstractly as "if A then B; A; therefore B" in order to make the common syntax explicit.[5] There are various other valid logical forms or rules of inference, like modus tollens or the disjunction elimination. The syntactic approach then holds that an argument is deductively valid if and only if its conclusion can be deduced from its premises using a valid rule of inference.[13][6][5] One difficulty for the syntactic approach is that it is usually necessary to express the argument in a formal language in order to assess whether it is valid. This often brings with it the difficulty of translating the natural language argument into a formal language, a process that comes with various problems of its own.[13] Another difficulty is due to the fact that the syntactic approach depends on the distinction between formal and non-formal features. While there is a wide agreement concerning the paradigmatic cases, there are also various controversial cases where it is not clear how this distinction is to be drawn.[16][12]
The semantic approach suggests an alternative definition of deductive validity. It is based on the idea that the sentences constituting the premises and conclusions have to be interpreted in order to determine whether the argument is valid.[13][6][5] This means that one ascribes semantic values to the expressions used in the sentences, such as the reference to an object for singular terms or to a truth-value for atomic sentences. The semantic approach is also referred to as the model-theoretic approach since the branch of mathematics known as model theory is often used to interpret these sentences.[13][6] Usually, many different interpretations are possible, such as whether a singular term refers to one object or to another. According to the semantic approach, an argument is deductively valid if and only if there is no possible interpretation where its premises are true and its conclusion is false.[13][6][5] Some objections to the semantic approach are based on the claim that the semantics of a language cannot be expressed in the same language, i.e. that a richer metalanguage is necessary. This would imply that the semantic approach cannot provide a universal account of deduction for language as an all-encompassing medium.[13][12]
Deductive reasoning usually happens by applying rules of inference. A rule of inference is a way or schema of drawing a conclusion from a set of premises.[17] This happens usually based only on the logical form of the premises. A rule of inference is valid if, when applied to true premises, the conclusion cannot be false. A particular argument is valid if it follows a valid rule of inference. Deductive arguments that do not follow a valid rule of inference are called formal fallacies: the truth of their premises does not ensure the truth of their conclusion.[18][14]
In some cases, whether a rule of inference is valid depends on the logical system one is using. The dominant logical system is classical logic and the rules of inference listed here are all valid in classical logic. But so-called deviant logics provide a different account of which inferences are valid. For example, the rule of inference known as double negation elimination, i.e. that if a proposition is not not true then it is also true, is accepted in classical logic but rejected in intuitionistic logic.[19][20]
Modus ponens (also known as "affirming the antecedent" or "the law of detachment") is the primary deductive rule of inference. It applies to arguments that have as first premise a conditional statement (P → Q) and as second premise the antecedent (P) of the conditional statement. It obtains the consequent (Q) of the conditional statement as its conclusion. The argument form is listed below:
In this form of deductive reasoning, the consequent (Q) obtains as the conclusion from the premises of a conditional statement (P → Q) and its antecedent (P). However, the antecedent (P) cannot be similarly obtained as the conclusion from the premises of the conditional statement (P → Q) and the consequent (Q). Such an argument commits the logical fallacy of affirming the consequent.
The following is an example of an argument using modus ponens:
Modus tollens (also known as "the law of contrapositive") is a deductive rule of inference. It validates an argument that has as premises a conditional statement (P → Q) and the negation of the consequent (¬Q), and as conclusion the negation of the antecedent (¬P). In contrast to modus ponens, reasoning with modus tollens goes in the opposite direction to that of the conditional. The general expression for modus tollens is the following:
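Since modus tollens involves only two atomic sentences, its validity can be confirmed by exhausting all four truth assignments. The following short check is an illustrative sketch (it assumes the material-conditional reading of "if ... then", as in classical logic), not part of the article's sources:

```python
# Check modus tollens by enumerating every truth assignment for P and Q:
# whenever the premises "P implies Q" and "not Q" are both true,
# the conclusion "not P" must also be true.
modus_tollens_valid = True
for P in (True, False):
    for Q in (True, False):
        p_implies_q = (not P) or Q    # material conditional
        if p_implies_q and not Q:     # both premises true in this assignment
            modus_tollens_valid = modus_tollens_valid and (not P)
assert modus_tollens_valid
```

Only the assignment with P false and Q false satisfies both premises, and there the conclusion ¬P indeed holds, so no counterexample exists.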
The following is an example of an argument using modus tollens:
A hypothetical syllogism is an inference that takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another. Here is the general form:
Because the two premises share a subformula that does not occur in the conclusion, this resembles syllogisms in term logic, although it differs in that this shared element is a proposition, whereas in Aristotelian logic the common element is a term and not a proposition.
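The chaining of conditionals can likewise be verified by brute force over the eight truth assignments of the three propositions. This is an illustrative sketch assuming the material-conditional reading; the helper name `implies` is a convenience introduced for the example:

```python
from itertools import product

implies = lambda a, b: (not a) or b   # material conditional

# Hypothetical syllogism: from "P implies Q" and "Q implies R",
# conclude "P implies R". No assignment makes both premises true
# while falsifying the conclusion.
for P, Q, R in product((True, False), repeat=3):
    if implies(P, Q) and implies(Q, R):   # both premises true
        assert implies(P, R)              # the chained conditional holds
```

The same exhaustive strategy works for any propositional rule of inference, though the number of assignments grows exponentially with the number of atomic sentences.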
The following is an example of an argument using a hypothetical syllogism:
Various formal fallacies have been described. They are invalid forms of deductive reasoning.[18][14] An additional aspect of them is that they appear to be valid on some occasions or on the first impression. They may thereby seduce people into accepting and committing them.[22] One type of formal fallacy is affirming the consequent, as in "if John is a bachelor, then he is male; John is male; therefore, John is a bachelor".[23] This is similar to the valid rule of inference named modus ponens, but the second premise and the conclusion are switched around, which is why it is invalid. A similar formal fallacy is denying the antecedent, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore, Othello is not male".[24][25] This is similar to the valid rule of inference called modus tollens, the difference being that the second premise and the conclusion are switched around. Other formal fallacies include affirming a disjunct, denying a conjunct, and the fallacy of the undistributed middle. All of them have in common that the truth of their premises does not ensure the truth of their conclusion. But it may still happen by coincidence that both the premises and the conclusion of formal fallacies are true.[18][14]
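The invalidity of such a fallacy can be exhibited concretely: a single truth assignment makes both premises true and the conclusion false. The following short search (an illustrative sketch, again assuming the material-conditional reading) finds exactly that counterexample for affirming the consequent:

```python
# Affirming the consequent: from "P implies Q" and "Q", infer "P".
# Search all four assignments for one where the premises are true
# but the conclusion is false; its existence shows the form is invalid.
counterexamples = [
    (P, Q)
    for P in (True, False)
    for Q in (True, False)
    if ((not P) or Q) and Q and not P   # premises true, conclusion false
]
assert counterexamples == [(False, True)]  # P false, Q true refutes the form
```

In the John example above, this is the case where John is male (Q true) but not a bachelor (P false): both premises hold, yet the conclusion fails.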
Rules of inference are definitory rules: they determine whether an argument is deductively valid or not. But reasoners are usually not just interested in making any kind of valid argument. Instead, they often have a specific point or conclusion that they wish to prove or refute. So given a set of premises, they are faced with the problem of choosing the relevant rules of inference for their deduction to arrive at their intended conclusion.[13][26][27] This issue belongs to the field of strategic rules: the question of which inferences need to be drawn to support one's conclusion. The distinction between definitory and strategic rules is not exclusive to logic: it is also found in various games.[13][26][27] In chess, for example, the definitory rules state that bishops may only move diagonally while the strategic rules recommend that one should control the center and protect one's king if one intends to win. In this sense, definitory rules determine whether one plays chess or something else whereas strategic rules determine whether one is a good or a bad chess player.[13][26] The same applies to deductive reasoning: to be an effective reasoner involves mastering both definitory and strategic rules.[13]
Deductive arguments are evaluated in terms of their validity and soundness.
An argument is valid if it is impossible for its premises to be true while its conclusion is false. In other words, the conclusion must be true if the premises are true. An argument can be “valid” even if one or more of its premises are false.
An argument is sound if it is valid and the premises are true.
It is possible to have a deductive argument that is logically valid but is not sound. Fallacious arguments often take that form.
The following is an example of an argument that is “valid”, but not “sound”:
The example's first premise is false – there are people who eat carrots who are not quarterbacks – but the conclusion would necessarily be true, if the premises were true. In other words, it is impossible for the premises to be true and the conclusion false. Therefore, the argument is “valid”, but not “sound”. False generalizations – such as "Everyone who eats carrots is a quarterback" – are often used to make unsound arguments. The fact that there are some people who eat carrots but are not quarterbacks proves the flaw of the argument.
In this example, the first statement uses categorical reasoning, saying that all carrot-eaters are definitely quarterbacks. This theory of deductive reasoning – also known as term logic – was developed by Aristotle, but was superseded by propositional (sentential) logic and predicate logic.[citation needed]
Deductive reasoning can be contrasted with inductive reasoning with regard to validity and soundness. In cases of inductive reasoning, even though the premises are true and the argument is “valid”, it is possible for the conclusion to be false (determined to be false with a counterexample or other means).
Deductive reasoning is usually contrasted with non-deductive or ampliative reasoning.[13][28][29]The hallmark of valid deductive inferences is that it is impossible for their premises to be true and their conclusion to be false. In this way, the premises provide the strongest possible support to their conclusion.[13][28][29]The premises of ampliative inferences also support their conclusion. But this support is weaker: they are not necessarily truth-preserving. So even for correct ampliative arguments, it is possible that their premises are true and their conclusion is false.[11]Two important forms of ampliative reasoning are inductive and abductive reasoning.[30]Sometimes the term "inductive reasoning" is used in a very wide sense to cover all forms of ampliative reasoning.[11]However, in a more strict usage, inductive reasoning is just one form of ampliative reasoning.[30]In the narrow sense, inductive inferences are forms of statistical generalization. They are usually based on many individual observations that all show a certain pattern. These observations are then used to form a conclusion either about a yet unobserved entity or about a general law.[31][32][33]For abductive inferences, the premises support the conclusion because the conclusion is the best explanation of why the premises are true.[30][34]
The support ampliative arguments provide for their conclusion comes in degrees: some ampliative arguments are stronger than others.[11][35][30]This is often explained in terms of probability: the premises make it more likely that the conclusion is true.[13][28][29]Strong ampliative arguments make their conclusion very likely, but not absolutely certain. An example of ampliative reasoning is the inference from the premise "every raven in a random sample of 3200 ravens is black" to the conclusion "all ravens are black": the extensive random sample makes the conclusion very likely, but it does not exclude that there are rare exceptions.[35]In this sense, ampliative reasoning is defeasible: it may become necessary to retract an earlier conclusion upon receiving new related information.[12][30]Ampliative reasoning is very common in everyday discourse and the sciences.[13][36]
An important drawback of deductive reasoning is that it does not lead to genuinely new information.[5]This means that the conclusion only repeats information already found in the premises. Ampliative reasoning, on the other hand, goes beyond the premises by arriving at genuinely new information.[13][28][29]One difficulty for this characterization is that it makes deductive reasoning appear useless: if deduction is uninformative, it is not clear why people would engage in it and study it.[13][37]It has been suggested that this problem can be solved by distinguishing between surface and depth information. On this view, deductive reasoning is uninformative on the depth level, in contrast to ampliative reasoning. But it may still be valuable on the surface level by presenting the information in the premises in a new and sometimes surprising way.[13][5]
A popular misconception of the relation between deduction and induction identifies their difference on the level of particular and general claims.[2][9][38]On this view, deductive inferences start from general premises and draw particular conclusions, while inductive inferences start from particular premises and draw general conclusions. This idea is often motivated by seeing deduction and induction as two inverse processes that complement each other: deduction is top-down while induction is bottom-up. But this is a misconception that does not reflect how valid deduction is defined in the field of logic: a deduction is valid if it is impossible for its premises to be true while its conclusion is false, independent of whether the premises or the conclusion are particular or general.[2][9][1][5][3]Because of this, some deductive inferences have a general conclusion and some also have particular premises.[2]
Cognitive psychology studies the psychological processes responsible for deductive reasoning.[3][5]It is concerned, among other things, with how good people are at drawing valid deductive inferences. This includes the study of the factors affecting their performance, their tendency to commit fallacies, and the underlying biases involved.[3][5]A notable finding in this field is that the type of deductive inference has a significant impact on whether the correct conclusion is drawn.[3][5][39][40]In a meta-analysis of 65 studies, for example, 97% of the subjects evaluated modus ponens inferences correctly, while the success rate for modus tollens was only 72%. On the other hand, even some fallacies like affirming the consequent or denying the antecedent were regarded as valid arguments by the majority of the subjects.[3]An important factor for these mistakes is whether the conclusion seems initially plausible: the more believable the conclusion is, the higher the chance that a subject will mistake a fallacy for a valid argument.[3][5]
An important bias is the matching bias, which is often illustrated using the Wason selection task.[5][3][41][42]In an often-cited experiment by Peter Wason, 4 cards are presented to the participant. In one case, the visible sides show the symbols D, K, 3, and 7 on the different cards. The participant is told that every card has a letter on one side and a number on the other side, and that "[e]very card which has a D on one side has a 3 on the other side". Their task is to identify which cards need to be turned around in order to confirm or refute this conditional claim. The correct answer, only given by about 10%, is the cards D and 7. Many select card 3 instead, even though the conditional claim does not involve any requirements on what symbols can be found on the opposite side of card 3.[3][5]But this result can be drastically changed if different symbols are used: the visible sides show "drinking a beer", "drinking a coke", "16 years of age", and "22 years of age" and the participants are asked to evaluate the claim "[i]f a person is drinking beer, then the person must be over 19 years of age". In this case, 74% of the participants identified correctly that the cards "drinking a beer" and "16 years of age" have to be turned around.[3][5]These findings suggest that the deductive reasoning ability is heavily influenced by the content of the involved claims and not just by the abstract logical form of the task: the more realistic and concrete the cases are, the better the subjects tend to perform.[3][5]
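The logic behind the correct answer can be made explicit: a card can falsify "every card with a D has a 3" only if its hidden side might combine a D with a non-3. The following Python snippet (the card encoding is an illustrative simplification, not part of the original experiment) selects exactly the cards whose hidden side matters:

```python
# Which cards in the Wason selection task must be turned over to test the
# rule "every card with a D on one side has a 3 on the other side"?
# A card needs checking iff its hidden side could falsify the rule.
cards = ["D", "K", "3", "7"]  # visible faces; letters hide numbers and vice versa

def must_turn(visible):
    if visible.isalpha():
        # A letter card can falsify the rule only if it shows D
        # (its hidden number might not be a 3).
        return visible == "D"
    else:
        # A number card can falsify the rule only if it is not a 3
        # (its hidden letter might be a D).
        return visible != "3"

print([c for c in cards if must_turn(c)])  # ['D', '7']
```

Card 3 drops out because the rule says nothing about what stands behind a 3; this is precisely the step most participants miss.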
Another bias is called the "negative conclusion bias", which happens when one of the premises has the form of a negative material conditional,[5][43][44]as in "If the card does not have an A on the left, then it has a 3 on the right. The card does not have a 3 on the right. Therefore, the card has an A on the left". The increased tendency to misjudge the validity of this type of argument is not present for positive material conditionals, as in "If the card has an A on the left, then it has a 3 on the right. The card does not have a 3 on the right. Therefore, the card does not have an A on the left".[5]
Various psychological theories of deductive reasoning have been proposed. These theories aim to explain how deductive reasoning works in relation to the underlying psychological processes responsible. They are often used to explain the empirical findings, such as why human reasoners are more susceptible to some types of fallacies than to others.[3][1][45]
An important distinction is between mental logic theories, sometimes also referred to as rule theories, and mental model theories. Mental logic theories see deductive reasoning as a language-like process that happens through the manipulation of representations.[3][1][46][45]This is done by applying syntactic rules of inference in a way very similar to how systems of natural deduction transform their premises to arrive at a conclusion.[45]On this view, some deductions are simpler than others since they involve fewer inferential steps.[3]This idea can be used, for example, to explain why humans have more difficulties with some deductions, like the modus tollens, than with others, like the modus ponens: because the more error-prone forms do not have a native rule of inference but need to be calculated by combining several inferential steps with other rules of inference. In such cases, the additional cognitive labor makes the inferences more open to error.[3]
Mental model theories, on the other hand, hold that deductive reasoning involves models or mental representations of possible states of the world without the medium of language or rules of inference.[3][1][45]In order to assess whether a deductive inference is valid, the reasoner mentally constructs models that are compatible with the premises of the inference. The conclusion is then tested by looking at these models and trying to find a counterexample in which the conclusion is false. The inference is valid if no such counterexample can be found.[3][1][45]In order to reduce cognitive labor, only such models are represented in which the premises are true. Because of this, the evaluation of some forms of inference only requires the construction of very few models while for others, many different models are necessary. In the latter case, the additional cognitive labor required makes deductive reasoning more error-prone, thereby explaining the increased rate of error observed.[3][1]This theory can also explain why some errors depend on the content rather than the form of the argument. For example, when the conclusion of an argument is very plausible, the subjects may lack the motivation to search for counterexamples among the constructed models.[3]
Both mental logic theories and mental model theories assume that there is one general-purpose reasoning mechanism that applies to all forms of deductive reasoning.[3][46][47]But there are also alternative accounts that posit various different special-purpose reasoning mechanisms for different contents and contexts. In this sense, it has been claimed that humans possess a special mechanism for permissions and obligations, specifically for detecting cheating in social exchanges. This can be used to explain why humans are often more successful in drawing valid inferences if the contents involve human behavior in relation to social norms.[3]Another example is the so-called dual-process theory.[5][3]This theory posits that there are two distinct cognitive systems responsible for reasoning. Their interrelation can be used to explain commonly observed biases in deductive reasoning. System 1 is the older system in terms of evolution. It is based on associative learning and happens fast and automatically without demanding many cognitive resources.[5][3]System 2, on the other hand, is of more recent evolutionary origin. It is slow and cognitively demanding, but also more flexible and under deliberate control.[5][3]The dual-process theory posits that system 1 is the default system guiding most of our everyday reasoning in a pragmatic way. But for particularly difficult problems on the logical level, system 2 is employed. System 2 is mostly responsible for deductive reasoning.[5][3]
The ability to reason deductively is an important aspect of intelligence, and many tests of intelligence include problems that call for deductive inferences.[1]Because of this relation to intelligence, deduction is highly relevant to psychology and the cognitive sciences.[5]But the subject of deductive reasoning is also pertinent to the computer sciences, for example, in the creation of artificial intelligence.[1]
Deductive reasoning plays an important role in epistemology. Epistemology is concerned with the question of justification, i.e. to point out which beliefs are justified and why.[48][49]Deductive inferences are able to transfer the justification of the premises onto the conclusion.[3]So while logic is interested in the truth-preserving nature of deduction, epistemology is interested in the justification-preserving nature of deduction. There are different theories trying to explain why deductive reasoning is justification-preserving.[3]According to reliabilism, this is the case because deductions are truth-preserving: they are reliable processes that ensure a true conclusion given the premises are true.[3][50][51]Some theorists hold that the thinker has to have explicit awareness of the truth-preserving nature of the inference for the justification to be transferred from the premises to the conclusion. One consequence of such a view is that, for young children, this deductive transference does not take place since they lack this specific awareness.[3]
Probability logic is interested in how the probability of the premises of an argument affects the probability of its conclusion. It differs from classical logic, which assumes that propositions are either true or false but does not take into consideration the probability or certainty that a proposition is true or false.[52][53]
Aristotle, a Greek philosopher, started documenting deductive reasoning in the 4th century BC.[54]René Descartes, in his book Discourse on Method, refined the idea for the Scientific Revolution. Developing four rules to follow for proving an idea deductively, Descartes laid the foundation for the deductive portion of the scientific method. Descartes' background in geometry and mathematics influenced his ideas on truth and reasoning, causing him to develop a system of general reasoning now used for most mathematical reasoning. Like postulates, Descartes believed that ideas could be self-evident and that reasoning alone must prove that observations are reliable. These ideas also laid the foundations for the ideas of rationalism.[55]
Deductivism is a philosophical position that gives primacy to deductive reasoning or arguments over their non-deductive counterparts.[56][57]It is often understood as the evaluative claim that only deductive inferences are good or correct inferences. This theory would have wide-reaching consequences for various fields since it implies that the rules of deduction are "the only acceptable standard of evidence".[56]This way, the rationality or correctness of the different forms of inductive reasoning is denied.[57][58]Some forms of deductivism express this in terms of degrees of reasonableness or probability. Inductive inferences are usually seen as providing a certain degree of support for their conclusion: they make it more likely that their conclusion is true. Deductivism states that such inferences are not rational: the premises either ensure their conclusion, as in deductive reasoning, or they do not provide any support at all.[59]
One motivation for deductivism is the problem of induction introduced by David Hume. It consists in the challenge of explaining how or whether inductive inferences based on past experiences support conclusions about future events.[57][60][59]For example, a chicken comes to expect, based on all its past experiences, that the person entering its coop is going to feed it, until one day the person "at last wrings its neck instead".[61]According to Karl Popper's falsificationism, deductive reasoning alone is sufficient. This is due to its truth-preserving nature: a theory can be falsified if one of its deductive consequences is false.[62][63]So while inductive reasoning does not offer positive evidence for a theory, the theory still remains a viable competitor until falsified by empirical observation. In this sense, deduction alone is sufficient for discriminating between competing hypotheses about what is the case.[57]Hypothetico-deductivism is a closely related scientific method, according to which science progresses by formulating hypotheses and then aims to falsify them by trying to make observations that run counter to their deductive consequences.[64][65]
The term "natural deduction" refers to a class of proof systems based on self-evident rules of inference.[66][67]The first systems of natural deduction were developed by Gerhard Gentzen and Stanislaw Jaskowski in the 1930s. The core motivation was to give a simple presentation of deductive reasoning that closely mirrors how reasoning actually takes place.[68]In this sense, natural deduction stands in contrast to other less intuitive proof systems, such as Hilbert-style deductive systems, which employ axiom schemes to express logical truths.[66]Natural deduction, on the other hand, avoids axiom schemes by including many different rules of inference that can be used to formulate proofs. These rules of inference express how logical constants behave. They are often divided into introduction rules and elimination rules. Introduction rules specify under which conditions a logical constant may be introduced into a new sentence of the proof.[66][67]For example, the introduction rule for the logical constant "∧" (and) is "A, B / (A ∧ B)". It expresses that, given the premises "A" and "B" individually, one may draw the conclusion "A ∧ B" and thereby include it in one's proof. This way, the symbol "∧" is introduced into the proof. The removal of this symbol is governed by other rules of inference, such as the elimination rule "(A ∧ B) / A", which states that one may deduce the sentence "A" from the premise "(A ∧ B)". Similar introduction and elimination rules are given for other logical constants, such as the propositional operator "¬", the propositional connectives "∨" and "→", and the quantifiers "∃" and "∀".[66][67]
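The ∧-introduction and ∧-elimination rules described above can be checked in a proof assistant. A minimal sketch in Lean 4, assuming only the core library:

```lean
-- ∧-introduction: given proofs of A and of B, introduce a proof of A ∧ B
example (A B : Prop) (ha : A) (hb : B) : A ∧ B := And.intro ha hb

-- ∧-elimination: from a proof of A ∧ B, extract a proof of A
example (A B : Prop) (h : A ∧ B) : A := h.left
```

Each `example` is accepted exactly when the term on the right is a valid proof of the stated proposition, mirroring how the rules license steps in a natural-deduction proof.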
The focus on rules of inference instead of axiom schemes is an important feature of natural deduction.[66][67]But there is no general agreement on how natural deduction is to be defined. Some theorists hold that all proof systems with this feature are forms of natural deduction. This would include various forms of sequent calculi[a]or tableau calculi. But other theorists use the term in a more narrow sense, for example, to refer to the proof systems developed by Gentzen and Jaskowski. Because of its simplicity, natural deduction is often used for teaching logic to students.[66]
The geometrical method is a method of philosophy based on deductive reasoning. It starts from a small set of self-evident axioms and tries to build a comprehensive logical system based only on deductive inferences from these first axioms.[69]It was initially formulated by Baruch Spinoza and came to prominence in various rationalist philosophical systems in the modern era.[70]It gets its name from the forms of mathematical demonstration found in traditional geometry, which are usually based on axioms, definitions, and inferred theorems.[71][72]An important motivation of the geometrical method is to repudiate philosophical skepticism by grounding one's philosophical system on absolutely certain axioms. Deductive reasoning is central to this endeavor because of its necessarily truth-preserving nature. This way, the certainty initially invested only in the axioms is transferred to all parts of the philosophical system.[69]
One recurrent criticism of philosophical systems built using the geometrical method is that their initial axioms are not as self-evident or certain as their defenders proclaim.[69]This problem lies beyond the deductive reasoning itself, which only ensures that the conclusion is true if the premises are true, but not that the premises themselves are true. For example, Spinoza's philosophical system has been criticized this way based on objections raised against the causal axiom, i.e. that "the knowledge of an effect depends on and involves knowledge of its cause".[73]A different criticism targets not the premises but the reasoning itself, which may at times implicitly assume premises that are themselves not self-evident.[69]
https://en.wikipedia.org/wiki/Deductive_reasoning
In logic, Peirce's law is named after the philosopher and logician Charles Sanders Peirce. It was taken as an axiom in his first axiomatisation of propositional logic. It can be thought of as the law of excluded middle written in a form that involves only one sort of connective, namely implication.
In propositional calculus, Peirce's law says that ((P→Q)→P)→P. Written out, this means that P must be true if there is a proposition Q such that the truth of P follows from the truth of "if P then Q".
Peirce's law does not hold in intuitionistic logic or intermediate logics and cannot be deduced from the deduction theorem alone.
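That Peirce's law is a classical tautology, while its antecedent (P→Q)→P on its own is not, can be verified by brute-force truth-table evaluation. A minimal Python sketch:

```python
from itertools import product

def implies(p, q):
    """Material conditional as a Boolean function."""
    return (not p) or q

# Peirce's law ((P -> Q) -> P) -> P holds under every classical assignment:
assignments = list(product([False, True], repeat=2))
peirce = all(implies(implies(implies(p, q), p), p) for p, q in assignments)
print(peirce)  # True

# By contrast, (P -> Q) -> P alone is not a tautology (take P false):
print(all(implies(implies(p, q), p) for p, q in assignments))  # False
```

This only establishes classical validity; as noted above, no truth-table argument settles the intuitionistic case, where the law fails.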
Under the Curry–Howard isomorphism, Peirce's law is the type of continuation operators, e.g. call/cc in Scheme.[1]
Here is Peirce's own statement of the law:
Peirce goes on to point out an immediate application of the law:
Warning: As explained in the text, "a" here does not denote a propositional atom, but something like the quantified propositional formula ∀p p. The formula ((x→y)→a)→x would not be a tautology if a were interpreted as an atom.
In intuitionistic logic, if P is proven or rejected, or if Q is proven valid, then Peirce's law for the two propositions holds. But the law's special case when Q is rejected, called consequentia mirabilis, is equivalent to excluded middle already over minimal logic. This also means that Peirce's law entails classical logic over intuitionistic logic. This is shown below.
Firstly, from P→Q follows the equivalence P↔(P∧Q), and so (P→Q)→P is equivalent to (P→Q)→(P∧Q). With this, one can also establish Peirce's law by establishing the equivalent form ((P→Q)→(P∧Q))→P. Considering the case Q=⊥ likewise also shows how double-negation elimination ¬¬P→P implies consequentia mirabilis, and this direction even only uses minimal logic. Now in intuitionistic logic, explosion can be used for ⊥→(P∧⊥), and so here consequentia mirabilis also implies double-negation elimination.
As the double-negated excluded middle is always already valid even in minimal logic, it thus further also implies excluded middle, over intuitionistic logic. In the other direction, one can intuitionistically also show that excluded middle implies the full Peirce's law directly. To this end, note that using the principle of explosion, excluded middle may be expressed as P∨(P→Q). In words, this may be expressed as: "Every proposition P either holds or implies any other proposition."
Now to prove the law, note that (P∨R)→((R→P)→P) is derivable from just implication introduction on the one hand and modus ponens on the other. Finally, in place of R consider P→Q.
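The argument of the last two paragraphs — excluded middle in the form P ∨ (P → Q), combined with the schema (P ∨ R) → ((R → P) → P) at R := P → Q — can be machine-checked. A sketch in Lean 4:

```lean
-- From excluded middle in the form P ∨ (P → Q), Peirce's law follows:
-- in the left case P holds outright; in the right case, the hypothesis
-- (P → Q) → P is applied to the witness of P → Q by modus ponens.
example (P Q : Prop) (em : P ∨ (P → Q)) : ((P → Q) → P) → P :=
  fun h => em.elim (fun hp => hp) (fun hpq => h hpq)

-- The general schema used above: (P ∨ R) → ((R → P) → P)
example (P R : Prop) : (P ∨ R) → ((R → P) → P) :=
  fun hor hrp => hor.elim (fun hp => hp) (fun hr => hrp hr)
```

Both proofs are purely intuitionistic; the classical strength enters only through the hypothesis `em`.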
Another proof of the law in classical logic proceeds by passing through the classically valid reverse disjunctive syllogism twice:
First note that ¬¬P is implied by (¬¬P∧¬Q)∨P, which is intuitionistically equivalent to ¬(¬P∨Q)∨P. Now explosion entails that ¬A∨B implies A→B, and using excluded middle for A here entails that these two are in fact equivalent. Taken together, this means that in classical logic P is equivalent to (P→Q)→P.
Intuitionistically, not even the constraint ¬Q→P always implies Peirce's law for two propositions. Postulating the latter to be valid in its propositional form results in Smetanich's intermediate logic.
Peirce's law allows one to enhance the technique of using the deduction theorem to prove theorems. Suppose one is given a set of premises Γ and one wants to deduce a proposition Z from them. With Peirce's law, one can add (at no cost) additional premises of the form Z→P to Γ. For example, suppose we are given P→Z and (P→Q)→Z and we wish to deduce Z so that we can use the deduction theorem to conclude that (P→Z)→(((P→Q)→Z)→Z) is a theorem. Then we can add another premise Z→Q. From that and P→Z, we get P→Q. Then we apply modus ponens with (P→Q)→Z as the major premise to get Z. Applying the deduction theorem, we get that (Z→Q)→Z follows from the original premises. Then we use Peirce's law in the form ((Z→Q)→Z)→Z and modus ponens to derive Z from the original premises. Then we can finish off proving the theorem as we originally intended.
(P→Z)→(((P→Q)→Z)→Z)
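That this formula is indeed a theorem — a classical tautology — can be confirmed by enumerating all eight truth-value assignments, e.g. in Python:

```python
from itertools import product

def implies(p, q):
    """Material conditional as a Boolean function."""
    return (not p) or q

# The theorem derived in the walkthrough above:
# (P -> Z) -> (((P -> Q) -> Z) -> Z)
def theorem(p, q, z):
    return implies(implies(p, z),
                   implies(implies(implies(p, q), z), z))

print(all(theorem(p, q, z) for p, q, z in product([False, True], repeat=3)))  # True
```

The semantic check is of course no substitute for the syntactic derivation sketched in the text; it only confirms that the target of that derivation is valid.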
One reason that Peirce's law is important is that it can substitute for the law of excluded middle in the logic which only uses implication. The sentences which can be deduced from the axiom schemas:

P→(Q→P)
(P→(Q→R))→((P→Q)→(P→R))
((P→Q)→P)→P (Peirce's law)
(where P, Q, R contain only "→" as a connective) are all the tautologies which use only "→" as a connective.
Since Peirce's law implies the law of the excluded middle, it must always fail in non-classical intuitionistic logics. A simple explicit counterexample is that of Gödel many-valued logics, which are a fuzzy logic where truth values are real numbers between 0 and 1, with material implication defined by:

u → v = 1 if u ≤ v, and v otherwise
and where Peirce's law as a formula can be simplified to:
where its always being true would be equivalent to the statement that u > v implies u = 1, which is true only if 0 and 1 are the only allowed values. At the same time, however, the expression can never be equal to the bottom truth value of the logic, and its double negation is always true.
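Assuming the standard Gödel implication (u → v equals 1 if u ≤ v and v otherwise, matching the description above), the failure of Peirce's law at intermediate truth values can be exhibited numerically. A small Python sketch:

```python
def g_impl(u, v):
    """Goedel implication on [0, 1]: 1 if u <= v, otherwise v."""
    return 1.0 if u <= v else v

def peirce(u, v):
    # ((P -> Q) -> P) -> P with P = u, Q = v
    return g_impl(g_impl(g_impl(u, v), u), u)

# Restricted to the classical values 0 and 1, Peirce's law evaluates to 1:
print(all(peirce(u, v) == 1.0 for u in (0.0, 1.0) for v in (0.0, 1.0)))  # True

# But an intermediate truth value gives a counterexample (here u > v and u < 1):
print(peirce(0.5, 0.3))  # 0.5
```

The counterexample matches the condition stated above: whenever u > v and u < 1, the formula evaluates to u rather than 1.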
https://en.wikipedia.org/wiki/Peirce%27s_law
Probabilistic logic (also probability logic and probabilistic reasoning) involves the use of probability and logic to deal with uncertain situations. Probabilistic logic extends traditional logic truth tables with probabilistic expressions. A difficulty of probabilistic logics is their tendency to multiply the computational complexities of their probabilistic and logical components. Other difficulties include the possibility of counter-intuitive results, such as in case of belief fusion in Dempster–Shafer theory. Source trust and epistemic uncertainty about the probabilities they provide, such as defined in subjective logic, are additional elements to consider. The need to deal with a broad variety of contexts and issues has led to many different proposals.
There are numerous proposals for probabilistic logics. Very roughly, they can be categorized into two different classes: those logics that attempt to make a probabilistic extension to logical entailment, such as Markov logic networks, and those that attempt to address the problems of uncertainty and lack of evidence (evidentiary logics).
That the concept of probability can have different meanings may be understood by noting that, despite the mathematization of probability in the Enlightenment, mathematical probability theory remains, to this very day, entirely unused in criminal courtrooms, when evaluating the "probability" of the guilt of a suspected criminal.[1]
More precisely, in evidentiary logic, there is a need to distinguish the objective truth of a statement from our decision about the truth of that statement, which in turn must be distinguished from our confidence in its truth: thus, a suspect's real guilt is not necessarily the same as the judge's decision on guilt, which in turn is not the same as assigning a numerical probability to the commission of the crime, and deciding whether it is above a numerical threshold of guilt. The verdict on a single suspect may be guilty or not guilty with some uncertainty, just as the flipping of a coin may be predicted as heads or tails with some uncertainty. Given a large collection of suspects, a certain percentage may be guilty, just as the probability of flipping "heads" is one-half. However, it is incorrect to take this law of averages with regard to a single criminal (or single coin-flip): the criminal is no more "a little bit guilty" than predicting a single coin flip to be "a little bit heads and a little bit tails": we are merely uncertain as to which it is. Expressing uncertainty as a numerical probability may be acceptable when making scientific measurements of physical quantities, but it is merely a mathematical model of the uncertainty we perceive in the context of "common sense" reasoning and logic. Just as in courtroom reasoning, the goal of employing uncertain inference is to gather evidence to strengthen the confidence of a proposition, as opposed to performing some sort of probabilistic entailment.
Historically, attempts to quantify probabilistic reasoning date back to antiquity. There was a particularly strong interest starting in the 12th century, with the work of the Scholastics, with the invention of the half-proof (so that two half-proofs are sufficient to prove guilt), the elucidation of moral certainty (sufficient certainty to act upon, but short of absolute certainty), the development of Catholic probabilism (the idea that it is always safe to follow the established rules of doctrine or the opinion of experts, even when they are less probable), the case-based reasoning of casuistry, and the scandal of Laxism (whereby probabilism was used to give support to almost any statement at all, it being possible to find an expert opinion in support of almost any proposition).[1]
Below is a list of proposals for probabilistic and evidentiary extensions to classical and predicate logic.
https://en.wikipedia.org/wiki/Probabilistic_logic
In logic, a functionally complete set of logical connectives or Boolean operators is one that can be used to express all possible truth tables by combining members of the set into a Boolean expression.[1][2]A well-known complete set of connectives is {AND, NOT}. Each of the singleton sets {NAND} and {NOR} is functionally complete. However, the set {AND, OR} is incomplete, due to its inability to express NOT.
A gate (or set of gates) that is functionally complete can also be called a universal gate (or a universal set of gates).
In a context of propositional logic, functionally complete sets of connectives are also called (expressively) adequate.[3]
From the point of view of digital electronics, functional completeness means that every possible logic gate can be realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from either only binary NAND gates, or only binary NOR gates.
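The universality of NAND mentioned above can be demonstrated by truth-table checks: NOT, AND, and OR are each expressible as small NAND networks. A Python sketch (the helper function names are illustrative):

```python
from itertools import product

def nand(a, b):
    return not (a and b)

# Building the standard gates from NAND alone (illustrating universality):
def not_(a):    return nand(a, a)                       # NOT x = x NAND x
def and_(a, b): return nand(nand(a, b), nand(a, b))     # AND = NOT(NAND)
def or_(a, b):  return nand(nand(a, a), nand(b, b))     # OR via De Morgan

for a, b in product([False, True], repeat=2):
    assert not_(a) == (not a)
    assert and_(a, b) == (a and b)
    assert or_(a, b) == (a or b)
print("NAND reproduces NOT, AND and OR")
```

The same construction works for NOR with the roles of the De Morgan duals exchanged.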
Modern texts on logic typically take as primitive some subset of the connectives: conjunction (∧); disjunction (∨); negation (¬); material conditional (→); and possibly the biconditional (↔). Further connectives can be defined, if so desired, by defining them in terms of these primitives. For example, NOR (the negation of the disjunction, sometimes denoted ↓) can be expressed as conjunction of two negations:

P ↓ Q := ¬P ∧ ¬Q
Similarly, the negation of the conjunction, NAND (sometimes denoted ↑), can be defined in terms of disjunction and negation. Every binary connective can be defined in terms of {¬, ∧, ∨, →, ↔}, which means that this set is functionally complete. However, it contains redundancy: this set is not a minimal functionally complete set, because the conditional and biconditional can be defined in terms of the other connectives as

P → Q := ¬P ∨ Q
P ↔ Q := (P → Q) ∧ (Q → P)
It follows that the smaller set {¬, ∧, ∨} is also functionally complete. (Its functional completeness is also proved by the disjunctive normal form theorem.)[4] But this is still not minimal, as ∨ can be defined as
Alternatively, ∧ may be defined in terms of ∨ in a similar manner, or ∨ may be defined in terms of →:
No further simplifications are possible. Hence, every two-element set of connectives containing ¬ and one of {∧, ∨, →} is a minimal functionally complete subset of {¬, ∧, ∨, →, ↔}.
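The interdefinability identities used above can be checked mechanically by enumerating all valuations; a small sketch:

```python
# Sketch: checking the De Morgan definition  a OR b = NOT(NOT a AND NOT b)
# on every valuation, showing that {NOT, AND} suffices to define OR.
for a in (False, True):
    for b in (False, True):
        assert (a or b) == (not ((not a) and (not b)))

# Likewise  a -> b = NOT a OR b, so -> is definable from {NOT, OR}.
for a in (False, True):
    for b in (False, True):
        assert ((not a) or b) == (b if a else True)
```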
Given the Boolean domain B = {0, 1}, a set F of Boolean functions fi : B^ni → B is functionally complete if the clone on B generated by the basic functions fi contains all functions f : B^n → B, for all strictly positive integers n ≥ 1. In other words, the set is functionally complete if every Boolean function that takes at least one variable can be expressed in terms of the functions fi. Since every Boolean function of at least one variable can be expressed in terms of binary Boolean functions, F is functionally complete if and only if every binary Boolean function can be expressed in terms of the functions in F.
A more natural condition would be that the clone generated by F consist of all functions f : B^n → B, for all integers n ≥ 0. However, the examples given above are not functionally complete in this stronger sense, because it is not possible to write a nullary function, i.e. a constant expression, in terms of F if F itself does not contain at least one nullary function. With this stronger definition, the smallest functionally complete sets would have 2 elements.
Another natural condition would be that the clone generated by F together with the two nullary constant functions be functionally complete or, equivalently, functionally complete in the strong sense of the previous paragraph. The example of the Boolean function given by S(x, y, z) = z if x = y and S(x, y, z) = x otherwise shows that this condition is strictly weaker than functional completeness.[5][6][7]
Emil Post proved that a set of logical connectives is functionally complete if and only if it is not a subset of any of the following sets of connectives:
Post gave a complete description of the lattice of all clones (sets of operations closed under composition and containing all projections) on the two-element set {T, F}, nowadays called Post's lattice, which implies the above result as a simple corollary: the five mentioned sets of connectives are exactly the maximal nontrivial clones.[8]
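Post's criterion lends itself to a direct test. A sketch restricted to binary connectives (the five predicates correspond to Post's five maximal clones: truth-preserving, falsity-preserving, monotone, self-dual, and affine; the function names are ours):

```python
# Sketch of Post's completeness criterion for sets of binary
# connectives. A connective is given by its truth table as a dict
# {(a, b): value}. A set is functionally complete iff, for each of
# Post's five maximal clones, some member lies outside that clone.
from itertools import product

INPUTS = list(product((0, 1), repeat=2))

def preserves_0(f): return f[(0, 0)] == 0
def preserves_1(f): return f[(1, 1)] == 1

def monotone(f):
    return all(f[a] <= f[b] for a in INPUTS for b in INPUTS
               if a[0] <= b[0] and a[1] <= b[1])

def self_dual(f):
    return all(f[(1 - a, 1 - b)] == 1 - f[(a, b)] for a, b in INPUTS)

def affine(f):
    # f is affine iff it equals c0 XOR c1*a XOR c2*b for constants ci.
    c0 = f[(0, 0)]
    c1 = f[(1, 0)] ^ c0
    c2 = f[(0, 1)] ^ c0
    return all(f[(a, b)] == c0 ^ (c1 & a) ^ (c2 & b) for a, b in INPUTS)

def complete(fs):
    clones = (preserves_0, preserves_1, monotone, self_dual, affine)
    return all(any(not clone(f) for f in fs) for clone in clones)

NAND = {(a, b): 1 - (a & b) for a, b in INPUTS}
AND  = {(a, b): a & b       for a, b in INPUTS}
OR   = {(a, b): a | b       for a, b in INPUTS}

print(complete([NAND]))      # True: NAND escapes all five clones
print(complete([AND, OR]))   # False: both are monotone, 0- and 1-preserving
```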
When a single logical connective or Boolean operator is functionally complete by itself, it is called a Sheffer function[9] or sometimes a sole sufficient operator. There are no unary operators with this property. NAND and NOR, which are dual to each other, are the only two binary Sheffer functions. These were discovered, but not published, by Charles Sanders Peirce around 1880, and rediscovered independently and published by Henry M. Sheffer in 1913.[10] In digital electronics terminology, the binary NAND gate (↑) and the binary NOR gate (↓) are the only binary universal logic gates.
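That NAND and NOR are the only binary Sheffer functions can be confirmed by brute force over all 16 binary connectives; a sketch (encoding and helper names are ours):

```python
# Sketch: brute-force search for the binary Sheffer functions. A
# binary connective is encoded as the 4-tuple of its outputs on the
# inputs (0,0), (0,1), (1,0), (1,1). Starting from the two
# projections, we close under the composition (h1, h2) -> g(h1, h2)
# and test whether all 16 binary truth tables are reached.
from itertools import product

def generates_all(g):
    known = {(0, 0, 1, 1), (0, 1, 0, 1)}   # projections onto a and b
    while True:
        new = {tuple(g[2 * h1[i] + h2[i]] for i in range(4))
               for h1, h2 in product(known, repeat=2)} - known
        if not new:
            return len(known) == 16
        known |= new

sheffer = [g for g in product((0, 1), repeat=4) if generates_all(g)]
print(sheffer)   # [(1, 0, 0, 0), (1, 1, 1, 0)], i.e. NOR and NAND
```

Closing only under an outer application of g suffices because any nested composition of g over two variables can be built up one outermost application at a time.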
The following are the minimal functionally complete sets of logical connectives with arity ≤ 2:[11]
There are no minimal functionally complete sets of more than three at-most-binary logical connectives.[11] In order to keep the lists above readable, operators that ignore one or more inputs have been omitted. For example, an operator that ignores the first input and outputs the negation of the second can be replaced by a unary negation.
Note that an electronic circuit or a software function can be optimized by reuse, to reduce the number of gates. For instance, the "A ∧ B" operation, when expressed by ↑ gates, is implemented with the reuse of "A ↑ B",
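Since A ∧ B = ¬(A ↑ B) = (A ↑ B) ↑ (A ↑ B), a naive expansion uses three gate applications, but the repeated subterm "A ↑ B" needs to be computed only once. A sketch:

```python
# Sketch: A AND B from NAND gates, reusing the shared subterm.
def nand(a, b):
    return 1 - (a & b)

def and_via_nand(a, b):
    t = nand(a, b)      # one physical gate, output fanned out
    return nand(t, t)   # second gate computes NOT t = a AND b

for a in (0, 1):
    for b in (0, 1):
        assert and_via_nand(a, b) == (a & b)
```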
Apart from logical connectives (Boolean operators), functional completeness can be introduced in other domains. For example, a set of reversible gates is called functionally complete if it can express every reversible operator.
The 3-input Fredkin gate is a functionally complete reversible gate by itself – a sole sufficient operator. There are many other three-input universal logic gates, such as the Toffoli gate.
In quantum computing, the Hadamard gate and the T gate are universal, albeit with a slightly more restrictive definition than that of functional completeness.
There is an isomorphism between the algebra of sets and the Boolean algebra; that is, they have the same structure. Hence, if Boolean operators are mapped to set operators, the "translated" text above is also valid for sets: there are many minimal functionally complete sets of set-theory operators that can generate any other set relations. The more popular minimal complete operator sets are {¬, ∩} and {¬, ∪}. If the universal set is forbidden, set operators are restricted to being falsity (Ø) preserving, and cannot be equivalent to a functionally complete Boolean algebra.
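The set-theoretic translation can be illustrated directly: with a fixed universe, union is definable from complement and intersection (and vice versa) by the set analogue of De Morgan's laws. A sketch with an arbitrary small universe:

```python
# Sketch: with a fixed universe U, union is definable from
# complement and intersection, mirroring De Morgan's laws.
U = set(range(8))

def comp(s):
    return U - s   # complement relative to the universe

A, B = {1, 2, 3}, {3, 4, 5}
assert A | B == comp(comp(A) & comp(B))   # union from {comp, intersection}
assert A & B == comp(comp(A) | comp(B))   # intersection from {comp, union}
```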
|
https://en.wikipedia.org/wiki/Sole_sufficient_operator
|
In formal semantics, Strawson entailment is a variant of the concept of entailment which is insensitive to presupposition failures. Formally, a sentence P Strawson-entails a sentence Q iff Q is always true when P is true and Q's presuppositions are satisfied. For example, "Maria loves every cat" Strawson-entails "Maria loves her cat", because Maria could not love every cat without loving her own, assuming that she has one. This would not be an ordinary entailment, since the first sentence could be true while the second is undefined on account of a presupposition failure; loving every cat does not guarantee that she owns a cat.[1][2][3]
Strawson entailment has played an important role in semantic theory, since some natural language expressions have been argued to be sensitive to Strawson entailment rather than pure entailment. For instance, the textbook theory of weak negative polarity items holds that they are licensed only in Strawson-downward-entailing environments. Other phenomena that have been analyzed using Strawson entailment include temporal adverbials, covert reciprocals, and scalar implicature.[1][2][3][4] Although the concept is widely used within formal semantics, it is not universally adopted, and alternative proposals have argued both for returning to pure entailment and for generalizing the notion further to consider not-at-issue content beyond presupposition.[3][1][5]
|
https://en.wikipedia.org/wiki/Strawson_entailment
|
In logic, a strict conditional (symbol: □, or ⥽) is a conditional governed by a modal operator, that is, a logical connective of modal logic. It is logically equivalent to the material conditional of classical logic, combined with the necessity operator from modal logic. For any two propositions p and q, the formula p → q says that p materially implies q, while □(p → q) says that p strictly implies q.[1] Strict conditionals are the result of Clarence Irving Lewis's attempt to find a conditional for logic that can adequately express indicative conditionals in natural language.[2][3] They have also been used in studying Molinist theology.[4]
Strict conditionals may avoid the paradoxes of material implication. The following statement, for example, is not correctly formalized by material implication:
This conditional should clearly be false: the degree of Bill Gates has nothing to do with whether Elvis is still alive. However, the direct encoding of this formula in classical logic using material implication leads to:
This formula is true because whenever the antecedent A is false, a formula A → B is true. Hence, this formula is not an adequate translation of the original sentence. An encoding using the strict conditional is:
In modal logic, this formula means (roughly) that, in every possible world in which Bill Gates graduated in medicine, Elvis never died. Since one can easily imagine a world where Bill Gates is a medicine graduate and Elvis is dead, this formula is false. Hence, this formula seems to be a correct translation of the original sentence.
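The contrast can be made concrete with a toy possible-worlds model (the worlds and atom names below are illustrative choices, not part of the source text): "gates_md" stands for "Bill Gates graduated in medicine" and "elvis_alive" for "Elvis never died".

```python
# Sketch: material vs. strict implication over a toy set of worlds.
worlds = [
    {"gates_md": False, "elvis_alive": False},  # the actual world
    {"gates_md": True,  "elvis_alive": False},  # an easily imagined world
]
actual = worlds[0]

def material(w, p, q):   # p -> q, evaluated at a single world
    return (not w[p]) or w[q]

def strict(p, q):        # box(p -> q): p -> q must hold in every world
    return all(material(w, p, q) for w in worlds)

print(material(actual, "gates_md", "elvis_alive"))  # True (antecedent false)
print(strict("gates_md", "elvis_alive"))            # False
```

The material conditional comes out (vacuously) true at the actual world, while the strict conditional is falsified by the imagined world where Bill Gates is a medicine graduate and Elvis is dead.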
Although the strict conditional is much closer to being able to express natural language conditionals than the material conditional, it has its own problems with consequents that are necessarily true (such as 2 + 2 = 4) or antecedents that are necessarily false.[5] The following sentence, for example, is not correctly formalized by a strict conditional:
Using strict conditionals, this sentence is expressed as:
In modal logic, this formula means that, in every possible world where Bill Gates graduated in medicine, it holds that 2 + 2 = 4. Since 2 + 2 is equal to 4 in all possible worlds, this formula is true, although it does not seem that the original sentence should be. A similar situation arises with 2 + 2 = 5, which is necessarily false:
Some logicians view this situation as indicating that the strict conditional is still unsatisfactory. Others have noted that the strict conditional cannot adequately express counterfactual conditionals,[6] and that it does not satisfy certain logical properties.[7] In particular, the strict conditional is transitive, while the counterfactual conditional is not.[8]
Some logicians, such as Paul Grice, have used conversational implicature to argue that, despite apparent difficulties, the material conditional is just fine as a translation for the natural language 'if...then...'. Others have turned to relevance logic to supply a connection between the antecedent and consequent of provable conditionals.
In a constructive setting, the symmetry between ⥽ and □ is broken, and the two connectives can be studied independently. Constructive strict implication can be used to investigate the interpretability of Heyting arithmetic and to model arrows and guarded recursion in computer science.[9]
|
https://en.wikipedia.org/wiki/Strict_conditional
|
In mathematical logic, a tautology (from Ancient Greek: ταυτολογία) is a formula that is true regardless of the interpretation of its component terms, with only the logical constants having a fixed meaning. For example, a formula that states, "the ball is green or the ball is not green," is always true, regardless of what a ball is and regardless of its colour. Tautology is usually, though not always, used to refer to valid formulas of propositional logic.
The philosopher Ludwig Wittgenstein first applied the term to redundancies of propositional logic in 1921, borrowing from rhetoric, where a tautology is a repetitive statement. In logic, a formula is satisfiable if it is true under at least one interpretation, and thus a tautology is a formula whose negation is unsatisfiable. In other words, it cannot be false.
Unsatisfiable statements, both through negation and affirmation, are known formally as contradictions. A formula that is neither a tautology nor a contradiction is said to be logically contingent. Such a formula can be made either true or false based on the values assigned to its propositional variables.
The double turnstile notation ⊨ S is used to indicate that S is a tautology. Tautology is sometimes symbolized by "Vpq", and contradiction by "Opq". The tee symbol ⊤ is sometimes used to denote an arbitrary tautology, with the dual symbol ⊥ (falsum) representing an arbitrary contradiction; in any symbolism, a tautology may be substituted for the truth value "true", as symbolized, for instance, by "1".[1]
Tautologies are a key concept in propositional logic, where a tautology is defined as a propositional formula that is true under any possible Boolean valuation of its propositional variables.[2] A key property of tautologies in propositional logic is that an effective method exists for testing whether a given formula is always satisfied (equivalently, whether its negation is unsatisfiable).
The definition of tautology can be extended to sentences in predicate logic, which may contain quantifiers, a feature absent from sentences of propositional logic. Indeed, in propositional logic, there is no distinction between a tautology and a logically valid formula. In the context of predicate logic, many authors define a tautology to be a sentence that can be obtained by taking a tautology of propositional logic and uniformly replacing each propositional variable by a first-order formula (one formula per propositional variable). The set of such formulas is a proper subset of the set of logically valid sentences of predicate logic (i.e., sentences that are true in every model).
The word tautology was used by the ancient Greeks to describe a statement that was asserted to be true merely by virtue of saying the same thing twice, a pejorative meaning that is still used for rhetorical tautologies. Between 1800 and 1940, the word gained new meaning in logic, and is currently used in mathematical logic to denote a certain type of propositional formula, without the pejorative connotations it originally possessed.
In 1800, Immanuel Kant wrote in his book Logic:
The identity of concepts in analytical judgments can be either explicit (explicita) or non-explicit (implicita). In the former case analytic propositions are tautological.
Here, analytic proposition refers to an analytic truth, a statement in natural language that is true solely because of the terms involved.
In 1884, Gottlob Frege proposed in his Grundlagen that a truth is analytic exactly if it can be derived using logic. However, he maintained a distinction between analytic truths (i.e., truths based only on the meanings of their terms) and tautologies (i.e., statements devoid of content).
In his Tractatus Logico-Philosophicus of 1921, Ludwig Wittgenstein proposed that statements that can be deduced by logical deduction are tautological (empty of meaning), as well as being analytic truths. Henri Poincaré had made similar remarks in Science and Hypothesis in 1905. Although Bertrand Russell at first argued against these remarks by Wittgenstein and Poincaré, claiming that mathematical truths were not only non-tautologous but were synthetic, he later spoke in favor of them in 1918:
Everything that is a proposition of logic has got to be in some sense or the other like a tautology. It has got to be something that has some peculiar quality, which I do not know how to define, that belongs to logical propositions but not to others.
Here, logical proposition refers to a proposition that is provable using the laws of logic.
Many logicians in the early 20th century used the term 'tautology' for any formula that is universally valid, whether a formula of propositional logic or of predicate logic. In this broad sense, a tautology is a formula that is true under all interpretations, or that is logically equivalent to the negation of a contradiction. Tarski and Gödel followed this usage, and it appears in textbooks such as that of Lewis and Langford.[3] This broad use of the term is less common today, though some textbooks continue to use it.[4][5]
Modern textbooks more commonly restrict the use of 'tautology' to valid sentences of propositional logic, or valid sentences of predicate logic that can be reduced to propositional tautologies by substitution.[6][7]
Propositional logic begins with propositional variables, atomic units that represent concrete propositions. A formula consists of propositional variables connected by logical connectives, built up in such a way that the truth of the overall formula can be deduced from the truth or falsity of each variable. A valuation is a function that assigns to each propositional variable either T (for truth) or F (for falsity). So by using the propositional variables A and B, the binary connectives ∨ and ∧ representing disjunction and conjunction respectively, and the unary connective ¬ representing negation, the following formula can be obtained: (A ∧ B) ∨ (¬A) ∨ (¬B).
A valuation here must assign to each of A and B either T or F. But no matter how this assignment is made, the overall formula will come out true. For if the first disjunct (A ∧ B) is not satisfied by a particular valuation, then A or B must be assigned F, which will make one of the following disjuncts be assigned T. In natural language: either both A and B are true, or at least one of them is false.
A formula of propositional logic is a tautology if the formula itself is always true, regardless of which valuation is used for the propositional variables. There are infinitely many tautologies.
In many of the following examples A represents the statement "object X is bound", B represents "object X is a book", and C represents "object X is on the shelf". Without a specific referent object X, A → B corresponds to the proposition "all bound things are books".
A minimal tautology is a tautology that is not an instance of a shorter tautology.
The problem of determining whether a formula is a tautology is fundamental in propositional logic. If there are n variables occurring in a formula then there are 2^n distinct valuations for the formula. Therefore, the task of determining whether or not the formula is a tautology is a finite and mechanical one: one needs only to evaluate the truth value of the formula under each of its possible valuations. One algorithmic method for verifying that every valuation makes the formula true is to make a truth table that includes every possible valuation.[2]
For example, consider the formula
There are 8 possible valuations for the propositional variables A, B, C, represented by the first three columns of the following table. The remaining columns show the truth of subformulas of the formula above, culminating in a column showing the truth value of the original formula under each valuation.
Because each row of the final column showsT, the sentence in question is verified to be a tautology.
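The truth-table method can be mechanized directly: represent a formula as a function of its variables and evaluate it under all 2^n valuations. A minimal sketch (the helper name is ours):

```python
# Sketch: a brute-force tautology test mirroring the truth-table
# method. A formula is a Python function of its propositional
# variables; we evaluate it under all 2^n valuations.
from itertools import product

def is_tautology(formula, n_vars):
    return all(formula(*vals)
               for vals in product((False, True), repeat=n_vars))

# (A AND B) OR NOT A OR NOT B, the example formula from the text:
print(is_tautology(lambda a, b: (a and b) or (not a) or (not b), 2))  # True

# A OR B is satisfiable but not a tautology:
print(is_tautology(lambda a, b: a or b, 2))  # False
```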
It is also possible to define a deductive system (i.e., proof system) for propositional logic, as a simpler variant of the deductive systems employed for first-order logic (see Kleene 1967, sec. 1.9 for one such system). A proof of a tautology in an appropriate deduction system may be much shorter than a complete truth table (a formula with n propositional variables requires a truth table with 2^n lines, which quickly becomes infeasible as n increases). Proof systems are also required for the study of intuitionistic propositional logic, in which the method of truth tables cannot be employed because the law of the excluded middle is not assumed.
A formula R is said to tautologically imply a formula S if every valuation that causes R to be true also causes S to be true. This situation is denoted R ⊨ S. It is equivalent to the formula R → S being a tautology (Kleene 1967, p. 27).
For example, let S be A ∧ (B ∨ ¬B). Then S is not a tautology, because any valuation that makes A false will make S false. But any valuation that makes A true will make S true, because B ∨ ¬B is a tautology. Let R be the formula A ∧ C. Then R ⊨ S, because any valuation satisfying R will make A true, and thus makes S true.
It follows from the definition that if a formula R is a contradiction, then R tautologically implies every formula, because there is no truth valuation that causes R to be true, and so the definition of tautological implication is trivially satisfied. Similarly, if S is a tautology, then S is tautologically implied by every formula.
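Tautological implication can be checked by the same enumeration of valuations, as in the example above with R = A ∧ C and S = A ∧ (B ∨ ¬B); a sketch (the helper name is ours):

```python
# Sketch: checking R entails S by enumerating all valuations and
# testing S on exactly those valuations that make R true.
from itertools import product

def tautologically_implies(r, s, n_vars):
    return all(s(*v)
               for v in product((False, True), repeat=n_vars)
               if r(*v))

R = lambda a, b, c: a and c
S = lambda a, b, c: a and (b or not b)
print(tautologically_implies(R, S, 3))   # True
print(tautologically_implies(S, R, 3))   # False (take A = B = T, C = F)
```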
There is a general procedure, the substitution rule, that allows additional tautologies to be constructed from a given tautology (Kleene 1967, sec. 3). Suppose that S is a tautology and for each propositional variable A in S a fixed sentence SA is chosen. Then the sentence obtained by replacing each variable A in S with the corresponding sentence SA is also a tautology.
For example, let S be the tautology:
Let SA be C ∨ D and let SB be C → E.
It follows from the substitution rule that the sentence:
is also a tautology.
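The substitution rule can be illustrated mechanically. The base tautology below, (A ∧ B) → A, is our illustrative choice (the source does not display S in this extract); substituting SA = C ∨ D and SB = C → E preserves tautologousness:

```python
# Sketch of the substitution rule: substituting formulas for the
# variables of a tautology yields another tautology.
from itertools import product

def is_tautology(f, n):
    return all(f(*v) for v in product((False, True), repeat=n))

base = lambda a, b: (not (a and b)) or a          # (A AND B) -> A
assert is_tautology(base, 2)

s_a = lambda c, d, e: c or d                      # SA = C OR D
s_b = lambda c, d, e: (not c) or e                # SB = C -> E
substituted = lambda c, d, e: base(s_a(c, d, e), s_b(c, d, e))
print(is_tautology(substituted, 3))               # True
```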
An axiomatic system is complete if every tautology is a theorem (derivable from axioms). An axiomatic system is sound if every theorem is a tautology.
The problem of constructing practical algorithms to determine whether sentences with large numbers of propositional variables are tautologies is an area of contemporary research in automated theorem proving.
The method of truth tables illustrated above is provably correct – the truth table for a tautology will end in a column with only T, while the truth table for a sentence that is not a tautology will contain a row whose final column is F, and the valuation corresponding to that row is a valuation that does not satisfy the sentence being tested. This method for verifying tautologies is an effective procedure, which means that given unlimited computational resources it can always be used to mechanistically determine whether a sentence is a tautology. This means, in particular, that the set of tautologies over a fixed finite or countable alphabet is a decidable set.
As an efficient procedure, however, truth tables are constrained by the fact that the number of valuations that must be checked increases as 2^k, where k is the number of variables in the formula. This exponential growth in the computation length renders the truth table method useless for formulas with thousands of propositional variables, as contemporary computing hardware cannot execute the algorithm in a feasible time period.
The problem of determining whether there is any valuation that makes a formula true is the Boolean satisfiability problem; the problem of checking tautologies is equivalent to this problem, because verifying that a sentence S is a tautology is equivalent to verifying that there is no valuation satisfying ¬S. The Boolean satisfiability problem is NP-complete, and consequently, tautology is co-NP-complete. It is widely believed that (equivalently for all NP-complete problems) no polynomial-time algorithm can solve the satisfiability problem, although some algorithms perform well on special classes of formulas, or terminate quickly on many instances.[8]
The fundamental definition of a tautology is in the context of propositional logic. The definition can be extended, however, to sentences in first-order logic.[9] These sentences may contain quantifiers, unlike sentences of propositional logic. In the context of first-order logic, a distinction is maintained between logical validities, sentences that are true in every model, and tautologies (or tautological validities), which are a proper subset of the first-order logical validities. In the context of propositional logic, these two terms coincide.
A tautology in first-order logic is a sentence that can be obtained by taking a tautology of propositional logic and uniformly replacing each propositional variable by a first-order formula (one formula per propositional variable). For example, because A ∨ ¬A is a tautology of propositional logic, (∀x (x = x)) ∨ (¬∀x (x = x)) is a tautology in first-order logic. Similarly, in a first-order language with unary relation symbols R, S, T, the following sentence is a tautology:
It is obtained by replacing A with ∃x Rx, B with ¬∃x Sx, and C with ∀x Tx in the propositional tautology ((A ∧ B) → C) ⇔ (A → (B → C)).
Whether a given formula is a tautology depends on the formal system of logic that is in use. For example, the following formula is a tautology of classical logic but not of intuitionistic logic:
|
https://en.wikipedia.org/wiki/Tautology_(logic)
|
In propositional logic, tautological consequence is a strict form of logical consequence[1] in which the tautologousness of a proposition is preserved from one line of a proof to the next. Not all logical consequences are tautological consequences. A proposition Q is said to be a tautological consequence of one or more other propositions (P1, P2, ..., Pn) in a proof with respect to some logical system if one is validly able to introduce the proposition onto a line of the proof within the rules of the system, and in all cases when each of P1, P2, ..., Pn is true, the proposition Q also is true.
Another way to express this preservation of tautologousness is by using truth tables. A proposition Q is said to be a tautological consequence of one or more other propositions (P1, P2, ..., Pn) if and only if every row of a joint truth table that assigns "T" to all of P1, P2, ..., Pn also assigns "T" to Q.
a = "Socrates is a man." b = "All men are mortal." c = "Socrates is mortal."
The conclusion of this argument is a logical consequence of the premises, because it is impossible for all the premises to be true while the conclusion is false.
Reviewing the truth table, it turns out the conclusion of the argument is not a tautological consequence of the premise. Not every row that assigns T to the premise also assigns T to the conclusion. In particular, the second row assigns T to a ∧ b but does not assign T to c.
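The joint-truth-table test can be run mechanically on the Socrates example. Treating a, b, c as independent propositional atoms, there is a row with a = b = T and c = F, so c is not a tautological consequence of a ∧ b. A sketch (the helper name is ours):

```python
# Sketch: tautological consequence via a joint truth table. The
# conclusion must be true in every row that makes all premises true.
from itertools import product

def tautological_consequence(premises, conclusion, n_vars):
    return all(conclusion(*v)
               for v in product((False, True), repeat=n_vars)
               if all(p(*v) for p in premises))

premise = lambda a, b, c: a and b
conclusion = lambda a, b, c: c
print(tautological_consequence([premise], conclusion, 3))  # False
```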
Tautological consequence can also be defined, with the same effect, by requiring that P1 ∧ P2 ∧ ... ∧ Pn → Q be a substitution instance of a tautology.[2]
It follows from the definition that if a proposition p is a contradiction, then p tautologically implies every proposition, because there is no truth valuation that causes p to be true, and so the definition of tautological implication is trivially satisfied. Similarly, if p is a tautology, then p is tautologically implied by every proposition.
|
https://en.wikipedia.org/wiki/Tautological_consequence
|
In logical argument and mathematical proof, the therefore sign, ∴, is generally used before a logical consequence, such as the conclusion of a syllogism. The symbol consists of three dots placed in an upright triangle and is read therefore. While it is not generally used in formal writing, it is used in mathematics and shorthand.
According to Florian Cajori in A History of Mathematical Notations, the Swiss mathematician Johann Rahn used both an upright and an inverted triangle of dots to mean therefore. In the German edition of Teutsche Algebra (1659), he used the upright triangle with its modern meaning, but in the 1668 English edition Rahn used the inverted triangle more often to mean 'therefore'.[1]: vol. 1, p. 211 Other authors in the 18th century also used three dots in a triangle shape to signify 'therefore', but as with Rahn, there was little consistency as to how the triangle was oriented. Use of an upright triangle exclusively to mean therefore (and an inverted triangle exclusively to mean because) appears to have originated in the 19th century. In the 20th century, the three-dot notation for 'therefore' became very rare in continental Europe, but it remains popular in Anglophone countries.[1]
Used in asyllogism:
and in mathematics:
In meteorology, the therefore sign is used to indicate "moderate rain" on a station model; the similar typographic symbol asterism (⁂, three asterisks) indicates moderate snow.[2][3]
In Freemasonry traditions, the symbol is used to indicate a Masonic abbreviation (rather than the period mark used conventionally with some abbreviations). For example, "R∴W∴ John Smith" is an abbreviation for "Right Worshipful John Smith" (the term Right Worshipful is an honorific and indicates that Smith is a Grand Lodge officer).[4]
The symbol has a Unicode code point at U+2234 ∴ THEREFORE. See Unicode input for keyboard-entering methods.
One can write the symbol in LaTeX by using the amssymb package with the \therefore command.
The inverted form, ∵, known as the because sign, is sometimes used as a shorthand form of "because".
The character ஃ (visarga) in the Tamil script represents the āytam, a special sound of the Tamil language.
An asterism, ⁂, is a typographic symbol consisting of three asterisks placed in a triangle. Its purpose is to "indicate minor breaks in text", to call attention to a passage, or to separate sub-chapters in a book. It is also used in meteorology to indicate 'moderate snowfall'.
The graphically identical sign ∴ serves as a Japanese map symbol on the maps of the Geographical Survey Institute of Japan, indicating a tea plantation. On some maps, a version of the sign with thicker dots, ⛬, is used to signal the presence of a national monument, historic site or ruins; it has its own Unicode code point.[5]
In Norwegian and Danish, a superficially similar symbol was formerly used as an explanatory symbol (forklaringstegnet). It can be typeset using the open o followed by a colon, thus: ɔ:. It is used for the meaning "namely", id est (i.e.), scilicet (viz.) or similar.[6]
|
https://en.wikipedia.org/wiki/Therefore_sign
|
In mathematical logic and computer science, the symbol ⊢ has taken the name turnstile because of its resemblance to a typical turnstile if viewed from above. It is also referred to as tee and is often read as "yields", "proves", "satisfies" or "entails".
The turnstile represents a binary relation. It has several different interpretations in different contexts:
In TeX, the turnstile symbol ⊢ is obtained from the command \vdash.
In Unicode, the turnstile symbol (⊢) is called right tack and is at code point U+22A2.[14] (Code point U+22A6 is named assertion sign (⊦).)
On a typewriter, a turnstile can be composed from a vertical bar (|) and a dash (–).
In LaTeX there is a turnstile package which issues this sign in many ways, and is capable of putting labels below or above it, in the correct places.[15]
|
https://en.wikipedia.org/wiki/Turnstile_(symbol)
|
In logic, the symbol ⊨ or ⊧ is called the double turnstile. It is often read as "entails", "models", "is a semantic consequence of" or "is stronger than".[1] It is closely related to the turnstile symbol ⊢, which has a single bar across the middle and denotes syntactic consequence (in contrast to semantic).
The double turnstile is a binary relation. It has several different meanings in different contexts:
In TeX, the double turnstile symbol is obtained from the commands \vDash and \models.
In Unicode it is encoded at U+22A8 ⊨ TRUE, and its negation at U+22AD ⊭ NOT TRUE.
In LaTeX there is the turnstile package, which issues this sign in many ways, including the double turnstile, and is capable of putting labels below or above it, in the correct places. The article A Tool for Logicians is a tutorial on using this package.
|
https://en.wikipedia.org/wiki/Double_turnstile
|
An existential clause is a clause that refers to the existence or presence of something, such as "There is a God" and "There are boys in the yard". The use of such clauses can be considered analogous to existential quantification in predicate logic, which is often expressed with the phrase "There exist(s)...".
Different languages have different ways of forming and using existential clauses. For details on the English forms, see English grammar: There as pronoun.
Many languages form existential clauses without any particular marker by simply using forms of the normalcopulaverb (the equivalent of Englishbe), thesubjectbeing the noun (phrase) referring to the thing whose existence is asserted. For example, theFinnishsentencePihalla on poikia, meaning "There are boys in the yard", is literally "On the yard is boys". Some languages have a different verb for that purpose:SwedishfinnashasDet finns pojkar på gården, literally "It is found boys on the yard". On the other hand, some languages do not require a copula at all, and sentences analogous to "In the yard boys" are used. Some languages use the verbhave; for exampleSerbo-CroatianU dvorištu ima dječakais literally "In the yard has boys".[1]
Some languages form the negative of existential clauses irregularly; for example, inRussian,естьyest("there is/are") is used in affirmative existential clauses (in the present tense), but the negative equivalent isнетnyet("there is/are not"), used with the logical subject in thegenitive case.
In English, existential clauses usually use thedummy subjectconstruction (also known as expletive) withthere(infinitive: there be), as in "There are boys in the yard", butthereis sometimes omitted when the sentence begins with anotheradverbial(usually designating a place), as in "In my room (there) is a large box." Other languages with constructions similar to the English dummy subject includeFrench(seeil y a) andGerman, which useses ist,es sindores gibt, literally "it is", "it are", "it gives".
The principal meaning of existential clauses is to refer to the existence of something or the presence of something in a particular place or time. For example, "There is a God" asserts the existence of a God, but "There is a pen on the desk" asserts the presence or existence of a pen in a particular place.
Existential clauses can be modified like other clauses in terms of tense, negation, interrogative inversion, modality, finiteness, etc. For example, one can say "There was a God", "There is not a God" ("There is no God"), "Is there a God?", "There might be a God", "He was anxious for there to be a God", etc.
An existential sentence is one of four structures associated with the Pingelapese language of Micronesia. The form heavily uses a post-verbal subject order and explains what exists or does not exist. Only a few Pingelapese verbs are used in the existential sentence structure: minae- "to exist", soh- "not to exist", dir- "to exist in large numbers", and daeri- "to be finished". All four verbs have a post-verbal subject in common and usually introduce new characters to a story. If a character is already known, the verb would appear in the preverbal position.[2]
In some languages, linguistic possession (in a broad sense) is indicated by existential clauses, rather than by a verb like have. For example, in Russian, "I have a friend" can be expressed by the sentence у меня есть друг u menya yest' drug, literally "at me there is a friend". Russian has a verb иметь imet' meaning "have", but it is less commonly used than the former method.
Other examples include Irish Tá peann agam "(There) is (a) pen at me" (for "I have a pen"), Hungarian Van egy halam "(There) is a fish-my" (for "I have a fish"), and Turkish İki defterim var "two notebook-my (there) is" (for "I have two notebooks").
In Maltese, a change over time has been noted: "in the possessive construction, subject properties have been transferred diachronically from the possessed noun phrase to the possessor, while the possessor has all the subject properties except the form of the verb agreement that it triggers."[3]
|
https://en.wikipedia.org/wiki/Existential_clause
|
In mathematics, an existence theorem is a theorem which asserts the existence of a certain object.[1] It might be a statement which begins with the phrase "there exist(s)", or it might be a universal statement whose last quantifier is existential (e.g., "for all x, y, ... there exist(s) ..."). In the formal terms of symbolic logic, an existence theorem is a theorem with a prenex normal form involving the existential quantifier, even though in practice, such theorems are usually stated in standard mathematical language. For example, the statement that the sine function is continuous everywhere, or any theorem written in big O notation, can be considered as theorems which are existential by nature, since the quantification can be found in the definitions of the concepts used.
A controversy that goes back to the early twentieth century concerns the issue of purely theoretic existence theorems, that is, theorems which depend on non-constructive foundational material such as the axiom of infinity, the axiom of choice or the law of excluded middle. Such theorems provide no indication as to how to construct (or exhibit) the object whose existence is being claimed. From a constructivist viewpoint, such approaches are not viable, as they lead to mathematics losing its concrete applicability;[2] the opposing viewpoint is that abstract methods are far-reaching, in a way that numerical analysis cannot be.
In mathematics, an existence theorem is purely theoretical if the proof given for it does not indicate a construction of the object whose existence is asserted. Such a proof is non-constructive,[3] since the whole approach may not lend itself to construction.[4] In terms of algorithms, purely theoretical existence theorems bypass all algorithms for finding what is asserted to exist. These are to be contrasted with the so-called "constructive" existence theorems,[5] which many constructivist mathematicians working in extended logics (such as intuitionistic logic) believe to be intrinsically stronger than their non-constructive counterparts.
Despite that, purely theoretical existence results are nevertheless ubiquitous in contemporary mathematics. For example, John Nash's original proof of the existence of a Nash equilibrium in 1951 was such an existence theorem. A constructive approach was later found in 1962.[6]
From the other direction, there has been considerable clarification of what constructive mathematics is, without the emergence of a 'master theory'. For example, according to Errett Bishop's definitions, the continuity of a function such as sin(x) should be proved as a constructive bound on the modulus of continuity, meaning that the existential content of the assertion of continuity is a promise that can always be kept. Accordingly, Bishop rejects the standard idea of pointwise continuity, and proposes that continuity should be defined in terms of "local uniform continuity".[7] Another view of existence theorems comes from type theory, in which a proof of an existential statement can come only from a term (which one can see as the computational content).
|
https://en.wikipedia.org/wiki/Existence_theorem
|
In mathematical logic, a Lindström quantifier is a generalized polyadic quantifier. Lindström quantifiers generalize first-order quantifiers, such as the existential quantifier, the universal quantifier, and the counting quantifiers. They were introduced by Per Lindström in 1966. They were later studied for their applications in logic in computer science and database query languages.
In order to facilitate discussion, some notational conventions need explaining. The expression

ϕ^{A,x,ā} = {a ∈ dom(A) : A ⊨ ϕ[a, ā]}

is used for A an L-structure (or L-model) in a language L, ϕ an L-formula, and ā a tuple of elements of the domain dom(A) of A. In other words, ϕ^{A,x,ā} denotes a (monadic) property defined on dom(A). In general, where x is replaced by an n-tuple x̄ of free variables, ϕ^{A,x̄,ā} denotes an n-ary relation defined on dom(A). Each quantifier Q_A is relativized to a structure, since each quantifier is viewed as a family of relations (between relations) on that structure. For a concrete example, take the universal and existential quantifiers ∀ and ∃, respectively. Their truth conditions can be specified as

A ⊨ ∀x ϕ[ā]  if and only if  ϕ^{A,x,ā} ∈ ∀_A
A ⊨ ∃x ϕ[ā]  if and only if  ϕ^{A,x,ā} ∈ ∃_A
where ∀_A is the singleton whose sole member is dom(A), and ∃_A is the set of all non-empty subsets of dom(A) (i.e. the power set of dom(A) minus the empty set). In other words, each quantifier is a family of properties on dom(A), so each is called a monadic quantifier. Any quantifier defined as an n > 0-ary relation between properties on dom(A) is called monadic. Lindström introduced polyadic ones that are n > 0-ary relations between relations on domains of structures.
Before we go on to Lindström's generalization, notice that any family of properties on dom(A) can be regarded as a monadic generalized quantifier. For example, the quantifier "there are exactly n things such that ..." is a family of subsets of the domain of a structure, each of which has cardinality n. Then, "there are exactly 2 things such that ϕ" is true in A if and only if the set of things that are such that ϕ is a member of the set of all subsets of dom(A) of size 2.
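Over a finite domain, this "exactly n" reading can be checked directly by counting witnesses. A minimal sketch; the domain and the predicate below are chosen purely for illustration and are not from the article:

```python
def exactly_n(domain, phi, n):
    """True iff exactly n elements of the finite domain satisfy phi.

    This mirrors the set-theoretic reading above: the extension of phi
    must belong to the family of subsets of the domain of cardinality n.
    """
    return sum(1 for a in domain if phi(a)) == n

# Illustrative: "phi" holds of exactly two things in the domain 0..9.
print(exactly_n(range(10), lambda a: a in (0, 4), 2))       # True
print(exactly_n(range(10), lambda a: a % 2 == 0, 2))        # False: five evens
```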
A Lindström quantifier is a polyadic generalized quantifier, so instead of being a relation between subsets of the domain, it is a relation between relations defined on the domain. For example, the quantifier Q_A x₁x₂ y₁ z₁z₂z₃ (ϕ(x₁x₂), ψ(y₁), θ(z₁z₂z₃)) is defined semantically as

A ⊨ Q x₁x₂ y₁ z₁z₂z₃ (ϕ, ψ, θ)  if and only if  (ϕ^{A,x̄,ā}, ψ^{A,y₁,ā}, θ^{A,z̄,ā}) ∈ Q_A

where

ϕ^{A,x̄,ā} = {ā′ ∈ dom(A)ⁿ : A ⊨ ϕ[ā′, ā]}

for an n-tuple x̄ of variables.
Lindström quantifiers are classified according to the number structure of their parameters. For example, Qxy ϕ(x) ψ(y) is a type (1,1) quantifier, whereas Qxy ϕ(x,y) is a type (2) quantifier. An example of a type (1,1) quantifier is Härtig's quantifier, which tests equicardinality, i.e. the extension of {A, B ⊆ M : |A| = |B|}. An example of a type (4) quantifier is the Henkin quantifier.
The first result in this direction was obtained by Lindström (1966), who showed that a type (1,1) quantifier is not definable in terms of a type (1) quantifier. After Lauri Hella (1989) developed a general technique for proving the relative expressiveness of quantifiers, the resulting hierarchy turned out to be lexicographically ordered by quantifier type:
For every type t, there is a quantifier of that type that is not definable in first-order logic extended with quantifiers of types less than t.
Although Lindström had only partially developed the hierarchy of quantifiers which now bear his name, it was enough for him to observe that some nice properties of first-order logic are lost when it is extended with certain generalized quantifiers. For example, adding a "there exist finitely many" quantifier results in a loss of compactness, whereas adding a "there exist uncountably many" quantifier to first-order logic results in a logic no longer satisfying the Löwenheim–Skolem theorem. In 1969 Lindström proved a much stronger result now known as Lindström's theorem, which intuitively states that first-order logic is the "strongest" logic having both properties.
|
https://en.wikipedia.org/wiki/Lindstr%C3%B6m_quantifier
|
The term quantifier variance refers to claims that there is no uniquely best ontological language with which to describe the world.[1] The term "quantifier variance" rests upon the philosophical term 'quantifier', more precisely the existential quantifier. A 'quantifier' is an expression like "there exists at least one 'such-and-such'".[2] Quantifier variance then is the thesis that the meaning of quantifiers is ambiguous. This thesis can be used to explain how some disputes in ontology are due only to a failure of the disagreeing parties to agree on the meaning of the quantifiers used.[3]
According to Eli Hirsch, it is an outgrowth of Urmson's dictum:
If two sentences are equivalent to each other, then while the use of one rather than the other may be useful for some philosophical purposes, it is not the case that one will be nearer to reality than the other...We can say a thing this way, and we can say it that way, sometimes...But it is no use asking which is the logically or metaphysically right way to say it.[4]
The word quantifier in the introduction refers to a variable used in a domain of discourse, a collection of objects under discussion. In daily life, the domain of discourse could be 'apples', or 'persons', or even everything.[5] In a more technical arena, the domain of discourse could be 'integers', say. The quantifier variable x, say, in the given domain of discourse can take on the 'value' of, or designate, any object in the domain. The presence of a particular object, say a 'unicorn', is expressed in the manner of symbolic logic as:

∃x Unicorn(x)

Here the 'turned E' or ∃ is read as "there exists..." and is called the symbol for existential quantification. Relations between objects also can be expressed using quantifiers. For example, in the domain of integers (denoting the quantifier by n, a customary choice for an integer) we can indirectly identify '5' by its relation with the number '25':

∃n (n × n = 25 ∧ n > 0)

If we want to point out specifically that the domain of integers is meant, we could write:

∃n ∈ ℤ (n × n = 25 ∧ n > 0)

Here, ∈ is read "is a member of..." and is called the symbol for set membership; and ℤ denotes the set of integers.
There are a variety of expressions that serve the same purpose in various ontologies, and they are accordingly all quantifier expressions.[1] Quantifier variance is then one argument concerning exactly what expressions can be construed as quantifiers, and just which arguments of a quantifier, that is, which substitutions for "such-and-such", are permissible.[6]
Hirsch says the notion of quantifier variance is a concept concerning how languages work, and is not connected to the ontological question of what 'really' exists.[7] That view is not universal.[8]
The thesis underlying quantifier variance was stated by Putnam:
The logical primitives themselves, and in particular the notions of object and existence, have a multitude of different uses rather than one absolute 'meaning'.[9]
Citing this quotation from Putnam, Wasserman states: "This thesis – the thesis that there are many meanings for the existential quantifier that are equally neutral and equally adequate for describing all the facts – is often referred to as 'the doctrine of quantifier variance'".[8]
Hirsch's quantifier variance has been connected to Carnap's idea of a linguistic framework as a 'neo'-Carnapian view, namely, "the view that there are a number of equally good meanings of the logical quantifiers; choosing one of these frameworks is to be understood analogously to choosing a Carnapian framework."[10] Of course, not all philosophers (notably Quine and the 'neo'-Quineans) subscribe to the notion of multiple linguistic frameworks.[10] See meta-ontology.
Hirsch himself suggests some care in connecting his version of quantifier variance with Carnap: "Let's not call any philosophers quantifier variantists unless they are clearly committed to the idea that (most of) the things that exist are completely independent of language." In this connection Hirsch says "I have a problem, however, in calling Carnap a quantifier variantist, insofar as he is often viewed as a verificationist anti-realist."[1] Although Thomasson does not think Carnap is properly considered to be an antirealist, she still disassociates Carnap from Hirsch's version of quantifier variance: "I'll argue, however, that Carnap in fact is not committed to quantifier variance in anything like Hirsch's sense, and that he [Carnap] does not rely on it in his ways of deflating metaphysical debates."[11]
|
https://en.wikipedia.org/wiki/Quantifier_variance
|
In mathematics and logic, the term "uniqueness" refers to the property of being the one and only object satisfying a certain condition.[1] This sort of quantification is known as uniqueness quantification or unique existential quantification, and is often denoted with the symbols "∃!"[2] or "∃=1". It is defined to mean there exists an object with the given property, and all objects with this property are equal.
For example, the formal statement

∃! n ∈ ℕ (n − 2 = 4)

may be read as "there is exactly one natural number n such that n − 2 = 4".
The most common technique to prove the unique existence of an object is to first prove the existence of an entity with the desired condition, and then to prove that any two such entities (say, a and b) must be equal to each other (i.e. a = b).
For example, to show that the equation x + 2 = 5 has exactly one solution, one would first start by establishing that at least one solution exists, namely 3; the proof of this part is simply the verification that the equation below holds:

3 + 2 = 5.

To establish the uniqueness of the solution, one would proceed by assuming that there are two solutions, namely a and b, satisfying x + 2 = 5. That is,

a + 2 = 5 and b + 2 = 5.

Then since equality is a transitive relation,

a + 2 = b + 2.

Subtracting 2 from both sides then yields

a = b,

which completes the proof that 3 is the unique solution of x + 2 = 5.
In general, both existence (there exists at least one object) and uniqueness (there exists at most one object) must be proven in order to conclude that there exists exactly one object satisfying the said condition.
An alternative way to prove uniqueness is to prove that there exists an object a satisfying the condition, and then to prove that every object satisfying the condition must be equal to a.
Uniqueness quantification can be expressed in terms of the existential and universal quantifiers of predicate logic, by defining the formula ∃!x P(x) to mean[3]

∃x (P(x) ∧ ¬∃y (P(y) ∧ y ≠ x)),

which is logically equivalent to

∃x (P(x) ∧ ∀y (P(y) → y = x)).

An equivalent definition that separates the notions of existence and uniqueness into two clauses, at the expense of brevity, is

∃x P(x) ∧ ∀y ∀z ((P(y) ∧ P(z)) → y = z).

Another equivalent definition, which has the advantage of brevity, is

∃x ∀y (P(y) ↔ y = x).
The uniqueness quantification can be generalized into counting quantification (or numerical quantification[4]). This includes both quantification of the form "exactly k objects exist such that ..." as well as "infinitely many objects exist such that ..." and "only finitely many objects exist such that ...". The first of these forms is expressible using ordinary quantifiers, but the latter two cannot be expressed in ordinary first-order logic.[5]
Uniqueness depends on a notion of equality. Loosening this to a coarser equivalence relation yields quantification of uniqueness up to that equivalence (under this framework, regular uniqueness is "uniqueness up to equality"). This is called essential uniqueness. For example, many concepts in category theory are defined to be unique up to isomorphism.
The exclamation mark ! can also be used as a separate quantification symbol, so (∃!x. P(x)) ↔ ((∃x. P(x)) ∧ (!x. P(x))), where (!x. P(x)) := (∀a ∀b. P(a) ∧ P(b) → a = b). E.g. it can be safely used in the replacement axiom instead of ∃!.
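Over a finite domain, the two-clause reading (at least one witness, and any two witnesses equal) can be checked mechanically. A minimal sketch; the function name and the domains below are illustrative:

```python
def exists_unique(domain, p):
    """∃!x P(x) over a finite domain: there is exactly one witness,
    i.e. at least one element satisfies p and any two such elements
    coincide."""
    witnesses = [x for x in domain if p(x)]
    return len(witnesses) == 1

# The article's example: exactly one natural number n with n - 2 == 4.
print(exists_unique(range(100), lambda n: n - 2 == 4))   # True (n = 6)
print(exists_unique(range(100), lambda n: n * n == 25))  # True (n = 5)
print(exists_unique(range(100), lambda n: n % 2 == 0))   # False: many evens
```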
|
https://en.wikipedia.org/wiki/Uniqueness_quantification
|
In mathematics, the term essentially unique is used to describe a weaker form of uniqueness, where an object satisfying a property is "unique" only in the sense that all objects satisfying the property are equivalent to each other. The notion of essential uniqueness presupposes some form of "sameness", which is often formalized using an equivalence relation.
A related notion is a universal property, where an object is not only essentially unique, but unique up to a unique isomorphism[1] (meaning that it has trivial automorphism group). In general there can be more than one isomorphism between examples of an essentially unique object.
At the most basic level, there is an essentially unique set of any given cardinality, whether one labels the elements {1, 2, 3} or {a, b, c}.
In this case, the non-uniqueness of the isomorphism (e.g., match 1 to a or 1 to c) is reflected in the symmetric group.
On the other hand, there is an essentially unique totally ordered set of any given finite cardinality that is unique up to unique isomorphism: if one writes {1 < 2 < 3} and {a < b < c}, then the only order-preserving isomorphism is the one which maps 1 to a, 2 to b, and 3 to c.
The fundamental theorem of arithmetic establishes that the factorization of any positive integer into prime numbers is essentially unique, i.e., unique up to the ordering of the prime factors.[2][3]
In the context of the classification of groups, there is an essentially unique group containing exactly 2 elements.[3] Similarly, there is also an essentially unique group containing exactly 3 elements: the cyclic group of order three. In fact, regardless of how one chooses to write the three elements and denote the group operation, all such groups can be shown to be isomorphic to each other, and hence are "the same".
On the other hand, there does not exist an essentially unique group with exactly 4 elements, as there are in this case two non-isomorphic groups in total: the cyclic group of order 4 and the Klein four-group.[4]
There is an essentially unique measure that is translation-invariant, strictly positive and locally finite on the real line. In fact, any such measure must be a constant multiple of Lebesgue measure; specifying that the measure of the unit interval should be 1 then determines the solution uniquely.
There is an essentially unique two-dimensional, compact, simply connected manifold: the 2-sphere. In this case, it is unique up to homeomorphism.
In the area of topology known as knot theory, there is an analogue of the fundamental theorem of arithmetic: the decomposition of a knot into a sum of prime knots is essentially unique.[5]
A maximal compact subgroup of a semisimple Lie group may not be unique, but is unique up to conjugation.
An object that is the limit or colimit over a given diagram is essentially unique, as there is a unique isomorphism to any other limiting/colimiting object.[6]
Given the task of using 24-bit words to store 12 bits of information in such a way that 4-bit errors can be detected and 3-bit errors can be corrected, the solution is essentially unique: the extended binary Golay code.[7]
|
https://en.wikipedia.org/wiki/Essentially_unique
|
In digital circuits and machine learning, a one-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0).[1] A similar implementation in which all bits are '1' except one '0' is sometimes called one-cold.[2] In statistics, dummy variables represent a similar technique for representing categorical data.
One-hot encoding is often used for indicating the state of a state machine. When using binary, a decoder is needed to determine the state. A one-hot state machine, however, does not need a decoder, as the state machine is in the nth state if, and only if, the nth bit is high.
A ring counter with 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15 flip-flops chained in series with the Q output of each flip-flop connected to the D input of the next and the D input of the first flip-flop connected to the Q output of the 15th flip-flop. The first flip-flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip-flop, which represents the last state. Upon reset of the state machine all of the flip-flops are reset to '0' except the first in the chain, which is set to '1'. The next clock edge arriving at the flip-flops advances the one 'hot' bit to the second flip-flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state.
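The ring counter described above can be simulated in a few lines. A minimal software sketch (not a hardware description) of the one-hot shifting behaviour:

```python
def ring_counter_states(n_flops=15, steps=None):
    """Simulate a one-hot ring counter: on reset only the first flip-flop
    holds 1; each clock edge shifts the hot bit to the next flip-flop,
    wrapping from the last back to the first."""
    state = [1] + [0] * (n_flops - 1)       # reset state: first flop set
    history = [tuple(state)]
    for _ in range(steps if steps is not None else n_flops):
        state = [state[-1]] + state[:-1]    # each D input is the previous Q
        history.append(tuple(state))
    return history

states = ring_counter_states()
print(states[0] == states[15])              # True: back to reset after 15 edges
print(all(sum(s) == 1 for s in states))     # True: every state is one-hot
```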
An address decoder converts from binary to one-hot representation.
A priority encoder converts from one-hot representation to binary.
In natural language processing, a one-hot vector is a 1 × N matrix (vector) used to distinguish each word in a vocabulary from every other word in the vocabulary.[5] The vector consists of 0s in all cells with the exception of a single 1 in a cell used uniquely to identify the word. One-hot encoding ensures that machine learning does not assume that higher numbers are more important. For example, the value '8' is bigger than the value '1', but that does not make '8' more important than '1'. The same is true for words: the value 'laughter' is not more important than 'laugh'.
In machine learning, one-hot encoding is a frequently used method to deal with categorical data. Because many machine learning models need their input variables to be numeric, categorical variables need to be transformed in the pre-processing step.[6]
Categorical data can be either nominal or ordinal.[7] Ordinal data has a ranked order for its values and can therefore be converted to numerical data through ordinal encoding.[8] An example of ordinal data would be the ratings on a test ranging from A to F, which could be ranked using numbers from 6 to 1. Since there is no quantitative relationship between nominal variables' individual values, using ordinal encoding can potentially create a fictional ordinal relationship in the data.[9] Therefore, one-hot encoding is often applied to nominal variables, in order to improve the performance of the algorithm.
In this method, a new column is created for each unique value in the original categorical column. These dummy variables are then filled with zeros and ones (1 meaning TRUE, 0 meaning FALSE).
Because this process creates multiple new variables, it is prone to creating a 'big p' problem (too many predictors) if there are many unique values in the original column. Another downside of one-hot encoding is that it causes multicollinearity between the individual variables, which potentially reduces the model's accuracy.
Also, if the categorical variable is an output variable, you may want to convert the values back into a categorical form in order to present them in your application.[10]
In practical usage, this transformation is often directly performed by a function that takes categorical data as an input and outputs the corresponding dummy variables. An example would be the dummyVars function of the Caret library in R.[11]
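In Python, pandas.get_dummies performs this transformation; a dependency-free sketch of the underlying idea (the toy column below is chosen purely for illustration):

```python
def one_hot_encode(values):
    """Create one new 0/1 column per unique value in a categorical column."""
    categories = sorted(set(values))
    return {c: [1 if v == c else 0 for v in values] for c in categories}

colors = ["red", "green", "red", "blue"]
encoded = one_hot_encode(colors)
print(encoded["red"])    # [1, 0, 1, 0]
print(encoded["blue"])   # [0, 0, 0, 1]
# Each row has exactly one hot bit across the new columns:
print(all(sum(encoded[c][i] for c in encoded) == 1
          for i in range(len(colors))))  # True
```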
|
https://en.wikipedia.org/wiki/One-hot
|
In mathematics, a singleton (also known as a unit set[1] or one-point set) is a set with exactly one element. For example, the set {0} is a singleton whose single element is 0.
Within the framework of Zermelo–Fraenkel set theory, the axiom of regularity guarantees that no set is an element of itself. This implies that a singleton is necessarily distinct from the element it contains,[1] thus 1 and {1} are not the same thing, and the empty set is distinct from the set containing only the empty set. A set such as {{1, 2, 3}} is a singleton as it contains a single element (which itself is a set, but not a singleton).
A set is a singleton if and only if its cardinality is 1. In von Neumann's set-theoretic construction of the natural numbers, the number 1 is defined as the singleton {0}.
In axiomatic set theory, the existence of singletons is a consequence of the axiom of pairing: for any set A, the axiom applied to A and A asserts the existence of {A, A}, which is the same as the singleton {A} (since it contains A, and no other set, as an element).
If A is any set and S is any singleton, then there exists precisely one function from A to S, the function sending every element of A to the single element of S. Thus every singleton is a terminal object in the category of sets.
A singleton has the property that every function from it to any arbitrary set is injective. The only non-singleton set with this property is the empty set.
Every singleton set is an ultra prefilter. If X is a set and x ∈ X, then the upward closure of {x} in X, which is the set {S ⊆ X : x ∈ S}, is a principal ultrafilter on X. Moreover, every principal ultrafilter on X is necessarily of this form.[2] The ultrafilter lemma implies that non-principal ultrafilters exist on every infinite set (these are called free ultrafilters).
Every net valued in a singleton subset of X is an ultranet in X.
The Bell number integer sequence counts the number of partitions of a set (OEIS: A000110); if singletons are excluded then the numbers are smaller (OEIS: A000296).
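Both counts can be verified by brute-force enumeration of set partitions; a small sketch (a recursive enumerator, not an efficient Bell-number formula):

```python
def partitions(elements):
    """Yield every partition of the list `elements` as a list of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for part in partitions(rest):
        # put `first` into each existing block in turn ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or into a new block of its own
        yield [[first]] + part

n = 4
parts = list(partitions(list(range(n))))
print(len(parts))  # 15: the Bell number B_4 (OEIS A000110)
no_singletons = [p for p in parts if all(len(b) >= 2 for b in p)]
print(len(no_singletons))  # 4: partitions with no singleton block (A000296)
```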
Structures built on singletons often serve as terminal objects or zero objects of various categories:
Let S be a class defined by an indicator function b : X → {0, 1}. Then S is called a singleton if and only if there is some y ∈ X such that for all x ∈ X, b(x) = (x = y).
The following definition was introduced in Principia Mathematica by Whitehead and Russell:[3]

ι‘x = ŷ(y = x)  Df.

The symbol ι‘x denotes the singleton {x}, and ŷ(y = x) denotes the class of objects identical with x, i.e. {y : y = x}.
This occurs as a definition in the introduction, which, in places, simplifies the argument in the main text, where it occurs as proposition 51.01 (p. 357 ibid.).
The proposition is subsequently used to define the cardinal number 1 as

1 = α̂((∃x) α = ι‘x)  Df.

That is, 1 is the class of singletons. This is definition 52.01 (p. 363 ibid.).
|
https://en.wikipedia.org/wiki/Singleton_(mathematics)
|
In mathematics, a uniqueness theorem, also called a unicity theorem, is a theorem asserting the uniqueness of an object satisfying certain conditions, or the equivalence of all objects satisfying the said conditions.[1] Examples of uniqueness theorems include:
The word unique is sometimes replaced by essentially unique, whenever one wants to stress that the uniqueness refers only to the underlying structure, whereas the form may vary in all ways that do not affect the mathematical content.[1]
A uniqueness theorem (or its proof) is, at least within the mathematics of differential equations, often combined with an existence theorem (or its proof) into a combined existence and uniqueness theorem (e.g., existence and uniqueness of the solution to first-order differential equations with a boundary condition).[3]
|
https://en.wikipedia.org/wiki/Uniqueness_theorem
|
Quantification may refer to:
|
https://en.wikipedia.org/wiki/Quantification_(disambiguation)
|
[Diagrams of function types: bijective; injective-only; injective; surjective-only; general]
In mathematics, injections, surjections, and bijections are classes of functions distinguished by the manner in which arguments (input expressions from the domain) and images (output expressions from the codomain) are related or mapped to each other.
A function maps elements from its domain to elements in its codomain. Given a function f : X → Y:
An injective function need not be surjective (not all elements of the codomain may be associated with arguments), and a surjective function need not be injective (some images may be associated with more than one argument). The four possible combinations of injective and surjective features are illustrated in the adjacent diagrams.
A function is injective (one-to-one) if each possible element of the codomain is mapped to by at most one argument. Equivalently, a function is injective if it maps distinct arguments to distinct images. An injective function is an injection.[1] The formal definition is the following.
The following are some facts related to injections:
A function is surjective or onto if each element of the codomain is mapped to by at least one element of the domain. In other words, each element of the codomain has a non-empty preimage. Equivalently, a function is surjective if its image is equal to its codomain. A surjective function is a surjection.[1] The formal definition is the following.
The following are some facts related to surjections:
A function is bijective if it is both injective and surjective. A bijective function is also called a bijection or a one-to-one correspondence (not to be confused with one-to-one function, which refers to an injection). A function is bijective if and only if every possible image is mapped to by exactly one argument.[1] This equivalent condition is formally expressed as follows:
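The three notions can be tested together on finite sets. The following Python sketch (the classify helper is illustrative) returns the (injective, surjective, bijective) flags of a function given an explicit domain and codomain:

```python
def classify(f, domain, codomain):
    """Return (injective, surjective, bijective) for f on the given finite sets."""
    images = [f(x) for x in domain]
    injective = len(set(images)) == len(images)   # distinct arguments, distinct images
    surjective = set(images) >= set(codomain)     # every codomain element is hit
    return injective, surjective, injective and surjective

# x -> x % 3 from {0..5} onto {0,1,2}: surjective but not injective.
print(classify(lambda x: x % 3, range(6), range(3)))     # (False, True, False)
# x -> x + 1 from {0,1,2} to {1,2,3}: a bijection.
print(classify(lambda x: x + 1, range(3), range(1, 4)))  # (True, True, True)
```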
The following are some facts related to bijections:
Suppose that one wants to define what it means for two sets to "have the same number of elements". One way to do this is to say that two sets "have the same number of elements" if and only if all the elements of one set can be paired with the elements of the other, in such a way that each element is paired with exactly one element. Accordingly, one can define two sets to "have the same number of elements" if there is a bijection between them; in that case, the two sets are said to have the same cardinality.
Likewise, one can say that set X "has fewer than or the same number of elements" as set Y if there is an injection from X to Y; one can also say that set X "has fewer elements" than set Y if there is an injection from X to Y but no bijection between X and Y.
It is important to specify the domain and codomain of each function, since by changing these, functions which appear to be the same may have different properties.
In the category of sets, injections, surjections, and bijections correspond precisely to monomorphisms, epimorphisms, and isomorphisms, respectively.[5]
The Oxford English Dictionary records the use of the word injection as a noun by S. Mac Lane in Bulletin of the American Mathematical Society (1950), and injective as an adjective by Eilenberg and Steenrod in Foundations of Algebraic Topology (1952).[6]
However, it was not until the French Bourbaki group coined the injective–surjective–bijective terminology (both as nouns and adjectives) that these terms achieved widespread adoption.[7]
|
https://en.wikipedia.org/wiki/Bijection,_injection_and_surjection
|
In metric geometry, an injective metric space, or equivalently a hyperconvex metric space, is a metric space with certain properties generalizing those of the real line and of L∞ distances in higher-dimensional vector spaces. These properties can be defined in two seemingly different ways: hyperconvexity involves the intersection properties of closed balls in the space, while injectivity involves the isometric embeddings of the space into larger spaces. However, it is a theorem of Aronszajn & Panitchpakdi (1956) that these two types of definitions are equivalent.[1]
A metric space X is said to be hyperconvex if it is convex and its closed balls have the binary Helly property. That is:
Equivalently, a metric space X is hyperconvex if, for any set of points p_i in X and radii r_i > 0 satisfying r_i + r_j ≥ d(p_i, p_j) for each i and j, there is a point q in X that is within distance r_i of each p_i (that is, d(p_i, q) ≤ r_i for all i).
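On the real line, closed balls are intervals, so this condition can be checked computationally: the pairwise condition r_i + r_j ≥ d(p_i, p_j) forces the intervals [p_i − r_i, p_i + r_i] to have a common point (Helly's theorem in one dimension), which is why ℝ is hyperconvex. A small Python sketch (the helper name is illustrative):

```python
import itertools

def common_ball_point(points, radii):
    """On the real line, closed balls are intervals [p - r, p + r].
    If r_i + r_j >= |p_i - p_j| for every pair, the intervals have a
    common point (1-D Helly property); return one, else None."""
    for (p, r), (q, s) in itertools.combinations(zip(points, radii), 2):
        if r + s < abs(p - q):
            return None  # pairwise condition violated
    lo = max(p - r for p, r in zip(points, radii))
    hi = min(p + r for p, r in zip(points, radii))
    return lo if lo <= hi else None  # lo <= hi follows from the pairwise check

print(common_ball_point([0.0, 4.0], [2.0, 2.0]))   # 2.0 — the intervals touch at one point
print(common_ball_point([0.0, 10.0], [2.0, 2.0]))  # None — pairwise condition fails
```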
A retraction of a metric space X is a function f mapping X to a subspace of itself, such that
A retract of a space X is a subspace of X that is an image of a retraction.
A metric space X is said to be injective if, whenever X is isometric to a subspace Z of a space Y, that subspace Z is a retract of Y.
Examples of hyperconvex metric spaces include
Due to the equivalence between hyperconvexity and injectivity, these spaces are all also injective.
In an injective space, the radius of the minimum ball that contains any set S is equal to half the diameter of S. This follows since the balls of radius half the diameter, centered at the points of S, intersect pairwise and therefore, by hyperconvexity, have a common intersection; a ball of radius half the diameter centered at a point of this common intersection contains all of S. Thus, injective spaces satisfy a particularly strong form of Jung's theorem.
Every injective space is a complete space,[2] and every metric map (or, equivalently, nonexpansive mapping, or short map) on a bounded injective space has a fixed point.[3] A metric space is injective if and only if it is an injective object in the category of metric spaces and metric maps.[4]
|
https://en.wikipedia.org/wiki/Injective_metric_space
|
In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves or reverses the given order.[1][2][3] This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.
In calculus, a function f defined on a subset of the real numbers with real values is called monotonic if it is either entirely non-decreasing, or entirely non-increasing.[2] That is, as per Fig. 1, a function that increases monotonically does not have to increase everywhere; it simply must not decrease.
A function is termed monotonically increasing (also increasing or non-decreasing)[3] if for all x and y such that x ≤ y one has f(x) ≤ f(y), so f preserves the order (see Figure 1). Likewise, a function is called monotonically decreasing (also decreasing or non-increasing)[3] if, whenever x ≤ y, then f(x) ≥ f(y), so it reverses the order (see Figure 2).
If the order ≤ in the definition of monotonicity is replaced by the strict order <, one obtains a stronger requirement. A function with this property is called strictly increasing (also increasing).[3][4] Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing (also decreasing).[3][4] A function with either property is called strictly monotone. Functions that are strictly monotone are one-to-one (because for x not equal to y, either x < y or x > y, and so, by monotonicity, either f(x) < f(y) or f(x) > f(y), thus f(x) ≠ f(y)).
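On a finite sample of an ordered domain, strict monotonicity — and the injectivity it implies — can be checked directly. A small Python sketch (names are illustrative):

```python
def strictly_increasing(values):
    # Strict monotonicity on a sampled domain: every consecutive pair increases.
    return all(a < b for a, b in zip(values, values[1:]))

xs = list(range(-20, 21))
cubes = [x ** 3 for x in xs]    # x -> x^3 is strictly increasing on the integers
squares = [x ** 2 for x in xs]  # x -> x^2 is not monotonic on a sign-changing domain

print(strictly_increasing(cubes))     # True
print(len(set(cubes)) == len(cubes))  # True: strictly monotone => one-to-one
print(strictly_increasing(squares))   # False
```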
To avoid ambiguity, the terms weakly monotone, weakly increasing and weakly decreasing are often used to refer to non-strict monotonicity.
The terms "non-decreasing" and "non-increasing" should not be confused with the (much weaker) negative qualifications "not decreasing" and "not increasing". For example, the non-monotonic function shown in figure 3 first falls, then rises, then falls again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.
A function f is said to be absolutely monotonic over an interval (a, b) if the derivatives of all orders of f are all nonnegative or all nonpositive at all points on the interval.
All strictly monotonic functions are invertible because they are guaranteed to have a one-to-one mapping from their range to their domain.
However, functions that are only weakly monotone are not invertible because they are constant on some interval (and therefore are not one-to-one).
A function may be strictly monotonic over a limited range of values and thus have an inverse on that range even though it is not strictly monotonic everywhere. For example, if y = g(x) is strictly increasing on the range [a, b], then it has an inverse x = h(y) on the range [g(a), g(b)].
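Such a local inverse can be approximated numerically, for instance by bisection, since strict monotonicity guarantees a unique solution of g(x) = y in [a, b]. A minimal Python sketch (the function name inverse_on is ours):

```python
def inverse_on(g, a, b, y, tol=1e-12):
    """Invert a strictly increasing g on [a, b] by bisection:
    find the unique x with g(x) = y for y in [g(a), g(b)]."""
    assert g(a) <= y <= g(b), "y must lie in [g(a), g(b)]"
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# g(x) = x**2 is strictly increasing on [0, 3], so it has an inverse there.
print(round(inverse_on(lambda x: x * x, 0.0, 3.0, 2.0), 6))  # ~1.414214 (sqrt 2)
```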
The term monotonic is sometimes used in place of strictly monotonic, so a source may state that all monotonic functions are invertible when they really mean that all strictly monotonic functions are invertible.
The term monotonic transformation (or monotone transformation) may also cause confusion because it refers to a transformation by a strictly increasing function. This is the case in economics with respect to the ordinal properties of a utility function being preserved across a monotonic transform (see also monotone preferences).[5] In this context, the term "monotonic transformation" refers to a positive monotonic transformation and is intended to distinguish it from a "negative monotonic transformation," which reverses the order of the numbers.[6]
The following properties are true for a monotonic function f : ℝ → ℝ:
These properties are the reason why monotonic functions are useful in technical work inanalysis. Other important properties of these functions include:
An important application of monotonic functions is in probability theory. If X is a random variable, its cumulative distribution function F_X(x) = Prob(X ≤ x) is a monotonically increasing function.
A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreasing.
When f is a strictly monotonic function, then f is injective on its domain, and if T is the range of f, then there is an inverse function on T for f. In contrast, each constant function is monotonic, but not injective,[7] and hence cannot have an inverse.
The graphic shows six monotonic functions. Their simplest forms are shown in the plot area and the expressions used to create them are shown on the y-axis.
A map f : X → Y is said to be monotone if each of its fibers is connected; that is, for each element y ∈ Y, the (possibly empty) set f⁻¹(y) is a connected subspace of X.
In functional analysis on a topological vector space X, a (possibly non-linear) operator T : X → X* is said to be a monotone operator if
(Tu − Tv, u − v) ≥ 0 for all u, v ∈ X. Kachurovskii's theorem shows that convex functions on Banach spaces have monotonic operators as their derivatives.
A subset G of X × X* is said to be a monotone set if for every pair [u₁, w₁] and [u₂, w₂] in G,
(w₁ − w₂, u₁ − u₂) ≥ 0. G is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion. The graph G(T) of a monotone operator T is a monotone set, and a monotone operator is said to be maximal monotone if its graph is a maximal monotone set.
Order theory deals with arbitrary partially ordered sets and preordered sets as a generalization of real numbers. The above definition of monotonicity is relevant in these cases as well. However, the terms "increasing" and "decreasing" are avoided, since their conventional pictorial representation does not apply to orders that are not total. Furthermore, the strict relations < and > are of little use in many non-total orders and hence no additional terminology is introduced for them.
Letting ≤ denote the partial order relation of any partially ordered set, a monotone function, also called isotone, or order-preserving, satisfies the property
x ≤ y ⟹ f(x) ≤ f(y)
for all x and y in its domain. The composite of two monotone mappings is also monotone.
The dual notion is often called antitone, anti-monotone, or order-reversing. Hence, an antitone function f satisfies the property
x ≤ y ⟹ f(y) ≤ f(x)
for all x and y in its domain.
A constant function is both monotone and antitone; conversely, if f is both monotone and antitone, and if the domain of f is a lattice, then f must be constant.
Monotone functions are central in order theory. They appear in most articles on the subject and examples from special applications are found in these places. Some notable special monotone functions are order embeddings (functions for which x ≤ y if and only if f(x) ≤ f(y)) and order isomorphisms (surjective order embeddings).
In the context of search algorithms, monotonicity (also called consistency) is a condition applied to heuristic functions. A heuristic h(n) is monotonic if, for every node n and every successor n′ of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n′ plus the estimated cost of reaching the goal from n′:
h(n) ≤ c(n, a, n′) + h(n′).
This is a form of the triangle inequality, with n, n′, and the goal G_n closest to n. Because every monotonic heuristic is also admissible, monotonicity is a stricter requirement than admissibility. Some heuristic algorithms such as A* can be proven optimal provided that the heuristic they use is monotonic.[8]
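On a finite graph, this condition can be verified edge by edge. The following Python sketch (node names, heuristic values, and costs are made up for illustration) checks it:

```python
def is_consistent(h, edges):
    """edges: iterable of (n, n_prime, step_cost).
    A heuristic is monotonic (consistent) if h(n) <= cost + h(n') on every edge."""
    return all(h[n] <= cost + h[np] for n, np, cost in edges)

# Tiny graph with goal G; the h values below are illustrative.
h = {"A": 4, "B": 2, "G": 0}
edges = [("A", "B", 3), ("B", "G", 2), ("A", "G", 5)]
print(is_consistent(h, edges))  # True

h_bad = {"A": 6, "B": 2, "G": 0}  # h(A) > 3 + h(B): violates consistency
print(is_consistent(h_bad, edges))  # False
```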
In Boolean algebra, a monotonic function is one such that for all a_i and b_i in {0, 1}, if a₁ ≤ b₁, a₂ ≤ b₂, ..., a_n ≤ b_n (i.e. the Cartesian product {0, 1}ⁿ is ordered coordinatewise), then f(a₁, ..., a_n) ≤ f(b₁, ..., b_n). In other words, a Boolean function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false. Graphically, this means that an n-ary Boolean function is monotonic when its representation as an n-cube labelled with truth values has no upward edge from true to false. (This labelled Hasse diagram is the dual of the function's labelled Venn diagram, which is the more common representation for n ≤ 3.)
The monotonic Boolean functions are precisely those that can be defined by an expression combining the inputs (which may appear more than once) using only the operators and and or (in particular not is forbidden). For instance, "at least two of a, b, c hold" is a monotonic function of a, b, c, since it can be written for instance as ((a and b) or (a and c) or (b and c)).
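Monotonicity of a Boolean function can be verified by brute force over all 2ⁿ inputs. A Python sketch (the helper name is ours) confirms that the majority function above is monotone while not is not:

```python
from itertools import product

def is_monotone(f, n):
    """Boolean f is monotone iff flipping any input 0 -> 1 never flips output 1 -> 0."""
    for bits in product((0, 1), repeat=n):
        for i in range(n):
            if bits[i] == 0:
                flipped = bits[:i] + (1,) + bits[i + 1:]
                if f(*bits) > f(*flipped):
                    return False
    return True

maj = lambda a, b, c: (a and b) or (a and c) or (b and c)  # "at least two hold"
print(is_monotone(maj, 3))              # True
print(is_monotone(lambda a: 1 - a, 1))  # False: NOT is not monotone
```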
The number of such functions on n variables is known as the Dedekind number of n.
SAT solving, generally an NP-hard task, can be achieved efficiently when all involved functions and predicates are monotonic and Boolean.[9]
|
https://en.wikipedia.org/wiki/Monotonic_function
|
In mathematics, in the branch of complex analysis, a holomorphic function on an open subset of the complex plane is called univalent if it is injective.[1][2]
The function f : z ↦ 2z + z² is univalent in the open unit disc, as f(z) = f(w) implies that f(z) − f(w) = (z − w)(z + w + 2) = 0. As the second factor is non-zero in the open unit disc, z = w, so f is injective.
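This injectivity can also be probed numerically by sampling points strictly inside the unit disc and comparing images. A Python sketch (the sample grid and tolerance are illustrative):

```python
import cmath
import itertools

f = lambda z: 2 * z + z * z  # univalent on the open unit disc

# Sample points strictly inside the disc (moduli 0.2 .. 0.8)
# and confirm no two distinct samples share an image.
samples = [0.8 * r * cmath.exp(2j * cmath.pi * k / 7)
           for r in (0.25, 0.5, 0.75, 1.0) for k in range(7)]
images = [f(z) for z in samples]
collision = any(abs(w1 - w2) < 1e-12
                for w1, w2 in itertools.combinations(images, 2))
print(collision)  # False: distinct samples map to distinct values
```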
One can prove that if G and Ω are two open connected sets in the complex plane, and
f : G → Ω
is a univalent function such that f(G) = Ω (that is, f is surjective), then the derivative of f is never zero, f is invertible, and its inverse f⁻¹ is also holomorphic. Moreover, the chain rule gives
(f⁻¹)′(f(z)) = 1 / f′(z)
for all z in G.
For real analytic functions, unlike for complex analytic (that is, holomorphic) functions, these statements fail to hold. For example, consider the function f : (−1, 1) → (−1, 1)
given by f(x) = x³. This function is clearly injective, but its derivative is 0 at x = 0, and its inverse is not analytic, or even differentiable, on the whole interval (−1, 1). Consequently, if we enlarge the domain to an open subset G of the complex plane, it must fail to be injective; and this is the case, since (for example) f(εω) = f(ε) (where ω is a primitive cube root of unity and ε is a positive real number smaller than the radius of G as a neighbourhood of 0).
This article incorporates material from univalent analytic function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Univalent_function
|
In abstract algebra, a cover is one instance of some mathematical structure mapping onto another instance, such as a group (trivially) covering a subgroup. This should not be confused with the concept of a cover in topology.
When some object X is said to cover another object Y, the cover is given by some surjective and structure-preserving map f : X → Y. The precise meaning of "structure-preserving" depends on the kind of mathematical structure of which X and Y are instances. In order to be interesting, the cover is usually endowed with additional properties, which are highly dependent on the context.
A classic result in semigroup theory due to D. B. McAlister states that every inverse semigroup has an E-unitary cover; besides being surjective, the homomorphism in this case is also idempotent separating, meaning that in its kernel an idempotent and a non-idempotent never belong to the same equivalence class. Something slightly stronger has actually been shown for inverse semigroups: every inverse semigroup admits an F-inverse cover.[1] McAlister's covering theorem generalizes to orthodox semigroups: every orthodox semigroup has a unitary cover.[2]
Examples from other areas of algebra include the Frattini cover of a profinite group[3] and the universal cover of a Lie group.
If F is some family of modules over some ring R, then an F-cover of a module M is a homomorphism X → M with the following properties:
In general an F-cover of M need not exist, but if it does exist then it is unique up to (non-unique) isomorphism.
Examples include:
|
https://en.wikipedia.org/wiki/Cover_(algebra)
|
In topology, a covering or covering projection is a map between topological spaces that, intuitively, locally acts like a projection of multiple copies of a space onto itself. In particular, coverings are special types of local homeomorphisms. If p : X̃ → X is a covering, (X̃, p) is said to be a covering space or cover of X, and X is said to be the base of the covering, or simply the base. By abuse of terminology, X̃ and p may sometimes be called covering spaces as well. Since coverings are local homeomorphisms, a covering space is a special kind of étalé space.
Covering spaces first arose in the context of complex analysis (specifically, the technique of analytic continuation), where they were introduced by Riemann as domains on which naturally multivalued complex functions become single-valued. These spaces are now called Riemann surfaces.[1]: 10
Covering spaces are an important tool in several areas of mathematics. In modern geometry, covering spaces (or branched coverings, which have slightly weaker conditions) are used in the construction of manifolds, orbifolds, and the morphisms between them. In algebraic topology, covering spaces are closely related to the fundamental group: for one, since all coverings have the homotopy lifting property, covering spaces are an important tool in the calculation of homotopy groups. A standard example in this vein is the calculation of the fundamental group of the circle by means of the covering of S¹ by ℝ (see below).[2]: 29 Under certain conditions, covering spaces also exhibit a Galois correspondence with the subgroups of the fundamental group.
Let X be a topological space. A covering of X is a continuous map π : X̃ → X
such that for every x ∈ X there exists an open neighborhood U_x of x and a discrete space D_x such that π⁻¹(U_x) = ⨆_{d ∈ D_x} V_d and π|_{V_d} : V_d → U_x is a homeomorphism for every d ∈ D_x.
The open sets V_d are called sheets, which are uniquely determined up to homeomorphism if U_x is connected.[2]: 56 For each x ∈ X the discrete set π⁻¹(x) is called the fiber of x. If X is connected (and X̃ is non-empty), it can be shown that π is surjective, and the cardinality of D_x is the same for all x ∈ X; this value is called the degree of the covering. If X̃ is path-connected, then the covering π : X̃ → X is called a path-connected covering. This definition is equivalent to the statement that π is a locally trivial fiber bundle.
Some authors also require that π be surjective in the case that X is not connected.[3]
Since a covering π : E → X maps each of the disjoint open sets of π⁻¹(U) homeomorphically onto U, it is a local homeomorphism: π is a continuous map, and for every e ∈ E there exists an open neighborhood V ⊂ E of e such that π|_V : V → π(V) is a homeomorphism.
It follows that the covering space E and the base space X locally share the same properties.
Let X, Y and E be path-connected, locally path-connected spaces, and p, q and r be continuous maps, such that the diagram
commutes.
Let X and X′ be topological spaces and p : E → X and p′ : E′ → X′ be coverings; then p × p′ : E × E′ → X × X′ with (p × p′)(e, e′) = (p(e), p′(e′)) is a covering.[6]: 339 However, coverings of X × X′ are not all of this form in general.
Let X be a topological space and p : E → X and p′ : E′ → X be coverings. The two coverings are called equivalent if there exists a homeomorphism h : E → E′ such that the diagram
commutes. If such a homeomorphism exists, the covering spaces E and E′ are called isomorphic.
All coverings satisfy the lifting property, i.e.:
Let I be the unit interval and p : E → X be a covering. Let F : Y × I → X be a continuous map and F̃₀ : Y × {0} → E be a lift of F|_{Y × {0}}, i.e. a continuous map such that p ∘ F̃₀ = F|_{Y × {0}}. Then there is a uniquely determined, continuous map F̃ : Y × I → E for which F̃(y, 0) = F̃₀(y) and which is a lift of F, i.e. p ∘ F̃ = F.[2]: 60
If X is a path-connected space, then for Y = {0} it follows that the map F̃ is a lift of a path in X, and for Y = I it is a lift of a homotopy of paths in X.
As a consequence, one can show that the fundamental group π₁(S¹) of the unit circle is an infinite cyclic group, generated by the homotopy class of the loop γ : I → S¹ with γ(t) = (cos(2πt), sin(2πt)).[2]: 29
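The lifting argument can be illustrated numerically: lifting a discretized loop in S¹ through the covering t ↦ exp(2πit) : ℝ → S¹ and reading off the endpoint of the lift recovers the winding number. A Python sketch (the step count and helper name are ours):

```python
import cmath

def lift(path):
    """Lift a discretized loop in S^1 (unit complex numbers) through the covering
    p(t) = exp(2*pi*i*t) : R -> S^1, starting at 0, by accumulating small phase steps."""
    t, lifted = 0.0, [0.0]
    for a, b in zip(path, path[1:]):
        t += cmath.phase(b / a) / (2 * cmath.pi)  # steps are small => phase in (-pi, pi)
        lifted.append(t)
    return lifted

n = 200
loop = [cmath.exp(2j * cmath.pi * (3 * k / n)) for k in range(n + 1)]  # winds 3 times
end = lift(loop)[-1]
print(round(end))  # 3: the endpoint of the lift is the winding number
```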
Let X be a path-connected space and p : E → X be a connected covering. Let x, y ∈ X be any two points connected by a path γ, i.e. γ(0) = x and γ(1) = y. Let γ̃ be the unique lift of γ; then the map p⁻¹(x) → p⁻¹(y), sending the starting point of each lift of γ to its endpoint,
is bijective.[2]: 69
If X is a path-connected space and p : E → X a connected covering, then the induced group homomorphism p_# : π₁(E) → π₁(X)
is injective, and the subgroup p_#(π₁(E)) of π₁(X) consists of the homotopy classes of loops in X whose lifts are loops in E.[2]: 61
Let X and Y be Riemann surfaces, i.e. one-dimensional complex manifolds, and let f : X → Y be a continuous map. f is holomorphic in a point x ∈ X if, for any charts φ_x : U₁ → V₁ of x and φ_{f(x)} : U₂ → V₂ of f(x) with f(U₁) ⊂ U₂, the map φ_{f(x)} ∘ f ∘ φ_x⁻¹ : ℂ → ℂ is holomorphic.
If f is holomorphic at all x ∈ X, we say f is holomorphic.
The map F = φ_{f(x)} ∘ f ∘ φ_x⁻¹ is called the local expression of f in x ∈ X.
If f : X → Y is a non-constant, holomorphic map between compact Riemann surfaces, then f is surjective and an open map,[5]: 11 i.e. for every open set U ⊂ X the image f(U) ⊂ Y is also open.
Let f : X → Y be a non-constant, holomorphic map between compact Riemann surfaces. For every x ∈ X there exist charts for x and f(x), and there exists a uniquely determined k_x ∈ ℕ_{>0}, such that the local expression F of f in x is of the form z ↦ z^{k_x}.[5]: 10 The number k_x is called the ramification index of f in x, and the point x ∈ X is called a ramification point if k_x ≥ 2. If k_x = 1 for an x ∈ X, then x is unramified. The image point y = f(x) ∈ Y of a ramification point is called a branch point.
Let f : X → Y be a non-constant, holomorphic map between compact Riemann surfaces. The degree deg(f) of f is the cardinality of the fiber of an unramified point y = f(x) ∈ Y, i.e. deg(f) := |f⁻¹(y)|.
This number is well-defined, since for every y ∈ Y the fiber f⁻¹(y) is discrete[5]: 20 and for any two unramified points y₁, y₂ ∈ Y one has |f⁻¹(y₁)| = |f⁻¹(y₂)|.
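For the model local form z ↦ z^k, the fiber of any nonzero point consists of exactly k points, matching the degree. A Python sketch (the helper name is illustrative):

```python
import cmath

def fiber(w, k):
    """Preimages of w under z -> z**k: the k distinct k-th roots of w (w != 0)."""
    r, theta = abs(w) ** (1 / k), cmath.phase(w) / k
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * j / k)) for j in range(k)]

w = 0.5 + 0.5j
roots = fiber(w, 3)
print(len(roots))                                  # 3 — the degree of z -> z**3
print(all(abs(z ** 3 - w) < 1e-9 for z in roots))  # True: each root maps back to w
```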
It can be calculated by:
A continuous map f : X → Y is called a branched covering if there exists a closed set with dense complement E ⊂ Y such that f|_{X ∖ f⁻¹(E)} : X ∖ f⁻¹(E) → Y ∖ E is a covering.
Let p : X̃ → X be a simply connected covering. If β : E → X is another simply connected covering, then there exists a uniquely determined homeomorphism α : X̃ → E such that the diagram
commutes.[6]: 482
This means that p is, up to equivalence, uniquely determined and, because of that universal property, is denoted as the universal covering of the space X.
A universal covering does not always exist. The following theorem guarantees its existence for a certain class of base spaces.
Let X be a connected, locally simply connected topological space. Then there exists a universal covering p : X̃ → X.
The set X̃ is defined as X̃ = {γ : γ is a path in X with γ(0) = x₀} / homotopy with fixed ends, where x₀ ∈ X is any chosen base point. The map p : X̃ → X is defined by p([γ]) = γ(1).[2]: 64
The topology on X̃ is constructed as follows: let γ : I → X be a path with γ(0) = x₀, and let U be a simply connected neighborhood of the endpoint x = γ(1). Then, for every y ∈ U, there is a path σ_y inside U from x to y that is unique up to homotopy. Now consider the set Ũ = {γσ_y : y ∈ U} / homotopy with fixed ends. The restriction p|_Ũ : Ũ → U with p([γσ_y]) = γσ_y(1) = y is a bijection, and Ũ can be equipped with the final topology of p|_Ũ.
The fundamental group π₁(X, x₀) = Γ acts freely on X̃ by ([γ], [x̃]) ↦ [γx̃], and the orbit space Γ∖X̃ is homeomorphic to X through the map [Γx̃] ↦ x̃(1).
Let G be a discrete group acting on the topological space X. This means that each element g of G is associated to a homeomorphism H_g of X onto itself, in such a way that H_{gh} is always equal to H_g ∘ H_h for any two elements g and h of G. (Or in other words, a group action of the group G on the space X is just a group homomorphism of the group G into the group Homeo(X) of self-homeomorphisms of X.) It is natural to ask under what conditions the projection from X to the orbit space X/G is a covering map. This is not always true, since the action may have fixed points. An example of this is the cyclic group of order 2 acting on a product X × X by the twist action, where the non-identity element acts by (x, y) ↦ (y, x). Thus the study of the relation between the fundamental groups of X and X/G is not so straightforward.
However, the group G does act on the fundamental groupoid of X, and so the study is best handled by considering groups acting on groupoids and the corresponding orbit groupoids. The theory for this is set down in Chapter 11 of the book Topology and Groupoids referred to below. The main result is that for discontinuous actions of a group G on a Hausdorff space X which admits a universal cover, the fundamental groupoid of the orbit space X/G is isomorphic to the orbit groupoid of the fundamental groupoid of X, i.e. the quotient of that groupoid by the action of the group G. This leads to explicit computations, for example of the fundamental group of the symmetric square of a space.
Let $E$ and $M$ be smooth manifolds with or without boundary. A covering $\pi : E \to M$ is called a smooth covering if it is a smooth map and the sheets are mapped diffeomorphically onto the corresponding open subset of $M$. (This is in contrast to the definition of a covering, which merely requires that the sheets are mapped homeomorphically onto the corresponding open subset.)
Let $p : E \rightarrow X$ be a covering. A deck transformation is a homeomorphism $d : E \rightarrow E$ such that the diagram of continuous maps
commutes. Together with the composition of maps, the set of deck transformations forms a group $\operatorname{Deck}(p)$, which is the same as $\operatorname{Aut}(p)$.
Now suppose $p : C \to X$ is a covering map and $C$ (and therefore also $X$) is connected and locally path connected. The action of $\operatorname{Aut}(p)$ on each fiber is free. If this action is transitive on some fiber, then it is transitive on all fibers, and we call the cover regular (or normal or Galois). Every such regular cover is a principal $G$-bundle, where $G = \operatorname{Aut}(p)$ is considered as a discrete topological group.
Every universal cover $p : D \to X$ is regular, with deck transformation group isomorphic to the fundamental group $\pi_1(X)$.
Let $X$ be a path-connected space and $p : E \rightarrow X$ a connected covering. Since a deck transformation $d : E \rightarrow E$ is bijective, it permutes the elements of a fiber $p^{-1}(x)$ with $x \in X$, and it is uniquely determined by where it sends a single point. In particular, only the identity map fixes a point in the fiber.[2]: 70 Because of this property, the deck transformations define a group action on $E$: the map $\operatorname{Deck}(p) \times E \rightarrow E : (d, e) \mapsto d(e)$ is a group action.
A covering $p : E \rightarrow X$ is called normal if $\operatorname{Deck}(p) \backslash E \cong X$. This means that for every $x \in X$ and any two $e_0, e_1 \in p^{-1}(x)$ there exists a deck transformation $d : E \rightarrow E$ such that $d(e_0) = e_1$.
Let $X$ be a path-connected space and $p : E \rightarrow X$ a connected covering. Let $H = p_{\#}(\pi_1(E))$ be the corresponding subgroup of $\pi_1(X)$; then $p$ is a normal covering iff $H$ is a normal subgroup of $\pi_1(X)$.
If $p : E \rightarrow X$ is a normal covering and $H = p_{\#}(\pi_1(E))$, then $\operatorname{Deck}(p) \cong \pi_1(X)/H$.
If $p : E \rightarrow X$ is a path-connected covering and $H = p_{\#}(\pi_1(E))$, then $\operatorname{Deck}(p) \cong N(H)/H$, where $N(H)$ is the normaliser of $H$.[2]: 71
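The isomorphism $\operatorname{Deck}(p) \cong \pi_1(X)/H$ can be illustrated on the $n$-fold cover of the circle (our own toy example): for $p : S^1 \to S^1$, $z \mapsto z^n$, the subgroup is $H = n\mathbb{Z} \subset \mathbb{Z} \cong \pi_1(S^1)$, and the deck group is the cyclic group $\mathbb{Z}/n$ acting by multiplication with $n$-th roots of unity.

```python
import cmath

# n-fold cover of the circle: p(z) = z^n, with deck transformations
# z |-> zeta^k * z for zeta a primitive n-th root of unity.

def make_cover(n: int):
    p = lambda z: z ** n                         # covering projection
    zeta = cmath.exp(2j * cmath.pi / n)          # primitive n-th root of unity
    decks = [lambda z, k=k: zeta ** k * z for k in range(n)]
    return p, decks

n = 6
p, decks = make_cover(n)
z = cmath.exp(0.7j)

# Every deck transformation commutes with the projection ...
assert all(abs(p(d(z)) - p(z)) < 1e-9 for d in decks)

# ... and the deck group acts transitively on the fiber over p(z),
# so this covering is normal (regular), with |Deck(p)| = n.
fiber = {round(d(z).real, 6) + 1j * round(d(z).imag, 6) for d in decks}
assert len(fiber) == n
```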
Let $E$ be a topological space. A group $\Gamma$ acts discontinuously on $E$ if every $e \in E$ has an open neighborhood $V \subset E$ with $V \neq \emptyset$, such that for every $d_1, d_2 \in \Gamma$ with $d_1 V \cap d_2 V \neq \emptyset$ one has $d_1 = d_2$.
If a group $\Gamma$ acts discontinuously on a topological space $E$, then the quotient map $q : E \rightarrow \Gamma \backslash E$ with $q(e) = \Gamma e$ is a normal covering.[2]: 72 Here $\Gamma \backslash E = \{\Gamma e : e \in E\}$ is the quotient space and $\Gamma e = \{\gamma(e) : \gamma \in \Gamma\}$ is the orbit of the group action.
Let $\Gamma$ be a group which acts discontinuously on a topological space $E$, and let $q : E \rightarrow \Gamma \backslash E$ be the normal covering.
Let $X$ be a connected and locally simply connected space; then for every subgroup $H \subseteq \pi_1(X)$ there exists a path-connected covering $\alpha : X_H \rightarrow X$ with $\alpha_{\#}(\pi_1(X_H)) = H$.[2]: 66
Let $p_1 : E \rightarrow X$ and $p_2 : E' \rightarrow X$ be two path-connected coverings; then they are equivalent iff the subgroups $H = p_{1\#}(\pi_1(E))$ and $H' = p_{2\#}(\pi_1(E'))$ are conjugate to each other.[6]: 482
Let $X$ be a connected and locally simply connected space; then, up to equivalence between coverings, there is a bijection:
$$\begin{matrix}\{{\text{subgroups of }}\pi_1(X)\}&\longleftrightarrow &\{{\text{path-connected coverings }}p:E\rightarrow X\}\\H&\longrightarrow &\alpha :X_H\rightarrow X\\p_{\#}(\pi_1(E))&\longleftarrow &p\\\{{\text{normal subgroups of }}\pi_1(X)\}&\longleftrightarrow &\{{\text{normal coverings }}p:E\rightarrow X\}\end{matrix}$$
For a sequence of subgroups $\{e\} \subset H \subset G \subset \pi_1(X)$ one gets a sequence of coverings $\tilde{X} \longrightarrow X_H \cong H \backslash \tilde{X} \longrightarrow X_G \cong G \backslash \tilde{X} \longrightarrow X \cong \pi_1(X) \backslash \tilde{X}$. For a subgroup $H \subset \pi_1(X)$ with index $[\pi_1(X) : H] = d$, the covering $\alpha : X_H \rightarrow X$ has degree $d$.
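The statement "degree equals index" can be checked concretely (our own example): for $X = S^1$ with $\pi_1(X) \cong \mathbb{Z}$, the subgroup $H = d\mathbb{Z}$ has index $d$, and the corresponding covering is $z \mapsto z^d$, which has exactly $d$ points in every fiber.

```python
import cmath

# Fiber of the degree-d covering z |-> z^d of the unit circle over a point w.

def fiber(w: complex, d: int):
    """All d-th roots of w on the unit circle, i.e. the fiber of z |-> z^d."""
    theta = cmath.phase(w)
    return [cmath.exp(1j * (theta + 2 * cmath.pi * k) / d) for k in range(d)]

d = 4
w = cmath.exp(1.1j)
pre = fiber(w, d)
assert len(pre) == d                                  # degree-d covering
assert all(abs(z ** d - w) < 1e-9 for z in pre)       # each preimage maps to w
```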
Let $X$ be a topological space. The objects of the category $\mathbf{Cov}(X)$ are the coverings $p : E \rightarrow X$ of $X$, and the morphisms between two coverings $p : E \rightarrow X$ and $q : F \rightarrow X$ are continuous maps $f : E \rightarrow F$ such that the diagram
commutes.
Let $G$ be a topological group. The category $G\text{-}\mathbf{Set}$ is the category of $G$-sets. The morphisms are $G$-maps $\phi : X \rightarrow Y$ between $G$-sets; they satisfy the condition $\phi(gx) = g\,\phi(x)$ for every $g \in G$.
Let $X$ be a connected and locally simply connected space, $x \in X$, and $G = \pi_1(X, x)$ the fundamental group of $X$. Since $G$ defines, by lifting of paths and evaluating at the endpoint of the lift, a group action on the fiber of a covering, the functor $F : \mathbf{Cov}(X) \longrightarrow G\text{-}\mathbf{Set} : p \mapsto p^{-1}(x)$ is an equivalence of categories.[2]: 68–70
An important practical application of covering spaces occurs in charts on SO(3), the rotation group. This group occurs widely in engineering, because 3-dimensional rotations are heavily used in navigation, nautical engineering, and aerospace engineering, among many other uses. Topologically, SO(3) is the real projective space $\mathbb{RP}^3$, with fundamental group $\mathbb{Z}/2$ and only (non-trivial) covering space the hypersphere $S^3$, which is the group Spin(3) and is represented by the unit quaternions. Thus quaternions are a preferred method for representing spatial rotations – see quaternions and spatial rotation.
However, it is often desirable to represent rotations by a set of three numbers, known as Euler angles (in numerous variants), both because this is conceptually simpler for someone familiar with planar rotation, and because one can build a combination of three gimbals to produce rotations in three dimensions. Topologically this corresponds to a map from the 3-torus $T^3$ of three angles to the real projective space $\mathbb{RP}^3$ of rotations, and the resulting map has imperfections because it cannot be a covering map. Specifically, the failure of the map to be a local homeomorphism at certain points is referred to as gimbal lock, and is demonstrated in the animation at the right – at some points (when the axes are coplanar) the rank of the map is 2, rather than 3, meaning that only 2 dimensions of rotations can be realized from that point by changing the angles. This causes problems in applications, and is formalized by the notion of a covering space.
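Gimbal lock can be demonstrated numerically. The sketch below assumes the intrinsic Z-Y-X convention (one of the many Euler-angle variants mentioned above): at pitch $= \pi/2$ the composite rotation depends only on the difference of roll and yaw, so moving both angles together changes nothing and one degree of freedom is lost.

```python
import math

def rz(a): c, s = math.cos(a), math.sin(a); return [[c, -s, 0], [s, c, 0], [0, 0, 1]]
def ry(a): c, s = math.cos(a), math.sin(a); return [[c, 0, s], [0, 1, 0], [-s, 0, c]]
def rx(a): c, s = math.cos(a), math.sin(a); return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def euler_zyx(yaw, pitch, roll):
    """Rotation matrix for Z-Y-X Euler angles (assumed convention)."""
    return matmul(rz(yaw), matmul(ry(pitch), rx(roll)))

def close(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(3) for j in range(3))

delta = 0.37
# At pitch = pi/2, shifting yaw and roll by the same delta gives the SAME rotation:
assert close(euler_zyx(0.5, math.pi / 2, 1.2),
             euler_zyx(0.5 + delta, math.pi / 2, 1.2 + delta))
# Away from the singularity, the same shift gives a different rotation:
assert not close(euler_zyx(0.5, 0.3, 1.2),
                 euler_zyx(0.5 + delta, 0.3, 1.2 + delta))
```

At the singular pitch the three-parameter map has rank 2, which is exactly the drop in rank described above.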
|
https://en.wikipedia.org/wiki/Covering_map
|
An enumeration is a complete, ordered listing of all the items in a collection. The term is commonly used in mathematics and computer science to refer to a listing of all of the elements of a set. The precise requirements for an enumeration (for example, whether the set must be finite, or whether the list is allowed to contain repetitions) depend on the discipline of study and the context of a given problem.
Some sets can be enumerated by means of a natural ordering (such as 1, 2, 3, 4, ... for the set of positive integers), but in other cases it may be necessary to impose a (perhaps arbitrary) ordering. In some contexts, such as enumerative combinatorics, the term enumeration is used more in the sense of counting – with emphasis on determination of the number of elements that a set contains, rather than the production of an explicit listing of those elements.
In combinatorics, enumeration means counting, i.e., determining the exact number of elements of finite sets, usually grouped into infinite families, such as the family of sets each consisting of all permutations of some finite set. There are flourishing subareas in many branches of mathematics concerned with enumerating in this sense. For instance, in partition enumeration and graph enumeration the objective is to count partitions or graphs that meet certain conditions.
In set theory, the notion of enumeration has a broader sense, and does not require the set being enumerated to be finite.
When an enumeration is used in an ordered list context, we impose some sort of ordering structure requirement on the index set. While we can make the requirements on the ordering quite lax in order to allow for great generality, the most natural and common prerequisite is that the index set be well-ordered. According to this characterization, an ordered enumeration is defined to be a surjection (an onto relationship) with a well-ordered domain. This definition is natural in the sense that a given well-ordering on the index set provides a unique way to list the next element given a partial enumeration.
Unless otherwise specified, an enumeration is done by means of natural numbers. That is, an enumeration of a set $S$ is a bijective function from the natural numbers $\mathbb{N}$ or an initial segment $\{1, ..., n\}$ of the natural numbers to $S$.
A set is countable if it can be enumerated, that is, if there exists an enumeration of it. Otherwise, it is uncountable. For example, the set of the real numbers is uncountable.
A set is finite if it can be enumerated by means of a proper initial segment $\{1, ..., n\}$ of the natural numbers, in which case its cardinality is $n$. The empty set is finite, as it can be enumerated by means of the empty initial segment of the natural numbers.
The term enumerable set is sometimes used for countable sets. However, it is also often used for computably enumerable sets, which are the countable sets for which an enumeration function can be computed with an algorithm.
To avoid distinguishing between finite and countably infinite sets, it is often useful to use another definition that is equivalent: a set $S$ is countable if and only if there exists an injective function from it into the natural numbers.
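The two equivalent definitions above can be made concrete (function names below are our own): an enumeration of a finite set is a bijection $\{1, ..., n\} \to S$, and countability of $\mathbb{N} \times \mathbb{N}$ is witnessed by an injection into $\mathbb{N}$ such as the Cantor pairing function.

```python
# Enumeration of a finite set as a bijection {1, ..., n} -> S, and an
# injection N x N -> N (the Cantor pairing function) witnessing countability.

def enumerate_finite(S):
    """Return a dict representing a bijection {1, ..., n} -> S."""
    return {i: x for i, x in enumerate(sorted(S), start=1)}

def cantor_pair(i: int, j: int) -> int:
    """Injective (in fact bijective) map N x N -> N."""
    return (i + j) * (i + j + 1) // 2 + j

listing = enumerate_finite({"c", "a", "b"})
assert listing == {1: "a", 2: "b", 3: "c"}

# Injectivity on a sample grid: distinct pairs get distinct codes.
codes = {cantor_pair(i, j) for i in range(50) for j in range(50)}
assert len(codes) == 50 * 50
```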
In set theory, there is a more general notion of an enumeration than the characterization requiring the domain of the listing function to be an initial segment of the natural numbers, where the domain of the enumerating function can assume any ordinal. Under this definition, an enumeration of a set $S$ is any surjection from an ordinal $\alpha$ onto $S$. The more restrictive version of enumeration mentioned before is the special case where $\alpha$ is a finite ordinal or the first limit ordinal $\omega$. This more generalized version extends the aforementioned definition to encompass transfinite listings.
Under this definition, the first uncountable ordinal $\omega_1$ can be enumerated by the identity function on $\omega_1$, so that these two notions do not coincide. More generally, it is a theorem of ZF that any well-ordered set can be enumerated under this characterization, so that it coincides up to relabeling with the generalized listing enumeration. If one also assumes the Axiom of Choice, then all sets can be enumerated, so that it coincides up to relabeling with the most general form of enumerations.
Since set theorists work with infinite sets of arbitrarily large cardinalities, the default definition among this group of mathematicians of an enumeration of a set tends to be any arbitrary $\alpha$-sequence exactly listing all of its elements. Indeed, in Jech's book, which is a common reference for set theorists, an enumeration is defined to be exactly this. Therefore, in order to avoid ambiguity, one may use the term finitely enumerable or denumerable to denote one of the corresponding types of distinguished countable enumerations.
Formally, the most inclusive definition of an enumeration of a set $S$ is any surjection from an arbitrary index set $I$ onto $S$. In this broad context, every set $S$ can be trivially enumerated by the identity function from $S$ onto itself. If one does not assume the axiom of choice or one of its variants, $S$ need not have any well-ordering. Even if one does assume the axiom of choice, $S$ need not have any natural well-ordering.
This general definition therefore lends itself to a counting notion where we are interested in "how many" rather than "in what order." In practice, this broad meaning of enumeration is often used to compare the relative sizes or cardinalities of different sets. If one works in Zermelo–Fraenkel set theory without the axiom of choice, one may want to impose the additional restriction that an enumeration must also be injective (without repetition), since in this theory the existence of a surjection from $I$ onto $S$ need not imply the existence of an injection from $S$ into $I$.
In computability theory one often considers countable enumerations with the added requirement that the mapping from $\mathbb{N}$ (the set of all natural numbers) to the enumerated set must be computable. The set being enumerated is then called recursively enumerable (or computably enumerable in more contemporary language), referring to the use of recursion theory in formalizations of what it means for the map to be computable.
In this sense, a subset of the natural numbers is computably enumerable if it is the range of a computable function. In this context, enumerable may be used to mean computably enumerable. However, these definitions characterize distinct classes, since there are uncountably many subsets of the natural numbers that can be enumerated by an arbitrary function with domain $\omega$, and only countably many computable functions. A specific example of a set with an enumeration but not a computable enumeration is the complement of the halting set.
Furthermore, this characterization illustrates a place where the ordering of the listing is important. There exists a computable enumeration of the halting set, but not one that lists the elements in an increasing ordering. If there were one, then the halting set would be decidable, which is provably false. In general, being recursively enumerable is a weaker condition than being a decidable set.
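A computable enumeration in the sense above can be sketched as follows (our own toy illustration): a computably enumerable set is the range of a computable function, and we can enumerate that range lazily in order of discovery. For a partial function one would dovetail over (input, step-budget) pairs in the same way; note that, as the halting-set example shows, such an enumeration need not list elements in increasing order.

```python
from itertools import count, islice

def f(n: int) -> int:
    return n * n  # any total computable function; its range is a c.e. set

def enumerate_range(f):
    """Yield the elements of range(f) without repetition, in order of discovery."""
    seen = set()
    for n in count():
        v = f(n)
        if v not in seen:
            seen.add(v)
            yield v

squares = list(islice(enumerate_range(f), 5))
assert squares == [0, 1, 4, 9, 16]
```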
The notion of enumeration has also been studied from the point of view of computational complexity theory for various tasks in the context of enumeration algorithms.
|
https://en.wikipedia.org/wiki/Enumeration
|
In mathematics, and particularly topology, a fiber bundle (Commonwealth English: fibre bundle) is a space that is locally a product space, but globally may have a different topological structure. Specifically, the similarity between a space $E$ and a product space $B \times F$ is defined using a continuous surjective map, $\pi : E \to B$, that in small regions of $E$ behaves just like a projection from corresponding regions of $B \times F$ to $B$. The map $\pi$, called the projection or submersion of the bundle, is regarded as part of the structure of the bundle. The space $E$ is known as the total space of the fiber bundle, $B$ as the base space, and $F$ the fiber.
In the trivial case, $E$ is just $B \times F$, and the map $\pi$ is just the projection from the product space to the first factor. This is called a trivial bundle. Examples of non-trivial fiber bundles include the Möbius strip and Klein bottle, as well as nontrivial covering spaces. Fiber bundles, such as the tangent bundle of a manifold and other more general vector bundles, play an important role in differential geometry and differential topology, as do principal bundles.
Mappings between total spaces of fiber bundles that "commute" with the projection maps are known as bundle maps, and the class of fiber bundles forms a category with respect to such mappings. A bundle map from the base space itself (with the identity mapping as projection) to $E$ is called a section of $E$. Fiber bundles can be specialized in a number of ways, the most common of which is requiring that the transition maps between the local trivial patches lie in a certain topological group, known as the structure group, acting on the fiber $F$.
In topology, the terms fiber (German: Faser) and fiber space (gefaserter Raum) appeared for the first time in a paper by Herbert Seifert in 1933,[1][2][3] but his definitions are limited to a very special case. The main difference from the present-day conception of a fiber space, however, was that for Seifert what is now called the base space (topological space) of a fiber (topological) space $E$ was not part of the structure, but derived from it as a quotient space of $E$. The first definition of fiber space was given by Hassler Whitney in 1935[4] under the name sphere space, but in 1940 Whitney changed the name to sphere bundle.[5]
The theory of fibered spaces, of which vector bundles, principal bundles, topological fibrations and fibered manifolds are a special case, is attributed to Herbert Seifert, Heinz Hopf, Jacques Feldbau,[6] Whitney, Norman Steenrod, Charles Ehresmann,[7][8][9] Jean-Pierre Serre,[10] and others.
Fiber bundles became their own object of study in the period 1935–1940. The first general definition appeared in the works of Whitney.[11]
Whitney came to the general definition of a fiber bundle from his study of a more particular notion of a sphere bundle,[12] that is, a fiber bundle whose fiber is a sphere of arbitrary dimension.[13]
A fiber bundle is a structure $(E, B, \pi, F)$, where $E$, $B$, and $F$ are topological spaces and $\pi : E \to B$ is a continuous surjection satisfying a local triviality condition outlined below. The space $B$ is called the base space of the bundle, $E$ the total space, and $F$ the fiber. The map $\pi$ is called the projection map (or bundle projection). We shall assume in what follows that the base space $B$ is connected.
We require that for every $x \in B$, there is an open neighborhood $U \subseteq B$ of $x$ (which will be called a trivializing neighborhood) such that there is a homeomorphism $\varphi : \pi^{-1}(U) \to U \times F$ (where $\pi^{-1}(U)$ is given the subspace topology, and $U \times F$ is the product space) in such a way that $\pi$ agrees with the projection onto the first factor. That is, the following diagram should commute:
where $\operatorname{proj}_1 : U \times F \to U$ is the natural projection and $\varphi : \pi^{-1}(U) \to U \times F$ is a homeomorphism. The set of all $\{(U_i, \varphi_i)\}$ is called a local trivialization of the bundle.
Thus for any $p \in B$, the preimage $\pi^{-1}(\{p\})$ is homeomorphic to $F$ (since this is true of $\operatorname{proj}_1^{-1}(\{p\})$) and is called the fiber over $p$. Every fiber bundle $\pi : E \to B$ is an open map, since projections of products are open maps. Therefore $B$ carries the quotient topology determined by the map $\pi$.
A fiber bundle $(E, B, \pi, F)$ is often denoted $F \to E \xrightarrow{\;\pi\;} B$, which, in analogy with a short exact sequence, indicates which space is the fiber, total space and base space, as well as the map from total to base space.
A smooth fiber bundle is a fiber bundle in the category of smooth manifolds. That is, $E$, $B$, and $F$ are required to be smooth manifolds and all the functions above are required to be smooth maps.
Let $E = B \times F$ and let $\pi : E \to B$ be the projection onto the first factor. Then $\pi$ is a fiber bundle (of $F$) over $B$. Here $E$ is not just locally a product but globally one. Any such fiber bundle is called a trivial bundle. Any fiber bundle over a contractible CW-complex is trivial.
Perhaps the simplest example of a nontrivial bundle $E$ is the Möbius strip. It has the circle that runs lengthwise along the center of the strip as a base $B$ and a line segment for the fiber $F$, so the Möbius strip is a bundle of the line segment over the circle. A neighborhood $U$ of $\pi(x) \in B$ (where $x \in E$) is an arc; in the picture, this is the length of one of the squares. The preimage $\pi^{-1}(U)$ in the picture is a (somewhat twisted) slice of the strip four squares wide and one long (i.e., all the points that project to $U$).
A homeomorphism ($\varphi$ in § Formal definition) exists that maps the preimage of $U$ (the trivializing neighborhood) to a slice of a cylinder: curved, but not twisted. This pair locally trivializes the strip. The corresponding trivial bundle $B \times F$ would be a cylinder, but the Möbius strip has an overall "twist". This twist is visible only globally; locally the Möbius strip and the cylinder are identical (making a single vertical cut in either gives the same space).
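The global twist can be exhibited with a standard parametrization of the Möbius strip embedded in $\mathbb{R}^3$ (an assumed, illustrative formula): going once around the base circle returns to the same fiber but with its coordinate reversed, which is exactly why no global trivialization exists.

```python
import math

def mobius(theta: float, t: float):
    """Point with base angle theta and fiber coordinate t on a Möbius strip in R^3."""
    r = 1 + t * math.cos(theta / 2)
    return (r * math.cos(theta), r * math.sin(theta), t * math.sin(theta / 2))

def close(p, q, eps=1e-9):
    return all(abs(a - b) < eps for a, b in zip(p, q))

t = 0.3
# After one full loop around the base, the fiber coordinate comes back reversed ...
assert close(mobius(0.0, t), mobius(2 * math.pi, -t))
# ... so the identification is NOT the trivial (cylinder) one unless t == 0.
assert not close(mobius(0.0, t), mobius(2 * math.pi, t))
```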
A similar nontrivial bundle is the Klein bottle, which can be viewed as a "twisted" circle bundle over another circle. The corresponding non-twisted (trivial) bundle is the 2-torus, $S^1 \times S^1$.
A covering space is a fiber bundle such that the bundle projection is a local homeomorphism. It follows that the fiber is a discrete space.
A special class of fiber bundles, called vector bundles, are those whose fibers are vector spaces (to qualify as a vector bundle the structure group of the bundle – see below – must be a linear group). Important examples of vector bundles include the tangent bundle and cotangent bundle of a smooth manifold. From any vector bundle, one can construct the frame bundle of bases, which is a principal bundle (see below).
Another special class of fiber bundles, called principal bundles, are bundles on whose fibers a free and transitive action by a group $G$ is given, so that each fiber is a principal homogeneous space. The bundle is often specified along with the group by referring to it as a principal $G$-bundle. The group $G$ is also the structure group of the bundle. Given a representation $\rho$ of $G$ on a vector space $V$, a vector bundle with $\rho(G) \subseteq \operatorname{Aut}(V)$ as a structure group may be constructed, known as the associated bundle.
A sphere bundle is a fiber bundle whose fiber is an $n$-sphere. Given a vector bundle $E$ with a metric (such as the tangent bundle to a Riemannian manifold) one can construct the associated unit sphere bundle, for which the fiber over a point $x$ is the set of all unit vectors in $E_x$. When the vector bundle in question is the tangent bundle $TM$, the unit sphere bundle is known as the unit tangent bundle.
A sphere bundle is partially characterized by its Euler class, which is a degree $n+1$ cohomology class of the base space of the bundle. In the case $n = 1$ the sphere bundle is called a circle bundle and the Euler class is equal to the first Chern class, which characterizes the topology of the bundle completely. For any $n$, given the Euler class of a bundle, one can calculate its cohomology using a long exact sequence called the Gysin sequence.
If $X$ is a topological space and $f : X \to X$ is a homeomorphism, then the mapping torus $M_f$ has a natural structure of a fiber bundle over the circle with fiber $X$. Mapping tori of homeomorphisms of surfaces are of particular importance in 3-manifold topology.
If $G$ is a topological group and $H$ is a closed subgroup, then under some circumstances the quotient space $G/H$ together with the quotient map $\pi : G \to G/H$ is a fiber bundle, whose fiber is the topological space $H$. A necessary and sufficient condition for $(G, G/H, \pi, H)$ to form a fiber bundle is that the mapping $\pi$ admits local cross-sections (Steenrod 1951, §7).
The most general conditions under which the quotient map will admit local cross-sections are not known, although if $G$ is a Lie group and $H$ a closed subgroup (and thus a Lie subgroup by Cartan's theorem), then the quotient map is a fiber bundle. One example of this is the Hopf fibration, $S^3 \to S^2$, which is a fiber bundle over the sphere $S^2$ whose total space is $S^3$. From the perspective of Lie groups, $S^3$ can be identified with the special unitary group $SU(2)$. The abelian subgroup of diagonal matrices is isomorphic to the circle group $U(1)$, and the quotient $SU(2)/U(1)$ is diffeomorphic to the sphere.
More generally, if $G$ is any topological group and $H$ a closed subgroup that also happens to be a Lie group, then $G \to G/H$ is a fiber bundle.
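The Hopf fibration mentioned above can be checked numerically with its standard formula: for a unit vector $(z_1, z_2) \in \mathbb{C}^2$, $h(z_1, z_2) = (2 z_1 \bar{z}_2, |z_1|^2 - |z_2|^2) \in \mathbb{C} \times \mathbb{R} \cong \mathbb{R}^3$ lands on $S^2$, and the fiber through $(z_1, z_2)$ is the circle $\{(w z_1, w z_2) : |w| = 1\}$, all of which maps to the same point.

```python
import cmath
import math

def hopf(z1: complex, z2: complex):
    """Hopf map S^3 -> S^2, written with values in C x R ~= R^3."""
    return (2 * z1 * z2.conjugate(), abs(z1) ** 2 - abs(z2) ** 2)

# A point of S^3 (unit sphere in C^2): |z1|^2 + |z2|^2 = 1.
z1, z2 = complex(0.6, 0.0), complex(0.0, 0.8)

# Its image lies on S^2 ...
c, x = hopf(z1, z2)
assert abs(abs(c) ** 2 + x ** 2 - 1.0) < 1e-9

# ... and the whole U(1)-orbit (the fiber circle) maps to the same point.
for k in range(8):
    w = cmath.exp(2j * math.pi * k / 8)
    c2, x2 = hopf(w * z1, w * z2)
    assert abs(c2 - c) < 1e-9 and abs(x2 - x) < 1e-9
```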
A section (or cross section) of a fiber bundle $\pi$ is a continuous map $f : B \to E$ such that $\pi(f(x)) = x$ for all $x$ in $B$. Since bundles do not in general have globally defined sections, one of the purposes of the theory is to account for their existence. The obstruction to the existence of a section can often be measured by a cohomology class, which leads to the theory of characteristic classes in algebraic topology.
The most well-known example is the hairy ball theorem, where the Euler class is the obstruction to the tangent bundle of the 2-sphere having a nowhere vanishing section.
Often one would like to define sections only locally (especially when global sections do not exist). A local section of a fiber bundle is a continuous map $f : U \to E$ where $U$ is an open set in $B$ and $\pi(f(x)) = x$ for all $x$ in $U$. If $(U, \varphi)$ is a local trivialization chart, then local sections always exist over $U$. Such sections are in 1-1 correspondence with continuous maps $U \to F$. Sections form a sheaf.
Fiber bundles often come with a group of symmetries that describe the matching conditions between overlapping local trivialization charts. Specifically, let $G$ be a topological group that acts continuously on the fiber space $F$ on the left. We lose nothing if we require $G$ to act faithfully on $F$, so that it may be thought of as a group of homeomorphisms of $F$. A $G$-atlas for the bundle $(E, B, \pi, F)$ is a set of local trivialization charts $\{(U_k, \varphi_k)\}$ such that for any $\varphi_i, \varphi_j$ for the overlapping charts $(U_i, \varphi_i)$ and $(U_j, \varphi_j)$ the function $\varphi_i \varphi_j^{-1} : (U_i \cap U_j) \times F \to (U_i \cap U_j) \times F$ is given by $\varphi_i \varphi_j^{-1}(x, \xi) = (x, t_{ij}(x)\xi)$ where $t_{ij} : U_i \cap U_j \to G$ is a continuous map called a transition function. Two $G$-atlases are equivalent if their union is also a $G$-atlas. A $G$-bundle is a fiber bundle with an equivalence class of $G$-atlases. The group $G$ is called the structure group of the bundle; the analogous term in physics is gauge group.
In the smooth category, a $G$-bundle is a smooth fiber bundle where $G$ is a Lie group, the corresponding action on $F$ is smooth, and the transition functions are all smooth maps.
The transition functions $t_{ij}$ satisfy the following conditions:
1. $t_{ii}(x) = 1$ for all $x \in U_i$
2. $t_{ji}(x) = t_{ij}(x)^{-1}$ for all $x \in U_i \cap U_j$
3. $t_{ik}(x) = t_{ij}(x)\,t_{jk}(x)$ for all $x \in U_i \cap U_j \cap U_k$
The third condition applies on triple overlaps $U_i \cap U_j \cap U_k$ and is called the cocycle condition (see Čech cohomology). The importance of this is that the transition functions determine the fiber bundle (if one assumes the Čech cocycle condition).
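The three conditions on transition functions can be verified in a toy example (names below are our own): take chart "gauges" $g_i : U_i \to G$ with $G = (\mathbb{R} \setminus \{0\}, \cdot)$ and define $t_{ij}(x) = g_i(x)/g_j(x)$. Cocycles of this form are exactly those arising from a trivial bundle, and they satisfy all three conditions automatically.

```python
# Chart gauges g_i, never zero on the sample points below, and the induced
# transition functions t_ij(x) = g_i(x) / g_j(x).

g = {1: lambda x: 2.0,
     2: lambda x: -0.5 * (1 + x * x),
     3: lambda x: x + 3.0}

def t(i, j, x):
    return g[i](x) / g[j](x)

for x in (-1.0, 0.0, 0.5, 2.0):
    for i in (1, 2, 3):
        assert t(i, i, x) == 1.0                                  # t_ii = identity
        for j in (1, 2, 3):
            assert abs(t(j, i, x) - 1.0 / t(i, j, x)) < 1e-12     # t_ji = t_ij^{-1}
            for k in (1, 2, 3):
                # Cocycle condition on triple overlaps: t_ik = t_ij * t_jk
                assert abs(t(i, k, x) - t(i, j, x) * t(j, k, x)) < 1e-12
```

A bundle is nontrivial precisely when its transition cocycle cannot be written in this "coboundary" form $g_i/g_j$ with the $g_i$ valued in the structure group.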
A principal $G$-bundle is a $G$-bundle where the fiber $F$ is a principal homogeneous space for the left action of $G$ itself (equivalently, one can specify that the action of $G$ on the fiber $F$ is free and transitive, i.e. regular). In this case, it is often a matter of convenience to identify $F$ with $G$ and so obtain a (right) action of $G$ on the principal bundle.
It is useful to have notions of a mapping between two fiber bundles. Suppose thatMandNare base spaces, andπE:E→M{\displaystyle \pi _{E}:E\to M}andπF:F→N{\displaystyle \pi _{F}:F\to N}are fiber bundles overMandN, respectively. Abundle maporbundle morphismconsists of a pair of continuous[14]functionsφ:E→F,f:M→N{\displaystyle \varphi :E\to F,\quad f:M\to N}such thatπF∘φ=f∘πE.{\displaystyle \pi _{F}\circ \varphi =f\circ \pi _{E}.}That is, the following diagram iscommutative:
For fiber bundles with structure groupGand whose total spaces are (right)G-spaces (such as a principal bundle), bundlemorphismsare also required to beG-equivarianton the fibers. This means thatφ:E→F{\displaystyle \varphi :E\to F}is alsoG-morphism from oneG-space to another, that is,φ(xs)=φ(x)s{\displaystyle \varphi (xs)=\varphi (x)s}for allx∈E{\displaystyle x\in E}ands∈G.{\displaystyle s\in G.}
In case the base spacesMandNcoincide, then a bundle morphism overMfrom the fiber bundleπE:E→M{\displaystyle \pi _{E}:E\to M}toπF:F→M{\displaystyle \pi _{F}:F\to M}is a mapφ:E→F{\displaystyle \varphi :E\to F}such thatπE=πF∘φ.{\displaystyle \pi _{E}=\pi _{F}\circ \varphi .}This means that the bundle mapφ:E→F{\displaystyle \varphi :E\to F}coversthe identity ofM. That is,f≡idM{\displaystyle f\equiv \mathrm {id} _{M}}and the following diagram commutes:
Assume that both {\displaystyle \pi _{E}:E\to M} and {\displaystyle \pi _{F}:F\to M} are defined over the same base space M. A bundle isomorphism is a bundle map {\displaystyle (\varphi ,\,f)} between {\displaystyle \pi _{E}:E\to M} and {\displaystyle \pi _{F}:F\to M} such that {\displaystyle f\equiv \mathrm {id} _{M}} and such that {\displaystyle \varphi } is also a homeomorphism.[15]
In the category of differentiable manifolds, fiber bundles arise naturally as submersions of one manifold to another. Not every (differentiable) submersion {\displaystyle f:M\to N} from a differentiable manifold M to another differentiable manifold N gives rise to a differentiable fiber bundle. For one thing, the map must be surjective; a surjective submersion {\displaystyle (M,N,f)} is called a fibered manifold. However, this necessary condition is not quite sufficient, and there are a variety of sufficient conditions in common use.
If M and N are compact and connected, then any submersion {\displaystyle f:M\to N} gives rise to a fiber bundle in the sense that there is a fiber space F diffeomorphic to each of the fibers such that {\displaystyle (E,B,\pi ,F)=(M,N,f,F)} is a fiber bundle. (Surjectivity of {\displaystyle f} follows by the assumptions already given in this case.) More generally, the assumption of compactness can be relaxed if the submersion {\displaystyle f:M\to N} is assumed to be a surjective proper map, meaning that {\displaystyle f^{-1}(K)} is compact for every compact subset K of N. Another sufficient condition, due to Ehresmann (1951), is that if {\displaystyle f:M\to N} is a surjective submersion with M and N differentiable manifolds such that the preimage {\displaystyle f^{-1}\{x\}} is compact and connected for all {\displaystyle x\in N,} then {\displaystyle f} admits a compatible fiber bundle structure (Michor 2008, §17).
https://en.wikipedia.org/wiki/Fiber_bundle
In mathematics, an index set is a set whose members label (or index) members of another set.[1][2] For instance, if the elements of a set A may be indexed or labeled by means of the elements of a set J, then J is an index set. The indexing consists of a surjective function from J onto A, and the indexed collection is typically called an indexed family, often written as {\displaystyle \{A_{j}\}_{j\in J}}.
The set of all such indicator functions, {\displaystyle \{\mathbf {1} _{r}\}_{r\in \mathbb {R} }}, is an uncountable set indexed by {\displaystyle \mathbb {R} }.
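The article's family is indexed by all of {\displaystyle \mathbb {R} }; a finite stand-in (with an illustrative index set) shows the same construction, one indicator function per index:

```python
# A finite analogue of the indexed family of indicator functions:
# for each r in the index set J, 1_r(x) = 1 if x == r else 0.
J = {0, 1, 2}  # illustrative index set (the article uses all of R)

def indicator(r):
    """Return the indicator function 1_r of the singleton {r}."""
    return lambda x: 1 if x == r else 0

# The indexed family {1_r}_{r in J}, with J as the index set.
family = {r: indicator(r) for r in J}

assert family[1](1) == 1
assert family[1](2) == 0
```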
In computational complexity theory and cryptography, an index set is a set for which there exists an algorithm I that can sample the set efficiently; e.g., on input 1^n, I can efficiently select a poly(n)-bit long element from the set.[3]
https://en.wikipedia.org/wiki/Index_set
In category theory, a branch of mathematics, a section is a right inverse of some morphism. Dually, a retraction is a left inverse of some morphism.
In other words, if {\displaystyle f:X\to Y} and {\displaystyle g:Y\to X} are morphisms whose composition {\displaystyle f\circ g:Y\to Y} is the identity morphism on {\displaystyle Y}, then {\displaystyle g} is a section of {\displaystyle f}, and {\displaystyle f} is a retraction of {\displaystyle g}.[1]
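In the category of sets this definition can be checked directly; the functions f and g below are illustrative choices, not canonical ones:

```python
# A sketch in the category of sets: g is a right inverse (section) of f,
# equivalently f is a retraction of g.
def f(x):
    """Surjection from {0,...,5} onto {0,1,2}."""
    return x // 2

def g(y):
    """A section of f: choose one preimage for each y."""
    return 2 * y

# f o g = id on {0,1,2}, so g is a section of f and f a retraction of g.
for y in range(3):
    assert f(g(y)) == y
```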
Every section is a monomorphism (every morphism with a left inverse is left-cancellative), and every retraction is an epimorphism (every morphism with a right inverse is right-cancellative).
In algebra, sections are also called split monomorphisms and retractions are also called split epimorphisms. In an abelian category, if {\displaystyle f:X\to Y} is a split epimorphism with split monomorphism {\displaystyle g:Y\to X}, then {\displaystyle X} is isomorphic to the direct sum of {\displaystyle Y} and the kernel of {\displaystyle f}. The synonym coretraction for section is sometimes seen in the literature, although rarely in recent work.
The concept of a retraction in category theory comes from the essentially similar notion of a retraction in topology: {\displaystyle f:X\to Y}, where Y is a subspace of X, is a retraction in the topological sense if it is a retraction of the inclusion map {\displaystyle i:Y\hookrightarrow X} in the category-theoretic sense. The concept in topology was defined by Karol Borsuk in 1931.[2]
Borsuk's student, Samuel Eilenberg, was with Saunders Mac Lane the founder of category theory, and (as the earliest publications on category theory concerned various topological spaces) one might have expected this term to have been used initially. In fact, their earlier publications, up to, e.g., Mac Lane (1963)'s Homology, used the term right inverse. It was not until 1965, when Eilenberg and John Coleman Moore coined the dual term 'coretraction', that Borsuk's term was lifted to category theory in general.[3] The term coretraction gave way to the term section by the end of the 1960s.
Both the use of left/right inverse and of section/retraction are commonly seen in the literature: the former use has the advantage that it is familiar from the theory of semigroups and monoids; the latter is considered less confusing by some because one does not have to think about 'which way around' composition goes, an issue that has become greater with the increasing popularity of the synonym {\displaystyle f\circ g} for {\displaystyle g\circ f}.[4]
In the category of sets, every monomorphism (injective function) with a non-empty domain is a section, and every epimorphism (surjective function) is a retraction; the latter statement is equivalent to the axiom of choice.
In the category of vector spaces over a field K, every monomorphism and every epimorphism splits; this follows from the fact that linear maps can be uniquely defined by specifying their values on a basis.
In the category of abelian groups, the epimorphism Z → Z/2Z which sends every integer to its remainder modulo 2 does not split; in fact the only morphism Z/2Z → Z is the zero map. Similarly, the natural monomorphism Z/2Z → Z/4Z does not split, even though there is a non-trivial morphism Z/4Z → Z/2Z.
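The finite case can be checked exhaustively (the Z → Z/2Z case needs an infinite domain and is not covered by this sketch): the monomorphism Z/2Z → Z/4Z has no retraction, even though a non-trivial morphism Z/4Z → Z/2Z exists.

```python
# The inclusion i: Z/2Z -> Z/4Z, i(x) = 2x mod 4, has no retraction.
# Enumerate all group homomorphisms r: Z/4Z -> Z/2Z and check that
# none satisfies r o i = id on Z/2Z.
def homs_z4_to_z2():
    # A homomorphism is determined by r(1); the constraint 4*r(1) = 0
    # holds in Z/2Z for every choice, so r(1) may be 0 or 1.
    for r1 in (0, 1):
        yield lambda x, r1=r1: (r1 * x) % 2

i = lambda x: (2 * x) % 4  # the monomorphism Z/2Z -> Z/4Z

# No r retracts i: for r(1)=0 and r(1)=1 alike, r(i(1)) = 0 != 1.
assert not any(all(r(i(x)) == x for x in (0, 1)) for r in homs_z4_to_z2())
```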
The categorical concept of a section is important in homological algebra, and is also closely related to the notion of a section of a fiber bundle in topology: in the latter case, a section of a fiber bundle is a section of the bundle projection map of the fiber bundle.
Given a quotient space {\displaystyle {\bar {X}}} with quotient map {\displaystyle \pi \colon X\to {\bar {X}}}, a section of {\displaystyle \pi } is called a transversal.
https://en.wikipedia.org/wiki/Section_(category_theory)
The mathematical concept of a function dates from the 17th century in connection with the development of calculus; for example, the slope {\displaystyle dy/dx} of a graph at a point was regarded as a function of the x-coordinate of the point. Functions were not explicitly considered in antiquity, but some precursors of the concept can perhaps be seen in the work of medieval philosophers and mathematicians such as Oresme.
Mathematicians of the 18th century typically regarded a function as being defined by an analytic expression. In the 19th century, the demands of the rigorous development of analysis by Karl Weierstrass and others, the reformulation of geometry in terms of analysis, and the invention of set theory by Georg Cantor eventually led to the much more general modern concept of a function as a single-valued mapping from one set to another.
In the 12th century, the mathematician Sharaf al-Din al-Tusi analyzed the equation x^3 + d = b·x^2 in the form x^2·(b − x) = d, stating that the left-hand side must at least equal the value of d for the equation to have a solution. He then determined the maximum value of this expression. It is arguable that the isolation of this expression is an early approach to the notion of a "function". A maximum value less than d means no positive solution; a value equal to d corresponds to one solution, while a value greater than d corresponds to two solutions. Sharaf al-Din's analysis of this equation was a notable development in Islamic mathematics, but his work was not pursued any further at that time, neither in the Muslim world nor in Europe.[1]
According to Jean Dieudonné[2] and Ponte,[3] the concept of a function emerged in the 17th century as a result of the development of analytic geometry and the infinitesimal calculus. Nevertheless, Medvedev suggests that the implicit concept of a function is one with an ancient lineage.[4] Ponte also sees more explicit approaches to the concept in the Middle Ages:
The development of analytical geometry around 1640 allowed mathematicians to go between geometric problems about curves and algebraic relations between "variable coordinates x and y".[6] Calculus was developed using the notion of variables, with their associated geometric meaning, which persisted well into the eighteenth century.[7] However, the terminology of "function" came to be used in interactions between Leibniz and Bernoulli towards the end of the 17th century.[8]
The term "function" was literally introduced byGottfried Leibniz, in a 1673 letter, to describe a quantity related to points of acurve, such as acoordinateor curve'sslope.[9][10]Johann Bernoullistarted calling expressions made of a single variable "functions." In 1698, he agreed with Leibniz that any quantity formed "in an algebraic and transcendental manner" may be called a function ofx.[11]By 1718, he came to regard as a function "any expression made up of a variable and some constants."[12]Alexis Claude Clairaut(in approximately 1734) andLeonhard Eulerintroduced the familiar notationf(x){\displaystyle {f(x)}}for the value of a function.[13]
The functions considered in those times are called today differentiable functions. For this type of function, one can talk about limits and derivatives; both are measurements of the output or the change in the output as it depends on the input or the change in the input. Such functions are the basis of calculus.
In the first volume of his fundamental text Introductio in analysin infinitorum, published in 1748, Euler gave essentially the same definition of a function as his teacher Bernoulli, as an expression or formula involving variables and constants, e.g., {\displaystyle {x^{2}+3x+2}}.[14] Euler's own definition reads:
Euler also allowed multi-valued functions whose values are determined by an implicit equation.
In 1755, however, in his Institutiones calculi differentialis, Euler gave a more general concept of a function:
Medvedev[17] considers that "In essence this is the definition that became known as Dirichlet's definition." Edwards[18] also credits Euler with a general concept of a function and says further that
In his Théorie Analytique de la Chaleur,[19] Joseph Fourier claimed that an arbitrary function could be represented by a Fourier series.[20] Fourier had a general conception of a function, which included functions that were neither continuous nor defined by an analytical expression.[21] Related questions on the nature and representation of functions, arising from the solution of the wave equation for a vibrating string, had already been the subject of dispute between Jean le Rond d'Alembert and Euler, and they had a significant impact in generalizing the notion of a function. Luzin observes that:
During the 19th century, mathematicians started to formalize all the different branches of mathematics. One of the first to do so was Augustin-Louis Cauchy; his somewhat imprecise results were later made completely rigorous by Weierstrass, who advocated building calculus on arithmetic rather than on geometry, which favoured Euler's definition over Leibniz's (see arithmetization of analysis). According to Smithies, Cauchy thought of functions as being defined by equations involving real or complex numbers, and tacitly assumed they were continuous:
Nikolai Lobachevsky[24] and Peter Gustav Lejeune Dirichlet[25] are traditionally credited with independently giving the modern "formal" definition of a function as a relation in which every first element has a unique second element.
Lobachevsky (1834) writes that
while Dirichlet (1837) writes
Eves asserts that "the student of mathematics usually meets the Dirichlet definition of function in his introductory course in calculus."[28]
Dirichlet's claim to this formalization has been disputed by Imre Lakatos:
However, Gardiner says
"...it seems to me that Lakatos goes too far, for example, when he asserts that 'there is ample evidence that [Dirichlet] had no idea of [the modern function] concept'."[30]Moreover, as noted above, Dirichlet's paper does appear to include a definition along the lines of what is usually ascribed to him, even though (like Lobachevsky) he states it only for continuous functions of a real variable.
Similarly, Lavine observes that:
Because Lobachevsky and Dirichlet have been credited as among the first to introduce the notion of an arbitrary correspondence, this notion is sometimes referred to as the Dirichlet or Lobachevsky–Dirichlet definition of a function.[32] A general version of this definition was later used by Bourbaki (1939), and some in the education community refer to it as the "Dirichlet–Bourbaki" definition of a function.
Dieudonné, who was one of the founding members of the Bourbaki group, credits a precise and general modern definition of a function to Dedekind in his work Was sind und was sollen die Zahlen,[33] which appeared in 1888 but had already been drafted in 1878. Dieudonné observes that instead of confining himself, as in previous conceptions, to real (or complex) functions, Dedekind defines a function as a single-valued mapping between any two sets:
Hardy 1908, pp. 26–28, defined a function as a relation between two variables x and y such that "to some values of x at any rate correspond values of y." He neither required the function to be defined for all values of x nor to associate each value of x to a single value of y. This broad definition of a function encompasses more relations than are ordinarily considered functions in contemporary mathematics. For example, Hardy's definition includes multivalued functions and what in computability theory are called partial functions.
Logicians of this time were primarily involved with analyzing syllogisms (the 2000-year-old Aristotelian forms and otherwise), or as Augustus De Morgan (1847) stated it: "the examination of that part of reasoning which depends upon the manner in which inferences are formed,
and the investigation of general maxims and rules for constructing arguments".[35] At this time the notion of (logical) "function" is not explicit, but at least in the work of De Morgan and George Boole it is implied: we see abstraction of the argument forms, the introduction of variables, the introduction of a symbolic algebra with respect to these variables, and some of the notions of set theory.
De Morgan's 1847 "FORMAL LOGIC OR, The Calculus of Inference, Necessary and Probable" observes that "[a] logical truth depends upon the structure of the statement, and not upon the particular matters spoken of"; he wastes no time (preface page i) abstracting: "In the form of the proposition, the copula is made as abstract as the terms". He immediately (p. 1) casts what he calls "the proposition" (present-day propositional function or relation) into a form such as "X is Y", where the symbols X, "is", and Y represent, respectively, the subject, copula, and predicate. While the word "function" does not appear, the notion of "abstraction" is there, "variables" are there, the notion of inclusion in his symbolism "all of the Δ is in the О" (p. 9) is there, and lastly a new symbolism for logical analysis of the notion of "relation" (he uses the word with respect to this example "X)Y" (p. 75)) is there:
In his 1848 The Nature of Logic Boole asserts that "logic . . . is in a more especial sense the science of reasoning by signs", and he briefly discusses the notions of "belonging to" and "class": "An individual may possess a great variety of attributes and thus belonging to a great variety of different classes".[36] Like De Morgan he uses the notion of "variable" drawn from analysis; he gives an example of "represent[ing] the class oxen by x and that of horses by y and the conjunction and by the sign + . . . we might represent the aggregate class oxen and horses by x + y".[37]
In the context of "the Differential Calculus" Boole defined (circa 1849) the notion of a function as follows:
Eves observes "that logicians have endeavored to push down further the starting level of the definitional development of mathematics and to derive the theory ofsets, orclasses, from a foundation in the logic of propositions and propositional functions".[39]But by the late 19th century the logicians' research into the foundations of mathematics was undergoing a major split. The direction of the first group, theLogicists, can probably be summed up best by Bertrand Russell1903– "to fulfil two objects, first, to show that all mathematics follows from symbolic logic, and secondly to discover, as far as possible, what are the principles of symbolic logic itself."
The second group of logicians, the set-theorists, emerged with Georg Cantor's "set theory" (1870–1890) but were driven forward partly as a result of Russell's discovery of a paradox that could be derived from Frege's conception of "function", but also as a reaction against Russell's proposed solution.[40] Ernst Zermelo's set-theoretic response was his 1908 Investigations in the foundations of set theory I – the first axiomatic set theory; here too the notion of "propositional function" plays a role.
In his An Investigation into the laws of thought Boole now defined a function in terms of a symbol x as follows:
Boole then used algebraic expressions to define both algebraic and logical notions, e.g., 1 − x is logical NOT(x), xy is the logical AND(x, y), x + y is the logical OR(x, y), x(x + y) is xx + xy, and "the special law" xx = x^2 = x.[42]
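These identities can be checked mechanically on the truth values 0 and 1 (a sketch: Boole's own "+" was restricted to disjoint classes, so the distributive law below is checked as plain integer algebra rather than as a truth table):

```python
# Boole's algebraic reading of logic on {0, 1}.
def NOT(x): return 1 - x        # 1 - x is logical NOT(x)
def AND(x, y): return x * y     # xy is logical AND(x, y)

for x in (0, 1):
    assert x * x == x**2 == x   # the special law: xx = x^2 = x
    assert NOT(NOT(x)) == x
    for y in (0, 1):
        # x(x + y) = xx + xy, as integer algebra
        assert x * (x + y) == x * x + x * y
```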
In his 1881 Symbolic Logic Venn was using the words "logical function" and the contemporary symbolism (x = f(y), y = f^{-1}(x), cf. page xxi), plus the circle diagrams historically associated with Venn, to describe "class relations",[43] the notions "'quantifying' our predicate", "propositions in respect of their extension", "the relation of inclusion and exclusion of two classes to one another", and "propositional function" (all on p. 10), the bar over a variable to indicate not-x (page 43), etc. Indeed, he equated unequivocally the notion of "logical function" with "class" [modern "set"]: "... on the view adopted in this book, f(x) never stands for anything but a logical class. It may be a compound class aggregated of many simple classes; it may be a class indicated by certain inverse logical operations, it may be composed of two groups of classes equal to one another, or what is the same thing, their difference declared equal to zero, that is, a logical equation. But however composed or derived, f(x) with us will never be anything else than a general expression for such logical classes of things as may fairly find a place in ordinary Logic".[44]
Gottlob Frege's Begriffsschrift (1879) preceded Giuseppe Peano (1889), but Peano had no knowledge of Frege 1879 until after he had published his 1889.[45] Both writers strongly influenced Russell (1903). Russell in turn influenced much of 20th-century mathematics and logic through his Principia Mathematica (1913), jointly authored with Alfred North Whitehead.
At the outset Frege abandons the traditional "concepts subject and predicate", replacing them with argument and function respectively, which he believes "will stand the test of time. It is easy to see how regarding a content as a function of an argument leads to the formation of concepts. Furthermore, the demonstration of the connection between the meanings of the words if, and, not, or, there is, some, all, and so forth, deserves attention".[46]
Frege begins his discussion of "function" with an example: Begin with the expression[47]"Hydrogen is lighter than carbon dioxide". Now remove the sign for hydrogen (i.e., the word "hydrogen") and replace it with the sign for oxygen (i.e., the word "oxygen"); this makes a second statement. Do this again (using either statement) and substitute the sign for nitrogen (i.e., the word "nitrogen") and note that "This changes the meaning in such a way that "oxygen" or "nitrogen" enters into the relations in which "hydrogen" stood before".[48]There are three statements:
Now observe in all three a "stable component, representing the totality of [the] relations";[49] call this the function, i.e.,
Frege calls the argument of the function "[t]he sign [e.g., hydrogen, oxygen, or nitrogen], regarded as replaceable by others, that denotes the object standing in these relations".[50] He notes that we could have derived the function as "Hydrogen is lighter than . . ." as well, with an argument position on the right; the exact observation is made by Peano (see more below). Finally, Frege allows for the case of two (or more) arguments. For example, remove "carbon dioxide" to yield the invariant part (the function) as:
The one-argument function Frege generalizes into the form Φ(A), where A is the argument and Φ( ) represents the function, whereas the two-argument function he symbolizes as Ψ(A, B), with A and B the arguments and Ψ( , ) the function, and cautions that "in general Ψ(A, B) differs from Ψ(B, A)". Using his unique symbolism he translates for the reader the following symbolism:
Peano defined the notion of "function" in a manner somewhat similar to Frege, but without the precision.[52]First Peano defines the sign "K meansclass, or aggregate of objects",[53]the objects of which satisfy three simple equality-conditions,[54]a=a, (a=b) = (b=a), IF ((a=b) AND (b=c)) THEN (a=c). He then introduces φ, "a sign or an aggregate of signs such that ifxis an object of the classs, the expression φxdenotes a new object". Peano adds two conditions on these new objects: First, that the three equality-conditions hold for the objects φx; secondly, that "ifxandyare objects of classsand ifx=y, we assume it is possible to deduce φx= φy".[55]Given all these conditions are met, φ is a "function presign". Likewise he identifies a "function postsign". For example ifφis the function presigna+, then φxyieldsa+x, or if φ is the function postsign +athenxφ yieldsx+a.[54]
While the influence of Cantor and Peano was paramount,[56] in Appendix A "The Logical and Arithmetical Doctrines of Frege" of The Principles of Mathematics, Russell arrives at a discussion of Frege's notion of function, "...a point in which Frege's work is very important, and requires careful examination".[57] In response to his 1902 exchange of letters with Frege about the contradiction he discovered in Frege's Begriffsschrift, Russell tacked this section on at the last moment.
For Russell the bedeviling notion is that of variable: "6. Mathematical propositions are not only characterized by the fact that they assert implications, but also by the fact that they contain variables. The notion of the variable is one of the most difficult with which logic has to deal. For the present, I openly wish to make it plain that there are variables in all mathematical propositions, even where at first sight they might seem to be absent. . . . We shall find always, in all mathematical propositions, that the words any or some occur; and these words are the marks of a variable and a formal implication".[58]
As expressed by Russell, "the process of transforming constants in a proposition into variables leads to what is called generalization, and gives us, as it were, the formal essence of a proposition ... So long as any term in our proposition can be turned into a variable, our proposition can be generalized; and so long as this is possible, it is the business of mathematics to do it";[59] these generalizations Russell named propositional functions.[60] Indeed, he cites and quotes from Frege's Begriffsschrift and presents a vivid example from Frege's 1891 Function und Begriff: that "the essence of the arithmetical function 2x^3 + x is what is left when the x is taken away, i.e., in the above instance 2( )^3 + ( ). The argument x does not belong to the function but the two taken together make the whole".[57] Russell agreed with Frege's notion of "function" in one sense: "He regards functions – and in this I agree with him – as more fundamental than predicates and relations", but Russell rejected Frege's "theory of subject and assertion", in particular "he thinks that, if a term a occurs in a proposition, the proposition can always be analysed into a and an assertion about a".[57]
Russell would carry his ideas forward in his 1908 Mathematical logic as based on the theory of types and into his and Whitehead's 1910–1913 Principia Mathematica. By the time of Principia Mathematica Russell, like Frege, considered the propositional function fundamental: "Propositional functions are the fundamental kind from which the more usual kinds of function, such as "sin x" or "log x" or "the father of x", are derived. These derivative functions . . . are called "descriptive functions". The functions of propositions . . . are a particular case of propositional functions".[61]
Propositional functions: Because his terminology is different from the contemporary, the reader may be confused by Russell's "propositional function". An example may help. Russell writes a propositional function in its raw form, e.g., as φŷ: "ŷ is hurt". (Observe the circumflex or "hat" over the variable y.) For our example, we will assign just 4 values to the variable ŷ: "Bob", "This bird", "Emily the rabbit", and "y". Substitution of one of these values for the variable ŷ yields a proposition; this proposition is called a "value" of the propositional function. In our example there are four values of the propositional function, e.g., "Bob is hurt", "This bird is hurt", "Emily the rabbit is hurt" and "y is hurt." A proposition, if it is significant—i.e., if its truth is determinate—has a truth-value of truth or falsity. If a proposition's truth value is "truth", then the variable's value is said to satisfy the propositional function. Finally, per Russell's definition, "a class [set] is all objects satisfying some propositional function" (p. 23). Note the word "all" – this is how the contemporary notions of "For all ∀" and "there exists at least one instance ∃" enter the treatment (p. 15).
To continue the example: Suppose (from outside the mathematics/logic) one determines that the proposition "Bob is hurt" has a truth value of "falsity", "This bird is hurt" has a truth value of "truth", "Emily the rabbit is hurt" has an indeterminate truth value because "Emily the rabbit" does not exist, and "y is hurt" is ambiguous as to its truth value because the argument y itself is ambiguous. While the two propositions "Bob is hurt" and "This bird is hurt" are significant (both have truth values), only the value "This bird" of the variable ŷ satisfies the propositional function φŷ: "ŷ is hurt". When one goes to form the class α: φŷ: "ŷ is hurt", only "This bird" is included, given the four values "Bob", "This bird", "Emily the rabbit" and "y" for the variable ŷ and their respective truth-values: falsity, truth, indeterminate, ambiguous.
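The class-formation step of this example can be sketched mechanically (a sketch only: the truth assessments are supplied from outside the logic, exactly as in the text above):

```python
# Russell's propositional function phi(y): "y is hurt", evaluated on the
# four argument values from the example.
values = ["Bob", "This bird", "Emily the rabbit", "y"]
truth = {  # externally determined truth value of each resulting proposition
    "Bob": "falsity",
    "This bird": "truth",
    "Emily the rabbit": "indeterminate",  # the argument does not exist
    "y": "ambiguous",                     # the argument is itself ambiguous
}

# The class alpha contains exactly the values that *satisfy* the
# propositional function, i.e. whose proposition has truth value "truth".
alpha = [v for v in values if truth[v] == "truth"]
assert alpha == ["This bird"]
```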
Russell defines functions of propositions with arguments, and truth-functions f(p).[62] For example, suppose one were to form the "function of propositions with arguments" p1: "NOT(p) AND q" and assign its variables the values of p: "Bob is hurt" and q: "This bird is hurt". (We are restricted to the logical linkages NOT, AND, OR and IMPLIES, and we can only assign "significant" propositions to the variables p and q.) Then the "function of propositions with arguments" is p1: NOT("Bob is hurt") AND "This bird is hurt". To determine the truth value of this "function of propositions with arguments" we submit it to a "truth function", e.g., f(p1): f( NOT("Bob is hurt") AND "This bird is hurt" ), which yields a truth value of "truth".
The notion of a "many-one" functional relation": Russell first discusses the notion of "identity", then defines adescriptive function(pages 30ff) as theuniquevalueιxthat satisfies the (2-variable) propositional function (i.e., "relation")φŷ.
Russell symbolizes the descriptive function as "the object standing in relation to y": R'y =DEF (ιx)(x R y). Russell repeats that "R'y is a function of y, but not a propositional function [sic]; we shall call it a descriptive function. All the ordinary functions of mathematics are of this kind. Thus in our notation "sin y" would be written "sin'y", and "sin" would stand for the relation sin'y has to y".[64]
David Hilbert set himself the goal of "formalizing" classical mathematics "as a formal axiomatic theory, and this theory shall be proved to be consistent, i.e., free from contradiction".[65] In Hilbert 1927 The Foundations of Mathematics he frames the notion of function in terms of the existence of an "object":
Hilbert then illustrates the three ways in which the ε-function is to be used: firstly as the "for all" and "there exists" notions, secondly to represent the "object of which [a proposition] holds", and lastly how to cast it into the choice function.
Recursion theory and computability: But the unexpected outcome of Hilbert's and his student Bernays's effort was failure; see Gödel's incompleteness theorems of 1931. At about the same time, in an effort to solve Hilbert's Entscheidungsproblem, mathematicians set about to define what was meant by an "effectively calculable function" (Alonzo Church 1936), i.e., "effective method" or "algorithm", that is, an explicit, step-by-step procedure that would succeed in computing a function. Various models for algorithms appeared in rapid succession, including Church's lambda calculus (1936), Stephen Kleene's μ-recursive functions (1936) and Alan Turing's (1936–7) notion of replacing human "computers" with utterly mechanical "computing machines" (see Turing machines). It was shown that all of these models could compute the same class of computable functions. Church's thesis holds that this class of functions exhausts all the number-theoretic functions that can be calculated by an algorithm. The outcomes of these efforts were vivid demonstrations that, in Turing's words, "there can be no general process for determining whether a given formula U of the functional calculus K [Principia Mathematica] is provable";[67] see more at Independence (mathematical logic) and Computability theory.
Set theory began with the work of the logicians with the notion of "class" (modern "set"), for example De Morgan (1847), Jevons (1880), Venn (1881), Frege (1879) and Peano (1889). It was given a push by Georg Cantor's attempt to define the infinite in set-theoretic treatment (1870–1890) and a subsequent discovery of an antinomy (contradiction, paradox) in this treatment (Cantor's paradox), by Russell's discovery (1902) of an antinomy in Frege's 1879 (Russell's paradox), by the discovery of more antinomies in the early 20th century (e.g., the 1897 Burali-Forti paradox and the 1905 Richard paradox), and by resistance to Russell's complex treatment of logic[68] and dislike of his axiom of reducibility[69] (1908, 1910–1913) that he proposed as a means to evade the antinomies.
In 1902 Russell sent a letter to Frege pointing out that Frege's 1879 Begriffsschrift allowed a function to be an argument of itself: "On the other hand, it may also be that the argument is determinate and the function indeterminate . . .."[70] From this unconstrained situation Russell was able to form a paradox:
Frege responded promptly that "Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build arithmetic".[72]
From this point forward, the development of the foundations of mathematics became an exercise in how to dodge "Russell's paradox", framed as it was in "the bare [set-theoretic] notions of set and element".[73]
The notion of "function" appears as Zermelo's axiom III—the Axiom of Separation (Axiom der Aussonderung). This axiom constrains us to use a propositional function Φ(x) to "separate" asubsetMΦfrom a previously formed setM:
As there is no universal set – sets originate by way of Axiom II from elements of the (non-set) domain B – "...this disposes of the Russell antinomy so far as we are concerned".[75] But Zermelo's "definite criterion" is imprecise, and was fixed by Weyl, Fraenkel, Skolem, and von Neumann.[76]
In fact Skolem in his 1922 referred to this "definite criterion" or "property" as a "definite proposition":
van Heijenoortsummarizes:
In this quote the reader may observe a shift in terminology: nowhere is mentioned the notion of "propositional function", but rather one sees the words "formula", "predicate calculus", "predicate", and "logical calculus." This shift in terminology is discussed more in the section that covers "function" in contemporary set theory.
The history of the notion of "ordered pair" is not clear. As noted above, Frege (1879) proposed an intuitive ordering in his definition of a two-argument function Ψ(A, B).Norbert Wienerin his 1914 (see below) observes that his own treatment essentially "revert(s) toSchröder'streatment of a relation as a class of ordered couples".[79]Russell (1903)considered the definition of a relation (such as Ψ(A, B)) as a "class of couples" but rejected it:
By 1910–1913 andPrincipia MathematicaRussell had given up on the requirement for anintensionaldefinition of a relation, stating that "mathematics is always concerned with extensions rather than intensions" and "Relations, like classes, are to be taken inextension".[81]To demonstrate the notion of a relation inextensionRussell now embraced the notion ofordered couple: "We may regard a relation ... as a class of couples ... the relation determined by φ(x, y) is the class of couples (x, y) for which φ(x, y) is true".[82]In a footnote he clarified his notion and arrived at this definition:
But he goes on to say that he would not introduce the ordered couples further into his "symbolic treatment"; he proposes his "matrix" and his unpopular axiom of reducibility in their place.
An attempt to solve the problem of the antinomies led Russell to propose his "doctrine of types" in an appendix B of his 1903 The Principles of Mathematics.[83] In a few years he would refine this notion and propose in his 1908 The Theory of Types two axioms of reducibility, the purpose of which was to reduce (single-variable) propositional functions and (dual-variable) relations to a "lower" form (and ultimately into a completely extensional form); he and Alfred North Whitehead would carry this treatment over to Principia Mathematica 1910–1913 with a further refinement called "a matrix".[84] The first axiom is *12.1; the second is *12.11. To quote Wiener the second axiom *12.11 "is involved only in the theory of relations".[85] Both axioms, however, were met with skepticism and resistance; see more at Axiom of reducibility. By 1914 Norbert Wiener, using Whitehead and Russell's symbolism, eliminated axiom *12.11 (the "two-variable" (relational) version of the axiom of reducibility) by expressing a relation as an ordered pair using the null set. At approximately the same time, Hausdorff (1914, p. 32) gave the definition of the ordered pair (a, b) as {{a, 1}, {b, 2}}. A few years later Kuratowski (1921) offered a definition that has been widely used ever since, namely {{a, b}, {a}}.[86] As noted by Suppes (1960), "This definition . . . was historically important in reducing the theory of relations to the theory of sets".[87]
Observe that while Wiener "reduced" the relational *12.11 form of the axiom of reducibility hedid notreduce nor otherwise change the propositional-function form *12.1; indeed he declared this "essential to the treatment of identity, descriptions, classes and relations".[88]
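The set-theoretic pair definitions quoted above can be checked mechanically. A sketch in Python, using frozensets as a stand-in for sets, encodes both Hausdorff's and Kuratowski's constructions and verifies the one property an ordered pair must have: equality of pairs forces equality of components in order.

```python
def kuratowski(a, b):
    # Kuratowski (1921): (a, b) := {{a, b}, {a}}
    return frozenset({frozenset({a, b}), frozenset({a})})

def hausdorff(a, b):
    # Hausdorff (1914): (a, b) := {{a, 1}, {b, 2}},
    # assuming the markers 1 and 2 are distinct from a and b.
    return frozenset({frozenset({a, 1}), frozenset({b, 2})})

# Characteristic property: pairs are equal iff components agree in order.
assert kuratowski('x', 'y') == kuratowski('x', 'y')
assert kuratowski('x', 'y') != kuratowski('y', 'x')
assert hausdorff('x', 'y') != hausdorff('y', 'x')
```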
Where exactly the general notion of "function" as a many-one correspondence derives from is unclear. Russell in his 1920 Introduction to Mathematical Philosophy states that "It should be observed that all mathematical functions result from one-many [sic – contemporary usage is many-one] relations . . . Functions in this sense are descriptive functions".[89] A reasonable possibility is the Principia Mathematica notion of "descriptive function" – R'y =DEF (ιx)(x R y): "the singular object that has a relation R to y". Whatever the case, by 1924, Moses Schönfinkel expressed the notion, claiming it to be "well known":
According toWillard Quine,Schönfinkel 1924"provide[s] for ... the whole sweep of abstract set theory. The crux of the matter is that Schönfinkel lets functions stand as arguments. For Schönfinkel, substantially as for Frege, classes are special sorts of functions. They are propositional functions, functions whose values are truth values. All functions, propositional and otherwise, are for Schönfinkel one-place functions".[91]Remarkably, Schönfinkel reduces all mathematics to an extremely compactfunctional calculusconsisting of only three functions: Constancy, fusion (i.e., composition), and mutual exclusivity. Quine notes thatHaskell Curry(1958) carried this work forward "under the head ofcombinatory logic".[92]
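Schönfinkel's reduction can be tasted in a few lines. In curried form (every function one-place, as Schönfinkel requires), the constancy combinator and the fusion combinator alone already generate the identity function, since S K K x = K x (K x) = x. A sketch in Python, using the later standard combinator names K and S rather than Schönfinkel's own notation:

```python
# Constancy: K x y = x  (discards its second argument)
K = lambda x: lambda y: x
# Fusion: S f g x = (f x)(g x)
S = lambda f: lambda g: lambda x: f(x)(g(x))

# The identity combinator is definable from K and S alone:
I = S(K)(K)
assert I(42) == 42          # S K K x = K x (K x) = x
assert K('a')('b') == 'a'   # constancy in action
```

Note that every function here takes exactly one argument and returns a function, which is the currying device ("Schönfinkeling") that lets multi-argument functions be dispensed with.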
By 1925Abraham Fraenkel(1922) andThoralf Skolem(1922) had amended Zermelo's set theory of 1908. But von Neumann was not convinced that this axiomatization could not lead to the antinomies.[93]So he proposed his own theory, his 1925An axiomatization of set theory.[94]It explicitly contains a "contemporary", set-theoretic version of the notion of "function":
At the outset he begins with I-objects and II-objects, two objects A and B that are I-objects (first axiom), and two types of "operations" that assume ordering as a structural property[96] obtained of the resulting objects [x, y] and (x, y). The two "domains of objects" are called "arguments" (I-objects) and "functions" (II-objects); where they overlap are the "argument functions" (he calls them I-II objects). He introduces two "universal two-variable operations" – (i) the operation [x, y]: ". . . read 'the value of the function x for the argument y' . . . it itself is a type I object", and (ii) the operation (x, y): ". . . (read 'the ordered pair x, y') whose variables x and y must both be arguments and that itself produces an argument (x, y). Its most important property is that x1 = x2 and y1 = y2 follow from (x1, y1) = (x2, y2)". To clarify the function pair he notes that "Instead of f(x) we write [f, x] to indicate that f, just like x, is to be regarded as a variable in this procedure". To avoid the "antinomies of naive set theory, in Russell's first of all . . . we must forgo treating certain functions as arguments".[97] He adopts a notion from Zermelo to restrict these "certain functions".[98]
Suppes[99]observes that von Neumann's axiomatization was modified by Bernays "in order to remain nearer to the original Zermelo system . . . He introduced two membership relations: one between sets, and one between sets and classes". Then Gödel [1940][100]further modified the theory: "his primitive notions are those of set, class and membership (although membership alone is sufficient)".[101]This axiomatization is now known asvon Neumann–Bernays–Gödel set theory.
In 1939, the collaboration Nicolas Bourbaki, in addition to giving the well-known ordered pair definition of a function as a certain subset of the Cartesian product E × F, gave the following:
"LetEandFbe two sets, which may or may not be distinct. A relation between a variable elementxofEand a variable elementyofFis called a functional relation inyif, for allx∈E, there exists a uniquey∈Fwhich is in the given relation withx.
We give the name of function to the operation which in this way associates with every elementx∈Ethe elementy∈Fwhich is in the given relation withx, and the function is said to be determined by the given functional relation. Two equivalent functional relations determine the same function."
Both axiomatic and naive forms of Zermelo's set theory as modified by Fraenkel (1922) and Skolem (1922) define "function" as a relation, define a relation as a set of ordered pairs, and define an ordered pair as a set of two "dissymetric" sets.
While the reader ofSuppes (1960)Axiomatic Set TheoryorHalmos (1970)Naive Set Theoryobserves the use of function-symbolism in theaxiom of separation, e.g., φ(x) (in Suppes) and S(x) (in Halmos), they will see no mention of "proposition" or even "first order predicate calculus". In their place are "expressionsof the object language", "atomic formulae", "primitive formulae", and "atomic sentences".
Kleene (1952)defines the words as follows: "In word languages, a proposition is expressed by a sentence. Then a 'predicate' is expressed by an incomplete sentence or sentence skeleton containing an open place. For example, "___ is a man" expresses a predicate ... The predicate is apropositional function of one variable. Predicates are often called 'properties' ... The predicate calculus will treat of the logic of predicates in this general sense of 'predicate', i.e., as propositional function".[102]
In 1954, Bourbaki, on p. 76 in Chapitre II of Theorie des Ensembles (theory of sets), gave a definition of a function as a triplef= (F,A,B).[103]HereFis afunctional graph, meaning a set of pairs where no two pairs have the same first member. On p. 77 (op. cit.) Bourbaki states (literal translation): "Often we shall use, in the remainder of this Treatise, the wordfunctioninstead offunctional graph."
Suppes (1960)inAxiomatic Set Theory, formally defines arelation(p. 57) as a set of pairs, and afunction(p. 86) as a relation where no two pairs have the same first member.
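The Bourbaki/Suppes condition — a relation is a function exactly when no two of its pairs share a first member — is easy to state as code. A sketch in Python, treating a relation extensionally as a set of pairs:

```python
def is_functional(relation):
    """True iff no two pairs in the relation share a first member."""
    firsts = [x for (x, _) in relation]
    return len(firsts) == len(set(firsts))

F = {(1, 'a'), (2, 'b'), (3, 'a')}   # a functional graph: each x maps once
R = {(1, 'a'), (1, 'b')}             # a mere relation: 1 relates to two values

assert is_functional(F)
assert not is_functional(R)
```

This is precisely the "no two pairs have the same first member" clause in Suppes' definition, stated as a test rather than a definition.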
The reason for the disappearance of the words "propositional function" e.g., inSuppes (1960), andHalmos (1970), is explained byTarski (1946)together with further explanation of the terminology:
For his partTarskicalls the relational form of function a "FUNCTIONAL RELATION or simply a FUNCTION".[105]After a discussion of this "functional relation" he asserts that:
See more about "truth under an interpretation" atAlfred Tarski.
|
https://en.wikipedia.org/wiki/History_of_the_function_concept
|
Inmathematics,functionscan be identified according to the properties they have. These properties describe the functions' behaviour under certain conditions. A parabola is a specific type of function.
These properties concern thedomain, thecodomainand theimageof functions.
These properties concern how the function is affected byarithmeticoperations on its argument.
The following are special examples of ahomomorphismon abinary operation:
Relative tonegation:
Relative to a binary operation and anorder:
Relative to topology and order:
Relative to measure and topology:
In general, functions are often defined by specifying the name of a dependent variable, and a way of calculating what it should map to. For this purpose, the↦{\displaystyle \mapsto }symbol orChurch'sλ{\displaystyle \lambda }is often used. Also, sometimes mathematicians notate a function'sdomainandcodomainby writing e.g.f:A→B{\displaystyle f:A\rightarrow B}. These notions extend directly tolambda calculusandtype theory, respectively.
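These notational conventions carry over directly to programming languages. For instance, the mathematical x ↦ x² together with a declared domain and codomain f : ℝ → ℝ corresponds roughly to an anonymous function plus a type annotation. A Python sketch (the annotation is advisory in Python, not enforced at run time):

```python
from typing import Callable

# x ↦ x²  written as an anonymous (lambda) function,
# with f : float → float recorded as a type annotation.
f: Callable[[float], float] = lambda x: x * x

assert f(3.0) == 9.0
```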
These are functions that operate on functions or produce other functions; seeHigher order function.
Examples are:
Category theoryis a branch of mathematics that formalizes the notion of a special function via arrows ormorphisms. Acategoryis an algebraic object that (abstractly) consists of a class ofobjects, and for every pair of objects, a set ofmorphisms. A partial (equiv.dependently typed) binary operation calledcompositionis provided on morphisms, every object has one special morphism from it to itself called theidentityon that object, and composition and identities are required to obey certain relations.
In a so-calledconcrete category, the objects are associated with mathematical structures likesets,magmas,groups,rings,topological spaces,vector spaces,metric spaces,partial orders,differentiable manifolds,uniform spaces, etc., and morphisms between two objects are associated withstructure-preserving functionsbetween them. In the examples above, these would befunctions, magmahomomorphisms,group homomorphisms,ring homomorphisms,continuous functions,linear transformations(ormatrices),metric maps,monotonic functions,differentiable functions, anduniformly continuousfunctions, respectively.
As an algebraic theory, one of the advantages of category theory is to enable one to prove many general results with a minimum of assumptions. Many common notions from mathematics (e.g.surjective,injective,free object,basis, finiterepresentation,isomorphism) are definable purely in category theoretic terms (cf.monomorphism,epimorphism).
Category theory has been suggested as a foundation for mathematics on par withset theoryandtype theory(cf.topos).
Allegory theory[1]provides a generalization comparable to category theory forrelationsinstead of functions.
|
https://en.wikipedia.org/wiki/List_of_types_of_functions
|
Inmathematics, some functions or groups of functions are important enough to deserve their own names. This is a listing of articles which explain some of these functions in more detail. There is a large theory ofspecial functionswhich developed out ofstatisticsandmathematical physics. A modern, abstract point of view contrasts largefunction spaces, which are infinite-dimensional and within which most functions are 'anonymous', with special functions picked out by properties such assymmetry, or relationship toharmonic analysisandgroup representations.
See also: List of types of functions
Elementary functionsare functions built from basic operations (e.g. addition, exponentials, logarithms...)
Algebraic functionsare functions that can be expressed as the solution of a polynomial equation with integer coefficients.
Transcendental functionsare functions that are not algebraic.
|
https://en.wikipedia.org/wiki/List_of_functions
|
Curve fitting[1][2]is the process of constructing acurve, ormathematical function, that has the best fit to a series ofdata points,[3]possibly subject to constraints.[4][5]Curve fitting can involve eitherinterpolation,[6][7]where an exact fit to the data is required, orsmoothing,[8][9]in which a "smooth" function is constructed that approximately fits the data. A related topic isregression analysis,[10][11]which focuses more on questions ofstatistical inferencesuch as how much uncertainty is present in a curve that is fitted to data observed with random errors. Fitted curves can be used as an aid for data visualization,[12][13]to infer values of a function where no data are available,[14]and to summarize the relationships among two or more variables.[15]Extrapolationrefers to the use of a fitted curve beyond therangeof the observed data,[16]and is subject to adegree of uncertainty[17]since it may reflect the method used to construct the curve as much as it reflects the observed data.
For linear-algebraic analysis of data, "fitting" usually means trying to find the curve that minimizes the vertical (y-axis) displacement of a point from the curve (e.g.,ordinary least squares). However, for graphical and image applications, geometric fitting seeks to provide the best visual fit; which usually means trying to minimize theorthogonal distanceto the curve (e.g.,total least squares), or to otherwise include both axes of displacement of a point from the curve. Geometric fits are not popular because they usually require non-linear and/or iterative calculations, although they have the advantage of a more aesthetic and geometrically accurate result.[18][19][20]
Most commonly, one fits a function of the formy=f(x).
The first degreepolynomialequation
is a line withslopea. A line will connect any two points, so a first degree polynomial equation is an exact fit through any two points with distinct x coordinates.
If the order of the equation is increased to a second degree polynomial, the following results:
This will exactly fit a simple curve to three points.
If the order of the equation is increased to a third degree polynomial, the following is obtained:
This will exactly fit four points.
A more general statement would be to say it will exactly fit fourconstraints. Each constraint can be a point,angle, orcurvature(which is the reciprocal of the radius of anosculating circle). Angle and curvature constraints are most often added to the ends of a curve, and in such cases are calledend conditions. Identical end conditions are frequently used to ensure a smooth transition between polynomial curves contained within a singlespline. Higher-order constraints, such as "the change in the rate of curvature", could also be added. This, for example, would be useful in highwaycloverleafdesign to understand the rate of change of the forces applied to a car (seejerk), as it follows the cloverleaf, and to set reasonable speed limits, accordingly.
The first degree polynomial equation could also be an exact fit for a single point and an angle while the third degree polynomial equation could also be an exact fit for two points, an angle constraint, and a curvature constraint. Many other combinations of constraints are possible for these and for higher order polynomial equations.
If there are more thann+ 1 constraints (nbeing the degree of the polynomial), the polynomial curve can still be run through those constraints. An exact fit to all constraints is not certain (but might happen, for example, in the case of a first degree polynomial exactly fitting threecollinear points). In general, however, some method is then needed to evaluate each approximation. Theleast squaresmethod is one way to compare the deviations.
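Both regimes above — an exact fit of a degree-n polynomial to n + 1 points, and a least-squares approximation when there are more constraints than coefficients — can be demonstrated with NumPy's polyfit. A sketch with invented data points:

```python
import numpy as np

# Three points with distinct x: a degree-2 polynomial fits them exactly.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 5.0])
coeffs = np.polyfit(x, y, 2)          # 3 constraints, 3 coefficients
assert np.allclose(np.polyval(coeffs, x), y)   # exact fit

# Five noisy points, still degree 2: polyfit now returns the
# least-squares approximation rather than an exact fit.
x2 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y2 = x2**2 + np.array([0.1, -0.1, 0.05, -0.05, 0.1])
approx = np.polyfit(x2, y2, 2)
residuals = y2 - np.polyval(approx, x2)
print(residuals)  # small but generally nonzero
```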
There are several reasons given to get an approximate fit when it is possible to simply increase the degree of the polynomial equation and get an exact match:
The degree of the polynomial curve being higher than needed for an exact fit is undesirable for all the reasons listed previously for high order polynomials, but also leads to a case where there are an infinite number of solutions. For example, a first degree polynomial (a line) constrained by only a single point, instead of the usual two, would give an infinite number of solutions. This brings up the problem of how to compare and choose just one solution, which can be a problem for both software and humans. Because of this, it is usually best to choose as low a degree as possible for an exact match on all constraints, and perhaps an even lower degree, if an approximate fit is acceptable.
Other types of curves, such astrigonometric functions(such as sine and cosine), may also be used, in certain cases.
In spectroscopy, data may be fitted withGaussian,Lorentzian,Voigtand related functions.
In biology, ecology, demography, epidemiology, and many other disciplines, thegrowth of a population, the spread of infectious disease, etc. can be fitted using thelogistic function.
Inagriculturethe inverted logisticsigmoid function(S-curve) is used to describe the relation between crop yield and growth factors. The blue figure was made by a sigmoid regression of data measured in farm lands. It can be seen that initially, i.e. at low soil salinity, the crop yield reduces slowly at increasing soil salinity, while thereafter the decrease progresses faster.
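Fitting a logistic (sigmoid) curve of this kind is usually done by nonlinear least squares, for example with SciPy's curve_fit. A sketch with synthetic data; the parameterization and starting guesses are illustrative, not taken from the crop-yield study mentioned above:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, L, k, x0):
    """Logistic function L / (1 + exp(-k (x - x0)))."""
    return L / (1.0 + np.exp(-k * (x - x0)))

# Synthetic "observations" drawn from a known logistic curve.
x = np.linspace(0, 10, 50)
y = logistic(x, L=2.0, k=1.5, x0=5.0)

# Recover the parameters by nonlinear least squares.
popt, _ = curve_fit(logistic, x, y, p0=[1.0, 1.0, 4.0])
print(popt)  # approximately [2.0, 1.5, 5.0]
```

With real (noisy) data the recovered parameters would carry uncertainty, which curve_fit reports through the second return value (the covariance matrix).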
If a function of the formy=f(x){\displaystyle y=f(x)}cannot be postulated, one can still try to fit aplane curve.
Other types of curves, such asconic sections(circular, elliptical, parabolic, and hyperbolic arcs) ortrigonometric functions(such as sine and cosine), may also be used, in certain cases. For example, trajectories of objects under the influence of gravity follow a parabolic path, when air resistance is ignored. Hence, matching trajectory data points to a parabolic curve would make sense. Tides follow sinusoidal patterns, hence tidal data points should be matched to a sine wave, or the sum of two sine waves of different periods, if the effects of the Moon and Sun are both considered.
For aparametric curve, it is effective to fit each of its coordinates as a separate function ofarc length; assuming that data points can be ordered, thechord distancemay be used.[22]
Coope[23]approaches the problem of trying to find the best visual fit of circle to a set of 2D data points. The method elegantly transforms the ordinarily non-linear problem into a linear problem that can be solved without using iterative numerical methods, and is hence much faster than previous techniques.
The above technique is extended to general ellipses[24]by adding a non-linear step, resulting in a method that is fast, yet finds visually pleasing ellipses of arbitrary orientation and displacement.
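The key idea in the circle case is a change of variables: (x − a)² + (y − b)² = r² rearranges to 2ax + 2by + c = x² + y² with c = r² − a² − b², which is linear in the unknowns a, b, c, so an ordinary linear least-squares solve recovers centre and radius. The Python sketch below implements this linearization (a standard device in the spirit of Coope's method, not his exact formulation):

```python
import numpy as np

def fit_circle(x, y):
    """Fit a circle to 2D points via the linearized least-squares trick."""
    # Solve 2*a*x + 2*b*y + c = x^2 + y^2 for (a, b, c),
    # where c = r^2 - a^2 - b^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# Points sampled exactly on the circle centred at (1, 2) with radius 3.
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
x = 1 + 3 * np.cos(t)
y = 2 + 3 * np.sin(t)
a, b, r = fit_circle(x, y)
print(a, b, r)  # approximately 1, 2, 3
```

Because the system is linear, no iteration or starting guess is needed, which is the speed advantage noted above.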
Note that while this discussion was in terms of 2D curves, much of this logic also extends to 3D surfaces, each patch of which is defined by a net of curves in two parametric directions, typically calleduandv. A surface may be composed of one or more surface patches in each direction.
Many statistical packages such as R and numerical software such as gnuplot, GNU Scientific Library, Igor Pro, MLAB, Maple, MATLAB, TK Solver 6.0, Scilab, Mathematica, GNU Octave, and SciPy include commands for doing curve fitting in a variety of scenarios. There are also programs specifically written to do curve fitting; they can be found in the lists of statistical and numerical-analysis programs as well as in Category:Regression and curve fitting software.
|
https://en.wikipedia.org/wiki/Function_fitting
|
Inmathematics, animplicit equationis arelationof the formR(x1,…,xn)=0,{\displaystyle R(x_{1},\dots ,x_{n})=0,}whereRis afunctionof several variables (often apolynomial). For example, the implicit equation of theunit circleisx2+y2−1=0.{\displaystyle x^{2}+y^{2}-1=0.}
Animplicit functionis afunctionthat is defined by an implicit equation, that relates one of the variables, considered as thevalueof the function, with the others considered as thearguments.[1]: 204–206For example, the equationx2+y2−1=0{\displaystyle x^{2}+y^{2}-1=0}of theunit circledefinesyas an implicit function ofxif−1 ≤x≤ 1, andyis restricted to nonnegative values.
Theimplicit function theoremprovides conditions under which some kinds of implicit equations define implicit functions, namely those that are obtained by equating to zeromultivariable functionsthat arecontinuously differentiable.
A common type of implicit function is aninverse function. Not all functions have a unique inverse function. Ifgis a function ofxthat has a unique inverse, then the inverse function ofg, calledg−1, is the unique function giving asolutionof the equation
forxin terms ofy. This solution can then be written as
Definingg−1as the inverse ofgis an implicit definition. For some functionsg,g−1(y)can be written out explicitly as aclosed-form expression— for instance, ifg(x) = 2x− 1, theng−1(y) =1/2(y+ 1). However, this is often not possible, or only by introducing a new notation (as in theproduct logexample below).
Intuitively, an inverse function is obtained fromgby interchanging the roles of the dependent and independent variables.
Example:Theproduct logis an implicit function giving the solution forxof the equationy−xex= 0.
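The product log example can be checked numerically: SciPy exposes this function as scipy.special.lambertw, where w = W(y) satisfies w·e^w = y, so x = W(y) solves y − x·eˣ = 0. A quick sketch:

```python
import numpy as np
from scipy.special import lambertw

y = 1.0
x = lambertw(y).real          # principal branch of the product log
# x solves y - x * e^x = 0:
assert abs(y - x * np.exp(x)) < 1e-12
print(x)  # the omega constant, about 0.5671
```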
Analgebraic functionis a function that satisfies a polynomial equation whose coefficients are themselves polynomials. For example, an algebraic function in one variablexgives a solution foryof an equation
where the coefficientsai(x)are polynomial functions ofx. This algebraic function can be written as the right side of the solution equationy=f(x). Written like this,fis amulti-valuedimplicit function.
Algebraic functions play an important role in mathematical analysis and algebraic geometry. A simple example of an algebraic function is given by the left side of the unit circle equation, x² + y² − 1 = 0.
Solving for y gives an explicit solution, y = ±√(1 − x²).
But even without specifying this explicit solution, it is possible to refer to the implicit solution of the unit circle equation asy=f(x), wherefis the multi-valued implicit function.
While explicit solutions can be found for equations that arequadratic,cubic, andquarticiny, the same is not in general true forquinticand higher degree equations, such as
Nevertheless, one can still refer to the implicit solutiony=f(x)involving the multi-valued implicit functionf.
Not every equationR(x,y) = 0implies a graph of a single-valued function, the circle equation being one prominent example. Another example is an implicit function given byx−C(y) = 0whereCis acubic polynomialhaving a "hump" in its graph. Thus, for an implicit function to be atrue(single-valued) function it might be necessary to use just part of the graph. An implicit function can sometimes be successfully defined as a true function only after "zooming in" on some part of thex-axis and "cutting away" some unwanted function branches. Then an equation expressingyas an implicit function of the other variables can be written.
The defining equationR(x,y) = 0can also have other pathologies. For example, the equationx= 0does not imply a functionf(x)giving solutions foryat all; it is a vertical line. In order to avoid a problem like this, various constraints are frequently imposed on the allowable sorts of equations or on thedomain. Theimplicit function theoremprovides a uniform way of handling these sorts of pathologies.
Incalculus, a method calledimplicit differentiationmakes use of thechain ruleto differentiate implicitly defined functions.
To differentiate an implicit functiony(x), defined by an equationR(x,y) = 0, it is not generally possible to solve it explicitly foryand then differentiate. Instead, one cantotally differentiateR(x,y) = 0with respect toxandyand then solve the resulting linear equation fordy/dxto explicitly get the derivative in terms ofxandy. Even when it is possible to explicitly solve the original equation, the formula resulting from total differentiation is, in general, much simpler and easier to use.
Consider
This equation is easy to solve fory, giving
where the right side is the explicit form of the functiony(x). Differentiation then givesdy/dx= −1.
Alternatively, one can totally differentiate the original equation:
Solving fordy/dxgives
the same answer as obtained previously.
An example of an implicit function for which implicit differentiation is easier than using explicit differentiation is the functiony(x)defined by the equation
To differentiate this explicitly with respect tox, one has first to get
and then differentiate this function. This creates two derivatives: one fory≥ 0and another fory< 0.
It is substantially easier to implicitly differentiate the original equation:
giving
Often, it is difficult or impossible to solve explicitly fory, and implicit differentiation is the only feasible method of differentiation. An example is the equation
It is impossible toalgebraically expressyexplicitly as a function ofx, and therefore one cannot finddy/dxby explicit differentiation. Using the implicit method,dy/dxcan be obtained by differentiating the equation to obtain
wheredx/dx= 1. Factoring outdy/dxshows that
which yields the result
which is defined for
If R(x, y) = 0, the derivative of the implicit function y(x) is given by[2]: §11.5 dy/dx = −Rx/Ry,
whereRxandRyindicate thepartial derivativesofRwith respect toxandy.
The above formula comes from using thegeneralized chain ruleto obtain thetotal derivative— with respect tox— of both sides ofR(x,y) = 0:
hence
which, when solved fordy/dx, gives the expression above.
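The formula dy/dx = −Rx/Ry can be verified symbolically, for example with SymPy applied to the unit circle (a sketch; any differentiable R with Ry ≠ 0 would do):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)       # treat y as an unknown function of x
R = x**2 + y**2 - 1           # the unit circle, R(x, y) = 0

# Totally differentiate R(x, y(x)) = 0 and solve for dy/dx:
dydx = sp.solve(sp.diff(R, x), sp.diff(y, x))[0]
print(dydx)                   # -x/y(x)

# This agrees with -Rx/Ry computed from the partial derivatives:
X, Y = sp.symbols('X Y')
Rxy = X**2 + Y**2 - 1
formula = -sp.diff(Rxy, X) / sp.diff(Rxy, Y)
assert sp.simplify(dydx.subs(y, Y).subs(x, X) - formula) == 0
```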
LetR(x,y)be adifferentiable functionof two variables, and(a,b)be a pair ofreal numberssuch thatR(a,b) = 0. If∂R/∂y≠ 0, thenR(x,y) = 0defines an implicit function that is differentiable in some small enoughneighbourhoodof(a,b); in other words, there is a differentiable functionfthat is defined and differentiable in some neighbourhood ofa, such thatR(x,f(x)) = 0forxin this neighbourhood.
The condition∂R/∂y≠ 0means that(a,b)is aregular pointof theimplicit curveof implicit equationR(x,y) = 0where thetangentis not vertical.
In a less technical language, implicit functions exist and can be differentiated, if the curve has a non-vertical tangent.[2]: §11.5
Consider arelationof the formR(x1, …,xn) = 0, whereRis a multivariable polynomial. The set of the values of the variables that satisfy this relation is called animplicit curveifn= 2and animplicit surfaceifn= 3. The implicit equations are the basis ofalgebraic geometry, whose basic subjects of study are the simultaneous solutions of several implicit equations whose left-hand sides are polynomials. These sets of simultaneous solutions are calledaffine algebraic sets.
The solutions of differential equations generally appear expressed by an implicit function.[3]
Ineconomics, when the level setR(x,y) = 0is anindifference curvefor the quantitiesxandyconsumed of two goods, the absolute value of the implicit derivativedy/dxis interpreted as themarginal rate of substitutionof the two goods: how much more ofyone must receive in order to be indifferent to a loss of one unit ofx.
Similarly, sometimes the level setR(L,K)is anisoquantshowing various combinations of utilized quantitiesLof labor andKofphysical capitaleach of which would result in the production of the same given quantity of output of some good. In this case the absolute value of the implicit derivativedK/dLis interpreted as themarginal rate of technical substitutionbetween the two factors of production: how much more capital the firm must use to produce the same amount of output with one less unit of labor.
Often ineconomic theory, some function such as autility functionor aprofitfunction is to be maximized with respect to a choice vectorxeven though the objective function has not been restricted to any specific functional form. Theimplicit function theoremguarantees that thefirst-order conditionsof the optimization define an implicit function for each element of the optimal vectorx*of the choice vectorx. When profit is being maximized, typically the resulting implicit functions are thelabor demandfunction and thesupply functionsof various goods. When utility is being maximized, typically the resulting implicit functions are thelabor supplyfunction and thedemand functionsfor various goods.
Moreover, the influence of the problem'sparametersonx*— the partial derivatives of the implicit function — can be expressed astotal derivativesof the system of first-order conditions found usingtotal differentiation.
|
https://en.wikipedia.org/wiki/Implicit_function
|
Intraditional logic, acontradictionoccurs when apropositionconflicts either with itself or establishedfact. It is often used as a tool to detectdisingenuousbeliefs andbias. Illustrating a general tendency in applied logic,Aristotle'slaw of noncontradictionstates that "It is impossible that the same thing can at the same time both belong and not belong to the same object and in the same respect."[1]
In modernformal logicandtype theory, the term is mainly used instead for asingleproposition, often denoted by thefalsumsymbol⊥{\displaystyle \bot }; a proposition is a contradiction iffalsecan be derived from it, using the rules of the logic. It is a proposition that is unconditionally false (i.e., a self-contradictory proposition).[2][3]This can be generalized to a collection of propositions, which is then said to "contain" a contradiction.
By creation of aparadox,Plato'sEuthydemusdialogue demonstrates the need for the notion ofcontradiction. In the ensuing dialogue,Dionysodorusdenies the existence of "contradiction", all the while thatSocratesis contradicting him:
... I in my astonishment said: What do you mean Dionysodorus? I have often heard, and have been amazed to hear, this thesis of yours, which is maintained and employed by the disciples of Protagoras and others before them, and which to me appears to be quite wonderful, and suicidal as well as destructive, and I think that I am most likely to hear the truth about it from you. The dictum is that there is no such thing as a falsehood; a man must either say what is true or say nothing. Is not that your position?
Indeed, Dionysodorus agrees that "there is no such thing as false opinion ... there is no such thing as ignorance", and demands of Socrates to "Refute me." Socrates responds "But how can I refute you, if, as you say, to tell a falsehood is impossible?".[4]
In classical logic, particularly inpropositionalandfirst-order logic, a propositionφ{\displaystyle \varphi }is a contradictionif and only ifφ⊢⊥{\displaystyle \varphi \vdash \bot }. Since for contradictoryφ{\displaystyle \varphi }it is true that⊢φ→ψ{\displaystyle \vdash \varphi \rightarrow \psi }for allψ{\displaystyle \psi }(because⊥⊢ψ{\displaystyle \bot \vdash \psi }), one may prove any proposition from a set of axioms which contains contradictions. This is called the "principle of explosion", or "ex falso quodlibet" ("from falsity, anything follows").[5]
In a complete logic, a formula is contradictory if and only if it is unsatisfiable.
For a set of consistent premises Σ and a proposition φ, it is true in classical logic that Σ ⊢ φ (i.e., Σ proves φ) if and only if Σ ∪ {¬φ} ⊢ ⊥ (i.e., Σ and ¬φ lead to a contradiction). Therefore, a proof that Σ ∪ {¬φ} ⊢ ⊥ also proves that φ is true under the premises Σ. The use of this fact forms the basis of a proof technique called proof by contradiction, which mathematicians use extensively to establish the validity of a wide range of theorems. This applies only in a logic where the law of excluded middle A ∨ ¬A is accepted as an axiom.
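For propositional logic, this equivalence can be verified semantically on a small example. The sketch below (illustrative names, premises chosen arbitrarily) checks that Σ proves φ exactly when Σ together with ¬φ has no satisfying assignment:

```python
from itertools import product

def models(premises, n_vars):
    """All truth assignments satisfying every premise."""
    return [v for v in product([False, True], repeat=n_vars)
            if all(p(v) for p in premises)]

def proves(premises, phi, n_vars):
    return all(phi(v) for v in models(premises, n_vars))

def inconsistent(premises, n_vars):
    return not models(premises, n_vars)   # no satisfying assignment at all

# Sigma = {A, A -> B}; phi = B  (modus ponens, viewed semantically)
sigma = [lambda v: v[0], lambda v: (not v[0]) or v[1]]
phi = lambda v: v[1]
not_phi = lambda v: not v[1]

# Sigma proves phi  <=>  Sigma + {not phi} is inconsistent
print(proves(sigma, phi, 2))                 # True
print(inconsistent(sigma + [not_phi], 2))    # True
```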
Using minimal logic, a logic with similar axioms to classical logic but without ex falso quodlibet and proof by contradiction, we can investigate the axiomatic strength and properties of various rules that treat contradiction by considering theorems of classical logic that are not theorems of minimal logic.[6] Each of these extensions leads to an intermediate logic.
In mathematics, the symbol used to represent a contradiction within a proof varies.[7] Some symbols that may be used to represent a contradiction include ↯, Opq, ⇒⇐, ⊥, ↮, and ※; in any symbolism, a contradiction may be substituted for the truth value "false", as symbolized, for instance, by "0" (as is common in Boolean algebra). It is not uncommon to see Q.E.D., or some of its variants, immediately after a contradiction symbol. In fact, this often occurs in a proof by contradiction to indicate that the original assumption was proved false, and hence that its negation must be true.
In general, a consistency proof requires the following two things:
But by whatever method one goes about it, all consistency proofs would seem to necessitate the primitive notion of contradiction. Moreover, it seems as if this notion would simultaneously have to be "outside" the formal system in the definition of tautology.
When Emil Post, in his 1921 "Introduction to a General Theory of Elementary Propositions", extended his proof of the consistency of the propositional calculus (i.e. the logic) beyond that of Principia Mathematica (PM), he observed that with respect to a generalized set of postulates (i.e. axioms), he would no longer be able to automatically invoke the notion of "contradiction"; such a notion might not be contained in the postulates:
The prime requisite of a set of postulates is that it be consistent. Since the ordinary notion of consistency involves that of contradiction, which again involves negation, and since this function does not appear in general as a primitive in [the generalized set of postulates] a new definition must be given.[8]
Post's solution to the problem is described in the demonstration "An Example of a Successful Absolute Proof of Consistency", offered by Ernest Nagel and James R. Newman in their 1958 Gödel's Proof. They too observed a problem with respect to the notion of "contradiction" with its usual "truth values" of "truth" and "falsity". They observed that:
The property of being a tautology has been defined in notions of truth and falsity. Yet these notions obviously involve a reference to something outside the formula calculus. Therefore, the procedure mentioned in the text in effect offers an interpretation of the calculus, by supplying a model for the system. This being so, the authors have not done what they promised, namely, "to define a property of formulas in terms of purely structural features of the formulas themselves". [Indeed] ... proofs of consistency which are based on models, and which argue from the truth of axioms to their consistency, merely shift the problem.[9]
Given some "primitive formulas" such as PM's primitives S1 ∨ S2 [inclusive OR] and ~S (negation), one is forced to define the axioms in terms of these primitive notions. In a thorough manner, Post demonstrates in PM, and defines (as do Nagel and Newman, see below), that the property of tautologous – as yet to be defined – is "inherited": if one begins with a set of tautologous axioms (postulates) and a deduction system that contains substitution and modus ponens, then a consistent system will yield only tautologous formulas.
On the topic of the definition of tautologous, Nagel and Newman create two mutually exclusive and exhaustive classes K1 and K2, into which fall (the outcome of) the axioms when their variables (e.g. S1 and S2) are assigned from these classes. This also applies to the primitive formulas. For example: "A formula having the form S1 ∨ S2 is placed into class K2 if both S1 and S2 are in K2; otherwise it is placed in K1", and "A formula having the form ~S is placed in K2 if S is in K1; otherwise it is placed in K1".[10]
Hence Nagel and Newman can now define the notion of tautologous: "a formula is a tautology if and only if it falls in the class K1, no matter in which of the two classes its elements are placed".[11] This way, the property of "being tautologous" is described without reference to a model or an interpretation.
For example, given a formula such as ~S1 ∨ S2 and an assignment of K1 to S1 and K2 to S2, one can evaluate the formula and place its outcome in one or the other of the classes. The assignment of K1 to S1 places ~S1 in K2, and now we can see that our assignment causes the formula to fall into class K2. Thus by definition our formula is not a tautology.
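The Nagel–Newman classification rules are purely structural, so they can be written out directly. The following sketch (names are illustrative) encodes the two placement rules quoted above and evaluates ~S1 ∨ S2 under the assignment from the text:

```python
from itertools import product

# Classes are represented as the strings "K1" and "K2".
def v_or(a, b):
    # "A formula S1 V S2 is placed in K2 if both S1 and S2 are in K2;
    #  otherwise it is placed in K1."
    return "K2" if (a, b) == ("K2", "K2") else "K1"

def v_not(s):
    # "~S is placed in K2 if S is in K1; otherwise it is placed in K1."
    return "K2" if s == "K1" else "K1"

def is_tautology(formula, n_vars):
    """Tautologous: the formula falls in K1 under every class assignment."""
    return all(formula(*assign) == "K1"
               for assign in product(["K1", "K2"], repeat=n_vars))

f = lambda s1, s2: v_or(v_not(s1), s2)      # ~S1 V S2

print(f("K1", "K2"))        # K2 -- the assignment discussed in the text
print(is_tautology(f, 2))   # False: not a tautology
print(is_tautology(lambda s: v_or(s, v_not(s)), 1))  # True: S V ~S
```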
Post observed that, if the system were inconsistent, a deduction in it (that is, the last formula in a sequence of formulas derived from the tautologies) could ultimately yield S itself. As an assignment to variable S can come from either class K1 or K2, the deduction violates the inheritance characteristic of tautology (i.e., the derivation must yield an evaluation of a formula that will fall into class K1). From this, Post was able to derive the following definition of inconsistency without the use of the notion of contradiction:
Definition. A system will be said to be inconsistent if it yields the assertion of the unmodified variable p [S in the Newman and Nagel examples].
In other words, the notion of "contradiction" can be dispensed with when constructing a proof of consistency; what replaces it is the notion of "mutually exclusive and exhaustive" classes. An axiomatic system need not include the notion of "contradiction".[12]: 177
Adherents of the epistemological theory of coherentism typically claim that, as a necessary condition of the justification of a belief, that belief must form a part of a logically non-contradictory system of beliefs. Some dialetheists, including Graham Priest, have argued that coherence may not require consistency.[13]
A pragmatic contradiction occurs when the very statement of an argument contradicts the claims it purports to make. An inconsistency arises, in this case, because the act of utterance, rather than the content of what is said, undermines its conclusion.[14]
In dialectical materialism: Contradiction, as derived from Hegelianism, usually refers to an opposition inherently existing within one realm, one unified force or object. This contradiction, as opposed to metaphysical thinking, is not an objectively impossible thing, because these contradicting forces exist in objective reality, not cancelling each other out, but actually defining each other's existence. According to Marxist theory, such a contradiction can be found, for example, in the fact that:
Hegelian and Marxist theories stipulate that the dialectic nature of history will lead to the sublation, or synthesis, of its contradictions. Marx therefore postulated that history would logically make capitalism evolve into a socialist society where the means of production would equally serve the working and producing class of society, thus resolving the prior contradiction between (a) and (b).[15]
Colloquial usage can label actions or statements as contradicting each other when they are due (or perceived as due) to presuppositions which are contradictory in the logical sense.
Proof by contradiction is used in mathematics to construct proofs.
https://en.wikipedia.org/wiki/Contradiction
"The exception that proves the rule" is a saying whose meaning is contested. Henry Watson Fowler's Modern English Usage identifies five ways in which the phrase has been used,[1] and each use makes some sort of reference to the role that a particular case or event takes in relation to a more general rule.
Two original meanings of the phrase are usually cited. The first, preferred by Fowler, is that the presence of an exception applying to a specific case establishes ("proves") that a general rule exists. A more explicit phrasing might be "the exception that proves the existence of the rule."[1] Most contemporary uses of the phrase emerge from this origin,[2] although often in a way which is closer to the idea that all rules have their exceptions.[1] The alternative origin given is that the word "prove" is used in the archaic sense of "test",[3] a reading advocated, for example, by a 1918 Detroit News style guide:
The exception proves the rule is a phrase that arises from ignorance, though common to good writers. The original word was preuves, which did not mean proves but tests.[4]
In this sense, the phrase does not mean that an exception demonstrates a rule to be true or to exist, but that it tests the rule, thereby proving its value. There is little evidence of the phrase being used in this second way.[1][2][5]
Fowler's typology of uses stretches from what he sees as the "original, simple use" through to the use which is both the "most objectionable" and "unfortunately the commonest".[1] Fowler, following a prescriptive approach,[6] understood this typology as moving from a more correct to a less correct use.[1] However, under a more descriptive approach, such distinctions in terms of accuracy would be less useful.[6]
This meaning of the phrase, which for Fowler is the original and clearest meaning,[1] is thought to have emerged from the legal phrase "exceptio probat regulam in casibus non exceptis" ("the exception proves the rule in cases not excepted"),[7] an argument attributed to Cicero in his defence of Lucius Cornelius Balbus.[8][9] This argument states that if an exception exists or has to be stated, then this exception proves that there must be some rule to which the case is an exception.[8] The second part of Cicero's phrase, "in casibus non exceptis" ("in cases not excepted"), is almost always missing from modern uses of the statement that "the exception proves the rule".
Consider the following example of the original meaning:
Special leave is given for men to be out of barracks tonight till 11.00 p.m.; "The exception proves the rule" means that this special leave implies a rule requiring men, except when an exception is made, to be in earlier. The value of this in interpreting statutes is plain.
In other words, under this meaning of the phrase, the exception proves that the rule exists on other occasions.[2] This meaning of the phrase, outside of a legal setting, can describe inferences taken from signs, statements or other information. For example, the inference in a shop from a sign saying "pre-paid delivery required for refrigerators" would be that pre-paid delivery is not required for other objects.[2] In this case, the exception of refrigerators proves the existence of a rule that pre-paid delivery is not required.
The English phrase was used this way in early citations from the seventeenth and eighteenth centuries.[10][11]
"The exception that proves the rule" is often used to describe a case (the exception) which serves to highlight or confirm (prove) a rule to which the exception itself is apparently contrary. Fowler describes two versions of this use, one being the "loose rhetorical sense" and the other "serious nonsense";[1] other writers connect these uses together insofar as they represent what Holton calls a "drift" from the legal meaning.[5] In its more rhetorical sense, this variant of the phrase describes an exception which reveals a tendency that might have otherwise gone unnoticed.[1] In other words, the presence of the exception serves to remind and perhaps reveal to us the rule that otherwise applies; the word 'proof' here is thus not to be taken literally.
In many uses of the phrase, however, the existence of an exception is taken to more definitively 'prove' a rule to which the exception does not fit. Under this sense it is "the unusualness of the exception"[2] which proves how prevalent the tendency or rule of thumb to which it runs contrary is. For example: a rural village is "always" quiet. A local farmer rents his fields to a rock festival, which disturbs the quiet. In this example, saying "the exception proves the rule" is in a literal sense incorrect, as the exception shows (first) that the belief is not a rule and (second) that there is no 'proof' involved. However, the phrase draws attention to the rarity of the exception, and in so doing establishes the general accuracy of the rule. In what Fowler describes as the "most objectionable" variation of the phrase,[1] this sort of use comes closest to meaning "there is an exception to every rule", or even that the presence of an exception makes a rule more true; these uses Fowler attributes to misunderstanding.[1]
The Oxford English Dictionary includes this meaning in its entry for the word exception, citing the example from Benjamin Jowett's 1855 book Essays, in which he writes: "We may except one solitary instance (an exception which eminently proves the rule)." Here, the existence of an exception seems to strengthen the belief of the prevalence of the rule.[7]
Under this version of the phrase, the word 'proof' is to be understood in its archaic form to mean 'test' (this use can be seen in the phrase the proof of the pudding is in the eating[12]). Fowler's example is of a hypothetical critic, Jones, who never writes a favourable review. So it is surprising when we receive an exception: a favourable review by Jones of a novel by an unknown author. Then it is discovered that the novel is his own, written under a pseudonym. The exception tested ('proved') the rule and found that it needed to be understood a little more precisely, namely, that Jones will never write a favourable review except of his own work.[1] The previous evaluation of Jones's ill-nature toward others is re-affirmed by discovering the manner in which the exception falls outside the rule.
Holton argues that this origin involves a "once-heard etymology" which "makes no sense of the way in which the expression is used."[5]Others agree that most uses of the term do not correspond to this format.[2]Nonetheless, it does for Fowler pass the test of making grammatical sense[1]and it is also referenced as a possible meaning within the Oxford English Dictionary.[7]
In any case, the phrase can be interpreted as a jocular expression of the correct insight that a single counterexample, while sufficient to disprove a strictly logical statement, does not disprove a statistical statement, which may correctly express a general trend despite the existence of a few outliers.
Fowler describes this use as "jocular nonsense". He presents the exchange: 'If there is one virtue I can claim, it is punctuality.' 'Were you in time for breakfast this morning?' 'Well, well, the exception that proves the rule.'[1] In this case, the speakers are aware that the phrase does not correctly apply, but are appealing to it ironically.
https://en.wikipedia.org/wiki/Exception_that_proves_the_rule
In mathematics, a minimal counterexample is the smallest example which falsifies a claim, and a proof by minimal counterexample is a method of proof which combines the use of a minimal counterexample with the methods of proof by induction and proof by contradiction.[1][2] More specifically, in trying to prove a proposition P, one first assumes by contradiction that it is false, and that therefore there must be at least one counterexample. With respect to some idea of size (which may need to be chosen carefully), one then concludes that there is such a counterexample C that is minimal. In regard to the argument, C is generally something quite hypothetical (since the truth of P excludes the possibility of C), but it may be possible to argue that if C existed, then it would have some definite properties which, after applying some reasoning similar to that in an inductive proof, would lead to a contradiction, thereby showing that the proposition P is indeed true.[3]
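The search idea behind the method can be illustrated on finite sets of candidates. The sketch below (names and example claims are my own, purely illustrative) returns the minimal counterexample of a claim if one exists; for the true claim "every n ≥ 2 has a prime factor", the search comes up empty, while a false claim yields its smallest falsifying case:

```python
def minimal_counterexample(claim, candidates):
    """Return the smallest element falsifying `claim`, or None if none exists."""
    for n in sorted(candidates):
        if not claim(n):
            return n
    return None

# True claim: every n >= 2 has a prime factor (no counterexample in range).
has_prime_factor = lambda n: any(n % p == 0 and all(p % q for q in range(2, p))
                                 for p in range(2, n + 1))
print(minimal_counterexample(has_prime_factor, range(2, 200)))  # None

# False claim: "every odd number > 1 is prime"; minimal counterexample is 9.
print(minimal_counterexample(lambda n: all(n % d for d in range(2, n)),
                             range(3, 200, 2)))  # 9
```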
If the form of the contradiction is that we can derive a further counterexample D that is smaller than C in the sense of the working hypothesis of minimality, then this technique is traditionally called proof by infinite descent, in which case there may be multiple and more complex ways to structure the argument of the proof.
The assumption that if there is a counterexample, there is a minimal counterexample, is based on a well-ordering of some kind. The usual ordering on the natural numbers is clearly possible, by the most usual formulation of mathematical induction; but the scope of the method can include well-ordered induction of any kind.
The minimal counterexample method has been much used in the classification of finite simple groups. The Feit–Thompson theorem, that finite simple groups that are not cyclic groups have even order, was proved based on the hypothesis of some, and therefore some minimal, simple group G of odd order. Every proper subgroup of G can be assumed to be a solvable group, meaning that much theory of such subgroups could be applied.[4]
Euclid's proof of the fundamental theorem of arithmetic is a simple proof which uses a minimal counterexample.[5][6]
Courant and Robbins used the term minimal criminal for a minimal counterexample in the context of the four color theorem.[7]
https://en.wikipedia.org/wiki/Minimal_counterexample
In mathematics, an incidence structure is an abstract system consisting of two types of objects and a single relationship between these types of objects. Consider the points and lines of the Euclidean plane as the two types of objects and ignore all the properties of this geometry except for the relation of which points are incident on which lines, for all points and lines. What is left is the incidence structure of the Euclidean plane.
Incidence structures are most often considered in the geometrical context where they are abstracted from, and hence generalize, planes (such as affine, projective, and Möbius planes), but the concept is very broad and not limited to geometric settings. Even in a geometric setting, incidence structures are not limited to just points and lines; higher-dimensional objects (planes, solids, n-spaces, conics, etc.) can be used. The study of finite structures is sometimes called finite geometry.[1]
An incidence structure is a triple (P, L, I) where P is a set whose elements are called points, L is a distinct set whose elements are called lines, and I ⊆ P × L is the incidence relation. The elements of I are called flags. If (p, l) is in I then one may say that point p "lies on" line l or that the line l "passes through" point p. A more "symmetric" terminology, to reflect the symmetric nature of this relation, is that "p is incident with l" or that "l is incident with p", and uses the notation p I l synonymously with (p, l) ∈ I.[2]
In some common situations L may be a set of subsets of P, in which case incidence I will be containment (p I l if and only if p is a member of l). Incidence structures of this type are called set-theoretic.[3] This is not always the case; for example, if P is a set of vectors and L a set of square matrices, we may define I = {(v, M) : v is an eigenvector of matrix M}. This example also shows that while the geometric language of points and lines is used, the object types need not be these geometric objects.
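A tiny finite instance of the eigenvector example can be built concretely. In the sketch below (the specific vectors and matrices are my own illustrative choices), "points" are nonzero 2-vectors, "lines" are 2×2 matrices, and a flag (v, M) is recorded whenever M·v is a scalar multiple of v:

```python
# "Points" are vectors, "lines" are 2x2 matrices (row-major tuples),
# and (v, M) is a flag exactly when v is an eigenvector of M.
def matvec(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

def is_eigenvector(v, M):
    w = matvec(M, v)
    # w is a scalar multiple of v (v assumed nonzero) iff the 2x2
    # "cross product" w[0]*v[1] - w[1]*v[0] vanishes.
    return w[0]*v[1] == w[1]*v[0]

P = [(1, 0), (0, 1), (1, 1)]                    # points (vectors)
L = [((2, 0), (0, 3)), ((1, 1), (0, 1))]        # lines (matrices)

I = {(v, M) for v in P for M in L if is_eigenvector(v, M)}
print(((1, 0), L[0]) in I)   # True: (1,0) is an eigenvector of diag(2,3)
print(((1, 1), L[0]) in I)   # False: diag(2,3) sends (1,1) to (2,3)
```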
An incidence structure is uniform if each line is incident with the same number of points. Each of these examples, except the second, is uniform with three points per line.
Any graph (which need not be simple; loops and multiple edges are allowed) is a uniform incidence structure with two points per line. For these examples, the vertices of the graph form the point set, the edges of the graph form the line set, and incidence means that a vertex is an endpoint of an edge.
Incidence structures are seldom studied in their full generality; it is typical to study incidence structures that satisfy some additional axioms. For instance, a partial linear space is an incidence structure that satisfies:
If the first axiom above is replaced by the stronger:
the incidence structure is called a linear space.[4][5]
A more specialized example is a k-net. This is an incidence structure in which the lines fall into k parallel classes, so that two lines in the same parallel class have no common points, but two lines in different classes have exactly one common point, and each point belongs to exactly one line from each parallel class. An example of a k-net is the set of points of an affine plane together with k parallel classes of affine lines.
If we interchange the role of "points" and "lines" in C = (P, L, I) we obtain the dual structure C* = (L, P, I*), where I* is the converse relation of I. It follows immediately from the definition that C** = C.
This is an abstract version of projective duality.[2]
A structure C that is isomorphic to its dual C* is called self-dual. The Fano plane above is a self-dual incidence structure.
The concept of an incidence structure is very simple and has arisen in several disciplines, each introducing its own vocabulary and specifying the types of questions that are typically asked about these structures. Incidence structures use a geometric terminology, but in graph-theoretic terms they are called hypergraphs and in design-theoretic terms they are called block designs. They are also known as a set system or family of sets in a general context.
Each hypergraph or set system can be regarded as an incidence structure in which the universal set plays the role of "points", the corresponding family of subsets plays the role of "lines", and the incidence relation is set membership "∈". Conversely, every incidence structure can be viewed as a hypergraph by identifying the lines with the sets of points that are incident with them.
A (general) block design is a set X together with a family F of subsets of X (repeated subsets are allowed). Normally a block design is required to satisfy numerical regularity conditions. As an incidence structure, X is the set of points and F is the set of lines, usually called blocks in this context (repeated blocks must have distinct names, so F is actually a set and not a multiset). If all the subsets in F have the same size, the block design is called uniform. If each element of X appears in the same number of subsets, the block design is said to be regular. The dual of a uniform design is a regular design and vice versa.
Consider the block design/hypergraph given by:

P = {1, 2, 3, 4, 5, 6, 7},
L = {{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}}.
This incidence structure is called the Fano plane. As a block design it is both uniform and regular.
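Both properties can be checked directly from the listing. A minimal sketch: uniformity means every block has the same size, and regularity means every point lies in the same number of blocks.

```python
# Verify that the Fano plane block design is uniform and regular.
P = {1, 2, 3, 4, 5, 6, 7}
L = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
     {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

line_sizes = {len(l) for l in L}                 # sizes of all blocks
point_degrees = {sum(p in l for l in L) for p in P}  # lines through each point

print(line_sizes)     # {3} -- uniform: three points per line
print(point_degrees)  # {3} -- regular: three lines through each point
```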
In the labeling given, the lines are precisely the subsets of the points that consist of three points whose labels add up to zero using nim addition. Alternatively, each number, when written in binary, can be identified with a non-zero vector of length three over the binary field. Three vectors that generate a subspace form a line; in this case, that is equivalent to their vector sum being the zero vector.
Incidence structures may be represented in many ways. If the sets P and L are finite, these representations can compactly encode all the relevant information concerning the structure.
The incidence matrix of a (finite) incidence structure is a (0,1) matrix that has its rows indexed by the points {pi} and columns indexed by the lines {lj}, where the ij-th entry is 1 if pi I lj and 0 otherwise.[a] An incidence matrix is not uniquely determined since it depends upon the arbitrary ordering of the points and the lines.[6]
The non-uniform incidence structure pictured above (example number 2) is given by:

P = {A, B, C, D, E, P}
L = {l = {C, P, E}, m = {P}, n = {P, D}, o = {P, A}, p = {A, B}, q = {P, B}}
An incidence matrix for this structure is:

0 0 0 1 1 0
0 0 0 0 1 1
1 0 0 0 0 0
0 0 1 0 0 0
1 0 0 0 0 0
1 1 1 1 0 1

which corresponds to the incidence table:
If an incidence structure C has an incidence matrix M, then the dual structure C* has the transpose matrix Mᵀ as its incidence matrix (and is defined by that matrix).
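This transpose relationship can be demonstrated on the non-uniform structure of example 2 above. A minimal sketch (relying on Python dicts preserving insertion order for the column ordering l, m, n, o, p, q):

```python
# Build the incidence matrix of example 2 and take its transpose,
# which is the incidence matrix of the dual structure.
points = ["A", "B", "C", "D", "E", "P"]
lines = {"l": {"C", "P", "E"}, "m": {"P"}, "n": {"P", "D"},
         "o": {"P", "A"}, "p": {"A", "B"}, "q": {"P", "B"}}

M = [[1 if pt in lines[ln] else 0 for ln in lines] for pt in points]
M_T = [list(col) for col in zip(*M)]     # incidence matrix of the dual

print(M[5])    # row of point P: [1, 1, 1, 1, 0, 1]
print(M_T[0])  # row of "point" l in the dual: [0, 0, 1, 0, 1, 1]
```

Transposing twice returns the original matrix, mirroring the fact that C** = C.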
An incidence structure is self-dual if there exists an ordering of the points and lines so that the incidence matrix constructed with that ordering is a symmetric matrix.
With the labels as given in example number 1 above and with points ordered A, B, C, D, G, F, E and lines ordered l, p, n, s, r, m, q, the Fano plane has the incidence matrix:

1 1 1 0 0 0 0
1 0 0 1 1 0 0
1 0 0 0 0 1 1
0 1 0 1 0 1 0
0 1 0 0 1 0 1
0 0 1 1 0 0 1
0 0 1 0 1 1 0

Since this is a symmetric matrix, the Fano plane is a self-dual incidence structure.
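The symmetry of that matrix, and hence the self-duality certificate, can be checked mechanically:

```python
# Confirm that this incidence matrix of the Fano plane is symmetric,
# which exhibits the Fano plane as self-dual.
M = [[1, 1, 1, 0, 0, 0, 0],
     [1, 0, 0, 1, 1, 0, 0],
     [1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 1, 0, 0, 1],
     [0, 0, 1, 0, 1, 1, 0]]

is_symmetric = all(M[i][j] == M[j][i] for i in range(7) for j in range(7))
print(is_symmetric)   # True

# Sanity check: every row and column sums to 3, as a Fano
# incidence matrix must (three points per line, three lines per point).
print(all(sum(row) == 3 for row in M))            # True
print(all(sum(col) == 3 for col in zip(*M)))      # True
```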
An incidence figure (that is, a depiction of an incidence structure) is constructed by representing the points by dots in a plane and having some visual means of joining the dots to correspond to lines.[6] The dots may be placed in any manner; there are no restrictions on distances between points or any relationships between points. In an incidence structure there is no concept of a point being between two other points; the order of points on a line is undefined. Compare this with ordered geometry, which does have a notion of betweenness. The same statements can be made about the depictions of the lines. In particular, lines need not be depicted by "straight line segments" (see examples 1, 3 and 4 above). As with the pictorial representation of graphs, the crossing of two "lines" at any place other than a dot has no meaning in terms of the incidence structure; it is only an accident of the representation. These incidence figures may at times resemble graphs, but they aren't graphs unless the incidence structure is a graph.
Incidence structures can be modelled by points and curves in the Euclidean plane with the usual geometric meaning of incidence. Some incidence structures admit representation by points and (straight) lines. Structures that can be are called realizable. If no ambient space is mentioned then the Euclidean plane is assumed. The Fano plane (example 1 above) is not realizable since it needs at least one curve. The Möbius–Kantor configuration (example 4 above) is not realizable in the Euclidean plane, but it is realizable in the complex plane.[7] On the other hand, examples 2 and 5 above are realizable and the incidence figures given there demonstrate this. Steinitz (1894)[8] has shown that n₃-configurations (incidence structures with n points and n lines, three points per line and three lines through each point) are either realizable or require the use of only one curved line in their representations.[9] The Fano plane is the unique (7₃) and the Möbius–Kantor configuration is the unique (8₃).
Each incidence structure C corresponds to a bipartite graph called the Levi graph or incidence graph of the structure. As any bipartite graph is two-colorable, the Levi graph can be given a black and white vertex coloring, where black vertices correspond to points and white vertices correspond to lines of C. The edges of this graph correspond to the flags (incident point/line pairs) of the incidence structure. The original Levi graph was the incidence graph of the generalized quadrangle of order two (example 3 above),[10] but the term has been extended by H.S.M. Coxeter[11] to refer to an incidence graph of any incidence structure.[12]
The Levi graph of the Fano plane is the Heawood graph. Since the Heawood graph is connected and vertex-transitive, there exists an automorphism (such as the one defined by a reflection about the vertical axis in the figure of the Heawood graph) interchanging black and white vertices. This, in turn, implies that the Fano plane is self-dual.
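The Levi graph construction itself is straightforward: one vertex per point, one per line, one edge per flag. The sketch below builds it for the Fano plane and confirms the basic Heawood-graph statistics (14 vertices, 21 edges, 3-regular; the graph is bipartite by construction, since every edge joins a point vertex to a line vertex):

```python
# Build the Levi (incidence) graph of the Fano plane.
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# One edge per flag: (point, line-vertex) pairs.
edges = {(p, ("L", i)) for i, l in enumerate(lines) for p in l}
vertices = set(range(1, 8)) | {("L", i) for i in range(7)}

degree = {v: sum(v in e for e in edges) for v in vertices}
print(len(vertices), len(edges))   # 14 21
print(set(degree.values()))        # {3} -- the graph is 3-regular
```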
The specific representation, on the left, of the Levi graph of the Möbius–Kantor configuration (example 4 above) illustrates that a rotation of π/4 about the center (either clockwise or counterclockwise) of the diagram interchanges the blue and red vertices and maps edges to edges. That is to say that there exists a color-interchanging automorphism of this graph. Consequently, the incidence structure known as the Möbius–Kantor configuration is self-dual.
It is possible to generalize the notion of an incidence structure to include more than two types of objects. A structure with k types of objects is called an incidence structure of rank k or a rank k geometry.[12] Formally, these are defined as (k + 1)-tuples S = (P1, P2, ..., Pk, I) with Pi ∩ Pj = ∅ for i ≠ j and I ⊆ ⋃_{i<j} Pi × Pj.
The Levi graph for these structures is defined as a multipartite graph with vertices corresponding to each type being colored the same.
https://en.wikipedia.org/wiki/Incidence_structure
Order theory is a branch of mathematics that investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions. A list of order-theoretic terms can be found in the order theory glossary.
Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers, e.g. "2 is less than 3", "10 is greater than 5", or "Does Tom have fewer cookies than Sally?". This intuitive concept can be extended to orders on other sets of numbers, such as the integers and the reals. The idea of being greater than or less than another number is one of the basic intuitions of number systems in general (although one usually is also interested in the actual difference of two numbers, which is not given by the order). Other familiar examples of orderings are the alphabetical order of words in a dictionary and the genealogical property of lineal descent within a group of people.
The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence or relative quantity. In other contexts orders may capture notions of containment or specialization. Abstractly, this type of order amounts to the subset relation, e.g., "Pediatricians are physicians," and "Circles are merely special-case ellipses."
Some orders, like "less-than" on the natural numbers and alphabetical order on words, have a special property: each element can be compared to any other element, i.e. it is smaller (earlier) than, larger (later) than, or identical to it. However, many other orders do not. Consider for example the subset order on a collection of sets: though the set of birds and the set of dogs are both subsets of the set of animals, neither the birds nor the dogs constitutes a subset of the other. Orders like the "subset-of" relation, for which there exist incomparable elements, are called partial orders; orders for which every pair of elements is comparable are total orders.
Order theory captures the intuition of orders that arises from such examples in a general setting. This is achieved by specifying properties that a relation ≤ must have to be a mathematical order. This more abstract approach makes much sense, because one can derive numerous theorems in the general setting, without focusing on the details of any particular order. These insights can then be readily transferred to many less abstract applications.
Driven by the wide practical usage of orders, numerous special kinds of ordered sets have been defined, some of which have grown into mathematical fields of their own. In addition, order theory does not restrict itself to the various classes of ordering relations, but also considers appropriate functions between them. A simple example of an order-theoretic property for functions comes from analysis, where monotone functions are frequently found.
This section introduces ordered sets by building upon the concepts of set theory, arithmetic, and binary relations.
Orders are special binary relations. Suppose that P is a set and that ≤ is a relation on P ('relation on a set' is taken to mean 'relation amongst its inhabitants', i.e. ≤ is a subset of the cartesian product P × P). Then ≤ is a partial order if it is reflexive, antisymmetric, and transitive, that is, if for all a, b and c in P, we have that:

a ≤ a (reflexivity)
if a ≤ b and b ≤ a then a = b (antisymmetry)
if a ≤ b and b ≤ c then a ≤ c (transitivity)
A set with a partial order on it is called a partially ordered set, poset, or just ordered set if the intended meaning is clear. By checking these properties, one immediately sees that the well-known orders on natural numbers, integers, rational numbers and reals are all orders in the above sense. However, these examples have the additional property that any two elements are comparable, that is, for all a and b in P, we have that:

a ≤ b or b ≤ a (totality)

A partial order with this property is called a total order. These orders can also be called linear orders or chains. While many familiar orders are linear, the subset order on sets provides an example where this is not the case. Another example is given by the divisibility (or "is-a-factor-of") relation |. For two natural numbers n and m, we write n|m if n divides m without remainder. One easily sees that this yields a partial order. For example, neither 3 divides 13 nor 13 divides 3, so 3 and 13 are not comparable elements of the divisibility order on the natural numbers.
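The divisibility example can be checked mechanically. The following sketch (illustrative Python, restricted to the finite carrier set {1, …, 13}) verifies the three partial-order axioms for | and confirms that the order is not total:

```python
# Verify that divisibility is a partial order, but not a total order,
# on the finite set {1, ..., 13}.
def divides(n, m):
    return m % n == 0

P = range(1, 14)

reflexive = all(divides(a, a) for a in P)
antisymmetric = all(a == b or not (divides(a, b) and divides(b, a))
                    for a in P for b in P)
transitive = all(divides(a, c)
                 for a in P for b in P for c in P
                 if divides(a, b) and divides(b, c))
total = all(divides(a, b) or divides(b, a) for a in P for b in P)

print(reflexive, antisymmetric, transitive)  # True True True
print(total)  # False: e.g. 3 and 13 are incomparable
```

The same three checks work for any finite relation given as a Boolean predicate.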
The identity relation = on any set is also a partial order, in which every two distinct elements are incomparable. It is also the only relation that is both a partial order and an equivalence relation, because it satisfies both the antisymmetry property of partial orders and the symmetry property of equivalence relations. Many advanced properties of posets are interesting mainly for non-linear orders.
Hasse diagrams can visually represent the elements and relations of a partial ordering. These are graph drawings where the vertices are the elements of the poset and the ordering relation is indicated by both the edges and the relative positioning of the vertices. Orders are drawn bottom-up: if an element x is smaller than (precedes) y then there exists a path from x to y that is directed upwards. It is often necessary for the edges connecting elements to cross each other, but elements must never be located within an edge. An instructive exercise is to draw the Hasse diagram for the set of natural numbers that are smaller than or equal to 13, ordered by | (the divides relation).
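As a companion to that exercise, the edges of such a Hasse diagram (the covering relation) can be computed by brute force. This is an illustrative sketch, not part of the original exercise:

```python
# Edges of the Hasse diagram for {1, ..., 13} under divisibility:
# a -- b is an edge when a properly divides b and no element of the set
# lies strictly between them.
P = range(1, 14)

def divides(n, m):
    return m % n == 0

edges = [(a, b) for a in P for b in P
         if a != b and divides(a, b)
         and not any(divides(a, c) and divides(c, b)
                     for c in P if c != a and c != b)]
print(edges)  # includes (2, 4) and (3, 9), but not (2, 8), since 4 lies between
```

Drawing the elements with 1 at the bottom and these pairs as upward edges reproduces the diagram asked for in the text.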
Even some infinite sets can be diagrammed by superimposing an ellipsis (...) on a finite sub-order. This works well for the natural numbers, but it fails for the reals, where there is no immediate successor above 0; however, quite often one can obtain an intuition related to diagrams of a similar kind.
In a partially ordered set there may be some elements that play a special role. The most basic example is given by the least element of a poset. For example, 1 is the least element of the positive integers and the empty set is the least set under the subset order. Formally, an element m is a least element if:

m ≤ a, for all elements a of the order.

The notation 0 is frequently found for the least element, even when no numbers are concerned. However, in orders on sets of numbers, this notation might be inappropriate or ambiguous, since the number 0 is not always least. An example is given by the above divisibility order |, where 1 is the least element since it divides all other numbers. In contrast, 0 is the number that is divided by all other numbers. Hence it is the greatest element of the order. Other frequent terms for the least and greatest elements are bottom and top, or zero and unit.
Least and greatest elements may fail to exist, as the example of the real numbers shows. But if they exist, they are always unique. In contrast, consider the divisibility relation | on the set {2,3,4,5,6}. Although this set has neither top nor bottom, the elements 2, 3, and 5 have no elements below them, while 4, 5 and 6 have none above. Such elements are called minimal and maximal, respectively. Formally, an element m is minimal if:

a ≤ m implies a = m, for all elements a of the order.
Exchanging ≤ with ≥ yields the definition of maximality. As the example shows, there can be many maximal elements and some elements may be both maximal and minimal (e.g. 5 above). However, if there is a least element, then it is the only minimal element of the order. Again, in infinite posets maximal elements do not always exist: the set of all finite subsets of a given infinite set, ordered by subset inclusion, provides one of many counterexamples. An important tool to ensure the existence of maximal elements under certain conditions is Zorn's Lemma.
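For the finite example above, the minimal and maximal elements can be found directly from the definitions. A small sketch:

```python
# Minimal and maximal elements of {2, 3, 4, 5, 6} under divisibility.
S = {2, 3, 4, 5, 6}

def divides(n, m):
    return m % n == 0

minimal = {m for m in S if not any(divides(x, m) for x in S if x != m)}
maximal = {m for m in S if not any(divides(m, x) for x in S if x != m)}

print(minimal)  # {2, 3, 5}: nothing in S lies strictly below them
print(maximal)  # {4, 5, 6}: nothing in S lies strictly above them
# Note that 5 is both minimal and maximal, as observed in the text.
```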
Subsets of partially ordered sets inherit the order. We already applied this by considering the subset {2,3,4,5,6} of the natural numbers with the induced divisibility ordering. Now there are also elements of a poset that are special with respect to some subset of the order. This leads to the definition of upper bounds. Given a subset S of some poset P, an upper bound of S is an element b of P that is above all elements of S. Formally, this means that

s ≤ b, for all s in S.
Lower bounds again are defined by inverting the order. For example, −5 is a lower bound of the natural numbers as a subset of the integers. Given a set of sets, an upper bound for these sets under the subset ordering is given by their union. In fact, this upper bound is quite special: it is the smallest set that contains all of the sets. Hence, we have found the least upper bound of a set of sets. This concept is also called supremum or join, and for a set S one writes sup(S) or ⋁S for its least upper bound. Conversely, the greatest lower bound is known as infimum or meet and denoted inf(S) or ⋀S. These concepts play an important role in many applications of order theory. For two elements x and y, one also writes x ∨ y and x ∧ y for sup({x, y}) and inf({x, y}), respectively.
For example, 1 is the infimum of the positive integers as a subset of integers.
For another example, consider again the relation | on natural numbers. The least upper bound of two numbers is the smallest number that is divided by both of them, i.e. the least common multiple of the numbers. Greatest lower bounds in turn are given by the greatest common divisor.
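This can be confirmed by brute force: under divisibility, the lcm really is the least of the common upper bounds and the gcd the greatest of the common lower bounds. A sketch over a finite window:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

a, b = 4, 6
# common upper bounds of 4 and 6 under | are their common multiples
upper_bounds = [m for m in range(1, 100) if m % a == 0 and m % b == 0]
# common lower bounds are their common divisors
lower_bounds = [d for d in range(1, 100) if a % d == 0 and b % d == 0]

print(min(upper_bounds) == lcm(a, b))  # True: the join 4 ∨ 6 = lcm(4, 6) = 12
print(max(lower_bounds) == gcd(a, b))  # True: the meet 4 ∧ 6 = gcd(4, 6) = 2
```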
In the previous definitions, we often noted that a concept can be defined by just inverting the ordering in a former definition. This is the case for "least" and "greatest", for "minimal" and "maximal", for "upper bound" and "lower bound", and so on. This is a general situation in order theory: a given order can be inverted by just exchanging its direction, pictorially flipping the Hasse diagram top-down. This yields the so-called dual, inverse, or opposite order.
Every order-theoretic definition has its dual: it is the notion one obtains by applying the definition to the inverse order. Since all concepts are symmetric, this operation preserves the theorems of partial orders. For a given mathematical result, one can just invert the order and replace all definitions by their duals and one obtains another valid theorem. This is important and useful, since one obtains two theorems for the price of one. Some more details and examples can be found in the article on duality in order theory.
There are many ways to construct orders out of given orders. The dual order is one example. Another important construction is the cartesian product of two partially ordered sets, taken together with the product order on pairs of elements. The ordering is defined by (a, x) ≤ (b, y) if (and only if) a ≤ b and x ≤ y. (Notice carefully that there are three distinct meanings for the relation symbol ≤ in this definition.) The disjoint union of two posets is another typical example of order construction, where the order is just the (disjoint) union of the original orders.
Every partial order ≤ gives rise to a so-called strict order <, by defining a < b if a ≤ b and not b ≤ a. This transformation can be inverted by setting a ≤ b if a < b or a = b. The two concepts are equivalent, although in some circumstances one can be more convenient to work with than the other.
It is reasonable to consider functions between partially ordered sets having certain additional properties that are related to the ordering relations of the two sets. The most fundamental condition that occurs in this context is monotonicity. A function f from a poset P to a poset Q is monotone, or order-preserving, if a ≤ b in P implies f(a) ≤ f(b) in Q (noting that, strictly, the two relations here are different, since they apply to different sets). The converse of this implication leads to functions that are order-reflecting, i.e. functions f as above for which f(a) ≤ f(b) implies a ≤ b. On the other hand, a function may also be order-reversing or antitone, if a ≤ b implies f(a) ≥ f(b).
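For finite posets these conditions are directly checkable. The sketch below (the helper name is_monotone is hypothetical) tests the successor map and negation against a fragment of the usual order on the integers:

```python
# Check whether f : P -> Q is monotone, given the two order relations
# as Boolean predicates.
def is_monotone(f, elements, leq_p, leq_q):
    return all(leq_q(f(a), f(b))
               for a in elements for b in elements if leq_p(a, b))

leq = lambda a, b: a <= b
P = range(10)

print(is_monotone(lambda n: n + 1, P, leq, leq))  # True: successor preserves order
print(is_monotone(lambda n: -n, P, leq, leq))     # False: negation is antitone
```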
An order-embedding is a function f between orders that is both order-preserving and order-reflecting. Examples for these definitions are found easily. For instance, the function that maps a natural number to its successor is clearly monotone with respect to the natural order. Any function from a discrete order, i.e. from a set ordered by the identity order "=", is also monotone. Mapping each natural number to the corresponding real number gives an example for an order embedding. The set complement on a powerset is an example of an antitone function.
An important question is when two orders are "essentially equal", i.e. when they are the same up to renaming of elements. Order isomorphisms are functions that define such a renaming. An order-isomorphism is a monotone bijective function that has a monotone inverse. This is equivalent to being a surjective order-embedding. Hence, the image f(P) of an order-embedding is always isomorphic to P, which justifies the term "embedding".
A more elaborate type of function is given by so-called Galois connections. Monotone Galois connections can be viewed as a generalization of order-isomorphisms, since they consist of a pair of functions in converse directions, which are "not quite" inverse to each other, but still have close relationships.
Another special type of self-maps on a poset are closure operators, which are not only monotonic, but also idempotent, i.e. f(x) = f(f(x)), and extensive (or inflationary), i.e. x ≤ f(x). These have many applications in all kinds of "closures" that appear in mathematics.
Besides being compatible with the mere order relations, functions between posets may also behave well with respect to special elements and constructions. For example, when talking about posets with least element, it may seem reasonable to consider only monotonic functions that preserve this element, i.e. which map least elements to least elements. If binary infima ∧ exist, then a reasonable property might be to require that f(x ∧ y) = f(x) ∧ f(y), for all x and y. All of these properties, and indeed many more, may be compiled under the label of limit-preserving functions.
Finally, one can invert the view, switching from functions of orders to orders of functions. Indeed, the functions between two posets P and Q can be ordered via the pointwise order. For two functions f and g, we have f ≤ g if f(x) ≤ g(x) for all elements x of P. This occurs for example in domain theory, where function spaces play an important role.
Many of the structures that are studied in order theory employ order relations with further properties. In fact, even some relations that are not partial orders are of special interest. Mainly the concept of a preorder has to be mentioned. A preorder is a relation that is reflexive and transitive, but not necessarily antisymmetric. Each preorder induces an equivalence relation between elements, where a is equivalent to b, if a ≤ b and b ≤ a. Preorders can be turned into orders by identifying all elements that are equivalent with respect to this relation.
Several types of orders can be defined from numerical data on the items of the order: a total order results from attaching distinct real numbers to each item and using the numerical comparisons to order the items; instead, if distinct items are allowed to have equal numerical scores, one obtains a strict weak ordering. Requiring two scores to be separated by a fixed threshold before they may be compared leads to the concept of a semiorder, while allowing the threshold to vary on a per-item basis produces an interval order.
An additional simple but useful property leads to the so-called well-founded orders, in which all non-empty subsets have a minimal element. Generalizing well-orders from linear to partial orders, a set is well partially ordered if all its non-empty subsets have a finite number of minimal elements.
Many other types of orders arise when the existence of infima and suprema of certain sets is guaranteed. Focusing on this aspect, usually referred to as completeness of orders, one obtains:
However, one can go even further: if all finite non-empty infima exist, then ∧ can be viewed as a total binary operation in the sense of universal algebra. Hence, in a lattice, two operations ∧ and ∨ are available, and one can define new properties by giving identities, such as
This condition is called distributivity and gives rise to distributive lattices. There are some other important distributivity laws which are discussed in the article on distributivity in order theory. Some additional order structures that are often specified via algebraic operations and defining identities are
which both introduce a new operation ~ called negation. Both structures play a role in mathematical logic, and especially Boolean algebras have major applications in computer science.
Finally, various structures in mathematics combine orders with even more algebraic operations, as in the case of quantales, which allow for the definition of an addition operation.
Many other important properties of posets exist. For example, a poset is locally finite if every closed interval [a, b] in it is finite. Locally finite posets give rise to incidence algebras, which in turn can be used to define the Euler characteristic of finite bounded posets.
In an ordered set, one can define many types of special subsets based on the given order. A simple example are upper sets; i.e. sets that contain all elements that are above them in the order. Formally, the upper closure of a set S in a poset P is given by the set {x in P | there is some y in S with y ≤ x}. A set that is equal to its upper closure is called an upper set. Lower sets are defined dually.
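The definition of the upper closure translates directly into code. A sketch over the divisibility order (the helper name upper_closure is hypothetical):

```python
# Upper closure of S in a finite poset P: {x in P | some y in S with y <= x}.
def upper_closure(S, P, leq):
    return {x for x in P if any(leq(y, x) for y in S)}

P = range(1, 13)
divides = lambda n, m: m % n == 0

up = upper_closure({3}, P, divides)
print(up)  # {3, 6, 9, 12}: the multiples of 3 in P
print(upper_closure(up, P, divides) == up)  # True: up equals its upper closure
```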
More complicated lower subsets are ideals, which have the additional property that each two of their elements have an upper bound within the ideal. Their duals are given by filters. A related concept is that of a directed subset, which like an ideal contains upper bounds of finite subsets, but does not have to be a lower set. Furthermore, it is often generalized to preordered sets.
A subset which is – as a sub-poset – linearly ordered, is called a chain. The opposite notion, the antichain, is a subset that contains no two comparable elements; i.e. that is a discrete order.
Although most mathematical areas use orders in one way or another, there are also a few theories that have relationships which go far beyond mere application. Together with their major points of contact with order theory, some of these are presented below.
As already mentioned, the methods and formalisms of universal algebra are an important tool for many order-theoretic considerations. Besides formalizing orders in terms of algebraic structures that satisfy certain identities, one can also establish other connections to algebra. An example is given by the correspondence between Boolean algebras and Boolean rings. Other issues are concerned with the existence of free constructions, such as free lattices based on a given set of generators. Furthermore, closure operators are important in the study of universal algebra.
In topology, orders play a very prominent role. In fact, the collection of open sets provides a classical example of a complete lattice, more precisely a complete Heyting algebra (or "frame" or "locale"). Filters and nets are notions closely related to order theory, and the closure operator of sets can be used to define a topology. Beyond these relations, topology can be looked at solely in terms of the open set lattices, which leads to the study of pointless topology. Furthermore, a natural preorder of elements of the underlying set of a topology is given by the so-called specialization order, which is actually a partial order if the topology is T0.
Conversely, in order theory, one often makes use of topological results. There are various ways to define subsets of an order which can be considered as open sets of a topology. Considering topologies on a poset (X, ≤) that in turn induce ≤ as their specialization order, the finest such topology is the Alexandrov topology, given by taking all upper sets as opens. Conversely, the coarsest topology that induces the specialization order is the upper topology, having the complements of principal ideals (i.e. sets of the form {y in X | y ≤ x} for some x) as a subbase. Additionally, a topology with specialization order ≤ may be order consistent, meaning that its open sets are "inaccessible by directed suprema" (with respect to ≤). The finest order-consistent topology is the Scott topology, which is coarser than the Alexandrov topology. A third important topology in this spirit is the Lawson topology. There are close connections between these topologies and the concepts of order theory. For example, a function preserves directed suprema if and only if it is continuous with respect to the Scott topology (for this reason this order-theoretic property is also called Scott-continuity).
The visualization of orders with Hasse diagrams has a straightforward generalization: instead of displaying lesser elements below greater ones, the direction of the order can also be depicted by giving directions to the edges of a graph. In this way, each order is seen to be equivalent to a directed acyclic graph, where the nodes are the elements of the poset and there is a directed path from a to b if and only if a ≤ b. Dropping the requirement of being acyclic, one can also obtain all preorders.
When equipped with all transitive edges, these graphs in turn are just special categories, where elements are objects and each set of morphisms between two elements contains at most one morphism. Functions between orders become functors between categories. Many ideas of order theory are just concepts of category theory in the small. For example, an infimum is just a categorical product. More generally, one can capture infima and suprema under the abstract notion of a categorical limit (or colimit, respectively). Another place where categorical ideas occur is the concept of a (monotone) Galois connection, which is just the same as a pair of adjoint functors.
But category theory also has its impact on order theory on a larger scale. Classes of posets with appropriate functions as discussed above form interesting categories. Often one can also state constructions of orders, like the product order, in terms of categories. Further insights result when categories of orders are found categorically equivalent to other categories, for example of topological spaces. This line of research leads to various representation theorems, often collected under the label of Stone duality.
As explained before, orders are ubiquitous in mathematics. However, the earliest explicit mentions of partial orders are probably found no earlier than the 19th century. In this context the works of George Boole are of great importance. Moreover, works of Charles Sanders Peirce, Richard Dedekind, and Ernst Schröder also consider concepts of order theory.
Contributors to ordered geometry were listed in a 1961 textbook:
It was Pasch in 1882 who first pointed out that a geometry of order could be developed without reference to measurement. His system of axioms was gradually improved by Peano (1889), Hilbert (1899), and Veblen (1904).
In 1901 Bertrand Russell wrote "On the Notion of Order",[2] exploring the foundations of the idea through generation of series. He returned to the topic in part IV of The Principles of Mathematics (1903). Russell noted that a binary relation aRb has a sense proceeding from a to b, with the converse relation having an opposite sense, and sense "is the source of order and series." (p. 95) He acknowledges that Immanuel Kant[3] was "aware of the difference between logical opposition and the opposition of positive and negative". He wrote that Kant deserves credit as he "first called attention to the logical importance of asymmetric relations."
The term poset as an abbreviation for partially ordered set is attributed to Garrett Birkhoff in the second edition of his influential book Lattice Theory.[4][5]
https://en.wikipedia.org/wiki/Order_theory
In mathematics and abstract algebra, a relation algebra is a residuated Boolean algebra expanded with an involution called converse, a unary operation. The motivating example of a relation algebra is the algebra 2^(X²) of all binary relations on a set X, that is, subsets of the cartesian square X², with R • S interpreted as the usual composition of binary relations R and S, and with the converse of R as the converse relation.
Relation algebra emerged in the 19th-century work of Augustus De Morgan and Charles Peirce, which culminated in the algebraic logic of Ernst Schröder. The equational form of relation algebra treated here was developed by Alfred Tarski and his students, starting in the 1940s. Tarski and Givant (1987) applied relation algebra to a variable-free treatment of axiomatic set theory, with the implication that mathematics founded on set theory could itself be conducted without variables.
A relation algebra (L, ∧, ∨, −, 0, 1, •, I, ˘) is an algebraic structure equipped with the Boolean operations of conjunction x ∧ y, disjunction x ∨ y, and negation x−, the Boolean constants 0 and 1, the relational operations of composition x • y and converse x˘, and the relational constant I, such that these operations and constants satisfy certain equations constituting an axiomatization of a calculus of relations. Roughly, a relation algebra is to a system of binary relations on a set containing the empty (0), universal (1), and identity (I) relations and closed under these five operations as a group is to a system of permutations of a set containing the identity permutation and closed under composition and inverse. However, the first-order theory of relation algebras is not complete for such systems of binary relations.
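The motivating example can be made concrete by modelling relations as sets of pairs; composition, converse, and the identity relation then behave as described. This is an illustrative sketch, not part of the formal axiomatization:

```python
# Binary relations on a finite set X, with composition, converse and identity.
X = {0, 1, 2}
R = {(0, 1), (1, 2)}

def compose(R, S):
    # R • S: relate a to c whenever some b has aRb and bSc
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def converse(R):
    return {(b, a) for (a, b) in R}

I = {(a, a) for a in X}  # the identity relation on X

print(compose(R, I) == R == compose(I, R))  # True: I is the monoid identity
print(compose(R, R))                        # {(0, 2)}
print(converse(converse(R)) == R)           # True: converse is an involution
```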
Following Jónsson and Tsinakis (1993) it is convenient to define additional operations x ◁ y = x • y˘ and, dually, x ▷ y = x˘ • y. Jónsson and Tsinakis showed that I ◁ x = x ▷ I, and that both were equal to x˘. Hence a relation algebra can equally well be defined as an algebraic structure (L, ∧, ∨, −, 0, 1, •, I, ◁, ▷). The advantage of this signature over the usual one is that a relation algebra can then be defined in full simply as a residuated Boolean algebra for which I ◁ x is an involution, that is, I ◁ (I ◁ x) = x. The latter condition can be thought of as the relational counterpart of the equation 1/(1/x) = x for the ordinary arithmetic reciprocal, and some authors use reciprocal as a synonym for converse.
Since residuated Boolean algebras are axiomatized with finitely many identities, so are relation algebras. Hence the latter form a variety, the variety RA of relation algebras. Expanding the above definition as equations yields the following finite axiomatization.
The axioms B1–B10 below are adapted from Givant (2006: 283), and were first set out by Tarski in 1948.[1]
L is a Boolean algebra under binary disjunction, ∨, and unary complementation ()−:
This axiomatization of Boolean algebra is due to Huntington (1933). Note that the meet of the implied Boolean algebra is not the • operator (even though it distributes over ∨ like a meet does), nor is the 1 of the Boolean algebra the I constant.
L is a monoid under binary composition (•) and nullary identity I:
Unary converse ()˘ is an involution with respect to composition:
Axiom B6 defines conversion as an involution, whereas B7 expresses the antidistributive property of conversion relative to composition.[2]
Converse and composition distribute over disjunction:
B10 is Tarski's equational form of the fact, discovered by Augustus De Morgan, that A • B ≤ C− ↔ A˘ • C ≤ B− ↔ C • B˘ ≤ A−.
These axioms are ZFC theorems; for the purely Boolean B1–B3, this fact is trivial. After each of the following axioms is shown the number of the corresponding theorem in Chapter 3 of Suppes (1960), an exposition of ZFC: B4 (27), B5 (45), B6 (14), B7 (26), B8 (16), B9 (23).
The following table shows how many of the usual properties of binary relations can be expressed as succinct RA equalities or inequalities. Below, an inequality of the form A ≤ B is shorthand for the Boolean equation A ∨ B = B.
The most complete set of results of this nature is Chapter C of Carnap (1958), where the notation is rather distant from that of this entry. Chapter 3.2 of Suppes (1960) contains fewer results, presented as ZFC theorems and using a notation that more resembles that of this entry. Neither Carnap nor Suppes formulated their results using the RA of this entry, or in an equational manner.
The metamathematics of RA are discussed at length in Tarski and Givant (1987), and more briefly in Givant (2006).
RA consists entirely of equations manipulated using nothing more than uniform replacement and the substitution of equals for equals. Both rules are wholly familiar from school mathematics and from abstract algebra generally. Hence RA proofs are carried out in a manner familiar to all mathematicians, unlike the case in mathematical logic generally.
RA can express any (and up to logical equivalence, exactly the) first-order logic (FOL) formulas containing no more than three variables. (A given variable can be quantified multiple times and hence quantifiers can be nested arbitrarily deeply by "reusing" variables.)[citation needed] Surprisingly, this fragment of FOL suffices to express Peano arithmetic and almost all axiomatic set theories ever proposed. Hence RA is, in effect, a way of algebraizing nearly all mathematics, while dispensing with FOL and its connectives, quantifiers, turnstiles, and modus ponens. Because RA can express Peano arithmetic and set theory, Gödel's incompleteness theorems apply to it; RA is incomplete, incompletable, and undecidable.[citation needed] (N.B. The Boolean algebra fragment of RA is complete and decidable.)
The representable relation algebras, forming the class RRA, are those relation algebras isomorphic to some relation algebra consisting of binary relations on some set, and closed under the intended interpretation of the RA operations. It is easily shown, e.g. using the method of pseudoelementary classes, that RRA is a quasivariety, that is, axiomatizable by a universal Horn theory. In 1950, Roger Lyndon proved the existence of equations holding in RRA that did not hold in RA. Hence the variety generated by RRA is a proper subvariety of the variety RA. In 1955, Alfred Tarski showed that RRA is itself a variety. In 1964, Donald Monk showed that RRA has no finite axiomatization, unlike RA, which is finitely axiomatized by definition.
An RA is a Q-relation algebra (QRA) if, in addition to B1–B10, there exist some A and B such that (Tarski and Givant 1987: §8.4):
Essentially these axioms imply that the universe has a (non-surjective) pairing relation whose projections are A and B. It is a theorem that every QRA is an RRA (proof by Maddux, see Tarski & Givant 1987: 8.4(iii)).
Every QRA is representable (Tarski and Givant 1987). That not every relation algebra is representable is a fundamental way RA differs from QRA and Boolean algebras, which, by Stone's representation theorem for Boolean algebras, are always representable as sets of subsets of some set, closed under union, intersection, and complement.
De Morgan founded RA in 1860, but C. S. Peirce took it much further and became fascinated with its philosophical power. The work of De Morgan and Peirce came to be known mainly in the extended and definitive form Ernst Schröder gave it in Vol. 3 of his Vorlesungen (1890–1905). Principia Mathematica drew strongly on Schröder's RA, but acknowledged him only as the inventor of the notation. In 1912, Alwin Korselt proved that a particular formula in which the quantifiers were nested four deep had no RA equivalent.[4] This fact led to a loss of interest in RA until Tarski (1941) began writing about it. His students have continued to develop RA down to the present day. Tarski returned to RA in the 1970s with the help of Steven Givant; this collaboration resulted in the monograph by Tarski and Givant (1987), the definitive reference for this subject. For more on the history of RA, see Maddux (1991, 2006).
https://en.wikipedia.org/wiki/Relation_algebra
In mathematics, particularly in set theory, the aleph numbers are a sequence of numbers used to represent the cardinality (or size) of infinite sets.[a] They were introduced by the mathematician Georg Cantor[1] and are named after the symbol he used to denote them, the Hebrew letter aleph (ℵ).[2][b]
The smallest cardinality of an infinite set is that of the natural numbers, denoted by ℵ0 (read aleph-nought, aleph-zero, or aleph-null); the next larger cardinality of a well-ordered set is ℵ1, then ℵ2, then ℵ3, and so on. Continuing in this manner, it is possible to define an infinite cardinal number ℵα for every ordinal number α, as described below.
The concept and notation are due to Georg Cantor,[5] who defined the notion of cardinality and realized that infinite sets can have different cardinalities.
The aleph numbers differ from the infinity (∞) commonly found in algebra and calculus, in that the alephs measure the sizes of sets, while infinity is commonly defined either as an extreme limit of the real number line (applied to a function or sequence that "diverges to infinity" or "increases without bound"), or as an extreme point of the extended real number line.
ℵ0 (aleph-nought, aleph-zero, or aleph-null) is the cardinality of the set of all natural numbers, and is an infinite cardinal. The set of all finite ordinals, called ω or ω0 (where ω is the lowercase Greek letter omega), also has cardinality ℵ0. A set has cardinality ℵ0 if and only if it is countably infinite, that is, there is a bijection (one-to-one correspondence) between it and the natural numbers. Examples of such sets are the set of all integers and the set of all rational numbers.
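Countability of the integers can be exhibited concretely: interleaving the non-negative and negative integers gives an explicit bijection with the natural numbers. A sketch (the pairing used is one standard choice):

```python
# An explicit bijection Z <-> N witnessing that the integers are countable:
# 0, -1, 1, -2, 2, ... are sent to 0, 1, 2, 3, 4, ...
def int_to_nat(z):
    return 2 * z if z >= 0 else -2 * z - 1

def nat_to_int(n):
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

print([int_to_nat(z) for z in (-2, -1, 0, 1, 2)])  # [3, 1, 0, 2, 4]
# the two maps are mutually inverse on a sample range
print(all(nat_to_int(int_to_nat(z)) == z for z in range(-100, 100)))  # True
```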
Among the countably infinite sets are certain infinite ordinals,[c] including for example ω, ω + 1, ω·2, ω², ω^ω, and ε0.[6] For example, the sequence (with order type ω·2) of all positive odd integers followed by all positive even integers, {1, 3, 5, 7, 9, ⋯; 2, 4, 6, 8, 10, ⋯}, is an ordering of the set (with cardinality ℵ0) of positive integers.
If the axiom of countable choice (a weaker version of the axiom of choice) holds, then ℵ0 is smaller than any other infinite cardinal, and is therefore the (unique) least infinite cardinal.
ℵ1{\displaystyle \aleph _{1}}is the cardinality of the set of all countableordinal numbers.[7]This set is denoted byω1{\displaystyle \omega _{1}}(or sometimes Ω). The setω1{\displaystyle \omega _{1}}is itself an ordinal number larger than all countable ones, so it is anuncountable set. Therefore,ℵ1{\displaystyle \aleph _{1}}is the smallest cardinality that is larger thanℵ0,{\displaystyle \aleph _{0},}the smallest infinite cardinality.
The definition ofℵ1{\displaystyle \aleph _{1}}implies (in ZF,Zermelo–Fraenkel set theorywithoutthe axiom of choice) that no cardinal number is betweenℵ0{\displaystyle \aleph _{0}}andℵ1.{\displaystyle \aleph _{1}.}If theaxiom of choiceis used, it can be further proved that the class of cardinal numbers istotally ordered, and thusℵ1{\displaystyle \aleph _{1}}is the second-smallest infinite cardinal number. One can show one of the most useful properties of the setω1{\displaystyle \omega _{1}}: Any countable subset ofω1{\displaystyle \omega _{1}}has an upper bound inω1{\displaystyle \omega _{1}}(this follows from the fact that the union of a countable number of countable sets is itself countable). This fact is analogous to the situation inℵ0{\displaystyle \aleph _{0}}: Every finite set of natural numbers has a maximum which is also a natural number, andfinite unionsof finite sets are finite.
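The fact that a union of countably many countable sets is countable can itself be made concrete with a pairing function. The Python sketch below is illustrative only (the `cantor_pair` construction is standard, but nothing here is defined in the article); it assigns a distinct natural number to each pair (set index, element index):

```python
def cantor_pair(i, j):
    """Cantor pairing function, a bijection N x N -> N. Sending the j-th
    element of the i-th countable set to cantor_pair(i, j) enumerates a
    countable union of countable sets by the naturals."""
    return (i + j) * (i + j + 1) // 2 + j

codes = {cantor_pair(i, j) for i in range(50) for j in range(50)}
assert len(codes) == 2500                 # injective on this 50 x 50 grid
assert set(range(1275)) <= codes          # every code below 50*51/2 is hit
```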
An example application of the ordinalω1{\displaystyle \omega _{1}}is "closing" with respect to countable operations; e.g., trying to explicitly describe theσ-algebragenerated by an arbitrary collection of subsets (see e.g.Borel hierarchy). This is harder than most explicit descriptions of "generation" in algebra (vector spaces,groups, etc.) because in those cases we only have to close with respect to finite operations – sums, products, etc. The process involves defining, for each countable ordinal, viatransfinite induction, a set by "throwing in" all possiblecountableunions and complements, and taking the union of all that over all ofω1.{\displaystyle \omega _{1}.}
Thecardinalityof the set ofreal numbers(cardinality of the continuum) is 2^ℵ0. It cannot be determined fromZFC(Zermelo–Fraenkel set theoryaugmented with theaxiom of choice) where this number fits exactly in the aleph number hierarchy, but it follows from ZFC that the continuum hypothesis (CH) is equivalent to the identity 2^ℵ0 = ℵ1.
The CH states that there is no set whose cardinality is strictly between that of the natural numbers and the real numbers.[9]CH is independent ofZFC: It can be neither proven nor disproven within the context of that axiom system (provided thatZFCisconsistent). That CH is consistent withZFCwas demonstrated byKurt Gödelin 1940, when he showed that its negation is not a theorem ofZFC. That it is independent ofZFCwas demonstrated byPaul Cohenin 1963, when he showed conversely that the CH itself is not a theorem ofZFC– by the (then-novel) method offorcing.[8][10]
Aleph-omega is ℵω = sup{ ℵn : n ∈ ω } = sup{ ℵn : n ∈ {0, 1, 2, ⋯} }, where ω denotes the smallest infinite ordinal. That is, the cardinal number ℵω is theleast upper boundof { ℵn : n ∈ {0, 1, 2, ⋯} }.
Notably, ℵω is the first uncountable cardinal number that can be demonstrated within Zermelo–Fraenkel set theorynotto be equal to the cardinality of the set of allreal numbers, 2^ℵ0: for any natural number n ≥ 1, we can consistently assume that 2^ℵ0 = ℵn, and moreover it is possible to assume that 2^ℵ0 is at least as large as any cardinal number we like. The main restriction ZFC puts on the value of 2^ℵ0 is that it cannot equal certain special cardinals withcofinalityℵ0. An uncountably infinite cardinal κ having cofinality ℵ0 means that there is a (countable-length) sequence κ0 ≤ κ1 ≤ κ2 ≤ ⋯ of cardinals κi < κ whose limit (i.e. its least upper bound) is κ (seeEaston's theorem). As per the definition above, ℵω is the limit of a countable-length sequence of smaller cardinals.
To defineℵα{\displaystyle \aleph _{\alpha }}for arbitrary ordinal numberα{\displaystyle \alpha }, we must define thesuccessor cardinal operation, which assigns to any cardinal numberρ{\displaystyle \rho }the next largerwell-orderedcardinalρ+{\displaystyle \rho ^{+}}(if theaxiom of choiceholds, this is the (unique) next larger cardinal).
We can then define the aleph numbers as follows: ℵ0 = ω; ℵα+1 = (ℵα)+, the successor cardinal of ℵα; and, for every limit ordinal λ, ℵλ = ⋃{ ℵβ : β < λ }.
Theα{\displaystyle \alpha }-th infiniteinitial ordinalis writtenωα{\displaystyle \omega _{\alpha }}. Its cardinality is writtenℵα{\displaystyle \aleph _{\alpha }}.
Informally, thealeph functionℵ:On→Cd{\displaystyle \aleph :{\text{On}}\rightarrow {\text{Cd}}}is a bijection from the ordinals to the infinite cardinals.
Formally, inZFC,ℵ{\displaystyle \aleph }isnot a function, but a function-like class, as it is not a set (due to theBurali-Forti paradox).
For any ordinalα{\displaystyle \alpha }we haveα≤ωα{\displaystyle \alpha \leq \omega _{\alpha }}.
In many cases ωα is strictly greater than α. For example, this holds for every successor ordinal: α + 1 < ωα+1. There are, however, some limit ordinals which arefixed pointsof the omega function, because of thefixed-point lemma for normal functions. The first such is the limit of the sequence ω, ω_ω, ω_{ω_ω}, … (each term being ω indexed by the previous term), which is sometimes denoted ω_{ω_⋱}.
Anyweakly inaccessible cardinalis also a fixed point of the aleph function.[11]This can be shown in ZFC as follows. Supposeκ=ℵλ{\displaystyle \kappa =\aleph _{\lambda }}is a weakly inaccessible cardinal. Ifλ{\displaystyle \lambda }were asuccessor ordinal, thenℵλ{\displaystyle \aleph _{\lambda }}would be asuccessor cardinaland hence not weakly inaccessible. Ifλ{\displaystyle \lambda }were alimit ordinalless thanκ{\displaystyle \kappa }then itscofinality(and thus the cofinality ofℵλ{\displaystyle \aleph _{\lambda }}) would be less thanκ{\displaystyle \kappa }and soκ{\displaystyle \kappa }would not be regular and thus not weakly inaccessible. Thusλ≥κ{\displaystyle \lambda \geq \kappa }and consequentlyλ=κ{\displaystyle \lambda =\kappa }which makes it a fixed point.
The cardinality of any infiniteordinal numberis an aleph number. Every aleph is the cardinality of some ordinal. The least of these is itsinitial ordinal. Any set whose cardinality is an aleph isequinumerouswith an ordinal and is thuswell-orderable.
Eachfinite setis well-orderable, but does not have an aleph as its cardinality.
Over ZF, the assumption that the cardinality of eachinfinite setis an aleph number is equivalent to the existence of a well-ordering of every set, which in turn is equivalent to theaxiom of choice. ZFC set theory, which includes the axiom of choice, implies that every infinite set has an aleph number as its cardinality (i.e. is equinumerous with its initial ordinal), and thus the initial ordinals of the aleph numbers serve as a class of representatives for all possible infinite cardinal numbers.
When cardinality is studied in ZF without the axiom of choice, it is no longer possible to prove that each infinite set has some aleph number as its cardinality; the sets whose cardinality is an aleph number are exactly the infinite sets that can be well-ordered. The method ofScott's trickis sometimes used as an alternative way to construct representatives for cardinal numbers in the setting of ZF. For example, one can define card(S) to be the set of sets with the same cardinality as S of minimum possible rank. This has the property that card(S) = card(T) if and only if S and T have the same cardinality. (The set card(S) does not have the same cardinality as S in general, but all its elements do.)
https://en.wikipedia.org/wiki/Aleph_number
Countingis the process of determining thenumberofelementsof afinite setof objects; that is, determining thesizeof a set. The traditional way of counting consists of continually increasing a (mental or spoken) counter by aunitfor every element of the set, in some order, while marking (or displacing) those elements to avoid visiting the same element more than once, until no unmarked elements are left; if the counter was set to one after the first object, the value after visiting the final object gives the desired number of elements. The related termenumerationrefers to uniquely identifying the elements of afinite(combinatorial)setor infinite set by assigning a number to each element.
Counting sometimes involves numbers other than one; for example, when counting money, counting out change, "counting by twos" (2, 4, 6, 8, 10, 12, ...), or "counting by fives" (5, 10, 15, 20, 25, ...).
There is archaeological evidence suggesting that humans have been counting for at least 50,000 years.[1]Counting was primarily used by ancient cultures to keep track of social and economic data such as the number of group members, prey animals, property, or debts (that is,accountancy). Notched bones were also found in the Border Caves in South Africa, which may suggest that the concept of counting was known to humans as far back as 44,000 BCE.[2]The development of counting led to the development ofmathematical notation,numeral systems, andwriting.
Verbal counting involves speaking sequential numbers aloud or mentally to track progress. Generally such counting is done withbase 10numbers: "1, 2, 3, 4", etc. Verbal counting is often used for objects that are currently present rather than for counting things over time, since following an interruption counting must resume from where it was left off, a number that has to be recorded or remembered.
Counting a small set of objects, especially over time, can be accomplished efficiently withtally marks: making a mark for each object and then counting all of the marks when done tallying. Tallying isbase 1counting.
Finger countingis convenient and common for small numbers. Children count on fingers to facilitate tallying and for performing simple mathematical operations. Older finger counting methods used the four fingers and the three bones in each finger (phalanges) to count to twelve.[3]Other hand-gesture systems are also in use, for example the Chinese system by which one can count to 10 using only gestures of one hand. Withfinger binaryit is possible to keep a finger count up to1023 = 210− 1.
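Finger binary can be sketched in a few lines of Python (an illustrative example, not from the article; the little-endian finger ordering is an arbitrary choice):

```python
def finger_binary(n):
    """Ten-finger binary representation of n (little-endian list:
    entry i is finger i, 1 = raised, 0 = lowered)."""
    if not 0 <= n <= 1023:
        raise ValueError("ten fingers represent at most 2**10 - 1 = 1023")
    return [(n >> i) & 1 for i in range(10)]

assert finger_binary(0) == [0] * 10
assert finger_binary(1023) == [1] * 10    # all fingers raised
# Round trip: reassembling the bits recovers the original number.
assert all(sum(b << i for i, b in enumerate(finger_binary(n))) == n
           for n in range(1024))
```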
Various devices can also be used to facilitate counting, such astally countersandabacuses.
Inclusive and exclusive counting are two different methods of counting. In exclusive counting, unit intervals are counted at the end of each interval; in inclusive counting, intervals are counted beginning with the start of the first interval and ending with the end of the last interval. For the same set, inclusive counting therefore always yields a count greater by one than exclusive counting. The introduction of the number zero to the number line largely resolved this difficulty; however, inclusive counting remains useful for some purposes.
Refer also to thefencepost error, which is a type ofoff-by-one error.
Modern mathematical English usage has introduced another difficulty, however. Because exclusive counting is generally tacitly assumed, the term "inclusive" is generally used in reference to a set which is actually counted exclusively. For example: how many numbers are included in the set that ranges from 3 to 8, inclusive? The set is counted exclusively, once the range of the set has been made certain by the use of the word "inclusive". The answer is 6, that is, 8 − 3 + 1, where the +1 range adjustment makes the adjusted exclusive count numerically equivalent to an inclusive count, even though the range of the inclusive count does not include the number eight unit interval. So it is necessary to distinguish between the terms "inclusive counting" and "inclusive" (or "inclusively"), and to recognize that it is not uncommon for the former term to be used loosely for the latter process.
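The 8 − 3 + 1 adjustment can be expressed directly in code. A small Python sketch (illustrative; the function names are invented here):

```python
def exclusive_count(start, end):
    """Number of unit intervals stepped through from start to end."""
    return end - start

def inclusive_count(start, end):
    """Number of members of the range start..end, both endpoints counted."""
    return end - start + 1

# The worked example from the text: 3 to 8 "inclusive" contains 6 numbers.
assert inclusive_count(3, 8) == 6
# Inclusive counting always exceeds exclusive counting by one.
assert inclusive_count(3, 8) == exclusive_count(3, 8) + 1
```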
Inclusive counting is usually encountered when dealing with time inRoman calendarsand theRomance languages.[4]In theancient Roman calendar, thenones(meaning "nine") is 8 days before theides; more generally, dates are specified as inclusively counted days up to the next named day.[4]
In theChristian liturgical calendar,Quinquagesima(meaning 50) is 49 days before Easter Sunday. When counting "inclusively", the Sunday (the start day) will beday 1and therefore the following Sunday will be theeighth day. For example, the French phrase for "fortnight" isquinzaine(15 [days]), and similar words are present in Greek (δεκαπενθήμερο,dekapenthímero), Spanish (quincena) and Portuguese (quinzena).
In contrast, the English word "fortnight" itself derives from "a fourteen-night", as the archaic "sennight" does from "a seven-night"; the English words are not examples of inclusive counting. In exclusive counting languages such as English, when counting eight days "from Sunday", Monday will beday 1, Tuesdayday 2, and the following Monday will be theeighth day.[citation needed]For many years it wasa standard practice in English lawfor the phrase "from a date" to mean "beginning on the day after that date": this practice is now deprecated because of the high risk of misunderstanding.[5]
Similar counting is involved inEast Asian age reckoning, in whichnewbornsare considered to be 1 at birth.
Musical terminology also uses inclusive counting ofintervalsbetween notes of the standard scale: going up one note is a second interval, going up two notes is a third interval, etc., and going up seven notes is anoctave.
Learning to count is an important educational/developmental milestone in most cultures of the world. Learning to count is a child's very first step into mathematics, and constitutes the most fundamental idea of that discipline. However, some cultures in Amazonia and the Australian Outback do not count,[6][7]and their languages do not have number words.
Many children at just 2 years of age have some skill in reciting the count list (that is, saying "one, two, three, ..."). They can also answer questions of ordinality for small numbers, for example, "What comes afterthree?". They can even be skilled at pointing to each object in a set and reciting the words one after another. This leads many parents and educators to the conclusion that the child knows how to use counting to determine the size of a set.[8]Research suggests that it takes about a year after learning these skills for a child to understand what they mean and why the procedures are performed.[9][10]In the meantime, children learn how to name cardinalities that they cansubitize.
In mathematics, the essence of counting a set and finding a resultn, is that it establishes aone-to-one correspondence(or bijection) of the subject set with the subset of positive integers {1, 2, ...,n}. A fundamental fact, which can be proved bymathematical induction, is that no bijection can exist between {1, 2, ...,n} and {1, 2, ...,m} unlessn=m; this fact (together with the fact that two bijections can becomposedto give another bijection) ensures that counting the same set in different ways can never result in different numbers (unless an error is made). This is the fundamental mathematical theorem that gives counting its purpose; however you count a (finite) set, the answer is the same. In a broader context, the theorem is an example of a theorem in the mathematical field of (finite)combinatorics—hence (finite) combinatorics is sometimes referred to as "the mathematics of counting."
Many sets that arise in mathematics do not allow a bijection to be established with {1, 2, ...,n} foranynatural numbern; these are calledinfinite sets, while those sets for which such a bijection does exist (for somen) are calledfinite sets. Infinite sets cannot be counted in the usual sense; for one thing, the mathematical theorems which underlie this usual sense for finite sets are false for infinite sets. Furthermore, different definitions of the concepts in terms of which these theorems are stated, while equivalent for finite sets, are inequivalent in the context of infinite sets.
The notion of counting may be extended to them in the sense of establishing (the existence of) a bijection with some well-understood set. For instance, if a set can be brought into bijection with the set of all natural numbers, then it is called "countably infinite." This kind of counting differs in a fundamental way from counting of finite sets, in that adding new elements to a set does not necessarily increase its size, because the possibility of a bijection with the original set is not excluded. For instance, the set of allintegers(including negative numbers) can be brought into bijection with the set of natural numbers, and even seemingly much larger sets like that of all finite sequences of rational numbers are still (only) countably infinite. Nevertheless, there are sets, such as the set ofreal numbers, that can be shown to be "too large" to admit a bijection with the natural numbers, and these sets are called "uncountable". Sets for which there exists a bijection between them are said to have the samecardinality, and in the most general sense counting a set can be taken to mean determining its cardinality. Beyond the cardinalities given by each of the natural numbers, there is an infinite hierarchy of infinite cardinalities, although only very few such cardinalities occur in ordinary mathematics (that is, outsideset theorythat explicitly studies possible cardinalities).
Counting, mostly of finite sets, has various applications in mathematics. One important principle is that if two setsXandYhave the same finite number of elements, and a functionf:X→Yis known to beinjective, then it is alsosurjective, and vice versa. A related fact is known as thepigeonhole principle, which states that if two setsXandYhave finite numbers of elementsnandmwithn>m, then any mapf:X→Yisnotinjective (so there exist two distinct elements ofXthatfsends to the same element ofY); this follows from the former principle, since iffwere injective, then so would itsrestrictionto a strict subsetSofXwithmelements, which restriction would then be surjective, contradicting the fact that forxinXoutsideS,f(x) cannot be in the image of the restriction. Similar counting arguments can prove the existence of certain objects without explicitly providing an example. In the case of infinite sets this can even apply in situations where it is impossible to give an example.[citation needed]
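The pigeonhole principle can be verified exhaustively for small sets. The Python sketch below (illustrative, not from the article) checks every possible map from a 4-element set to a 3-element set:

```python
from itertools import product

def has_collision(mapping):
    """True if two distinct keys share the same value."""
    return len(set(mapping.values())) < len(mapping)

# Exhaustive check of the pigeonhole principle for |X| = 4 > |Y| = 3:
# every one of the 3**4 = 81 maps f: X -> Y identifies two elements of X.
X, Y = range(4), range(3)
for images in product(Y, repeat=len(X)):
    assert has_collision(dict(zip(X, images)))
```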
The domain ofenumerative combinatoricsdeals with computing the number of elements of finite sets, without actually counting them; the latter usually being impossible because infinite families of finite sets are considered at once, such as the set ofpermutationsof {1, 2, ...,n} for any natural numbern.
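For instance, the number n! of permutations of {1, 2, ..., n} can be computed without listing them, and brute-force enumeration confirms the closed form for small n (an illustrative Python check, not from the article):

```python
from itertools import permutations
from math import factorial

# The closed-form count n! agrees with brute-force enumeration for small n.
for n in range(7):
    assert len(list(permutations(range(n)))) == factorial(n)
```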
https://en.wikipedia.org/wiki/Counting
Hilbert's paradox of the Grand Hotel(colloquial:Infinite Hotel ParadoxorHilbert's Hotel) is athought experimentwhich illustrates acounterintuitiveproperty ofinfinite sets. It is demonstrated that a fully occupied hotel with infinitely many rooms may still accommodate additional guests, even infinitely many of them, and this process may be repeated infinitely often. The idea was introduced byDavid Hilbertin a 1925 lecture "Über das Unendliche", reprinted in (Hilbert 2013, p.730), and was popularized throughGeorge Gamow's 1947 bookOne Two Three... Infinity.[1][2]
Hilbert imagines a hypothetical hotel with rooms numbered 1, 2, 3, and so on with no upper limit. This is called acountably infinitenumber of rooms. Initially every room is occupied, and yet new visitors arrive, each expecting their own room. A normal, finite hotel could not accommodate new guests once every room is full. However, it can be shown that the existing guests and newcomers — even an infinite number of them — can each have their own room in the infinite hotel.
With one additional guest, the hotel can accommodate them and the existing guests if infinitely many guests simultaneously move rooms. The guest currently in room 1 moves to room 2, the guest currently in room 2 to room 3, and so on, moving every guest from their current roomnto roomn+1. The infinite hotel has no final room, so every guest has a room to go to. After this, room 1 is empty and the new guest can be moved into that room. By repeating this procedure, it is possible to make room for any finite number of new guests. In general, whenkguests seek a room, the hotel can apply the same procedure and move every guest from roomnto roomn + k.
It is also possible to accommodate acountably infinitenumber of new guests: just move the person occupying room 1 to room 2, the guest occupying room 2 to room 4, and, in general, the guest occupying roomnto room 2n(2 timesn), and all the odd-numbered rooms (which are countably infinite) will be free for the new guests.
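Both room-shifting schemes can be checked on a finite window of the hotel. An illustrative Python sketch (not from the article; the window size is arbitrary):

```python
def room_after_finite(n, k):
    """When k new guests arrive, the guest in room n moves to room n + k."""
    return n + k

def room_after_infinite(n):
    """For one countably infinite coach, the guest in room n moves to 2n."""
    return 2 * n

rooms = range(1, 1001)                      # a finite window of the hotel
moved = {room_after_infinite(n) for n in rooms}
assert len(moved) == len(rooms)             # no two existing guests collide
assert all(r % 2 == 0 for r in moved)       # every odd room is freed
assert len({room_after_finite(n, 5) for n in rooms}) == len(rooms)
```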
It is possible to accommodate countably infinitely manycoachloadsof countably infinite passengers each, by several different methods. Most methods depend on the seats in the coaches being already numbered (or use theaxiom of countable choice). In general anypairing functioncan be used to solve this problem. For each of these methods, consider a passenger's seat number on a coach to ben{\displaystyle n}, and their coach number to bec{\displaystyle c}, and the numbersn{\displaystyle n}andc{\displaystyle c}are then fed into the two arguments of thepairing function.
Send the guest in roomi{\displaystyle i}to room2i{\displaystyle 2^{i}}, then put the first coach's load in rooms3n{\displaystyle 3^{n}}, the second coach's load in rooms5n{\displaystyle 5^{n}}; in general for coach numberc{\displaystyle c}we use the roomspcn{\displaystyle p_{c}^{n}}wherepc{\displaystyle p_{c}}is thec{\displaystyle c}th oddprime number. This solution leaves certain rooms empty (which may or may not be useful to the hotel); specifically, all numbers that are notprime powers, such as 15 or 847, will no longer be occupied. (So, strictly speaking, this shows that the number of arrivals isless than or equal tothe number of vacancies created. It is easier to show, by an independent means, that the number of arrivals is alsogreater than or equal tothe number of vacancies, and thus that they areequal, than to modify the algorithm to an exact fit.) (The algorithm works equally well if one interchangesn{\displaystyle n}andc{\displaystyle c}, but whichever choice is made, it must be applied uniformly throughout.)
Each person of a certain seat s and coach c can be put into room 2^s · 3^c (presuming c = 0 for the people already in the hotel, 1 for the first coach, etc.). Because every number has a uniqueprime factorization, it is easy to see that all people will have a room, while no two people will end up in the same room. For example, the person in room 2592 = 2^5 · 3^4 was sitting in the 5th seat of the 4th coach. Like the prime powers method, this solution leaves certain rooms empty.
This method can also easily be expanded for infinite nights, infinite entrances, etc. (2s3c5n7e...{\displaystyle 2^{s}3^{c}5^{n}7^{e}...})
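The 2^s · 3^c assignment and its decoding by prime factorization can be sketched as follows (illustrative Python, not from the article):

```python
def room_for(seat, coach):
    """Room 2**s * 3**c for seat s on coach c (coach 0 = current guests)."""
    return 2 ** seat * 3 ** coach

def decode(room):
    """Recover (seat, coach) from a room number via its prime factorization."""
    seat = coach = 0
    while room % 2 == 0:
        room //= 2
        seat += 1
    while room % 3 == 0:
        room //= 3
        coach += 1
    return seat, coach

# The worked example from the text: room 2592 = 2**5 * 3**4.
assert room_for(5, 4) == 2592
assert decode(2592) == (5, 4)
```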
For each passenger, compare the lengths ofn{\displaystyle n}andc{\displaystyle c}as written in any positionalnumeral system, such asdecimal. (Treat each hotel resident as being in coach #0.) If either number is shorter, addleading zeroesto it until both values have the same number of digits.Interleavethe digits to produce a room number: its digits will be [first digit of coach number]-[first digit of seat number]-[second digit of coach number]-[second digit of seat number]-etc. The hotel (coach #0) guest in room number 1729 moves to room 01070209 (i.e., room 1,070,209). The passenger on seat 1234 of coach 789 goes to room 01728394 (i.e., room 1,728,394).
Unlike the prime powers solution, this one fills the hotel completely, and we can reconstruct a guest's original coach and seat by reversing the interleaving process. First add a leading zero if the room has an odd number of digits. Then de-interleave the number into two numbers: the coach number consists of the odd-numbered digits and the seat number is the even-numbered ones. Of course, the original encoding is arbitrary, and the roles of the two numbers can be reversed (seat-odd and coach-even), so long as it is applied consistently.
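The interleaving and de-interleaving just described can be implemented directly, with the worked examples from the text serving as checks (illustrative Python sketch):

```python
def interleave(coach, seat):
    """Room number formed by alternating coach and seat digits."""
    c, s = str(coach), str(seat)
    width = max(len(c), len(s))
    c, s = c.zfill(width), s.zfill(width)          # pad with leading zeroes
    return int("".join(a + b for a, b in zip(c, s)))

def deinterleave(room):
    """Recover (coach, seat); pad to an even digit count first."""
    digits = str(room)
    if len(digits) % 2:
        digits = "0" + digits
    return int(digits[0::2]), int(digits[1::2])

# The worked examples from the text.
assert interleave(0, 1729) == 1070209          # hotel guest (coach 0)
assert interleave(789, 1234) == 1728394        # coach 789, seat 1234
assert deinterleave(1728394) == (789, 1234)
```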
Those already in the hotel will be moved to room (n² + n)/2, the nthtriangular number. Those in a coach will be in room ((c + n − 1)² + (c + n − 1))/2 + n, the (c + n − 1)th triangular number plus n. In this way all the rooms will be filled by one, and only one, guest.
This pairing function can be demonstrated visually by structuring the hotel as a one-room-deep, infinitely tallpyramid. The pyramid's topmost row is a single room: room 1; its second row is rooms 2 and 3; and so on. The column formed by the set of rightmost rooms will correspond to the triangular numbers. Once they are filled (by the hotel's redistributed occupants), the remaining empty rooms form the shape of a pyramid exactly identical to the original shape. Thus, the process can be repeated for each infinite set. Doing this one at a time for each coach would require an infinite number of steps, but by using the prior formulas, a guest can determine what their room "will be" once their coach has been reached in the process, and can simply go there immediately.
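On a finite portion of the hotel, the triangular-number assignment can be checked to fill every room exactly once (illustrative Python sketch; the cutoff N is arbitrary):

```python
def tri(m):
    """m-th triangular number, m*(m+1)/2."""
    return m * (m + 1) // 2

def triangular_room(coach, seat):
    """Coach 0 = guests already in the hotel (room tri(seat));
    otherwise room tri(coach + seat - 1) + seat, per the formulas above."""
    if coach == 0:
        return tri(seat)
    return tri(coach + seat - 1) + seat

# Every room 1..tri(N) is used by exactly one guest on this finite cutoff.
N = 40
rooms = {triangular_room(0, n) for n in range(1, N + 1)}
rooms |= {triangular_room(c, n)
          for c in range(1, N) for n in range(1, N) if c + n - 1 <= N - 1}
assert rooms == set(range(1, tri(N) + 1))
```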
LetS:={(a,b)∣a,b∈N}{\displaystyle S:=\{(a,b)\mid a,b\in \mathbb {N} \}}.S{\displaystyle S}is countable sinceN{\displaystyle \mathbb {N} }is countable, hence we may enumerate its elementss1,s2,…{\displaystyle s_{1},s_{2},\dots }. Now ifsn=(a,b){\displaystyle s_{n}=(a,b)}, assign theb{\displaystyle b}th guest of thea{\displaystyle a}th coach to then{\displaystyle n}th room (consider the guests already in the hotel as guests of the0{\displaystyle 0}th coach). Thus we have a function assigning each person to a room; furthermore, this assignment does not skip over any rooms.
Suppose the hotel is next to an ocean, and an infinite number ofcar ferriesarrive, each bearing an infinite number of coaches, each with an infinite number of passengers. This is a situation involving three "levels" ofinfinity, and it can be solved by extensions of any of the previous solutions.
Theprime factorizationmethod can be applied by adding a new prime number for every additional layer of infinity (2s3c5f{\displaystyle 2^{s}3^{c}5^{f}}, withf{\displaystyle f}the ferry).
The prime power solution can be applied with furtherexponentiationof prime numbers, resulting in very large room numbers even given small inputs. For example, the passenger in the second seat of the third bus on the second ferry (address 2-3-2) would raise the 2nd odd prime (5) to 49, which is the result of the 3rd odd prime (7) being raised to the power of his seat number (2). This room number would have over thirty decimal digits.
The interleaving method can be used with three interleaved "strands" instead of two. The passenger with the address 2-3-2 would go to room 232, while the one with the address 4935-198-82217 would go to room #008,402,912,391,587 (the leading zeroes can be removed).
Anticipating the possibility of any number of layers of infinite guests, the hotel may wish to assign rooms such that no guest will need to move, no matter how many guests arrive afterward. One solution is to convert each arrival's address into abinary numberin which ones are used as separators at the start of each layer, while a number within a given layer (such as a guest's coach number) is represented with that many zeroes. Thus, a guest with the prior address 2-5-1-3-1 (five infinite layers) would go to room 10010000010100010 (decimal 73890).
As an added step in this process, one zero can be removed from each section of the number; in this example, the guest's new room is 101000011001 (decimal 2585). This ensures that every room could be filled by a hypothetical guest. If no infinite sets of guests arrive, then only rooms that are a power of two will be occupied.
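The compressed encoding (one zero removed per layer) can be sketched and checked against the worked example from the text (illustrative Python, not from the article):

```python
def compressed_room(address):
    """Binary room number for a layered address: each layer contributes a
    '1' separator followed by (value - 1) zeroes (the compressed scheme)."""
    bits = "".join("1" + "0" * (layer - 1) for layer in address)
    return int(bits, 2)

# The worked example from the text: address 2-5-1-3-1 -> binary 101000011001.
assert compressed_room([2, 5, 1, 3, 1]) == 0b101000011001 == 2585
```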
Hilbert's paradox is averidical paradox: it leads to acounter-intuitiveresult that isprovablytrue. The statements "there is a guest to every room" and "no more guests can be accommodated" are notequivalentwhen there are infinitely many rooms.
Initially, this state of affairs might seem to be counter-intuitive. The properties of infinite collections of things are quite different from those of finite collections of things. The paradox of Hilbert's Grand Hotel can be understood by using Cantor's theory oftransfinite numbers. Thus, in an ordinary (finite) hotel with more than one room, the number of odd-numbered rooms is obviously smaller than the total number of rooms. However, in Hilbert's Grand Hotel, the quantity of odd-numbered rooms is not smaller than the total "number" of rooms. In mathematical terms, thecardinalityof thesubsetcontaining the odd-numbered rooms is the same as the cardinality of thesetof all rooms. Indeed, infinite sets are characterized as sets that have proper subsets of the same cardinality. For countable sets (sets with the same cardinality as thenatural numbers) this cardinality isℵ0{\displaystyle \aleph _{0}}.[3]
Rephrased, for any countably infinite set, there exists abijectivefunction which maps the countably infinite set to the set of natural numbers, even if the countably infinite set contains the natural numbers. For example, the set of rational numbers—those numbers which can be written as a quotient of integers—contains the natural numbers as a subset, but is no bigger than the set of natural numbers since the rationals are countable: there is a bijection from the naturals to the rationals.
https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel
Inmathematics, particularly inset theory, thebeth numbersare a certain sequence ofinfinitecardinal numbers(also known astransfinite numbers), conventionally writtenℶ0,ℶ1,ℶ2,ℶ3,…{\displaystyle \beth _{0},\beth _{1},\beth _{2},\beth _{3},\dots }, whereℶ{\displaystyle \beth }is theHebrew letterbeth. The beth numbers are related to thealeph numbers(ℵ0,ℵ1,…{\displaystyle \aleph _{0},\aleph _{1},\dots }), but unless thegeneralized continuum hypothesisis true, there are numbers indexed byℵ{\displaystyle \aleph }that are not indexed byℶ{\displaystyle \beth }orℷ{\displaystyle \gimel }. See:Gimel function
Beth numbers are defined bytransfinite recursion: ℶ0 = ℵ0; ℶα+1 = 2^ℶα; and ℶλ = sup{ ℶα : α < λ }, where α is an ordinal and λ is alimit ordinal.[1]
The cardinalℶ0=ℵ0{\displaystyle \beth _{0}=\aleph _{0}}is the cardinality of anycountably infinitesetsuch as the setN{\displaystyle \mathbb {N} }ofnatural numbers, so thatℶ0=|N|{\displaystyle \beth _{0}=|\mathbb {N} |}.
Let α be anordinal, and Aα be a set with cardinality ℶα = |Aα|. Then ℶα+1 = 2^ℶα is the cardinality of the power set of Aα. Given this definition, ℶ0, ℶ1, ℶ2, ℶ3, … are respectively the cardinalities of ℕ, P(ℕ), P(P(ℕ)), P(P(P(ℕ))), …,
so that the second beth numberℶ1{\displaystyle \beth _{1}}is equal toc{\displaystyle {\mathfrak {c}}}, thecardinality of the continuum(the cardinality of the set of thereal numbers), and the third beth numberℶ2{\displaystyle \beth _{2}}is the cardinality of the power set of the continuum.
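A finite analogue shows the strict growth driving the beth sequence: iterating the power set multiplies sizes by powers of two, and each stage is strictly larger than the last, since |P(A)| = 2^|A| > |A| by Cantor's theorem. An illustrative Python sketch (not from the article):

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, each as a tuple."""
    s = list(s)
    return list(chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

# Finite analogue of the beth sequence: sizes 3, 2**3 = 8, 2**8 = 256, ...
A0 = [0, 1, 2]
A1 = powerset(A0)
A2 = powerset(A1)
assert len(A1) == 2 ** len(A0) == 8
assert len(A2) == 2 ** len(A1) == 256
```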
Because of Cantor's theorem, each set in the preceding sequence has cardinality strictly greater than the one preceding it. For infinite limit ordinals λ, the corresponding beth number is defined to be the supremum of the beth numbers for all ordinals strictly smaller than λ: ℶ_λ = sup{ℶ_α : α < λ}.
One can show that this definition is equivalent to
For instance:
This equivalence can be shown by seeing that:
Note that this behavior differs from that of successor ordinals. Cardinalities less than ℶ_β but greater than every ℶ_α with α < β can exist when β is a successor ordinal (in that case their existence is undecidable in ZFC and controlled by the generalized continuum hypothesis); they cannot exist when β is a limit ordinal, even under the second definition presented.
One can also show that the von Neumann universes V_{ω+α} have cardinality ℶ_α.
Assuming the axiom of choice, infinite cardinalities are linearly ordered; no two cardinalities can fail to be comparable. Thus, since by definition no infinite cardinalities lie between ℵ_0 and ℵ_1, it follows that ℶ_1 ≥ ℵ_1.
Repeating this argument (see transfinite induction) yields ℶ_α ≥ ℵ_α for all ordinals α.
The continuum hypothesis is equivalent to ℶ_1 = ℵ_1.
The generalized continuum hypothesis says the sequence of beth numbers thus defined is the same as the sequence of aleph numbers, i.e., ℶ_α = ℵ_α for all ordinals α.
Since ℶ_0 is defined to be ℵ_0, or aleph null, sets with cardinality ℶ_0 include:
Sets with cardinality ℶ_1 include:
ℶ_2 (pronounced beth two) is also referred to as 2^𝔠 (pronounced two to the power of 𝔠).
Sets with cardinality ℶ_2 include:
ℶ_ω (pronounced beth omega) is the smallest uncountable strong limit cardinal.
The more general symbol ℶ_α(κ), for ordinals α and cardinals κ, is occasionally used. It is defined by: ℶ_0(κ) = κ, ℶ_{α+1}(κ) = 2^{ℶ_α(κ)}, and ℶ_λ(κ) = sup{ℶ_α(κ) : α < λ} for limit λ.
So ℶ_α(ℵ_0) = ℶ_α.
In Zermelo–Fraenkel set theory (ZF), for any cardinals κ and μ, there is an ordinal α such that κ ≤ ℶ_α(μ).
And in ZF, for any cardinal κ and ordinals α and β: ℶ_β(ℶ_α(κ)) = ℶ_{α+β}(κ).
Consequently, in ZF absent ur-elements, with or without the axiom of choice, for any cardinals κ and μ the equality ℶ_β(κ) = ℶ_β(μ) holds for all sufficiently large ordinals β. That is, there is an ordinal α such that the equality holds for every ordinal β ≥ α.
This also holds in Zermelo–Fraenkel set theory with ur-elements (with or without the axiom of choice), provided that the ur-elements form a set which is equinumerous with a pure set (a set whose transitive closure contains no ur-elements). If the axiom of choice holds, then any set of ur-elements is equinumerous with a pure set.
Borel determinacy is implied by the existence of all beths of countable index.[5]
|
https://en.wikipedia.org/wiki/Beth_number
|
In mathematics, the first uncountable ordinal, traditionally denoted by ω_1 or sometimes by Ω, is the smallest ordinal number that, considered as a set, is uncountable. It is the supremum (least upper bound) of all countable ordinals. When considered as a set, the elements of ω_1 are the countable ordinals (including finite ordinals),[1] of which there are uncountably many.
Like any ordinal number (in von Neumann's approach), ω_1 is a well-ordered set, with set membership serving as the order relation. ω_1 is a limit ordinal, i.e. there is no ordinal α such that ω_1 = α + 1.
The cardinality of the set ω_1 is the first uncountable cardinal number, ℵ_1 (aleph-one). The ordinal ω_1 is thus the initial ordinal of ℵ_1. Under the continuum hypothesis, the cardinality of ω_1 is ℶ_1, the same as that of ℝ, the set of real numbers.[2]
In most constructions, ω_1 and ℵ_1 are considered equal as sets. To generalize: if α is an arbitrary ordinal, we define ω_α as the initial ordinal of the cardinal ℵ_α.
The existence of ω_1 can be proven without the axiom of choice. For more, see Hartogs number.
Any ordinal number can be turned into a topological space by using the order topology. When viewed as a topological space, ω_1 is often written as [0, ω_1), to emphasize that it is the space consisting of all ordinals smaller than ω_1.
If the axiom of countable choice holds, every increasing ω-sequence of elements of [0, ω_1) converges to a limit in [0, ω_1). The reason is that the union (i.e., supremum) of every countable set of countable ordinals is another countable ordinal.
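The countability fact behind this convergence (a countable union of countable sets is countable) can be made concrete for sequences: visiting a doubly indexed family by anti-diagonals reaches every element after finitely many steps. A minimal sketch; the `diagonalize` helper is illustrative, not from the article:

```python
def diagonalize(element):
    """Enumerate element(i, j) for every pair (i, j) of naturals.

    Pairs are visited by anti-diagonals i + j = 0, 1, 2, ..., so any
    given pair is reached after finitely many steps: a countable family
    of countable sequences is itself countable.
    """
    k = 0
    while True:
        for i in range(k + 1):      # anti-diagonal i + j == k
            yield element(i, k - i)
        k += 1
```

For example, `diagonalize(lambda i, j: (i, j))` yields (0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), … and eventually reaches every index pair.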
The topological space [0, ω_1) is sequentially compact, but not compact. As a consequence, it is not metrizable. It is, however, countably compact and thus not Lindelöf (a countably compact space is compact if and only if it is Lindelöf). In terms of axioms of countability, [0, ω_1) is first-countable, but neither separable nor second-countable.
The space [0, ω_1] = ω_1 + 1 is compact and not first-countable. ω_1 is used to define the long line and the Tychonoff plank, two important counterexamples in topology.
|
https://en.wikipedia.org/wiki/First_uncountable_ordinal
|
Cantor's first set theory article contains Georg Cantor's first theorems of transfinite set theory, which studies infinite sets and their properties. One of these theorems is his "revolutionary discovery" that the set of all real numbers is uncountably, rather than countably, infinite.[1] This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. The title of the article, "On a Property of the Collection of All Real Algebraic Numbers" ("Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen"), refers to its first theorem: the set of real algebraic numbers is countable. Cantor's article was published in 1874. In 1879, he modified his uncountability proof by using the topological notion of a set being dense in an interval.
Cantor's article also contains a proof of the existence of transcendental numbers. Both constructive and non-constructive proofs have been presented as "Cantor's proof." The popularity of presenting a non-constructive proof has led to a misconception that Cantor's arguments are non-constructive. Since the proof that Cantor published either constructs transcendental numbers or does not, an analysis of his article can determine whether or not this proof is constructive.[2] Cantor's correspondence with Richard Dedekind shows the development of his ideas and reveals that he had a choice between two proofs: a non-constructive proof that uses the uncountability of the real numbers and a constructive proof that does not use uncountability.
Historians of mathematics have examined Cantor's article and the circumstances in which it was written. For example, they have discovered that Cantor was advised to leave out his uncountability theorem in the article he submitted; he added it during proofreading. They have traced this and other facts about the article to the influence of Karl Weierstrass and Leopold Kronecker. Historians have also studied Dedekind's contributions to the article, including his contributions to the theorem on the countability of the real algebraic numbers. In addition, they have recognized the role played by the uncountability theorem and the concept of countability in the development of set theory, measure theory, and the Lebesgue integral.
Cantor's article is short, less than four and a half pages.[A] It begins with a discussion of the real algebraic numbers and a statement of his first theorem: The set of real algebraic numbers can be put into one-to-one correspondence with the set of positive integers.[3] Cantor restates this theorem in terms more familiar to mathematicians of his time: "The set of real algebraic numbers can be written as an infinite sequence in which each number appears only once."[4]
Cantor's second theorem works with a closed interval [a, b], which is the set of real numbers ≥ a and ≤ b. The theorem states: Given any sequence of real numbers x_1, x_2, x_3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence. Hence, there are infinitely many such numbers.[5]
Cantor observes that combining his two theorems yields a new proof of Liouville's theorem that every interval [a, b] contains infinitely many transcendental numbers.[5]
Cantor then remarks that his second theorem is:
the reason why collections of real numbers forming a so-called continuum (such as, all real numbers which are ≥ 0 and ≤ 1) cannot correspond one-to-one with the collection (ν) [the collection of all positive integers]; thus I have found the clear difference between a so-called continuum and a collection like the totality of real algebraic numbers.[6]
This remark contains Cantor's uncountability theorem, which only states that an interval [a, b] cannot be put into one-to-one correspondence with the set of positive integers. It does not state that this interval is an infinite set of larger cardinality than the set of positive integers. Cardinality is defined in Cantor's next article, which was published in 1878.[7]
Cantor only states his uncountability theorem. He does not use it in any proofs.[3]
To prove that the set of real algebraic numbers is countable, define the height of a polynomial of degree n with integer coefficients as: n − 1 + |a_0| + |a_1| + ... + |a_n|, where a_0, a_1, ..., a_n are the coefficients of the polynomial. Order the polynomials by their height, and order the real roots of polynomials of the same height by numeric order. Since there are only a finite number of roots of polynomials of a given height, these orderings put the real algebraic numbers into a sequence. Cantor went a step further and produced a sequence in which each real algebraic number appears just once. He did this by only using polynomials that are irreducible over the integers. The following table contains the beginning of Cantor's enumeration.[9]
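The height ordering is directly executable for low degrees. The sketch below is an illustration restricted to degrees 1 and 2 (Cantor's enumeration covers all degrees, and removes duplicates via irreducibility rather than the root-level deduplication used here); it lists the real algebraic numbers that first appear at each height:

```python
from fractions import Fraction
from itertools import product
from math import isqrt

def real_roots(coeffs):
    """Real roots of a0 + a1*x (+ a2*x^2); exact Fractions when rational."""
    if len(coeffs) == 2:                      # linear: a0 + a1*x
        a0, a1 = coeffs
        return [Fraction(-a0, a1)]
    a0, a1, a2 = coeffs                       # quadratic: a0 + a1*x + a2*x^2
    disc = a1 * a1 - 4 * a2 * a0
    if disc < 0:
        return []
    s = isqrt(disc)
    if s * s == disc:                         # perfect square -> rational roots
        return [Fraction(-a1 + s, 2 * a2), Fraction(-a1 - s, 2 * a2)]
    r = disc ** 0.5                           # irrational roots, as floats
    return [(-a1 + r) / (2 * a2), (-a1 - r) / (2 * a2)]

def roots_by_height(max_height):
    """Group real algebraic numbers by the height where they first appear.

    Cantor's height of a degree-n integer polynomial is
    (n - 1) + |a0| + ... + |an|; only degrees 1 and 2 are covered here.
    """
    seen, table = set(), {}
    for h in range(1, max_height + 1):
        new = set()
        for n in (1, 2):                      # degree of the polynomial
            s = h - (n - 1)                   # required |a0| + ... + |an|
            if s < 1:
                continue
            for coeffs in product(range(-s, s + 1), repeat=n + 1):
                if coeffs[-1] == 0 or sum(map(abs, coeffs)) != s:
                    continue                  # need degree n exactly, height h
                for r in real_roots(coeffs):
                    if r not in seen:         # keep a root at its first height
                        seen.add(r)
                        new.add(r)
        table[h] = sorted(new)
    return table
```

Heights 1, 2, 3 yield 0; then −1, 1; then −2, −1/2, 1/2, 2, matching the start of Cantor's enumeration.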
Only the first part of Cantor's second theorem needs to be proved. It states: Given any sequence of real numbers x_1, x_2, x_3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence.[B]
To find a number in [a, b] that is not contained in the given sequence, construct two sequences of real numbers as follows: Find the first two numbers of the given sequence that are in the open interval (a, b). Denote the smaller of these two numbers by a_1 and the larger by b_1. Similarly, find the first two numbers of the given sequence that are in (a_1, b_1). Denote the smaller by a_2 and the larger by b_2. Continuing this procedure generates a sequence of intervals (a_1, b_1), (a_2, b_2), (a_3, b_3), ... such that each interval in the sequence contains all succeeding intervals; that is, it generates a sequence of nested intervals. This implies that the sequence a_1, a_2, a_3, ... is increasing and the sequence b_1, b_2, b_3, ... is decreasing.[10]
Either the number of intervals generated is finite or infinite. If finite, let (a_L, b_L) be the last interval. If infinite, take the limits a_∞ = lim_{n→∞} a_n and b_∞ = lim_{n→∞} b_n. Since a_n < b_n for all n, either a_∞ = b_∞ or a_∞ < b_∞. Thus, there are three cases to consider:
The proof is complete since, in all cases, at least one real number in [a, b] has been found that is not contained in the given sequence.[D]
Cantor's proofs are constructive and have been used to write a computer program that generates the digits of a transcendental number. This program applies Cantor's construction to a sequence containing all the real algebraic numbers between 0 and 1. The article that discusses this program gives some of its output, which shows how the construction generates a transcendental.[12]
An example illustrates how Cantor's construction works. Consider the sequence: 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, ... This sequence is obtained by ordering the rational numbers in (0, 1) by increasing denominators, ordering those with the same denominator by increasing numerators, and omitting reducible fractions. The table below shows the first five steps of the construction. The table's first column contains the intervals (a_n, b_n). The second column lists the terms visited during the search for the first two terms in (a_n, b_n). These two terms are in red.[13]
Since the sequence contains all the rational numbers in (0, 1), the construction generates an irrational number, which turns out to be √2 − 1.[14]
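This example is easy to run. The sketch below is a direct, unoptimized implementation; the denominator cutoff 600 is an arbitrary finite truncation of the infinite sequence, large enough for seven steps of the construction:

```python
from fractions import Fraction
from math import gcd, sqrt

def rationals(max_den):
    """The sequence 1/2, 1/3, 2/3, 1/4, 3/4, ...: rationals in (0, 1)
    ordered by denominator, then numerator, skipping reducible fractions."""
    return [Fraction(p, q)
            for q in range(2, max_den + 1)
            for p in range(1, q)
            if gcd(p, q) == 1]

def cantor_intervals(seq, steps):
    """Cantor's 1874 construction: repeatedly take the first two terms of
    the sequence lying in the current open interval as the next endpoints."""
    a, b = Fraction(0), Fraction(1)
    intervals = []
    for _ in range(steps):
        inside = [x for x in seq if a < x < b][:2]   # first two terms inside
        a, b = min(inside), max(inside)
        intervals.append((a, b))
    return intervals

ivs = cantor_intervals(rationals(600), 7)
```

The first intervals are (1/3, 1/2) and (2/5, 3/7), as in the article's table, and the endpoints close in on √2 − 1 ≈ 0.41421.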
Cantor's construction produces mediants because the rational numbers were sequenced by increasing denominator. The first interval in the table is (1/3, 1/2). Since 1/3 and 1/2 are adjacent in F_3, their mediant 2/5 is the first fraction in the sequence between 1/3 and 1/2. Hence, 1/3 < 2/5 < 1/2. In this inequality, 1/2 has the smallest denominator, so the second fraction is the mediant of 2/5 and 1/2, which equals 3/7. This implies: 1/3 < 2/5 < 3/7 < 1/2. Therefore, the next interval is (2/5, 3/7).
We will prove that the endpoints of the intervals converge to the continued fraction [0; 2, 2, …]. This continued fraction is the limit of its convergents:
The p_n and q_n sequences satisfy the equations:[16]
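The elided convergents and recurrences are, presumably, the standard continued-fraction relations; for [0; 2, 2, …] they read:

```latex
\frac{p_1}{q_1}=\frac{1}{2},\quad
\frac{p_2}{q_2}=\frac{2}{5},\quad
\frac{p_3}{q_3}=\frac{5}{12},\quad
\frac{p_4}{q_4}=\frac{12}{29},\ \dots
\qquad
p_n = 2p_{n-1} + p_{n-2},\quad
q_n = 2q_{n-1} + q_{n-2}
\qquad (p_0 = 0,\ q_0 = 1,\ p_1 = 1,\ q_1 = 2),
\qquad
\lim_{n\to\infty}\frac{p_n}{q_n} = \sqrt{2}-1 .
```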
First, we prove by induction that for odd n, the n-th interval in the table is:
((p_n + p_{n−1})/(q_n + q_{n−1}), p_n/q_n),
and for even n, the interval's endpoints are reversed: (p_n/q_n, (p_n + p_{n−1})/(q_n + q_{n−1})).
This is true for the first interval since (p_1 + p_0)/(q_1 + q_0) = 1/3 and p_1/q_1 = 1/2.
Assume that the inductive hypothesis is true for the k-th interval. If k is odd, this interval is ((p_k + p_{k−1})/(q_k + q_{k−1}), p_k/q_k).
The mediant of its endpoints, (2p_k + p_{k−1})/(2q_k + q_{k−1}) = p_{k+1}/q_{k+1}, is the first fraction in the sequence between these endpoints.
Hence, (p_k + p_{k−1})/(q_k + q_{k−1}) < p_{k+1}/q_{k+1} < p_k/q_k.
In this inequality, p_k/q_k has the smallest denominator, so the second fraction is the mediant of p_{k+1}/q_{k+1} and p_k/q_k, which equals (p_{k+1} + p_k)/(q_{k+1} + q_k).
This implies: (p_k + p_{k−1})/(q_k + q_{k−1}) < p_{k+1}/q_{k+1} < (p_{k+1} + p_k)/(q_{k+1} + q_k) < p_k/q_k.
Therefore, the (k + 1)-st interval is (p_{k+1}/q_{k+1}, (p_{k+1} + p_k)/(q_{k+1} + q_k)).
This is the desired interval; p_{k+1}/q_{k+1} is the left endpoint because k + 1 is even. Thus, the inductive hypothesis is true for the (k + 1)-st interval. For even k, the proof is similar. This completes the inductive proof.
Since the right endpoints of the intervals are decreasing and every other endpoint is p_{2n−1}/q_{2n−1}, their limit equals lim_{n→∞} p_n/q_n. The left endpoints have the same limit because they are increasing and every other endpoint is p_{2n}/q_{2n}. As mentioned above, this limit is the continued fraction [0; 2, 2, …], which equals √2 − 1.[17]
In 1879, Cantor published a new uncountability proof that modifies his 1874 proof. He first defines the topological notion of a point set P being "everywhere dense in an interval":[E]
In this discussion of Cantor's proof, a, b, c, d are used instead of α, β, γ, δ. Also, Cantor only uses his interval notation if the first endpoint is less than the second. For this discussion, this means that (a, b) implies a < b.
Since the discussion of Cantor's 1874 proof was simplified by using open intervals rather than closed intervals, the same simplification is used here. This requires an equivalent definition of everywhere dense: A set P is everywhere dense in the interval [a, b] if and only if every open subinterval (c, d) of [a, b] contains at least one point of P.[18]
Cantor did not specify how many points of P an open subinterval (c, d) must contain. He did not need to specify this because the assumption that every open subinterval contains at least one point of P implies that every open subinterval contains infinitely many points of P.[G]
Cantor modified his 1874 proof with a new proof of its second theorem: Given any sequence P of real numbers x_1, x_2, x_3, ... and any interval [a, b], there is a number in [a, b] that is not contained in P. Cantor's new proof has only two cases. First, it handles the case of P not being dense in the interval; then it deals with the more difficult case of P being dense in the interval. This division into cases not only indicates which sequences are more difficult to handle, but it also reveals the important role denseness plays in the proof.[proof 1]
In the first case, P is not dense in [a, b]. By definition, P is dense in [a, b] if and only if for all subintervals (c, d) of [a, b], there is an x ∈ P such that x ∈ (c, d). Taking the negation of each side of the "if and only if" produces: P is not dense in [a, b] if and only if there exists a subinterval (c, d) of [a, b] such that for all x ∈ P: x ∉ (c, d). Therefore, every number in (c, d) is not contained in the sequence P.[proof 1] This case handles case 1 and case 3 of Cantor's 1874 proof.
In the second case, which handles case 2 of Cantor's 1874 proof, P is dense in [a, b]. The denseness of sequence P is used to recursively define a sequence of nested intervals that excludes all the numbers in P and whose intersection contains a single real number in [a, b]. The sequence of intervals starts with (a, b). Given an interval in the sequence, the next interval is obtained by finding the two numbers with the least indices that belong to P and to the current interval. These two numbers are the endpoints of the next open interval. Since an open interval excludes its endpoints, every nested interval eliminates two numbers from the front of sequence P, which implies that the intersection of the nested intervals excludes all the numbers in P.[proof 1] Details of this proof and a proof that this intersection contains a single real number in [a, b] are given below.
The recursive step starts with the interval (a_{n−1}, b_{n−1}), the inequalities k_1 < k_2 < ... < k_{2n−3} < k_{2n−2} and a < a_1 < ... < a_{n−1} < b_{n−1} < ... < b_1 < b, and the fact that the interval (a_{n−1}, b_{n−1}) excludes the first 2n−2 members of the sequence P; that is, x_m ∉ (a_{n−1}, b_{n−1}) for m ≤ k_{2n−2}. Since P is dense in [a, b], there are infinitely many numbers of P in (a_{n−1}, b_{n−1}). Let x_{k_{2n−1}} be the number with the least index and x_{k_{2n}} be the number with the next larger index, and let a_n be the smaller and b_n be the larger of these two numbers. Then, k_{2n−1} < k_{2n}, a_{n−1} < a_n < b_n < b_{n−1}, and (a_n, b_n) is a proper subinterval of (a_{n−1}, b_{n−1}). Combining these inequalities with the ones for step n−1 of the recursion produces k_1 < k_2 < ... < k_{2n−1} < k_{2n} and a < a_1 < ... < a_n < b_n < ... < b_1 < b. Also, x_m ∉ (a_n, b_n) for m = k_{2n−1} and m = k_{2n}, since these x_m are the endpoints of (a_n, b_n). This, together with (a_{n−1}, b_{n−1}) excluding the first 2n−2 members of sequence P, implies that the interval (a_n, b_n) excludes the first 2n members of P; that is, x_m ∉ (a_n, b_n) for m ≤ k_{2n}. Therefore, for all n, x_n ∉ (a_n, b_n) since n ≤ k_{2n}.[proof 1]
The sequence a_n is increasing and bounded above by b, so the limit A = lim_{n→∞} a_n exists. Similarly, the limit B = lim_{n→∞} b_n exists since the sequence b_n is decreasing and bounded below by a. Also, a_n < b_n implies A ≤ B. If A < B, then for every n: x_n ∉ (A, B), because x_n is not in the larger interval (a_n, b_n). This contradicts P being dense in [a, b]. Hence, A = B. For all n, A ∈ (a_n, b_n) but x_n ∉ (a_n, b_n). Therefore, A is a number in [a, b] that is not contained in P.[proof 1]
The development leading to Cantor's 1874 article appears in the correspondence between Cantor and Richard Dedekind. On November 29, 1873, Cantor asked Dedekind whether the collection of positive integers and the collection of positive real numbers "can be corresponded so that each individual of one collection corresponds to one and only one individual of the other?" Cantor added that collections having such a correspondence include the collection of positive rational numbers, and collections of the form (a_{n_1, n_2, ..., n_ν}) where n_1, n_2, ..., n_ν, and ν are positive integers.[19]
Dedekind replied that he was unable to answer Cantor's question, and said that it "did not deserve too much effort because it has no particular practical interest". Dedekind also sent Cantor a proof that the set of algebraic numbers is countable.[20]
On December 2, Cantor responded that his question does have interest: "It would be nice if it could be answered; for example, provided that it could be answered no, one would have a new proof of Liouville's theorem that there are transcendental numbers."[21]
On December 7, Cantor sent Dedekind a proof by contradiction that the set of real numbers is uncountable. Cantor starts by assuming that the real numbers in [0, 1] can be written as a sequence. Then, he applies a construction to this sequence to produce a number in [0, 1] that is not in the sequence, thus contradicting his assumption.[22] Together, the letters of December 2 and 7 provide a non-constructive proof of the existence of transcendental numbers.[23] Also, the proof in Cantor's December 7 letter shows some of the reasoning that led to his discovery that the real numbers form an uncountable set.[24]
The proof is by contradiction and starts by assuming that the real numbers in [0, 1] can be written as a sequence:
An increasing sequence is extracted from this sequence by letting ω_1^1 = the first term, ω_1^2 = the next largest term following ω_1^1, ω_1^3 = the next largest term following ω_1^2, and so forth. The same procedure is applied to the remaining members of the original sequence to extract another increasing sequence. By continuing this process of extracting sequences, one sees that the sequence (I) can be decomposed into the infinitely many sequences:[22]
Let [p, q] be an interval such that no term of sequence (1) lies in it. For example, let p and q satisfy ω_1^1 < p < q < ω_1^2. Then ω_1^1 < p < q < ω_1^n for n ≥ 2, so no term of sequence (1) lies in [p, q].[22]
Now consider whether the terms of the other sequences lie outside [p, q]. All terms of some of these sequences may lie outside of [p, q]; however, there must be some sequence such that not all its terms lie outside [p, q]. Otherwise, the numbers in [p, q] would not be contained in sequence (I), contrary to the initial hypothesis. Let sequence (k) be the first sequence that contains a term in [p, q] and let ω_k^n be the first such term. Since p < ω_k^n < q, let p_1 and q_1 satisfy p < p_1 < q_1 < ω_k^n < q. Then [p, q] is a proper superset of [p_1, q_1] (in symbols, [p, q] ⊋ [p_1, q_1]). Also, the terms of sequences (1), (2), ..., (k − 1) lie outside of [p_1, q_1].[22]
Repeat the above argument starting with [p_1, q_1]: Let sequence (k_1) be the first sequence containing a term in [p_1, q_1] and let ω_{k_1}^n be the first such term. Since p_1 < ω_{k_1}^n < q_1, let p_2 and q_2 satisfy p_1 < p_2 < q_2 < ω_{k_1}^n < q_1. Then [p_1, q_1] ⊋ [p_2, q_2] and the terms of sequences (k_1), ..., (k_2 − 1) lie outside of [p_2, q_2].[22]
One sees that it is possible to form an infinite sequence of nested intervals [p, q] ⊋ [p_1, q_1] ⊋ [p_2, q_2] ⊋ ... such that: the members of the 1st, 2nd, ..., (k − 1)-st sequences lie outside [p, q]; the members of the k-th, ..., (k_1 − 1)-st sequences lie outside [p_1, q_1]; the members of the (k_1)-th, ..., (k_2 − 1)-st sequences lie outside [p_2, q_2]; and so on.[22]
Since p_n and q_n are bounded monotonic sequences, the limits lim_{n→∞} p_n and lim_{n→∞} q_n exist. Also, p_n < q_n for all n implies lim_{n→∞} p_n ≤ lim_{n→∞} q_n. Hence, there is at least one number η in (0, 1) that lies in all the intervals [p, q] and [p_n, q_n]. Namely, η can be any number in [lim_{n→∞} p_n, lim_{n→∞} q_n]. This implies that η lies outside all the sequences (1), (2), (3), ..., contradicting the initial hypothesis that sequence (I) contains all the real numbers in [0, 1]. Therefore, the set of all real numbers is uncountable.[22]
Dedekind received Cantor's proof on December 8. On that same day, Dedekind simplified the proof and mailed his proof to Cantor. Cantor used Dedekind's proof in his article.[25]The letter containing Cantor's December 7 proof was not published until 1937.[26]
On December 9, Cantor announced the theorem that allowed him to construct transcendental numbers as well as prove the uncountability of the set of real numbers:
I show directly that if I start with a sequence
(1)ω1,ω2, ... ,ωn, ...
I can determine, in every given interval [α, β], a number η that is not included in (1).[27]
This is the second theorem in Cantor's article. It comes from realizing that his construction can be applied to any sequence, not just to sequences that supposedly enumerate the real numbers. So Cantor had a choice between two proofs that demonstrate the existence of transcendental numbers: one proof is constructive, but the other is not. These two proofs can be compared by starting with a sequence consisting of all the real algebraic numbers.
The constructive proof applies Cantor's construction to this sequence and the interval [a,b] to produce a transcendental number in this interval.[5]
The non-constructive proof uses two proofs by contradiction:
Cantor chose to publish the constructive proof, which not only produces a transcendental number but is also shorter and avoids two proofs by contradiction. The non-constructive proof from Cantor's correspondence is simpler than the one above because it works with all the real numbers rather than the interval [a,b]. This eliminates the subsequence step and all occurrences of [a,b] in the second proof by contradiction.[5]
Akihiro Kanamori, who specializes in set theory, stated that "Accounts of Cantor's work have mostly reversed the order for deducing the existence of transcendental numbers, establishing first the uncountability of the reals and only then drawing the existence conclusion from the countability of the algebraic numbers. In textbooks the inversion may be inevitable, but this has promoted the misconception that Cantor's arguments are non-constructive."[29]
Cantor's published proof and the reverse-order proof both use the theorem: Given a sequence of reals, a real can be found that is not in the sequence. By applying this theorem to the sequence of real algebraic numbers, Cantor produced a transcendental number. He then proved that the reals are uncountable: Assume that there is a sequence containing all the reals. Applying the theorem to this sequence produces a real not in the sequence, contradicting the assumption that the sequence contains all the reals. Hence, the reals are uncountable.[5]The reverse-order proof starts by first proving the reals are uncountable. It then proves that transcendental numbers exist: If there were no transcendental numbers, all the reals would be algebraic and hence countable, which contradicts what was just proved. This contradiction proves that transcendental numbers exist without constructing any.[29]
The correspondence containing Cantor's non-constructive reasoning was published in 1937. By then, other mathematicians had rediscovered his non-constructive, reverse-order proof. As early as 1921, this proof was called "Cantor's proof" and criticized for not producing any transcendental numbers.[30] In that year, Oskar Perron gave the reverse-order proof and then stated: "... Cantor's proof for the existence of transcendental numbers has, along with its simplicity and elegance, the great disadvantage that it is only an existence proof; it does not enable us to actually specify even a single transcendental number."[31][I]
As early as 1930, some mathematicians attempted to correct this misconception of Cantor's work. In that year, the set theorist Abraham Fraenkel stated that Cantor's method is "... a method that incidentally, contrary to a widespread interpretation, is fundamentally constructive and not merely existential."[32] In 1972, Irving Kaplansky wrote: "It is often said that Cantor's proof is not 'constructive,' and so does not yield a tangible transcendental number. This remark is not justified. If we set up a definite listing of all algebraic numbers ... and then apply the diagonal procedure ..., we get a perfectly definite transcendental number (it could be computed to any number of decimal places)."[33][J] Cantor's proof is not only constructive, it is also simpler than Perron's proof, which requires the detour of first proving that the set of all reals is uncountable.[34]
Cantor's diagonal argument has often replaced his 1874 construction in expositions of his proof. The diagonal argument is constructive and produces a more efficient computer program than his 1874 construction. Using it, a computer program has been written that computes the digits of a transcendental number in polynomial time. The program that uses Cantor's 1874 construction requires at least sub-exponential time.[35][K]
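The constructive character of the diagonal argument can be seen in a short program. The following is a minimal sketch (not the published program mentioned above): given functions that return the n-th decimal digit of each real in a listing, it outputs digits of a number that differs from the n-th listed real at digit n. The listing used for the demonstration is hypothetical.

```python
# Sketch of the diagonal construction: listing[n](i) returns the i-th
# decimal digit of the n-th real in the list. We emit a digit sequence
# that disagrees with listing[n] at position n.

def diagonal_digits(listing, k):
    """Return the first k digits of a real that differs from
    listing[n] at decimal position n, for each n < k."""
    digits = []
    for n in range(k):
        d = listing[n](n)
        # Pick 4 unless the diagonal digit is 4; avoiding 0 and 9 keeps
        # the constructed decimal expansion unique.
        digits.append(5 if d == 4 else 4)
    return digits

# Hypothetical listing: the n-th "real" has constant decimal digit n mod 10.
listing = [(lambda i, n=n: n % 10) for n in range(8)]
print(diagonal_digits(listing, 8))
```

Applied to a listing of the real algebraic numbers, such a procedure yields successive digits of a transcendental number.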
The presentation of the non-constructive proof without mention of Cantor's constructive proof appears in some books that were quite successful, as measured by how long new editions or reprints continued to appear: for example, Oskar Perron's Irrationalzahlen (1921; 4th edition 1960), Eric Temple Bell's Men of Mathematics (1937; still being reprinted), Godfrey Hardy and E. M. Wright's An Introduction to the Theory of Numbers (1938; 6th edition 2008), Garrett Birkhoff and Saunders Mac Lane's A Survey of Modern Algebra (1941; 5th edition 1997), and Michael Spivak's Calculus (1967; 4th edition 2008).[36][L] Since 2014, at least two books have appeared stating that Cantor's proof is constructive,[37] and at least four have appeared stating that his proof does not construct any (or even a single) transcendental number.[38]
Asserting that Cantor gave a non-constructive argument without mentioning the constructive proof he published can lead to erroneous statements about the history of mathematics. In A Survey of Modern Algebra, Birkhoff and Mac Lane state: "Cantor's argument for this result [Not every real number is algebraic] was at first rejected by many mathematicians, since it did not exhibit any specific transcendental number."[39] The proof that Cantor published produces transcendental numbers, and there appears to be no evidence that his argument was rejected. Even Leopold Kronecker, who had strict views on what is acceptable in mathematics and who could have delayed publication of Cantor's article, did not delay it.[4] In fact, applying Cantor's construction to the sequence of real algebraic numbers produces a limiting process that Kronecker accepted: namely, it determines a number to any required degree of accuracy.[M]
Historians of mathematics have discovered the following facts about Cantor's article "On a Property of the Collection of All Real Algebraic Numbers":
To explain these facts, historians have pointed to the influence of Cantor's former professors, Karl Weierstrass and Leopold Kronecker. Cantor discussed his results with Weierstrass on December 23, 1873.[46] Weierstrass was at first amazed by the concept of countability, but then found the countability of the set of real algebraic numbers useful.[47] Cantor did not want to publish yet, but Weierstrass felt that he must publish at least his results concerning the algebraic numbers.[46]
From his correspondence, it appears that Cantor discussed his article only with Weierstrass. However, Cantor told Dedekind: "The restriction which I have imposed on the published version of my investigations is caused in part by local circumstances ..."[46] Cantor biographer Joseph Dauben believes that "local circumstances" refers to Kronecker who, as a member of the editorial board of Crelle's Journal, had delayed publication of an 1870 article by Eduard Heine, one of Cantor's colleagues. Cantor would submit his article to Crelle's Journal.[48]
Weierstrass advised Cantor to leave his uncountability theorem out of the article he submitted, but Weierstrass also told Cantor that he could add it as a marginal note during proofreading, which he did.[43] It appears in a remark at the end of the article's introduction. The opinions of Kronecker and Weierstrass both played a role here. Kronecker did not accept infinite sets, and it seems that Weierstrass did not accept that two infinite sets could be so different, with one being countable and the other not.[49] Weierstrass changed his opinion later.[50] Without the uncountability theorem, the article needed a title that did not refer to this theorem. Cantor chose "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" ("On a Property of the Collection of All Real Algebraic Numbers"), which refers to the countability of the set of real algebraic numbers, the result that Weierstrass found useful.[51]
Kronecker's influence appears in the proof of Cantor's second theorem. Cantor used Dedekind's version of the proof, except that he left out why the limits a∞ = lim_{n→∞} a_n and b∞ = lim_{n→∞} b_n exist. Dedekind had used his "principle of continuity" to prove they exist. This principle (which is equivalent to the least upper bound property of the real numbers) comes from Dedekind's construction of the real numbers, a construction Kronecker did not accept.[52]
Cantor restricted his first theorem to the set of real algebraic numbers even though Dedekind had sent him a proof that handled all algebraic numbers.[20] Cantor did this for expository reasons and because of "local circumstances".[53] This restriction simplifies the article because the second theorem works with real sequences. Hence, the construction in the second theorem can be applied directly to the enumeration of the real algebraic numbers to produce "an effective procedure for the calculation of transcendental numbers". This procedure would be acceptable to Weierstrass.[54]
Since 1856, Dedekind had developed theories involving infinitely many infinite sets: for example, ideals, which he used in algebraic number theory, and Dedekind cuts, which he used to construct the real numbers. This work enabled him to understand and contribute to Cantor's work.[55]
Dedekind's first contribution concerns the theorem that the set of real algebraic numbers is countable. Cantor is usually given credit for this theorem, but the mathematical historian José Ferreirós calls it "Dedekind's theorem." Their correspondence reveals what each mathematician contributed to the theorem.[56]
In his letter introducing the concept of countability, Cantor stated without proof that the set of positive rational numbers is countable, as are sets of the form (a_{n1, n2, ..., nν}) where n1, n2, ..., nν, and ν are positive integers.[57] Cantor's second result uses an indexed family of numbers: a set of the form (a_{n1, n2, ..., nν}) is the range of a function from the ν indices to the set of real numbers. His second result implies his first: let ν = 2 and a_{n1, n2} = n1/n2. The function can be quite general; for example, a_{n1, n2, n3, n4, n5} = (n1/n2)^{1/n3} + tan(n4/n5).
Dedekind replied with a proof of the theorem that the set of all algebraic numbers is countable.[20] In his reply to Dedekind, Cantor did not claim to have proved Dedekind's result. He did indicate how he proved his theorem about indexed families of numbers: "Your proof that (n) [the set of positive integers] can be correlated one-to-one with the field of all algebraic numbers is approximately the same as the way I prove my contention in the last letter. I take n1² + n2² + ··· + nν² = 𝔑 and order the elements accordingly."[58] However, Cantor's ordering is weaker than Dedekind's and cannot be extended to n-tuples of integers that include zeros.[59]
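Cantor's ordering can be illustrated for pairs of positive integers. The sketch below (an illustration, not Cantor's own notation) groups pairs (n1, n2) by N = n1² + n2²; each group is finite, so listing the groups in turn enumerates all pairs, and taking a_{n1, n2} = n1/n2 then enumerates the positive rationals (with repetitions).

```python
# Enumerate positive-integer pairs (n1, n2) ordered by N = n1^2 + n2^2,
# following Cantor's grouping; within a group, pairs are listed
# lexicographically (a choice Cantor's letter leaves open).

def pairs_by_sum_of_squares(max_N):
    """Return all pairs (n1, n2) of positive integers with
    n1^2 + n2^2 <= max_N, ordered by increasing N."""
    pairs = []
    n1 = 1
    while n1 * n1 + 1 <= max_N:
        n2 = 1
        while n1 * n1 + n2 * n2 <= max_N:
            pairs.append((n1 * n1 + n2 * n2, n1, n2))
            n2 += 1
        n1 += 1
    pairs.sort()  # sort by N first, then by (n1, n2) within each group
    return [(n1, n2) for _, n1, n2 in pairs]

print(pairs_by_sum_of_squares(10))
```

Note the weakness mentioned above: the ordering relies on each value of N being reached by only finitely many tuples of positive integers, which fails if zeros are allowed among infinitely many coordinates.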
Dedekind's second contribution is his proof of Cantor's second theorem. Dedekind sent this proof in reply to Cantor's letter that contained the uncountability theorem, which Cantor proved using infinitely many sequences. Cantor next wrote that he had found a simpler proof that did not use infinitely many sequences.[60] So Cantor had a choice of proofs and chose to publish Dedekind's.[61]
Cantor thanked Dedekind privately for his help: "... your comments (which I value highly) and your manner of putting some of the points were of great assistance to me."[46] However, he did not mention Dedekind's help in his article. In previous articles, he had acknowledged help received from Kronecker, Weierstrass, Heine, and Hermann Schwarz. Cantor's failure to mention Dedekind's contributions damaged his relationship with Dedekind. Dedekind stopped replying to his letters and did not resume the correspondence until October 1876.[62][N]
Cantor's article introduced the uncountability theorem and the concept of countability. Both would lead to significant developments in mathematics. The uncountability theorem demonstrated that one-to-one correspondences can be used to analyze infinite sets. In 1878, Cantor used them to define and compare cardinalities. He also constructed one-to-one correspondences to prove that the n-dimensional spaces Rⁿ (where R is the set of real numbers) and the set of irrational numbers have the same cardinality as R.[63][O]
In 1883, Cantor extended the positive integers with his infinite ordinals. This extension was necessary for his work on the Cantor–Bendixson theorem. Cantor discovered other uses for the ordinals; for example, he used sets of ordinals to produce an infinity of sets having different infinite cardinalities.[65] His work on infinite sets together with Dedekind's set-theoretical work created set theory.[66]
The concept of countability led to countable operations and objects that are used in various areas of mathematics. For example, in 1878, Cantor introduced countable unions of sets.[67] In the 1890s, Émile Borel used countable unions in his theory of measure, and René Baire used countable ordinals to define his classes of functions.[68] Building on the work of Borel and Baire, Henri Lebesgue created his theories of measure and integration, which were published from 1899 to 1901.[69]
Countable models are used in set theory. In 1922, Thoralf Skolem proved that if the conventional axioms of set theory are consistent, then they have a countable model. Since this model is countable, its set of real numbers is countable. This consequence is called Skolem's paradox, and Skolem explained why it does not contradict Cantor's uncountability theorem: although there is a one-to-one correspondence between this set and the set of positive integers, no such one-to-one correspondence is a member of the model. Thus the model considers its set of real numbers to be uncountable, or more precisely, the first-order sentence that says the set of real numbers is uncountable is true within the model.[70] In 1963, Paul Cohen used countable models to prove his independence theorems.[71]
. . . But this contradicts a very general theorem, which we have proved with full rigor in Borchardt's Journal, Vol. 77, page 260; namely, the following theorem: "If one has a simply [countably] infinite sequence ω1, ω2, . . . , ων, . . . of real, unequal numbers that proceed according to some rule, then in every given interval [α, β] a number η (and thus infinitely many of them) can be specified that does not occur in this sequence (as a member of it)."
In view of the great interest in this theorem, not only in the present discussion, but also in many other arithmetical as well as analytical relations, it might not be superfluous if we develop the argument followed there [Cantor's 1874 proof] more clearly here by using simplifying modifications.
Starting with the sequence: ω1, ω2, . . . , ων, . . . (which we denote by the symbol (ω)) and an arbitrary interval [α, β], where α < β, we will now demonstrate that in this interval a real number η can be found that does not occur in (ω).
I. We first notice that if our set (ω) is not everywhere dense in the interval [α, β], then within this interval another interval [γ, δ] must be present, all of whose numbers do not belong to (ω). From the interval [γ, δ], one can then choose any number for η. It lies in the interval [α, β] and definitely does not occur in our sequence (ω). Thus, this case presents no special considerations and we can move on to the more difficult case.
II. Let the set (ω) be everywhere dense in the interval [α, β]. In this case, every interval [γ, δ] located in [α, β], however small, contains numbers of our sequence (ω). To show that, nevertheless, numbers η in the interval [α, β] exist that do not occur in (ω), we employ the following observation.
Since some numbers in our sequence: ω1, ω2, . . . , ων, . . .
definitely occur within the interval [α, β], one of these numbers must have the least index; let it be ωκ1, and let another, ωκ2, have the next larger index.
Let the smaller of the two numbers ωκ1, ωκ2 be denoted by α', the larger by β'. (Their equality is impossible because we assumed that our sequence consists of nothing but unequal numbers.)
Then according to the definition: α < α' < β' < β, and furthermore: κ1 < κ2; and all numbers ωμ of our sequence for which μ ≤ κ2 do not lie in the interior of the interval [α', β'], as is immediately clear from the definition of the numbers κ1, κ2. Similarly, let ωκ3 and ωκ4 be the two numbers of our sequence with smallest indices that fall in the interior of the interval [α', β'], and let the smaller of the numbers ωκ3, ωκ4 be denoted by α'', the larger by β''.
Then one has: α' < α'' < β'' < β', κ2 < κ3 < κ4; and one sees that all numbers ωμ of our sequence for which μ ≤ κ4 do not fall into the interior of the interval [α'', β''].
After one has followed this rule to reach an interval [α(ν−1), β(ν−1)], the next interval is produced by selecting the first two numbers of our sequence (ω) (i.e., the two with lowest indices; let them be ωκ2ν−1 and ωκ2ν) that fall into the interior of [α(ν−1), β(ν−1)]. Let the smaller of these two numbers be denoted by α(ν), the larger by β(ν).
The interval [α(ν), β(ν)] then lies in the interior of all preceding intervals and has the specific relation to our sequence (ω) that all numbers ωμ for which μ ≤ κ2ν definitely do not lie in its interior. Since obviously: κ1 < κ2 < κ3 < . . . < κ2ν−2 < κ2ν−1 < κ2ν < . . . and these numbers, as indices, are whole numbers, we have: κ2ν ≥ 2ν, and hence: ν < κ2ν; thus, we can certainly say (and this is sufficient for the following):
That if ν is an arbitrary whole number, the [real] quantity ων lies outside the interval [α(ν), β(ν)].
Since the numbers α', α'', α''', . . ., α(ν), . . . are continually increasing in value while simultaneously being enclosed in the interval [α, β], they have, by a well-known fundamental theorem of the theory of magnitudes [see note 2 below], a limit that we denote by A, so that: A = Lim α(ν) for ν = ∞.
The same applies to the numbers β', β'', β''', . . ., β(ν), . . ., which are continually decreasing and likewise lie in the interval [α, β]. We call their limit B, so that: B = Lim β(ν) for ν = ∞.
Obviously, one has: α(ν) < A ≤ B < β(ν).
But it is easy to see that the case A < B cannot occur here, since otherwise every number ων of our sequence would lie outside of the interval [A, B] by lying outside the interval [α(ν), β(ν)]. So our sequence (ω) would not be everywhere dense in the interval [α, β], contrary to the assumption.
Thus, there only remains the case A = B, and it is now demonstrated that the number: η = A = B does not occur in our sequence (ω).
If it were a member of our sequence, such as the νth, then one would have: η = ων.
But the latter equation is not possible for any value of ν because η is in the interior of the interval [α(ν), β(ν)], but ων lies outside of it.
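The nested-interval rule above is mechanical enough to run on a finite prefix of a sequence. The following is a minimal sketch under stated assumptions: the sequence is given as a list of distinct exact rationals, `cantor_interval` is a hypothetical helper name, and for a finite prefix the process simply stops when fewer than two remaining terms fall inside the current interval; any point of the final interval then differs from every term examined.

```python
from fractions import Fraction

def cantor_interval(seq, a, b):
    """Follow the rule of the proof on a finite sequence: repeatedly take
    the first two terms strictly inside (a, b) as the new endpoints.
    Returns the final interval."""
    used = 0
    while True:
        inside = [(i, x) for i, x in enumerate(seq) if i >= used and a < x < b]
        if len(inside) < 2:
            return a, b  # fewer than two remaining terms fall inside
        (i1, x1), (i2, x2) = inside[0], inside[1]
        a, b = min(x1, x2), max(x1, x2)
        used = i2 + 1  # terms with index <= i2 cannot lie in the new interior

# Demonstration sequence: dyadic rationals listed level by level.
seq = [Fraction(1, 2), Fraction(1, 4), Fraction(3, 4),
       Fraction(1, 8), Fraction(3, 8), Fraction(5, 8), Fraction(7, 8),
       Fraction(1, 16), Fraction(3, 16), Fraction(5, 16), Fraction(7, 16),
       Fraction(9, 16), Fraction(11, 16), Fraction(13, 16), Fraction(15, 16)]
a, b = cantor_interval(seq, Fraction(0), Fraction(1))
print(a, b)  # the midpoint (a + b)/2 occurs nowhere in seq
```

For an infinite everywhere-dense sequence the intervals shrink forever, and η is the common limit A = B of the endpoints, as in part II of the proof.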
Note 2: Grössenlehre, which has been translated as "the theory of magnitudes", is a term used by 19th century German mathematicians that refers to the theory of discrete and continuous magnitudes. (Ferreirós 2007, pp. 41–42, 202.)
https://en.wikipedia.org/wiki/Cantor%27s_first_uncountability_proof
In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states:
There is no set whose cardinality is strictly between that of the integers and that of the real numbers.
Or equivalently:
Any subset of the real numbers is either finite, or countably infinite, or has the cardinality of the real numbers.
In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: 2^ℵ0 = ℵ1, or even shorter with beth numbers: ℶ1 = ℵ1.
The continuum hypothesis was advanced by Georg Cantor in 1878,[1] and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.[2]
The name of the hypothesis comes from the term "the continuum" for the real numbers.
Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it.[3] It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated. Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory.[2] The second half of the independence of the continuum hypothesis, i.e., the unprovability of the nonexistence of an intermediate-sized set, was proved in 1963 by Paul Cohen.[4]
Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to "pair off" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green}, despite the sets themselves containing different elements.
With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers ℚ seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed, since it does not take into account the fact that all three sets are infinite. Perhaps more importantly, it conflates the "size" of the set ℚ with the order or topological structure placed on it. In fact, the rational numbers can be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets.[5]
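The one-to-one correspondence between the positive rationals and the positive integers can be exhibited concretely. The sketch below (a standard diagonal enumeration, not a construction from the article) lists fractions p/q by increasing p + q, skipping values already seen, so the n-th distinct rational produced is paired with the integer n.

```python
from fractions import Fraction

def first_rationals(k):
    """List the first k distinct positive rationals in the diagonal
    enumeration by p + q = 2, 3, 4, ..., exhibiting the start of a
    bijection with the positive integers 1, 2, ..., k."""
    seen, out, s = set(), [], 2
    while len(out) < k:
        for p in range(1, s):
            q = s - p
            r = Fraction(p, q)
            if r not in seen:  # skip repeats such as 2/2 = 1/1
                seen.add(r)
                out.append(r)
                if len(out) == k:
                    break
        s += 1
    return out

print(first_rationals(6))
```

Every positive rational appears at some finite position in this list, which is exactly what countability of ℚ requires.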
Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question.
In simple terms, the continuum hypothesis (CH) states that the set of real numbers has the minimal possible cardinality that is greater than the cardinality of the set of integers. That is, for every set S ⊆ ℝ of real numbers, either S can be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into S. Since the real numbers are equinumerous with the power set of the integers, i.e. |ℝ| = 2^ℵ0, CH can be restated as follows:
Continuum Hypothesis: ∄S : ℵ0 < |S| < 2^ℵ0.
Assuming the axiom of choice, there is a unique smallest cardinal number ℵ1 greater than ℵ0, and the continuum hypothesis is in turn equivalent to the equality 2^ℵ0 = ℵ1.[6][7]
The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from the combined work of Kurt Gödel and Paul Cohen.
Gödel[8][2] showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted, i.e. from ZFC. Gödel's proof shows that both CH and AC hold in the constructible universe L, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are (relatively) consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories.
Cohen[4][9] showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds and constructs another model that contains more sets than the original, in such a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof.
Cohen's independence proof shows that CH is independent of ZFC. Further research has shown that CH is independent of all known large cardinal axioms in the context of ZFC.[10] Moreover, it has been shown that the cardinality of the continuum 𝔠 = 2^ℵ0 can be any cardinal consistent with Kőnig's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if κ is a cardinal of uncountable cofinality, then there is a forcing extension in which 2^ℵ0 = κ. However, per Kőnig's theorem, it is not consistent to assume 2^ℵ0 is ℵω or ℵ(ω1+ω) or any cardinal with cofinality ω.
The continuum hypothesis is closely related to many statements in analysis, point-set topology, and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well.
The independence from ZFC means that proving or disproving CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. The continuum hypothesis remains an active topic of research: see Woodin[11][12] and Koellner[13] for an overview of the current research status.
The continuum hypothesis and the axiom of choice were among the first genuinely mathematical statements shown to be independent of ZF set theory. The existence of some statements independent of ZFC had, however, already been known more than two decades earlier: for example, assuming good soundness properties and the consistency of ZFC, Gödel's incompleteness theorems, published in 1931, establish that there is a formal statement Con(ZFC) (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC that is also independent of it. The latter independence result indeed holds for many theories.
Gödel believed that CH is false, and that his proof that CH is consistent with ZFC only shows that the Zermelo–Fraenkel axioms do not adequately characterize the universe of sets. Gödel was a Platonist and therefore had no problems with asserting the truth and falsehood of statements independent of their provability. Cohen, though a formalist,[14] also tended towards rejecting CH.
Historically, mathematicians who favored a "rich" and "large" universe of sets were against CH, while those favoring a "neat" and "controllable" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH.[15]
Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC, since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false.[16]
At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling[17] presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is "intuitively clear"[17] but others have disagreed.[18][19]
A difficult argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000.[11][12] Foreman does not reject Woodin's argument outright but urges caution.[20] Woodin proposed a new hypothesis that he labeled the "(*)-axiom", or "Star axiom". The Star axiom would imply that 2^ℵ0 is ℵ2, thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing that the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new "ultimate L" conjecture.[21][22]
Solomon Feferman argued that CH is not a definite mathematical problem.[23] He proposed a theory of "definiteness" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggested that a proposition φ is mathematically "definite" if the semi-intuitionistic theory can prove (φ ∨ ¬φ). He conjectured that CH is not definite according to this notion, and proposed that CH should therefore be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article.[24]
Joel David Hamkinsproposes amultiverseapproach to set theory and argues that "the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for".[25]In a related vein,Saharon Shelahwrote that he does "not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC".[26]
The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set $S$ and that of the power set $\mathcal{P}(S)$ of $S$, then it has the same cardinality as either $S$ or $\mathcal{P}(S)$. That is, for any infinite cardinal $\lambda$ there is no cardinal $\kappa$ such that $\lambda < \kappa < 2^{\lambda}$. GCH is equivalent to:
$\aleph_{\alpha+1} = 2^{\aleph_\alpha}$ for every ordinal $\alpha$ (occasionally called Cantor's aleph hypothesis).
The beth numbers provide an alternative notation for this condition: $\aleph_\alpha = \beth_\alpha$ for every ordinal $\alpha$. The continuum hypothesis is the special case for the ordinal $\alpha = 1$. GCH was first suggested by Philip Jourdain.[27] For the early history of GCH, see Moore.[28]
Like CH, GCH is also independent of ZFC, but Sierpiński proved that ZF + GCH implies the axiom of choice (AC) (and therefore the negation of the axiom of determinacy, AD), so choice and GCH are not independent in ZF; there are no models of ZF in which GCH holds and AC fails. To prove this, Sierpiński showed GCH implies that every cardinality $n$ is smaller than some aleph number, and thus can be ordered. This is done by showing that $n$ is smaller than $2^{\aleph_0 + n}$, which is smaller than its own Hartogs number—this uses the equality $2^{\aleph_0 + n} = 2 \cdot 2^{\aleph_0 + n}$; for the full proof, see Gillman.[29]
Kurt Gödel showed that GCH is a consequence of ZF + V=L (the axiom that every set is constructible relative to the ordinals), and is therefore consistent with ZFC. As GCH implies CH, Cohen's model in which CH fails is a model in which GCH fails, and thus GCH is not provable from ZFC. W. B. Easton used the method of forcing developed by Cohen to prove Easton's theorem, which shows it is consistent with ZFC for arbitrarily large cardinals $\aleph_\alpha$ to fail to satisfy $2^{\aleph_\alpha} = \aleph_{\alpha+1}$. Much later, Foreman and Woodin proved that (assuming the consistency of very large cardinals) it is consistent that $2^\kappa > \kappa^+$ holds for every infinite cardinal $\kappa$. Later Woodin extended this by showing the consistency of $2^\kappa = \kappa^{++}$ for every $\kappa$. Carmi Merimovich[30] showed that, for each $n \geq 1$, it is consistent with ZFC that for each infinite cardinal $\kappa$, $2^\kappa$ is the $n$th successor of $\kappa$ (assuming the consistency of some large cardinal axioms). On the other hand, László Patai[31] proved that if $\gamma$ is an ordinal and for each infinite cardinal $\kappa$, $2^\kappa$ is the $\gamma$th successor of $\kappa$, then $\gamma$ is finite.
For any infinite sets $A$ and $B$, if there is an injection from $A$ to $B$ then there is an injection from subsets of $A$ to subsets of $B$. Thus for any infinite cardinals $A$ and $B$, $A < B \to 2^A \leq 2^B$. If $A$ and $B$ are finite, the stronger inequality $A < B \to 2^A < 2^B$ holds. GCH implies that this strict, stronger inequality holds for infinite cardinals as well as finite cardinals.
Although the generalized continuum hypothesis refers directly only to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation $\aleph_\alpha^{\aleph_\beta}$ in all cases. GCH implies that for ordinals $\alpha$ and $\beta$:[32]
$\aleph_\alpha^{\aleph_\beta} = \aleph_{\beta+1}$ when $\alpha \leq \beta + 1$;
$\aleph_\alpha^{\aleph_\beta} = \aleph_\alpha$ when $\beta + 1 < \alpha$ and $\aleph_\beta < \operatorname{cf}(\aleph_\alpha)$;
$\aleph_\alpha^{\aleph_\beta} = \aleph_{\alpha+1}$ when $\beta + 1 < \alpha$ and $\aleph_\beta \geq \operatorname{cf}(\aleph_\alpha)$.
The first equality (when $\alpha \leq \beta + 1$) follows from:
$\aleph_\alpha^{\aleph_\beta} \leq \aleph_{\beta+1}^{\aleph_\beta} = (2^{\aleph_\beta})^{\aleph_\beta} = 2^{\aleph_\beta \cdot \aleph_\beta} = 2^{\aleph_\beta} = \aleph_{\beta+1}$, while:
$\aleph_{\beta+1} = 2^{\aleph_\beta} \leq \aleph_\alpha^{\aleph_\beta}$.
The third equality (when $\beta + 1 < \alpha$ and $\aleph_\beta \geq \operatorname{cf}(\aleph_\alpha)$) follows from:
$\aleph_\alpha^{\aleph_\beta} \geq \aleph_\alpha^{\operatorname{cf}(\aleph_\alpha)} > \aleph_\alpha$
by Kőnig's theorem, while:
$\aleph_\alpha^{\aleph_\beta} \leq \aleph_\alpha^{\aleph_\alpha} \leq (2^{\aleph_\alpha})^{\aleph_\alpha} = 2^{\aleph_\alpha \cdot \aleph_\alpha} = 2^{\aleph_\alpha} = \aleph_{\alpha+1}$
https://en.wikipedia.org/wiki/Continuum_hypothesis
In mathematical logic, the theory of infinite sets was first developed by Georg Cantor. Although this work has become a thoroughly standard fixture of classical set theory, it has been criticized in several areas by mathematicians and philosophers.
Cantor's theorem implies that there are sets having cardinality greater than the infinite cardinality of the set of natural numbers. Cantor's argument for this theorem is presented with one small change. This argument can be improved by using a definition he gave later. The resulting argument uses only five axioms of set theory.
Cantor's set theory was controversial at the start, but later became largely accepted. Most modern mathematics textbooks implicitly use Cantor's views on mathematical infinity. For example, a line is generally presented as the infinite set of its points, and it is commonly taught that there are more real numbers than rational numbers (see cardinality of the continuum).
Cantor's first proof that infinite sets can have different cardinalities was published in 1874. This proof demonstrates that the set of natural numbers and the set of real numbers have different cardinalities. It uses the theorem that a bounded increasing sequence of real numbers has a limit, which can be proved by using Cantor's or Richard Dedekind's construction of the irrational numbers. Because Leopold Kronecker did not accept these constructions, Cantor was motivated to develop a new proof.[1]
In 1891, he published "a much simpler proof ... which does not depend on considering the irrational numbers."[2] His new proof uses his diagonal argument to prove that there exists an infinite set with a larger number of elements (or greater cardinality) than the set of natural numbers N = {1, 2, 3, ...}. This larger set consists of the elements (x₁, x₂, x₃, ...), where each xₙ is either m or w.[3] Each of these elements corresponds to a subset of N—namely, the element (x₁, x₂, x₃, ...) corresponds to {n ∈ N : xₙ = w}. So Cantor's argument implies that the set of all subsets of N has greater cardinality than N. The set of all subsets of N is denoted by P(N), the power set of N.
Cantor generalized his argument to an arbitrary set A and the set consisting of all functions from A to {0, 1}.[4] Each of these functions corresponds to a subset of A, so his generalized argument implies the theorem: the power set P(A) has greater cardinality than A. This is known as Cantor's theorem.
The argument below is a modern version of Cantor's argument that uses power sets (for his original argument, see Cantor's diagonal argument). By presenting a modern argument, it is possible to see which assumptions of axiomatic set theory are used. The first part of the argument proves that N and P(N) have different cardinalities:
Next Cantor shows that A is equinumerous with a subset of P(A). From this and the fact that P(A) and A have different cardinalities, he concludes that P(A) has greater cardinality than A. This conclusion uses his 1878 definition: if A and B have different cardinalities, then either B is equinumerous with a subset of A (in this case, B has less cardinality than A) or A is equinumerous with a subset of B (in this case, B has greater cardinality than A).[7] This definition leaves out the case where A and B are each equinumerous with a subset of the other set—that is, A is equinumerous with a subset of B and B is equinumerous with a subset of A. Because Cantor implicitly assumed that cardinalities are linearly ordered, this case cannot occur.[8] After using his 1878 definition, Cantor stated that in an 1883 article he proved that cardinalities are well-ordered, which implies they are linearly ordered.[9] This proof used his well-ordering principle "every set can be well-ordered", which he called a "law of thought".[10] The well-ordering principle is equivalent to the axiom of choice.[11]
Around 1895, Cantor began to regard the well-ordering principle as a theorem and attempted to prove it.[12] In 1895, Cantor also gave a new definition of "greater than" that correctly defines this concept without the aid of his well-ordering principle.[13] By using Cantor's new definition, the modern argument that P(N) has greater cardinality than N can be completed using weaker assumptions than his original argument:
Besides the axioms of infinity and power set, the axioms of separation, extensionality, and pairing were used in the modern argument. For example, the axiom of separation was used to define the diagonal subset D, the axiom of extensionality was used to prove D ≠ f(x), and the axiom of pairing was used in the definition of the subset P₁.
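The heart of the modern argument is the diagonal subset D = {x ∈ S : x ∉ f(x)}, which differs from every value of f. For small finite sets this can be checked exhaustively; the following Python sketch (the helper names are ours, purely illustrative) verifies that no function f : S → P(S) ever has D in its image, so no f can be onto the power set:

```python
from itertools import chain, combinations, product

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def diagonal_set(domain, f):
    """Cantor's diagonal subset: the elements x with x not in f(x)."""
    return frozenset(x for x in domain if x not in f(x))

# Exhaustively check, for a small S, that no function f: S -> P(S)
# has the diagonal set in its image -- hence no f maps S onto P(S).
S = [0, 1, 2]
subsets = powerset(S)
for assignment in product(subsets, repeat=len(S)):
    f = dict(zip(S, assignment))
    D = diagonal_set(S, lambda x: f[x])
    assert all(f[x] != D for x in S)
```

If some x had f(x) = D, then x ∈ D would hold exactly when x ∉ f(x) = D, a contradiction; the exhaustive check above merely confirms this for every one of the 8³ functions on a three-element set.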
Initially, Cantor's theory was controversial among mathematicians and (later) philosophers. Logician Wilfrid Hodges (1998) has commented on the energy devoted to refuting this "harmless little argument" (i.e. Cantor's diagonal argument), asking, "what had it done to anyone to make them angry with it?"[14] Mathematician Solomon Feferman has referred to Cantor's theories as "simply not relevant to everyday mathematics."[15]
Before Cantor, the notion of infinity was often taken as a useful abstraction which helped mathematicians reason about the finite world; for example, the use of infinite limit cases in calculus. The infinite was deemed to have at most a potential existence, rather than an actual existence.[16] "Actual infinity does not exist. What we call infinite is only the endless possibility of creating new objects no matter how many exist already".[17] Carl Friedrich Gauss's views on the subject can be paraphrased as: "Infinity is nothing more than a figure of speech which helps us talk about limits. The notion of a completed infinity doesn't belong in mathematics."[18] In other words, the only access we have to the infinite is through the notion of limits, and hence, we must not treat infinite sets as if they have an existence exactly comparable to the existence of finite sets.
Cantor's ideas ultimately were largely accepted, strongly supported by David Hilbert, amongst others. Hilbert predicted: "No one will drive us from the paradise which Cantor created for us."[19] To which Wittgenstein replied, "if one person can see it as a paradise of mathematicians, why should not another see it as a joke?"[20] The rejection of Cantor's infinitary ideas influenced the development of schools of mathematics such as constructivism and intuitionism.[citation needed]
Wittgenstein did not object to mathematical formalism wholesale, but had a finitist view on what Cantor's proof meant. The philosopher maintained that belief in infinities arises from confusing the intensional nature of mathematical laws with the extensional nature of sets, sequences, symbols, etc. A series of symbols was, in his view, finite. In Wittgenstein's words: "... A curve is not composed of points, it is a law that points obey, or again, a law according to which points can be constructed."
He also described the diagonal argument as "hocus pocus", and as not proving what it purports to prove.
A common objection to Cantor's theory of infinite number involves the axiom of infinity (which is, indeed, an axiom and not a logical truth). Mayberry has noted that "the set-theoretical axioms that sustain modern mathematics are self-evident in differing degrees. One of them—indeed, the most important of them, namely Cantor's Axiom, the so-called Axiom of Infinity—has scarcely any claim to self-evidence at all".[21]
Another objection is that the use of infinite sets is not adequately justified by analogy to finite sets. Hermann Weyl wrote:
... classical logic was abstracted from the mathematics of finite sets and their subsets …. Forgetful of this limited origin, one afterwards mistook that logic for something above and prior to all mathematics, and finally applied it, without justification, to the mathematics of infinite sets. This is the Fall and original sin of [Cantor's] set theory[22]
The difficulty with finitism is to develop foundations of mathematics using finitist assumptions that incorporate what everyone reasonably regards as mathematics (for example, real analysis).
https://en.wikipedia.org/wiki/Controversy_over_Cantor%27s_theory
In mathematical logic, the diagonal lemma (also known as the diagonalization lemma, self-reference lemma or fixed point theorem) establishes the existence of self-referential sentences in certain formal theories.
A particular instance of the diagonal lemma was used by Kurt Gödel in 1931 to construct his proof of the incompleteness theorems, as well as in 1933 by Tarski to prove his undefinability theorem. In 1934, Carnap was the first to publish the diagonal lemma at some level of generality.[1] The diagonal lemma is named in reference to Cantor's diagonal argument in set and number theory.
The diagonal lemma applies to any sufficiently strong theory capable of representing the diagonal function. Such theories include first-order Peano arithmetic $\mathsf{PA}$, the weaker Robinson arithmetic $\mathsf{Q}$, as well as any theory containing $\mathsf{Q}$ (i.e. which interprets it).[2] A common statement of the lemma (as given below) makes the stronger assumption that the theory can represent all recursive functions, but all the theories mentioned have that capacity as well.
The diagonal lemma also requires a Gödel numbering $\alpha$. We write $\alpha(\varphi)$ for the code assigned to $\varphi$ by the numbering. For $\overline{n}$, the standard numeral of $n$ (i.e. $\overline{0} =_{df} \mathsf{0}$ and $\overline{n+1} =_{df} \mathsf{S}(\overline{n})$), let $\ulcorner\varphi\urcorner$ be the standard numeral of the code of $\varphi$ (i.e. $\ulcorner\varphi\urcorner$ is $\overline{\alpha(\varphi)}$). We assume a standard Gödel numbering.
Let $\mathbb{N}$ be the set of natural numbers. A first-order theory $T$ in the language of arithmetic containing $\mathsf{Q}$ represents the $k$-ary recursive function $f : \mathbb{N}^k \to \mathbb{N}$ if there is a formula $\varphi_f(x_1, \dots, x_k, y)$ in the language of $T$ such that for all $m_1, \dots, m_k \in \mathbb{N}$, if $f(m_1, \dots, m_k) = n$ then $T \vdash \forall y(\varphi_f(\overline{m_1}, \dots, \overline{m_k}, y) \leftrightarrow y = \overline{n})$.
The representation theorem is provable, i.e. every recursive function is representable in $T$.[3]
Diagonal Lemma: Let $T$ be a first-order theory containing $\mathsf{Q}$ (Robinson arithmetic) and let $\psi(x)$ be any formula in the language of $T$ with only $x$ as free variable. Then there is a sentence $\varphi$ in the language of $T$ such that $T \vdash \varphi \leftrightarrow \psi(\ulcorner\varphi\urcorner)$.
Intuitively, $\varphi$ is a self-referential sentence which "says of itself that it has the property $\psi$".
Proof: Let $diag_T : \mathbb{N} \to \mathbb{N}$ be the recursive function which maps the code of each formula $\varphi(x)$ with only one free variable $x$ in the language of $T$ to the code of the closed formula $\varphi(\ulcorner\varphi\urcorner)$ (i.e. the result of substituting $\ulcorner\varphi\urcorner$ for $x$ in $\varphi$), and which maps all other arguments to $0$. (The fact that $diag_T$ is recursive depends on the choice of the Gödel numbering, here the standard one.)
By the representation theorem, $T$ represents every recursive function. Thus there is a formula $\delta(x, y)$ representing $diag_T$; in particular, for each $\varphi(x)$, $T \vdash \delta(\ulcorner\varphi\urcorner, y) \leftrightarrow y = \ulcorner\varphi(\ulcorner\varphi\urcorner)\urcorner$.
Let $\psi(x)$ be an arbitrary formula with only $x$ as free variable. We now define $\chi(x)$ as $\exists y(\delta(x, y) \land \psi(y))$, and let $\varphi$ be $\chi(\ulcorner\chi\urcorner)$. Then the following equivalences are provable in $T$:
$\varphi \leftrightarrow \chi(\ulcorner\chi\urcorner) \leftrightarrow \exists y(\delta(\ulcorner\chi\urcorner, y) \land \psi(y)) \leftrightarrow \exists y(y = \ulcorner\chi(\ulcorner\chi\urcorner)\urcorner \land \psi(y)) \leftrightarrow \exists y(y = \ulcorner\varphi\urcorner \land \psi(y)) \leftrightarrow \psi(\ulcorner\varphi\urcorner)$.
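The substitution step in this proof is the same trick that makes quines possible. As an informal string-level analogue (our illustration only, not part of the lemma's formal statement), one can implement a diag function in Python that substitutes a quotation of a template into the template itself, yielding an expression that asserts a property of its own text:

```python
def diag(template: str) -> str:
    """String-level analogue of diag_T: substitute a quotation of the
    template into the template's own hole X."""
    return template.replace("X", repr(template))

# psi(s) here is the property "the string s has odd length".
# The template plays the role of chi(x), and phi = chi(<chi>):
template = "len(diag(X)) % 2 == 1"
phi = diag(template)

# Evaluating phi recomputes diag(template), which is phi itself,
# so eval(phi) is exactly psi applied to phi's own text.
assert eval(phi) == (len(phi) % 2 == 1)
```

The fixed-point flavor is visible in the last assertion: phi is true precisely when its own text satisfies the chosen property, mirroring $T \vdash \varphi \leftrightarrow \psi(\ulcorner\varphi\urcorner)$.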
There are various generalizations of the diagonal lemma; we present only three of them. In particular, combinations of the generalizations below yield new generalizations.[4] Let $T$ be a first-order theory containing $\mathsf{Q}$ (Robinson arithmetic).
Let $\psi(x, y_1, \dots, y_n)$ be any formula with free variables $x, y_1, \dots, y_n$.
Then there is a formula $\varphi(y_1, \dots, y_n)$ with free variables $y_1, \dots, y_n$ such that $T \vdash \varphi(y_1, \dots, y_n) \leftrightarrow \psi(\ulcorner\varphi(y_1, \dots, y_n)\urcorner, y_1, \dots, y_n)$.
Let $\psi(x, y_1, \dots, y_n)$ be any formula with free variables $x, y_1, \dots, y_n$.
Then there is a formula $\varphi(y_1, \dots, y_n)$ with free variables $y_1, \dots, y_n$ such that for all $m_1, \dots, m_n \in \mathbb{N}$, $T \vdash \varphi(\overline{m_1}, \dots, \overline{m_n}) \leftrightarrow \psi(\ulcorner\varphi(\overline{m_1}, \dots, \overline{m_n})\urcorner, \overline{m_1}, \dots, \overline{m_n})$.
Let $\psi_1(x_1, x_2)$ and $\psi_2(x_1, x_2)$ be formulae with free variables $x_1$ and $x_2$.
Then there are sentences $\varphi_1$ and $\varphi_2$ such that $T \vdash \varphi_1 \leftrightarrow \psi_1(\ulcorner\varphi_1\urcorner, \ulcorner\varphi_2\urcorner)$ and $T \vdash \varphi_2 \leftrightarrow \psi_2(\ulcorner\varphi_1\urcorner, \ulcorner\varphi_2\urcorner)$.
The case with $n$ many formulae is similar.
The lemma is called "diagonal" because it bears some resemblance to Cantor's diagonal argument.[5] The terms "diagonal lemma" or "fixed point" do not appear in Kurt Gödel's 1931 article or in Alfred Tarski's 1936 article.
In 1934, Rudolf Carnap was the first to publish the diagonal lemma at some level of generality: for any formula $\psi(x)$ with $x$ as free variable (in a sufficiently expressive language), there exists a sentence $\varphi$ such that $\varphi \leftrightarrow \psi(\ulcorner\varphi\urcorner)$ is true (in some standard model).[6] Carnap's work was phrased in terms of truth rather than provability (i.e. semantically rather than syntactically).[7] Note also that the concept of recursive functions was not yet developed in 1934.
The diagonal lemma is closely related to Kleene's recursion theorem in computability theory, and their respective proofs are similar.[8] In 1952, Léon Henkin asked whether sentences that state their own provability are provable. His question led to more general analyses of the diagonal lemma, especially in connection with Löb's theorem and provability logic.[9]
https://en.wikipedia.org/wiki/Diagonal_lemma
A ternary search algorithm[1] is a technique in computer science for finding the minimum or maximum of a unimodal function.
Assume we are looking for a maximum of $f(x)$ and that we know the maximum lies somewhere between $A$ and $B$. For the algorithm to be applicable, there must be some value $x$ such that $f$ is strictly increasing on $[A, x]$ and strictly decreasing on $[x, B]$.
Let $f(x)$ be a unimodal function on some interval $[l; r]$. Take any two points $m_1$ and $m_2$ in this segment: $l < m_1 < m_2 < r$. Then there are three possibilities: if $f(m_1) < f(m_2)$, the maximum cannot lie in $[l; m_1]$, so the search continues on $[m_1; r]$; if $f(m_1) > f(m_2)$, the maximum cannot lie in $[m_2; r]$, so the search continues on $[l; m_2]$; and if $f(m_1) = f(m_2)$, the maximum lies in $[m_1; m_2]$.
A common choice of the points $m_1$ and $m_2$ is $m_1 = l + (r - l)/3$ and $m_2 = r - (r - l)/3$, which discards one third of the interval on each iteration.
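The interval-shrinking rule above can be sketched directly in Python (the function and parameter names are ours, purely illustrative):

```python
def ternary_search_max(f, lo, hi, eps=1e-9):
    """Locate the maximizer of a unimodal function f on [lo, hi].

    Each iteration compares f at two interior points and discards the
    third of the interval that provably cannot contain the maximum.
    """
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1   # the maximum lies in [m1, hi]
        else:
            hi = m2   # the maximum lies in [lo, m2]
    return (lo + hi) / 2

# Example: f(x) = -(x - 2)^2 is unimodal with its maximum at x = 2.
x = ternary_search_max(lambda t: -(t - 2) ** 2, 0.0, 10.0)
```

Since each step keeps two thirds of the interval, the loop performs O(log((B − A)/ε)) evaluations of f.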
https://en.wikipedia.org/wiki/Ternary_search
In computational complexity theory, the linear search problem is an optimal search problem introduced by Richard E. Bellman[1] and independently considered by Anatole Beck.[2][3][4]
"An immobile hider is located on the real line according to a known probability distribution. A searcher, whose maximal velocity is one, starts from the origin and wishes to discover the hider in minimal expected time. It is assumed that the searcher can change the direction of his motion without any loss of time. It is also assumed that the searcher cannot see the hider until he actually reaches the point at which the hider is located, and the time elapsed until this moment is the duration of the game."
The problem is to find the hider in the shortest time possible. Generally, since the hider could be on either side of the searcher and an arbitrary distance away, the searcher has to oscillate back and forth: the searcher goes a distance x₁ in one direction, returns to the origin, goes a distance x₂ in the other direction, and so on (the length of the n-th step being denoted by xₙ). (However, an optimal solution need not have a first step and could start with an infinite number of small 'oscillations'.) This problem is usually called the linear search problem, and a search plan is called a trajectory.
The linear search problem for a general probability distribution is unsolved.[5] However, there exists a dynamic programming algorithm that produces a solution for any discrete distribution[6] and also an approximate solution, for any probability distribution, with any desired accuracy.[7]
The linear search problem was solved by Anatole Beck and Donald J. Newman (1970) as a two-person zero-sum game. Their minimax trajectory is to double the distance on each step, and the optimal strategy is a mixture of trajectories that increase the distance by some fixed constant.[8] This solution gives search strategies that are not sensitive to assumptions concerning the distribution of the target. Thus, it also presents an upper bound for a worst-case scenario. This solution was obtained in the framework of an online algorithm by Shmuel Gal, who also generalized this result to a set of concurrent rays.[9] The best online competitive ratio for the search on the line is 9, but it can be reduced to 4.6 by using a randomized strategy. Demaine et al. gave an online solution with a turn cost.[10]
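The competitive ratio of 9 for the doubling trajectory can be illustrated numerically. In the sketch below (our names and setup, assuming turn points 1, 2, 4, ... and an adversary who places the hider just beyond a turn point), the ratio of distance travelled to the hider's distance climbs toward, but never reaches, 9:

```python
def doubling_cost(n, eps):
    """Ratio of distance travelled to hider distance for the doubling
    trajectory (turn points 2^0, 2^1, ...) against a hider placed
    adversarially at distance 2^n + eps, just beyond the n-th turn.

    The searcher completes out-and-back sweeps to 2^0, ..., 2^(n+1)
    before the final walk of length d that reaches the hider.
    """
    d = 2 ** n + eps
    travelled = 2 * sum(2 ** i for i in range(n + 2)) + d
    return travelled / d

# The ratio increases toward the competitive ratio 9 as n grows:
ratios = [doubling_cost(n, 1e-9) for n in (2, 5, 10, 20)]
```

In closed form the ratio is 1 + 2(2^(n+2) − 1)/(2^n + eps), which tends to 9 − 2^(1−n) as eps shrinks, hence the supremum 9.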
These results were rediscovered in the 1990s by computer scientists as the cow path problem.
https://en.wikipedia.org/wiki/Linear_search_problem
Algorism is the technique of performing basic arithmetic by writing numbers in place-value form and applying a set of memorized rules and facts to the digits. One who practices algorism is known as an algorist. This positional notation system has largely superseded earlier calculation systems that used a different set of symbols for each numerical magnitude, such as Roman numerals, and in some cases required a device such as an abacus.
The word algorism comes from the name Al-Khwārizmī (c. 780–850), a Persian[2][3] mathematician, astronomer, geographer and scholar in the House of Wisdom in Baghdad, whose name means "the native of Khwarezm", which is now in modern-day Uzbekistan.[4][5][6] He wrote a treatise in the Arabic language in the 9th century, which was translated into Latin in the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.[7] Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through his other book, the Algebra.[8] In late medieval Latin, algorismus, the corruption of his name, simply meant the "decimal number system"; this is still the meaning of modern English algorism. During the 17th century, the French form of the word – but not its meaning – was changed to algorithm, following the model of the word logarithm, this form alluding to the ancient Greek arithmos, "number". English adopted the French form soon afterwards, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English.[9] In English, it was first used in about 1230 and then by Chaucer in 1391.[10] Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus:
Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris.
which translates as:
This present art, in which we use those twice five Indian figures, is called algorismus.
The word algorithm also derives from algorism, a generalization of the meaning to any set of rules specifying a computational procedure. Occasionally algorism is also used in this generalized meaning, especially in older texts.
Starting with the integer arithmetic developed in India using base-10 notation, Al-Khwārizmī, along with other mathematicians in medieval Islam, documented new arithmetic methods and made many other contributions to decimal arithmetic. These included the concept of decimal fractions as an extension of the notation, which in turn led to the notion of the decimal point. This system was popularized in Europe by Leonardo of Pisa, now known as Fibonacci.[11]
https://en.wikipedia.org/wiki/Algorism
In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers where each digit is represented by a fixed number of bits, usually four or eight. Sometimes, special bit patterns are used for a sign or other indications (e.g. error or overflow).
In byte-oriented systems (i.e. most modern computers), the term unpacked BCD[1] usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise four-bit encoding, however, may vary for technical reasons (e.g. Excess-3).
The ten states representing a BCD digit are sometimes called tetrades[2][3] (the nibble typically needed to hold them is also known as a tetrade) while the unused, don't-care states are named pseudo-tetrad(e)s,[4][5][6][7][8] pseudo-decimals,[3] or pseudo-decimal digits.[9][10][nb 1]
BCD's main virtue, in comparison to binary positional systems, is its more accurate representation and rounding of decimal quantities, as well as its ease of conversion into conventional human-readable representations. Its principal drawbacks are a slight increase in the complexity of the circuits needed to implement basic arithmetic, as well as slightly less dense storage.
BCD was used in many early decimal computers, and is implemented in the instruction set of machines such as the IBM System/360 series and its descendants, Digital Equipment Corporation's VAX, the Burroughs B1700, and the Motorola 68000-series processors.
BCD per se is not as widely used as in the past, and is unavailable or limited in newer instruction sets (e.g., ARM; x86 in long mode). However, decimal fixed-point and decimal floating-point formats are still important and continue to be used in financial, commercial, and industrial computing, where the subtle conversion and fractional rounding errors that are inherent in binary floating-point formats cannot be tolerated.[11]
BCD takes advantage of the fact that any one decimal numeral can be represented by a four-bit pattern. An obvious way of encoding digits is Natural BCD (NBCD), where each decimal digit is represented by its corresponding four-bit binary value, as shown in the following table. This is also called "8421" encoding.
This scheme can also be referred to as Simple Binary-Coded Decimal (SBCD) or BCD 8421, and is the most common encoding.[12] Others include the so-called "4221" and "7421" encoding – named after the weighting used for the bits – and "Excess-3".[13] For example, the BCD digit 6, 0110'b in 8421 notation, is 1100'b in 4221 (two encodings are possible) and 0110'b in 7421, while in Excess-3 it is 1001'b (6 + 3 = 9).
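The 8421 and Excess-3 encodings just described can be checked with a couple of lines of Python (the helper names are ours, purely illustrative):

```python
def bcd_8421(d):
    """Natural (8421) BCD: the digit's own 4-bit binary value."""
    return format(d, "04b")

def excess_3(d):
    """Excess-3: the 8421 pattern of d + 3."""
    return format(d + 3, "04b")

# The digit 6 in the encodings discussed above:
assert bcd_8421(6) == "0110"
assert excess_3(6) == "1001"   # 6 + 3 = 9 -> 1001
```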
The following table represents decimal digits from 0 to 9 in various BCD encoding systems. In the headers, "8421" indicates the weight of each bit. In the fifth column ("BCD 8 4 −2 −1"), two of the weights are negative. Both ASCII and EBCDIC character codes for the digits, which are examples of zoned BCD, are also shown.
As most computers deal with data in 8-bit bytes, it is possible to use one of the following methods to encode a BCD number:
As an example, encoding the decimal number 91 using unpacked BCD results in the following binary pattern of two bytes: 0000 1001 (the digit 9) followed by 0000 0001 (the digit 1).
In packed BCD, the same number would fit into a single byte: 1001 0001.
Hence the numerical range for one unpacked BCD byte is zero through nine inclusive, whereas the range for one packed BCD byte is zero through ninety-nine inclusive.
To represent numbers larger than the range of a single byte, any number of contiguous bytes may be used. For example, to represent the decimal number 12345 in packed BCD, using big-endian format, a program would encode as follows:
Here, the most significant nibble of the most significant byte has been encoded as zero, so the number is stored as 012345 (but formatting routines might replace or remove leading zeros). Packed BCD is more efficient in storage usage than unpacked BCD; encoding the same number (with the leading zero) in unpacked format would consume twice the storage.
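The packing scheme just described is easy to sketch in Python (helper names are ours; real implementations differ in sign handling and error checking):

```python
def pack_bcd(n: int, nbytes: int) -> bytes:
    """Packed BCD: two decimal digits per byte, big-endian, zero-padded."""
    digits = str(n).zfill(nbytes * 2)
    if len(digits) > nbytes * 2:
        raise ValueError("value does not fit in the requested width")
    return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

def unpack_bcd(b: bytes) -> int:
    """Inverse: read each nibble back as one decimal digit."""
    n = 0
    for byte in b:
        n = n * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    return n

# 12345 in three bytes, with a leading zero nibble, as described above:
assert pack_bcd(12345, 3) == bytes([0x01, 0x23, 0x45])
assert unpack_bcd(pack_bcd(12345, 3)) == 12345
```

Note that packing and unpacking reduce to the shift-and-mask operations mentioned below: a left shift by 4 places a digit in the upper nibble, and masking with 0x0F recovers the lower one.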
Shiftingandmaskingoperations are used to pack or unpack a packed BCD digit. Otherbitwise operationsare used to convert a numeral to its equivalent bit pattern or reverse the process.
Some computers whose words are multiples of an octet (8-bit byte), for example contemporary IBM mainframe systems, support packed BCD (or packed decimal[38]) numeric representations, in which each nibble represents either a decimal digit or a sign.[nb 8] Packed BCD has been in use since at least the 1960s and is implemented in all IBM mainframe hardware since then. Most implementations are big-endian, i.e. with the more significant digit in the upper half of each byte, and with the leftmost byte (residing at the lowest memory address) containing the most significant digits of the packed decimal value. The lower nibble of the rightmost byte is usually used as the sign flag, although some unsigned representations lack a sign flag.
As an example, a 4-byte value consists of 8 nibbles, wherein the upper 7 nibbles store the digits of a 7-digit decimal value, and the lowest nibble indicates the sign of the decimal integer value. Standard sign values are 1100 (hex C) for positive (+) and 1101 (hex D) for negative (−). This convention comes from the zone field for EBCDIC characters and the signed overpunch representation.
Other allowed signs are 1010 (A) and 1110 (E) for positive and 1011 (B) for negative. IBM System/360 processors use the 1010 (A) and 1011 (B) signs if the A bit is set in the PSW, for the proposed ASCII-8 standard that was never adopted. Most implementations also provide unsigned BCD values with a sign nibble of 1111 (F).[39][40][41] ILE RPG uses 1111 (F) for positive and 1101 (D) for negative.[42] These match the EBCDIC zone for digits without a sign overpunch. In packed BCD, the number 127 is represented by 0001 0010 0111 1100 (127C) and −127 is represented by 0001 0010 0111 1101 (127D). Burroughs systems used 1101 (D) for negative; any other value is considered a positive sign (the processors normalize a positive sign to 1100 (C)).
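An encoder for this sign-trailing layout can be sketched as follows, using the standard C/D sign codes (the helper name and fixed-width convention are illustrative):

```python
SIGN_POS, SIGN_NEG = 0xC, 0xD      # standard EBCDIC-derived sign nibbles

def encode_signed_packed(value, ndigits):
    """Encode an integer as packed BCD with a trailing sign nibble."""
    nibbles = [int(d) for d in str(abs(value)).zfill(ndigits)]
    nibbles.append(SIGN_NEG if value < 0 else SIGN_POS)
    if len(nibbles) % 2:            # pad on the left to a whole byte
        nibbles.insert(0, 0)
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

print(encode_signed_packed(127, 3).hex())       # 127c
print(encode_signed_packed(-127, 3).hex())      # 127d
```

The same routine reproduces the 4-byte example discussed below: `encode_signed_packed(-1234567, 7)` gives the bytes `12 34 56 7D`.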
No matter how many bytes wide a word is, there is always an even number of nibbles because each byte has two of them. Therefore, a word of n bytes can contain up to (2n)−1 decimal digits, which is always an odd number of digits. A decimal number with d digits requires (d+1)/2 bytes of storage space.
For example, a 4-byte (32-bit) word can hold seven decimal digits plus a sign and can represent values ranging from −9,999,999 to +9,999,999. Thus the number −1,234,567 is 7 digits wide and is encoded as:
Like character strings, the first byte of the packed decimal – that with the most significant two digits – is usually stored in the lowest address in memory, independent of theendiannessof the machine.
In contrast, a 4-byte binarytwo's complementinteger can represent values from −2,147,483,648 to +2,147,483,647.
While packed BCD does not make optimal use of storage (using about 20% more memory than binary notation to store the same numbers), conversion to ASCII, EBCDIC, or the various encodings of Unicode is made trivial, as no arithmetic operations are required. The extra storage requirements are usually offset by the need for the accuracy and compatibility with calculator or hand calculation that fixed-point decimal arithmetic provides. Denser packings of BCD exist which avoid the storage penalty and also need no arithmetic operations for common conversions.
Packed BCD is supported in the COBOL programming language as the "COMPUTATIONAL-3" (an IBM extension adopted by many other compiler vendors) or "PACKED-DECIMAL" (part of the 1985 COBOL standard) data type. It is supported in PL/I as "FIXED DECIMAL". Besides the IBM System/360 and later compatible mainframes, packed BCD is implemented in the native instruction set of the original VAX processors from Digital Equipment Corporation and some models of the SDS Sigma series mainframes, and is the native format for the Burroughs Medium Systems line of mainframes (descended from the 1950s Electrodata 200 series).
Ten's complement representations for negative numbers offer an alternative approach to encoding the sign of packed (and other) BCD numbers. In this case, positive numbers always have a most significant digit between 0 and 4 (inclusive), while negative numbers are represented by the ten's complement of the corresponding positive number.
As a result, this system allows for 32-bit packed BCD numbers to range from −50,000,000 to +49,999,999, and −1 is represented as 99999999. (As with two's complement binary numbers, the range is not symmetric about zero.)
Fixed-point decimal numbers are supported by some programming languages (such as COBOL and PL/I). These languages allow the programmer to specify an implicit decimal point in front of one of the digits.
For example, a packed decimal value encoded with the bytes 12 34 56 7C represents the fixed-point value +1,234.567 when the implied decimal point is located between the fourth and fifth digits:
The decimal point is not actually stored in memory, as the packed BCD storage format does not provide for it. Its location is simply known to the compiler, and the generated code acts accordingly for the various arithmetic operations.
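A decoder that applies such an implied decimal point can be sketched as follows; the layout (sign-trailing packed BCD, scale supplied by the caller) matches the example above, while the function names are illustrative:

```python
from decimal import Decimal

def decode_packed(data):
    """Decode sign-trailing packed BCD bytes to a signed integer."""
    nibbles = [n for b in data for n in (b >> 4, b & 0x0F)]
    *digits, sign = nibbles
    value = int("".join(map(str, digits)))
    return -value if sign in (0xD, 0xB) else value

def decode_fixed_point(data, scale):
    """Apply an implied decimal point `scale` digits from the right."""
    return Decimal(decode_packed(data)).scaleb(-scale)

print(decode_fixed_point(bytes.fromhex("1234567c"), 3))   # 1234.567
```

The scale itself is never stored; as the text notes, it lives only in the program (here, the `scale` argument plays the role of the compiler's knowledge).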
If a decimal digit requires four bits, then three decimal digits require 12 bits. However, since 2^10 (1,024) is greater than 10^3 (1,000), three decimal digits can be encoded together in only 10 bits. Two such encodings are Chen–Ho encoding and densely packed decimal (DPD). The latter has the advantage that subsets of the encoding encode two digits in the optimal seven bits and one digit in four bits, as in regular BCD.
Some implementations, for example IBM mainframe systems, support zoned decimal numeric representations. Each decimal digit is stored in one 8-bit[nb 9] byte, with the lower four bits encoding the digit in BCD form. The upper four[nb 10] bits, called the "zone" bits, are usually set to a fixed value so that the byte holds a character value corresponding to the digit, or to values representing plus or minus. EBCDIC[nb 11] systems use a zone value of 1111 (hex F), yielding F0–F9 (hex), the codes for "0" through "9"; a zone value of 1100 (hex C) for positive, yielding C0–C9, the codes for "{" through "I"; and a zone value of 1101 (hex D) for negative, yielding D0–D9, the codes for the characters "}" through "R". Similarly, ASCII systems use a zone value of 0011 (hex 3), giving character codes 30 to 39 (hex).
For signed zoned decimal values, the rightmost (least significant) zone nibble holds the sign digit, which is the same set of values that are used for signed packed decimal numbers (see above). Thus a zoned decimal value encoded as the hex bytes F1 F2 D3 represents the signed decimal value −123:
(*) Note: These characters vary depending on the local character code page setting.
Some languages (such as COBOL and PL/I) directly support fixed-point zoned decimal values, assigning an implicit decimal point at some location between the decimal digits of a number.
For example, given a six-byte signed zoned decimal value with an implied decimal point to the right of the fourth digit, the hex bytes F1 F2 F7 F9 F5 C0 represent the value +1,279.50:
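Decoding the two zoned decimal examples above can be sketched as follows (EBCDIC zones assumed; the helper is illustrative):

```python
def decode_zoned(data):
    """Decode signed zoned decimal: digits in the low nibbles, with the
    zone of the last byte carrying the sign (D negative, C/F positive)."""
    value = int("".join(str(b & 0x0F) for b in data))
    return -value if (data[-1] >> 4) == 0xD else value

print(decode_zoned(bytes.fromhex("f1f2d3")))        # -123
print(decode_zoned(bytes.fromhex("f1f2f7f9f5c0")))  # 127950
```

With the implied decimal point two digits from the right, the second result reads as +1,279.50; as with packed BCD, that scaling is applied by the program, not stored in the data.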
It is possible to perform addition by first adding in binary, and then converting to BCD afterwards. Conversion of the simple sum of two digits can be done by adding 6 (that is, 16 − 10) when the five-bit result of adding a pair of digits has a value greater than 9. The reason for adding 6 is that there are 16 possible 4-bit BCD values (since 2^4 = 16), but only 10 values are valid (0000 through 1001). For example:
10001 is the binary, not decimal, representation of the desired result, but the most significant 1 (the "carry") cannot fit in a 4-bit binary number. In BCD as in decimal, there cannot exist a value greater than 9 (1001) per digit. To correct this, 6 (0110) is added to the total, and then the result is treated as two nibbles:
The two nibbles of the result, 0001 and 0111, correspond to the digits "1" and "7". This yields "17" in BCD, which is the correct result.
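The single-digit add-and-adjust step just described can be sketched as follows (the function name is illustrative):

```python
def bcd_digit_add(a, b, carry_in=0):
    """Add two BCD digits in binary; when the sum exceeds 9, add 6 to
    skip the six unused 4-bit codes. Returns (carry_out, digit)."""
    s = a + b + carry_in
    if s > 9:
        s += 6
    return (s >> 4) & 1, s & 0x0F

print(bcd_digit_add(9, 8))   # (1, 7): 9 + 8 = 17 in BCD
```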
This technique can be extended to adding multiple digits by adding in groups from right to left, propagating the second digit as a carry, always comparing the 5-bit result of each digit-pair sum to 9. Some CPUs provide a half-carry flag to facilitate BCD arithmetic adjustments following binary addition and subtraction operations. The Intel 8080, the Zilog Z80 and the CPUs of the x86 family provide the opcode DAA (Decimal Adjust Accumulator).
Subtraction is done by adding the ten's complement of the subtrahend to the minuend. To represent the sign of a number in BCD, the digit 0000 is used to represent a positive number, and 1001 is used to represent a negative number. The remaining 14 combinations are invalid signs. To illustrate signed BCD subtraction, consider the following problem: 357 − 432.
In signed BCD, 357 is 0000 0011 0101 0111. The ten's complement of 432 can be obtained by taking the nine's complement of 432, and then adding one. So, 999 − 432 = 567, and 567 + 1 = 568. By preceding 568 in BCD with the negative sign code, the number −432 can be represented. So, −432 in signed BCD is 1001 0101 0110 1000.
Now that both numbers are represented in signed BCD, they can be added together:
Since BCD is a form of decimal representation, several of the digit sums above are invalid. In the event that an invalid entry (any BCD digit greater than 1001) exists, 6 is added to generate a carry bit and cause the sum to become a valid entry. So, adding 6 to the invalid entries results in the following:
Thus the result of the subtraction is 1001 1001 0010 0101 (−925). To confirm the result, note that the first digit is 9, which means negative. This seems to be correct since 357 − 432 should result in a negative number. The remaining nibbles are BCD, so 1001 0010 0101 is 925. The ten's complement of 925 is 1000 − 925 = 75, so the calculated answer is −75.
If there are a different number of nibbles being added together (such as 1053 − 2), the number with the fewer digits must first be prefixed with zeros before taking the ten's complement or subtracting. So, with 1053 − 2, 2 would have to first be represented as 0002 in BCD, and the ten's complement of 0002 would have to be calculated.
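The whole procedure, worked on decimal digits rather than raw nibbles, can be sketched as follows; the sign convention (0 positive, 9 negative) and the 357 − 432 example come from the text, while the function names are illustrative:

```python
def tens_complement(n, width):
    """Ten's complement of an unsigned number of `width` decimal digits."""
    return (10 ** width - n) % (10 ** width)

def signed_bcd_subtract(minuend, subtrahend, width=3):
    """Subtract via ten's-complement addition, sign digit 0 (+) or 9 (-)."""
    # +minuend plus (sign 9, magnitude = ten's complement of subtrahend)
    total = minuend + 9 * 10 ** width + tens_complement(subtrahend, width)
    total %= 10 ** (width + 1)           # drop any carry out of the sign
    sign, magnitude = divmod(total, 10 ** width)
    if sign == 9:                        # negative: recover the magnitude
        return -tens_complement(magnitude, width)
    return magnitude

print(signed_bcd_subtract(357, 432))     # -75
```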
IBM used the term Binary-Coded Decimal Interchange Code (BCDIC, sometimes just called BCD) for 6-bit alphanumeric codes that represented numbers, upper-case letters and special characters. Some variation of BCDIC alphamerics is used in most early IBM computers, including the IBM 1620 (introduced in 1959), the IBM 1400 series, and non-decimal architecture members of the IBM 700/7000 series.
The IBM 1400 series are character-addressable machines, each location being six bits labeled B, A, 8, 4, 2 and 1, plus an odd parity check bit (C) and a word mark bit (M). For encoding digits 1 through 9, B and A are zero and the digit value is represented by standard 4-bit BCD in bits 8 through 1. For most other characters, bits B and A are derived simply from the "12", "11", and "0" "zone punches" in the punched card character code, and bits 8 through 1 from the 1 through 9 punches. A "12 zone" punch set both B and A, an "11 zone" set B, and a "0 zone" (a 0 punch combined with any others) set A. Thus the letter A, which is (12,1) in the punched card format, is encoded (B,A,1). The currency symbol $, (11,8,3) in the punched card, was encoded in memory as (B,8,2,1). This allowed the circuitry that converts between the punched card format and the internal storage format to be very simple, with only a few special cases. One important special case is digit 0, represented by a lone 0 punch in the card, and (8,2) in core memory.[43]
The memory of the IBM 1620 is organized into 6-bit addressable digits, the usual 8, 4, 2, 1 plus F, used as a flag bit, and C, an odd parity check bit. BCD alphamerics are encoded using digit pairs, with the "zone" in the even-addressed digit and the "digit" in the odd-addressed digit, the "zone" being related to the 12, 11, and 0 "zone punches" as in the 1400 series. Input/output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.
In the decimal architecture IBM 7070, IBM 7072, and IBM 7074, alphamerics are encoded using digit pairs (using two-out-of-five code in the digits, not BCD) of the 10-digit word, with the "zone" in the left digit and the "digit" in the right digit. Input/output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.
With the introduction of System/360, IBM expanded 6-bit BCD alphamerics to 8-bit EBCDIC, allowing the addition of many more characters (e.g., lowercase letters). A variable-length packed BCD numeric data type was also implemented, providing machine instructions that perform arithmetic directly on packed decimal data.
On the IBM 1130 and 1800, packed BCD is supported in software by IBM's Commercial Subroutine Package.
Today, BCD data is still heavily used in IBM databases such as IBM Db2 and in processors such as z/Architecture and the POWER6 and later Power ISA processors. In these products, the BCD is usually zoned BCD (as in EBCDIC or ASCII), packed BCD (two decimal digits per byte), or "pure" BCD encoding (one decimal digit stored as BCD in the low four bits of each byte). All of these are used within hardware registers and processing units, and in software.
The Digital Equipment Corporation VAX series includes instructions that can perform arithmetic directly on packed BCD data and convert between packed BCD data and other integer representations.[41] The VAX's packed BCD format is compatible with that of the IBM System/360 and IBM's later compatible processors. The MicroVAX and later VAX implementations dropped this ability from the CPU but retained code compatibility with earlier machines by implementing the missing instructions in an operating-system-supplied software library. This is invoked automatically via exception handling when the defunct instructions are encountered, so that programs using them can execute without modification on the newer machines.
Many processors have hardware support for BCD-encoded integer arithmetic; examples include the 6502,[44][45] the Motorola 68000 series,[46] and the x86 series.[47] The Intel x86 architecture supports a unique 18-digit (ten-byte) BCD format that can be loaded into and stored from the floating-point registers, where computations can be performed.[48]
In more recent computers such capabilities are almost always implemented in software rather than the CPU's instruction set, but BCD numeric data are still extremely common in commercial and financial applications.
There are tricks for implementing packed BCD and zoned decimal add-or-subtract operations using short but difficult-to-understand sequences of word-parallel logic and binary arithmetic operations.[49] For example, an unsigned 8-digit packed BCD addition can be computed with a short sequence of 32-bit binary operations; the cited example is written in C.
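One formulation of this word-parallel trick, rendered here in Python as a sketch (unbounded integers stand in for the 32-bit words of a C version, which conveniently keeps the carry out of the top digit; the constant names are illustrative):

```python
def bcd_add8(a, b):
    """Word-parallel add of two 8-digit packed-BCD values.
    Returns (sum as packed BCD, decimal carry out)."""
    t1 = a + 0x66666666                 # pre-bias every digit by 6
    t2 = t1 + b                         # provisional binary sum
    carries = t1 ^ b ^ t2               # binary carry-in bit at each position
    no_carry = ~carries & 0x111111110   # nibbles with no decimal carry-out
    fix = (no_carry >> 2) | (no_carry >> 3)   # 6 below each such nibble
    result = t2 - fix                   # remove the bias where it was unused
    return result & 0xFFFFFFFF, result >> 32

print(hex(bcd_add8(0x12345678, 0x11111111)[0]))   # 0x23456789
```

The idea: pre-biasing by 6 forces a binary carry out of exactly those nibbles that produce a decimal carry; wherever no carry occurred, the bias is subtracted back out.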
BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit.
This matches much more closely the physical reality of display hardware: a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing with such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to an overall simpler system than converting to and from binary. Most pocket calculators do all their calculations in BCD.
The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, representing numbers internally in BCD format results in smaller code, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature dedicated arithmetic modes, which assist when writing routines that manipulate BCD quantities.[50][51]
Various BCD implementations exist that employ other representations for numbers. Programmable calculators manufactured by Texas Instruments, Hewlett-Packard, and others typically employ a floating-point BCD format, typically with two or three digits for the (decimal) exponent. The extra bits of the sign digit may be used to indicate special numeric values, such as infinity, underflow/overflow, and error (a blinking display).
Signed decimal values may be represented in several ways. The COBOL programming language, for example, supports five zoned decimal formats, with each one encoding the numeric sign in a different way:
3GPP developed TBCD,[53] an expansion to BCD where the remaining (unused) bit combinations are used to add specific telephony symbols,[54][55] similar to those in telephone keypad design.
The mentioned 3GPP document defines TBCD-STRING with swapped nibbles in each byte. Bits, octets, and digits are indexed from 1: bits from the right, digits and octets from the left.
bits 8765 of octet n encoding digit 2n
bits 4321 of octet n encoding digit 2(n − 1) + 1
Thus the number 1234 would become 21 43 in TBCD.
This format is used in modern mobile telephony to send dialed numbers, as well as the operator ID (the MCC/MNC tuple), IMEI, IMSI (SUPI), etc.[56][57]
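The nibble swap can be sketched as follows; the filler nibble 0xF for odd-length numbers follows common TBCD practice, and the function name is illustrative:

```python
def to_tbcd(number_str):
    """Encode a digit string as TBCD: within each octet the two digits
    are stored with their nibbles swapped; odd-length numbers are
    padded with the filler nibble 0xF."""
    if len(number_str) % 2:
        number_str += "F"
    return bytes((int(number_str[i + 1], 16) << 4) | int(number_str[i], 16)
                 for i in range(0, len(number_str), 2))

print(to_tbcd("1234").hex())   # 2143
```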
If errors in representation and computation are more important than the speed of conversion to and from display, a scaled binary representation may be used, which stores a decimal number as a binary-encoded integer and a binary-encoded signed decimal exponent. For example, 0.2 can be represented as 2×10^−1.
This representation allows rapid multiplication and division, but may require shifting by a power of 10 during addition and subtraction to align the decimal points. It is appropriate for applications with a fixed number of decimal places that do not then require this adjustment, particularly financial applications where 2 or 4 digits after the decimal point are usually enough. Indeed, this is almost a form of fixed-point arithmetic, since the position of the radix point is implied.
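The asymmetry between multiplication (no alignment) and addition (alignment needed) can be sketched as follows, with a number held as an illustrative (coefficient, exponent) pair meaning coefficient × 10^exponent:

```python
from fractions import Fraction

def mul(a, b):
    """Multiply: multiply coefficients, add exponents; no alignment."""
    return (a[0] * b[0], a[1] + b[1])

def add(a, b):
    """Add: first shift the coefficient with the larger exponent so
    both operands share the smaller exponent (decimal-point alignment)."""
    e = min(a[1], b[1])
    return (a[0] * 10 ** (a[1] - e) + b[0] * 10 ** (b[1] - e), e)

def value(x):
    """Exact value of a scaled number, for checking."""
    return Fraction(x[0]) * Fraction(10) ** x[1]

print(add((2, -1), (3, -2)))   # (23, -2), i.e. 0.2 + 0.03 = 0.23
```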
The Hertz and Chen–Ho encodings provide Boolean transformations for converting groups of three BCD-encoded digits to and from 10-bit values[nb 1] that can be efficiently encoded in hardware with only 2 or 3 gate delays. Densely packed decimal (DPD) is a similar scheme[nb 1] that is used for most of the significand, except the lead digit, in one of the two alternative decimal encodings specified in the IEEE 754-2008 floating-point standard.
The BIOS in many personal computers stores the date and time in BCD because the MC6818 real-time clock chip used in the original IBM PC AT motherboard provided the time encoded in BCD. This form is easily converted into ASCII for display.[58][59]
The Atari 8-bit computers use a BCD format for floating-point numbers. The MOS Technology 6502 processor has a BCD mode for the addition and subtraction instructions. The Psion Organiser 1 handheld computer's manufacturer-supplied software also uses BCD to implement floating point; later Psion models use binary exclusively.
Early models of the PlayStation 3 store the date and time in BCD. This led to a worldwide outage of the console on 1 March 2010. The last two digits of the year stored as BCD were misinterpreted as 16, causing an error in the unit's date and rendering most functions inoperable. This has been referred to as the Year 2010 problem.
In the 1972 case Gottschalk v. Benson, the U.S. Supreme Court overturned a lower court's decision that had allowed a patent for converting BCD-encoded numbers to binary on a computer.
The decision noted that a patent "would wholly pre-empt the mathematical formula and in practical effect would be a patent on the algorithm itself".[60] This was a landmark judgement that determined the patentability of software and algorithms.
https://en.wikipedia.org/wiki/Binary-coded_decimal
Decimal classification is a type of library classification. Examples include:
https://en.wikipedia.org/wiki/Decimal_classification
A decimal computer is a computer that represents and operates on numbers and addresses in decimal format, instead of binary as is common in most modern computers. Some decimal computers had a variable word length, which enabled operations on relatively large numbers.
Decimal computers were common from the early machines through the 1960s and into the 1970s. Using decimal directly saved the need to convert from decimal to binary for input and output and offered a significant speed improvement over binary machines that performed these conversions using subroutines. This allowed otherwise low-end machines to offer practical performance for roles like accounting and bookkeeping, and many low- and mid-range systems of the era were decimal based.
The IBM System/360 line of binary computers, announced in 1964, included instructions that perform decimal arithmetic; other lines of binary computers with decimal arithmetic instructions followed. During the 1970s, microprocessors with instructions supporting decimal arithmetic became common in electronic calculators, cash registers and similar roles, especially in the 8-bit era.
The rapid improvements in general performance of binary machines eroded the value of decimal operations. One of the last major new designs to support them was the Motorola 68000, which shipped in 1980. More recently, IBM added decimal support to their POWER6 designs to allow them to directly support programs written for 1960s platforms like the System/360. With that exception, most modern designs have little or no decimal support.
Early computers that were exclusively decimal include the ENIAC, IBM NORC, IBM 650, IBM 1620, IBM 7070, and UNIVAC Solid State 80. In these machines, the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal (BCD), bi-quinary and two-out-of-five code. Except for the IBM 1620 and 1710, these machines used word addressing. When non-numeric characters were used in these machines, they were encoded as two decimal digits.
Other early computers were character oriented, providing instructions for performing arithmetic on character strings of decimal numerals, using BCD or excess-3 (XS-3)[1] for decimal digits. On these machines, the basic data element was an alphanumeric character, typically encoded in six bits. UNIVAC I and UNIVAC II used word addressing, with 12-character words. IBM examples include the IBM 702, IBM 705, the IBM 1400 series,[2] IBM 7010, and the IBM 7080.
Some early binary computers, such as the Honeywell 800[3] and the RCA 601,[4][5] also had decimal arithmetic instructions. Some others had special instructions, such as CVR and CAQ on the IBM 7090, that could be used to speed up decimal addition and the conversion of decimal to binary.[6]
The IBM System/360 family of computers, introduced in 1964 to unify IBM's product lines, uses binary addressing, binary integer arithmetic, and binary floating-point; it also includes instructions for packed decimal integer arithmetic.[7]
Some other lines of binary computers added decimal arithmetic instructions. For example, the Honeywell 6000 series, based on the binary GE-600 series, offered, in some models, an Extended Instruction Set that supported packed decimal integer arithmetic and decimal floating-point arithmetic.[8]
IBM's lines of midrange computers, starting with the System/3 in 1969,[9] are binary computers with decimal integer instructions.
The VAX line of 32-bit binary computers from Digital Equipment Corporation, introduced in 1977, also includes packed decimal integer arithmetic instructions.
The Burroughs Medium Systems, beginning with the Burroughs B2500 and B3500 in 1966, provide only decimal arithmetic, including decimal addressing, making the line a decimal architecture.
Support for BCD was common in early microprocessors, which were often used in roles like electronic calculators and cash registers where the math was all decimal. Examples of such support can be found in the Intel 8080, MOS 6502, Zilog Z80, Motorola 6800/6809 and most other designs of the era. In these designs, BCD was directly supported in the ALU, allowing it to perform operations on decimal data directly.
Intel BCD opcodes have remained in the x86 family to this day, although they are not supported in long mode. These instructions convert one-byte BCD numbers (packed and unpacked) to binary format before or after arithmetic operations.[10] These operations were not extended to wider formats and hence are now slower than using 32-bit or wider BCD "tricks" to compute in BCD.[11] The x87 FPU has instructions to convert 10-byte (18 decimal digits) packed decimal data, although it then operates on them as floating-point numbers.
The Motorola 68000 series offered both conversion utilities and the ability to directly add and subtract in BCD.[12] These instructions were removed when the ColdFire instruction set was defined.
The 2008 revision of the IEEE 754 floating-point standard adds three decimal types with two binary encodings, with 7-, 16-, and 34-digit decimal significands.[13]
One of the few RISC instruction sets to directly support decimal is IBM's Power ISA, which added support for IEEE 754-2008 decimal floating-point starting with Power ISA 2.05. Decimal integer support had been part of their mainframe line, and as part of the broader effort to merge the iSeries and zSeries, decimal arithmetic was added to the POWER line so that a single processor could support workloads from these older machines with full performance.[citation needed] The IBM POWER6 processor is the first Power ISA processor to implement these types, using the densely packed decimal binary encoding rather than BCD.[14] Starting with Power ISA 3.0, decimal integer arithmetic instructions were added.
z/Architecture, the 64-bit version of IBM's mainframe instruction set, added support for the same encodings of IEEE 754 decimal floating-point, starting with the IBM System z9.[14] Starting with the z15 processor, vector instructions to perform decimal integer arithmetic were added.[15]
https://en.wikipedia.org/wiki/Decimal_computer
Decimal time is the representation of the time of day using units which are decimally related. This term is often used specifically to refer to the French Republican calendar time system used in France from 1794 to 1800, during the French Revolution, which divided the day into 10 decimal hours, each decimal hour into 100 decimal minutes and each decimal minute into 100 decimal seconds (100,000 decimal seconds per day), as opposed to the more familiar standard time, which divides the day into 24 hours, each hour into 60 minutes and each minute into 60 seconds (86,400 SI seconds per day).
The main advantage of a decimal time system is that, since the base used to divide the time is the same as the one used to represent it, the representation of hours, minutes and seconds can be handled as a unified value. Therefore, it becomes simpler to interpret a timestamp and to perform conversions. For instance, 1h 23m 45s is 1 decimal hour, 23 decimal minutes, and 45 decimal seconds, or 1.2345 decimal hours, or 123.45 decimal minutes, or 12345 decimal seconds; 3 hours is 300 minutes or 30,000 seconds.
This property also makes it straightforward to represent a timestamp as a fractional day, so that 2025-05-20.54321 can be interpreted as five decimal hours, 43 decimal minutes and 21 decimal seconds after the start of that day, or a fraction of 0.54321 (54.321%) through that day (which is shortly after traditional 13:00). It also adjusts well to digital time representation using epochs, in that the internal time representation can be used directly both for computation and for user-facing display.
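The conversion between the two systems reduces to taking the fraction of the day elapsed, since both span exactly one day (86,400 standard seconds = 100,000 decimal seconds). A sketch, with an illustrative function name:

```python
def to_decimal_time(h, m, s):
    """Convert standard h:m:s to (decimal hours, minutes, seconds)."""
    frac = (h * 3600 + m * 60 + s) / 86400   # fraction of the day elapsed
    total = round(frac * 100000)             # decimal seconds per day
    dh, rem = divmod(total, 10000)           # 10,000 decimal seconds/hour
    dm, ds = divmod(rem, 100)                # 100 decimal seconds/minute
    return dh, dm, ds

print(to_decimal_time(12, 0, 0))   # (5, 0, 0): noon is five decimal hours
```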
The decans are 36 groups of stars (small constellations) used in ancient Egyptian astronomy to conveniently divide the 360-degree ecliptic into 36 parts of 10 degrees each. Because a new decan also appears heliacally every ten days (that is, every ten days, a new decanic star group reappears in the eastern sky at dawn right before the Sun rises, after a period of being obscured by the Sun's light), the ancient Greeks called them dekanoi (δεκανοί; pl. of δεκανός dekanos) or "tens". A ten-day period between the risings of two consecutive decans is a decade. There were 36 decades (36 × 10 = 360 days), plus five added days to compose the 365 days of a solar-based year.
Decimal time was used in China throughout most of its history, alongside duodecimal time. The midnight-to-midnight day was divided both into 12 double hours (traditional Chinese: 時辰; simplified Chinese: 时辰; pinyin: shí chén) and also into 10 shi / 100 ke (Chinese: 刻; pinyin: kè) by the 1st millennium BC.[1][2] Other numbers of ke per day were used during three short periods: 120 ke from 5 to 3 BC, 96 ke from 507 to 544 CE, and 108 ke from 544 to 565. Several of the roughly 50 Chinese calendars also divided each ke into 100 fen, although others divided each ke into 60 fen. In 1280, the Shoushi (Season Granting) calendar further subdivided each fen into 100 miao, creating a complete decimal time system of 100 ke, 100 fen and 100 miao.[3] Chinese decimal time ceased to be used in 1645 when the Shíxiàn calendar, based on European astronomy and brought to China by the Jesuits, adopted 96 ke per day alongside 12 double hours, making each ke exactly one-quarter hour.[4]
Gēng (更) is a time signal given by drum or gong. The character for gēng 更, literally meaning "rotation" or "watch", comes from the rotation of watchmen sounding these signals. The first gēng theoretically comes at sundown, but was standardized to fall at 19:12. The time between each gēng is 1⁄10 of a day, making a gēng 2.4 hours long (2 hours 24 minutes). As a 10-part system, the gēng are strongly associated with the 10 celestial stems, especially since the stems are used to count off the gēng during the night in Chinese literature.
As early as the Bronze-Age Xia dynasty, days were grouped into ten-day weeks known as xún (旬). Months consisted of three xún. The first 10 days were the early xún (上旬), the middle 10 the mid xún (中旬), and the last nine or 10 days were the late xún (下旬). Japan adopted this pattern, with 10-day-weeks known as jun (旬). In Korea, they were known as sun (순,旬).
In 1754, Jean le Rond d'Alembert wrote in the Encyclopédie:
In 1788, Claude Boniface Collignon proposed dividing the day into 10 hours or 1,000 minutes, each new hour into 100 minutes, each new minute into 1,000 seconds, and each new second into 1,000 tierces (older French for "third"). The distance the twilight zone travels in one such tierce at the equator, which would be one-billionth of the circumference of the Earth, would be a new unit of length, provisionally called a half-handbreadth, equal to four modern centimetres. Further, the new tierce would be divided into 1,000 quatierces, which he called "microscopic points of time". He also suggested a week of 10 days and dividing the year into 10 "solar months".[7]
Decimal time was officially introduced during the French Revolution. Jean-Charles de Borda made a proposal for decimal time on 5 November 1792. The National Convention issued a decree on 5 October 1793, to which the underlined words were added on 24 November 1793 (4 Frimaire of the Year II):
Thus, midnight was called dix heures ("ten hours"), noon was called cinq heures ("five hours"), etc.
The colon (:) was not yet in use as a unit separator for standard times, and is used here for non-decimal bases. The French decimal separator is the comma (,), while the period (.), or "point", is used in English. Units were either written out in full or abbreviated; thus, five hours eighty-three minutes decimal might be written as 5 h. 83 m. Even today, "h" is commonly used in France to separate hours and minutes of 24-hour time, instead of a colon, such as 14h00. Midnight was represented in civil records as "ten hours". Times between midnight and the first decimal hour were written without hours, so 1:00 am, or 0.41 decimal hours, was written as "four décimes" or "forty-one minutes". 2:00 am (0.8333 decimal hours) was written as "eight décimes", "eighty-three minutes", or even "eighty-three minutes thirty-three seconds".
As with duodecimal time, decimal time was represented according to true solar time, rather than mean time, with noon being marked when the sun reached its highest point locally, which varied at different locations, and throughout the year.
In "Methods to find the Leap Years of the French Calendar", Jean-Baptiste-Joseph Delambre used three different representations for the same decimal time:
Sometimes in official records, decimal hours were divided into tenths, or décimes, instead of minutes. One décime is equal to 10 decimal minutes, which is nearly equal to a quarter-hour (15 minutes) in standard time. Thus, "five hours two décimes" equals 5.2 decimal hours, roughly 12:30 p.m. in standard time.[8][9] One hundredth of a decimal second was a decimal tierce.[10]
Although clocks and watches were produced with faces showing both standard time with numbers 1–24 and decimal time with numbers 1–10, decimal time never caught on; it was not used for public records until the beginning of the Republican year III, 22 September 1794, and mandatory use was suspended 7 April 1795 (18 Germinal of the Year III). In spite of this, decimal time was used in many cities, including Marseille and Toulouse, where a decimal clock with just an hour hand was on the front of the Capitole for five years.[11] In some places, decimal time was used to record certificates of births, marriages, and deaths until the end of Year VIII (September 1800). On the Palace of the Tuileries in Paris, two of the four clock faces displayed decimal time until at least 1801.[12] The mathematician and astronomer Pierre-Simon Laplace had a decimal watch made for him, and used decimal time in his work, in the form of fractional days.
Decimal time was part of a larger attempt at decimalisation in revolutionary France (which also included decimalisation of currency and metrication) and was introduced as part of the French Republican Calendar, which, in addition to decimally dividing the day, divided the month into three décades of 10 days each; this calendar was abolished at the end of 1805. The start of each year was determined according to the day of the autumnal equinox, in relation to true or apparent solar time at the Paris Observatory.
In designing the new metric system, the intent was to replace all the various units of different bases with a small number of standard decimal units. This was to include units for length, weight, area, liquid capacity, volume, and money. Initially the traditional second of time equal to 1/86400 day was proposed as the base of the metric system, but this was changed in 1791 to base the meter on a decimal division of a measurement of the Earth, instead. Early drafts of the metric system published in 1793 included the new decimal divisions of the day included with the Republican calendar, and some of the same individuals were involved with both projects.[13]
On March 28, 1794, Joseph-Louis Lagrange proposed to the Commission for Republican Weights and Measures dividing the day into 10 decidays and 100 centidays, which would be expressed together as two digits, counting periods of 14 minutes and 24 seconds since midnight, nearly a quarter hour. This would be displayed by one hand on watches. Another hand would display 100 divisions of a centiday, which is 1/10,000 day, or 8.64 seconds. A third hand on a smaller dial would further divide these into 10, which would be 1/100,000 day, or 864 milliseconds, slightly less than a whole second. He suggested the deciday and centiday be used together to represent the time of day, such as "4 and 5", "4/5", or simply "45".
This was opposed by Jean-Marie Viallon, of the Sainte-Geneviève Library in Paris, who thought that decimal hours, equal to 2.4 old hours, were too long, and that 100 centidays were too many. He proposed dividing each half of the day into 10 new hours, for a total of 20 per day, thinking that simply changing the numbers on watch dials from 12 to 10 would be sufficient for rural people. For finer divisions, there would be 50 decimal minutes per decimal hour, and 100 decimal seconds per decimal minute. His new hours, minutes, and seconds would thus be more similar to the old units.[14]
C.A. Prieur (of the Côte-d'Or) read at the National Convention on Ventôse 11, year III (March 1, 1795):
Thus, the law of 18 Germinal An III (April 7, 1795) establishing the metric system, rather than including metric units for time, repealed the mandatory use of decimal time, although its use continued for a number of years in some places. As predicted, it was quickly found to be useful by astronomers, who still use it in the form of fractional days.
Carl Friedrich Gauss recommended the ephemeris second as a metric base unit for time interval in 1832, which eventually became the atomic second in the International System. However, for longer periods of time interval, the old non-decimal units were approved for use.
At the International Meridian Conference of 1884, the following resolution was proposed by the French delegation and passed nem con (with 3 abstentions):
In the 1890s, Joseph Charles François de Rey-Pailhade, president of the Toulouse Geographical Society, proposed dividing the day into 100 parts, called cés, equal to 14.4 standard minutes, and each divided into 10 decicés, 100 centicés, etc. The Toulouse Chamber of Commerce adopted a resolution supporting his proposal in April 1897. Although widely published, the proposal received little backing.[15]
The French made another attempt at the decimalisation of time in 1897, when the Commission de décimalisation du temps was created by the Bureau des Longitudes, with the mathematician Henri Poincaré as secretary. The commission adopted a compromise, originally proposed by Henri de Sarrauton of the Oran Geographical Society, of retaining the 24-hour day, but dividing each hour into 100 decimal minutes, and each minute into 100 seconds. The plan did not gain acceptance and was abandoned in 1900.
On 23 October 1998, the Swiss watch company Swatch introduced a decimal time called Internet Time for its line of digital watches, which divided the day into 1,000 ".beats" (each 86.4 seconds in standard time) counted from 000–999, with @000 being midnight and @500 being noon standard time in Switzerland, which is Central European Time (one hour ahead of Universal Time).
Although Swatch did not specify units smaller than one .beat, third-party implementations extended the standard by adding "centibeats" or "sub-beats" for extended precision: @248.00. Each "centibeat" was a hundredth of a .beat and was therefore equal to one French decimal second (0.864 seconds).[16][17]
When using .beats and centibeats, Swatch Internet Time divided the day into 1,000 French decimal minutes and each decimal minute into 100 decimal seconds. So 9 pm was 21:00:00 in standard time and @875.00 in extended Swatch Internet Time.
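The .beat arithmetic above can be sketched in a few lines (a hypothetical helper, not Swatch's own code; it assumes the input is already Central European Time):

```python
def to_beats(hours, minutes=0, seconds=0):
    """Convert a Central European Time of day to Swatch .beats.

    One .beat is 1/1000 of a day, i.e. 86.4 standard seconds.
    """
    seconds_since_midnight = hours * 3600 + minutes * 60 + seconds
    # Multiply before dividing so the common cases come out exact.
    return seconds_since_midnight * 1000 / 86_400

print(to_beats(12))  # 500.0 -> noon CET is @500
print(to_beats(21))  # 875.0 -> 9 pm CET is @875
```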
Swatch no longer markets digital watches with Internet Time.
There are exactly 86,400 standard seconds (see SI for the current definition of the standard second) in a standard day, but in the French decimal time system there were 100,000 decimal seconds in the day; thus, the decimal second was 13.6% shorter than its standard counterpart.
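The 13.6% figure follows directly from the two day lengths; a quick illustrative check:

```python
STANDARD_SECONDS_PER_DAY = 86_400
DECIMAL_SECONDS_PER_DAY = 100_000

# Length of one French decimal second, measured in standard seconds.
decimal_second = STANDARD_SECONDS_PER_DAY / DECIMAL_SECONDS_PER_DAY
print(decimal_second)                        # 0.864
print(round((1 - decimal_second) * 100, 1))  # 13.6 (percent shorter)
```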
Another common type of decimal time is decimal hours. In 1896, Henri de Sarrauton of the Oran Geographical Society proposed dividing the 24 hours of the day each into 100 decimal minutes, and each minute into 100 decimal seconds.[18]Although endorsed by the Bureau des Longitudes, this proposal failed, but using decimal fractions of an hour to represent the time of day instead of minutes has become common.
Decimal hours are frequently used in accounting for payrolls and hourly billing. Time clocks typically record the time of day in tenths or hundredths of an hour. For instance, 08:30 would be recorded as 08.50. This is intended to make accounting easier by eliminating the need to convert between minutes and hours.
For aviation purposes, where it is common to add times in an already complicated environment, time tracking is simplified by recording decimal fractions of hours. For instance, instead of adding 1:36 to 2:36, getting 3:72 and converting it to 4:12, one would add 1.6 to 2.6 and get 4.2 hours.[19]
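The aviation example works out as follows (an illustrative helper; the function name is ours):

```python
def to_decimal_hours(hours, minutes):
    """Express an elapsed time of hours:minutes as decimal hours."""
    return hours + minutes / 60

# Adding 1:36 and 2:36 directly in decimal hours:
total = to_decimal_hours(1, 36) + to_decimal_hours(2, 36)
print(round(total, 1))  # 4.2 hours, i.e. 4:12
```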
The time of day is sometimes represented as a decimal fraction of a day in science and computers. Standard 24-hour time is converted into a fractional day by dividing the number of hours elapsed since midnight by 24 to make a decimal fraction. Thus, midnight is 0.0 day, noon is 0.5 d, etc., which can be added to any type of date, including the following, all of which refer to the same moment in time:
As many decimal places may be used as required for precision, so 0.5 d = 0.500000 d. Fractional days are often calculated in UTC or TT, although Julian Dates use pre-1925 astronomical date/time (each date began at noon = ".0") and Microsoft Excel uses the local time zone of the computer. Using fractional days reduces the number of units in time calculations from four (days, hours, minutes, seconds) to just one (days).
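The conversion to a fractional day is a single division; a minimal sketch (the function name is ours):

```python
def fractional_day(hours, minutes=0, seconds=0):
    """Convert a 24-hour time of day into a decimal fraction of a day."""
    return (hours * 3600 + minutes * 60 + seconds) / 86_400

print(fractional_day(0))    # 0.0   midnight
print(fractional_day(12))   # 0.5   noon
print(fractional_day(18))   # 0.75  6 p.m.
```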
Fractional days are often used by astronomers to record observations, and were expressed in relation to Paris Mean Time by the 18th century French mathematician and astronomer Pierre-Simon Laplace, as in these examples:[20]
... and the perihelion distance, equal to 1,053095; which gave, for the instant of perihelion passage, sept. 29j,10239, mean time counted from midnight at Paris.
The preceding values of a, b, h, l, relating to three observations, gave the perihelion distance equal to 1,053650; and for the instant of passage, sept. 29j,04587; which differs little from the results founded on five observations.
Fractional days have been used by astronomers ever since. For instance, the 19th century British astronomer John Herschel gave these examples:[21]
Between Greenwich noon of the 22d and 23d of March, 1829, the 1828th equinoctial year terminates, and the 1829th commences. This happens at 0d·286003, or at 6h 51m 50s·66 Greenwich Mean Time ... For example, at 12h 0m 0s Greenwich Mean Time, or 0d·500000...
Fractional days are commonly used to express epochs of orbital elements. The decimal fraction is usually added to the calendar date or Julian day for natural objects, or to the ordinal date for artificial satellites in two-line elements.
The second is the International System of Units (SI) unit of time duration. It is also the standard single-unit time representation in many programming languages, most notably C, and part of the UNIX/POSIX standards used by Linux, Mac OS X, etc.; to convert fractional days to fractional seconds, multiply the number by 86,400. Fractional seconds are represented as milliseconds (ms), microseconds (μs) or nanoseconds (ns). Absolute times are usually represented relative to 1 January 1970, at midnight UT, as in Unix time; other systems may use a different zero point.
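As a sketch of the fractional-day-to-seconds conversion against the 1970 epoch (standard library only; illustrative):

```python
from datetime import datetime, timezone

SECONDS_PER_DAY = 86_400
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Noon UT on 2 January 1970 lies 1.5 fractional days after the epoch.
moment = datetime(1970, 1, 2, 12, 0, tzinfo=timezone.utc)
elapsed_days = (moment - epoch).total_seconds() / SECONDS_PER_DAY
print(elapsed_days)                    # 1.5
print(elapsed_days * SECONDS_PER_DAY)  # 129600.0 seconds since the epoch
```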
In principle, time spans greater than one second may be given in units such as kiloseconds (ks), megaseconds (Ms), gigaseconds (Gs), and so on. Occasionally, these units can be found in technical literature, but traditional units like minutes, hours, days and years are much more common, and are accepted for use with SI.
It is possible to specify the time of day as the number of kiloseconds of elapsed time since midnight. Thus, instead of saying 3:45 p.m. one could say (time of day) 56.7 ks. There are exactly 86.4 ks in one day (each kilosecond being equivalent to 16 minutes and 40 seconds worth of conventional time). However, this nomenclature is rarely used in practice.
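The kilosecond figure quoted above can be reproduced with a one-line helper (names are ours):

```python
def kiloseconds(hours, minutes=0):
    """Time of day expressed as kiloseconds elapsed since midnight."""
    return (hours * 3600 + minutes * 60) / 1000

print(kiloseconds(15, 45))  # 56.7 -> 3:45 p.m. is 56.7 ks
print(kiloseconds(24))      # 86.4 -> one full day is 86.4 ks
```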
Scientists often record time as decimal. For example, decimal days divide the day into 10 equal parts, and decimal years divide the year into 10 equal parts. Decimals are easier to plot than (a) minutes and seconds, which use the sexagesimal numbering system, and (b) hours, months and days, which have irregular lengths. In astronomy, the so-called Julian day uses decimal days centered on Greenwich noon.
Since there are 60 seconds in a minute, a tenth part represents 60/10 = 6 seconds.
Since there are 60 minutes in an hour, a tenth part represents 60/10 = 6 minutes.
Since there are 24 hours in a day, a tenth part represents 24/10 = 2.4 hours (2 hours and 24 minutes).
Since there are about 365 days in a year, there are about 365/10 = 36.5 days in a tenth of a year. Hence the year 2020.5 represents the day 2 July 2020.[22]More exactly, a "Julian year" is exactly 365.25 days long, so a tenth of the year is 36.525 days (36 days, 12 hours, 36 minutes).
These values, based on the Julian year, are most likely to be those used in astronomy and related sciences. A Gregorian year, which takes into account the 100 vs. 400 leap year exception rule of the Gregorian calendar, is 365.2425 days (the average length of a year over a 400-year cycle), resulting in 0.1 years being a period of 36.52425 days (3,155,695.2 seconds; 36 days, 12 hours, 34 minutes, 55.2 seconds).
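A decimal year can be mapped back to a calendar date by measuring its fractional part in uniform Julian-year spans. This is our own rough sketch: day-of-year reckoning on the real calendar can differ from it by about a day, as the 2020.5 example shows.

```python
from datetime import datetime, timedelta

def decimal_year_to_date(y):
    """Interpret a decimal year as a date, treating each 0.1 year as a
    uniform 36.525-day (Julian-year) span counted from 1 January."""
    year = int(y)
    return datetime(year, 1, 1) + timedelta(days=(y - year) * 365.25)

print(decimal_year_to_date(2020.5).date())  # 2020-07-01, within a day of 2 July
```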
Numerous individuals have proposed variations of decimal time, dividing the day into different numbers of units and subunits with different names. Most are based upon fractional days, so that one decimal time format may be easily converted into another, such that all the following are equivalent:
Some decimal time proposals are based upon alternate units of metric time. The difference between metric time and decimal time is that metric time defines units for measuring time interval, as measured with a stopwatch, and decimal time defines the time of day, as measured by a clock. Just as standard time uses the metric time unit of the second as its basis, proposed decimal time scales may use alternative metric units.
In the fictional Star Trek universe, each stardate increment represents one milliyear, counted from 2323, so that 78 years have elapsed by 2401; the decimal represents a fractional day. Thus, stardates are a composition of two types of decimal time.[citation needed] For comparison, 78 years before 2023 would be 1945.
|
https://en.wikipedia.org/wiki/Decimal_time
|
A decimal representation of a non-negative real number r is its expression as a sequence of symbols consisting of decimal digits traditionally written with a single separator: $r = b_k b_{k-1} \cdots b_0 . a_1 a_2 \cdots$ Here . is the decimal separator, k is a nonnegative integer, and $b_0, \cdots, b_k, a_1, a_2, \cdots$ are digits, which are symbols representing integers in the range 0, ..., 9.
Commonly, $b_k \neq 0$ if $k \geq 1$. The sequence of the $a_i$ (the digits after the dot) is generally infinite. If it is finite, the lacking digits are assumed to be 0. If all $a_i$ are 0, the separator is also omitted, resulting in a finite sequence of digits, which represents a natural number.
The decimal representation represents the infinite sum: $r = \sum_{i=0}^{k} b_i 10^i + \sum_{i=1}^{\infty} \frac{a_i}{10^i}.$
Every nonnegative real number has at least one such representation; it has two such representations (with $b_k \neq 0$ if $k > 0$) if and only if one has a trailing infinite sequence of 0, and the other has a trailing infinite sequence of 9. For having a one-to-one correspondence between nonnegative real numbers and decimal representations, decimal representations with a trailing infinite sequence of 9 are sometimes excluded.[1]
The natural number $\sum_{i=0}^{k} b_i 10^i$ is called the integer part of r, and is denoted by $a_0$ in the remainder of this article. The sequence of the $a_i$ represents the number $0.a_1 a_2 \ldots = \sum_{i=1}^{\infty} \frac{a_i}{10^i}$, which belongs to the interval $[0, 1)$, and is called the fractional part of r (except when all $a_i$ are equal to 9).
Any real number can be approximated to any desired degree of accuracy byrational numberswith finite decimal representations.
Assume $x \geq 0$. Then for every integer $n \geq 1$ there is a finite decimal $r_n = a_0.a_1 a_2 \cdots a_n$ such that:
$r_n \leq x < r_n + \frac{1}{10^n}.$
Proof:
Let $r_n = \frac{p}{10^n}$, where $p = \lfloor 10^n x \rfloor$.
Then $p \leq 10^n x < p + 1$, and the result follows from dividing all sides by $10^n$.
(The fact that $r_n$ has a finite decimal representation is easily established.)
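The truncation used in the proof, $r_n = \lfloor 10^n x \rfloor / 10^n$, can be checked with exact rational arithmetic (an illustrative sketch using the standard library's Fraction):

```python
from fractions import Fraction
from math import floor

def finite_decimal_approx(x, n):
    """The r_n of the proof: floor(10^n * x) / 10^n."""
    p = floor(Fraction(x) * 10**n)
    return Fraction(p, 10**n)

x = Fraction(1, 3)
r3 = finite_decimal_approx(x, 3)
print(r3)                                # 333/1000, i.e. the finite decimal 0.333
print(r3 <= x < r3 + Fraction(1, 1000))  # True: the bracketing inequality holds
```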
Some real numbers $x$ have two infinite decimal representations. For example, the number 1 may be equally represented by 1.000... as by 0.999... (where the infinite sequences of trailing 0's or 9's, respectively, are represented by "..."). Conventionally, the decimal representation without trailing 9's is preferred. Moreover, in the standard decimal representation of $x$, an infinite sequence of trailing 0's appearing after the decimal point is omitted, along with the decimal point itself if $x$ is an integer.
Certain procedures for constructing the decimal expansion of $x$ will avoid the problem of trailing 9's. For instance, the following algorithmic procedure will give the standard decimal representation: Given $x \geq 0$, we first define $a_0$ (the integer part of $x$) to be the largest integer such that $a_0 \leq x$ (i.e., $a_0 = \lfloor x \rfloor$). If $x = a_0$ the procedure terminates. Otherwise, for $(a_i)_{i=0}^{k-1}$ already found, we define $a_k$ inductively to be the largest integer such that:
$a_0 + \frac{a_1}{10} + \frac{a_2}{10^2} + \cdots + \frac{a_k}{10^k} \leq x.$ (*)
The procedure terminates whenever $a_k$ is found such that equality holds in (*); otherwise, it continues indefinitely to give an infinite sequence of decimal digits. It can be shown that $x = \sup_k \left\{ \sum_{i=0}^{k} \frac{a_i}{10^i} \right\}$[2] (conventionally written as $x = a_0.a_1 a_2 a_3 \cdots$), where $a_1, a_2, a_3, \ldots \in \{0, 1, 2, \ldots, 9\}$, and the nonnegative integer $a_0$ is represented in decimal notation. This construction is extended to $x < 0$ by applying the above procedure to $-x > 0$ and denoting the resultant decimal expansion by $-a_0.a_1 a_2 a_3 \cdots$.
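The procedure translates directly into code; with exact rationals there is no rounding, so trailing 9's never appear. A sketch (names are ours):

```python
from fractions import Fraction

def standard_digits(x, max_digits=12):
    """Digits a_0; a_1, a_2, ... of the standard decimal representation
    of a nonnegative rational, chosen greedily as in the procedure above."""
    x = Fraction(x)
    a0 = x.numerator // x.denominator          # integer part, floor(x)
    digits = []
    rem = x - a0
    while rem and len(digits) < max_digits:
        rem *= 10
        d = rem.numerator // rem.denominator   # largest digit keeping the sum <= x
        digits.append(d)
        rem -= d
    return a0, digits

print(standard_digits(Fraction(1, 4)))  # (0, [2, 5]): 0.25, not 0.2499...
```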
The decimal expansion of a non-negative real number x will end in zeros (or in nines) if, and only if, x is a rational number whose denominator is of the form $2^n 5^m$, where m and n are non-negative integers.
Proof:
If the decimal expansion of x ends in zeros, or $x = \sum_{i=0}^{n} \frac{a_i}{10^i} = \sum_{i=0}^{n} 10^{n-i} a_i / 10^n$ for some n, then the denominator of x is of the form $10^n = 2^n 5^n$.
Conversely, if the denominator of x is of the form $2^n 5^m$, then $x = \frac{p}{2^n 5^m} = \frac{2^m 5^n p}{2^{n+m} 5^{n+m}} = \frac{2^m 5^n p}{10^{n+m}}$ for some p.
While x is of the form $\frac{p}{10^k}$, $p = \sum_{i=0}^{n} 10^i a_i$ for some n.
By $x = \sum_{i=0}^{n} 10^{n-i} a_i / 10^n = \sum_{i=0}^{n} \frac{a_i}{10^i}$, x will end in zeros.
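The criterion is easy to test mechanically (an illustrative helper of our own):

```python
from math import gcd

def has_finite_expansion(numerator, denominator):
    """True iff numerator/denominator has a terminating decimal expansion,
    i.e. the reduced denominator is of the form 2^n * 5^m."""
    denominator //= gcd(numerator, denominator)  # reduce the fraction first
    for p in (2, 5):
        while denominator % p == 0:
            denominator //= p
    return denominator == 1

print(has_finite_expansion(36, 25))  # True:  36/25 = 1.44
print(has_finite_expansion(1, 3))    # False: 1/3 = 0.333...
```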
Some real numbers have decimal expansions that eventually get into loops, endlessly repeating a sequence of one or more digits:
Every time this happens the number is still a rational number (i.e. can alternatively be represented as a ratio of an integer and a positive integer).
Also the converse is true: The decimal expansion of a rational number is either finite, or endlessly repeating.
Finite decimal representations can also be seen as a special case of infinite repeating decimal representations. For example, 36⁄25 = 1.44 = 1.4400000...; the endlessly repeated sequence is the one-digit sequence "0".
Other real numbers have decimal expansions that never repeat. These are precisely the irrational numbers, numbers that cannot be represented as a ratio of integers. Some well-known examples are:
Every decimal representation of a rational number can be converted to a fraction by converting it into a sum of the integer, non-repeating, and repeating parts and then converting that sum to a single fraction with a common denominator.
For example, to convert $\pm 8.123\overline{4567}$ to a fraction one notes the lemma:
$0.000\overline{4567} = 4567 \times 0.000\overline{0001} = 4567 \times 0.\overline{0001} \times \frac{1}{10^3} = 4567 \times \frac{1}{9999} \times \frac{1}{10^3} = \frac{4567}{9999} \times \frac{1}{10^3} = \frac{4567}{(10^4 - 1) \times 10^3}$
The exponents are the number of non-repeating digits after the decimal point (3) and the number of repeating digits (4).
Thus one converts as follows:
$\pm 8.123\overline{4567} = \pm \left( 8 + \frac{123}{10^3} + \frac{4567}{(10^4 - 1) \times 10^3} \right)$ (from above)
$= \pm \frac{8 \times (10^4 - 1) \times 10^3 + 123 \times (10^4 - 1) + 4567}{(10^4 - 1) \times 10^3}$ (common denominator)
$= \pm \frac{81226444}{9999000}$ (multiplying, and summing the numerator)
$= \pm \frac{20306611}{2499750}$ (reducing)
If there are no repeating digits one assumes that there is a forever repeating 0, e.g. $1.9 = 1.9\overline{0}$, although since that makes the repeating term zero the sum simplifies to two terms and a simpler conversion.
For example:
$\pm 8.1234 = \pm \left( 8 + \frac{1234}{10^4} \right) = \pm \frac{8 \times 10^4 + 1234}{10^4}$ (common denominator)
$= \pm \frac{81234}{10000}$ (multiplying, and summing the numerator)
$= \pm \frac{40617}{5000}$ (reducing)
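The whole scheme can be packaged as a small function on digit strings (our own sketch; Python's Fraction performs the reducing step automatically):

```python
from fractions import Fraction

def repeating_to_fraction(integer, nonrepeating, repeating):
    """Fraction for integer.nonrepeating(repeating), e.g. 8.123(4567).

    Follows the scheme above: 10^len(nonrepeating) shifts the fixed part,
    and 10^len(repeating) - 1 generates the repetend."""
    n, r = len(nonrepeating), len(repeating)
    result = Fraction(int(integer))
    if nonrepeating:
        result += Fraction(int(nonrepeating), 10**n)
    if repeating:
        result += Fraction(int(repeating), (10**r - 1) * 10**n)
    return result

print(repeating_to_fraction("8", "123", "4567"))  # 20306611/2499750
```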
|
https://en.wikipedia.org/wiki/Decimal_representation
|
A paragraph (from Ancient Greek παράγραφος (parágraphos) 'to write beside') is a self-contained unit of discourse in writing dealing with a particular point or idea. Though not required by the orthographic conventions of any language with a writing system, paragraphs are a conventional means of organizing extended segments of prose.
The oldest classical British and Latin writings had little or no space between words and could be written in boustrophedon (alternating directions). Over time, text direction (left to right) became standardized. Word dividers and terminal punctuation became common. The first way to divide sentences into groups was the original paragraphos, similar to an underscore at the beginning of the new group.[1]The Greek parágraphos evolved into the pilcrow (¶), which in English manuscripts in the Middle Ages can be seen inserted inline between sentences.
Ancient manuscripts also divided sentences into paragraphs with line breaks (newline) followed by an initial at the beginning of the next paragraph. An initial is an oversized capital letter, sometimes outdented beyond the margin of the text. This style can be seen, for example, in the original Old English manuscript of Beowulf. Outdenting is still used in English typography, though not commonly.[2]Modern English typography usually indicates a new paragraph by indenting the first line. This style can be seen in the (handwritten) United States Constitution from 1787. For additional ornamentation, a hedera leaf or other symbol can be added to the inter-paragraph white space, or put in the indentation space.
A second common modern English style is to use no indenting, but add vertical white space to create "block paragraphs". On a typewriter, a double carriage return produces a blank line for this purpose; professional typesetters (or word processing software) may put in an arbitrary vertical space by adjusting leading. This style is very common in electronic formats, such as on the World Wide Web and email. Wikipedia itself employs this format.
Professionally printed material in English typically does not indent the first paragraph, but indents those that follow. For example, Robert Bringhurst states that we should "Set opening paragraphs flush left."[2]Bringhurst explains as follows:
The function of a paragraph is to mark a pause, setting the paragraph apart from what precedes it. If a paragraph is preceded by a title or subhead, the indent is superfluous and can therefore be omitted.[2]
The Elements of Typographic Style states that "at least one en [space]" should be used to indent paragraphs after the first,[2]noting that that is the "practical minimum".[3]An em space is the most commonly used paragraph indent.[3]Miles Tinker, in his book Legibility of Print, concluded that indenting the first line of paragraphs increases readability by 7%, on average.[4]
When referencing a paragraph, the typographic symbol U+00A7 § SECTION SIGN (§) may be used: "See § Background".
In modern usage, paragraph initiation is typically indicated by one or more of a preceding blank line, indentation, an "initial" ("drop cap") or other indication. Historically, the pilcrow symbol (¶) was used in Latin and western European languages. Other languages have their own marks with similar function.
Widows and orphans occur when the first line of a paragraph is the last in a column or page, or when the last line of a paragraph is the first line of a new column or page.
In word processing and desktop publishing, a hard return or paragraph break indicates a new paragraph, to be distinguished from the soft return at the end of a line internal to a paragraph. This distinction allows word wrap to automatically re-flow text as it is edited, without losing paragraph breaks. The software may apply vertical white space or indenting at paragraph breaks, depending on the selected style.
How such documents are actually stored depends on the file format. For example, HTML uses the <p> tag as a paragraph container. In plaintext files, there are two common formats. Pre-formatted text will have a newline at the end of every physical line, and two newlines at the end of a paragraph, creating a blank line. An alternative is to only put newlines at the end of each paragraph, and leave word wrapping up to the application that displays or processes the text.
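The second plaintext convention (blank lines delimit paragraphs, single newlines are soft wraps) can be parsed with a couple of regular expressions; an illustrative sketch:

```python
import re

text = "First paragraph, possibly\nhard-wrapped by the editor.\n\nSecond paragraph."

# Split on blank lines, then re-flow each paragraph's internal
# newlines into spaces, as a displaying application would.
paragraphs = [re.sub(r"\s*\n\s*", " ", chunk).strip()
              for chunk in re.split(r"\n\s*\n", text)]
print(paragraphs)
# ['First paragraph, possibly hard-wrapped by the editor.', 'Second paragraph.']
```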
A line break that is inserted manually, and preserved when re-flowing, may still be distinct from a paragraph break, although this is typically not done in prose. HTML's <br /> tag produces a line break without ending the paragraph; the W3C recommends using it only to separate lines of verse (where each "paragraph" is a stanza), or in a street address.[5]
Paragraphs are commonly numbered using the decimal system, where (in books) the integral part of the decimal represents the number of the chapter and the fractional parts are arranged in each chapter in order of magnitude. Thus in Whittaker and Watson's 1921 A Course of Modern Analysis, chapter 9 is devoted to Fourier Series; within that chapter §9.6 introduces Riemann's theory, the following section §9.61 treats an associated function, §9.62 some properties of that function, §9.621 a related lemma, while §9.63 introduces Riemann's main theorem, and so on. Whittaker and Watson attribute this system of numbering to Giuseppe Peano on their "Contents" page, although this attribution does not seem to be widely credited elsewhere.[6]Gradshteyn and Ryzhik is another book using this scheme, since its third edition in 1951.
Many published books use a device to separate certain paragraphs further when there is a change of scene or time. This extra space, especially when co-occurring at a page or section break, may contain a special symbol known as a dinkus, a fleuron, or a stylistic dingbat.
The crafting of clear, coherent paragraphs is the subject of considerable stylistic debate. The form varies among different types of writing. For example, newspapers, scientific journals, and fictional essays have somewhat different conventions for the placement of paragraph breaks.
A common English usage misconception is that a paragraph has three to five sentences; single-word paragraphs can be seen in some professional writing, and journalists often use single-sentence paragraphs.[7]
English students are sometimes taught that a paragraph should have a topic sentence or "main idea", preferably first, and multiple "supporting" or "detail" sentences that explain or supply evidence. One technique of this type, intended for essay writing, is known as the Schaffer paragraph. Topic sentences are largely a phenomenon of school-based writing, and the convention does not necessarily obtain in other contexts.[8]This advice is also culturally specific; for example, it differs from stock advice for the construction of paragraphs in Japanese (translated as danraku 段落).[9]
|
https://en.wikipedia.org/wiki/Decimal_section_numbering
|
A decimal separator is a symbol that separates the integer part from the fractional part of a number written in decimal form. Different countries officially designate different symbols for use as the separator. The choice of symbol can also affect the choice of symbol for the thousands separator used in digit grouping.
Any such symbol can be called a decimal mark, decimal marker, or decimal sign. Symbol-specific names are also used; decimal point and decimal comma refer to a dot (either baseline or middle) and comma respectively, when it is used as a decimal separator; these are the usual terms used in English,[1][2][3]with the aforementioned generic terms reserved for abstract usage.[4][5]
In many contexts, when a number is spoken, the function of the separator is assumed by the spoken name of the symbol: comma or point in most cases.[6][2][7]In some specialized contexts, the word decimal is instead used for this purpose (such as in International Civil Aviation Organization-regulated air traffic control communications). In mathematics, the decimal separator is a type of radix point, a term that also applies to number systems with bases other than ten.
In the Middle Ages, before printing, a bar ( ¯ ) over the units digit was used to separate the integral part of a number from its fractional part, as in 9995 with the bar over the second 9 (meaning 99.95 in decimal point format). A similar notation remains in common use as an underbar to superscript digits, especially for monetary values without a decimal separator, as in 9995 with the final two digits superscripted. Later, a "separatrix" (i.e., a short, roughly vertical ink stroke) between the units and tenths position became the norm among Arab mathematicians (e.g. 99ˌ95), while an L-shaped or vertical bar (|) served as the separatrix in England.[8]When this character was typeset, it was convenient to use the existing comma (99,95) or full stop (99.95) instead.
Positional decimal fractions appear for the first time in a book by the Arab mathematician Abu'l-Hasan al-Uqlidisi written in the 10th century.[9]The practice is ultimately derived from the decimal Hindu–Arabic numeral system used in Indian mathematics,[10]and popularized by the Persian mathematician Al-Khwarizmi,[11]when Latin translation of his work on the Indian numerals introduced the decimal positional number system to the Western world. His Compendious Book on Calculation by Completion and Balancing presented the first systematic solution of linear and quadratic equations in Arabic.
Gerbert of Aurillac marked triples of columns with an arc (called a "Pythagorean arc") when using his Hindu–Arabic numeral-based abacus in the 10th century. Fibonacci followed this convention when writing numbers, such as in his influential work Liber Abaci in the 13th century.[12]
The earliest known record of using the decimal point is in the astronomical tables compiled by the Italian merchant and mathematician Giovanni Bianchini in the 1440s.[13][contradictory]
Tables of logarithms prepared by John Napier in 1614 and 1619 used the period (full stop) as the decimal separator, which was then adopted by Henry Briggs in his influential 17th century work.
In France, the full stop was already in use in printing to make Roman numerals more readable, so the comma was chosen.[14]
Many other countries, such as Italy, also chose to use the comma to mark the decimal units position.[14]It has been made standard by the ISO for international blueprints.[15]However, English-speaking countries took the comma to separate sequences of three digits. In some countries, a raised dot or dash (upper comma) may be used for grouping or as a decimal separator; this is particularly common in handwriting.
In theUnited States, the full stop or period (.) is used as the standard decimal separator.
In the nations of the British Empire (and, later, the Commonwealth of Nations), the full stop could be used in typewritten material and its use was not banned, although the interpunct (a.k.a. decimal point, point or mid dot) was preferred as a decimal separator in printing technologies that could accommodate it, e.g. 99·95.[17] However, as the mid dot was already in common use in the mathematics world to indicate multiplication, the SI rejected its use as the decimal separator.
During the beginning of British metrication in the late 1960s and with impending currency decimalisation, there was some debate in the United Kingdom as to whether the decimal comma or decimal point should be preferred: the British Standards Institution and some sectors of industry advocated the comma, and the Decimal Currency Board advocated the point. In the event, the point was chosen by the Ministry of Technology in 1968.[18]
When South Africa adopted the metric system, it adopted the comma as its decimal separator,[19] although a number of house styles, including some English-language newspapers such as The Sunday Times, continue to use the full stop.
Previously, signs along California roads expressed distances in decimal numbers with the decimal part in superscript, as in 3⁷, meaning 3.7.[20] Though California has since transitioned to mixed numbers with common fractions, the older style remains on postmile markers and bridge inventory markers.
The three most spoken international auxiliary languages, Ido, Esperanto, and Interlingua, all use the comma as the decimal separator.
Interlingua has used the comma as its decimal separator since the publication of the Interlingua Grammar in 1951.[21]
Esperanto also uses the comma as its official decimal separator, whilst thousands are usually separated by non-breaking spaces (e.g. 12 345 678,9). It is possible to separate thousands by a full stop (e.g. 12.345.678,9), though this is not as common.[22]
Ido's Kompleta Gramatiko Detaloza di la Linguo Internaciona Ido (Complete Detailed Grammar of the International Language Ido) officially states that commas are used for the decimal separator whilst full stops are used to separate thousands, millions, etc. So the number 12,345,678.90123 (in American notation), for instance, would be written 12.345.678,90123 in Ido.
The 1931 grammar of Volapük uses the comma as its decimal separator but, somewhat unusually, the middle dot as its thousands separator (12·345·678,90123).[23]
In 1958, disputes between European and American delegates over the correct representation of the decimal separator nearly stalled the development of the ALGOL computer programming language.[24] ALGOL ended up allowing different decimal separators, but most computer languages and standard data formats (e.g., C, Java, Fortran, Cascading Style Sheets (CSS)) specify a dot. C++ and a few other languages permit an apostrophe (') as a thousands separator, while many others, such as Python and Julia, allow only an underscore (_). The underscore is simply ignored by the parser, so it can also be placed to match other grouping styles: 1_00_00_000 follows the Indian style of 1,00,00,000, which is 10,000,000 in US notation.
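The underscore behaviour described above can be seen directly in Python (a minimal sketch; any Python 3.6+ interpreter accepts it):

```python
# Underscore digit separators in numeric literals (Python 3.6+).
# The underscore is ignored by the parser, so any grouping is accepted.
western = 10_000_000      # groups of three (US style)
indian = 1_00_00_000      # Indian-style grouping of the same value
assert western == indian == 10000000
```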
In mathematics and computing, a radix point or radix character is a symbol used in the display of numbers to separate the integer part of the value from its fractional part. In English and many other languages (including many that are written right-to-left), the integer part is at the left of the radix point, and the fractional part at the right of it.[25]
A radix point is most often used in decimal (base 10) notation, when it is more commonly called the decimal point (the prefix deci- implying base 10). In English-speaking countries, the decimal point is usually a small dot (.) placed either on the baseline or halfway between the baseline and the top of the digits (·).[26][a] In many other countries, the radix point is a comma (,) placed on the baseline.[26][a]
These conventions are generally used both in machine displays (printing, computer monitors) and in handwriting. It is important to know which notation is being used when working in different software programs. The respective ISO standard defines both the comma and the small dot as decimal markers, but does not explicitly define universal radix marks for bases other than 10.
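Because the same digit string means different values under the comma and dot conventions, software that accepts numeric input must know which convention applies. A minimal sketch in Python (`parse_number` is a hypothetical helper for illustration; production code should use locale-aware facilities):

```python
def parse_number(text: str, decimal_sep: str = ".", group_sep: str = ",") -> float:
    """Parse a number given a known decimal separator and group separator.

    Hypothetical helper for illustration only; it performs no validation.
    """
    return float(text.replace(group_sep, "").replace(decimal_sep, "."))

# The same digit string "1.234" is about a thousand under one convention
# and just over one under the other:
assert parse_number("1.234,56", decimal_sep=",", group_sep=".") == 1234.56
assert parse_number("1.234", decimal_sep=".", group_sep=",") == 1.234
```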
Fractional numbers are rarely displayed in other number bases, but, when they are, a radix character may be used for the same purpose. When used with the binary (base 2) representation, it may be called a "binary point".
The 22nd General Conference on Weights and Measures[27] declared in 2003, "The symbol for the decimal marker shall be either the point on the line or the comma on the line." It further reaffirmed that[27]
numbers may be divided in groups of three in order to facilitate reading; neither dots nor commas are ever inserted in the spaces between groups.
That is, "1 000 000 000" is preferred over "1,000,000,000" or "1.000.000.000". This use has therefore been recommended by technical organizations, such as the United States's National Institute of Standards and Technology.[28]
Past versions of ISO 8601, but not the 2019 revision, also stipulated normative notation based on SI conventions, adding that the comma is preferred over the full stop.[29]
ISO 80000-1 stipulates, "The decimal sign is either a comma or a point on the line." The standard does not stipulate any preference, observing that usage will depend on customary usage in the language concerned, but adds a note that, as per the ISO/IEC directives, all ISO standards should use the comma as the decimal marker.
For ease of reading, numbers with many digits (e.g. numbers over 999) may be divided into groups using a delimiter,[30] such as comma (,), dot (.), half-space or thin space (" "), space (" "), underscore (_; as in maritime "21_450"), or apostrophe ('). In some countries, these "digit group separators" are only employed to the left of the decimal separator; in others, they are also used to separate numbers with a long fractional part. An important reason for grouping is that it allows rapid judgement of the number of digits via telling at a glance ("subitizing") rather than counting (contrast, for example, 100 000 000 with 100000000 for one hundred million).
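Most languages can emit grouped output directly. In Python's format specifiers, for example, a comma or underscore in the format spec requests grouping by threes, and other delimiters (such as the thin space) can be derived by substitution (a sketch):

```python
n = 100_000_000
assert f"{n:,}" == "100,000,000"   # comma grouping
assert f"{n:_}" == "100_000_000"   # underscore grouping
# A thin-space-grouped form, as recommended by the BIPM, via substitution:
assert f"{n:,}".replace(",", "\u2009") == "100\u2009000\u2009000"
```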
The use of thin spaces as separators,[31]: 133 instead of dots or commas (for example: 20 000 and 1 000 000 for "twenty thousand" and "one million"), has been official policy of the International Bureau of Weights and Measures (BIPM) since 1948 (and reaffirmed in 2003),[27] as well as of the International Union of Pure and Applied Chemistry (IUPAC),[32][33] the American Medical Association's widely followed AMA Manual of Style, and the UK Metrication Board, among others.
The groups created by the delimiters tend to follow the usages of local languages, which vary. In European languages, large numbers are read in groups of thousands, and the delimiter (occurring every three digits when used) may be called a "thousands separator". In East Asian cultures, particularly China, Japan, and Korea, large numbers are read in groups of myriads (10 000s), but the delimiter often separates the digits into groups of three.
The Indian numbering system is more complex: it groups the rightmost three digits together (until the hundreds place) and then groups digits in sets of two. For example, one trillion would be written "10,00,00,00,00,000" or "10 kharab".[34]
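The Indian grouping rule (three rightmost digits, then pairs) can be sketched as a small Python function (`group_indian` is an illustrative helper, not a library routine; locale-aware formatting would normally handle this):

```python
def group_indian(n: int) -> str:
    """Format a non-negative integer with Indian-style digit grouping:
    the rightmost three digits form one group, then pairs thereafter."""
    s = str(n)
    if len(s) <= 3:
        return s
    head, tail = s[:-3], s[-3:]
    groups = []
    while len(head) > 2:              # peel off pairs from the right
        groups.insert(0, head[-2:])
        head = head[:-2]
    if head:
        groups.insert(0, head)
    return ",".join(groups + [tail])

assert group_indian(10_000_000) == "1,00,00,000"            # one crore
assert group_indian(1_000_000_000_000) == "10,00,00,00,00,000"  # one trillion
```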
The convention for digit group separators historically varied among countries, but usually sought to distinguish the delimiter from the decimal separator. Traditionally, English-speaking countries (except South Africa)[35] employed commas as the delimiter – 10,000 – and other European countries employed periods or spaces: 10.000 or 10 000. Because of the confusion that could result in international documents, in recent years the use of spaces as separators has been advocated by the superseded SI/ISO 31-0 standard,[36] as well as by the BIPM and IUPAC. These groups have also begun advocating the use of a "thin space" in "groups of three".[32][33]
Within the United States, the American Medical Association's widely followed AMA Manual of Style also calls for a thin space.[30] In programming languages and online encoding environments (for example, ASCII-only languages and environments) a thin space is not practical or available. Often, either underscores,[37] regular word spaces, or no delimiters at all are used instead.
Digit group separators can occur either as part of the data or as a mask through which the data is displayed. This is an example of the separation of presentation and content, making it possible to display numbers in spaced groups while not inserting any whitespace characters into the string of digits that make up those numbers. In many computing contexts, it is preferred to omit the digit group separators from the data and instead overlay them as a mask (an input mask or an output mask).
Common examples include spreadsheets and databases, in which currency values are entered without such marks but are displayed with them inserted. Similarly, phone numbers can have hyphens, spaces or parentheses as a mask rather than as data. In web content, digit grouping can be done with CSS. This is useful because the number can be copied and pasted elsewhere (such as into a calculator) and parsed by the computer as-is (i.e., without the user manually purging the extraneous characters). For example:
In some programming languages, it is possible to group the digits in the program's source code to make it easier to read (see: Integer literal § Digit separators). Examples include: Ada, C# (since version 7.0),[38] D, Go (since version 1.13), Haskell (from GHC version 8.6.1), Java, JavaScript (since ES2021), Kotlin,[39] OCaml, Perl, Python (since version 3.6), PHP (since version 7.4),[40] Ruby, Rust and Zig.
Java, JavaScript, Swift, Julia and free-form Fortran 90 use the underscore (_) character for this purpose. As such, these languages would allow the number seven hundred million to be entered as "700_000_000". On the other hand, fixed-form Fortran ignores whitespace in all contexts, so "700 000 000" would be allowed. In C++14, Rebol and Red, the use of an apostrophe for digit grouping is allowed. Thus, "700'000'000" would be allowed in those languages.
The code shown below, written in Kotlin, illustrates the use of separators to increase readability:
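The Kotlin listing referred to here did not survive extraction; the following is a plausible reconstruction (not the original snippet), showing how underscore separators improve readability without changing values:

```kotlin
fun main() {
    val sevenHundredMillion = 700_000_000      // underscores group the digits
    val mask = 0b1111_0000_1010_0101           // grouping works in binary literals too
    println(sevenHundredMillion == 700000000)  // true: separators do not change the value
    println(mask)
}
```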
The International Bureau of Weights and Measures states that "when there are only four digits before or after the decimal marker, it is customary not to use a space to isolate a single digit."[32] Likewise, some manuals of style state that thousands separators should not be used in normal text for numbers from 1000 to 9999 where no decimal fractional part is shown (in other words, for four-digit whole numbers), whereas others use thousands separators and others use both. For example, APA style stipulates a thousands separator for "most figures of 1000 or more" except for page numbers, binary digits, temperatures, etc.
There are always "common-sense" country-specific exceptions to digit grouping, such as year numbers, postal codes, and ID numbers of predefined nongrouped format, which style guides usually point out.
In binary (base-2), a full space can be used between groups of four digits, corresponding to a nibble, or equivalently to a hexadecimal digit. For integer numbers, dots are used as well to separate groups of four bits.[b] Alternatively, binary digits may be grouped by threes, corresponding to an octal digit. Similarly, in hexadecimal (base-16), full spaces are usually used to group digits into twos, making each group correspond to a byte.[c] Additionally, groups of eight bytes are often separated by a hyphen.[c]
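Nibble grouping is also what Python's underscore format option produces for binary output (a sketch; note that for hexadecimal, Python groups by four digits, i.e. two bytes, rather than the two-digit byte grouping described above):

```python
x = 0b1010_0101
assert f"{x:_b}" == "1010_0101"          # binary grouped into nibbles
# Python's "_" option groups hex every four digits (two bytes):
assert f"{0xDEADBEEF:_x}" == "dead_beef"
```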
In countries with a decimal comma, the decimal point is also common as the "international" notation because of the influence of devices, such as electronic calculators, which use the decimal point. Most computer operating systems allow selection of the decimal separator; programs that have been carefully internationalized will follow this, but some programs ignore it and a few may even fail to operate if the setting has been changed.
Computer interfaces may be set to the Unicode international "Common locale" using LC_NUMERIC=C, as defined by the Unicode CLDR project (Unicode Consortium); the current (2020) definitions may be found in that project's POSIX-format locale data.
Countries where a comma (,) is used as a decimal separator include:
Countries where a dot (.) is used as a decimal separator include:
Unicode defines a decimal separator key symbol (⎖, U+2396, decimal 9110) which looks similar to the apostrophe. This symbol is from ISO/IEC 9995 and is intended for use on a keyboard to indicate a key that performs decimal separation.
In the Arab world, where Eastern Arabic numerals are used for writing numbers, a different character is used to separate the integer and fractional parts of numbers. It is referred to as an Arabic decimal separator (U+066B, rendered: ٫) in Unicode. An Arabic thousands separator (U+066C, rendered: ٬) also exists. Example: ۹٬۹۹۹٫۹۹ (9,999.99)
In Persian, the decimal separator is called momayyez. The Unicode Consortium's investigation concluded that "computer programs should render U+066B as a shortened, lowered, and possibly more slanted slash (٫); this should be distinguishable from the slash at first sight." To separate sequences of three digits, an Arabic thousands separator (rendered as: ٬), a Latin comma, or a blank space may be used; however, this is not a standard.[49][50][51] Example: ۹٬۹۹۹٫۹۹ (9,999.99)
In English Braille, the decimal point, ⠨, is distinct from both the comma, ⠂, and the full stop, ⠲.
The following examples show the decimal separator and the thousands separator in various countries that use the Arabic numeral system.
Used with Western Arabic numerals (0123456789):
Used with Eastern Arabic numerals (٠١٢٣٤٥٦٧٨٩):
Used with keyboards:
|
https://en.wikipedia.org/wiki/Decimal_separator
|
Decimalisation or decimalization (see spelling differences) is the conversion of a system of currency or of weights and measures to units related by powers of 10.
Most countries have decimalised their currencies, converting them from non-decimal sub-units to a decimal system, with one basic currency unit and sub-units valued relative to the basic unit by a power of 10, most commonly 100 and exceptionally 1000, sometimes at the same time changing the name of the currency and/or the conversion rate to the new currency.
Today, only two countries have de jure non-decimal currencies: Mauritania (where 1 ouguiya = 5 khoums) and Madagascar (where 1 ariary = 5 iraimbilanja).[1] However, these currencies are de facto decimal, as the value of both currencies' main unit is now so low that the sub-units are too small to be of any practical use, and coins of these sub-units are no longer used.
Russia was the first country to convert to a decimal currency when it decimalised under Tsar Peter the Great in 1704, resulting in the silver ruble being equal to 100 copper kopeks.[2][3]
For weights and measures, this is also called metrication, replacing traditional units that are related in other ways, such as those formed by successive doubling or halving, or by more arbitrary conversion factors. Units of physical measurement, such as length and mass, were decimalised with the introduction of the metric system, which has been adopted by almost all countries (with the prominent exceptions of the United States and, to a lesser extent, the United Kingdom and Canada). Thus, a kilometre is 1,000 metres, while a mile is 1,760 yards. Electrical units are decimalised worldwide.
Common units of time remain undecimalised. Although an attempt to decimalise them was made during the French Revolution, this proved to be unsuccessful and was quickly abandoned.
Decimal currencies have sub-units based on a power of 10. Most sub-units are one-100th of the base currency unit, but currencies based on 1,000 sub-units also exist in several Arab countries.
Some countries changed the name of the base unit when they decimalised their currency, including:
In 1534 the kopek of Novgorod was equated to 1/100 of the ruble of Moscow, thus making the Russian ruble Europe's first decimal currency. In the 18th century, the grivennik (10 kopeks) and the imperial (10 rubles) were introduced. This was not yet a decimal currency as known today, as there were smaller units beneath the kopek itself: the denga (half a kopek, or 200 to the ruble) and the polushka (half a denga, one-quarter kopek, or 400 to the ruble). After the October Revolution, the Soviet Union transitioned to a purely decimal model by eliminating the non-decimal subdivisions of the kopek.
France introduced the franc in 1795 to replace the livre tournois,[4] abolished during the French Revolution. France introduced decimalisation in a number of countries that it invaded during the Napoleonic period.
The Dutch guilder decimalised in 1817, becoming equal to 100 centen (instead of 20 stuivers = 160 duiten = 320 penningen), with the last pre-decimal coins withdrawn from circulation in 1848.
Sweden introduced decimal currency in 1855. The riksdaler was divided into 100 öre. The riksdaler was renamed the krona in 1873.
The Austro-Hungarian Empire decimalised the gulden in 1857, concurrent with its transition from the Conventionsthaler to the Vereinsthaler standard.
Spain introduced its decimal currency unit, the peseta, in 1868, replacing all previous currencies.
Cyprus decimalised the Cypriot pound in 1955, which comprised 1000 mils, later replaced by 100 cents.
The United Kingdom (including its overseas territories) and Ireland decimalised sterling and the Irish pound, respectively, in 1971. (See £sd and Decimal Day.)
Malta decimalised the lira in 1972.
Decimalisation was introduced into the Thirteen Colonies by the American Revolution, and then enshrined in US law by the Coinage Act of 1792.
Decimalisation in Canada was complicated by the different jurisdictions before Confederation in 1867. In 1841, the united Province of Canada's Governor General, Lord Sydenham, argued for establishment of a bank that would issue dollar currency (the Canadian dollar). Francis Hincks, who would become the Province of Canada's Prime Minister in 1851, favoured the plan. Ultimately the provincial assembly rejected the proposal.[5] In June 1851, the Canadian legislature passed a law requiring provincial accounts to be kept decimalised as dollars and cents. The establishment of a central bank was not touched upon in the 1851 legislation. The British government delayed the implementation of the currency change on a technicality, wishing to distinguish the Canadian currency from the United States' currency by referencing the units as "Royals" rather than "Dollars".[6] The British delay was overcome by the Currency Act of 1 August 1854. In 1858, coins denominated in cents and imprinted with "Canada" were issued for the first time.
Decimalisation occurred in:[6]
The colonial elite, the main advocates of decimalisation, based their case on two main arguments.[7] The first was the facilitation of trade and economic ties with the United States, the colonies' largest trading partner; the second was to simplify calculations and reduce accounting errors.[8]
The Mexican peso was formally decimalised in the 1860s with the introduction of coins denominated in centavos; however, the currency did not fully decimalise in practice immediately, and pre-decimal reales were issued until 1897.
Bermuda decimalised in 1970 by introducing the Bermudian dollar equal to 8 shillings 4 pence (100 pence, effectively equal to the US dollar under the Bretton Woods system).
The rand was introduced on 14 February 1961. A Decimal Coinage Commission had been set up in 1956 to consider a move away from the denominations of pounds, shillings and pence, submitting its recommendation on 8 August 1958.[9] It replaced the South African pound as legal tender, at the rate of 2 rand = 1 pound, or 10 shillings to the rand. Australia, New Zealand and Rhodesia also chose ten shillings as the base unit of their new currency.
Australia decimalised on 14 February 1966, with the Australian dollar replacing the Australian pound. A television campaign containing a memorable jingle, sung to the tune of "Click Go the Shears", was used to help the public understand the changes.[10] New Zealand decimalised on 10 July 1967, with the New Zealand dollar replacing the New Zealand pound.
In both countries, the conversion rate was one pound to two dollars and 10 shillings to one dollar.
To ease the transition, the new 5-cent, 10-cent and 20-cent coins were the same size and weight as their pre-decimal equivalents, and the new $1, $2, $10 and $20 banknotes (and the new $100 banknote in New Zealand) were the same colour as their pre-decimal equivalents. Because of the inexact conversion between cents and pence, people were advised to tender halfpenny, penny and threepence coins in multiples of sixpence (the lowest common multiple of both systems) during the transition.[11]
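The sixpence advice follows from the conversion rate given above: one pound (240 pence) became two dollars (200 cents), so a penny was worth 5/6 of a cent and only multiples of sixpence converted to whole cents. A worked check with exact fractions:

```python
from fractions import Fraction

# At decimalisation, 1 pound (240 pence) = 2 dollars (200 cents),
# so one penny was worth 5/6 of a cent - not a whole number of cents.
CENTS_PER_PENNY = Fraction(200, 240)

assert CENTS_PER_PENNY == Fraction(5, 6)
assert (1 * CENTS_PER_PENNY).denominator != 1   # a single penny is inexact
assert 6 * CENTS_PER_PENNY == 5                 # sixpence is exactly 5 cents
```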
King Chulalongkorn decimalised the Thai currency in 1897. The tical (baht) is now divided into one hundred satang.
Iran decimalised its currency in 1932, with the rial, subdivided into 100 new dinars, replacing the qiran at par.
Saudi Arabia decimalised the riyal in 1963, with 1 riyal = 100 halalas. Between 1960 and 1963, the riyal was worth 20 qirsh, and before that, it was worth 22 qirsh.
The Yemen Arab Republic introduced the coinage system of 1 North Yemeni rial = 100 fils in 1974, to replace the 1 rial = 40 buqsha = 80 halala = 160 zalat system. The country was one of the last to convert its coinage.
Japan historically had two decimal subdivisions of the yen: the sen (1/100) and the rin (1/1,000). However, they were taken out of circulation as of December 31, 1953, and all transactions are now conducted in multiples of 1 yen.[12]
India changed from the rupee, anna, pie system to decimal currency on 1 April 1957. Pakistan decimalised its currency in 1961.
In India, Pakistan, and other places under British colonization where a system of 1 rupee = 16 anna = 64 pice (old paisa) = 192 pie was used, the decimalisation process defines 1 rupee = 100 naya (new) paisa. The following table shows the conversion of common denominations of coins issued in modern India and Pakistan.
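The conversion arithmetic can be sketched with exact fractions (`to_naya_paisa` is an illustrative helper using only the ratios stated above):

```python
from fractions import Fraction

# Old system: 1 rupee = 16 anna = 64 pice = 192 pie; new: 1 rupee = 100 naya paisa.
UNITS_PER_RUPEE = {"anna": 16, "pice": 64, "pie": 192}

def to_naya_paisa(count, unit):
    """Exact value of `count` old units in naya paisa."""
    return count * Fraction(100, UNITS_PER_RUPEE[unit])

assert to_naya_paisa(1, "anna") == Fraction(25, 4)   # 6.25 naya paisa
assert to_naya_paisa(4, "anna") == 25                # a quarter rupee
assert to_naya_paisa(64, "pice") == 100              # a whole rupee
```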
Burma (now Myanmar) decimalised in 1952 (predating the Indian case) by changing from the rupee (worth 16 pe, each of 4 pyas) to the kyat (worth 100 pyas).
Ceylon (now Sri Lanka) decimalised in 1869, dividing the rupee into one hundred cents.
Mauritania and Madagascar theoretically retain currencies with units whose values are in the ratio five to one: the Mauritanian ouguiya (MRU) is equivalent to five khoums, and the Malagasy ariary (MGA) to five iraimbilanja.
In practice, however, the value of each of these two larger units is very small: as of 2021, the MRU is traded against the euro at about 44:1, and the MGA at about 4,600:1. In each of these countries, the smaller denomination is no longer used, although in Mauritania there is still a "one-fifth ouguiya" coin.
In the special context of quoting the prices of stocks, traded almost always in blocks of 100 or more shares and usually in blocks of many thousands, stock exchanges in the United States used eighths or sixteenths of dollars until converting to decimals between September 2000 and April 2001.[13]
Similarly, in the United Kingdom, the prices of government securities continued to be quoted in multiples of 1⁄32 of a pound (7+1⁄2d or 3+1⁄8p) long after the currency was decimalised.
The idea of measurement and currency systems where units are related by factors of ten was suggested by Simon Stevin, who in 1585 first advocated the use of decimal numbers for everyday purposes.[14] The metric system was developed in France in the 1790s as part of the reforms introduced during the French Revolution. Its adoption was gradual, both within France and in other countries, but its use is nearly universal today. One aspect of measurement decimalisation was the introduction of metric prefixes to derive bigger and smaller sizes from base unit names. Examples include kilo for 1000, hecto for 100, centi for 1/100 and milli for 1/1000. The list of metric prefixes has expanded in modern times to encompass a wider range of measurements.
While the common units of time (minute, hour, day, month and year) are not decimalised, there have been proposals for decimalisation of the time of day and decimal calendar systems. Astronomers use a decimalised Julian day number to record and predict events. Decades, centuries, and millennia are examples of common units of time that are decimalised.[15] The millisecond is a decimalised unit of time equivalent to a thousandth of a second, and is sometimes used in computing contexts.[16][17]
The gradian or grade is an angular unit defined as one hundredth of the right angle (approximately 0.0157 rad), further divided into one hundred centigrades.
In computer science, several metric prefixes are used with units of information. For example, a kilobit is equivalent to 1,000 bits.[18]
Amounts of money are sometimes described in a decimalised way. For example, the letter K (standing for kilo-) can be used to indicate that a sum of money ought to be multiplied by 1,000, i.e. $250k means $250,000. The letters M or MM can be used to indicate that a sum of money should be multiplied by a million, i.e. $3.5M means $3,500,000. The letter B similarly stands for a billion.[19][20]
|
https://en.wikipedia.org/wiki/Decimalisation
|
Densely packed decimal (DPD) is an efficient method for binary encoding of decimal digits.
The traditional system of binary encoding for decimal digits, known as binary-coded decimal (BCD), uses four bits to encode each digit, resulting in significant wastage of binary data bandwidth (since four bits can store 16 states and are being used to store only 10), even when using packed BCD. Densely packed decimal is a more efficient code that packs three digits into ten bits using a scheme that allows compression from, or expansion to, BCD with only two or three hardware gate delays.[1]
The densely packed decimal encoding is a refinement of Chen–Ho encoding; it gives the same compression and speed advantages, but the particular arrangement of bits used confers additional advantages:
In 1969, Theodore M. Hertz, and in 1971, Tien Chi Chen (陳天機) with Irving Tze Ho (何宜慈), devised lossless prefix codes (referred to as Hertz and Chen–Ho encodings[2]) which packed three decimal digits into ten binary bits using a scheme which allowed compression from or expansion to BCD with only two or three gate delays in hardware. Densely packed decimal is a refinement of this, devised by Mike F. Cowlishaw in 2002,[1] which was incorporated into the IEEE 754-2008[3] and ISO/IEC/IEEE 60559:2011[4] standards for decimal floating point.
Like Chen–Ho encoding, DPD encoding classifies each decimal digit into one of two ranges, depending on the most significant bit of the binary form: "small" digits have values 0 through 7 (binary 0000–0111), and "large" digits, 8 through 9 (binary 1000–1001). Once it is known or has been indicated that a digit is small, three more bits are still required to specify the value. If a large value has been indicated, only one bit is required to distinguish between the values 8 and 9.
When encoding, the most significant bits of each of the three digits to be encoded determine one of eight coding patterns for the remaining bits, according to the following table. The table shows how, on decoding, the ten bits of the coded form in columns b9 through b0 are copied into the three digits d2 through d0, and the remaining bits are filled in with constant zeros or ones.
Bits b7, b4 and b0 (c, f and i) are passed through the encoding unchanged, and do not affect the meaning of the other bits. The remaining seven bits can be considered a seven-bit encoding for three base-5 digits.
Bits b8 and b9 are not needed and are ignored when decoding DPD groups with three large digits (marked as "x" in the last row of the table above), but are filled with zeros when encoding.
The eight decimal values whose digits are all 8s or 9s have four codings each.
The bits marked x in the table above are ignored on input, but will always be 0 in computed results.
(The 3 × 8 = 24 non-standard encodings fill in the gap between 10³ = 1000 and 2¹⁰ = 1024.)
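The eight coding patterns described above can be sketched as a small encoder/decoder pair (an illustrative Python implementation of the canonical patterns, not production code; it handles only canonical declets):

```python
def dpd_encode(d2, d1, d0):
    """Pack three decimal digits (0-9 each) into a 10-bit DPD declet.

    Each digit is "small" (0-7) or "large" (8-9); the indicator bit b3
    and the pattern bits b2 b1 (and, for multiple large digits, b6 b5)
    select which of the eight coding patterns is in use.
    """
    L2, L1, L0 = d2 > 7, d1 > 7, d0 > 7     # large-digit flags
    x, y, z = d2 & 7, d1 & 7, d0 & 7        # low three bits of each digit
    if not (L2 or L1 or L0):                # all small
        return (x << 7) | (y << 4) | z
    if L0 and not (L2 or L1):               # only d0 large
        return (x << 7) | (y << 4) | 0b1000 | (d0 & 1)
    if L1 and not (L2 or L0):               # only d1 large
        return (x << 7) | ((z >> 1) << 5) | ((d1 & 1) << 4) | 0b1010 | (d0 & 1)
    if L2 and not (L1 or L0):               # only d2 large
        return ((z >> 1) << 8) | ((d2 & 1) << 7) | (y << 4) | 0b1100 | (d0 & 1)
    if L1 and L0 and not L2:                # d1 and d0 large
        return (x << 7) | (0b10 << 5) | ((d1 & 1) << 4) | 0b1110 | (d0 & 1)
    if L2 and L0 and not L1:                # d2 and d0 large
        return ((y >> 1) << 8) | ((d2 & 1) << 7) | (0b01 << 5) | ((d1 & 1) << 4) | 0b1110 | (d0 & 1)
    if L2 and L1 and not L0:                # d2 and d1 large
        return ((z >> 1) << 8) | ((d2 & 1) << 7) | ((d1 & 1) << 4) | 0b1110 | (d0 & 1)
    # all three large: b9 b8 filled with zeros, b6 b5 = 11
    return ((d2 & 1) << 7) | (0b11 << 5) | ((d1 & 1) << 4) | 0b1110 | (d0 & 1)

def dpd_decode(n):
    """Unpack a canonical 10-bit DPD declet into three decimal digits."""
    b = [(n >> i) & 1 for i in range(10)]   # b[0] is bit b0 ... b[9] is b9
    hi3 = (b[9] << 2) | (b[8] << 1) | b[7]
    mid3 = (b[6] << 2) | (b[5] << 1) | b[4]
    low3 = (b[2] << 2) | (b[1] << 1) | b[0]
    if b[3] == 0:                           # all digits small
        return hi3, mid3, low3
    if (b[2], b[1]) == (0, 0):              # only d0 large
        return hi3, mid3, 8 + b[0]
    if (b[2], b[1]) == (0, 1):              # only d1 large
        return hi3, 8 + b[4], (b[6] << 2) | (b[5] << 1) | b[0]
    if (b[2], b[1]) == (1, 0):              # only d2 large
        return 8 + b[7], mid3, (b[9] << 2) | (b[8] << 1) | b[0]
    if (b[6], b[5]) == (1, 0):              # d1 and d0 large
        return hi3, 8 + b[4], 8 + b[0]
    if (b[6], b[5]) == (0, 1):              # d2 and d0 large
        return 8 + b[7], (b[9] << 2) | (b[8] << 1) | b[4], 8 + b[0]
    if (b[6], b[5]) == (0, 0):              # d2 and d1 large
        return 8 + b[7], 8 + b[4], (b[9] << 2) | (b[8] << 1) | b[0]
    return 8 + b[7], 8 + b[4], 8 + b[0]     # all large (b9, b8 ignored)

# Round-trip every three-digit group 000-999:
assert all(dpd_decode(dpd_encode(a, b_, c)) == (a, b_, c)
           for a in range(10) for b_ in range(10) for c in range(10))
assert dpd_encode(0, 0, 5) == 0b0000000101   # small digits pass straight through
assert dpd_encode(9, 9, 9) == 0b0011111111   # canonical encoding of 999
```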
This table shows some representative decimal numbers and their encodings in BCD, Chen–Ho, and densely packed decimal (DPD):
|
https://en.wikipedia.org/wiki/Densely_packed_decimal
|
The duodecimal system, also known as base twelve or dozenal, is a positional numeral system using twelve as its base. In duodecimal, the number twelve is denoted "10", meaning 1 twelve and 0 units; in the decimal system, this number is instead written as "12", meaning 1 ten and 2 units, and the string "10" means ten. In duodecimal, "100" means twelve squared (144), "1,000" means twelve cubed (1,728), and "0.1" means a twelfth (0.08333...).
Various symbols have been used to stand for ten and eleven in duodecimal notation; this page uses A and B, as in hexadecimal, which make a duodecimal count from zero to twelve read 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, and finally 10. The Dozenal Societies of America and Great Britain (organisations promoting the use of duodecimal) use turned digits in their published material: ↊ (a turned 2) for ten (dek, pronounced dɛk) and ↋ (a turned 3) for eleven (el, pronounced ɛl).
The number twelve, a superior highly composite number, is the smallest number with four non-trivial factors (2, 3, 4, 6), the smallest to include as factors all four numbers (1 to 4) within the subitizing range, and the smallest abundant number. All multiples of reciprocals of 3-smooth numbers (a/(2^b·3^c) where a, b, c are integers) have a terminating representation in duodecimal. In particular, 1/4 (0.3), 1/3 (0.4), 1/2 (0.6), 2/3 (0.8), and 3/4 (0.9) all have a short terminating representation in duodecimal. There is also higher regularity observable in the duodecimal multiplication table. As a result, duodecimal has been described as the optimal number system.[1]
In these respects, duodecimal is considered superior to decimal, which has only 2 and 5 as factors, and to other proposed bases like octal or hexadecimal. Sexagesimal (base sixty) does even better in this respect (the reciprocals of all 5-smooth numbers terminate), but at the cost of unwieldy multiplication tables and a much larger number of symbols to memorize.
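The conversions above (twelve = "10", 144 = "100", 1/4 = "0.3") can be sketched with two small helpers, using A and B for ten and eleven as in this article (illustrative functions, not standard-library routines):

```python
DIGITS = "0123456789AB"

def to_duodecimal(n: int) -> str:
    """Duodecimal string for a non-negative integer."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 12)
        out.append(DIGITS[r])
    return "".join(reversed(out))

def frac_to_duodecimal(num: int, den: int, places: int = 6) -> str:
    """Truncated duodecimal expansion of a fraction 0 <= num/den < 1."""
    out = ["0."]
    for _ in range(places):
        num *= 12
        d, num = divmod(num, den)
        out.append(DIGITS[d])
        if num == 0:
            break
    return "".join(out)

assert to_duodecimal(12) == "10"           # twelve is "10" in duodecimal
assert to_duodecimal(144) == "100"         # twelve squared
assert to_duodecimal(11) == "B"
assert frac_to_duodecimal(1, 4) == "0.3"   # 1/4 terminates immediately
assert frac_to_duodecimal(2, 3) == "0.8"
```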
Georges Ifrah speculatively traced the origin of the duodecimal system to a system of finger counting based on the knuckle bones of the four larger fingers. Using the thumb as a pointer, it is possible to count to 12 by touching each finger bone, starting with the farthest bone on the fifth finger, and counting on. In this system, one hand counts repeatedly to 12, while the other displays the number of iterations, until five dozens, i.e. 60, are full. This system is still in use in many regions of Asia.[2][3]
Languages using duodecimal number systems are uncommon. Languages in the Nigerian Middle Belt such as Janji, Gbiri-Niragu (Gure-Kahugu), Piti, and the Nimbia dialect of Gwandara,[4] and the Chepang language of Nepal,[5] are known to use duodecimal numerals.
Germanic languages have special words for 11 and 12, such as eleven and twelve in English. They come from Proto-Germanic *ainlif and *twalif (meaning, respectively, one left and two left), suggesting a decimal rather than duodecimal origin.[6][7] However, Old Norse used a hybrid decimal–duodecimal counting system, with its words for "one hundred and eighty" meaning 200 and "two hundred" meaning 240.[8] In the British Isles, this style of counting survived well into the Middle Ages as the long hundred ("hundred" meaning 120).
Historically, units of time in many civilizations are duodecimal. There are twelve signs of the zodiac, twelve months in a year, and the Babylonians had twelve hours in a day (although at some point this was changed to 24). Traditional Chinese calendars, clocks, and compasses are based on the twelve Earthly Branches or 24 (12×2) Solar terms. There are 12 inches in an imperial foot, 12 troy ounces in a troy pound, 24 (12×2) hours in a day; many other items are counted by the dozen, gross (144, twelve squared), or great gross (1728, twelve cubed). The Romans used a fraction system based on 12, including the uncia, which became both the English words ounce and inch. Historically, many parts of western Europe used a mixed vigesimal–duodecimal currency system of pounds, shillings, and pence, with 20 shillings to a pound and 12 pence to a shilling, originally established by Charlemagne in the 780s.
In a positional numeral system of base n (twelve for duodecimal), each of the first n natural numbers is given a distinct numeral symbol, and then n is denoted "10", meaning 1 times n plus 0 units. For duodecimal, the standard numeral symbols for 0–9 are typically preserved for zero through nine, but there are numerous proposals for how to write the numerals representing "ten" and "eleven".[9] More radical proposals do not use any Arabic numerals under the principle of "separate identity".[9]
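The positional rule above can be sketched in a few lines of Python. This is an illustrative example, not from the source; it uses the letters X and E for ten and eleven, one of the letter conventions described in the following paragraphs:

```python
DIGITS = "0123456789XE"  # X = ten, E = eleven (one common letter convention)

def to_duodecimal(n: int) -> str:
    """Convert a non-negative integer to its duodecimal digit string."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 12)   # peel off the least significant base-12 digit
        out.append(DIGITS[r])
    return "".join(reversed(out))
```

For instance, twelve itself comes out as "10", exactly as the positional rule requires.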
Pronunciation of duodecimal numbers also has no standard, but various systems have been proposed.
Several authors have proposed using letters of the alphabet for the transdecimal symbols. Latin letters such as ⟨A, B⟩ (as in hexadecimal) or ⟨T, E⟩ (initials of Ten and Eleven) are convenient because they are widely accessible, and for instance can be typed on typewriters. However, when mixed with ordinary prose, they might be confused for letters. As an alternative, Greek letters such as ⟨τ, ε⟩ could be used instead.[9] Frank Emerson Andrews, an early American advocate for duodecimal, suggested and used in his 1935 book New Numbers ⟨X, Ɛ⟩ (italic capital X from the Roman numeral for ten and a rounded italic capital E similar to open E), along with italic numerals 0–9.[11]
Edna Kramer in her 1951 book The Main Stream of Mathematics used ⟨*, #⟩ (sextile or six-pointed asterisk,[12] hash or octothorpe).[9] The symbols were chosen because they were available on some typewriters; they are also on push-button telephones.[9] This notation was used in publications of the Dozenal Society of America (DSA) from 1974 to 2008.[13][14]
From 2008 to 2015, the DSA used ⟨,⟩, the symbols devised by William Addison Dwiggins.[9][15]
The Dozenal Society of Great Britain (DSGB) proposed symbols ⟨↊, ↋⟩.[9] This notation, derived from Arabic digits by 180° rotation, was introduced by Isaac Pitman in 1857.[9][16] In March 2013, a proposal was submitted to include the digit forms for ten and eleven propagated by the Dozenal Societies in the Unicode Standard.[17] Of these, the British/Pitman forms were accepted for encoding as characters at code points U+218A ↊ TURNED DIGIT TWO and U+218B ↋ TURNED DIGIT THREE. They were included in Unicode 8.0 (2015).[18][19]
After the Pitman digits were added to Unicode, the DSA took a vote and then began publishing PDF content using the Pitman digits instead, but continues to use the letters X and E on its webpage.[20]
There are also varying proposals for how to distinguish a duodecimal number from a decimal one. The most common method, used in mainstream mathematics sources comparing various number bases, is a subscript "10" or "12", e.g. "54₁₂ = 64₁₀". To avoid ambiguity about the meaning of the subscript 10, the subscripts might be spelled out: "54_twelve = 64_ten". In 2015 the Dozenal Society of America adopted the more compact single-letter abbreviations "z" for "dozenal" and "d" for "decimal": "54_z = 64_d".[26]
Other proposed methods include italicizing duodecimal numbers ("54 = 64"), adding a "Humphrey point" (a semicolon instead of a decimal point) to duodecimal numbers ("54;6 = 64.5"), prefixing duodecimal numbers by an asterisk ("*54 = 64"), or some combination of these. The Dozenal Society of Great Britain uses an asterisk prefix for duodecimal whole numbers, and a Humphrey point for other duodecimal numbers.[26]
The Dozenal Society of America suggested ten and eleven should be pronounced as "dek" and "el", respectively.
Terms for some powers of twelve already exist in English: the number twelve (10₁₂ or 12₁₀) is also called a dozen. Twelve squared (100₁₂ or 144₁₀) is called a gross.[27] Twelve cubed (1000₁₂ or 1728₁₀) is called a great gross.[28]
William James Sidis used 12 as the base for his constructed language Vendergood in 1906, noting that it is the smallest number with four factors and citing its prevalence in commerce.[29]
The case for the duodecimal system was put forth at length in Frank Emerson Andrews' 1935 book New Numbers: How Acceptance of a Duodecimal Base Would Simplify Mathematics. Andrews noted that, due to the prevalence of factors of twelve in many traditional units of weight and measure, many of the computational advantages claimed for the metric system could be realized either by the adoption of ten-based weights and measures or by the adoption of the duodecimal number system.[11]
Both the Dozenal Society of America (founded as the Duodecimal Society of America in 1944) and the Dozenal Society of Great Britain (founded 1959) promote adoption of the duodecimal system.
Mathematician and mental calculator Alexander Craig Aitken was an outspoken advocate of duodecimal:
The duodecimal tables are easy to master, easier than the decimal ones; and in elementary teaching they would be so much more interesting, since young children would find more fascinating things to do with twelve rods or blocks than with ten. Anyone having these tables at command will do these calculations more than one-and-a-half times as fast in the duodecimal scale as in the decimal. This is my experience; I am certain that even more so it would be the experience of others.
But the final quantitative advantage, in my own experience, is this: in varied and extensive calculations of an ordinary and not unduly complicated kind, carried out over many years, I come to the conclusion that the efficiency of the decimal system might be rated at about 65 or less, if we assign 100 to the duodecimal.
In "Little Twelvetoes," an episode of the American educational television series Schoolhouse Rock!, a farmer encounters an alien being with a total of twelve fingers and twelve toes who uses duodecimal arithmetic. The alien uses "dek" and "el" as names for ten and eleven, and Andrews' script-X and script-E for the digit symbols.[32][33]
Systems of measurement proposed by dozenalists include Tom Pendlebury's TGM system,[34][35] Takashi Suga's Universal Unit System,[36][35] and John Volan's Primel system.[37]
The Dozenal Society of America argues that if a base is too small, significantly longer expansions are needed for numbers; if a base is too large, one must memorise a large multiplication table to perform arithmetic. Thus, it presumes that "a number base will need to be between about 7 or 8 through about 16, possibly including 18 and 20".[38]
The number 12 has six factors, which are 1, 2, 3, 4, 6, and 12, of which 2 and 3 are prime. It is the smallest number to have six factors, the largest number to have at least half of the numbers below it as divisors, and is only slightly larger than 10. (The numbers 18 and 20 also have six factors but are much larger.) Ten, in contrast, has only four factors, which are 1, 2, 5, and 10, of which 2 and 5 are prime.[38] Six shares the prime factors 2 and 3 with twelve; however, like ten, six has only four factors (1, 2, 3, and 6) instead of six. Its corresponding base, senary, is below the DSA's stated threshold.
Eight and sixteen have only 2 as a prime factor. Therefore, in octal and hexadecimal, the only terminating fractions are those whose denominator is a power of two.
Thirty is the smallest number that has three different prime factors (2, 3, and 5, the first three primes), and it has eight factors in total (1, 2, 3, 5, 6, 10, 15, and 30). Sexagesimal was actually used by the ancient Sumerians and Babylonians, among others; its base, sixty, adds the four convenient factors 4, 12, 20, and 60 to 30 but no new prime factors. The smallest number that has four different prime factors is 210; the pattern follows the primorials. However, these numbers are quite large to use as bases, and are far beyond the DSA's stated threshold.
In all base systems, there are similarities to the representation of multiples of numbers that are one less than or one more than the base.
In the following multiplication table, numerals are written in duodecimal. For example, "10" means twelve, and "12" means fourteen.
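A table of this kind can be regenerated with a short Python sketch (illustrative, not from the source; it assumes the ⟨X, E⟩ letter convention for ten and eleven):

```python
def dd(n: int) -> str:
    """Minimal duodecimal formatter for this sketch (X = ten, E = eleven)."""
    digits = "0123456789XE"
    s = ""
    while True:
        s = digits[n % 12] + s
        n //= 12
        if n == 0:
            return s

# Print the duodecimal multiplication table for 1 through 10 (i.e. twelve)
for i in range(1, 13):
    print(" ".join(dd(i * j).rjust(3) for j in range(1, 13)))
```

Consistent with the note above, `dd(12)` yields "10" and `dd(14)` yields "12".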
To convert numbers between bases, one can use the general conversion algorithm (see the relevant section under positional notation). Alternatively, one can use digit-conversion tables. The ones provided below can be used to convert any duodecimal number between 0.1 and BB,BBB.B to decimal, or any decimal number between 0.1 and 99,999.9 to duodecimal. To use them, the given number must first be decomposed into a sum of numbers with only one significant digit each. For example:
This decomposition works the same no matter what base the number is expressed in. Just isolate each non-zero digit, padding them with as many zeros as necessary to preserve their respective place values. If the digits in the given number include zeroes (for example, 7,080.9), these are left out in the digit decomposition (7,080.9 = 7,000 + 80 + 0.9). Then, the digit conversion tables can be used to obtain the equivalent value in the target base for each digit. If the given number is in duodecimal and the target base is decimal, we get:
Because the summands are already converted to decimal, the usual decimal arithmetic is used to perform the addition and recompose the number, arriving at the conversion result:
That is, (duodecimal) 12,345.6 equals (decimal) 24,677.5.
If the given number is in decimal and the target base is duodecimal, the method is the same. Using the digit conversion tables:
(decimal) 10,000 + 2,000 + 300 + 40 + 5 + 0.6 = (duodecimal) 5,954 + 1,1A8 + 210 + 34 + 5 + 0.7249
To sum these partial products and recompose the number, the addition must be done with duodecimal rather than decimal arithmetic:
That is, (decimal) 12,345.6 equals (duodecimal) 7,189.7249.
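The table-based procedure can be cross-checked with a direct positional evaluation. This is a minimal sketch (the function name is ours) that evaluates a duodecimal string digit by digit, and also accepts the Humphrey point mentioned earlier:

```python
def duodec_to_decimal(s: str) -> float:
    """Evaluate a duodecimal numeral (X = ten, E = eleven) as a decimal value."""
    digits = "0123456789XE"
    s = s.replace(",", "").replace(";", ".")  # strip grouping; Humphrey point -> point
    intpart, _, frac = s.partition(".")
    value = 0
    for ch in intpart:                         # integer part: Horner's scheme
        value = value * 12 + digits.index(ch)
    result = float(value)
    scale = 1.0 / 12
    for ch in frac:                            # fractional part: negative powers of 12
        result += digits.index(ch) * scale
        scale /= 12
    return result
```

Evaluating "12,345.6" this way reproduces the worked result 24,677.5.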
Duodecimal fractions for rational numbers with 3-smooth denominators terminate:
while other rational numbers have recurring duodecimal fractions:
As explained in recurring decimals, whenever an irreducible fraction is written in radix point notation in any base, the fraction can be expressed exactly (terminates) if and only if all the prime factors of its denominator are also prime factors of the base.
Because 2 × 5 = 10 in the decimal system, fractions whose denominators are made up solely of multiples of 2 and 5 terminate: 1/8 = 1/(2×2×2), 1/20 = 1/(2×2×5), and 1/500 = 1/(2×2×5×5×5) can be expressed exactly as 0.125, 0.05, and 0.002 respectively. 1/3 and 1/7, however, recur (0.333... and 0.142857142857...).
Because 2 × 2 × 3 = 12 in the duodecimal system, 1/8 is exact; 1/20 and 1/500 recur because they include 5 as a factor; 1/3 is exact, and 1/7 recurs, just as it does in decimal.
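This criterion is easy to check mechanically. The following Python helper (an illustrative sketch, not from the source) reduces the fraction and then strips from the denominator every prime factor it shares with the base; the fraction terminates exactly when nothing is left:

```python
from math import gcd

def terminates(numerator: int, denominator: int, base: int) -> bool:
    """True if numerator/denominator has a terminating expansion in the given base."""
    d = denominator // gcd(numerator, denominator)  # reduce the fraction first
    g = gcd(d, base)
    while g > 1:                 # strip every prime factor shared with the base
        while d % g == 0:
            d //= g
        g = gcd(d, base)
    return d == 1
```

As in the text: 1/8 terminates in both bases, 1/20 terminates in decimal but not duodecimal, and 1/3 terminates in duodecimal but not decimal.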
The number of denominators that give terminating fractions within a given number of digits, n, in a base b is the number of factors (divisors) of b^n, the nth power of the base b (although this includes the divisor 1, which does not produce fractions when used as the denominator). The number of factors of b^n is given using its prime factorization.
For decimal, 10^n = 2^n × 5^n. The number of divisors is found by adding one to each exponent of each prime and multiplying the resulting quantities together, so the number of factors of 10^n is (n + 1)(n + 1) = (n + 1)^2.
For example, the number 8 is a factor of 10^3 (1000), so 1/8 and other fractions with a denominator of 8 cannot require more than three fractional decimal digits to terminate: 5/8 = 0.625₁₀.
For duodecimal, 10^n = 2^(2n) × 3^n. This has (2n + 1)(n + 1) divisors. The sample denominator of 8 is a factor of a gross (12^2 = 144 in decimal), so eighths cannot need more than two duodecimal fractional places to terminate: 5/8 = 0.76₁₂.
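Both divisor-count formulas can be verified numerically with a small trial-division routine (an illustrative sketch under the formulas above):

```python
def num_divisors(x: int) -> int:
    """Count the divisors of x via trial-division factorization."""
    count = 1
    p = 2
    while p * p <= x:
        e = 0
        while x % p == 0:       # extract the exponent of prime p
            x //= p
            e += 1
        count *= e + 1
        p += 1
    if x > 1:                   # one prime factor > sqrt(original) remains
        count *= 2
    return count

# d(10^n) = (n+1)^2 and d(12^n) = (2n+1)(n+1), as derived above
for n in range(1, 6):
    assert num_divisors(10**n) == (n + 1) ** 2
    assert num_divisors(12**n) == (2 * n + 1) * (n + 1)
```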
Because both ten and twelve have two unique prime factors, the number of divisors of b^n for b = 10 or 12 grows quadratically with the exponent n (in other words, of the order of n^2).
The Dozenal Society of America argues that factors of 3 are more commonly encountered in real-life division problems than factors of 5.[38] Thus, in practical applications, the nuisance of repeating decimals is encountered less often when duodecimal notation is used. Advocates of duodecimal systems argue that this is particularly true of financial calculations, in which the twelve months of the year often enter into calculations.
However, when recurring fractions do occur in duodecimal notation, they are less likely to have a very short period than in decimal notation, because 12 (twelve) is between two prime numbers, 11 (eleven) and 13 (thirteen), whereas ten is adjacent to the composite number 9. Nonetheless, having a shorter or longer period does not remove the main inconvenience: one does not get a finite representation for such fractions in the given base, so rounding, which introduces inexactitude, is necessary to handle them in calculations. Overall, one is more likely to have to deal with infinite recurring digits when fractions are expressed in decimal than in duodecimal, because one out of every three consecutive numbers contains the prime factor 3 in its factorization, whereas only one out of every five contains the prime factor 5. All other prime factors, except 2, are shared by neither ten nor twelve, so they do not influence the relative likelihood of encountering recurring digits (any irreducible fraction that contains any of these other factors in its denominator will recur in either base).
Also, the prime factor 2 appears twice in the factorization of twelve, but only once in the factorization of ten; this means that most fractions whose denominators are powers of two will have a shorter, more convenient terminating representation in duodecimal than in decimal:
The duodecimal period lengths of 1/n are (in decimal):
The duodecimal period lengths of 1/(nth prime) are (in decimal):
The smallest primes with duodecimal period n are (in decimal):
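The period lengths referred to above can be computed as the multiplicative order of the base modulo the denominator, after stripping the prime factors the denominator shares with the base. A hedged Python sketch (function name ours):

```python
from math import gcd

def period_length(n: int, base: int = 12) -> int:
    """Length of the recurring part of 1/n in the given base (0 if it terminates)."""
    g = gcd(n, base)
    while g > 1:                  # strip prime factors shared with the base
        while n % g == 0:
            n //= g
        g = gcd(n, base)
    if n == 1:
        return 0                  # the expansion terminates
    k, r = 1, base % n            # multiplicative order of base modulo n
    while r != 1:
        r = r * base % n
        k += 1
    return k
```

For example, 1/5 recurs with period 4 in duodecimal (0.2497 2497...), while 1/7 has period 6 in both bases, matching the observations above.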
The representations of irrational numbers in any positional number system (including decimal and duodecimal) neither terminate nor repeat. The following table gives the first digits for some important algebraic and transcendental numbers in both decimal and duodecimal.
https://en.wikipedia.org/wiki/Duodecimal
A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or submultiple of the unit. All metric prefixes used today are decadic. Each prefix has a unique symbol that is prepended to any unit symbol. The prefix kilo, for example, may be added to gram to indicate multiplication by one thousand: one kilogram is equal to one thousand grams. The prefix milli, likewise, may be added to metre to indicate division by one thousand; one millimetre is equal to one thousandth of a metre.
Decimal multiplicative prefixes have been a feature of all forms of the metric system, with six of these dating back to the system's introduction in the 1790s. Metric prefixes have also been used with some non-metric units. The SI prefixes are metric prefixes that were standardised for use in the International System of Units (SI) by the International Bureau of Weights and Measures (BIPM) in resolutions dating from 1960 to 2022.[1][2] Since 2009, they have formed part of the ISO/IEC 80000 standard. They are also used in the Unified Code for Units of Measure (UCUM).
The BIPM specifies twenty-four prefixes for the International System of Units (SI).
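Because every prefix is a power of ten, converting a quantity between prefixes is just a matter of subtracting exponents. A minimal Python sketch over the 24 SI prefix symbols (plus the empty prefix for the bare unit; the helper name is ours):

```python
# Powers of ten for the 24 SI prefixes (symbol -> exponent), plus the bare unit
SI_PREFIXES = {
    "Q": 30, "R": 27, "Y": 24, "Z": 21, "E": 18, "P": 15, "T": 12,
    "G": 9, "M": 6, "k": 3, "h": 2, "da": 1, "": 0,
    "d": -1, "c": -2, "m": -3, "µ": -6, "n": -9, "p": -12,
    "f": -15, "a": -18, "z": -21, "y": -24, "r": -27, "q": -30,
}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Re-express a quantity under a different SI prefix, e.g. km -> mm."""
    return value * 10.0 ** (SI_PREFIXES[from_prefix] - SI_PREFIXES[to_prefix])
```

For example, 1 km is 10^(3 − (−3)) = 10^6 mm.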
The first uses of prefixes in SI date back to the definition of the kilogram after the French Revolution at the end of the 18th century. Several more prefixes came into use, and were recognised by the 1947 IUPAC 14th International Conference of Chemistry[5] before being officially adopted for the first time in 1960.[6]
The most recent prefixes adopted were ronna, quetta, ronto, and quecto in 2022, after a proposal from British metrologist Richard J. C. Brown. (Before 2022, Q/q and R/r were the only Latin letters available for abbreviations: all other Latin letters were either already used for other prefixes (a, c, d, E, f, G, h, k, M, m, n, P, p, T, Y, y, Z, z), already used for SI units, including SI base units, SI derived units, and non-SI units mentioned in the SI (A, B, C, d, F, g, H, h, J, K, L, m, N, S, s, T, t, u, V, W), or easily confused with mathematical operators: I and l with 1, O and o with 0, X and x with ×.) The large prefixes ronna and quetta were adopted in anticipation of needs for use in data science, and because unofficial prefixes that did not meet SI requirements were already circulating. The small prefixes were also added, even without such a driver, in order to maintain symmetry.[7]
The prefixes from peta to quetta are based on the Ancient Greek or Ancient Latin numbers from 5 to 10, referring to the 5th through 10th powers of 10^3. The initial letter h has been removed from some of these stems, and the initial letters z, y, r, and q have been added, ascending in reverse alphabetical order, to avoid confusion with other metric prefixes.
When mega and micro were adopted in 1873, three prefixes existed starting with "m". It was necessary to use a symbol other than upper- and lowercase 'm'. Eventually the Greek letter "μ" was adopted.
With the lack of a "μ" key on most typewriters, as well as computer keyboards, various other abbreviations remained common, including "mc", "mic", M, and "u".
From about 1960 onwards, "u" prevailed in type-written documents.[c] Because ASCII, EBCDIC, and other common encodings lacked code points for "μ", this tradition remained even as computers replaced typewriters.
When ISO 8859-1 was created, it included the "μ" symbol for micro at code point 0xB5; later, the whole of ISO 8859-1 was incorporated into the initial version of Unicode. Many fonts that support both characters render them identically, but because the micro sign and the Greek lower-case letter have different applications (normally, a Greek letter would be used with other Greek letters, but the micro sign is never used like that), some fonts render them differently, e.g. Linux Libertine and Segoe UI.[citation needed]
Most English-language keyboards do not have a "μ" key, so it is necessary to use a key-code; this varies depending on the operating system, physical keyboard layout, and user's language.
The LaTeX typesetting system features the siunitx package, in which the units of measurement are spelled out; for example, \qty{3}{\tera\hertz} formats as "3 THz".[13]
The use of prefixes can be traced back to the introduction of the metric system in the 1790s, long before the 1960 introduction of the SI.[citation needed] The prefixes, including those introduced after 1960, are used with any metric unit, whether officially included in the SI or not (e.g., millidyne and milligauss). Metric prefixes may also be used with some non-metric units, but not, for example, with the non-SI units of time.[14]
The units kilogram, gram, milligram, microgram, and smaller are commonly used for measurement of mass. However, megagram, gigagram, and larger are rarely used; tonnes (and kilotonnes, megatonnes, etc.) or scientific notation are used instead. The megagram does not share the risk of confusion that the tonne has with other units with the name "ton".
The kilogram is the only coherent unit of the International System of Units that includes a metric prefix.[15]: 144
The litre (equal to a cubic decimetre), millilitre (equal to a cubic centimetre), microlitre, and smaller are common. In Europe, the centilitre is often used for liquids, and the decilitre is used less frequently. Bulk agricultural products, such as grain, beer and wine, often use the hectolitre (100 litres).[citation needed]
Larger volumes are usually denoted in kilolitres, megalitres or gigalitres, or else in cubic metres (1 cubic metre = 1 kilolitre) or cubic kilometres (1 cubic kilometre = 1 teralitre). For scientific purposes, the cubic metre is usually used.[citation needed]
The kilometre, metre, centimetre, millimetre, and smaller units are common. The decimetre is rarely used. The micrometre is often referred to by the older non-SI name micron, which is officially deprecated. In some fields, such as chemistry, the ångström (0.1 nm) has been used commonly instead of the nanometre. The femtometre, used mainly in particle physics, is sometimes called a fermi. For large scales, megametre, gigametre, and larger are rarely used. Instead, ad hoc non-metric units are used, such as the solar radius, astronomical units, light years, and parsecs; the astronomical unit is mentioned in the SI standards as an accepted non-SI unit.[citation needed]
Prefixes for the SI standard unit second are most commonly encountered for quantities less than one second. For larger quantities, the system of minutes (60 seconds), hours (60 minutes) and days (24 hours) is accepted for use with the SI and more commonly used. When speaking of spans of time, the length of the day is usually standardised to 86 400 seconds so as not to create issues with the irregular leap second.[citation needed]
Larger multiples of the second such as kiloseconds and megaseconds are occasionally encountered in scientific contexts, but are seldom used in common parlance. For long-scale scientific work, particularly in astronomy, the Julian year or annum (a) is a standardised variant of the year, equal to exactly 31 557 600 seconds (365 + 1/4 days). The unit is so named because it was the average length of a year in the Julian calendar. Long time periods are then expressed by using metric prefixes with the annum, such as megaannum (Ma) or gigaannum (Ga).[citation needed]
The SI unit of angle is the radian, but degrees, as well as arcminutes and arcseconds, see some scientific use in fields such as astronomy.[16]
Common practice does not typically use the flexibility allowed by official policy in the case of the degree Celsius (°C). NIST states:[17] "Prefix symbols may be used with the unit symbol °C and prefix names may be used with the unit name degree Celsius. For example, 12 m°C (12 millidegrees Celsius) is acceptable." In practice, it is more common for prefixes to be used with the kelvin when it is desirable to denote extremely large or small absolute temperatures or temperature differences. Thus, temperatures of star interiors may be given with the unit MK (megakelvin), and molecular cooling may be given with the unit mK (millikelvin).[citation needed]
In use, the joule and kilojoule are common, with larger multiples seen in limited contexts. In addition, the kilowatt-hour, a composite unit formed from the kilowatt and hour, is often used for electrical energy; other multiples can be formed by modifying the prefix of watt (e.g. terawatt-hour).[citation needed]
Several definitions exist for the non-SI unit calorie. Distinguished are gram calories and kilogram calories. One kilogram calorie, which equals one thousand gram calories, often appears capitalized and without a prefix (i.e. Cal) when referring to "dietary calories" in food.[18] It is common to apply metric prefixes to the gram calorie, but not to the kilogram calorie: thus, 1 kcal = 1000 cal = 1 Cal.
Metric prefixes are widely used outside the metric SI system. Common examples include the megabyte and the decibel. Metric prefixes rarely appear with imperial or US units except in some special cases (e.g., microinch, kilofoot, kilopound). They are also used with other specialised units used in particular fields (e.g., megaelectronvolt, gigaparsec, millibarn, kilodalton). In astronomy, geology, and palaeontology, the year, with symbol 'a' (from the Latin annus), is commonly used with metric prefixes: ka, Ma, and Ga.[19]
Official policies about the use of SI prefixes with non-SI units vary slightly between the International Bureau of Weights and Measures (BIPM) and the American National Institute of Standards and Technology (NIST). For instance, the NIST advises that "to avoid confusion, prefix symbols (and prefix names) are not used with the time-related unit symbols (names) min (minute), h (hour), d (day); nor with the angle-related symbols (names) ° (degree), ′ (minute), and ″ (second)",[17] whereas the BIPM adds information about the use of prefixes with the symbol as for arcsecond when they state: "However astronomers use milliarcsecond, which they denote mas, and microarcsecond, μas, which they use as units for measuring very small angles."[20]
Some of the prefixes formerly used in the metric system have fallen into disuse and were not adopted into the SI.[21][22][23] The decimal prefix for ten thousand, myria- (sometimes spelt myrio-), and the early binary prefixes double- (2×) and demi- (1/2×) were parts of the original metric system adopted by France in 1795,[24][d] but were not retained when the SI prefixes were internationally adopted by the 11th CGPM conference in 1960.
Other metric prefixes used historically include hebdo- (10^7) and micri- (10^−14).
Double prefixes have been used in the past, such as micromillimetres or millimicrons (now nanometres), micromicrofarads (μμF; now picofarads, pF), kilomegatonnes (now gigatonnes), hectokilometres (now 100 kilometres) and the derived adjective hectokilometric (typically used for qualifying fuel-consumption measures).[25] These are not compatible with the SI.
Other obsolete double prefixes included "decimilli-" (10^−4), which was contracted to "dimi-"[26] and standardised in France up to 1961.
There are no more letters of the Latin alphabet available for new prefixes (all the unused letters are already used for units). As such, Richard J. C. Brown (who proposed the prefixes adopted for 10^±27 and 10^±30) has proposed a reintroduction of compound prefixes (e.g. kiloquetta- for 10^33) if a driver for prefixes at such scales ever materialises, with the restriction that the last prefix must always be quetta- or quecto-. This usage has not been approved by the BIPM.[27][28]
In written English, the symbol K is often used informally to indicate a multiple of thousand in many contexts. For example, one may talk of a 40K salary (40 000), or call the Year 2000 problem the Y2K problem. In these cases, an uppercase K is often used with an implied unit (although it could then be confused with the symbol for the kelvin temperature unit if the context is unclear). This informal postfix is read or spoken as "thousand", "grand", or just "k".
The financial and general news media mostly use m or M, b or B, and t or T as abbreviations for million, billion (10^9) and trillion (10^12), respectively, for large quantities, typically currency[29] and population.[30]
The medical and automotive fields in the United States use the abbreviations cc or ccm for cubic centimetres. One cubic centimetre is equal to one millilitre.
For nearly a century,[clarification needed] engineers used the abbreviation MCM to designate a "thousand circular mils" in specifying the cross-sectional area of large electrical cables. Since the mid-1990s, kcmil has been adopted as the official designation of a thousand circular mils, but the designation MCM still remains in wide use. A similar system is used in natural gas sales in the United States: m (or M) for thousands and mm (or MM) for millions (thousand thousands) of British thermal units or therms, and in the oil industry,[31] where MMbbl is the symbol for "millions of barrels". These usages of the capital letter M for "thousand" are from Roman numerals, in which M means 1000.[32][31]
https://en.wikipedia.org/wiki/Metric_prefix
Standard form is a way of expressing numbers that are too large or too small to be conveniently written in decimal form, since to do so would require writing out an inconveniently long string of digits. It may be referred to as scientific form or standard index form, or scientific notation in the United States. This base-ten notation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators, it is usually known as "SCI" display mode.
In scientific notation, nonzero numbers are written in the form
or m times ten raised to the power of n, where n is an integer, and the coefficient m is a nonzero real number (usually between 1 and 10 in absolute value, and nearly always written as a terminating decimal). The integer n is called the exponent and the real number m is called the significand or mantissa.[1] The term "mantissa" can be ambiguous where logarithms are involved, because it is also the traditional name of the fractional part of the common logarithm. If the number is negative then a minus sign precedes m, as in ordinary decimal notation. In normalized notation, the exponent is chosen so that the absolute value (modulus) of the significand m is at least 1 but less than 10.
Decimal floating point is a computer arithmetic system closely related to scientific notation.
For performing calculations with a slide rule, standard form expression is required. Thus, the use of scientific notation increased as engineers and educators used that tool. See Slide rule § History.
Any real number can be written in the form m × 10^n in many ways: for example, 350 can be written as 3.5×10^2 or 35×10^1 or 350×10^0.
In normalized scientific notation (called "standard form" in the United Kingdom), the exponent n is chosen so that the absolute value of m remains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as 3.5×10^2. This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number of orders of magnitude separating the numbers. It is also the form that is required when using tables of common logarithms. In normalized notation, the exponent n is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5×10^−1). The 10 and exponent are often omitted when the exponent is 0. For a series of numbers that are to be added or subtracted (or otherwise compared), it can be convenient to use the same value of m for all elements of the series.
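Normalization amounts to choosing the exponent n = ⌊log₁₀|x|⌋ and dividing it out. A minimal Python sketch (our own illustration, valid for nonzero inputs):

```python
from math import floor, log10

def normalize(x: float) -> tuple[float, int]:
    """Return (m, n) with x = m * 10**n and 1 <= |m| < 10 (x must be nonzero)."""
    n = floor(log10(abs(x)))   # order of magnitude of x
    m = x / 10 ** n            # scale the significand into [1, 10)
    return m, n
```

For example, normalize(350) gives (3.5, 2), matching the 3.5×10^2 form above.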
Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized or differently normalized form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation – although the latter term is more general and also applies when m is not restricted to the range 1 to 10 (as in engineering notation for instance) and to bases other than 10 (for example, 3.15×2^20).
Engineering notation (often named "ENG" on scientific calculators) differs from normalized scientific notation in that the exponent n is restricted to multiples of 3. Consequently, the absolute value of m is in the range 1 ≤ |m| < 1000, rather than 1 ≤ |m| < 10. Though similar in concept, engineering notation is rarely called scientific notation. Engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5×10^−9 m can be read as "twelve-point-five nanometres" and written as 12.5 nm, while its scientific notation equivalent 1.25×10^−8 m would likely be read out as "one-point-two-five times ten-to-the-negative-eight metres".
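The only change from the normalization sketch is snapping the exponent down to a multiple of 3, which is what puts the significand in [1, 1000). A hedged Python sketch for positive inputs:

```python
from math import floor, log10

def engineering(x: float) -> str:
    """Format a positive number in engineering notation (exponent a multiple of 3)."""
    n = floor(log10(x))
    n -= n % 3                 # snap the exponent down to a multiple of 3
    m = x / 10 ** n            # significand now falls in [1, 1000)
    return f"{m:g}e{n:+d}"
```

So 1234 formats as "1.234e+3", and 12.5×10^−9 keeps the nanometre-friendly exponent −9.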
Calculators and computer programs typically present very large or small numbers using scientific notation, and some can be configured to uniformly present all numbers that way. Because superscript exponents like 10^7 can be inconvenient to display or type, the letter "E" or "e" (for "exponent") is often used to represent "times ten raised to the power of", so that the notation mEn for a decimal significand m and integer exponent n means the same as m × 10^n. For example, 6.022×10^23 is written as 6.022E23 or 6.022e23, and 1.6×10^−35 is written as 1.6E-35 or 1.6e-35. While common in computer output, this abbreviated version of scientific notation is discouraged for published documents by some style guides.[2][3]
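Python, one of the languages that inherited this convention, accepts and emits E notation directly:

```python
x = float("6.022e23")     # parse E notation from a string
assert x == 6.022e23      # numeric literals use the same syntax
print(f"{x:.3e}")         # format back with 3 decimals: prints 6.022e+23
```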
Most popular programming languages – includingFortran,C/C++,Python, andJavaScript– use this "E" notation, which comes from Fortran and was present in the first version released for theIBM 704in 1956.[4]The E notation was already used by the developers ofSHARE Operating System(SOS) for theIBM 709in 1958.[5]Later versions of Fortran (at least sinceFORTRAN IVas of 1961) also use "D" to signifydouble precisionnumbers in scientific notation,[6]and newer Fortran compilers use "Q" to signifyquadruple precision.[7]TheMATLABprogramming language supports the use of either "E" or "D".
The ALGOL 60 (1960) programming language uses a subscript-ten "₁₀" character instead of the letter "E", for example: 6.022₁₀23.[8][9] This presented a challenge for computer systems which did not provide such a character, so ALGOL W (1966) replaced the symbol with a single quote, e.g. 6.022'+23,[10] and some Soviet ALGOL variants allowed the use of the Cyrillic letter "ю", e.g. 6.022ю+23[citation needed]. Subsequently, the ALGOL 68 programming language provided a choice of characters: E, e, \, ⊥, or ₁₀.[11] The ALGOL "₁₀" character was included in the Soviet GOST 10859 text encoding (1964), and was added to Unicode 5.2 (2009) as U+23E8 ⏨ DECIMAL EXPONENT SYMBOL.[12]
Some programming languages use other symbols. For instance, Simula uses & (or && for long), as in 6.022&23.[13] Mathematica supports the shorthand notation 6.022*^23 (reserving the letter E for the mathematical constant e).
The first pocket calculators supporting scientific notation appeared in 1972.[14] To enter numbers in scientific notation, calculators include a button labeled "EXP" or "×10ˣ", among other variants. The displays of pocket calculators of the 1970s did not show an explicit symbol between significand and exponent; instead, one or more digits were left blank (e.g. 6.022 23, as seen in the HP-25), or a pair of smaller and slightly raised digits were reserved for the exponent (e.g. 6.022²³, as seen in the Commodore PR100). In 1976, Hewlett-Packard calculator user Jim Davidson coined the term decapower for the scientific-notation exponent to distinguish it from "normal" exponents, and suggested the letter "D" as a separator between significand and exponent in typewritten numbers (for example, 6.022D23); these gained some currency in the programmable calculator user community.[15] The letters "E" or "D" were used as a scientific-notation separator by Sharp pocket computers released between 1987 and 1995, with "E" used for 10-digit numbers and "D" used for 20-digit double-precision numbers.[16] The Texas Instruments TI-83 and TI-84 series of calculators (1996–present) use a small capital E for the separator.[17]
In 1962, Ronald O. Whitaker of Rowco Engineering Co. proposed a power-of-ten system nomenclature in which the exponent would be circled; e.g. 6.022 × 10³ would be written as "6.022③".[18]
A significant figure is a digit in a number that adds to its precision. This includes all nonzero digits, zeroes between significant digits, and zeroes indicated to be significant. Leading and trailing zeroes are not significant digits, because they exist only to show the scale of the number. Unfortunately, this leads to ambiguity. The number 1230400 is usually read to have five significant figures: 1, 2, 3, 0, and 4, with the final two zeroes serving only as placeholders and adding no precision. The same number, however, would be used if the last two digits were also measured precisely and found to equal 0 – seven significant figures.
When a number is converted into normalized scientific notation, it is scaled down to a number between 1 and 10. All of the significant digits remain, but the placeholding zeroes are no longer required. Thus 1230400 would become 1.2304 × 10⁶ if it had five significant digits. If the number were known to six or seven significant figures, it would be shown as 1.23040 × 10⁶ or 1.230400 × 10⁶. Thus, an additional advantage of scientific notation is that the number of significant figures is unambiguous.
It is customary in scientific measurement to record all the definitely known digits from the measurement and to estimate at least one additional digit if there is any information at all available on its value. The resulting number contains more information than it would without the extra digit, which may be considered a significant digit because it conveys some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together).
Additional information about precision can be conveyed through additional notation. It is often useful to know how exact the final digit or digits are. For instance, the accepted value of the mass of the proton can properly be expressed as 1.67262192369(51) × 10⁻²⁷ kg, which is shorthand for (1.67262192369 ± 0.00000000051) × 10⁻²⁷ kg. However, it is still unclear whether the error (5.1 × 10⁻³⁷ in this case) is the maximum possible error, standard error, or some other confidence interval.
In normalized scientific notation, in E notation, and in engineering notation, the space (which in typesetting may be represented by a normal-width space or a thin space) that is allowed only before and after "×" or in front of "E" is sometimes omitted, though it is less common to do so before the alphabetical character.[19]
Converting a number in these cases means either to convert the number into scientific notation form, to convert it back into decimal form, or to change the exponent part of the expression. None of these alters the actual number, only how it is expressed.
First, move the decimal separator point a sufficient number of places, n, to put the number's value within a desired range – between 1 and 10 for normalized notation. If the decimal was moved to the left, append × 10ⁿ; to the right, × 10⁻ⁿ. To represent the number 1,230,400 in normalized scientific notation, the decimal separator would be moved 6 digits to the left and × 10⁶ appended, resulting in 1.2304 × 10⁶. The number −0.0040321 would have its decimal separator shifted 3 digits to the right instead of the left, yielding −4.0321 × 10⁻³.
To convert a number from scientific notation to decimal notation, first remove the × 10ⁿ on the end, then shift the decimal separator n digits to the right (positive n) or left (negative n). The number 1.2304 × 10⁶ would have its decimal separator shifted 6 digits to the right and become 1,230,400, while −4.0321 × 10⁻³ would have its decimal separator moved 3 digits to the left and become −0.0040321.
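The normalization procedure just described can be sketched as follows, with repeated division and multiplication by ten standing in for moving the decimal separator (the function name is illustrative; binary floating point may perturb the significand in its last digit for some inputs):

```python
def normalize(x: float) -> tuple[float, int]:
    """Return (m, n) with 1 <= |m| < 10 and x ~= m * 10**n, for nonzero x."""
    m, n = abs(x), 0
    while m >= 10:      # decimal separator moved left: exponent increases
        m, n = m / 10, n + 1
    while m < 1:        # decimal separator moved right: exponent decreases
        m, n = m * 10, n - 1
    return (m if x > 0 else -m, n)

print(normalize(150))    # (1.5, 2)
print(normalize(-0.25))  # (-2.5, -1)
```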
Conversion between different scientific notation representations of the same number is achieved by multiplying or dividing the significand by a power of ten while making the opposite adjustment – addition or subtraction – to the exponent. The decimal separator in the significand is shifted x places to the left (or right) and x is added to (or subtracted from) the exponent, as shown below.
Given two numbers in scientific notation, x₀ = m₀ × 10^(n₀) and x₁ = m₁ × 10^(n₁),
multiplication and division are performed using the rules for operation with exponents: x₀x₁ = m₀m₁ × 10^(n₀+n₁) and x₀/x₁ = (m₀/m₁) × 10^(n₀−n₁).
Some examples are: 5.67 × 10⁻⁵ × 2.34 × 10² ≈ 13.3 × 10^(−5+2) = 13.3 × 10⁻³ = 1.33 × 10⁻², and (2.34 × 10²)/(5.67 × 10⁻⁵) ≈ 0.413 × 10^(2−(−5)) = 0.413 × 10⁷ = 4.13 × 10⁶.
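A small sketch of the multiplication rule, including the renormalization step used in the examples above (names are illustrative):

```python
def sci_mul(m0: float, n0: int, m1: float, n1: int) -> tuple[float, int]:
    """(m0 * 10**n0) * (m1 * 10**n1), renormalized so that 1 <= |m| < 10."""
    m, n = m0 * m1, n0 + n1          # multiply significands, add exponents
    while abs(m) >= 10:              # renormalize the significand
        m, n = m / 10, n + 1
    while 0 < abs(m) < 1:
        m, n = m * 10, n - 1
    return m, n

print(sci_mul(5.0, 0, 4.0, 0))   # (2.0, 1): 5 * 4 = 20 = 2.0 * 10**1
```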
Addition and subtraction require the numbers to be represented using the same exponential part, so that the significands can be simply added or subtracted:
First, rewrite one of the numbers so that both share the same exponent n₀; next, add or subtract the significands: x₀ ± x₁ = (m₀ ± m₁) × 10^(n₀)
An example: 2.34 × 10⁻⁵ + 5.67 × 10⁻⁶ = 2.34 × 10⁻⁵ + 0.567 × 10⁻⁵ = 2.907 × 10⁻⁵
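The alignment step can be sketched as follows: the smaller-exponent significand is rescaled before the significands are added, and renormalizing the result afterwards works exactly as for multiplication (names are illustrative):

```python
def sci_add(m0: float, n0: int, m1: float, n1: int) -> tuple[float, int]:
    """(m0 * 10**n0) + (m1 * 10**n1), reported against the larger exponent."""
    if n0 < n1:                    # make n0 the larger of the two exponents
        m0, n0, m1, n1 = m1, n1, m0, n0
    m1 /= 10 ** (n0 - n1)          # rewrite m1 * 10**n1 with exponent n0
    return m0 + m1, n0             # significands can now be added directly

print(sci_add(2.0, 3, 5.0, 2))   # (2.5, 3): 2000 + 500 = 2.5 * 10**3
```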
While base ten is normally used for scientific notation, powers of other bases can be used too,[25] with base 2 being the next most commonly used.
For example, in base-2 scientific notation, the number 1001b in binary (= 9d) is written as 1.001b × 2d^11b or 1.001b × 10b^11b using binary numbers (or, shorter, 1.001 × 10¹¹ if the binary context is obvious).[citation needed] In E notation, this is written as 1.001bE11b (or, shorter, 1.001E11), with the letter "E" now standing for "times two (10b) to the power". In order to better distinguish this base-2 exponent from a base-10 exponent, a base-2 exponent is sometimes also indicated by using the letter "B" instead of "E",[26] a shorthand notation originally proposed by Bruce Alan Martin of Brookhaven National Laboratory in 1968,[27] as in 1.001bB11b (or, shorter, 1.001B11). For comparison, the same number in decimal representation is 1.125 × 2³, or 1.125B3 (still using decimal representation). Some calculators use a mixed representation for binary floating-point numbers, in which the exponent is displayed as a decimal number even in binary mode, so the above becomes 1.001b × 10b^3d or, shorter, 1.001B3.[26]
This is closely related to the base-2 floating-point representation commonly used in computer arithmetic, and to the usage of IEC binary prefixes (e.g. 1B10 for 1 × 2¹⁰ (kibi), 1B20 for 1 × 2²⁰ (mebi), 1B30 for 1 × 2³⁰ (gibi), 1B40 for 1 × 2⁴⁰ (tebi)).
Similar to "B" (or "b"[28]), the letters "H"[26] (or "h"[28]) and "O"[26] (or "o",[28] or "C"[26]) are sometimes also used to indicate times 16 or 8 to the power, as in 1.25 = 1.40h × 10h^0h = 1.40H0 = 1.40h0, or 98000 = 2.7732o × 10o^5o = 2.7732o5 = 2.7732C5.[26]
Another similar convention to denote base-2 exponents is the use of the letter "P" (or "p", for "power"). In this notation the significand is always meant to be hexadecimal, whereas the exponent is always meant to be decimal.[29] This notation can be produced by implementations of the printf family of functions following the C99 specification and the (Single Unix Specification) IEEE Std 1003.1 POSIX standard, when using the %a or %A conversion specifiers.[29][30][31] Starting with C++11, C++ I/O functions could parse and print the P notation as well; the notation has been fully adopted by the language standard since C++17.[32] Apple's Swift supports it as well.[33] It is also required by the IEEE 754-2008 binary floating-point standard. Example: 1.3DEp42 represents 1.3DEh × 2⁴².
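Python exposes the same hexadecimal "P" notation through `float.hex()` and `float.fromhex()`:

```python
# Round-tripping a double through the C99/IEEE "P" notation.
x = float.fromhex("0x1.3DEp42")   # 1.3DE (hex) times 2**42
print(x.hex())                    # prints the normalized P-notation form
print((1.5).hex())                # 0x1.8000000000000p+0
print(float.fromhex("0x1.8p+1"))  # 3.0
```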
Engineering notationcan be viewed as a base-1000 scientific notation.
Sayre, David, ed. (1956-10-15). The FORTRAN Automatic Coding System for the IBM 704 EDPM: Programmer's Reference Manual (PDF). New York: Applied Science Division and Programming Research Department, International Business Machines Corporation. pp. 9, 27. Retrieved 2022-07-04. (2+51+1 pages)
"6. Extensions: 6.1 Extensions implemented in GNU Fortran: 6.1.8 Q exponent-letter".The GNU Fortran Compiler. 2014-06-12. Retrieved2022-12-21.
"The Unicode Standard"(v. 7.0.0 ed.). Retrieved2018-03-23.
Vanderburgh, Richard C., ed. (November 1976). "Decapower" (PDF). 52-Notes – Newsletter of the SR-52 Users Club. 1 (6). Dayton, OH: 1. Retrieved 2017-05-28. "Decapower – In the January 1976 issue of 65-Notes (V3N1p4) Jim Davidson (HP-65 Users Club member #547) suggested the term "decapower" as a descriptor for the power-of-ten multiplier used in scientific notation displays. I'm going to begin using it in place of "exponent" which is technically incorrect, and the letter D to separate the "mantissa" from the decapower for typewritten numbers, as Jim also suggests. For example, 123−45 [sic] which is displayed in scientific notation as 1.23 -43 will now be written 1.23D-43. Perhaps, as this notation gets more and more usage, the calculator manufacturers will change their keyboard abbreviations. HP's EEX and TI's EE could be changed to ED (for enter decapower)." (NB. The term decapower was frequently used in subsequent issues of this newsletter up to at least 1978.)
電言板6 PC-U6000 PROGRAM LIBRARY[Telephone board 6 PC-U6000 program library] (in Japanese). Vol. 6. University Co-op. 1993.
"TI-83 Programmer's Guide"(PDF). Retrieved2010-03-09.
"INTOUCH 4GL a Guide to the INTOUCH Language". Archived fromthe originalon 2015-05-03.
https://en.wikipedia.org/wiki/Scientific_notation
In computers, a serial decimal numeric representation is one in which ten bits are reserved for each digit, with a different bit turned on depending on which of the ten possible digits is intended. ENIAC and CALDIC used this representation.[1]
https://en.wikipedia.org/wiki/Serial_decimal
A computer number format is the internal representation of numeric values in digital device hardware and software, such as in programmable computers and calculators.[1] Numerical values are stored as groupings of bits, such as bytes and words. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer;[citation needed] the encoding used by the computer's instruction set generally requires conversion for external use, such as for printing and display. Different types of processors may have different internal representations of numerical values, and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory.
Computers represent data in sets of binary digits. The representation is composed of bits, which in turn are grouped into larger sets such as bytes.
A bit is a binary digit that represents one of two states. The concept of a bit can be understood as a value of either 1 or 0, on or off, yes or no, true or false, or encoded by a switch or toggle of some kind.
While a single bit, on its own, is able to represent only two values, a string of bits may be used to represent larger values. For example, a string of three bits can represent up to eight distinct values, as illustrated in Table 1.
As the number of bits composing a string increases, the number of possible 0 and 1 combinations increases exponentially. A single bit allows only two value combinations, two bits combined can make four separate values, three bits can make eight, and so on, increasing with the formula 2ⁿ. The number of possible combinations doubles with each binary digit added, as illustrated in Table 2.
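The doubling can be checked by enumerating every bit string of a given length:

```python
# Each added bit doubles the number of distinct strings: 2**n in total.
from itertools import product

for n in (1, 2, 3, 4):
    patterns = ["".join(p) for p in product("01", repeat=n)]
    assert len(patterns) == 2 ** n

print(["".join(p) for p in product("01", repeat=3)])
# ['000', '001', '010', '011', '100', '101', '110', '111']
```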
Groupings with a specific number of bits are used to represent varying things and have specific names.
A byte is a bit string containing the number of bits needed to represent a character. On most modern computers, this is an eight-bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte.[2] In many computer architectures, the byte is the smallest addressable unit – the atom of addressability, say. For example, even though 64-bit processors may address memory sixty-four bits at a time, they may still split that memory into eight-bit pieces. This is called byte-addressable memory. Historically, many CPUs read data in some multiple of eight bits.[3] Because the byte size of eight bits is so common, but the definition is not standardized, the term octet is sometimes used to explicitly describe an eight-bit sequence.
A nibble (sometimes nybble) is a number composed of four bits.[4] Being a half-byte, the nibble was named as a play on words. A person may need several nibbles for one bite from something; similarly, a nybble is a part of a byte. Because four bits allow for sixteen values, a nibble is sometimes known as a hexadecimal digit.[5]
Octal and hexadecimal encoding are convenient ways to represent binary numbers, as used by computers. Computer engineers often need to write out binary quantities, but in practice writing out a binary number such as 1001001101010001 is tedious and prone to errors. Therefore, binary quantities are written in a base-8 ("octal") or, much more commonly, a base-16 ("hexadecimal", or "hex") number format. In the decimal system, there are 10 digits, 0 through 9, which combine to form numbers. In an octal system, there are only 8 digits, 0 through 7. That is, the value of an octal "10" is the same as a decimal "8", an octal "20" is a decimal "16", and so on. In a hexadecimal system, there are 16 digits, 0 through 9 followed, by convention, by A through F. That is, a hexadecimal "10" is the same as a decimal "16" and a hexadecimal "20" is the same as a decimal "32". An example and comparison of numbers in different bases is described in the chart below.
When typing numbers, formatting characters are used to describe the number system, for example 0000_0000B or 0b0000_0000 for binary and 0F8H or 0xf8 for hexadecimal numbers.
Each of these number systems is a positional system, but while decimal weights are powers of 10, the octal weights are powers of 8 and the hexadecimal weights are powers of 16. To convert from hexadecimal or octal to decimal, for each digit one multiplies the value of the digit by the value of its position and then adds the results. For example:
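The positional-weight computation can be sketched as follows, taking hexadecimal 2A and octal 52 (both denote decimal 42):

```python
# Hex "2A": digit 2 weighted by 16**1, plus digit A (= 10) weighted by 16**0.
value = 2 * 16 ** 1 + 10 * 16 ** 0
print(value)           # 42
print(int("2A", 16))   # 42 -- Python's built-in conversion agrees
print(int("52", 8))    # 42 -- octal 52: 5 * 8**1 + 2 * 8**0
```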
Fixed-point formatting can be useful to represent fractions in binary.
The number of bits needed for the precision and range desired must be chosen to store the fractional and integer parts of a number. For instance, using a 32-bit format, 16 bits may be used for the integer and 16 for the fraction.
The eight's bit is followed by the four's bit, then the two's bit, then the one's bit. The fractional bits continue the pattern set by the integer bits. The next bit is the half's bit, then the quarter's bit, then the eighth's bit, and so on. For example:
This form of encoding cannot represent some values exactly in binary. For example, for the fraction 1/5 (0.2 in decimal), the closest approximations would be as follows:
Even if more digits are used, an exact representation is impossible. The number 1/3, written in decimal as 0.333333333..., continues indefinitely. If prematurely terminated, the value would not represent 1/3 precisely.
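A short sketch of a binary fixed-point format makes the 1/5 approximation concrete. Here we assume a hypothetical 16.16 layout, in which a value is stored as the nearest integer to x · 2¹⁶:

```python
SCALE = 1 << 16                 # 16 fractional bits
stored = round(0.2 * SCALE)     # nearest representable fixed-point value
print(stored)                   # 13107
print(stored / SCALE)           # 0.1999969482421875, not exactly 0.2
```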
While both unsigned and signed integers are used in digital systems, even a 32-bit integer is not enough to handle all the range of numbers a calculator can handle, and that is not even including fractions. To approximate the greater range and precision of real numbers, we have to abandon signed integers and fixed-point numbers and go to a "floating-point" format.
In the decimal system, we are familiar with floating-point numbers of the form (scientific notation): 1.1030402 × 10⁵
or, more compactly: 1.1030402E5
which means "1.1030402 times 1 followed by 5 zeroes". We have a certain numeric value (1.1030402) known as a "significand", multiplied by a power of 10 (E5, meaning 10⁵ or 100,000), known as an "exponent". If we have a negative exponent, that means the number is multiplied by a 1 that many places to the right of the decimal point. For example:
The advantage of this scheme is that by using the exponent we can get a much wider range of numbers, even if the number of digits in the significand, or the "numeric precision", is much smaller than the range. Similar binary floating-point formats can be defined for computers. There are a number of such schemes; the most popular has been defined by the Institute of Electrical and Electronics Engineers (IEEE). The IEEE 754-2008 standard specification defines a 64-bit floating-point format with:
With the bits stored in 8 bytes of memory:
where "S" denotes the sign bit, "x" denotes an exponent bit, and "m" denotes a significand bit. Once the bits here have been extracted, they are converted with the computation: value = (−1)^S × (1 + significand / 2⁵²) × 2^(exponent − 1023)
This scheme provides numbers valid out to about 15 decimal digits, with the following range of numbers:
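The field layout can be inspected directly; the sketch below unpacks −6.25 and rebuilds it from the extracted sign, biased exponent, and significand bits:

```python
import struct

bits = struct.unpack(">Q", struct.pack(">d", -6.25))[0]  # raw 64-bit pattern
sign     = bits >> 63                 # 1 bit
exponent = (bits >> 52) & 0x7FF       # 11 bits, stored with a bias of 1023
fraction = bits & ((1 << 52) - 1)     # 52 significand bits (implicit leading 1)
value = (-1) ** sign * (1 + fraction / 2 ** 52) * 2 ** (exponent - 1023)
print(sign, exponent - 1023, value)   # 1 2 -6.25
```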
The specification also defines several special values that are not defined numbers, and are known as NaNs, for "Not A Number". These are used by programs to designate invalid operations and the like.
Some programs also use 32-bit floating-point numbers. The most common scheme uses a 23-bit significand with a sign bit, plus an 8-bit exponent in "excess-127" format, giving seven valid decimal digits.
The bits are converted to a numeric value with the computation: value = (−1)^S × (1 + significand / 2²³) × 2^(exponent − 127)
leading to the following range of numbers:
Such floating-point numbers are known as "reals" or "floats" in general, but with a number of variations:
A 32-bit float value is sometimes called a "real32" or a "single", meaning "single-precision floating-point value".
A 64-bit float is sometimes called a "real64" or a "double", meaning "double-precision floating-point value".
The relation between numbers and bit patterns is chosen for convenience in computer manipulation; eight bytes stored in computer memory may represent a 64-bit real, two 32-bit reals, or four signed or unsigned integers, or some other kind of data that fits into eight bytes. The only difference is how the computer interprets them. If the computer stored four unsigned integers and then read them back from memory as a 64-bit real, it almost always would be a perfectly valid real number, though it would be junk data.
Only a finite range of real numbers can be represented with a given number of bits. Arithmetic operations can overflow or underflow, producing a value too large or too small to be represented.
The representation has a limited precision. For example, only 15 decimal digits can be represented with a 64-bit real. If a very small floating-point number is added to a large one, the result is just the large one. The small number was too small to even show up in 15 or 16 digits of resolution, and the computer effectively discards it. Analyzing the effect of limited precision is a well-studied problem. Estimates of the magnitude of round-off errors and methods to limit their effect on large calculations are part of any large computation project. The precision limit is different from the range limit, as it affects the significand, not the exponent.
The significand is a binary fraction that doesn't necessarily perfectly match a decimal fraction. In many cases a sum of reciprocal powers of 2 does not match a specific decimal fraction, and the results of computations will be slightly off. For example, the decimal fraction "0.1" is equivalent to an infinitely repeating binary fraction: 0.000110011 ...[6]
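Both limits are easy to demonstrate:

```python
# Absorption: an addend below the ~15-16 digit precision of a double vanishes.
print(1.0e16 + 1.0 == 1.0e16)   # True -- the 1.0 is lost entirely

# Representation: 0.1 has no finite binary expansion, so sums drift slightly.
print(0.1 + 0.2 == 0.3)         # False
print(f"{0.1:.20f}")            # 0.10000000000000000555
```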
Programming in assembly language requires the programmer to keep track of the representation of numbers. Where the processor does not support a required mathematical operation, the programmer must work out a suitable algorithm and instruction sequence to carry out the operation; on some microprocessors, even integer multiplication must be done in software.
High-level programming languages such as Ruby and Python offer an abstract number that may be an expanded type such as rational, bignum, or complex. Mathematical operations are carried out by library routines provided by the implementation of the language. A given mathematical symbol in the source code, by operator overloading, will invoke different object code appropriate to the representation of the numerical type; mathematical operations on any number – whether signed, unsigned, rational, floating-point, fixed-point, integral, or complex – are written exactly the same way.
Some languages, such as REXX and Java, provide decimal floating-point operations, which produce rounding errors of a different form.
The initial version of this article was based on a public domain article from Greg Goebel's Vectorsite.
https://en.wikipedia.org/wiki/Computer_number_format
Octal games are a subclass of heap games that involve removing tokens (game pieces or stones) from heaps of tokens.
They have been studied in combinatorial game theory as a generalization of Nim, Kayles, and similar games.[1][2]
Octal games are impartial, meaning that every move available to one player is also available to the other player.
They differ from each other in the numbers of tokens that may be removed in a single move, and (depending on this number) whether it is allowed to remove an entire heap, reduce the size of a heap, or split a heap into two heaps. These rule variations may be described compactly by a coding system using octal numerals.
An octal game is played with tokens divided into heaps. Two players take turns moving until no moves are possible. Every move consists of selecting just one of the heaps, and either
Heaps other than the selected heap remain unchanged. The last player to move wins in normal play. The game may also be played in misère play, in which the last player to move loses.
Games played with heaps in this fashion, in which the allowed moves for each heap are determined by the original heap's size, are called Taking and Breaking games in the literature.[1] Octal games are a subset of the taking and breaking games in which the allowed moves are determined by the number of tokens removed from the heap.
The octal code for a game is specified as
where the octal digit dₙ specifies whether the player is allowed to leave zero, one, or two heaps after removing n tokens from a heap. The digit dₙ is the sum of
Zero tokens are not counted as a heap. Thus the digit dₙ is odd if a heap of n tokens may be removed entirely, and even otherwise. The specification of one-heap results in dₙ applies to removing n tokens from a heap of more than n. The two-heap results in dₙ apply to removing n tokens from a heap of at least n + 2 and separating the remainder into two nonempty heaps.
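The three summands can be read off a digit with bit operations (a small illustrative helper):

```python
def decode_digit(d: int) -> dict[str, bool]:
    """Decode one octal digit d_n of a game code into its allowed outcomes."""
    return {
        "may remove a whole heap": bool(d & 1),   # leave zero heaps
        "may leave one heap":      bool(d & 2),
        "may split into two":      bool(d & 4),
    }

print(decode_digit(7))  # a Kayles digit: all three outcomes allowed
print(decode_digit(3))  # a Nim digit: remove the heap or leave one heap
```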
Octal games may allow splitting a heap into two parts without removing any tokens, by use of the digit 4 to the left of the decimal point. This is similar to the move in Grundy's game, which is to split a heap into two unequal parts. Standard octal game notation, however, does not have the power to express the constraint of unequal parts.
Octal games with only a finite number of non-zero digits are called finite octal games.
The most fundamental game in combinatorial game theory is Nim, in which any number of tokens may be removed from a heap, leaving zero or one heaps behind. The octal code for Nim is 0.333…, appearing in the published literature with an overbar over the repeating digit to signify the repeating part, as in a repeating decimal. It is important to realize, however, that the repeating part does not play the same role as in octal fractions: two game codes that are equal as octal fractions need not describe identical games.
The game Kayles is usually visualized as played with a row of n pins, but may be modeled by a heap of n counters. One is allowed to remove one or two tokens from a heap and arrange the remainder into zero, one, or two heaps. The octal code for Kayles is 0.77.
Dawson's Chess is a game arising from a chess puzzle posed by Thomas Rayner Dawson in Caissa's Wild Roses, 1938.[3] The puzzle was posed as involving opposed rows of pawns separated by a single rank. Although the puzzle is not posed as an impartial game, the assumption that captures are mandatory implies that a player's moving in any file results only in the removal of that file and its neighbors (if any) from further consideration, with the opposite player to move. Modeling this as a heap of n tokens, a player may remove an entire heap of one, two, or three tokens, may reduce any heap by two or three tokens, or may split a heap into two parts after removing three tokens. Dawson's Chess is thus represented by the octal code 0.137.
In the game 0.07, called Dawson's Kayles, a move is to remove exactly two tokens from a heap and to distribute the remainder into zero, one, or two heaps. Dawson's Kayles is named for its (non-obvious) similarity to Dawson's Chess, in that a Dawson's Kayles heap of n + 1 tokens acts exactly like a Dawson's Chess heap of n tokens. Dawson's Kayles is said to be a first cousin of Dawson's Chess.
Octal games like Nim, in which every move transforms a heap into zero or one heaps, are called quaternary games because the only digits that appear are 0, 1, 2, and 3. The octal notation may also be extended to include hexadecimal games, in which digits permit division of a heap into three parts. In fact, arbitrarily large bases are possible. The analysis of quaternary, octal, and hexadecimal games shows that these classes of games are markedly different from each other,[1] and the behavior of larger bases has not received as much scrutiny.
The Sprague–Grundy theorem implies that a heap of size n is equivalent to a nim heap of a given size, usually denoted G(n). The analysis of an octal game then consists in finding the sequence of nim-values for heaps of increasing size. This sequence G(0), G(1), G(2), ... is usually called the nim-sequence of the game.
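For a concrete game the nim-sequence can be computed directly from the Sprague–Grundy recursion: G(n) is the minimum excludant (mex) of the XORs of the nim-values of the heaps reachable in one move. The sketch below does this for Kayles (0.77), whose moves remove one or two tokens and may split the remainder:

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: smallest non-negative integer not in `values`."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(n: int) -> int:
    """Nim-value G(n) of a Kayles heap of n tokens (octal code 0.77)."""
    options = set()
    for take in (1, 2):                        # digits d1 = d2 = 7
        if n >= take:
            rest = n - take
            for left in range(rest // 2 + 1):  # leave 0, 1, or 2 heaps
                options.add(grundy(left) ^ grundy(rest - left))
    return mex(options)

print([grundy(n) for n in range(1, 9)])  # [1, 2, 3, 1, 4, 3, 2, 1]
```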
All finite octal games analyzed so far have had ultimately periodic nim-sequences, and whether all finite octal games are ultimately periodic is an open question. It is listed by Richard Guy as an important problem in the field of combinatorial games.[4]
A complete analysis of an octal game results in finding the period and preperiod of its nim-sequence. It is shown in Winning Ways for your Mathematical Plays that only a finite number of values of the nim-sequence are needed to prove that a finite octal game is periodic, which opened the door to computations with computers.
Octal games with at most 3 octal-digits have been analyzed through the years. There are 79 non-trivial octal games, among which 14 have been solved:
The other 63 remain unsolved, despite the computation of millions of nim-values by Achim Flammenkamp.[5]
https://en.wikipedia.org/wiki/Octal_games
Combinatorial game theory is a branch of mathematics and theoretical computer science that typically studies sequential games with perfect information. Study has been largely confined to two-player games that have a position that the players take turns changing in defined ways or moves to achieve a defined winning condition. Combinatorial game theory has not traditionally studied games of chance or those that use imperfect or incomplete information, favoring games that offer perfect information, in which the state of the game and the set of available moves is always known by both players.[1] However, as mathematical techniques advance, the types of game that can be mathematically analyzed expand, and thus the boundaries of the field are ever changing.[2] Scholars will generally define what they mean by a "game" at the beginning of a paper, and these definitions often vary, as they are specific to the game being analyzed and are not meant to represent the entire scope of the field.
Combinatorial games include well-known games such as chess, checkers, and Go, which are regarded as non-trivial, and tic-tac-toe, which is considered trivial in the sense of being "easy to solve". Some combinatorial games may also have an unbounded playing area, such as infinite chess. In combinatorial game theory, the moves in these and other games are represented as a game tree.
Combinatorial games also include one-player combinatorial puzzles such as Sudoku, and no-player automata, such as Conway's Game of Life (although in the strictest definition, "games" can be said to require more than one participant, hence the designations of "puzzle" and "automata"[3]).
Game theory in general includes games of chance, games of imperfect knowledge, and games in which players can move simultaneously, and these tend to represent real-life decision-making situations.
Combinatorial game theory has a different emphasis than "traditional" or "economic" game theory, which was initially developed to study games with simple combinatorial structure, but with elements of chance (although it also considers sequential moves; see extensive-form game). Essentially, combinatorial game theory has contributed new methods for analyzing game trees, for example using surreal numbers, which are a subclass of all two-player perfect-information games.[3] The type of games studied by combinatorial game theory is also of interest in artificial intelligence, particularly for automated planning and scheduling. In combinatorial game theory there has been less emphasis on refining practical search algorithms (such as the alpha–beta pruning heuristic included in most artificial intelligence textbooks), and more emphasis on descriptive theoretical results (such as measures of game complexity or proofs of optimal solution existence without necessarily specifying an algorithm, such as the strategy-stealing argument).
An important notion in combinatorial game theory is that of the solved game. For example, tic-tac-toe is considered a solved game, as it can be proven that any game will result in a draw if both players play optimally. Deriving similar results for games with rich combinatorial structures is difficult. For instance, in 2007 it was announced that checkers has been weakly solved – optimal play by both sides also leads to a draw – but this result was a computer-assisted proof.[4] Other real-world games are mostly too complicated to allow complete analysis today, although the theory has had some recent successes in analyzing Go endgames. Applying combinatorial game theory to a position attempts to determine the optimum sequence of moves for both players until the game ends, and by doing so discover the optimum move in any position. In practice, this process is tortuously difficult unless the game is very simple.
It can be helpful to distinguish between combinatorial "mathgames" of interest primarily to mathematicians and scientists to ponder and solve, and combinatorial "playgames" of interest to the general population as a form of entertainment and competition.[5] However, a number of games fall into both categories. Nim, for instance, is a playgame instrumental in the foundation of combinatorial game theory, and one of the first computerized games.[6] Tic-tac-toe is still used to teach basic principles of game AI design to computer science students.[7]
Combinatorial game theory arose in relation to the theory of impartial games, in which any play available to one player must be available to the other as well. One such game is Nim, which can be solved completely. Nim is an impartial game for two players, and subject to the normal play condition, which means that a player who cannot move loses. In the 1930s, the Sprague–Grundy theorem showed that all impartial games are equivalent to heaps in Nim, thus showing that major unifications are possible in games considered at a combinatorial level, in which detailed strategies matter, not just pay-offs.
In the 1960s, Elwyn R. Berlekamp, John H. Conway and Richard K. Guy jointly introduced the theory of a partisan game, in which the requirement that a play available to one player be available to both is relaxed. Their results were published in their book Winning Ways for your Mathematical Plays in 1982. However, the first work published on the subject was Conway's 1976 book On Numbers and Games, also known as ONAG, which introduced the concept of surreal numbers and the generalization to games. On Numbers and Games was also a fruit of the collaboration between Berlekamp, Conway, and Guy.
Combinatorial games are generally, by convention, put into a form where one player wins when the other has no moves remaining. It is easy to convert any finite game with only two possible results into an equivalent one where this convention applies. One of the most important concepts in the theory of combinatorial games is that of the sum of two games, which is a game where each player may choose to move either in one game or the other at any point in the game, and a player wins when his opponent has no move in either game. This way of combining games leads to a rich and powerful mathematical structure.
Conway stated in On Numbers and Games that the inspiration for the theory of partisan games was based on his observation of the play in Go endgames, which can often be decomposed into sums of simpler endgames isolated from each other in different parts of the board.
The introductory text Winning Ways introduced a large number of games, but the following were used as motivating examples for the introductory theory:
The classic game Go was influential on early combinatorial game theory, and Berlekamp and Wolfe subsequently developed an endgame and temperature theory for it (see references). Armed with this they were able to construct plausible Go endgame positions from which they could give expert Go players a choice of sides and then defeat them either way.
Another game studied in the context of combinatorial game theory is chess. In 1953 Alan Turing wrote of the game, "If one can explain quite unambiguously in English, with the aid of mathematical symbols if required, how a calculation is to be done, then it is always possible to programme any digital computer to do that calculation, provided the storage capacity is adequate."[8] In a 1950 paper, Claude Shannon estimated the lower bound of the game-tree complexity of chess to be 10^120, and today this is referred to as the Shannon number.[9] Chess remains unsolved, although extensive study, including work involving the use of supercomputers, has created chess endgame tablebases, which show the result of perfect play for all endgames with seven pieces or less. Infinite chess has an even greater combinatorial complexity than chess (unless only limited endgames, or composed positions with a small number of pieces, are being studied).
A game, in its simplest terms, is a list of possible "moves" that two players, called left and right, can make. The game position resulting from any move can be considered to be another game. This idea of viewing games in terms of their possible moves to other games leads to a recursive mathematical definition of games that is standard in combinatorial game theory. In this definition, each game has the notation {L|R}. L is the set of game positions that the left player can move to, and R is the set of game positions that the right player can move to; each position in L and R is defined as a game using the same notation.
Using Domineering as an example, label each of the sixteen boxes of the four-by-four board by A1 for the upper leftmost square, C2 for the third box from the left on the second row from the top, and so on. We use e.g. (D3, D4) to stand for the game position in which a vertical domino has been placed in the bottom right corner. Then, the initial position can be described in combinatorial game theory notation as {L|R}, where L lists the positions reachable by placing the first vertical domino and R those reachable by placing the first horizontal one.
In standard Cross-Cram play, the players alternate turns, but this alternation is handled implicitly by the definitions of combinatorial game theory rather than being encoded within the game states.
The above game describes a scenario in which there is only one move left for either player, and if either player makes that move, that player wins. (An irrelevant open square at C3 has been omitted from the diagram.) The {|} in each player's move list (corresponding to the single leftover square after the move) is called thezero game, and can actually be abbreviated 0. In the zero game, neither player has any valid moves; thus, the player whose turn it is when the zero game comes up automatically loses.
The type of game in the diagram above also has a simple name; it is called the star game, which can also be abbreviated ∗. In the star game, the only valid move leads to the zero game, which means that whoever's turn comes up during the star game automatically wins.
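The recursive {L|R} definition, together with the outcomes of the zero game and the star game, can be sketched directly in code. The following is a minimal illustration under the normal play convention (the representation and function names are this sketch's own, not from the original text): a game is a pair of option sets, and a player loses exactly when they have no move.

```python
# A game is a pair (L, R): tuples of games Left and Right can move to.
ZERO = ((), ())            # {|}: neither player has any move
STAR = ((ZERO,), (ZERO,))  # {0|0}: the star game

def left_wins_moving_first(game):
    """True if Left, moving first, can force a win (normal play)."""
    L, _ = game
    # Left wins if some option leaves Right, now to move, unable to win.
    return any(not right_wins_moving_first(g) for g in L)

def right_wins_moving_first(game):
    """True if Right, moving first, can force a win (normal play)."""
    _, R = game
    return any(not left_wins_moving_first(g) for g in R)

# The player to move in the zero game loses; in the star game, wins.
print(left_wins_moving_first(ZERO))   # False
print(left_wins_moving_first(STAR))   # True
```

Note that a player with an empty option set automatically loses, because `any()` over an empty sequence is `False`; this is exactly the normal play condition.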
An additional type of game, not found in Domineering, is a loopy game, in which a valid move of either left or right is a game that can then lead back to the first game. Checkers, for example, becomes loopy when one of the pieces promotes, as then it can cycle endlessly between two or more squares. A game that does not possess such moves is called loopfree.
There are also transfinite games, which have infinitely many positions—that is, left and right have lists of moves that are infinite rather than finite.
Numbers represent the number of free moves, or the move advantage of a particular player. By convention, positive numbers represent an advantage for Left, while negative numbers represent an advantage for Right. They are defined recursively with 0 being the base case.
The zero game is a loss for the first player.
The sum of number games behaves like the integers, for example 3 + −2 = 1.
Any game number is in the class of the surreal numbers.
Star, written as ∗ or {0|0}, is a first-player win since either player must (if first to move in the game) move to a zero game, and therefore win.
The game ∗ is neither positive nor negative; it and all other games in which the first player wins (regardless of which side the player is on) are said to be fuzzy or confused with 0; symbolically, we write ∗ || 0.
The game ∗n is notation for {0, ∗, …, ∗(n−1) | 0, ∗, …, ∗(n−1)}, which is also representative of normal-play Nim with a single heap of n objects. (Note that ∗0 = 0 and ∗1 = ∗.)
Up, written as ↑, is a position in combinatorial game theory.[10] In standard notation, ↑ = {0|∗}. Its negative is called down.
Up is strictly positive (↑ > 0), and down is strictly negative (↓ < 0), but both are infinitesimal. Up and down are defined in Winning Ways for your Mathematical Plays.
Consider the game {1|−1}. Both moves in this game are an advantage for the player who makes them, so the game is said to be "hot"; it is greater than any number less than −1, less than any number greater than 1, and fuzzy with any number in between. It is written as ±1. Note that a subclass of hot games, referred to as ±n for some numerical game n, is the class of switch games. Switch games can be added to numbers, or multiplied by positive numbers, in the expected fashion; for example, 4 ± 1 = {5|3}.
An impartial game is one where, at every position of the game, the same moves are available to both players. For instance, Nim is impartial, as any set of objects that can be removed by one player can be removed by the other. However, domineering is not impartial, because one player places horizontal dominoes and the other places vertical ones. Likewise Checkers is not impartial, since the players own different colored pieces. For any ordinal number, one can define an impartial game generalizing Nim in which, on each move, either player may replace the number with any smaller ordinal number; the games defined in this way are known as nimbers. The Sprague–Grundy theorem states that every impartial game under the normal play convention is equivalent to a nimber.
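For Nim itself, the Sprague–Grundy equivalence takes a particularly concrete form: under normal play, a Nim position is a loss for the player to move exactly when the bitwise XOR of its heap sizes, its nim-value, is zero. A quick sketch (illustrative, not from the original text):

```python
from functools import reduce
from operator import xor

def nim_value(heaps):
    """Nim-value (Grundy number) of a Nim position: XOR of heap sizes."""
    return reduce(xor, heaps, 0)

# 1 ^ 2 ^ 3 == 0, so the player to move in heaps (1, 2, 3) loses
# with best play; (1, 2, 4) has nim-value 7, a first-player win.
print(nim_value([1, 2, 3]))  # 0
print(nim_value([1, 2, 4]))  # 7
```

A single heap of n objects has nim-value n, matching the ∗n notation above.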
The "smallest" nimbers – the simplest and least under the usual ordering of the ordinals – are 0 and ∗.
|
https://en.wikipedia.org/wiki/Combinatorial_game_theory
|
Syllabic octal and split octal are two similar notations for 8-bit and 16-bit octal numbers, respectively, used in some historical contexts.
Syllabic octal is an 8-bit octal number representation that was used by English Electric in conjunction with their KDF9 machine in the mid-1960s.
Although the word 'byte' had been coined by the designers of the IBM 7030 Stretch for a group of eight bits, it was not yet well known, and English Electric used the word 'syllable' for what is now called a byte.
Machine code programming used an unusual form of octal, known locally as 'bastardized octal'. It represented 8 bits with three octal digits, but the first digit represented only the two most-significant bits (with values 0..3), whilst the other two represented the remaining groups of three bits (with values 0..7) each.[1] A more polite colloquial name was 'silly octal', derived from the official name, which was syllabic octal[2][3] (also known as 'slob-octal' or 'slob' notation[4][5]).
This 8-bit notation was similar to the later 16-bit split octal notation.
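The 2-3-3 bit grouping described above can be sketched as follows (an illustration, not from the original text). Note that for a single byte the result coincides with the ordinary zero-padded three-digit octal form, since the leading octal digit of an 8-bit value can only be 0..3; the distinction matters more for the 16-bit split notation below.

```python
def syllabic_octal(byte):
    """Render an 8-bit value as three octal digits: 2 + 3 + 3 bits."""
    assert 0 <= byte <= 0xFF
    return f"{byte >> 6}{(byte >> 3) & 7}{byte & 7}"

print(syllabic_octal(0xFF))  # 377
print(syllabic_octal(0xC9))  # 311 (0b11 001 001)
```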
Split octal is an unusual address notation used by Heathkit's PAM8 and portions of HDOS for the Heathkit H8 in the late 1970s (and sometimes up to the present).[6][7] It was also used by Digital Equipment Corporation (DEC).
Following this convention, 16-bit addresses were split into two 8-bit numbers printed separately in octal, that is, base 8 on 8-bit boundaries: the first memory location was "000.000" and the memory location after "000.377" was "001.000" (rather than "000.400").
In order to distinguish numbers in split-octal notation from ordinary 16-bit octal numbers, the two digit groups were often separated by a slash (/),[8]dot (.),[9]colon (:),[10]comma (,),[11]hyphen (-),[12]or hash mark (#).[13][14]
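Formatting a 16-bit address in split octal amounts to printing each byte as three octal digits (a sketch, not from the original text; the separator is a parameter since, as noted above, several were in use):

```python
def split_octal(addr, sep="."):
    """Format a 16-bit address as two 3-digit octal bytes."""
    assert 0 <= addr <= 0xFFFF
    return f"{addr >> 8:03o}{sep}{addr & 0xFF:03o}"

print(split_octal(0))    # 000.000
print(split_octal(255))  # 000.377
print(split_octal(256))  # 001.000 (straight octal would give 000400)
```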
Most minicomputers and microcomputers used either straight octal (where 377 is followed by 400) or hexadecimal. With the introduction of the optional HA8-6 Z80 processor replacement for the 8080 board, the front-panel keyboard got a new set of labels and hexadecimal notation was used instead of octal.[15]
Through tricky number alignment, the HP-16C and other Hewlett-Packard RPN calculators supporting base conversion can implicitly support numbers in split octal as well.[16]
|
https://en.wikipedia.org/wiki/Split_octal
|
A transponder (short for transmitter-responder[1] and sometimes abbreviated to XPDR,[2] XPNDR,[3] TPDR[4] or TP[5]) is an electronic device that produces a response when it receives a radio-frequency interrogation. Aircraft have transponders to assist in identifying them on air traffic control radar. Collision avoidance systems have been developed to use transponder transmissions as a means of detecting aircraft at risk of colliding with each other.[6][7]
Air traffic control (ATC) units use the term "squawk" when they are assigning an aircraft a transponder code, e.g., "Squawk 7421". Squawk thus can be said to mean "select transponder code" or "squawking xxxx" to mean "I have selected transponder code xxxx".[6]
The transponder receives interrogation from the secondary surveillance radar on 1030 MHz and replies on 1090 MHz.
Secondary surveillance radar (SSR) is referred to as "secondary" to distinguish it from the "primary radar" that works by reflecting a radio signal off the skin of the aircraft. Primary radar determines range and bearing to a target with reasonably high fidelity, but it cannot determine target elevation (altitude) reliably except at close range. SSR uses an active transponder (beacon) to transmit a response to an interrogation by a secondary radar. This response most often includes the aircraft's pressure altitude and a 4-digit octal identifier.[7][8]
A pilot may be requested to squawk a given code by an air traffic controller, via the radio, using a phrase such as "Cessna 123AB, squawk 0363". The pilot then selects the 0363 code on their transponder and the track on the air traffic controller's radar screen will become correctly associated with their identity.[6][7]
Because primary radar generally gives bearing and range position information, but lacks altitude information, mode C and mode S transponders also report pressure altitude. Mode C altitude information conventionally comes from the pilot's altimeter, and is transmitted using a modified Gray code, called a Gillham code. Where the pilot's altimeter does not contain a suitable altitude encoder, a blind encoder (which does not directly display altitude) is connected to the transponder. Around busy airspace there is often a regulatory requirement that all aircraft be equipped with altitude-reporting mode C or mode S transponders. In the United States, this is known as a Mode C veil. Mode S transponders are compatible with transmitting the mode C signal, and have the capability to report in 25-foot (7.5 m) increments; they receive information from a GPS receiver and also transmit location and speed. Without the pressure altitude reporting, the air traffic controller has no display of accurate altitude information, and must rely on the altitude reported by the pilot via radio.[6][7] Similarly, the traffic collision avoidance system (TCAS) installed on some aircraft needs the altitude information supplied by transponder signals.
All mode A, C, and S transponders include an "IDENT" switch which activates a special thirteenth bit on the mode A reply known as IDENT, short for "identify". When ground-based radar equipment[9]receives the IDENT bit, it results in the aircraft's blip "blossoming" on the radar scope. This is often used by the controller to locate the aircraft amongst others by requesting the ident function from the pilot, e.g., "Cessna 123AB, squawk 0363 and ident".[6][7]
Ident can also be used in case of a reported or suspected radio failure to determine if the failure is only one way and whether the pilot can still transmit or receive, but not both, e.g., "Cessna 123AB, if you read, squawk ident".[7]
Transponder codes are four-digit numbers transmitted by an aircraft transponder in response to a secondary surveillance radar interrogation signal to assist air traffic controllers with traffic separation. A discrete transponder code (often called a squawk code) is assigned by air traffic controllers to identify an aircraft uniquely in a flight information region (FIR). This allows easy identification of aircraft on radar.[6][7]
Codes are made of four octal digits; the dials on a transponder read from zero to seven, inclusive. Four octal digits can represent up to 4096 different codes, which is why such transponders are sometimes described as "4096 code transponders".[10]
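The 4096 figure follows directly from four octal digits, each carrying three bits of the 12-bit reply. A quick sketch (illustrative only; the function name is this sketch's own):

```python
# Four octal digits, each 0..7, give 8**4 possible codes.
print(8 ** 4)  # 4096

def squawk_bits(code):
    """12-bit reply pattern for a 4-digit octal squawk code such as '7421'."""
    assert len(code) == 4 and all(c in "01234567" for c in code)
    return format(int(code, 8), "012b")

# Each octal dial maps to one 3-bit group.
print(squawk_bits("7421"))  # 111 100 010 001, without the spaces
```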
The use of the word "squawk" comes from the system's origin in the World War II identification friend or foe (IFF) system, which was code-named "Parrot".[11][12]
Some codes can be selected by the pilot if and when the situation requires or allows it, without permission from ATC. Such codes are referred to as "conspicuity codes" in the UK.[13] Other codes are generally assigned by ATC units.[6][7] For flights on instrument flight rules (IFR), the squawk code is typically assigned as part of the departure clearance and stays the same throughout the flight.[6][7]
Flights on visual flight rules (VFR), when in uncontrolled airspace, will "squawk VFR" (1200 in the United States and Canada, 7000 in Europe). Upon contact with an ATC unit, they will be told to squawk a certain code. When changing frequency, for instance because the VFR flight leaves controlled airspace or changes to another ATC unit, the VFR flight will be told to "squawk VFR" again.[6][7]
In order to avoid confusion over assigned squawk codes, ATC units will typically be allocated blocks of squawk codes, not overlapping with the blocks of nearby ATC units, to assign at their discretion.
Not all ATC units will use radar to identify aircraft, but they assign squawk codes nevertheless. As an example, London Information—the flight information service station that covers the southern half of the UK—does not have access to radar images, but does assign squawk code 1177 to all aircraft that receive a flight information service (FIS) from them. This tells other radar-equipped ATC units that a specific aircraft is listening on the London Information radio frequency, in case they need to contact that aircraft.[13]
The following codes are applicable worldwide.
See List of transponder codes for a list of country-specific and historic allocations.
|
https://en.wikipedia.org/wiki/Squawk_code
|
Gillham code is a zero-padded 12-bit binary code using a parallel nine-[1] to eleven-wire interface,[2] the Gillham interface, that is used to transmit uncorrected barometric altitude between an encoding altimeter or analog air data computer and a digital transponder. It is a modified form of a Gray code and is sometimes referred to simply as a "Gray code" in avionics literature.[3]
The Gillham interface and code are an outgrowth of the 12-bit IFF Mark X system, which was introduced in the 1950s. The civil transponder interrogation modes A and C were defined in air traffic control (ATC) and secondary surveillance radar (SSR) in 1960.
The code is named after Ronald Lionel Gillham, a signals officer at Air Navigational Services, Ministry of Transport and Civil Aviation, who had been appointed a civil member of the Most Excellent Order of the British Empire (MBE) in the Queen's 1955 Birthday Honours.[4] He was the UK's representative to the International Air Transport Association (IATA) committee developing the specification for the second generation of air traffic control system, known in the UK as "Plan Ahead", and is said to have had the idea of using a modified Gray code.[nb 1] The final code variant was developed in late 1961[5] for the ICAO Communications Division meeting (VII COM) held in January/February 1962,[6] and described in a 1962 FAA report.[7][8][9] The exact timeframe and circumstances of the term Gillham code being coined are unclear, but by 1963 the code was already recognized under this name.[10][11] By the mid-1960s the code was also known as MOA–Gillham code[12] or ICAO–Gillham code. ARINC 572 specified the code as well in 1968.[13][14]
Once recommended by the ICAO for automatic height transmission for air traffic control purposes,[9][15] the interface is now discouraged[2] and has been mostly replaced by modern serial communication in newer aircraft.
An altitude encoder takes the form of a small metal box containing a pressure sensor and signal conditioning electronics.[16][17] The pressure sensor is often heated, which requires a warm-up time during which height information is either unavailable or inaccurate. Older style units can have a warm-up time of up to 10 minutes; more modern units warm up in less than 2 minutes. Some of the very latest encoders incorporate unheated 'instant on' type sensors. During the warm-up of older style units the height information may gradually increase until it settles at its final value. This is not normally a problem as the power would typically be applied before the aircraft enters the runway and so it would be transmitting correct height information soon after take-off.[18]
The encoder has an open-collector output, compatible with 14 V or 28 V electrical systems.[citation needed]
The height information is represented as 11 binary digits in a parallel form using 11 separate lines designated D2 D4 A1 A2 A4 B1 B2 B4 C1 C2 C4.[3]As a twelfth bit, the Gillham code contains a D1 bit but this is unused and consequently set to zero in practical applications.
Different classes of altitude encoder do not use all of the available bits. All use the A, B and C bits; increasing altitude limits require more of the D bits. Up to and including 30700 ft does not require any of the D bits (9-wire interface[1]). This is suitable for most light general aviation aircraft. Up to and including 62700 ft requires D4 (10-wire interface[2]). Up to and including 126700 ft requires D4 and D2 (11-wire interface[2]). D1 is never used.[19][20]
Bits D2 (msbit) through B4 (lsbit) encode the pressure altitude in 500 ft increments (above a base altitude of −1000±250 ft) in a standard 8-bit reflected binary code (Gray code).[19][21][22][23][24] The specification stops at code 1000000 (126500±250 ft), above which D1 would be needed as a most significant bit.
Bits C1, C2 and C4 use a mirrored 5-state 3-bit Gray BCD code of a Giannini Datex code type[12][25][26][27][28] (with the first 5 states resembling O'Brien code type II[29][5][23][24][27][28]) to encode the offset from the 500 ft altitude in 100 ft increments.[3] Specifically, if the parity of the 500 ft code is even, then codes 001, 011, 010, 110 and 100 encode −200, −100, 0, +100 and +200 ft relative to the 500 ft altitude. If the parity is odd, the assignments are reversed.[19][21] Codes 000, 101 and 111 are not used.[30]: 13(6.17–21)
The Gillham code can be decoded using various methods. Standard techniques use hardware[30]or software solutions. The latter often uses a lookup table but an algorithmic approach can be taken.[21]
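For the D2…B4 bits, which use a standard reflected binary code, the algorithmic approach reduces to the usual Gray-to-binary decode, sketched below (an illustration, not from the original text; the separate C-bit handling described above is omitted).

```python
def gray_to_binary(g):
    """Decode a reflected binary (Gray) code word to plain binary."""
    b = g
    while g:
        g >>= 1
        b ^= g  # fold each higher bit's parity into the result
    return b

# Gray 110 decodes to binary 100, i.e. the fifth 500 ft step.
print(gray_to_binary(0b110))  # 4
```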
|
https://en.wikipedia.org/wiki/Gillham_code
|
Base32 (also known as duotrigesimal) is an encoding method based on the base-32 numeral system. It uses an alphabet of 32 digits, each of which represents a different combination of 5 bits (2^5 = 32). Since base32 is not very widely adopted, the question of notation—which characters to use to represent the 32 digits—is not as settled as in the case of more well-known numeral systems (such as hexadecimal), though RFCs and unofficial and de-facto standards exist. One way to represent Base32 numbers in human-readable form is using digits 0–9 followed by the twenty-two upper-case letters A–V. However, many other variations are used in different contexts. Historically, Baudot code could be considered a modified (stateful) base32 code. Base32 is often used to represent byte strings.
The October 2006 proposed Internet standard[1] RFC 4648 documents base16, base32 and base64 encodings. It includes two schemes for base32, but recommends one over the other. It further recommends that regardless of precedent, only the alphabet it defines in its section 6 actually be called base32, and that the other similar alphabet in its section 7 instead be called base32hex.[a] Agreement with those recommendations is not universal. Care needs to be taken when using systems that are called base32, as those systems could be base32 per RFC 4648 §6, or per §7 (possibly disregarding that RFC's deprecation of the simpler name for the latter), or they could be yet another encoding variant, see further below.
The most widely used[citation needed] base32 alphabet is defined in RFC 4648 §6 and the earlier RFC 3548 (2003). The scheme was originally designed in 2000 by John Myers for SASL/GSSAPI.[2] It uses an alphabet of A–Z, followed by 2–7. The digits 0, 1 and 8 are skipped due to their similarity with the letters O, I and B (thus "2" has a decimal value of 26).
In some circumstances padding is not required or used (the padding can be inferred from the length of the string modulo 8). RFC 4648 states that padding must be used unless the specification of the standard (referring to the RFC) explicitly states otherwise. Excluding padding is useful when using Base32 encoded data in URL tokens or file names where the padding character could pose a problem.
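Python's standard library implements the RFC 4648 §6 alphabet, which makes the padding behaviour easy to demonstrate (an illustrative sketch, not part of the original text; the test strings are RFC 4648's own vectors):

```python
import base64

# 'fooba' (5 bytes, 40 bits) fills an 8-character block exactly;
# 'foobar' (6 bytes) needs '=' padding to the 8-character boundary.
print(base64.b32encode(b"fooba").decode())   # MZXW6YTB
print(base64.b32encode(b"foobar").decode())  # MZXW6YTBOI======

# Padding can be stripped and later inferred from len(s) % 8.
unpadded = base64.b32encode(b"foobar").decode().rstrip("=")
padded = unpadded + "=" * (-len(unpadded) % 8)
print(base64.b32decode(padded))  # b'foobar'
```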
This is an example of a Base32 representation using the previously described 32-character set (an IPFS CIDv1 in Base32 upper-case encoding): BAFYBEICZSSCDSBS7FFQZ55ASQDF3SMV6KLCW3GOFSZVWLYARCI47BGF354
"Extended hex" base 32 or base32hex,[3] another scheme for base 32 per RFC 4648 §7, extends hexadecimal in a more natural way: Its lower half is identical with hexadecimal, and beyond that, base32hex simply continues the alphabet through to the letter V.
This scheme was first proposed by Christian Lanctot, a programmer working at Sage software, in a letter to Dr. Dobb's magazine in March 1999[4] as part of a suggested solution for the Y2K bug. Lanctot referred to it as "Double Hex". The same alphabet was described in 2000 in RFC 2938 under the name "Base-32". RFC 4648, while acknowledging existing use of this version in NSEC3, refers to it as base32hex and discourages referring to it as only "base32".
Since this notation uses digits 0–9 followed by consecutive letters of the alphabet, it matches the digits used by the JavaScript parseInt() function[5] and the Python int() constructor[6] when a base larger than 10 (such as 16 or 32) is specified. It also retains hexadecimal's property of preserving bitwise sort order of the represented data, unlike RFC 4648's §6 base32, or base64.[3]
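Because base32hex digits simply continue past hexadecimal, Python's built-in int() already parses them, and the sort-order property is easy to check (illustrative, not from the original text):

```python
# base32hex digit values: '0'..'9' are 0..9, 'A'..'V' are 10..31.
print(int("A", 32))   # 10, as in hexadecimal
print(int("V", 32))   # 31, the largest single digit
print(int("10", 32))  # 32

# Equal-length base32hex strings sort in numeric order.
words = ["0F", "1A", "A0", "VV"]
assert sorted(words) == sorted(words, key=lambda w: int(w, 32))
```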
Unlike many other base 32 notation systems, base32hex digits beyond 9 are contiguous. However, its set of digits includes characters that may visually conflict. With the right font it is possible to visually distinguish between 0, O and 1, I, but other fonts may be unsuitable, as those letters could be hard for humans to tell apart, especially when the context English usually provides is not present in a notation system that is only expressing numbers.[b] The choice of font is not controlled by notation or encoding, yet base32hex makes no attempt to compensate for the shortcomings of affected fonts.[c]
The alternative standards below all change the Base32 alphabet, but use similar combinations of alphanumeric symbols.
z-base-32[7] is a Base32 encoding designed by Zooko Wilcox-O'Hearn to be easier for human use and more compact. It includes 1, 8 and 9 but excludes l, v, 0 and 2. It also permutes the alphabet so that the easier characters are the ones that occur more frequently.[clarification needed] It compactly encodes bitstrings whose length in bits is not a multiple of 8[clarification needed] and omits trailing padding characters. z-base-32 was used in the Mnet open source project, and is currently used in Phil Zimmermann's ZRTP protocol, and in the Tahoe-LAFS open source project.
Another alternative design for Base32 was created by Douglas Crockford, who proposes using additional characters for a mod-37 checksum.[8] It excludes the letters I, L, and O to avoid confusion with digits. It also excludes the letter U to reduce the likelihood of accidental obscenity.
Libraries to encode binary data in Crockford's Base32 are available in a variety of languages.
An earlier form of base 32 notation was used by programmers working on the Electrologica X1 to represent machine addresses. The "digits" were represented as decimal numbers from 0 to 31. For example, 12-16 would represent the machine address 400 (= 12 × 32 + 16).
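The X1 notation is a plain positional base-32 number whose digits are written in decimal, so parsing it is a one-liner (a sketch, not from the original text; the function name is this sketch's own):

```python
def x1_address(notation):
    """Parse an Electrologica X1 style address such as '12-16'."""
    high, low = (int(part) for part in notation.split("-"))
    assert 0 <= high <= 31 and 0 <= low <= 31
    return high * 32 + low

print(x1_address("12-16"))  # 400, i.e. 12 * 32 + 16
```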
See the Geohash algorithm, used to represent latitude and longitude values in one (bit-interlaced) positive integer.[9] The base32 representation of Geohash uses all decimal digits (0–9) and almost all of the lower-case alphabet, except the letters "a", "i", "l", "o", as shown by the following character map:
In approximately 1950,[10] Alan Turing wrote software requirements for the Manchester Mark I computing system.[11] A transcription of Turing's manual for the Mark I is available on archive.org.[12]
The University of Manchester's archive site commemorating 60 years of computing[13] has a table of the base 32 encoding that Turing used. The table and the accompanying explanation also appear in the manual.
Another account of this period in Turing's life appears on his biography page under Early computers and the Turing test.
Before NVRAM became universal, several video games for Nintendo platforms used base 31 numbers for passwords.
These systems omit vowels (except Y) to prevent the game from accidentally giving a profane password.
Thus, the characters are generally some minor variation of the following set: 0–9, B, C, D, F, G, H, J, K, L, M, N, P, Q, R, S, T, V, W, X, Y, Z, and some punctuation marks.
Games known to use such a system include Mario Is Missing!, Mario's Time Machine, Tetris Blast, and The Lord of the Rings (Super NES).
The word-safe Base32 alphabet is an extension of the Open Location Code Base20 alphabet. That alphabet uses 8 numeric digits and 12 case-sensitive letter digits chosen to avoid accidentally forming words. Treating the alphabet as case-sensitive produces a 32 (8+12+12) digit set.
Base32 has a number of advantages over Base64:
Base32 has advantages over hexadecimal/Base16:
Compared with 8-bit-based encodings, 5-bit systems might also have advantages when used for character transmission:
Base32 representation takes roughly 20% more space than Base64. Also, because it encodes five 8-bit bytes (40 bits) to eight 5-bit base32 characters rather than three 8-bit bytes (24 bits) to four 6-bit base64 characters, padding to an 8-character boundary is a greater burden on short messages (which may be a reason to elide padding, which is an option in RFC 4648).
Even if Base32 takes roughly 20% less space than hexadecimal, Base32 is much less used. Hexadecimal can easily be mapped to bytes because two hexadecimal digits are a byte. Base32 does not map to individual bytes. However, two Base32 digits correspond to ten bits, which can encode (32 × 32 =) 1,024 values, with obvious applications for orders of magnitude of multiple-byte units in terms of powers of 1,024.
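The size ratios above can be checked directly; 15 bytes (120 bits) divide evenly into 4-, 5- and 6-bit groups, so no padding obscures the comparison (illustrative, not from the original text):

```python
import base64

data = bytes(15)  # 120 bits
hex_len = len(data.hex())              # 30 characters (4 bits each)
b32_len = len(base64.b32encode(data))  # 24 characters (5 bits each)
b64_len = len(base64.b64encode(data))  # 20 characters (6 bits each)

print(b32_len / b64_len)  # 1.2 -> Base32 is 20% larger than Base64
print(b32_len / hex_len)  # 0.8 -> and 20% smaller than hexadecimal
```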
Hexadecimal is easier to learn and remember, since that only entails memorising the numerical values of six additional symbols (A–F), and even if those are not instantly recalled, it is easier to count through just over a handful of values.
Base32programsare suitable for encoding arbitrary byte data using a restricted set of symbols that can both be conveniently used by humans and processed by computers.
Base32 implementations use a symbol set made up of at least 32 different characters (sometimes a 33rd for padding), as well as an algorithm for encoding arbitrary sequences of 8-bit bytes into a Base32 alphabet. Because more than one 5-bit Base32 character is needed to represent each 8-bit input byte, if the input is not a multiple of 5 bytes (40 bits), then it doesn't fit exactly in 5-bit Base32 characters. In that case, some specifications require padding characters to be added while some require extra zero bits to make a multiple of 5 bits. The closely related Base64 system, in contrast, uses a set of 64 symbols (or 65 symbols when padding is used).
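The 40-bit grouping and padding rule described above can be seen directly in Python's standard library (Python is among the implementations listed below); the sample strings here are arbitrary:

```python
import base64

# Encoding with the RFC 4648 Base32 alphabet: five input bytes
# (40 bits) map to eight output characters, and a shorter final
# group is '='-padded out to the 8-character boundary.
encoded = base64.b32encode(b"hello")      # 5 bytes -> exactly 8 characters
encoded_short = base64.b32encode(b"hi")   # 2 bytes -> 4 characters + '===='
print(encoded, encoded_short)
```

Decoding with `base64.b32decode` reverses the mapping and rejects input whose padding does not restore a whole number of bytes.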
Base32 implementations in C/C++,[14][15] Perl,[16] Java,[17] JavaScript,[18] Python,[19] Go,[20] and Ruby[21] are available.[22]
|
https://en.wikipedia.org/wiki/Base32
|
In computer programming, Base64 (also known as tetrasexagesimal) is a group of binary-to-text encoding schemes that transform binary data into a sequence of printable characters, limited to a set of 64 unique characters. More specifically, the source binary data is taken 6 bits at a time, and each group of 6 bits is mapped to one of 64 unique characters.
As with all binary-to-text encoding schemes, Base64 is designed to carry data stored in binary formats across channels that only reliably support text content. Base64 is particularly prevalent on the World Wide Web,[1] where one of its uses is the ability to embed image files or other binary assets inside textual assets such as HTML and CSS files.[2]
Base64 is also widely used for sending e-mail attachments, because SMTP – in its original form – was designed to transport 7-bit ASCII characters only. Encoding an attachment as Base64 before sending, and then decoding it when received, ensures that older SMTP servers will not interfere with the attachment.
Base64 encoding causes an overhead of 33–37% relative to the size of the original binary data (33% by the encoding itself; up to 4% more by the inserted line breaks).
The particular set of 64 characters chosen to represent the 64-digit values for the base varies between implementations. The general strategy is to choose 64 characters that are common to most encodings and that are also printable. This combination leaves the data unlikely to be modified in transit through information systems, such as email, that were traditionally not 8-bit clean.[3] For example, MIME's Base64 implementation uses A–Z, a–z, and 0–9 for the first 62 values. Other variations share this property but differ in the symbols chosen for the last two values; an example is UTF-7.
The earliest instances of this type of encoding were created for dial-up communication between systems running the same OS – for example, uuencode for UNIX and BinHex for the TRS-80 (later adapted for the Macintosh) – and could therefore make more assumptions about what characters were safe to use. For instance, uuencode uses uppercase letters, digits, and many punctuation characters, but no lowercase.[4][5][6][3]
This is the Base64 alphabet defined in RFC 4648 §4. See also § Variants summary table.
The example below uses ASCII text for simplicity, but this is not a typical use case, as ASCII text can already be safely transferred across all systems that can handle Base64. The more typical use is to encode binary data (such as an image); the resulting Base64 data will only contain 64 different ASCII characters, all of which can reliably be transferred across systems that may corrupt the raw source bytes.
Here is a well-known idiom from distributed computing:
Many hands make light work.
When the quote (without trailing whitespace) is encoded into Base64, it is represented as a byte sequence of 8-bit-padded ASCII characters encoded in MIME's Base64 scheme as follows (newlines and white spaces may be present anywhere but are to be ignored on decoding):
TWFueSBoYW5kcyBtYWtlIGxpZ2h0IHdvcmsu
In the above quote, the encoded value of Man is TWFu. Encoded in ASCII, the characters M, a, and n are stored as the byte values 77, 97, and 110, which are the 8-bit binary values 01001101, 01100001, and 01101110. These three values are joined together into a 24-bit string, producing 010011010110000101101110. Groups of 6 bits (6 bits have a maximum of 2⁶ = 64 different binary values) are converted into individual numbers from start to end (in this case, there are four numbers in a 24-bit string), which are then converted into their corresponding Base64 character values.
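The walkthrough above can be reproduced by hand in a few lines; this sketch uses the standard MIME alphabet spelled out explicitly:

```python
# Three 8-bit bytes of "Man" are joined into a 24-bit string, which
# is then read off as four 6-bit indices into the Base64 alphabet.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

bits = "".join(f"{b:08b}" for b in b"Man")                 # '010011010110000101101110'
indices = [int(bits[i:i + 6], 2) for i in range(0, 24, 6)] # [19, 22, 5, 46]
encoded = "".join(ALPHABET[i] for i in indices)            # 'TWFu'
```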
As this example illustrates, Base64 encoding converts three octets into four encoded characters.
= padding characters might be added to make the last encoded block contain four Base64 characters.
Hexadecimal-to-octal transformation is useful to convert between binary and Base64. Such conversion is available for both advanced calculators and programming languages. For example, the hexadecimal representation of the 24 bits above is 4D616E. The octal representation is 23260556. Those 8 octal digits can be split into pairs (23 26 05 56), and each pair is converted to decimal to yield 19 22 05 46. Using those four decimal numbers as indices for the Base64 alphabet, the corresponding ASCII characters are TWFu.
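The octal detour can be checked mechanically: 24 bits written as 8 octal digits split naturally into the four Base64 indices, because each index is exactly two octal digits (6 bits).

```python
# 0x4D616E are the 24 bits of "Man" from the example above.
octal = f"{0x4D616E:08o}"                         # '23260556'
pairs = [octal[i:i + 2] for i in range(0, 8, 2)]  # ['23', '26', '05', '56']
indices = [int(p, 8) for p in pairs]              # [19, 22, 5, 46]
```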
If there are only two significant input octets (e.g., 'Ma'), or when the last input group contains only two octets, all 16 bits will be captured in the first three Base64 digits (18 bits); the two least significant bits of the last content-bearing 6-bit block will turn out to be zero, and are discarded on decoding (along with the succeeding = padding character):
If there is only one significant input octet (e.g., 'M'), or when the last input group contains only one octet, all 8 bits will be captured in the first two Base64 digits (12 bits); the four least significant bits of the last content-bearing 6-bit block will turn out to be zero, and are discarded on decoding (along with the succeeding two = padding characters):
Because Base64 is a six-bit encoding, and because the decoded values are divided into 8-bit octets, every four characters of Base64-encoded text (4 sextets = 4 × 6 = 24 bits) represent three octets of unencoded text or data (3 octets = 3 × 8 = 24 bits). This means that when the length of the unencoded input is not a multiple of three, the encoded output must have padding added so that its length is a multiple of four. The padding character is =, which indicates that no further bits are needed to fully encode the input. (This is different from A, which means that the remaining bits are all zeros.) The example below illustrates how truncating the input of the above quote changes the output padding:
The padding character is not essential for decoding, since the number of missing bytes can be inferred from the length of the encoded text. In some implementations, the padding character is mandatory, while for others it is not used. An exception in which padding characters are required is when multiple Base64 encoded files have been concatenated.
When decoding Base64 text, four characters are typically converted back to three bytes. The only exceptions are when padding characters exist. A single = indicates that the four characters will decode to only two bytes, while == indicates that the four characters will decode to only a single byte. For example:
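The padding-to-byte-count rule can be verified with the standard library decoder; the short strings reuse the examples above:

```python
import base64

# One '=' means the final 4-character group decodes to two bytes;
# '==' means it decodes to a single byte.
assert base64.b64decode("TWFu") == b"Man"   # no padding: three bytes
assert base64.b64decode("TWE=") == b"Ma"    # one '=': two bytes
assert base64.b64decode("TQ==") == b"M"     # two '=': one byte
```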
Another way to interpret the padding character is as an instruction to discard 2 trailing bits from the bit string each time a = is encountered. For example, when bGlnaHQgdw== is decoded, we convert each character (except the trailing occurrences of =) into its corresponding 6-bit representation, and then discard 2 trailing bits for the first = and another 2 trailing bits for the other =. In this instance, we would get 6 bits from the d and another 6 bits from the w, for a bit string of length 12; since we remove 2 bits for each = (for a total of 4 bits), the dw== ends up producing 8 bits (1 byte) when decoded.
Without padding, after normal decoding of four characters to three bytes over and over again, fewer than four encoded characters may remain. In this situation, only two or three characters can remain. A single remaining encoded character is not possible, because a single Base64 character only contains 6 bits, and 8 bits are required to create a byte, so a minimum of two Base64 characters are required: The first character contributes 6 bits, and the second character contributes its first 2 bits. For example:
Decoding without padding is not performed consistently among decoders. In addition, allowing padless decoding by definition allows multiple strings to decode into the same set of bytes, which can be a security risk.[7]
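One common way to handle unpadded input is to restore the stripped padding before handing the string to a strict decoder. A minimal sketch (the helper name `b64decode_unpadded` is invented for illustration):

```python
import base64

def b64decode_unpadded(s: str) -> bytes:
    # A length of 4n+1 is invalid: a lone trailing character carries
    # only 6 bits, which is less than one byte.
    if len(s) % 4 == 1:
        raise ValueError("invalid unpadded Base64 length")
    return base64.b64decode(s + "=" * (-len(s) % 4))
```

For example, `b64decode_unpadded("TQ")` returns the same byte as decoding the padded form `"TQ=="`.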
Implementations may have some constraints on the alphabet used for representing some bit patterns. This notably concerns the last two characters used in the alphabet at positions 62 and 63, and the character used for padding (which may be mandatory in some protocols or removed in others). The table below summarizes these known variants and provides links to the subsections below.
The first known standardized use of the encoding now called MIME Base64 was in the Privacy-enhanced Electronic Mail (PEM) protocol, proposed by RFC 989 in 1987. PEM defines a "printable encoding" scheme that uses Base64 encoding to transform an arbitrary sequence of octets into a format that can be expressed in short lines of 6-bit characters, as required by transfer protocols such as SMTP.[8]
The current version of PEM (specified in RFC 1421) uses a 64-character alphabet consisting of upper- and lower-case Roman letters (A–Z, a–z), the numerals (0–9), and the + and / symbols. The = symbol is also used as a padding suffix.[4] The original specification, RFC 989, additionally used the * symbol to delimit encoded but unencrypted data within the output stream.
To convert data to PEM printable encoding, the first byte is placed in the most significant eight bits of a 24-bit buffer, the next in the middle eight, and the third in the least significant eight bits. If there are fewer than three bytes left to encode (or in total), the remaining buffer bits will be zero. The buffer is then used, six bits at a time, most significant first, as indices into the string "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/", and the indicated character is output.
The process is repeated on the remaining data until fewer than four octets remain. If three octets remain, they are processed normally. If fewer than three octets (24 bits) remain to encode, the input data is right-padded with zero bits to form an integral multiple of six bits.
After encoding the non-padded data, if two octets of the 24-bit buffer are padded zeros, two = characters are appended to the output; if one octet is filled with padded zeros, one = character is appended. This signals the decoder that the zero bits added due to padding should be excluded from the reconstructed data. It also guarantees that the encoded output length is a multiple of 4 bytes.
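The buffer-and-pad procedure above can be sketched directly; this is a from-scratch illustration of the described algorithm (line wrapping omitted), not reference PEM code:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def pem_base64(data: bytes) -> str:
    out = []
    for i in range(0, len(data), 3):
        chunk = data[i:i + 3]
        # Fill a 24-bit buffer, most significant byte first; missing
        # input bytes leave zero bits at the bottom of the buffer.
        buf = int.from_bytes(chunk.ljust(3, b"\x00"), "big")
        for shift in (18, 12, 6, 0):
            out.append(ALPHABET[(buf >> shift) & 0x3F])
        # Characters derived purely from the zero padding become '='.
        for j in range(3 - len(chunk)):
            out[-(j + 1)] = "="
    return "".join(out)
```

For example, `pem_base64(b"Man")` yields `"TWFu"`, and `pem_base64(b"M")` yields `"TQ=="`, matching the worked examples above.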
PEM requires that all encoded lines consist of exactly 64 printable characters, with the exception of the last line, which may contain fewer printable characters. Lines are delimited by whitespace characters according to local (platform-specific) conventions.
The MIME (Multipurpose Internet Mail Extensions) specification lists Base64 as one of two binary-to-text encoding schemes (the other being quoted-printable).[5] MIME's Base64 encoding is based on that of the RFC 1421 version of PEM: it uses the same 64-character alphabet and encoding mechanism as PEM and uses the = symbol for output padding in the same way, as described in RFC 2045.
MIME does not specify a fixed length for Base64-encoded lines, but it does specify a maximum line length of 76 characters. Additionally, it specifies that any character outside the standard set of 64 encoding characters (for example, CRLF sequences) must be ignored by a compliant decoder, although most implementations use a CR/LF newline pair to delimit encoded lines.
Thus, the actual length of MIME-compliant Base64-encoded binary data is usually about 137% of the original data length (4⁄3 × 78⁄76), though for very short messages the overhead can be much higher due to the overhead of the headers. Very roughly, the final size of Base64-encoded binary data is equal to 1.37 times the original data size plus 814 bytes (for headers). The size of the decoded data can therefore be approximated as: decoded bytes ≈ (encoded size − 814) / 1.37.
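A rough body-size estimate following the figures above (4/3 expansion from the encoding itself, plus a CRLF pair per 76-character line; the helper name is invented for illustration and headers are not counted):

```python
def mime_base64_size(n_bytes: int) -> int:
    chars = 4 * ((n_bytes + 2) // 3)   # padded Base64 length
    lines = (chars + 75) // 76         # number of 76-character lines
    return chars + 2 * lines           # plus a CRLF per line

# 57 input bytes fill exactly one 76-character line;
# 78 / 57 is about 1.37, matching the 137% figure.
```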
UTF-7, described first in RFC 1642 (later superseded by RFC 2152), introduced a system called modified Base64. This data encoding scheme is used to encode UTF-16 as ASCII characters for use in 7-bit transports such as SMTP. It is a variant of the Base64 encoding used in MIME.[9][10]
The "Modified Base64" alphabet consists of the MIME Base64 alphabet, but does not use the "=" padding character. UTF-7 is intended for use in mail headers (defined in RFC 2047), and the "=" character is reserved in that context as the escape character for "quoted-printable" encoding. Modified Base64 simply omits the padding and ends immediately after the last Base64 digit containing useful bits, leaving up to three unused bits in the last Base64 digit.
OpenPGP, described in RFC 9580, specifies "ASCII armor", which is identical to the "Base64" encoding described by MIME, with the addition of an optional 24-bit CRC. The checksum is calculated on the input data before encoding; the checksum is then encoded with the same Base64 algorithm and, prefixed by the "=" symbol as the separator, appended to the encoded output data.[11]
RFC 3548, entitled The Base16, Base32, and Base64 Data Encodings, is an informational (non-normative) memo that attempts to unify the RFC 1421 and RFC 2045 specifications of Base64 encodings, alternative-alphabet encodings, and the (seldom used) Base32 and Base16 encodings.
Unless implementations are written to a specification that refers to RFC 3548 and specifically requires otherwise, RFC 3548 forbids implementations from generating messages containing characters outside the encoding alphabet or without padding, and it also declares that decoder implementations must reject data that contain characters outside the encoding alphabet.[6]
RFC 4648 obsoletes RFC 3548 and focuses on Base64/32/16:
Base64 encoding can be helpful when fairly lengthy identifying information is used in an HTTP environment. For example, a database persistence framework for Java objects might use Base64 encoding to encode a relatively large unique id (generally 128-bit UUIDs) into a string for use as an HTTP parameter in HTTP forms or HTTP GET URLs. Also, many applications need to encode binary data in a way that is convenient for inclusion in URLs, including in hidden web form fields, and Base64 is a convenient encoding to render them in a compact way.
Using standard Base64 in a URL requires encoding of the '+', '/' and '=' characters into special percent-encoded hexadecimal sequences ('+' becomes '%2B', '/' becomes '%2F' and '=' becomes '%3D'), which makes the string unnecessarily longer.
For this reason, modified Base64 for URL variants exist (such as base64url in RFC 4648), where the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer necessary and the length of the encoded value is unaffected, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general. A popular site that makes use of this is YouTube.[12] Some variants allow or require omitting the padding '=' signs to avoid them being confused with field separators, or require that any such padding be percent-encoded. Some libraries[which?] will encode '=' to '.', potentially exposing applications to relative path attacks when a folder name is encoded from user data.[citation needed]
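The base64url substitution can be observed with the standard library; the byte pair below is chosen so that its standard encoding needs both special characters:

```python
import base64

# '+' -> '-' and '/' -> '_' in the URL-safe alphabet (RFC 4648).
raw = bytes([0xFB, 0xF0])
standard = base64.b64encode(raw)          # b'+/A='
urlsafe = base64.urlsafe_b64encode(raw)   # b'-_A='
```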
The atob() and btoa() JavaScript methods, defined in the HTML5 draft specification,[13] provide Base64 encoding and decoding functionality to web pages. The btoa() method outputs padding characters, but these are optional in the input of the atob() method.
Base64 can be used in a variety of contexts:
Some applications use a Base64 alphabet that is significantly different from the alphabets used in the most common Base64 variants (see the Variants summary table above).
|
https://en.wikipedia.org/wiki/Base64
|
Hexadecimal time is the representation of the time of day as a hexadecimal number in the interval [0, 1).
The day is divided into 10₁₆ (16₁₀) hexadecimal hours, each hour into 100₁₆ (256₁₀) hexadecimal minutes, and each minute into 10₁₆ (16₁₀) hexadecimal seconds.
This time format was proposed by the Swedish-American engineer John W. Nystrom in 1863 as part of his tonal system.[1]
In 1997, the American Mark Vincent Rogers of Intuitor proposed a similar system of hexadecimal time and implemented it in JavaScript as the Hexclock.[2]
A day is unity, or 1, and any fraction thereof can be shown with digits to the right of the hexadecimal separator. So the day begins at midnight with .0000, and one hexadecimal second after midnight is .0001. Noon is .8000 (one half), one hexadecimal second before noon was .7FFF, and one hexadecimal second before next midnight will be .FFFF.
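The conversion from a conventional clock time to this representation is a single fraction-of-a-day computation; a minimal sketch (the helper name is invented for illustration):

```python
# The time of day as a hexadecimal fraction of a whole day, with
# 0x10000 hexadecimal seconds per day (.0000 at midnight, .8000 at noon).
def hex_time(hours: int, minutes: int, seconds: int) -> str:
    fraction = (hours * 3600 + minutes * 60 + seconds) / 86400
    return f".{int(fraction * 0x10000):04X}"
```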
Intuitor-hextime may also be formatted with an underscore separating hexadecimal hours, minutes and seconds. For example:
|
https://en.wikipedia.org/wiki/Hexadecimal_time
|
Hexadecimal floating point (now called HFP by IBM) is a format for encoding floating-point numbers, first introduced on the IBM System/360 computers and supported on subsequent machines based on that architecture,[1][2][3] as well as machines which were intended to be application-compatible with System/360.[4][5]
In comparison to IEEE 754 floating point, the HFP format has a longer significand and a shorter exponent. All HFP formats have 7 bits of exponent with a bias of 64. The normalized range of representable numbers is from 16⁻⁶⁵ to 16⁶³ (approx. 5.39761 × 10⁻⁷⁹ to 7.237005 × 10⁷⁵).
The number is represented by the following formula: (−1)^sign × 0.significand × 16^(exponent − 64).
A single-precision HFP number (called "short" by IBM) is stored in a 32-bit word:
In this format the initial bit is not suppressed, and the radix (hexadecimal) point is set to the left of the significand (called the fraction in IBM documentation and in the figures).
Since the base is 16, the exponent in this form is about twice as large as the equivalent in IEEE 754; to have a similar exponent range in binary, 9 exponent bits would be required.
Consider encoding the value −118.625 as an HFP single-precision floating-point value.
The value is negative, so the sign bit is 1.
The value 118.625₁₀ in binary is 1110110.101₂. This value is normalized by moving the radix point left four bits (one hexadecimal digit) at a time until the leftmost digit is zero, yielding 0.01110110101₂. The remaining rightmost digits are padded with zeros, yielding a 24-bit fraction of .0111 0110 1010 0000 0000 0000₂.
The normalization moved the radix point two hexadecimal digits to the left, yielding a multiplier and exponent of 16⁺². A bias of +64 is added to the exponent (+2), yielding +66, which is 100 0010₂.
Combining the sign, exponent plus bias, and normalized fraction produces this encoding:
In other words, the number represented is −0.76A000₁₆ × 16⁶⁶⁻⁶⁴ = −0.4633789… × 16⁺² = −118.625
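The worked example can be checked by decoding the packed 32-bit word (sign 1, biased exponent 66 = 0x42, fraction 0x76A000, i.e. 0xC276A000); the helper below is a sketch of the formula above, not IBM reference code:

```python
def hfp_decode(word: int) -> float:
    sign = -1.0 if word >> 31 else 1.0
    exponent = ((word >> 24) & 0x7F) - 64   # remove the bias of 64
    fraction = (word & 0xFFFFFF) / 16**6    # six hex digits after the radix point
    return sign * fraction * 16**exponent

assert hfp_decode(0xC276A000) == -118.625
```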
The number represented is +0.FFFFFF₁₆ × 16¹²⁷⁻⁶⁴ = (1 − 16⁻⁶) × 16⁶³ ≈ +7.2370051 × 10⁷⁵
The number represented is +0.1₁₆ × 16⁰⁻⁶⁴ = 16⁻¹ × 16⁻⁶⁴ ≈ +5.397605 × 10⁻⁷⁹.
Zero (0.0) is represented in normalized form as all zero bits, which is arithmetically the value +0.0₁₆ × 16⁰⁻⁶⁴ = +0 × 16⁻⁶⁴ = 0. Given a fraction of all-bits zero, any combination of positive or negative sign bit and a non-zero biased exponent will yield a value arithmetically equal to zero. However, the normalized form generated for zero by CPU hardware is all-bits zero. This is true for all three floating-point precision formats. Addition or subtraction with other exponent values can lose precision in the result.
Since the base is 16, there can be up to three leading zero bits in the binary significand. That means when the number is converted into binary, there can be as few as 21 bits of precision. Because of the "wobbling precision" effect, this can cause some calculations to be very inaccurate. This has caused considerable criticism.[6]
A good example of the inaccuracy is the representation of the decimal value 0.1. It has no exact binary or hexadecimal representation. In hexadecimal format, it is represented as 0.19999999…₁₆ or 0.0001 1001 1001 1001 1001 1001 1001…₂, that is:
This has only 21 bits, whereas the binary version has 24 bits of precision.
Six hexadecimal digits of precision is roughly equivalent to six decimal digits (i.e. (6 − 1) log₁₀(16) ≈ 6.02). A conversion of a single-precision hexadecimal float to a decimal string would require at least 9 significant digits (i.e. 6 log₁₀(16) + 1 ≈ 8.22) in order to convert back to the same hexadecimal float value.
The double-precision HFP format (called "long" by IBM) is the same as the "short" format except that the fraction field is wider and the double-precision number is stored in a double word (8 bytes):
The exponent for this format covers only about a quarter of the range as the corresponding IEEE binary format.
14 hexadecimal digits of precision is roughly equivalent to 17 decimal digits. A conversion of double precision hexadecimal float to decimal string would require at least 18 significant digits in order to convert back to the same hexadecimal float value.
Called extended precision by IBM, a quadruple-precision HFP format was added to the System/370 series and was available on some S/360 models (S/360-85, -195, and others by special request or simulated by OS software). The extended-precision fraction field is wider, and the extended-precision number is stored as two double words (16 bytes):
28 hexadecimal digits of precision is roughly equivalent to 32 decimal digits. A conversion of extended precision HFP to decimal string would require at least 35 significant digits in order to convert back to the same HFP value. The stored exponent in the low-order part is 14 less than the high-order part, unless this would be less than zero.
Available arithmetic operations are add and subtract, both normalized and unnormalized, and compare. Prenormalization is done based on the exponent difference. Multiply and divide prenormalize unnormalized values and truncate the result after one guard digit. There is a halve operation to simplify dividing by two. Starting in ESA/390, there is a square root operation. All operations have one hexadecimal guard digit to avoid precision loss. Most arithmetic operations truncate like simple pocket calculators. Therefore, 1 − 16⁻⁸ = 1. In this case, the result is rounded away from zero.[7]
Starting with the S/390 G5 in 1998,[8] IBM mainframes have also included IEEE binary floating-point units which conform to the IEEE 754 Standard for Floating-Point Arithmetic. IEEE decimal floating-point was added to IBM System z9 GA2[9] in 2007 using millicode[10] and in 2008 to the IBM System z10 in hardware.[11]
Modern IBM mainframes support three floating-point radices: 3 hexadecimal (HFP) formats, 3 binary (BFP) formats, and 3 decimal (DFP) formats. There are two floating-point units per core: one supporting HFP and BFP, and one supporting DFP; there is one register file, the FPRs, which holds all 3 formats. Starting with the z13 in 2015, processors have added a vector facility that includes 32 vector registers, each 128 bits wide; a vector register can contain two 64-bit or four 32-bit floating-point numbers.[12] The traditional 16 floating-point registers are overlaid on the new vector registers, so some data can be manipulated with traditional floating-point instructions or with the newer vector instructions.
The IBM HFP format is used in:
As IBM is the only remaining provider of hardware using the HFP format, and as the only IBM machines that support that format are their mainframes, few file formats require it. One exception is the SAS 5 Transport file format, which the FDA requires; in that format, "All floating-point numbers in the file are stored using the IBM mainframe representation. [...] Most platforms use the IEEE representation for floating-point numbers. [...] To assist you in reading and/or writing transport files, we are providing routines to convert from IEEE representation (either big endian or little endian) to transport representation and back again."[13] Code for IBM's format is also available under LGPLv2.1.[15]
The article "Architecture of the IBM System/360" explains the choice as being because "the frequency of pre-shift, overflow, and precision-loss post-shift on floating-point addition are substantially reduced by this choice."[16]This allowed higher performance for the large System/360 models, and reduced cost for the small ones. The authors were aware of the potential for precision loss, but assumed that this would not be significant for 64-bit floating-point variables. Unfortunately, the designers seem not to have been aware ofBenford's Lawwhich means that a large proportion of numbers will suffer reduced precision.
The book "Computer Architecture" by two of the System/360 architects quotes Sweeney's study of 1958-65 which showed that using a base greater than 2 greatly reduced the number of shifts required for alignment and normalisation, in particular the number ofdifferentshifts needed. They used a larger base to make the implementations run faster, and the choice of base 16 was natural given 8-bit bytes. The intention was that 32-bit floats would only be used for calculations that would not propagate rounding errors, and 64-bit double precision would be used for all scientific and engineering calculations. The initial implementation of double precision lacked a guard digit to allow proper rounding, but this was changed soon after the first customer deliveries.[17]
|
https://en.wikipedia.org/wiki/IBM_hexadecimal_floating-point
|
A hex editor (or binary file editor or byte editor) is a computer program that allows for manipulation of the fundamental binary data that constitutes a computer file. The name 'hex' comes from 'hexadecimal', a standard numerical format for representing binary data. A typical computer file occupies multiple areas on the storage medium, whose contents are combined to form the file. Hex editors that are designed to parse and edit sector data from the physical segments of floppy or hard disks are sometimes called sector editors or disk editors.
With a hex editor, a user can see or edit the raw and exact contents of a file, as opposed to the interpretation of the same content that other, higher-level application software may associate with the file format. For example, this could be raw image data, in contrast to the way image-editing software would interpret and show the same file.
Hex editors may be used to correct data corrupted by system or application program problems where it may not be worthwhile to write a special program to make the corrections. They are useful for bypassing application edit checks which may prevent correction of erroneous data. They have been used to "patch" executable programs to change or add a few instructions as an alternative to recompilation. Program fixes for IBM mainframe systems are sometimes distributed as patches rather than as a complete copy of the affected program.
In most hex editor applications, the data of the computer file is represented as hexadecimal values grouped in 4 groups of 4 bytes (or two groups of 8 bytes), followed by one group of 16 printable ASCII characters which correspond to each pair of hex values (each byte). Non-printable ASCII characters (e.g., Bell) and characters that would take more than one character space (e.g., tab) are typically represented by a dot (".") in the following ASCII field.
Unlike conventional text editors, hex editors are able to efficiently handle files of indefinite size, as only a portion of the file is loaded while browsing it and modified when saving it, rather than the entire file at once.
Since the invention of computers and their different uses, a variety of file formats has been created. In some special circumstances it was convenient to be able to access the data as a series of raw digits. A program called SUPERZAP (AMASPZAP) was available for IBM OS/360 systems which could edit raw disk records and also understood the format of executable files.[1] Pairs of hexadecimal digits (each pair can represent a byte) are the current standard, because the vast majority of machines and file formats in use today handle data in units or groups of 8-bit bytes. Hexadecimal and also octal are common because these digits allow one to see which bits in a byte are set. Today, decimal representation is becoming a popular second option, due to the more familiar number base and additional helper tools, such as template systems and data inspectors, that reduce the benefits of the hexadecimal numerical format.[citation needed]
Some hex editors offer a template system that can present the sequence of bytes of a binary file in a structured way, covering part or all of the desired file format. Usually the GUI for a template is a separate tool window next to the main hex editor. Some cheat engine systems consist only of such a template GUI.
Typically, a template is represented as a list of labeled text boxes, such that individual values of a file can be easily edited in the appropriate format (e.g., as a string, color, or decimal number). Without template support, it is necessary to find the right offset in a file where the value to be changed is stored. Raw hex editing may also require conversion from hexadecimal to decimal, catering for byte order, or other data-type conversion peculiarities.
Templates can be stored as files, and thereby exchanged between users; they are often shared publicly on the manufacturer's website. Most if not all hex editors define their own template file format; there is no trend towards a standard or even compatibility between the various formats in the wild.
Advanced hex editors have scripting systems that let the user create macro-like functionality as a sequence of user-interface commands for automating common tasks. This can be used to provide scripts that automatically patch files (e.g., game cheating, modding, or product fixes provided by the community) or to write more complex/intelligent templates.
Scripting languages vary widely, from product-specific languages resembling MS-DOS batch files to systems that support fully fledged scripting languages such as Lua or Python.
A few select editors[which?] have a plugin system that allows extending the GUI and adding new functionality, usually by loading dynamic link libraries written in a C-compatible language.
|
https://en.wikipedia.org/wiki/Hex_editor
|
In computing, a hex dump is a textual hexadecimal view (on screen or paper) of (often, but not necessarily, binary) computer data, from memory or from a computer file or storage device. Looking at a hex dump of data is usually done in the context of either debugging, reverse engineering or digital forensics.[1] Interactive editors that provide a similar view but also allow manipulating the data in question are called hex editors.
In a hex dump, each byte (8 bits) is represented as a two-digit hexadecimal number. Hex dumps are commonly organized into rows of 8 or 16 bytes, sometimes separated by whitespace. Some hex dumps have the hexadecimal memory address at the beginning.
Some common names for this program function are hexdump, hd, od, xxd and simply dump or even D.
A sample text file:
as displayed by Unix hexdump:
The leftmost column is the hexadecimal displacement (or address) for the values of the following columns. Each row displays 16 bytes, with the exception of the row containing a single *. The * indicates that multiple repetitions of the same display were omitted.
The last line displays the number of bytes taken from the input.
An additional column shows the corresponding ASCII character translation with hexdump -C or hd:
This is helpful when trying to locate TAB characters in a file that is expected to use multiple spaces.
The -v option causes hexdump to display all data verbosely:
The POSIX[2] command od can be used to display a hex dump with the -t x2 option.
Character evaluations can be added with the -c option:
In this output, TAB characters are displayed as \t and NEWLINE characters as \n.
In the CP/M 8-bit operating system used on early personal computers, the standard DUMP program would list a file 16 bytes per line, with the hex offset at the start of the line and the ASCII equivalent of each byte at the end.[3]: 1-41, 5-40–5-46 Bytes outside the standard range of printable ASCII characters (20 to 7E) would be displayed as a single period for visual alignment. The same format was used to display memory when invoking the D command in the standard CP/M debugger DDT.[3]: 4-5 Later incarnations of the format (e.g. in the DOS debugger DEBUG) changed the space between the 8th and 9th bytes to a dash, without changing the overall width.
This notation has been retained in operating systems that were directly or indirectly derived from CP/M, including DR-DOS, MS-DOS/PC DOS, OS/2 and Windows. On Linux systems, the command hexcat also produces this classic output format. The main reason for the design of this format is that it fits the maximum amount of data on a standard 80-character-wide screen or printer while still being very easy to read and skim visually.
Here the leftmost column represents the address at which the bytes represented by the following columns are located. CP/M and various DOS systems ran in real mode on x86 CPUs, where addresses are composed of two parts (base and offset).
In the above examples, the final 00s are non-existent bytes beyond the end of the file. Some dump tools display other characters to make clear that they are beyond the end of the file, typically using spaces or asterisks, e.g.:
or
|
https://en.wikipedia.org/wiki/Hex_dump
|
The Bailey–Borwein–Plouffe formula (BBP formula) is a formula for π. It was discovered in 1995 by Simon Plouffe and is named after the authors of the article in which it was published, David H. Bailey, Peter Borwein, and Plouffe.[1] The formula is:

π = ∑_{k=0}^{∞} (1/16^k) [4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6)]

The BBP formula gives rise to a spigot algorithm for computing the nth base-16 (hexadecimal) digit of π (and therefore also the 4nth binary digit of π) without computing the preceding digits. This does not compute the nth decimal digit of π (i.e., in base 10).[2] But another formula discovered by Plouffe in 2022 allows extracting the nth digit of π in decimal.[3] BBP and BBP-inspired algorithms have been used in projects such as PiHex[4] for calculating many digits of π using distributed computing. The existence of this formula came as a surprise: it had been widely believed that computing the nth digit of π is just as hard as computing the first n digits.[1]
Since its discovery, formulas of the general form:
have been discovered for many other irrational numbers α, where p(k) and q(k) are polynomials with integer coefficients and b ≥ 2 is an integer base.
Formulas of this form are known as BBP-type formulas.[5] Given a number α, there is no known systematic algorithm for finding appropriate p(k), q(k), and b; such formulas are discovered experimentally.
A specialization of the general formula that has produced many results is:
where s, b, and m are integers, and A = (a₁, a₂, …, a_m) is a sequence of integers.
The P function leads to a compact notation for some solutions. For example, the original BBP formula:
can be written as:
Some of the simplest formulae of this type, which were well known before BBP and for which the P function leads to a compact notation, are:
(In fact, this identity holds true for a > 1:
Plouffe was also inspired by the arctan power series of the form (the P notation can also be generalized to the case where b is not an integer):
Using the P function mentioned above, the simplest known formula for π is for s = 1, but m > 1. Many now-discovered formulae are known for b as an exponent of 2 or 3 and m as an exponent of 2 or some other factor-rich value, but where several of the terms of sequence A are zero. The discovery of these formulae involves a computer search for such linear combinations after computing the individual sums. The search procedure consists of choosing a range of parameter values for s, b, and m, evaluating the sums out to many digits, and then using an integer relation-finding algorithm (typically Helaman Ferguson's PSLQ algorithm) to find a sequence A that adds up those intermediate sums to a well-known constant or perhaps to zero.
The original BBP π summation formula was found in 1995 by Plouffe using PSLQ. It is also representable using the P function:
which also reduces to this equivalent ratio of two polynomials:
This formula has been shown through a fairly simple proof to equalπ.[6]
We would like to define a formula that returns the (n+1)-th (with n ≥ 0) hexadecimal digit of π. A few manipulations are required to implement a spigot algorithm using this formula.
We must first rewrite the formula as:
Now, for a particular value of n and taking the first sum, we split the sum to infinity across the nth term:
We now multiply by 16^n, so that the hexadecimal point (the divide between the fractional and integer parts of the number) shifts (or remains, if n = 0) to the left of the (n+1)-th fractional digit:
Since we only care about the fractional part of the sum, we look at our two terms and realise that only the first sum contains terms with an integer part; conversely, the second sum cannot, since the numerator can never be larger than the denominator for k > n. Therefore, we need a trick to remove the unneeded integer parts from the terms of the first sum, in order to speed up and increase the precision of the calculations. That trick is to reduce modulo 8k + 1. Our first sum (out of four) to compute the fractional part then becomes:
Notice how the modulus operator always guarantees that only the fractional parts of the terms of the first sum will be kept. To calculate 16^(n−k) mod (8k + 1) quickly and efficiently, the modular exponentiation algorithm is done at the same loop level, not nested: when its running 16^x product becomes greater than one, the modulus is taken, just as for the running total in each sum.
Now, to complete the calculation, this must be applied to each of the four sums in turn. Once this is done, the four summations are put back into the sum for π:
Since only the fractional part is accurate, extracting the wanted digit requires removing the integer part of the final sum, multiplying by 16, and keeping the integer part to "skim off" the hexadecimal digit at the desired position (in theory, the next few digits, up to the accuracy of the calculations used, would also be accurate).
This process is similar to performing long multiplication, but only having to perform the summation of some middle columns. While there are some carries that are not counted, computers usually perform arithmetic for many bits (32 or 64) and round, and we are only interested in the most significant digit(s). There is a possibility that a particular computation will be akin to failing to add a small number (e.g. 1) to the number 999999999999999, and that the error will propagate to the most significant digit.
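The digit-extraction procedure described above fits in a short program. The following Python sketch (function names are mine) uses the built-in three-argument pow for the modular exponentiation and plain floating point for the tail terms, so it is only reliable for modest values of n:

```python
def bbp_sum(j, n):
    """Fractional part of sum over k of 16^(n-k) / (8k + j)."""
    # Head: terms with k <= n, reduced mod (8k + j) to drop integer parts.
    s = 0.0
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    # Tail: terms with k > n are already fractional; stop once negligible.
    k = n + 1
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s += term
        k += 1
    return s % 1.0

def pi_hex_digit(n):
    """The (n+1)-th hexadecimal digit of the fractional part of pi."""
    x = (4 * bbp_sum(1, n) - 2 * bbp_sum(4, n)
         - bbp_sum(5, n) - bbp_sum(6, n)) % 1.0
    return "%x" % int(16 * x)
```

Since π = 3.243F6A88… in hexadecimal, pi_hex_digit(0) returns '2' without any earlier digit having been computed.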
This algorithm computes π without requiring custom data types having thousands or even millions of digits. The method calculates the nth digit without calculating the first n − 1 digits and can use small, efficient data types. Fabrice Bellard found a variant of BBP, Bellard's formula, which is faster.
Though the BBP formula can directly calculate the value of any given digit of π with less computational effort than formulas that must calculate all intervening digits, BBP remains linearithmic (O(n log n)), whereby successively larger values of n require increasingly more time to calculate; that is, the "further out" a digit is, the longer it takes BBP to calculate it, just like the standard π-computing algorithms.[7]
D. J. Broadhurst provides a generalization of the BBP algorithm that may be used to compute a number of other constants in nearly linear time and logarithmic space.[8] Explicit results are given for Catalan's constant, π³, π⁴, Apéry's constant ζ(3), ζ(5) (where ζ(x) is the Riemann zeta function), log³2, log⁴2, log⁵2, and various products of powers of π and log 2. These results are obtained primarily by the use of polylogarithm ladders.
|
https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula
|
Hexspeak is a novelty form of variant English spelling using the hexadecimal digits. Created by programmers as memorable magic numbers, hexspeak words can serve as clear and unique identifiers with which to mark memory or data.
Hexadecimal notation represents numbers using the 16 digits 0123456789ABCDEF. Using only the letters ABCDEF it is possible to spell several words. Further words can be made by treating some of the decimal numbers as letters: the digit "0" can represent the letter "O", and "1" can represent the letters "I" or "L". Less commonly, "5" can represent "S", "7" can represent "T", "12" can represent "R", and "6" or "9" can represent "G" or "g", respectively. Numbers such as 2, 4 or 8 can be used in a manner similar to leet or rebuses; e.g. the word "defecate" can be expressed either as DEFECA7E or DEFEC8.
Many computer processors, operating systems, and debuggers make use of magic numbers, especially as magic debug values.
Many computer languages require that a hexadecimal number be marked with a prefix or suffix (or both) to identify it as a number. Sometimes the prefix or suffix is used as part of the word.
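As a toy illustration of these substitutions, the following Python sketch (the table and function are mine, and cover only the common substitutions listed above) turns a word into a hexadecimal magic number where possible:

```python
# Leet-style substitutions described above: 0 for O, 1 for I/L, etc.
SUBST = {"O": "0", "I": "1", "L": "1", "S": "5", "T": "7", "G": "6"}

def to_hexspeak(word):
    """Return the word as an integer spelled in hex digits, or None if impossible."""
    digits = []
    for ch in word.upper():
        if ch in "0123456789ABCDEF":
            digits.append(ch)
        elif ch in SUBST:
            digits.append(SUBST[ch])
        else:
            return None  # this letter has no hex or leet equivalent
    return int("".join(digits), 16)

print(hex(to_hexspeak("deadbeef")))  # the classic magic debug value
```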
In reverse engineering aspects of the Sony PlayStation 3, a number of hexspeak codes were found to trigger, affect, or be present in communication to and through the PlayStation 3 hypervisor to its GPU, the RSX Reality Synthesizer.[40]
These projects were largely born out of PS3 homebrew operating on the PS3's OtherOS feature, which allowed Linux to be installed, initially with extremely limited GPU access.
|
https://en.wikipedia.org/wiki/Hexspeak
|
Standard form is a way of expressing numbers that are too large or too small to be conveniently written in decimal form, since doing so would require writing out an inconveniently long string of digits. It may be referred to as scientific form or standard index form, or scientific notation in the United States. This base-ten notation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators, it is usually known as "SCI" display mode.
In scientific notation, nonzero numbers are written in the form
or m times ten raised to the power of n, where n is an integer, and the coefficient m is a nonzero real number (usually between 1 and 10 in absolute value, and nearly always written as a terminating decimal). The integer n is called the exponent and the real number m is called the significand or mantissa.[1] The term "mantissa" can be ambiguous where logarithms are involved, because it is also the traditional name of the fractional part of the common logarithm. If the number is negative then a minus sign precedes m, as in ordinary decimal notation. In normalized notation, the exponent is chosen so that the absolute value (modulus) of the significand m is at least 1 but less than 10.
Decimal floating point is a computer arithmetic system closely related to scientific notation.
For performing calculations with a slide rule, standard-form expression is required. Thus, the use of scientific notation increased as engineers and educators used that tool. See Slide rule § History.
Any real number can be written in the form m × 10^n in many ways: for example, 350 can be written as 3.5 × 10², 35 × 10¹ or 350 × 10⁰.
In normalized scientific notation (called "standard form" in the United Kingdom), the exponent n is chosen so that the absolute value of m remains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as 3.5 × 10². This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number of orders of magnitude separating the numbers. It is also the form that is required when using tables of common logarithms. In normalized notation, the exponent n is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5 × 10⁻¹). The 10 and exponent are often omitted when the exponent is 0. For a series of numbers that are to be added or subtracted (or otherwise compared), it can be convenient to use the same value of m for all elements of the series.
Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized or differently normalized form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation, although the latter term is more general and also applies when m is not restricted to the range 1 to 10 (as in engineering notation, for instance) and to bases other than 10 (for example, 3.15 × 2²⁰).
Engineering notation (often named "ENG" on scientific calculators) differs from normalized scientific notation in that the exponent n is restricted to multiples of 3. Consequently, the absolute value of m is in the range 1 ≤ |m| < 1000, rather than 1 ≤ |m| < 10. Though similar in concept, engineering notation is rarely called scientific notation. Engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5 × 10⁻⁹ m can be read as "twelve-point-five nanometres" and written as 12.5 nm, while its scientific notation equivalent 1.25 × 10⁻⁸ m would likely be read out as "one-point-two-five times ten-to-the-negative-eight metres".
Calculators and computer programs typically present very large or small numbers using scientific notation, and some can be configured to uniformly present all numbers that way. Because superscript exponents like 10⁷ can be inconvenient to display or type, the letter "E" or "e" (for "exponent") is often used to represent "times ten raised to the power of", so that the notation mEn for a decimal significand m and integer exponent n means the same as m × 10^n. For example, 6.022 × 10²³ is written as 6.022E23 or 6.022e23, and 1.6 × 10⁻³⁵ is written as 1.6E-35 or 1.6e-35. While common in computer output, this abbreviated version of scientific notation is discouraged for published documents by some style guides.[2][3]
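Most languages accept this E notation directly in numeric literals and string conversion; in Python, for example:

```python
# E notation is understood both by float literals and by float() parsing.
avogadro = 6.022e23
assert float("1.6E-35") == 1.6e-35   # case of "E" does not matter

# Formatting back out: the "e" format code produces the abbreviated notation.
print(f"{avogadro:.3e}")
```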
Most popular programming languages, including Fortran, C/C++, Python, and JavaScript, use this "E" notation, which comes from Fortran and was present in the first version released for the IBM 704 in 1956.[4] The E notation was already used by the developers of SHARE Operating System (SOS) for the IBM 709 in 1958.[5] Later versions of Fortran (at least since FORTRAN IV as of 1961) also use "D" to signify double precision numbers in scientific notation,[6] and newer Fortran compilers use "Q" to signify quadruple precision.[7] The MATLAB programming language supports the use of either "E" or "D".
The ALGOL 60 (1960) programming language uses a subscript ten "₁₀" character instead of the letter "E", for example: 6.022₁₀23.[8][9] This presented a challenge for computer systems which did not provide such a character, so ALGOL W (1966) replaced the symbol by a single quote, e.g. 6.022'+23,[10] and some Soviet ALGOL variants allowed the use of the Cyrillic letter "ю", e.g. 6.022ю+23. Subsequently, the ALGOL 68 programming language provided a choice of characters: E, e, \, ⊥, or ₁₀.[11] The ALGOL "₁₀" character was included in the Soviet GOST 10859 text encoding (1964), and was added to Unicode 5.2 (2009) as U+23E8 ⏨ DECIMAL EXPONENT SYMBOL.[12]
Some programming languages use other symbols. For instance, Simula uses & (or && for long), as in 6.022&23.[13] Mathematica supports the shorthand notation 6.022*^23 (reserving the letter E for the mathematical constant e).
The first pocket calculators supporting scientific notation appeared in 1972.[14] To enter numbers in scientific notation, calculators include a button labeled "EXP" or "×10^x", among other variants. The displays of pocket calculators of the 1970s did not show an explicit symbol between significand and exponent; instead, one or more digits were left blank (e.g. 6.022 23, as seen in the HP-25), or a pair of smaller and slightly raised digits were reserved for the exponent (e.g. 6.022²³, as seen in the Commodore PR100). In 1976, Hewlett-Packard calculator user Jim Davidson coined the term decapower for the scientific-notation exponent to distinguish it from "normal" exponents, and suggested the letter "D" as a separator between significand and exponent in typewritten numbers (for example, 6.022D23); these gained some currency in the programmable calculator user community.[15] The letters "E" or "D" were used as a scientific-notation separator by Sharp pocket computers released between 1987 and 1995, with "E" used for 10-digit numbers and "D" used for 20-digit double-precision numbers.[16] The Texas Instruments TI-83 and TI-84 series of calculators (1996–present) use a small capital E for the separator.[17]
In 1962, Ronald O. Whitaker of Rowco Engineering Co. proposed a power-of-ten system nomenclature where the exponent would be circled; e.g. 6.022 × 10³ would be written as "6.022③".[18]
A significant figure is a digit in a number that adds to its precision. This includes all nonzero numbers, zeroes between significant digits, and zeroes indicated to be significant. Leading and trailing zeroes are not significant digits, because they exist only to show the scale of the number. Unfortunately, this leads to ambiguity. The number 1230400 is usually read to have five significant figures: 1, 2, 3, 0, and 4, the final two zeroes serving only as placeholders and adding no precision. The same number, however, would be used if the last two digits were also measured precisely and found to equal 0, giving seven significant figures.
When a number is converted into normalized scientific notation, it is scaled down to a number between 1 and 10. All of the significant digits remain, but the placeholding zeroes are no longer required. Thus 1230400 would become 1.2304 × 10⁶ if it had five significant digits. If the number were known to six or seven significant figures, it would be shown as 1.23040 × 10⁶ or 1.230400 × 10⁶. Thus, an additional advantage of scientific notation is that the number of significant figures is unambiguous.
It is customary in scientific measurement to record all the definitely known digits from the measurement and to estimate at least one additional digit if there is any information at all available on its value. The resulting number contains more information than it would without the extra digit, which may be considered a significant digit because it conveys some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together).
Additional information about precision can be conveyed through additional notation. It is often useful to know how exact the final digit or digits are. For instance, the accepted value of the mass of the proton can properly be expressed as 1.67262192369(51) × 10⁻²⁷ kg, which is shorthand for (1.67262192369 ± 0.00000000051) × 10⁻²⁷ kg. However, it is still unclear whether the error (5.1 × 10⁻³⁷ in this case) is the maximum possible error, the standard error, or some other confidence interval.
In normalized scientific notation, in E notation, and in engineering notation, the space (which in typesetting may be represented by a normal-width space or a thin space) that is allowed only before and after "×" or in front of "E" is sometimes omitted, though it is less common to do so before the alphabetical character.[19]
Converting a number in these cases means either converting the number into scientific notation form, converting it back into decimal form, or changing the exponent part of the expression. None of these alter the actual number, only how it is expressed.
First, move the decimal separator point sufficient places, n, to put the number's value within a desired range, between 1 and 10 for normalized notation. If the decimal was moved to the left, append × 10^n; to the right, × 10^−n. To represent the number 1,230,400 in normalized scientific notation, the decimal separator would be moved 6 digits to the left and × 10⁶ appended, resulting in 1.2304 × 10⁶. The number −0.0040321 would have its decimal separator shifted 3 digits to the right instead of the left and yield −4.0321 × 10⁻³ as a result.
To convert a number from scientific notation to decimal notation, first remove the × 10^n on the end, then shift the decimal separator n digits to the right (positive n) or left (negative n). The number 1.2304 × 10⁶ would have its decimal separator shifted 6 digits to the right and become 1,230,400, while −4.0321 × 10⁻³ would have its decimal separator moved 3 digits to the left and be −0.0040321.
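The two conversions just described can be expressed compactly in code. This sketch (function names are mine) normalizes via the base-10 logarithm rather than by literally shifting a decimal separator, which is equivalent for nonzero numbers up to floating-point rounding:

```python
import math

def normalize(x):
    """Return (m, n) with x == m * 10**n and 1 <= |m| < 10, for nonzero x."""
    n = math.floor(math.log10(abs(x)))  # how far the separator must move
    m = x / 10.0 ** n
    return m, n

def denormalize(m, n):
    """Inverse conversion back to ordinary decimal form."""
    return m * 10.0 ** n

print(normalize(1230400))      # the worked example above
print(normalize(-0.0040321))
```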
Conversion between different scientific notation representations of the same number with different exponent values is achieved by performing the opposite operations of multiplication or division by a power of ten on the significand and a subtraction or addition of one on the exponent part. The decimal separator in the significand is shifted x places to the left (or right) and x is added to (or subtracted from) the exponent, as shown below.
Given two numbers in scientific notation, x₀ = m₀ × 10^n₀ and x₁ = m₁ × 10^n₁:
Multiplication and division are performed using the rules for operation with exponentiation: x₀x₁ = m₀m₁ × 10^(n₀+n₁) and x₀/x₁ = (m₀/m₁) × 10^(n₀−n₁).
Some examples are: 5.67 × 10⁻⁵ × 2.34 × 10² ≈ 13.3 × 10^(−5+2) = 13.3 × 10⁻³ = 1.33 × 10⁻², and (2.34 × 10²)/(5.67 × 10⁻⁵) ≈ 0.413 × 10^(2−(−5)) = 0.413 × 10⁷ = 4.13 × 10⁶.
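These exponent rules, together with the renormalization step visible in the examples, can be sketched as follows (function names are mine):

```python
def mul_sci(m0, n0, m1, n1):
    """(m0 × 10^n0) · (m1 × 10^n1), renormalized so that 1 <= |m| < 10."""
    m, n = m0 * m1, n0 + n1
    while abs(m) >= 10:           # e.g. 13.3 × 10^-3  ->  1.33 × 10^-2
        m, n = m / 10, n + 1
    while 0 < abs(m) < 1:         # e.g. 0.413 × 10^7  ->  4.13 × 10^6
        m, n = m * 10, n - 1
    return m, n

def div_sci(m0, n0, m1, n1):
    """Division reuses multiplication with the reciprocal of the divisor."""
    return mul_sci(m0, n0, 1 / m1, -n1)

print(mul_sci(5.67, -5, 2.34, 2))   # the first worked example above
```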
Addition and subtraction require the numbers to be represented using the same exponential part, so that the significands can be simply added or subtracted:
Next, add or subtract the significands: x₀ ± x₁ = (m₀ ± m₁) × 10^n₀
An example: 2.34 × 10⁻⁵ + 5.67 × 10⁻⁶ = 2.34 × 10⁻⁵ + 0.567 × 10⁻⁵ = 2.907 × 10⁻⁵
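The rescaling step used in the example above can be sketched as (function name is mine):

```python
def add_sci(m0, n0, m1, n1):
    """Add two scientific-notation numbers by rescaling to the larger exponent."""
    if n0 < n1:
        (m0, n0), (m1, n1) = (m1, n1), (m0, n0)
    m1 *= 10.0 ** (n1 - n0)       # shift the smaller-exponent significand
    return m0 + m1, n0

print(add_sci(2.34, -5, 5.67, -6))   # matches the worked example
```

A production version would renormalize the result as well, since the sum of two significands can reach or exceed 10.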
While base ten is normally used for scientific notation, powers of other bases can be used too,[25] base 2 being the next most commonly used one.
For example, in base-2 scientific notation, the number 1001b in binary (= 9d) is written as 1.001b × 2^11b or 1.001b × 10b^11b using binary numbers (or, more briefly, 1.001 × 10^11 if the binary context is obvious). In E notation, this is written as 1.001bE11b (or, more briefly, 1.001E11), with the letter "E" now standing for "times two (10b) to the power". In order to better distinguish this base-2 exponent from a base-10 exponent, a base-2 exponent is sometimes also indicated by using the letter "B" instead of "E",[26] a shorthand notation originally proposed by Bruce Alan Martin of Brookhaven National Laboratory in 1968,[27] as in 1.001bB11b (or, more briefly, 1.001B11). For comparison, the same number in decimal representation is 1.125 × 2³, or 1.125B3 (still using decimal representation). Some calculators use a mixed representation for binary floating-point numbers, where the exponent is displayed as a decimal number even in binary mode, so the above becomes 1.001b × 10b^3d, or more briefly 1.001B3.[26]
This is closely related to the base-2 floating-point representation commonly used in computer arithmetic, and the usage of IEC binary prefixes (e.g. 1B10 for 1 × 2¹⁰ (kibi), 1B20 for 1 × 2²⁰ (mebi), 1B30 for 1 × 2³⁰ (gibi), 1B40 for 1 × 2⁴⁰ (tebi)).
Similar to "B" (or "b"[28]), the letters "H"[26] (or "h"[28]) and "O"[26] (or "o",[28] or "C"[26]) are sometimes also used to indicate times 16 or 8 to the power, as in 1.25 = 1.40h × 10h^0h = 1.40H0 = 1.40h0, or 98000 = 2.7732o × 10o^5o = 2.7732o5 = 2.7732C5.[26]
Another similar convention to denote base-2 exponents is using a letter "P" (or "p", for "power"). In this notation the significand is always meant to be hexadecimal, whereas the exponent is always meant to be decimal.[29] This notation can be produced by implementations of the printf family of functions following the C99 specification and the (Single Unix Specification) IEEE Std 1003.1 POSIX standard, when using the %a or %A conversion specifiers.[29][30][31] Starting with C++11, C++ I/O functions could parse and print the P notation as well; the notation has been fully adopted by the language standard since C++17.[32] Apple's Swift supports it as well.[33] It is also required by the IEEE 754-2008 binary floating-point standard. Example: 1.3DEp42 represents 1.3DEh × 2⁴².
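Python's float type speaks exactly this P notation through float.hex() and float.fromhex(), which makes it easy to experiment with the example above:

```python
# P notation: hexadecimal significand, "p", then a decimal base-2 exponent.
x = float.fromhex("0x1.3DEp42")   # the 1.3DE × 2^42 example
print(x)
print(x.hex())                    # round-trips through the same notation

# 0x1.8p1 is 1.5 × 2 = 3.
assert float.fromhex("0x1.8p1") == 3.0
```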
Engineering notation can be viewed as base-1000 scientific notation.
Sayre, David, ed. (1956-10-15). The FORTRAN Automatic Coding System for the IBM 704 EDPM: Programmer's Reference Manual (PDF). New York: Applied Science Division and Programming Research Department, International Business Machines Corporation. pp. 9, 27. Retrieved 2022-07-04. (2+51+1 pages)
"6. Extensions: 6.1 Extensions implemented in GNU Fortran: 6.1.8 Q exponent-letter". The GNU Fortran Compiler. 2014-06-12. Retrieved 2022-12-21.
"The Unicode Standard" (v. 7.0.0 ed.). Retrieved 2018-03-23.
Vanderburgh, Richard C., ed. (November 1976). "Decapower" (PDF). 52-Notes – Newsletter of the SR-52 Users Club. 1 (6). Dayton, OH: 1. V1N6P1. Retrieved 2017-05-28. (NB. The term decapower was frequently used in subsequent issues of this newsletter up to at least 1978.)
電言板6 PC-U6000 PROGRAM LIBRARY [Telephone board 6 PC-U6000 program library] (in Japanese). Vol. 6. University Co-op. 1993.
"TI-83 Programmer's Guide" (PDF). Retrieved 2010-03-09.
"INTOUCH 4GL a Guide to the INTOUCH Language". Archived from the original on 2015-05-03.
|
https://en.wikipedia.org/wiki/P_notation
|
Mixed radix numeral systems are non-standard positional numeral systems in which the numerical base varies from position to position. Such numerical representation applies when a quantity is expressed using a sequence of units that are each a multiple of the next smaller one, but not by the same factor. Such units are common, for instance, in measuring time; a time of 32 weeks, 5 days, 7 hours, 45 minutes, 15 seconds, and 500 milliseconds might be expressed as a number of minutes in mixed-radix notation as:
or as
In the tabular format, the digits are written above their base, and a semicolon indicates the radix point. In numeral format, each digit has its associated base attached as a subscript, and the radix point is marked by a full stop or period. The base for each digit is the number of corresponding units that make up the next larger unit. As a consequence there is no base (written as ∞) for the first (most significant) digit, since here the "next larger unit" does not exist (and one could not add a larger unit of "month" or "year" to the sequence of units, as they are not integer multiples of "week").
The most familiar example of mixed-radix systems is in timekeeping and calendars. Western time radices include, both cardinally and ordinally, decimal years, decades, and centuries, septenary for days in a week, duodecimal months in a year, bases 28–31 for days within a month, as well as base 52 for weeks in a year. Time is further divided into hours counted in base 24, sexagesimal minutes within an hour and seconds within a minute, with decimal fractions of the latter.
A standard form for dates is 2021-04-10 16:31:15, which would be a mixed-radix number by this definition, with the consideration that the quantities of days vary both per month and with leap years. One proposed calendar instead uses base-13 months, quaternary weeks, and septenary days.
A mixed radix numeral system is often best expressed with a table. A table describing what can be understood as the 604800 seconds of a week is as follows, with the week beginning on hour 0 of day 0 (midnight on Sunday):
In this numeral system, the mixed-radix numeral 3₇ 17₂₄ 51₆₀ 57₆₀ seconds would be interpreted as 17:51:57 on Wednesday, and 0₇ 0₂₄ 02₆₀ 24₆₀ would be 00:02:24 on Sunday. Ad hoc notations for mixed-radix numeral systems are commonplace.
The Maya calendar consists of several overlapping cycles of different radices. A short count tzolk'in overlaps base-20 named days with tridecimal numbered days. A haab' consists of vigesimal days, octodecimal months, and base-52 years, forming a round. In addition, a long count of vigesimal days, octodecimal winal, then base-24 tun, k'atun, b'ak'tun, etc., tracks historical dates.
A second example of a mixed-radix numeral system in current use is in the design and use of currency, where a limited set of denominations are printed or minted with the objective of being able to represent any monetary quantity; the amount of money is then represented by the number of coins or banknotes of each denomination. When deciding which denominations to create (and hence which radices to mix), a compromise is aimed for between a minimal number of different denominations and a minimal number of individual pieces of coinage required to represent typical quantities. So, for example, in the UK, banknotes are printed for £50, £20, £10 and £5, and coins are minted for £2, £1, 50p, 20p, 10p, 5p, 2p and 1p; these follow the 1-2-5 series of preferred values.
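Representing an amount with few pieces under such a denomination series is commonly done with a greedy pass from the largest denomination down. A Python sketch (the list mirrors the UK denominations named above, expressed in pence; the function name is mine):

```python
DENOMINATIONS = [5000, 2000, 1000, 500, 200, 100, 50, 20, 10, 5, 2, 1]  # pence

def make_change(pence):
    """Greedy decomposition of an amount into denomination counts."""
    counts = {}
    for d in DENOMINATIONS:
        counts[d], pence = divmod(pence, d)   # take as many of d as fit
    return {d: c for d, c in counts.items() if c}

print(make_change(2768))   # £27.68
```

For a 1-2-5 series this greedy choice also happens to minimize the number of pieces, which is one reason such series are popular.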
Prior to decimalisation, monetary amounts in the UK were described in terms of pounds, shillings, and pence, with 12 pence per shilling and 20 shillings per pound, so that "£1 7s 6d", for example, corresponded to the mixed-radix numeral 1∞ 7₂₀ 6₁₂.
United States customary units are generally mixed-radix systems, with multipliers varying from one size unit to the next in the same manner that units of time do.
Mixed-radix representation is also relevant to mixed-radix versions of the Cooley–Tukey FFT algorithm, in which the indices of the input values are expanded in a mixed-radix representation, the indices of the output values are expanded in a corresponding mixed-radix representation with the order of the bases and digits reversed, and each subtransform can be regarded as a Fourier transform in one digit for all values of the remaining digits.
Mixed-radix numbers of the same base can be manipulated using a generalization of manual arithmetic algorithms. Conversion of values from one mixed base to another is easily accomplished by first converting the place values of the one system into the other, and then applying the digits from the one system against these.
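As a sketch of such digit manipulation, the following Python functions (names are mine) convert between a plain count of seconds and the week-relative day/hour/minute/second mixed-radix digits discussed above:

```python
BASES = [7, 24, 60, 60]   # days/week, hours/day, minutes/hour, seconds/minute

def to_mixed(seconds, bases=BASES):
    """Mixed-radix digits, most significant (weeks, which has no base) first."""
    digits = []
    for b in reversed(bases):
        seconds, d = divmod(seconds, b)   # peel off the least significant digit
        digits.append(d)
    digits.append(seconds)                # leftmost digit has no base
    return digits[::-1]

def from_mixed(digits, bases=BASES):
    """Inverse conversion: fold the digits back into a single count."""
    value = digits[0]
    for d, b in zip(digits[1:], bases):
        value = value * b + d
    return value

# 17:51:57 on Wednesday (day 3 of the week):
print(to_mixed(323517))
```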
APLandJinclude operators to convert to and from mixed-radix systems.
Another proposal is the so-called factorial number system:
For example, the biggest number that could be represented with six digits would be 543210, which equals 719 in decimal: 5×5! + 4×4! + 3×3! + 2×2! + 1×1!. It might not be clear at first sight, but the factorial-based numbering system is unambiguous and complete: every number can be represented in one and only one way, because the sum of the respective factorials multiplied by their indices is always the next factorial minus one:
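A minimal sketch of this system (the trailing radix-1 digit, which is always 0, is kept so the output matches the 543210 example; helper names are made up):

```python
from math import factorial

# Sketch: integer <-> factorial-base digits. The digit for place k has
# radix k + 1, so the least significant digit (radix 1) is always 0.
def to_factorial_base(n, width):
    digits = []
    for k in range(1, width + 1):   # divide by 1, 2, 3, ... collecting remainders
        n, d = divmod(n, k)
        digits.append(d)
    return digits[::-1]             # most significant place first

def from_factorial_base(digits):
    n = 0
    for k, d in enumerate(reversed(digits), start=1):
        n += d * factorial(k - 1)
    return n
```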
There is a natural mapping between the integers 0, ...,n! − 1 andpermutationsofnelements in lexicographic order, which uses the factorial representation of the integer, followed by an interpretation as aLehmer code.
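This mapping can be sketched directly: the factorial-base digits of the index select which remaining element to output next (the Lehmer code).

```python
from math import factorial

# Sketch: return the permutation of range(n) at position `index`
# (0 <= index < n!) in lexicographic order.
def nth_permutation(index, n):
    items = list(range(n))
    perm = []
    for k in range(n - 1, -1, -1):
        digit, index = divmod(index, factorial(k))  # factorial-base digit
        perm.append(items.pop(digit))               # Lehmer-code selection
    return perm
```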
The above equation is a particular case of the following general rule for any radix (either standard or mixed) base representation, which expresses the fact that any such representation is unambiguous and complete. Every number can be represented in one and only one way, because the sum of each place's largest digit multiplied by its weight is always the next weight minus one: with weights w0=1{\displaystyle w_{0}=1} and wi+1=biwi{\displaystyle w_{i+1}=b_{i}w_{i}} for radices bi{\displaystyle b_{i}}, one has ∑i=0n(bi−1)wi=wn+1−1,{\displaystyle \sum _{i=0}^{n}(b_{i}-1)w_{i}=w_{n+1}-1,}
which can be easily proved with mathematical induction.
Another proposal is the number system with successive prime numbers as radix, whose place values are primorial numbers, considered by S. S. Pillai,[1] Richard K. Guy (sequence A049345 in the OEIS), and other authors:[2][3][4]
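A sketch of conversion into this prime-radix (primorial place value) system, with only the first few primes hard-coded for illustration:

```python
# Sketch: integer -> primorial-base digits, where the digit in place k
# runs from 0 to p_k - 1 for the k-th prime (2, 3, 5, ...).
def to_primorial_base(n, primes=(2, 3, 5, 7, 11, 13)):
    digits = []
    for p in primes:
        n, d = divmod(n, p)
        digits.append(d)
    return digits[::-1]   # most significant place first
```

For example, 29 = 4·6 + 2·2 + 1·1, using the primorial place values 1, 2, 6, 30, ...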
https://en.wikipedia.org/wiki/Mixed_radix
In mathematics, a polynomial is a mathematical expression consisting of indeterminates (also called variables) and coefficients, that involves only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms.[1][2][3][4][5] An example of a polynomial of a single indeterminate x is x² − 4x + 7. An example with three indeterminates is x³ + 2xyz² − yz + 1.
Polynomials appear in many areas of mathematics and science. For example, they are used to form polynomial equations, which encode a wide range of problems, from elementary word problems to complicated scientific problems; they are used to define polynomial functions, which appear in settings ranging from basic chemistry and physics to economics and social science; and they are used in calculus and numerical analysis to approximate other functions. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, which are central concepts in algebra and algebraic geometry.
The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. That is, it means a sum of many terms (many monomials). The word polynomial was first used in the 17th century.[6]
The x occurring in a polynomial is commonly called a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value (its value is "indeterminate"). However, when one considers the function defined by the polynomial, then x represents the argument of the function, and is therefore called a "variable". Many authors use these two words interchangeably.
A polynomialPin the indeterminatexis commonly denoted either asPor asP(x). Formally, the name of the polynomial isP, notP(x), but the use of thefunctional notationP(x) dates from a time when the distinction between a polynomial and the associated function was unclear. Moreover, the functional notation is often useful for specifying, in a single phrase, a polynomial and its indeterminate. For example, "letP(x) be a polynomial" is a shorthand for "letPbe a polynomial in the indeterminatex". On the other hand, when it is not necessary to emphasize the name of the indeterminate, many formulas are much simpler and easier to read if the name(s) of the indeterminate(s) do not appear at each occurrence of the polynomial.
The ambiguity of having two notations for a single mathematical object may be formally resolved by considering the general meaning of the functional notation for polynomials.
Ifadenotes a number, a variable, another polynomial, or, more generally, any expression, thenP(a) denotes, by convention, the result of substitutingaforxinP. Thus, the polynomialPdefines the functiona↦P(a),{\displaystyle a\mapsto P(a),}which is thepolynomial functionassociated toP.
Frequently, when using this notation, one supposes thatais a number. However, one may use it over any domain where addition and multiplication are defined (that is, anyring). In particular, ifais a polynomial thenP(a) is also a polynomial.
More specifically, whenais the indeterminatex, then theimageofxby this function is the polynomialPitself (substitutingxforxdoes not change anything). In other words,P(x)=P,{\displaystyle P(x)=P,}which justifies formally the existence of two notations for the same polynomial.
Apolynomial expressionis anexpressionthat can be built fromconstantsand symbols calledvariablesorindeterminatesby means ofaddition,multiplicationandexponentiationto anon-negative integerpower. The constants are generallynumbers, but may be any expression that do not involve the indeterminates, and representmathematical objectsthat can be added and multiplied. Two polynomial expressions are considered as defining the samepolynomialif they may be transformed, one to the other, by applying the usual properties ofcommutativity,associativityanddistributivityof addition and multiplication. For example(x−1)(x−2){\displaystyle (x-1)(x-2)}andx2−3x+2{\displaystyle x^{2}-3x+2}are two polynomial expressions that represent the same polynomial; so, one has theequality(x−1)(x−2)=x2−3x+2{\displaystyle (x-1)(x-2)=x^{2}-3x+2}.
A polynomial in a single indeterminatexcan always be written (or rewritten) in the formanxn+an−1xn−1+⋯+a2x2+a1x+a0,{\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\dotsb +a_{2}x^{2}+a_{1}x+a_{0},}wherea0,…,an{\displaystyle a_{0},\ldots ,a_{n}}are constants that are called thecoefficientsof the polynomial, andx{\displaystyle x}is the indeterminate.[7]The word "indeterminate" means thatx{\displaystyle x}represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is afunction, called apolynomial function.
This can be expressed more concisely by usingsummation notation:∑k=0nakxk{\displaystyle \sum _{k=0}^{n}a_{k}x^{k}}That is, a polynomial can either be zero or can be written as the sum of a finite number of non-zeroterms. Each term consists of the product of a number – called thecoefficientof the term[a]– and a finite number of indeterminates, raised to non-negative integer powers.
The exponent on an indeterminate in a term is called the degree of that indeterminate in that term; the degree of the term is the sum of the degrees of the indeterminates in that term, and the degree of a polynomial is the largest degree of any term with nonzero coefficient.[8]Becausex=x1, the degree of an indeterminate without a written exponent is one.
A term with no indeterminates and a polynomial with no indeterminates are called, respectively, aconstant termand aconstant polynomial.[b]The degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial 0 (which has no terms at all) is generally treated as not defined (but see below).[9]
For example:−5x2y{\displaystyle -5x^{2}y}is a term. The coefficient is−5, the indeterminates arexandy, the degree ofxis two, while the degree ofyis one. The degree of the entire term is the sum of the degrees of each indeterminate in it, so in this example the degree is2 + 1 = 3.
Forming a sum of several terms produces a polynomial. For example, the following is a polynomial:3x2⏟term1−5x⏟term2+4⏟term3.{\displaystyle \underbrace {_{\,}3x^{2}} _{\begin{smallmatrix}\mathrm {term} \\\mathrm {1} \end{smallmatrix}}\underbrace {-_{\,}5x} _{\begin{smallmatrix}\mathrm {term} \\\mathrm {2} \end{smallmatrix}}\underbrace {+_{\,}4} _{\begin{smallmatrix}\mathrm {term} \\\mathrm {3} \end{smallmatrix}}.}It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero.
Polynomials of small degree have been given specific names. A polynomial of degree zero is aconstant polynomial, or simply aconstant. Polynomials of degree one, two or three are respectivelylinear polynomials,quadratic polynomialsandcubic polynomials.[8]For higher degrees, the specific names are not commonly used, althoughquartic polynomial(for degree four) andquintic polynomial(for degree five) are sometimes used. The names for the degrees may be applied to the polynomial or to its terms. For example, the term2xinx2+ 2x+ 1is a linear term in a quadratic polynomial.
The polynomial 0, which may be considered to have no terms at all, is called thezero polynomial. Unlike other constant polynomials, its degree is not zero. Rather, the degree of the zero polynomial is either left explicitly undefined, or defined as negative (either −1 or −∞).[10]The zero polynomial is also unique in that it is the only polynomial in one indeterminate that has an infinite number ofroots. The graph of the zero polynomial,f(x) = 0, is thex-axis.
In the case of polynomials in more than one indeterminate, a polynomial is calledhomogeneousofdegreenifallof its non-zero terms havedegreen. The zero polynomial is homogeneous, and, as a homogeneous polynomial, its degree is undefined.[c]For example,x3y2+ 7x2y3− 3x5is homogeneous of degree 5. For more details, seeHomogeneous polynomial.
The commutative law of addition can be used to rearrange terms into any preferred order. In polynomials with one indeterminate, the terms are usually ordered according to degree, either in "descending powers of x", with the term of largest degree first, or in "ascending powers of x". The polynomial 3x² − 5x + 4 is written in descending powers of x. The first term has coefficient 3, indeterminate x, and exponent 2. In the second term, the coefficient is −5. The third term is a constant. Because the degree of a non-zero polynomial is the largest degree of any one term, this polynomial has degree two.[11]
Two terms with the same indeterminates raised to the same powers are called "similar terms" or "like terms", and they can be combined, using thedistributive law, into a single term whose coefficient is the sum of the coefficients of the terms that were combined. It may happen that this makes the coefficient 0.[12]Polynomials can be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called amonomial,[d]a two-term polynomial is called abinomial, and a three-term polynomial is called atrinomial.
Areal polynomialis a polynomial withrealcoefficients. When it is used to define afunction, thedomainis not so restricted. However, areal polynomial functionis a function from the reals to the reals that is defined by a real polynomial. Similarly, aninteger polynomialis a polynomial withintegercoefficients, and acomplex polynomialis a polynomial withcomplexcoefficients.
A polynomial in one indeterminate is called aunivariate polynomial, a polynomial in more than one indeterminate is called amultivariate polynomial. A polynomial with two indeterminates is called abivariate polynomial.[7]These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance, when working with univariate polynomials, one does not exclude constant polynomials (which may result from the subtraction of non-constant polynomials), although strictly speaking, constant polynomials do not contain any indeterminates at all. It is possible to further classify multivariate polynomials asbivariate,trivariate, and so on, according to the maximum number of indeterminates allowed. Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on. It is also common to say simply "polynomials inx,y, andz", listing the indeterminates allowed.
Polynomials can be added using theassociative lawof addition (grouping all their terms together into a single sum), possibly followed by reordering (using thecommutative law) and combining of like terms.[12][13]For example, ifP=3x2−2x+5xy−2{\displaystyle P=3x^{2}-2x+5xy-2}andQ=−3x2+3x+4y2+8{\displaystyle Q=-3x^{2}+3x+4y^{2}+8}then the sumP+Q=3x2−2x+5xy−2−3x2+3x+4y2+8{\displaystyle P+Q=3x^{2}-2x+5xy-2-3x^{2}+3x+4y^{2}+8}can be reordered and regrouped asP+Q=(3x2−3x2)+(−2x+3x)+5xy+4y2+(8−2){\displaystyle P+Q=(3x^{2}-3x^{2})+(-2x+3x)+5xy+4y^{2}+(8-2)}and then simplified toP+Q=x+5xy+4y2+6.{\displaystyle P+Q=x+5xy+4y^{2}+6.}When polynomials are added together, the result is another polynomial.[14]
Subtraction of polynomials is similar.
Polynomials can also be multiplied. To expand theproductof two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other.[12]For example, ifP=2x+3y+5Q=2x+5y+xy+1{\displaystyle {\begin{aligned}\color {Red}P&\color {Red}{=2x+3y+5}\\\color {Blue}Q&\color {Blue}{=2x+5y+xy+1}\end{aligned}}}thenPQ=(2x⋅2x)+(2x⋅5y)+(2x⋅xy)+(2x⋅1)+(3y⋅2x)+(3y⋅5y)+(3y⋅xy)+(3y⋅1)+(5⋅2x)+(5⋅5y)+(5⋅xy)+(5⋅1){\displaystyle {\begin{array}{rccrcrcrcr}{\color {Red}{P}}{\color {Blue}{Q}}&{=}&&({\color {Red}{2x}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{2x}}\cdot {\color {Blue}{1}})\\&&+&({\color {Red}{3y}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{3y}}\cdot {\color {Blue}{1}})\\&&+&({\color {Red}{5}}\cdot {\color {Blue}{2x}})&+&({\color {Red}{5}}\cdot {\color {Blue}{5y}})&+&({\color {Red}{5}}\cdot {\color {Blue}{xy}})&+&({\color {Red}{5}}\cdot {\color {Blue}{1}})\end{array}}}Carrying out the multiplication in each term producesPQ=4x2+10xy+2x2y+2x+6xy+15y2+3xy2+3y+10x+25y+5xy+5.{\displaystyle {\begin{array}{rccrcrcrcr}PQ&=&&4x^{2}&+&10xy&+&2x^{2}y&+&2x\\&&+&6xy&+&15y^{2}&+&3xy^{2}&+&3y\\&&+&10x&+&25y&+&5xy&+&5.\end{array}}}Combining similar terms yieldsPQ=4x2+(10xy+6xy+5xy)+2x2y+(2x+10x)+15y2+3xy2+(3y+25y)+5{\displaystyle {\begin{array}{rcccrcrcrcr}PQ&=&&4x^{2}&+&(10xy+6xy+5xy)&+&2x^{2}y&+&(2x+10x)\\&&+&15y^{2}&+&3xy^{2}&+&(3y+25y)&+&5\end{array}}}which can be simplified toPQ=4x2+21xy+2x2y+12x+15y2+3xy2+28y+5.{\displaystyle PQ=4x^{2}+21xy+2x^{2}y+12x+15y^{2}+3xy^{2}+28y+5.}As in the example, the product of polynomials is always a polynomial.[14][9]
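The addition and multiplication rules above can be sketched with univariate polynomials stored as coefficient lists (index = power of x; the helper names are made up):

```python
# Sketch: univariate polynomials as coefficient lists, index = power of x.
def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))   # pad to a common length
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # Distributive law: every term of p multiplies every term of q,
    # and exponents add, so coefficient i*j lands in slot i + j.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out
```

For example, (x + 1)(x − 1) gives the coefficient list of x² − 1.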
Given a polynomial f{\displaystyle f} of a single variable and another polynomial g of any number of variables, the composition f∘g{\displaystyle f\circ g} is obtained by substituting each copy of the variable of the first polynomial by the second polynomial.[9] For example, if f(x)=x2+2x{\displaystyle f(x)=x^{2}+2x} and g(x)=3x+2{\displaystyle g(x)=3x+2} then (f∘g)(x)=f(g(x))=(3x+2)2+2(3x+2).{\displaystyle (f\circ g)(x)=f(g(x))=(3x+2)^{2}+2(3x+2).} A composition may be expanded to a sum of terms using the rules for addition and multiplication of polynomials. The composition of two polynomials is another polynomial.[15]
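Composition can be sketched with the same coefficient-list convention (index = power of x), accumulating Horner-style; all helper names here are made up:

```python
# Sketch: f(g(x)) for coefficient-list polynomials, self-contained helpers.
def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_compose(f, g):
    # Horner-style: f(g) = a_0 + g*(a_1 + g*(a_2 + ...))
    result = [0]
    for a in reversed(f):
        result = poly_add(poly_mul(result, g), [a])
    return result
```

With f(x) = x² + 2x and g(x) = 3x + 2 as in the text, the result expands to 9x² + 18x + 8 (possibly with a trailing zero coefficient in this sketch).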
The division of one polynomial by another is not typically a polynomial. Instead, such ratios are a more general family of objects, calledrational fractions,rational expressions, orrational functions, depending on context.[16]This is analogous to the fact that the ratio of twointegersis arational number, not necessarily an integer.[17][18]For example, the fraction1/(x2+ 1)is not a polynomial, and it cannot be written as a finite sum of powers of the variablex.
For polynomials in one variable, there is a notion ofEuclidean division of polynomials, generalizing theEuclidean divisionof integers.[e]This notion of the divisiona(x)/b(x)results in two polynomials, aquotientq(x)and aremainderr(x), such thata=bq+randdegree(r) < degree(b). The quotient and remainder may be computed by any of several algorithms, includingpolynomial long divisionandsynthetic division.[19]
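Euclidean division can be sketched over the rationals with exact Fraction arithmetic (coefficient lists, index = power of x; `poly_divmod` is an assumed name, not a library API):

```python
from fractions import Fraction

# Sketch: polynomial long division a = b*q + r with degree(r) < degree(b).
def poly_divmod(a, b):
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while b and b[-1] == 0:          # normalize divisor: drop trailing zeros
        b.pop()
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    r = a[:]
    while len(r) >= len(b) and any(r):
        shift = len(r) - len(b)
        coeff = r[-1] / b[-1]        # cancel the leading term of r
        q[shift] = coeff
        for i, c in enumerate(b):    # subtract coeff * x^shift * b
            r[i + shift] -= coeff * c
        while r and r[-1] == 0:
            r.pop()
    return q, r
```

For instance, dividing x² + 1 by x − 1 yields quotient x + 1 and remainder 2.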
When the denominatorb(x)ismonicand linear, that is,b(x) =x−cfor some constantc, then thepolynomial remainder theoremasserts that the remainder of the division ofa(x)byb(x)is theevaluationa(c).[18]In this case, the quotient may be computed byRuffini's rule, a special case of synthetic division.[20]
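Ruffini's rule can be sketched in a few lines; here coefficients are listed from highest to lowest degree, and the final synthetic-division value is both the remainder and the evaluation a(c), as the remainder theorem asserts:

```python
# Sketch: Ruffini's rule (synthetic division), dividing a(x) by x - c.
# Coefficients from highest to lowest degree; returns (quotient, remainder).
def ruffini(coeffs, c):
    out = [coeffs[0]]
    for a in coeffs[1:]:
        out.append(a + c * out[-1])   # bring down, multiply by c, add
    return out[:-1], out[-1]
```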
All polynomials with coefficients in aunique factorization domain(for example, the integers or afield) also have a factored form in which the polynomial is written as a product ofirreducible polynomialsand a constant. This factored form is unique up to the order of the factors and their multiplication by an invertible constant. In the case of the field ofcomplex numbers, the irreducible factors are linear. Over thereal numbers, they have the degree either one or two. Over the integers and therational numbersthe irreducible factors may have any degree.[21]For example, the factored form of5x3−5{\displaystyle 5x^{3}-5}is5(x−1)(x2+x+1){\displaystyle 5(x-1)\left(x^{2}+x+1\right)}over the integers and the reals, and5(x−1)(x+1+i32)(x+1−i32){\displaystyle 5(x-1)\left(x+{\frac {1+i{\sqrt {3}}}{2}}\right)\left(x+{\frac {1-i{\sqrt {3}}}{2}}\right)}over the complex numbers.
The computation of the factored form, called factorization, is, in general, too difficult to be done by hand-written computation. However, efficient polynomial factorization algorithms are available in most computer algebra systems.
Calculatingderivativesand integrals of polynomials is particularly simple, compared to other kinds of functions.
Thederivativeof the polynomialP=anxn+an−1xn−1+⋯+a2x2+a1x+a0=∑i=0naixi{\displaystyle P=a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{2}x^{2}+a_{1}x+a_{0}=\sum _{i=0}^{n}a_{i}x^{i}}with respect toxis the polynomialnanxn−1+(n−1)an−1xn−2+⋯+2a2x+a1=∑i=1niaixi−1.{\displaystyle na_{n}x^{n-1}+(n-1)a_{n-1}x^{n-2}+\dots +2a_{2}x+a_{1}=\sum _{i=1}^{n}ia_{i}x^{i-1}.}Similarly, the generalantiderivative(or indefinite integral) ofP{\displaystyle P}isanxn+1n+1+an−1xnn+⋯+a2x33+a1x22+a0x+c=c+∑i=0naixi+1i+1{\displaystyle {\frac {a_{n}x^{n+1}}{n+1}}+{\frac {a_{n-1}x^{n}}{n}}+\dots +{\frac {a_{2}x^{3}}{3}}+{\frac {a_{1}x^{2}}{2}}+a_{0}x+c=c+\sum _{i=0}^{n}{\frac {a_{i}x^{i+1}}{i+1}}}wherecis an arbitrary constant. For example, antiderivatives ofx2+ 1have the form1/3x3+x+c.
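Both formulas translate directly to coefficient lists (index = power of x); Fractions keep the antiderivative's divisions exact. The helper names are assumptions:

```python
from fractions import Fraction

# Sketch: termwise derivative and antiderivative of a coefficient list.
def poly_deriv(p):
    # d/dx of a_i x^i is i*a_i x^(i-1); the constant term disappears.
    return [i * a for i, a in enumerate(p)][1:]

def poly_antideriv(p, c=0):
    # integral of a_i x^i is a_i/(i+1) x^(i+1), plus an arbitrary constant c.
    return [Fraction(c)] + [Fraction(a, i + 1) for i, a in enumerate(p)]
```

For x² + 1 this gives derivative 2x and antiderivative x³/3 + x + c, matching the example in the text.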
For polynomials whose coefficients come from more abstract settings (for example, if the coefficients are integersmodulosomeprime numberp, or elements of an arbitrary ring), the formula for the derivative can still be interpreted formally, with the coefficientkakunderstood to mean the sum ofkcopies ofak. For example, over the integers modulop, the derivative of the polynomialxp+xis the polynomial1.[22]
Apolynomial functionis a function that can be defined byevaluatinga polynomial. More precisely, a functionfof oneargumentfrom a given domain is a polynomial function if there exists a polynomialanxn+an−1xn−1+⋯+a2x2+a1x+a0{\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{2}x^{2}+a_{1}x+a_{0}}that evaluates tof(x){\displaystyle f(x)}for allxin thedomainoff(here,nis a non-negative integer anda0,a1,a2, ...,anare constant coefficients).[23]Generally, unless otherwise specified, polynomial functions havecomplexcoefficients, arguments, and values. In particular, a polynomial, restricted to have real coefficients, defines a function from the complex numbers to the complex numbers. If the domain of this function is alsorestrictedto the reals, the resulting function is areal functionthat maps reals to reals.
For example, the functionf, defined byf(x)=x3−x,{\displaystyle f(x)=x^{3}-x,}is a polynomial function of one variable. Polynomial functions of several variables are similarly defined, using polynomials in more than one indeterminate, as inf(x,y)=2x3+4x2y+xy5+y2−7.{\displaystyle f(x,y)=2x^{3}+4x^{2}y+xy^{5}+y^{2}-7.}According to the definition of polynomial functions, there may be expressions that obviously are not polynomials but nevertheless define polynomial functions. An example is the expression(1−x2)2,{\displaystyle \left({\sqrt {1-x^{2}}}\right)^{2},}which takes the same values as the polynomial1−x2{\displaystyle 1-x^{2}}on the interval[−1,1]{\displaystyle [-1,1]}, and thus both expressions define the same polynomial function on this interval.
Every polynomial function iscontinuous,smooth, andentire.
Theevaluationof a polynomial is the computation of the corresponding polynomial function; that is, the evaluation consists of substituting a numerical value to each indeterminate and carrying out the indicated multiplications and additions.
For polynomials in one indeterminate, the evaluation is usually more efficient (lower number of arithmetic operations to perform) usingHorner's method, which consists of rewriting the polynomial as(((((anx+an−1)x+an−2)x+⋯+a3)x+a2)x+a1)x+a0.{\displaystyle (((((a_{n}x+a_{n-1})x+a_{n-2})x+\dotsb +a_{3})x+a_{2})x+a_{1})x+a_{0}.}
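A sketch of Horner's method; coefficients are passed from a_n down to a_0, so a degree-n polynomial costs n multiplications and n additions:

```python
# Sketch: Horner's method, evaluating (((a_n x + a_{n-1}) x + ...) x + a_0.
def horner(coeffs, x):
    result = 0
    for a in coeffs:
        result = result * x + a
    return result
```

For 3x² − 5x + 4 at x = 2 this computes (3·2 − 5)·2 + 4 = 6.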
A polynomial function in one real variable can be represented by agraph.
A non-constant polynomial functiontends to infinitywhen the variable increases indefinitely (inabsolute value). If the degree is higher than one, the graph does not have anyasymptote. It has twoparabolic brancheswith vertical direction (one branch for positivexand one for negativex).
Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior.
Apolynomial equation, also called analgebraic equation, is anequationof the form[24]anxn+an−1xn−1+⋯+a2x2+a1x+a0=0.{\displaystyle a_{n}x^{n}+a_{n-1}x^{n-1}+\dotsb +a_{2}x^{2}+a_{1}x+a_{0}=0.}For example,3x2+4x−5=0{\displaystyle 3x^{2}+4x-5=0}is a polynomial equation.
When considering equations, the indeterminates (variables) of polynomials are also called unknowns, and the solutions are the possible values of the unknowns for which the equality is true (in general more than one solution may exist). A polynomial equation stands in contrast to a polynomial identity like (x + y)(x − y) = x² − y², where both expressions represent the same polynomial in different forms, and as a consequence any evaluation of both members gives a valid equality.
In elementaryalgebra, methods such as thequadratic formulaare taught for solving all first degree and second degree polynomial equations in one variable. There are also formulas for thecubicandquartic equations. For higher degrees, theAbel–Ruffini theoremasserts that there can not exist a general formula in radicals. However,root-finding algorithmsmay be used to findnumerical approximationsof the roots of a polynomial expression of any degree.
The number of solutions of a polynomial equation with real coefficients may not exceed the degree, and equals the degree when thecomplexsolutions are counted with theirmultiplicity. This fact is called thefundamental theorem of algebra.
Arootof a nonzero univariate polynomialPis a valueaofxsuch thatP(a) = 0. In other words, a root ofPis a solution of thepolynomial equationP(x) = 0or azeroof the polynomial function defined byP. In the case of the zero polynomial, every number is a zero of the corresponding function, and the concept of root is rarely considered.
A numberais a root of a polynomialPif and only if thelinear polynomialx−adividesP, that is if there is another polynomialQsuch thatP= (x−a) Q. It may happen that a power (greater than1) ofx−adividesP; in this case,ais amultiple rootofP, and otherwiseais asimple rootofP. IfPis a nonzero polynomial, there is a highest powermsuch that(x−a)mdividesP, which is called themultiplicityofaas a root ofP. The number of roots of a nonzero polynomialP, counted with their respective multiplicities, cannot exceed the degree ofP,[25]and equals this degree if allcomplexroots are considered (this is a consequence of thefundamental theorem of algebra).
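The multiplicity of a root can be sketched by dividing out x − a (via synthetic division, coefficients from highest to lowest degree) until the remainder stops vanishing; `multiplicity` is an assumed helper name:

```python
# Sketch: multiplicity of a as a root of a nonzero polynomial P,
# given as coefficients from highest to lowest degree.
def multiplicity(coeffs, a):
    m = 0
    while True:
        out = [coeffs[0]]
        for c in coeffs[1:]:
            out.append(c + a * out[-1])   # synthetic division by x - a
        if out[-1] != 0 or len(out) == 1:
            return m                      # remainder nonzero (or P constant)
        coeffs, m = out[:-1], m + 1       # (x - a) divides P: recurse on quotient
```

For P = (x − 1)² = x² − 2x + 1, the value a = 1 is a double root.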
The coefficients of a polynomial and its roots are related byVieta's formulas.
Some polynomials, such asx2+ 1, do not have any roots among thereal numbers. If, however, the set of accepted solutions is expanded to thecomplex numbers, every non-constant polynomial has at least one root; this is thefundamental theorem of algebra. By successively dividing out factorsx−a, one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree 1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial.
There may be several meanings of"solving an equation". One may want to express the solutions as explicit numbers; for example, the unique solution of2x− 1 = 0is1/2. This is, in general, impossible for equations of degree greater than one, and, since the ancient times, mathematicians have searched to express the solutions asalgebraic expressions; for example, thegolden ratio(1+5)/2{\displaystyle (1+{\sqrt {5}})/2}is the unique positive solution ofx2−x−1=0.{\displaystyle x^{2}-x-1=0.}In the ancient times, they succeeded only for degrees one and two. Forquadratic equations, thequadratic formulaprovides such expressions of the solutions. Since the 16th century, similar formulas (using cube roots in addition to square roots), although much more complicated, are known for equations of degree three and four (seecubic equationandquartic equation). But formulas for degree 5 and higher eluded researchers for several centuries. In 1824,Niels Henrik Abelproved the striking result that there are equations of degree 5 whose solutions cannot be expressed by a (finite) formula, involving only arithmetic operations and radicals (seeAbel–Ruffini theorem). In 1830,Évariste Galoisproved that most equations of degree higher than four cannot be solved by radicals, and showed that for each equation, one may decide whether it is solvable by radicals, and, if it is, solve it. This result marked the start ofGalois theoryandgroup theory, two important branches of modernalgebra. Galois himself noted that the computations implied by his method were impracticable. Nevertheless, formulas for solvable equations of degrees 5 and 6 have been published (seequintic functionandsextic equation).
When there is no algebraic expression for the roots, and when such an algebraic expression exists but is too complicated to be useful, the unique way of solving it is to computenumerical approximationsof the solutions.[26]There are many methods for that; some are restricted to polynomials and others may apply to anycontinuous function. The most efficientalgorithmsallow solving easily (on acomputer) polynomial equations of degree higher than 1,000 (seeRoot-finding algorithm).
For polynomials with more than one indeterminate, the combinations of values for the variables for which the polynomial function takes the value zero are generally calledzerosinstead of "roots". The study of the sets of zeros of polynomials is the object ofalgebraic geometry. For a set of polynomial equations with several unknowns, there arealgorithmsto decide whether they have a finite number ofcomplexsolutions, and, if this number is finite, for computing the solutions. SeeSystem of polynomial equations.
The special case where all the polynomials are of degree one is called asystem of linear equations, for which another range of differentsolution methodsexist, including the classicalGaussian elimination.
A polynomial equation for which one is interested only in the solutions which areintegersis called aDiophantine equation. Solving Diophantine equations is generally a very hard task. It has been proved that there cannot be any generalalgorithmfor solving them, or even for deciding whether the set of solutions is empty (seeHilbert's tenth problem). Some of the most famous problems that have been solved during the last fifty years are related to Diophantine equations, such asFermat's Last Theorem.
Polynomials where indeterminates are substituted for some other mathematical objects are often considered, and sometimes have a special name.
Atrigonometric polynomialis a finitelinear combinationoffunctionssin(nx) and cos(nx) withntaking on the values of one or morenatural numbers.[27]The coefficients may be taken as real numbers, for real-valued functions.
If sin(nx) and cos(nx) are expanded in terms of sin(x) and cos(x), a trigonometric polynomial becomes a polynomial in the two variables sin(x) and cos(x) (using themultiple-angle formulae). Conversely, every polynomial in sin(x) and cos(x) may be converted, withProduct-to-sum identities, into a linear combination of functions sin(nx) and cos(nx). This equivalence explains why linear combinations are called polynomials.
Forcomplex coefficients, there is no difference between such a function and a finiteFourier series.
Trigonometric polynomials are widely used, for example intrigonometric interpolationapplied to theinterpolationofperiodic functions. They are also used in thediscrete Fourier transform.
Amatrix polynomialis a polynomial withsquare matricesas variables.[28]Given an ordinary, scalar-valued polynomialP(x)=∑i=0naixi=a0+a1x+a2x2+⋯+anxn,{\displaystyle P(x)=\sum _{i=0}^{n}{a_{i}x^{i}}=a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n},}this polynomial evaluated at a matrixAisP(A)=∑i=0naiAi=a0I+a1A+a2A2+⋯+anAn,{\displaystyle P(A)=\sum _{i=0}^{n}{a_{i}A^{i}}=a_{0}I+a_{1}A+a_{2}A^{2}+\cdots +a_{n}A^{n},}whereIis theidentity matrix.[29]
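A sketch of evaluating a scalar polynomial at a square matrix, with plain nested lists standing in for a matrix library; note that the constant term becomes a₀I:

```python
# Sketch: P(A) = a_0 I + a_1 A + ... + a_n A^n for a square matrix A.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_poly(coeffs, A):
    # coeffs[i] is the coefficient of A^i
    n = len(A)
    result = [[0] * n for _ in range(n)]
    power = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # A^0 = I
    for a in coeffs:
        result = [[result[i][j] + a * power[i][j] for j in range(n)]
                  for i in range(n)]
        power = mat_mul(power, A)
    return result
```

For the nilpotent A = [[0, 1], [0, 0]] and P(x) = x² + 1, the A² term vanishes and P(A) = I.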
Amatrix polynomial equationis an equality between two matrix polynomials, which holds for the specific matrices in question. Amatrix polynomial identityis a matrix polynomial equation which holds for all matricesAin a specifiedmatrix ringMn(R).
A bivariate polynomial where the second variable is substituted for an exponential function applied to the first variable, for exampleP(x,ex), may be called anexponential polynomial.
Arational fractionis thequotient(algebraic fraction) of two polynomials. Anyalgebraic expressionthat can be rewritten as a rational fraction is arational function.
While polynomial functions are defined for all values of the variables, a rational function is defined only for the values of the variables for which the denominator is not zero.
The rational fractions include the Laurent polynomials, but do not limit denominators to powers of an indeterminate.
Laurent polynomialsare like polynomials, but allow negative powers of the variable(s) to occur.
Formal power seriesare like polynomials, but allow infinitely many non-zero terms to occur, so that they do not have finite degree. Unlike polynomials they cannot in general be explicitly and fully written down (just likeirrational numberscannot), but the rules for manipulating their terms are the same as for polynomials. Non-formalpower seriesalso generalize polynomials, but the multiplication of two power series may not converge.
Apolynomialfover acommutative ringRis a polynomial all of whose coefficients belong toR. It is straightforward to verify that the polynomials in a given set of indeterminates overRform a commutative ring, called thepolynomial ringin these indeterminates, denotedR[x]{\displaystyle R[x]}in the univariate case andR[x1,…,xn]{\displaystyle R[x_{1},\ldots ,x_{n}]}in the multivariate case.
One hasR[x1,…,xn]=(R[x1,…,xn−1])[xn].{\displaystyle R[x_{1},\ldots ,x_{n}]=\left(R[x_{1},\ldots ,x_{n-1}]\right)[x_{n}].}So, most of the theory of the multivariate case can be reduced to an iterated univariate case.
The map fromRtoR[x]sendingrto itself considered as a constant polynomial is an injectivering homomorphism, by whichRis viewed as a subring ofR[x]. In particular,R[x]is analgebraoverR.
One can think of the ringR[x]as arising fromRby adding one new elementxtoR, and extending in a minimal way to a ring in whichxsatisfies no other relations than the obligatory ones, plus commutation with all elements ofR(that isxr=rx). To do this, one must add all powers ofxand their linear combinations as well.
Formation of the polynomial ring, together with forming factor rings by factoring outideals, are important tools for constructing new rings out of known ones. For instance, the ring (in fact field) of complex numbers, which can be constructed from the polynomial ringR[x]over the real numbers by factoring out the ideal of multiples of the polynomialx2+ 1. Another example is the construction offinite fields, which proceeds similarly, starting out with the field of integers modulo someprime numberas the coefficient ringR(seemodular arithmetic).
If R is commutative, then one can associate with every polynomial P in R[x] a polynomial function f with domain and range equal to R. (More generally, one can take domain and range to be any same unital associative algebra over R.) One obtains the value f(r) by substitution of the value r for the symbol x in P. One reason to distinguish between polynomials and polynomial functions is that, over some rings, different polynomials may give rise to the same polynomial function (see Fermat's little theorem for an example where R is the integers modulo p). This is not the case when R is the real or complex numbers, whence the two concepts are not always distinguished in analysis. An even more important reason to distinguish between polynomials and polynomial functions is that many operations on polynomials (like Euclidean division) require looking at what a polynomial is composed of as an expression rather than evaluating it at some constant value for x.
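The Fermat example mentioned above can be checked in a few lines: over R = Z/5Z the distinct polynomials x⁵ and x define the same polynomial function, since r⁵ ≡ r (mod 5) for every r:

```python
# Check: over Z/pZ with p = 5, the polynomials x^p and x agree as
# functions on all of Z/pZ (Fermat's little theorem), even though they
# are different polynomials.

p = 5
same_function = all(pow(r, p, p) == r % p for r in range(p))
print(same_function)  # True
```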
If R is an integral domain and f and g are polynomials in R[x], it is said that f divides g, or f is a divisor of g, if there exists a polynomial q in R[x] such that fq = g. If a ∈ R, then a is a root of f if and only if x − a divides f. In this case, the quotient can be computed using polynomial long division.[30][31]
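Division by a linear factor x − a can be sketched with synthetic division (a compact form of long division; the function name is a choice made here). The remainder it produces equals f(a), so it is 0 exactly when a is a root:

```python
# Sketch: synthetic division of f by (x - a).  Coefficients are listed
# from highest degree down; the remainder equals f(a).

def synthetic_div(coeffs, a):
    quotient = [coeffs[0]]
    for c in coeffs[1:]:
        quotient.append(c + a * quotient[-1])
    return quotient[:-1], quotient[-1]  # (quotient, remainder = f(a))

# f = x^2 - 3x + 2 = (x - 1)(x - 2); a = 1 is a root
q, r = synthetic_div([1, -3, 2], 1)
print(q, r)  # [1, -2] 0, i.e. quotient x - 2, remainder 0
```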
If F is a field and f and g are polynomials in F[x] with g ≠ 0, then there exist unique polynomials q and r in F[x] with f = qg + r and such that the degree of r is smaller than the degree of g (using the convention that the polynomial 0 has a negative degree). The polynomials q and r are uniquely determined by f and g. This is called Euclidean division, division with remainder or polynomial long division, and shows that the ring F[x] is a Euclidean domain.
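Euclidean division in F[x] can be sketched for F = Q using exact rational arithmetic (the list representation, highest degree first, is an assumption of this sketch). Each step cancels the leading term of the running remainder against g, exactly as in long division by hand:

```python
# Sketch of Euclidean division in F[x] for F = Q, with coefficient lists
# from highest degree down, using exact arithmetic via Fraction.

from fractions import Fraction

def poly_divmod(f, g):
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(r) >= len(g) and any(r):
        shift = len(r) - len(g)
        coef = r[0] / g[0]          # eliminate the leading term of r
        q[len(q) - 1 - shift] = coef
        r = [rc - coef * gc for rc, gc in
             zip(r, g + [Fraction(0)] * shift)]
        r = r[1:]                   # leading term has cancelled
    return q, r

# x^2 + 1 = x * x + 1:  quotient x, remainder 1
q, r = poly_divmod([1, 0, 1], [1, 0])
print(q, r)  # [1, 0] [1]
```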
Analogously, prime polynomials (more correctly, irreducible polynomials) can be defined as non-zero polynomials which cannot be factorized into the product of two non-constant polynomials. In the case of coefficients in a ring, "non-constant" must be replaced by "non-constant or non-unit" (both definitions agree in the case of coefficients in a field). Any polynomial may be decomposed into the product of an invertible constant by a product of irreducible polynomials. If the coefficients belong to a field or a unique factorization domain, this decomposition is unique up to the order of the factors and the multiplication of any non-unit factor by a unit (and division of the unit factor by the same unit). When the coefficients belong to the integers, the rational numbers or a finite field, there are algorithms to test irreducibility and to compute the factorization into irreducible polynomials (see Factorization of polynomials). These algorithms are not practicable for hand-written computation, but are available in any computer algebra system. Eisenstein's criterion can also be used in some cases to determine irreducibility.
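For a small special case (an assumption of this sketch, not one of the general algorithms the article refers to), irreducibility over a finite field F_p is easy to test by hand for degrees 2 and 3: any factorization of such a polynomial would contain a linear factor, so f is irreducible exactly when it has no root in F_p:

```python
# Sketch: a polynomial of degree 2 or 3 over F_p is irreducible exactly
# when it has no root in F_p.  Coefficients from highest degree down.

def irreducible_deg2or3(coeffs, p):
    def f(x):                      # evaluate f(x) mod p (Horner's rule)
        v = 0
        for c in coeffs:
            v = (v * x + c) % p
        return v
    return all(f(x) != 0 for x in range(p))

print(irreducible_deg2or3([1, 0, 1], 2))  # x^2 + 1 = (x+1)^2 over F_2: False
print(irreducible_deg2or3([1, 1, 1], 2))  # x^2 + x + 1 has no root: True
```

This criterion fails from degree 4 onward, where a polynomial can factor into two irreducible quadratics without having any root.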
In modern positional number systems, such as the decimal system, the digits and their positions in the representation of an integer, for example 45, are a shorthand notation for a polynomial in the radix or base, in this case 4 × 10¹ + 5 × 10⁰. As another example, in radix 5, a string of digits such as 132 denotes the (decimal) number 1 × 5² + 3 × 5¹ + 2 × 5⁰ = 42. This representation is unique. Let b be a positive integer greater than 1. Then every positive integer a can be expressed uniquely in the form
a = rm bᵐ + rm−1 bᵐ⁻¹ + ⋯ + r1 b + r0,
where m is a nonnegative integer and the r's are integers such that
0 < rm < b and 0 ≤ ri < b for i = 0, 1, …, m − 1.[32]
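The digits r0, …, rm in the expansion above are produced by repeated division by b, which can be sketched as:

```python
# Sketch: the base-b digits of a positive integer, most significant first.
# Repeated division by b yields r_0, r_1, ..., r_m in turn.

def digits(a, b):
    out = []
    while a > 0:
        a, r = divmod(a, b)
        out.append(r)
    return out[::-1]

print(digits(42, 5))  # [1, 3, 2], since 42 = 1*25 + 3*5 + 2
```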
The simple structure of polynomial functions makes them quite useful in analyzing general functions using polynomial approximations. An important example in calculus is Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial function, and the Stone–Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial function. Practical methods of approximation include polynomial interpolation and the use of splines.[33]
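As a concrete instance of Taylor's theorem (the function and degree chosen here are illustrative), the degree-n Taylor polynomial of exp at 0 approximates the function very closely near 0:

```python
# Sketch: the degree-n Taylor polynomial of exp at 0,
#   1 + x + x^2/2! + ... + x^n/n!,
# as a local polynomial approximation of a differentiable function.

import math

def taylor_exp(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

print(abs(taylor_exp(0.5, 10) - math.exp(0.5)) < 1e-9)  # True
```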
Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element. The chromatic polynomial of a graph counts the number of proper colourings of that graph.
The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form. For example, in computational complexity theory the phrase polynomial time means that the time it takes to complete an algorithm is bounded by a polynomial function of some variable, such as the size of the input.
Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. However, the elegant and practical notation we use today only developed beginning in the 15th century. Before that, equations were written out in words. For example, an algebra problem from the Chinese Arithmetic in Nine Sections, c. 200 BCE, begins "Three sheafs of good crop, two sheafs of mediocre crop, and one sheaf of bad crop are sold for 29 dou." We would write 3x + 2y + z = 29.
The earliest known use of the equal sign is in Robert Recorde's The Whetstone of Witte, 1557. The signs + for addition, − for subtraction, and the use of a letter for an unknown appear in Michael Stifel's Arithmetica integra, 1544. René Descartes, in La géométrie, 1637, introduced the concept of the graph of a polynomial equation. He popularized the use of letters from the beginning of the alphabet to denote constants and letters from the end of the alphabet to denote variables, as can be seen above in the general formula for a polynomial in one variable, where the a's denote constants and x denotes a variable. Descartes also introduced the use of superscripts to denote exponents.[34]
https://en.wikipedia.org/wiki/Polynomial