In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian mechanics, playing a central role in Hamilton's equations of motion, which govern the time evolution of a Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate transformations, called canonical transformations, which map canonical coordinate systems into other canonical coordinate systems. A "canonical coordinate system" consists of canonical position and momentum variables (below symbolized by $q_i$ and $p_i$, respectively) that satisfy canonical Poisson bracket relations. The set of possible canonical transformations is always very rich. For instance, it is often possible to choose the Hamiltonian itself $\mathcal{H} = \mathcal{H}(q, p, t)$ as one of the new canonical momentum coordinates.
In a more general sense, the Poisson bracket is used to define a Poisson algebra, of which the algebra of functions on a Poisson manifold is a special case. There are other general examples as well: it occurs in the theory of Lie algebras, where the tensor algebra of a Lie algebra forms a Poisson algebra; a detailed construction of how this comes about is given in the universal enveloping algebra article. Quantum deformations of the universal enveloping algebra lead to the notion of quantum groups.
All of these objects are named in honor of the French mathematician Siméon Denis Poisson. He introduced the Poisson bracket in his 1809 treatise on mechanics.[1][2]
Given two functions $f$ and $g$ that depend on phase space and time, their Poisson bracket $\{f, g\}$ is another function that depends on phase space and time. The following rules hold for any three functions $f, g, h$ of phase space and time:

- Anticommutativity: $\{f, g\} = -\{g, f\}$
- Bilinearity: $\{af + bg, h\} = a\{f, h\} + b\{g, h\}$ for constants $a, b$
- Leibniz's rule: $\{fg, h\} = f\{g, h\} + \{f, h\}g$
- Jacobi identity: $\{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\} = 0$
Also, if a function $k$ is constant over phase space (but may depend on time), then $\{f, k\} = 0$ for any $f$.
In canonical coordinates (also known as Darboux coordinates) $(q_i, p_i)$ on the phase space, given two functions $f(p_i, q_i, t)$ and $g(p_i, q_i, t)$,[Note 1] the Poisson bracket takes the form
$$\{f, g\} = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} \right).$$
The Poisson brackets of the canonical coordinates are
$$\begin{aligned}
\{q_k, q_l\} &= \sum_{i=1}^{N} \left( \frac{\partial q_k}{\partial q_i} \frac{\partial q_l}{\partial p_i} - \frac{\partial q_k}{\partial p_i} \frac{\partial q_l}{\partial q_i} \right) = \sum_{i=1}^{N} \left( \delta_{ki} \cdot 0 - 0 \cdot \delta_{li} \right) = 0, \\
\{p_k, p_l\} &= \sum_{i=1}^{N} \left( \frac{\partial p_k}{\partial q_i} \frac{\partial p_l}{\partial p_i} - \frac{\partial p_k}{\partial p_i} \frac{\partial p_l}{\partial q_i} \right) = \sum_{i=1}^{N} \left( 0 \cdot \delta_{li} - \delta_{ki} \cdot 0 \right) = 0, \\
\{q_k, p_l\} &= \sum_{i=1}^{N} \left( \frac{\partial q_k}{\partial q_i} \frac{\partial p_l}{\partial p_i} - \frac{\partial q_k}{\partial p_i} \frac{\partial p_l}{\partial q_i} \right) = \sum_{i=1}^{N} \left( \delta_{ki} \cdot \delta_{li} - 0 \cdot 0 \right) = \delta_{kl},
\end{aligned}$$
where $\delta_{ij}$ is the Kronecker delta.
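These canonical relations are easy to verify symbolically. The sketch below (the `poisson_bracket` helper is illustrative, not from the article) evaluates the coordinate-form bracket with SymPy:

```python
import sympy as sp

def poisson_bracket(f, g, qs, ps):
    # {f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i)
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(qs, ps))

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
qs, ps = [q1, q2], [p1, p2]

# Canonical relations: {q_k, q_l} = {p_k, p_l} = 0 and {q_k, p_l} = delta_kl
assert poisson_bracket(q1, q2, qs, ps) == 0
assert poisson_bracket(p1, p2, qs, ps) == 0
assert poisson_bracket(q1, p1, qs, ps) == 1
assert poisson_bracket(q1, p2, qs, ps) == 0
```

The same helper works for any smooth phase-space functions, e.g. `poisson_bracket(q1*p1, q1, qs, ps)` evaluates to `-q1`.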
Hamilton's equations of motion have an equivalent expression in terms of the Poisson bracket. This may be most directly demonstrated in an explicit coordinate frame. Suppose that $f(p, q, t)$ is a function on the solution's trajectory-manifold. Then from the multivariable chain rule,
$$\frac{d}{dt} f(p, q, t) = \frac{\partial f}{\partial q} \frac{dq}{dt} + \frac{\partial f}{\partial p} \frac{dp}{dt} + \frac{\partial f}{\partial t}.$$
Further, one may take $p = p(t)$ and $q = q(t)$ to be solutions to Hamilton's equations; that is,
$$\begin{aligned}
\frac{dq}{dt} &= \frac{\partial \mathcal{H}}{\partial p} = \{q, \mathcal{H}\}, \\
\frac{dp}{dt} &= -\frac{\partial \mathcal{H}}{\partial q} = \{p, \mathcal{H}\}.
\end{aligned}$$
Then
$$\begin{aligned}
\frac{d}{dt} f(p, q, t) &= \frac{\partial f}{\partial q} \frac{\partial \mathcal{H}}{\partial p} - \frac{\partial f}{\partial p} \frac{\partial \mathcal{H}}{\partial q} + \frac{\partial f}{\partial t} \\
&= \{f, \mathcal{H}\} + \frac{\partial f}{\partial t}~.
\end{aligned}$$
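The evolution law $\frac{d}{dt} f = \{f, \mathcal{H}\} + \frac{\partial f}{\partial t}$ can be checked on a concrete trajectory. The sketch below uses the harmonic oscillator $\mathcal{H} = (p^2 + q^2)/2$ (a hypothetical example with unit mass and frequency) with its exact solution $q(t) = \cos t$, $p(t) = -\sin t$:

```python
import sympy as sp

t = sp.symbols('t')
q, p = sp.symbols('q p')

H = (p**2 + q**2) / 2          # harmonic oscillator Hamiltonian (unit mass/frequency)
f = q * p + t                  # a test observable with explicit time dependence

# One-degree-of-freedom Poisson bracket {f, H}
pb = sp.diff(f, q) * sp.diff(H, p) - sp.diff(f, p) * sp.diff(H, q)

# Exact solution of Hamilton's equations for this H
q_t, p_t = sp.cos(t), -sp.sin(t)

lhs = sp.diff(f.subs({q: q_t, p: p_t}), t)            # d/dt f(q(t), p(t), t)
rhs = (pb + sp.diff(f, t)).subs({q: q_t, p: p_t})     # {f, H} + df/dt on the trajectory
assert sp.simplify(lhs - rhs) == 0
```

Here the bracket evaluates to $\{f, \mathcal{H}\} = p^2 - q^2$, and both sides of the evolution law agree along the trajectory.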
Thus, the time evolution of a function $f$ on a symplectic manifold can be given as a one-parameter family of symplectomorphisms (i.e., canonical transformations, area-preserving diffeomorphisms), with the time $t$ being the parameter: Hamiltonian motion is a canonical transformation generated by the Hamiltonian. That is, Poisson brackets are preserved in it, so that at any time $t$ in the solution to Hamilton's equations,
$$q(t) = \exp(-t \{\mathcal{H}, \cdot\})\, q(0), \quad p(t) = \exp(-t \{\mathcal{H}, \cdot\})\, p(0),$$
can serve as the bracket coordinates. Poisson brackets are canonical invariants.
Dropping the coordinates,
$$\frac{d}{dt} f = \left( \frac{\partial}{\partial t} - \{\mathcal{H}, \cdot\} \right) f.$$
The operator in the convective part of the derivative, $i\hat{L} = -\{\mathcal{H}, \cdot\}$, is sometimes referred to as the Liouvillian (see Liouville's theorem (Hamiltonian)).
The concept of Poisson brackets can be extended to matrices by defining the Poisson matrix.
Consider the following canonical transformation:
$$\eta = \begin{bmatrix} q_1 \\ \vdots \\ q_N \\ p_1 \\ \vdots \\ p_N \end{bmatrix} \quad \rightarrow \quad \varepsilon = \begin{bmatrix} Q_1 \\ \vdots \\ Q_N \\ P_1 \\ \vdots \\ P_N \end{bmatrix}$$
Defining $M := \frac{\partial(\mathbf{Q}, \mathbf{P})}{\partial(\mathbf{q}, \mathbf{p})}$, the Poisson matrix is defined as $\mathcal{P}(\varepsilon) = M J M^T$, where $J$ is the symplectic matrix under the same conventions used to order the set of coordinates. It follows from the definition that
$$\mathcal{P}_{ij}(\varepsilon) = [M J M^T]_{ij} = \sum_{k=1}^{N} \left( \frac{\partial \varepsilon_i}{\partial \eta_k} \frac{\partial \varepsilon_j}{\partial \eta_{N+k}} - \frac{\partial \varepsilon_i}{\partial \eta_{N+k}} \frac{\partial \varepsilon_j}{\partial \eta_k} \right) = \sum_{k=1}^{N} \left( \frac{\partial \varepsilon_i}{\partial q_k} \frac{\partial \varepsilon_j}{\partial p_k} - \frac{\partial \varepsilon_i}{\partial p_k} \frac{\partial \varepsilon_j}{\partial q_k} \right) = \{\varepsilon_i, \varepsilon_j\}_\eta.$$
The Poisson matrix satisfies the following known properties:
$$\begin{aligned}
\mathcal{P}^T &= -\mathcal{P} \\
|\mathcal{P}| &= \frac{1}{|M|^2} \\
\mathcal{P}^{-1}(\varepsilon) &= -(M^{-1})^T J M^{-1} = -\mathcal{L}(\varepsilon)
\end{aligned}$$
where $\mathcal{L}(\varepsilon)$ is known as the Lagrange matrix, whose elements correspond to Lagrange brackets. The last identity can also be stated as
$$\sum_{k=1}^{2N} \{\eta_i, \eta_k\} [\eta_k, \eta_j] = -\delta_{ij}.$$
Note that the summation here runs over the generalized coordinates as well as the generalized momenta.
The invariance of the Poisson bracket can be expressed as $\{\varepsilon_i, \varepsilon_j\}_\eta = \{\varepsilon_i, \varepsilon_j\}_\varepsilon = J_{ij}$, which directly leads to the symplectic condition $M J M^T = J$.[3]
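The symplectic condition is straightforward to verify numerically for a given transformation. As a hypothetical example (not from the article), take $N = 1$ and the action-angle-like variables of the harmonic oscillator, $Q = \arctan(q/p)$, $P = (q^2 + p^2)/2$; the sketch checks $M J M^T = J$ at a sample phase-space point:

```python
import numpy as np

# Symplectic matrix for N = 1 with coordinate ordering (q, p)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

q, p = 0.7, 1.3                    # an arbitrary phase-space point
r2 = q**2 + p**2

# Jacobian M of the transformation Q = arctan(q/p), P = (q^2 + p^2)/2
M = np.array([[p / r2, -q / r2],   # [dQ/dq, dQ/dp]
              [q,      p     ]])   # [dP/dq, dP/dp]

P_matrix = M @ J @ M.T             # Poisson matrix of the new variables
assert np.allclose(P_matrix, J)    # the transformation is canonical
```

For a one-degree-of-freedom system this is equivalent to $\det M = 1$, since a $2 \times 2$ matrix satisfies $M J M^T = (\det M)\, J$.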
An integrable system will have constants of motion in addition to the energy. Such constants of motion will commute with the Hamiltonian under the Poisson bracket. Suppose some function $f(p, q)$ is a constant of motion. This implies that if $p(t), q(t)$ is a trajectory or solution to Hamilton's equations of motion, then along that trajectory
$$0 = \frac{df}{dt} = \{f, \mathcal{H}\} + \frac{\partial f}{\partial t} = \{f, \mathcal{H}\},$$
where, as above, the intermediate step follows by applying the equations of motion, and we assume that $f$ does not explicitly depend on time. This equation is known as the Liouville equation. The content of Liouville's theorem is that the time evolution of a measure given by a distribution function $f$ is given by the above equation.
If the Poisson bracket of $f$ and $g$ vanishes ($\{f, g\} = 0$), then $f$ and $g$ are said to be in involution. In order for a Hamiltonian system to be completely integrable, $n$ independent constants of motion must be in mutual involution, where $n$ is the number of degrees of freedom.
Furthermore, according to Poisson's theorem, if two quantities $A$ and $B$ are explicitly time-independent constants of motion ($A(p, q)$, $B(p, q)$), so is their Poisson bracket $\{A, B\}$. This does not always supply a useful result, however, since the number of possible constants of motion is limited ($2n - 1$ for a system with $n$ degrees of freedom), and so the result may be trivial (a constant, or a function of $A$ and $B$).
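The classic non-trivial instance of Poisson's theorem is angular momentum in a central potential: if $L_x$ and $L_y$ are constants of motion, the theorem produces a third one, since $\{L_x, L_y\} = L_z$. A SymPy sketch of that bracket computation (the `pb` helper is illustrative):

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z')
qs, ps = [x, y, z], [px, py, pz]

def pb(f, g):
    # Canonical Poisson bracket in three degrees of freedom
    return sp.expand(sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
                         for q, p in zip(qs, ps)))

# Angular momentum components L = r x p
Lx = y * pz - z * py
Ly = z * px - x * pz
Lz = x * py - y * px

# {L_x, L_y} = L_z: the bracket of two constants of motion is again one
assert sp.simplify(pb(Lx, Ly) - Lz) == 0
```

The cyclic relations $\{L_y, L_z\} = L_x$ and $\{L_z, L_x\} = L_y$ follow the same way.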
Let $M$ be a symplectic manifold, that is, a manifold equipped with a symplectic form: a 2-form $\omega$ which is both closed (i.e., its exterior derivative $d\omega$ vanishes) and non-degenerate. For example, in the treatment above, take $M$ to be $\mathbb{R}^{2n}$ and take
$$\omega = \sum_{i=1}^{n} dq_i \wedge dp_i.$$
If $\iota_v \omega$ is the interior product or contraction operation defined by $(\iota_v \omega)(u) = \omega(v, u)$, then non-degeneracy is equivalent to saying that for every one-form $\alpha$ there is a unique vector field $\Omega_\alpha$ such that $\iota_{\Omega_\alpha} \omega = \alpha$. Alternatively, $\Omega_{dH} = \omega^{-1}(dH)$. Then if $H$ is a smooth function on $M$, the Hamiltonian vector field $X_H$ can be defined to be $\Omega_{dH}$. It is easy to see that
$$\begin{aligned}
X_{p_i} &= \frac{\partial}{\partial q_i}, \\
X_{q_i} &= -\frac{\partial}{\partial p_i}.
\end{aligned}$$
The Poisson bracket $\{\cdot, \cdot\}$ on $(M, \omega)$ is a bilinear operation on differentiable functions, defined by $\{f, g\} = \omega(X_f, X_g)$; the Poisson bracket of two functions on $M$ is itself a function on $M$. The Poisson bracket is antisymmetric because
$$\{f, g\} = \omega(X_f, X_g) = -\omega(X_g, X_f) = -\{g, f\}.$$
Furthermore,
$$\{f, g\} = \omega(X_f, X_g) = (\iota_{X_f}\omega)(X_g) = df(X_g) = X_g f = \mathcal{L}_{X_g} f. \qquad (1)$$
Here $X_g f$ denotes the vector field $X_g$ applied to the function $f$ as a directional derivative, and $\mathcal{L}_{X_g} f$ denotes the (entirely equivalent) Lie derivative of the function $f$.
If $\alpha$ is an arbitrary one-form on $M$, the vector field $\Omega_\alpha$ generates (at least locally) a flow $\phi_x(t)$ satisfying the boundary condition $\phi_x(0) = x$ and the first-order differential equation
$$\frac{d\phi_x}{dt} = \left. \Omega_\alpha \right|_{\phi_x(t)}.$$
The $\phi_x(t)$ will be symplectomorphisms (canonical transformations) for every $t$ as a function of $x$ if and only if $\mathcal{L}_{\Omega_\alpha} \omega = 0$; when this is true, $\Omega_\alpha$ is called a symplectic vector field. Recalling Cartan's identity $\mathcal{L}_X \omega = d(\iota_X \omega) + \iota_X d\omega$ and $d\omega = 0$, it follows that $\mathcal{L}_{\Omega_\alpha} \omega = d(\iota_{\Omega_\alpha} \omega) = d\alpha$. Therefore, $\Omega_\alpha$ is a symplectic vector field if and only if $\alpha$ is a closed form. Since $d(df) = d^2 f = 0$, it follows that every Hamiltonian vector field $X_f$ is a symplectic vector field, and that the Hamiltonian flow consists of canonical transformations. From (1) above, under the Hamiltonian flow $X_\mathcal{H}$,
$$\frac{d}{dt} f(\phi_x(t)) = X_\mathcal{H} f = \{f, \mathcal{H}\}.$$
This is a fundamental result in Hamiltonian mechanics, governing the time evolution of functions defined on phase space. As noted above, when $\{f, \mathcal{H}\} = 0$, $f$ is a constant of motion of the system. In addition, in canonical coordinates (with $\{p_i, p_j\} = \{q_i, q_j\} = 0$ and $\{q_i, p_j\} = \delta_{ij}$), Hamilton's equations for the time evolution of the system follow immediately from this formula.
It also follows from (1) that the Poisson bracket is a derivation; that is, it satisfies a non-commutative version of Leibniz's product rule:
$$\{fg, h\} = f\{g, h\} + \{f, h\}g. \qquad (2)$$
The Poisson bracket is intimately connected to the Lie bracket of the Hamiltonian vector fields. Because the Lie derivative is a derivation,
$$\mathcal{L}_v \iota_u \omega = \iota_{\mathcal{L}_v u} \omega + \iota_u \mathcal{L}_v \omega = \iota_{[v,u]} \omega + \iota_u \mathcal{L}_v \omega.$$
Thus if $v$ and $u$ are symplectic, using $\mathcal{L}_v \omega = 0 = \mathcal{L}_u \omega$, Cartan's identity, and the fact that $\iota_u \omega$ is a closed form,
$$\iota_{[v,u]} \omega = \mathcal{L}_v \iota_u \omega = d(\iota_v \iota_u \omega) + \iota_v d(\iota_u \omega) = d(\iota_v \iota_u \omega) = d(\omega(u, v)).$$
It follows that $[v, u] = X_{\omega(u,v)}$, so that
$$[X_f, X_g] = X_{\omega(X_g, X_f)} = -X_{\omega(X_f, X_g)} = -X_{\{f,g\}}. \qquad (3)$$
Thus, the Poisson bracket on functions corresponds to the Lie bracket of the associated Hamiltonian vector fields. We have also shown that the Lie bracket of two symplectic vector fields is a Hamiltonian vector field and hence is also symplectic. In the language of abstract algebra, the symplectic vector fields form a subalgebra of the Lie algebra of smooth vector fields on $M$, and the Hamiltonian vector fields form an ideal of this subalgebra. The symplectic vector fields are the Lie algebra of the (infinite-dimensional) Lie group of symplectomorphisms of $M$.
It is widely asserted that the Jacobi identity for the Poisson bracket,
$$\{f, \{g, h\}\} + \{g, \{h, f\}\} + \{h, \{f, g\}\} = 0,$$
follows from the corresponding identity for the Lie bracket of vector fields, but this is true only up to a locally constant function. However, to prove the Jacobi identity for the Poisson bracket, it is sufficient to show that
$$\operatorname{ad}_{\{g,f\}} = \operatorname{ad}_{-\{f,g\}} = [\operatorname{ad}_f, \operatorname{ad}_g],$$
where the operator $\operatorname{ad}_g$ on smooth functions on $M$ is defined by $\operatorname{ad}_g(\cdot) = \{\cdot, g\}$ and the bracket on the right-hand side is the commutator of operators, $[A, B] = AB - BA$. By (1), the operator $\operatorname{ad}_g$ is equal to the operator $X_g$. The proof of the Jacobi identity follows from (3) because, up to the factor of $-1$, the Lie bracket of vector fields is just their commutator as differential operators.
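For the canonical bracket on $\mathbb{R}^2$, the Jacobi identity can be checked directly for concrete functions. A small SymPy sketch with three arbitrarily chosen phase-space functions (the choices are illustrative):

```python
import sympy as sp

q, p = sp.symbols('q p')

def pb(f, g):
    # One-degree-of-freedom canonical Poisson bracket
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

# Three arbitrary smooth phase-space functions
f = q**2 * p
g = sp.sin(q) + p**2
h = q * p + q**3

jacobi = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
assert sp.simplify(jacobi) == 0
```

The same computation with other smooth choices of $f$, $g$, $h$ also simplifies to zero, as the identity requires.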
The algebra of smooth functions on $M$, together with the Poisson bracket, forms a Poisson algebra, because it is a Lie algebra under the Poisson bracket, which additionally satisfies Leibniz's rule (2). We have shown that every symplectic manifold is a Poisson manifold, that is, a manifold with a "curly-bracket" operator on smooth functions such that the smooth functions form a Poisson algebra. However, not every Poisson manifold arises in this way, because Poisson manifolds allow for degeneracy which cannot arise in the symplectic case.
Given a smooth vector field $X$ on the configuration space, let $P_X$ be its conjugate momentum. The conjugate momentum mapping is a Lie algebra anti-homomorphism from the Lie bracket to the Poisson bracket:
$$\{P_X, P_Y\} = -P_{[X,Y]}.$$
This important result is worth a short proof. Write a vector field $X$ at point $q$ in the configuration space as
$$X_q = \sum_i X^i(q) \frac{\partial}{\partial q^i},$$
where $\frac{\partial}{\partial q^i}$ is the local coordinate frame. The conjugate momentum to $X$ has the expression
$$P_X(q, p) = \sum_i X^i(q)\, p_i,$$
where the $p_i$ are the momentum functions conjugate to the coordinates. One then has, for a point $(q, p)$ in the phase space,
$$\begin{aligned}
\{P_X, P_Y\}(q, p) &= \sum_i \sum_j \left\{ X^i(q)\, p_i,\, Y^j(q)\, p_j \right\} \\
&= \sum_{ij} p_i\, Y^j(q) \frac{\partial X^i}{\partial q^j} - p_j\, X^i(q) \frac{\partial Y^j}{\partial q^i} \\
&= -\sum_i p_i\, [X, Y]^i(q) \\
&= -P_{[X,Y]}(q, p).
\end{aligned}$$
The above holds for all(q,p){\displaystyle (q,p)}, giving the desired result.
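The anti-homomorphism property can be confirmed symbolically for specific vector fields. The sketch below uses two hypothetical example fields on a two-dimensional configuration space (the component choices and helper names are illustrative):

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
qs, ps = [q1, q2], [p1, p2]

def pb(f, g):
    # Canonical Poisson bracket in two degrees of freedom
    return sp.expand(sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
                         for q, p in zip(qs, ps)))

# Vector fields on configuration space, given by their components
X = [q2, q1**2]       # X = q2 d/dq1 + q1^2 d/dq2
Y = [q1, -q2]         # Y = q1 d/dq1 - q2 d/dq2

def momentum(V):
    # Conjugate momentum P_V = sum_i V^i(q) p_i
    return sum(Vi * pi for Vi, pi in zip(V, ps))

def lie_bracket(X, Y):
    # [X, Y]^i = sum_j (X^j dY^i/dq_j - Y^j dX^i/dq_j)
    return [sum(X[j] * sp.diff(Y[i], qs[j]) - Y[j] * sp.diff(X[i], qs[j])
                for j in range(2)) for i in range(2)]

# {P_X, P_Y} = -P_[X,Y]
assert sp.simplify(pb(momentum(X), momentum(Y)) + momentum(lie_bracket(X, Y))) == 0
```

For these particular fields, both sides evaluate to $3 q_1^2 p_2 - 2 q_2 p_1$.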
Poisson brackets deform to Moyal brackets upon quantization; that is, they generalize to a different Lie algebra, the Moyal algebra, or, equivalently in Hilbert space, quantum commutators. The Wigner–İnönü group contraction of these (the classical limit, $\hbar \to 0$) yields the above Lie algebra.
To state this more explicitly and precisely, the universal enveloping algebra of the Heisenberg algebra is the Weyl algebra (modulo the relation that the center be the unit). The Moyal product is then a special case of the star product on the algebra of symbols. An explicit definition of the algebra of symbols and the star product is given in the article on the universal enveloping algebra. | https://en.wikipedia.org/wiki/Poisson_bracket |
In mathematical physics, the ternary commutator is an additional ternary operation on a triple system defined by
$$[a, b, c] = abc - acb - bac + bca + cab - cba.$$
Also called the ternutator or alternating ternary sum, it is a special case of the $n$-commutator for $n = 3$, whereas the 2-commutator is the ordinary commutator.
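The alternating sum over all $n!$ orderings can be sketched generically; square matrices serve as a concrete triple system here (the matrices and the `n_commutator` helper are illustrative, not from the article):

```python
import numpy as np
from itertools import permutations

def n_commutator(*mats):
    # Alternating sum over all orderings; for n = 3 this is the ternutator
    # [a, b, c] = abc - acb - bac + bca + cab - cba.
    result = np.zeros_like(mats[0])
    for perm in permutations(range(len(mats))):
        # Parity of the permutation via an exact inversion count
        inversions = sum(pi > pj for i, pi in enumerate(perm) for pj in perm[i + 1:])
        prod = mats[perm[0]]
        for i in perm[1:]:
            prod = prod @ mats[i]
        result = result + (-1) ** inversions * prod
    return result

# Sample 2x2 matrices
a = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0, 0.0], [1.0, 0.0]])
c = np.array([[1.0, 0.0], [0.0, -1.0]])

# The 2-commutator reduces to the ordinary commutator ab - ba
assert np.allclose(n_commutator(a, b), a @ b - b @ a)
```

Calling `n_commutator(a, b, c)` then evaluates the six-term ternutator for these matrices.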
| https://en.wikipedia.org/wiki/Ternary_commutator |
In mathematics, more specifically group theory, the three subgroups lemma is a result concerning commutators. It is a consequence of Philip Hall and Ernst Witt's eponymous identity.
In what follows, the following notation will be employed: $[x, y] = x^{-1} y^{-1} x y$ denotes the commutator of elements $x$ and $y$, and $x^y = y^{-1} x y$ denotes conjugation; for subgroups $X$ and $Y$, $[X, Y]$ is the subgroup generated by all commutators $[x, y]$ with $x \in X$ and $y \in Y$, with $[X, Y, Z] = [[X, Y], Z]$; and $\mathbf{C}_G(Y)$ is the centralizer of $Y$ in $G$.
Let $X$, $Y$ and $Z$ be subgroups of a group $G$, and assume $[X, Y, Z] = 1$ and $[Y, Z, X] = 1$.
Then $[Z, X, Y] = 1$.[1]
More generally, for a normal subgroup $N$ of $G$, if $[X, Y, Z] \subseteq N$ and $[Y, Z, X] \subseteq N$, then $[Z, X, Y] \subseteq N$.[2]
Hall–Witt identity
If $x, y, z \in G$, then
$$[x, y^{-1}, z]^y \cdot [y, z^{-1}, x]^z \cdot [z, x^{-1}, y]^x = 1.$$
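Since the Hall–Witt identity holds in every group, it can be spot-checked in any concrete one. The sketch below (assuming the conventions $[a, b] = a^{-1} b^{-1} a b$ and $a^b = b^{-1} a b$, and with illustratively chosen matrices) works in $\mathrm{SL}(2, \mathbb{Z})$, where inverses of integer matrices stay exact:

```python
import numpy as np

def inv(m):
    # Exact inverse of an integer 2x2 matrix with determinant 1 (adjugate formula)
    a, b, c, d = m[0, 0], m[0, 1], m[1, 0], m[1, 1]
    return np.array([[d, -b], [-c, a]])

def comm(a, b):
    # [a, b] = a^{-1} b^{-1} a b
    return inv(a) @ inv(b) @ a @ b

def conj(a, b):
    # a^b = b^{-1} a b
    return inv(b) @ a @ b

x = np.array([[1, 1], [0, 1]])
y = np.array([[1, 0], [1, 1]])
z = np.array([[2, 1], [1, 1]])      # all three have determinant 1

term1 = conj(comm(comm(x, inv(y)), z), y)   # [x, y^{-1}, z]^y
term2 = conj(comm(comm(y, inv(z)), x), z)   # [y, z^{-1}, x]^z
term3 = conj(comm(comm(z, inv(x)), y), x)   # [z, x^{-1}, y]^x

assert np.array_equal(term1 @ term2 @ term3, np.eye(2, dtype=int))
```

Repeating the check with other determinant-one integer matrices gives the identity matrix every time, as the identity demands.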
Proof of the three subgroups lemma
Let $x \in X$, $y \in Y$, and $z \in Z$. Then $[x, y^{-1}, z] = 1 = [y, z^{-1}, x]$, and by the Hall–Witt identity above, it follows that $[z, x^{-1}, y]^x = 1$ and so $[z, x^{-1}, y] = 1$. Therefore, $[z, x^{-1}] \in \mathbf{C}_G(Y)$ for all $z \in Z$ and $x \in X$. Since these elements generate $[Z, X]$, we conclude that $[Z, X] \subseteq \mathbf{C}_G(Y)$ and hence $[Z, X, Y] = 1$. | https://en.wikipedia.org/wiki/Three_subgroups_lemma |
In quantum statistics, Bose–Einstein statistics (B–E statistics) describes one of two possible ways in which a collection of non-interacting identical particles may occupy a set of available discrete energy states at thermodynamic equilibrium. The aggregation of particles in the same state, which is a characteristic of particles obeying Bose–Einstein statistics, accounts for the cohesive streaming of laser light and the frictionless creeping of superfluid helium. The theory of this behaviour was developed (1924–25) by Satyendra Nath Bose, who recognized that a collection of identical and indistinguishable particles can be distributed in this way. The idea was later adopted and extended by Albert Einstein in collaboration with Bose.
Bose–Einstein statistics apply only to particles that do not follow the Pauli exclusion principle restrictions. Particles that follow Bose–Einstein statistics are called bosons, which have integer values of spin. In contrast, particles that follow Fermi–Dirac statistics are called fermions and have half-integer spins.
At low temperatures, bosons behave differently from fermions (which obey Fermi–Dirac statistics) in that an unlimited number of them can "condense" into the same energy state. This apparently unusual property also gives rise to a special state of matter, the Bose–Einstein condensate. Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are "indistinguishable". Quantum effects appear if the concentration of particles satisfies
$$\frac{N}{V} \geq n_\text{q},$$
where $N$ is the number of particles, $V$ is the volume, and $n_\text{q}$ is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are barely overlapping.
Fermi–Dirac statistics applies to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics applies to bosons. As the quantum concentration depends on temperature, most systems at high temperatures obey the classical (Maxwell–Boltzmann) limit, unless they also have a very high density, as for a white dwarf. Both Fermi–Dirac and Bose–Einstein statistics become Maxwell–Boltzmann statistics at high temperature or at low concentration.
Bose–Einstein statistics was introduced for photons in 1924 by Bose and generalized to atoms by Einstein in 1924–25.
The expected number of particles in an energy state $i$ for Bose–Einstein statistics is
$$\bar{n}_i = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_\text{B} T} - 1}$$
with $\varepsilon_i > \mu$ and where $\bar{n}_i$ is the occupation number (the expected number of particles) in state $i$, $g_i$ is the degeneracy of energy level $i$, $\varepsilon_i$ is the energy of the $i$th state, $\mu$ is the chemical potential (zero for a photon gas), $k_\text{B}$ is the Boltzmann constant, and $T$ is the absolute temperature.
The variance of this distribution, $V(n)$, is calculated directly from the expression above for the average number:[1]
$$V(n) = k_\text{B} T \frac{\partial}{\partial \mu} \bar{n}_i = \langle n \rangle (1 + \langle n \rangle) = \bar{n} + \bar{n}^2.$$
For comparison, the average number of fermions with energy $\varepsilon_i$ given by the Fermi–Dirac particle-energy distribution has a similar form:
$$\bar{n}_i(\varepsilon_i) = \frac{g_i}{e^{(\varepsilon_i - \mu)/k_\text{B} T} + 1}.$$
As mentioned above, both the Bose–Einstein distribution and the Fermi–Dirac distribution approach the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions: in that limit $e^{(\varepsilon_i - \mu)/k_\text{B} T} \gg 1$, so the $\mp 1$ in the denominator becomes negligible and both distributions reduce to the Maxwell–Boltzmann form
$$\bar{n}_i \approx g_i\, e^{-(\varepsilon_i - \mu)/k_\text{B} T}.$$
In addition to reducing to the Maxwell–Boltzmann distribution in the limit of high $T$ and low density, Bose–Einstein statistics also reduces to the Rayleigh–Jeans law distribution for low-energy states with $\varepsilon_i - \mu \ll k_\text{B} T$, namely
$$\begin{aligned}
\bar{n}_i &= \frac{g_i}{e^{(\varepsilon_i - \mu)/k_\text{B} T} - 1} \\
&\approx \frac{g_i}{(\varepsilon_i - \mu)/k_\text{B} T} = \frac{g_i k_\text{B} T}{\varepsilon_i - \mu}.
\end{aligned}$$
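Both limits are easy to confirm numerically. The sketch below (helper names are illustrative; units chosen so $k_\text{B} = 1$) compares the Bose–Einstein occupancy against the Maxwell–Boltzmann form at high energy and against $k_\text{B} T / (\varepsilon - \mu)$ near the low-energy end:

```python
import math

kB = 1.0  # work in units where the Boltzmann constant is 1

def bose_einstein(eps, mu, T, g=1.0):
    # Mean occupation of a level with energy eps and degeneracy g (requires eps > mu)
    return g / (math.exp((eps - mu) / (kB * T)) - 1.0)

def maxwell_boltzmann(eps, mu, T, g=1.0):
    return g * math.exp(-(eps - mu) / (kB * T))

mu, T = -1.0, 1.0

# High-energy limit: the -1 in the denominator is negligible, BE -> MB
assert abs(bose_einstein(20.0, mu, T) - maxwell_boltzmann(20.0, mu, T)) < 1e-9

# Low-energy (Rayleigh-Jeans) limit: BE -> kB*T / (eps - mu)
eps = mu + 1e-4
assert abs(bose_einstein(eps, mu, T) - kB * T / (eps - mu)) < 1.0
```

At intermediate energies the Bose–Einstein occupancy always exceeds the Maxwell–Boltzmann value, reflecting the bosonic tendency to bunch into the same state.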
Władysław Natanson in 1911 concluded that Planck's law requires indistinguishability of "units of energy", although he did not frame this in terms of Einstein's light quanta.[2][3]
While presenting a lecture at the University of Dhaka (in what was then British India and is now Bangladesh) on the theory of radiation and the ultraviolet catastrophe, Satyendra Nath Bose intended to show his students that the contemporary theory was inadequate, because it predicted results not in accordance with experimental results. During this lecture, Bose committed an error in applying the theory, which unexpectedly gave a prediction that agreed with experiment. The error was a simple mistake, similar to arguing that flipping two fair coins will produce two heads one-third of the time, that would appear obviously wrong to anyone with a basic understanding of statistics (remarkably, this error resembled the famous blunder by d'Alembert known from his Croix ou Pile article[4][5]). However, the results it predicted agreed with experiment, and Bose realized it might not be a mistake after all. For the first time, he took the position that the Maxwell–Boltzmann distribution would not be true for all microscopic particles at all scales. Thus, he studied the probability of finding particles in various states in phase space, where each state is a little patch having phase volume of $h^3$, and the position and momentum of the particles are not kept particularly separate but are considered as one variable.
Bose adapted this lecture into a short article called "Planck's law and the hypothesis of light quanta"[6][7] and submitted it to the Philosophical Magazine. However, the referee's report was negative, and the paper was rejected. Undaunted, he sent the manuscript to Albert Einstein, requesting publication in the Zeitschrift für Physik. Einstein immediately agreed, personally translated the article from English into German (Bose had earlier translated Einstein's article on the general theory of relativity from German to English), and saw to it that it was published. Bose's theory achieved respect when Einstein sent his own paper in support of Bose's to the Zeitschrift für Physik, asking that they be published together. The paper came out in 1924.[8]
The reason Bose produced accurate results was that since photons are indistinguishable from each other, one cannot treat any two photons having equal quantum numbers (e.g., polarization and momentum vector) as two distinct identifiable photons. Bose originally had a factor of 2 for the possible spin states, but Einstein changed it to polarization.[9] By analogy, if in an alternate universe coins were to behave like photons and other bosons, the probability of producing two heads would indeed be one-third, as would the probability of getting a head and a tail, which equals one-half for conventional (classical, distinguishable) coins. Bose's "error" leads to what is now called Bose–Einstein statistics.
Bose and Einstein extended the idea to atoms, and this led to the prediction of the existence of phenomena which became known as the Bose–Einstein condensate, a dense collection of bosons (which are particles with integer spin, named after Bose), which was demonstrated to exist by experiment in 1995.
In the microcanonical ensemble, one considers a system with fixed energy, volume, and number of particles. We take a system composed of $N = \sum_i n_i$ identical bosons, $n_i$ of which have energy $\varepsilon_i$ and are distributed over $g_i$ levels or states with the same energy $\varepsilon_i$, i.e. $g_i$ is the degeneracy associated with energy $\varepsilon_i$, the total energy being $E = \sum_i n_i \varepsilon_i$. Calculation of the number of arrangements of $n_i$ particles distributed among $g_i$ states is a problem of combinatorics. Since particles are indistinguishable in the quantum mechanical context here, the number of ways of arranging $n_i$ particles in $g_i$ boxes (for the $i$th energy level) would be:
$$w_{i,\text{BE}} = \frac{(n_i + g_i - 1)!}{n_i!\, (g_i - 1)!} = C^{n_i + g_i - 1}_{n_i},$$
where $C^m_k$ is the $k$-combination of a set with $m$ elements. The total number of arrangements in an ensemble of bosons is simply the product of the binomial coefficients $C^{n_i + g_i - 1}_{n_i}$ above over all the energy levels, i.e.
$$W_\text{BE} = \prod_i w_{i,\text{BE}} = \prod_i \frac{(n_i + g_i - 1)!}{(g_i - 1)!\, n_i!}.$$
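This "stars and bars" count can be confirmed by brute-force enumeration of occupation tuples (the `count_boson_arrangements` helper is illustrative, not a standard library API):

```python
from itertools import product
from math import comb

def count_boson_arrangements(n, g):
    # Brute force: count occupation tuples (n_1, ..., n_g) of g states
    # whose occupancies sum to n indistinguishable particles.
    return sum(1 for occ in product(range(n + 1), repeat=g) if sum(occ) == n)

n, g = 5, 4
assert count_boson_arrangements(n, g) == comb(n + g - 1, n)   # both give 56
```

The enumeration grows as $(n+1)^g$, so it is only a sanity check for small cases; the closed form $\binom{n + g - 1}{n}$ is what the derivation uses.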
The maximum number of arrangements, determining the corresponding occupation numbers $n_i$, is obtained by maximizing the entropy, or equivalently, setting $\mathrm{d}(\ln W_\text{BE}) = 0$ and taking the subsidiary conditions $N = \sum n_i$, $E = \sum_i n_i \varepsilon_i$ into account (as Lagrange multipliers).[10] The result for $n_i \gg 1$, $g_i \gg 1$, $n_i/g_i = O(1)$ is the Bose–Einstein distribution.
The Bose–Einstein distribution, which applies only to a quantum system of non-interacting bosons, is naturally derived from the grand canonical ensemble without any approximations.[11] In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature $T$ and chemical potential $\mu$ fixed by the reservoir).
Due to the non-interacting quality, each available single-particle level (with energy level $\epsilon$) forms a separate thermodynamic system in contact with the reservoir. That is, the number of particles within the overall system that occupy a given single-particle state forms a sub-ensemble that is also a grand canonical ensemble; hence, it may be analysed through the construction of a grand partition function.
Every single-particle state is of a fixed energy, $\varepsilon$. As the sub-ensemble associated with a single-particle state varies by the number of particles only, it is clear that the total energy of the sub-ensemble is also directly proportional to the number of particles in the single-particle state; where $N$ is the number of particles, the total energy of the sub-ensemble will then be $N\varepsilon$. Beginning with the standard expression for a grand partition function and replacing $E$ with $N\varepsilon$, the grand partition function takes the form
$$\mathcal{Z} = \sum_N \exp((N\mu - N\varepsilon)/k_\text{B} T) = \sum_N \exp(N(\mu - \varepsilon)/k_\text{B} T).$$
This formula applies to fermionic systems as well as bosonic systems. Fermi–Dirac statistics arises when considering the effect of thePauli exclusion principle: whilst the number of fermions occupying the same single-particle state can only be either 1 or 0, the number of bosons occupying a single particle state may be any integer. Thus, the grand partition function for bosons can be considered ageometric seriesand may be evaluated as such:Z=∑N=0∞exp(N(μ−ε)/kBT)=∑N=0∞[exp((μ−ε)/kBT)]N=11−exp((μ−ε)/kBT).{\displaystyle {\begin{aligned}{\mathcal {Z}}&=\sum _{N=0}^{\infty }\exp(N(\mu -\varepsilon )/k_{\text{B}}T)=\sum _{N=0}^{\infty }[\exp((\mu -\varepsilon )/k_{\text{B}}T)]^{N}\\&={\frac {1}{1-\exp((\mu -\varepsilon )/k_{\text{B}}T)}}.\end{aligned}}}
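The geometric-series evaluation above is easy to check numerically. The following sketch (parameter values are illustrative, not from the text; energies and kBT in the same arbitrary units) compares a truncated sum over occupation numbers with the closed form, which is valid whenever μ < ε:

```python
import math

def grand_partition_bose(mu, eps, kT, n_max=2000):
    """Truncated sum over occupation numbers N of exp(N*(mu - eps)/kT)."""
    return sum(math.exp(n * (mu - eps) / kT) for n in range(n_max + 1))

def grand_partition_closed(mu, eps, kT):
    """Closed form 1 / (1 - exp((mu - eps)/kT)); requires mu < eps."""
    return 1.0 / (1.0 - math.exp((mu - eps) / kT))
```

Because the ratio exp((μ − ε)/kBT) is less than 1, the truncated sum converges rapidly to the closed form.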
Note that the geometric series is convergent only ife(μ−ε)/kBT<1{\displaystyle e^{(\mu -\varepsilon )/k_{\text{B}}T}<1}, including the case whereε=0{\displaystyle \varepsilon =0}. This implies that the chemical potential for the Bose gas must be negative, i.e.,μ<0{\displaystyle \mu <0}, whereas the Fermi gas is allowed to take both positive and negative values for the chemical potential.[12]
The average particle number for that single-particle substate is given by⟨N⟩=kBT1Z(∂Z∂μ)V,T=1exp((ε−μ)/kBT)−1{\displaystyle \langle N\rangle =k_{\text{B}}T{\frac {1}{\mathcal {Z}}}\left({\frac {\partial {\mathcal {Z}}}{\partial \mu }}\right)_{V,T}={\frac {1}{\exp((\varepsilon -\mu )/k_{\text{B}}T)-1}}}This result applies for each single-particle level and thus forms the Bose–Einstein distribution for the entire state of the system.[13][14]
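The identity ⟨N⟩ = kBT (1/𝒵)(∂𝒵/∂μ) can be verified with a finite difference. This sketch (illustrative values; `occupancy_from_Z` uses a central difference on ln 𝒵, which equals (1/𝒵)∂𝒵/∂μ times 1) compares the numerical derivative against the closed-form Bose–Einstein occupation:

```python
import math

def occupancy_bose(eps, mu, kT):
    """Bose-Einstein mean occupation 1/(exp((eps - mu)/kT) - 1)."""
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def occupancy_from_Z(eps, mu, kT, h=1e-6):
    """kT * d(ln Z)/d(mu), evaluated with a central finite difference."""
    def lnZ(m):
        # ln of the closed-form grand partition function for one level
        return -math.log(1.0 - math.exp((m - eps) / kT))
    return kT * (lnZ(mu + h) - lnZ(mu - h)) / (2 * h)
```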
Thevariancein particle number,σN2=⟨N2⟩−⟨N⟩2{\textstyle \sigma _{N}^{2}=\langle N^{2}\rangle -\langle N\rangle ^{2}}, is:σN2=kBT(d⟨N⟩dμ)V,T=exp((ε−μ)/kBT)(exp((ε−μ)/kBT)−1)2=⟨N⟩(1+⟨N⟩).{\displaystyle \sigma _{N}^{2}=k_{\text{B}}T\left({\frac {d\langle N\rangle }{d\mu }}\right)_{V,T}={\frac {\exp((\varepsilon -\mu )/k_{\text{B}}T)}{(\exp((\varepsilon -\mu )/k_{\text{B}}T)-1)^{2}}}=\langle N\rangle (1+\langle N\rangle ).}
As a result, for highly occupied states the standard deviation of the particle number of an energy level is very large, slightly larger than the particle number itself: σN≈⟨N⟩{\displaystyle \sigma _{N}\approx \langle N\rangle }. This large uncertainty is due to the fact that the probability distribution for the number of bosons in a given energy level is a geometric distribution; somewhat counterintuitively, the most probable value for N is always 0. (In contrast, classical particles have instead a Poisson distribution in particle number for a given state, with a much smaller uncertainty of σN,classical=⟨N⟩{\textstyle \sigma _{N,{\rm {classical}}}={\sqrt {\langle N\rangle }}}, and with the most-probable N value being near ⟨N⟩{\displaystyle \langle N\rangle }.)
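The geometric-distribution claims above can be confirmed directly. This sketch (illustrative parameter values) builds P(N) = (1 − x)xⁿ with x = exp((μ − ε)/kBT), then checks that the variance equals ⟨N⟩(1 + ⟨N⟩) and that N = 0 is the most probable value:

```python
import math

def bose_number_distribution(eps, mu, kT, n_max=5000):
    """Geometric distribution P(N) = (1 - x) * x**N with x = exp((mu-eps)/kT)."""
    x = math.exp((mu - eps) / kT)
    return [(1 - x) * x**n for n in range(n_max + 1)]

def mean_and_variance(probs):
    """Mean and variance of a distribution given as a list of probabilities."""
    mean = sum(n * p for n, p in enumerate(probs))
    var = sum(n * n * p for n, p in enumerate(probs)) - mean**2
    return mean, var
```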
It is also possible to derive approximate Bose–Einstein statistics in thecanonical ensemble. These derivations are lengthy and only yield the above results in the asymptotic limit of a large number of particles. The reason is that the total number of bosons is fixed in the canonical ensemble. The Bose–Einstein distribution in this case can be derived as in most texts by maximization, but the mathematically best derivation is by theDarwin–Fowler methodof mean values as emphasized by Dingle.[15]See also Müller-Kirsten.[10]The fluctuations of the ground state in the condensed region are however markedly different in the canonical and grand-canonical ensembles.[16]
Suppose we have a number of energy levels, labeled by indexi{\displaystyle i}, each level having energyεi{\displaystyle \varepsilon _{i}}and containing a total ofni{\displaystyle n_{i}}particles. Suppose each level containsgi{\displaystyle g_{i}}distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta, in which case they are distinguishable from each other, yet they can still have the same energy.
The value ofgi{\displaystyle g_{i}}associated with leveli{\displaystyle i}is called the "degeneracy" of that energy level. Any number of bosons can occupy the same sublevel.
Let w(n,g){\displaystyle w(n,g)} be the number of ways of distributing n{\displaystyle n} particles among the g{\displaystyle g} sublevels of an energy level. There is only one way of distributing n{\displaystyle n} particles among one sublevel, therefore w(n,1)=1{\displaystyle w(n,1)=1}. It is easy to see that there are (n+1){\displaystyle (n+1)} ways of distributing n{\displaystyle n} particles in two sublevels, which we will write as: w(n,2)=(n+1)!n!1!.{\displaystyle w(n,2)={\frac {(n+1)!}{n!1!}}.}
With a little thought (seeNotesbelow) it can be seen that the number of ways of distributingn{\displaystyle n}particles in three sublevels isw(n,3)=w(n,2)+w(n−1,2)+⋯+w(1,2)+w(0,2){\displaystyle w(n,3)=w(n,2)+w(n-1,2)+\cdots +w(1,2)+w(0,2)}so thatw(n,3)=∑k=0nw(n−k,2)=∑k=0n(n−k+1)!(n−k)!1!=(n+2)!n!2!{\displaystyle w(n,3)=\sum _{k=0}^{n}w(n-k,2)=\sum _{k=0}^{n}{\frac {(n-k+1)!}{(n-k)!1!}}={\frac {(n+2)!}{n!2!}}}where we have used the followingtheoreminvolvingbinomial coefficients:∑k=0n(k+a)!k!a!=(n+a+1)!n!(a+1)!.{\displaystyle \sum _{k=0}^{n}{\frac {(k+a)!}{k!a!}}={\frac {(n+a+1)!}{n!(a+1)!}}.}
Continuing this process, we can see thatw(n,g){\displaystyle w(n,g)}is just a binomial coefficient
(SeeNotesbelow)w(n,g)=(n+g−1)!n!(g−1)!.{\displaystyle w(n,g)={\frac {(n+g-1)!}{n!(g-1)!}}.}
For example, the population numbers for two particles in three sublevels are 200, 110, 101, 020, 011, or 002 for a total of six which equals 4!/(2!2!). The number of ways that a set of occupation numbersni{\displaystyle n_{i}}can be realized is the product of the ways that each individual energy level can be populated:W=∏iw(ni,gi)=∏i(ni+gi−1)!ni!(gi−1)!≈∏i(ni+gi)!ni!(gi)!{\displaystyle W=\prod _{i}w(n_{i},g_{i})=\prod _{i}{\frac {(n_{i}+g_{i}-1)!}{n_{i}!(g_{i}-1)!}}\approx \prod _{i}{\frac {(n_{i}+g_{i})!}{n_{i}!(g_{i})!}}}where the approximation assumes thatni≫1{\displaystyle n_{i}\gg 1}.
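The closed form for w(n, g) can be checked by brute-force enumeration of multisets; this short sketch counts occupation patterns directly and compares against the binomial expression:

```python
from itertools import combinations_with_replacement
from math import comb

def w_bruteforce(n, g):
    """Count the ways to place n identical bosons in g sublevels
    by enumerating multisets of size n drawn from g sublevels."""
    return sum(1 for _ in combinations_with_replacement(range(g), n))

def w_closed(n, g):
    """Closed form (n + g - 1)! / (n! (g - 1)!)."""
    return comb(n + g - 1, n)
```

For the example in the text, both give w(2, 3) = 6 and w(4, 3) = 15.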
Following the same procedure used in deriving theMaxwell–Boltzmann statistics, we wish to find the set ofni{\displaystyle n_{i}}for whichWis maximised, subject to the constraint that there be a fixed total number of particles, and a fixed total energy. The maxima ofW{\displaystyle W}andln(W){\displaystyle \ln(W)}occur at the same value ofni{\displaystyle n_{i}}and, since it is easier to accomplish mathematically, we will maximise the latter function instead. We constrain our solution usingLagrange multipliersforming the function:f(ni)=ln(W)+α(N−∑ni)+β(E−∑niεi){\displaystyle f(n_{i})=\ln(W)+\alpha (N-\sum n_{i})+\beta (E-\sum n_{i}\varepsilon _{i})}
Using theni≫1{\displaystyle n_{i}\gg 1}approximation and usingStirling's approximationfor the factorials(x!≈xxe−x2πx){\displaystyle \left(x!\approx x^{x}\,e^{-x}\,{\sqrt {2\pi x}}\right)}givesf(ni)=∑i(ni+gi)ln(ni+gi)−niln(ni)+α(N−∑ni)+β(E−∑niεi)+K,{\displaystyle f(n_{i})=\sum _{i}(n_{i}+g_{i})\ln(n_{i}+g_{i})-n_{i}\ln(n_{i})+\alpha \left(N-\sum n_{i}\right)+\beta \left(E-\sum n_{i}\varepsilon _{i}\right)+K,}whereKis the sum of a number of terms which are not functions of theni{\displaystyle n_{i}}. Taking the derivative with respect toni{\displaystyle n_{i}}, and setting the result to zero and solving forni{\displaystyle n_{i}}, yields the Bose–Einstein population numbers:ni=gieα+βεi−1.{\displaystyle n_{i}={\frac {g_{i}}{e^{\alpha +\beta \varepsilon _{i}}-1}}.}
By a process similar to that outlined in theMaxwell–Boltzmann statisticsarticle, it can be seen that:dlnW=αdN+βdE{\displaystyle d\ln W=\alpha \,dN+\beta \,dE}which, using Boltzmann's famous relationshipS=kBlnW{\displaystyle S=k_{\text{B}}\,\ln W}becomes a statement of thesecond law of thermodynamicsat constant volume, and it follows thatβ=1kBT{\displaystyle \beta ={\frac {1}{k_{\text{B}}T}}}andα=−μkBT{\displaystyle \alpha =-{\frac {\mu }{k_{\text{B}}T}}}whereSis theentropy,μ{\displaystyle \mu }is thechemical potential,kBis theBoltzmann constantandTis thetemperature, so that finally:ni=gie(εi−μ)/kBT−1.{\displaystyle n_{i}={\frac {g_{i}}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}-1}}.}
Note that the above formula is sometimes written:ni=gieεi/kBT/z−1,{\displaystyle n_{i}={\frac {g_{i}}{e^{\varepsilon _{i}/k_{\text{B}}T}/z-1}},}wherez=exp(μ/kBT){\displaystyle z=\exp(\mu /k_{\text{B}}T)}is the absoluteactivity, as noted by McQuarrie.[17]
Also note that when the particle numbers are not conserved, removing the conservation of particle numbers constraint is equivalent to settingα{\displaystyle \alpha }and therefore the chemical potentialμ{\displaystyle \mu }to zero. This will be the case for photons and massive particles in mutual equilibrium and the resulting distribution will be thePlanck distribution.
A much simpler way to think of the Bose–Einstein distribution function is to consider that n particles are denoted by identical balls and g shells are marked by g − 1 line partitions. It is clear that the permutations of these n balls and g − 1 partitions will give different ways of arranging bosons in different energy levels. Say, for 3 (= n) particles and 3 (= g) shells, so that (g − 1) = 2, the arrangement might be |●●|●, or ||●●●, or |●|●●, etc. Hence the number of distinct permutations of n + (g − 1) objects which have n identical items and (g − 1) identical items will be: (g−1+n)!(g−1)!n!{\displaystyle {\frac {(g-1+n)!}{(g-1)!n!}}}
The purpose of these notes is to clarify some aspects of the derivation of the Bose–Einstein distribution for beginners. The enumeration of cases (or ways) in the Bose–Einstein distribution can be recast as follows. Consider a game of dice throwing in which there are n{\displaystyle n} dice, with each die taking values in the set {1,…,g}{\displaystyle \{1,\dots ,g\}}, for g≥1{\displaystyle g\geq 1}. The constraints of the game are that the value of a die i{\displaystyle i}, denoted by mi{\displaystyle m_{i}}, has to be greater than or equal to the value of die (i−1){\displaystyle (i-1)}, denoted by mi−1{\displaystyle m_{i-1}}, in the previous throw, i.e., mi≥mi−1{\displaystyle m_{i}\geq m_{i-1}}. Thus a valid sequence of die throws can be described by an n-tuple (m1,m2,…,mn){\displaystyle (m_{1},m_{2},\dots ,m_{n})}, such that mi≥mi−1{\displaystyle m_{i}\geq m_{i-1}}. Let S(n,g){\displaystyle S(n,g)} denote the set of these valid n-tuples: S(n,g)={(m1,m2,…,mn)|mi≥mi−1,mi∈{1,…,g}}.{\displaystyle S(n,g)={\Big \{}(m_{1},m_{2},\dots ,m_{n})\,{\Big |}\,m_{i}\geq m_{i-1},\ m_{i}\in \{1,\dots ,g\}{\Big \}}.}
Then the quantityw(n,g){\displaystyle w(n,g)}(defined aboveas the number of ways to distributen{\displaystyle n}particles among theg{\displaystyle g}sublevels of an energy level) is the cardinality ofS(n,g){\displaystyle S(n,g)}, i.e., the number of elements (or validn-tuples) inS(n,g){\displaystyle S(n,g)}. Thus the problem of finding an expression forw(n,g){\displaystyle w(n,g)}becomes the problem of counting the elements inS(n,g){\displaystyle S(n,g)}.
Examplen= 4,g= 3:S(4,3)={(1111),(1112),(1113)⏟(a),(1122),(1123),(1133)⏟(b),(1222),(1223),(1233),(1333)⏟(c),(2222),(2223),(2233),(2333),(3333)⏟(d)}{\displaystyle S(4,3)=\left\{\underbrace {(1111),(1112),(1113)} _{(a)},\underbrace {(1122),(1123),(1133)} _{(b)},\underbrace {(1222),(1223),(1233),(1333)} _{(c)},\underbrace {(2222),(2223),(2233),(2333),(3333)} _{(d)}\right\}}w(4,3)=15{\displaystyle w(4,3)=15}(there are15{\displaystyle 15}elements inS(4,3){\displaystyle S(4,3)})
Subset(a){\displaystyle (a)}is obtained by fixing all indicesmi{\displaystyle m_{i}}to1{\displaystyle 1}, except for the last index,mn{\displaystyle m_{n}}, which is incremented from1{\displaystyle 1}tog=3{\displaystyle g=3}. Subset(b){\displaystyle (b)}is obtained by fixingm1=m2=1{\displaystyle m_{1}=m_{2}=1}, and incrementingm3{\displaystyle m_{3}}from2{\displaystyle 2}tog=3{\displaystyle g=3}. Due to the constraintmi≥mi−1{\displaystyle m_{i}\geq m_{i-1}}on the indices inS(n,g){\displaystyle S(n,g)}, the indexm4{\displaystyle m_{4}}must automatically take values in{2,3}{\displaystyle \left\{2,3\right\}}. The construction of subsets(c){\displaystyle (c)}and(d){\displaystyle (d)}follows in the same manner.
Each element ofS(4,3){\displaystyle S(4,3)}can be thought of as amultisetof cardinalityn=4{\displaystyle n=4}; the elements of such multiset are taken from the set{1,2,3}{\displaystyle \left\{1,2,3\right\}}of cardinalityg=3{\displaystyle g=3}, and the number of such multisets is themultiset coefficient⟨34⟩=(3+4−13−1)=(3+4−14)=6!4!2!=15{\displaystyle \left\langle {\begin{matrix}3\\4\end{matrix}}\right\rangle ={3+4-1 \choose 3-1}={3+4-1 \choose 4}={\frac {6!}{4!2!}}=15}
More generally, each element ofS(n,g){\displaystyle S(n,g)}is amultisetof cardinalityn{\displaystyle n}(number of dice) with elements taken from the set{1,…,g}{\displaystyle \left\{1,\dots ,g\right\}}of cardinalityg{\displaystyle g}(number of possible values of each die), and the number of such multisets, i.e.,w(n,g){\displaystyle w(n,g)}is themultiset coefficient
w(n,g)=⟨gn⟩=(g+n−1g−1)=(g+n−1n)=(g+n−1)!n!(g−1)!{\displaystyle w(n,g)=\left\langle {\begin{matrix}g\\n\end{matrix}}\right\rangle ={g+n-1 \choose g-1}={g+n-1 \choose n}={\frac {(g+n-1)!}{n!(g-1)!}}}
which is exactly the same as theformulaforw(n,g){\displaystyle w(n,g)}, as derived above with the aid of atheoreminvolving binomial coefficients, namely
∑k=0n(k+a)!k!a!=(n+a+1)!n!(a+1)!.{\displaystyle \sum _{k=0}^{n}{\frac {(k+a)!}{k!a!}}={\frac {(n+a+1)!}{n!(a+1)!}}.}
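The dice-game construction above can be enumerated directly; this sketch lists the valid non-decreasing n-tuples and confirms the count w(4, 3) = 15 from the worked example:

```python
from itertools import product

def S(n, g):
    """All non-decreasing n-tuples with entries drawn from {1, ..., g}."""
    return [t for t in product(range(1, g + 1), repeat=n)
            if all(t[i] >= t[i - 1] for i in range(1, n))]
```

The brute force is exponential in n (it filters all gⁿ tuples), so it is only a check for small cases, not a counting method.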
To understand the decomposition
w(n,g)=∑k=0nw(n−k,g−1)=w(n,g−1)+w(n−1,g−1)+⋯+w(1,g−1)+w(0,g−1){\displaystyle w(n,g)=\sum _{k=0}^{n}w(n-k,g-1)=w(n,g-1)+w(n-1,g-1)+\dots +w(1,g-1)+w(0,g-1)}
or for example,n=4{\displaystyle n=4}andg=3{\displaystyle g=3}w(4,3)=w(4,2)+w(3,2)+w(2,2)+w(1,2)+w(0,2),{\displaystyle w(4,3)=w(4,2)+w(3,2)+w(2,2)+w(1,2)+w(0,2),}
let us rearrange the elements ofS(4,3){\displaystyle S(4,3)}as followsS(4,3)={(1111),(1112),(1122),(1222),(2222)⏟(α),(1113=),(1123=),(1223=),(2223=)⏟(β),(1133==),(1233==),(2233==)⏟(γ),(1333===),(2333===)⏟(δ)(3333====)⏟(ω)}.{\displaystyle S(4,3)=\left\{\underbrace {(1111),(1112),(1122),(1222),(2222)} _{(\alpha )},\underbrace {(111{\color {Red}{\underset {=}{3}}}),(112{\color {Red}{\underset {=}{3}}}),(122{\color {Red}{\underset {=}{3}}}),(222{\color {Red}{\underset {=}{3}}})} _{(\beta )},\underbrace {(11{\color {Red}{\underset {==}{33}}}),(12{\color {Red}{\underset {==}{33}}}),(22{\color {Red}{\underset {==}{33}}})} _{(\gamma )},\underbrace {(1{\color {Red}{\underset {===}{333}}}),(2{\color {Red}{\underset {===}{333}}})} _{(\delta )}\underbrace {({\color {Red}{\underset {====}{3333}}})} _{(\omega )}\right\}.}
Clearly, the subset(α){\displaystyle (\alpha )}ofS(4,3){\displaystyle S(4,3)}is the same as the setS(4,2)={(1111),(1112),(1122),(1222),(2222)}.{\displaystyle S(4,2)=\left\{(1111),(1112),(1122),(1222),(2222)\right\}.}
By deleting the indexm4=3{\displaystyle m_{4}=3}(shown inred with double underline) in the subset(β){\displaystyle (\beta )}ofS(4,3){\displaystyle S(4,3)}, one obtains the setS(3,2)={(111),(112),(122),(222)}.{\displaystyle S(3,2)=\left\{(111),(112),(122),(222)\right\}.}
In other words, there is a one-to-one correspondence between the subset(β){\displaystyle (\beta )}ofS(4,3){\displaystyle S(4,3)}and the setS(3,2){\displaystyle S(3,2)}. We write(β)⟷S(3,2).{\displaystyle (\beta )\longleftrightarrow S(3,2).}
Similarly, it is easy to see that(γ)⟷S(2,2)={(11),(12),(22)}{\displaystyle (\gamma )\longleftrightarrow S(2,2)=\left\{(11),(12),(22)\right\}}(δ)⟷S(1,2)={(1),(2)}{\displaystyle (\delta )\longleftrightarrow S(1,2)=\left\{(1),(2)\right\}}(ω)⟷S(0,2)={}=∅.{\displaystyle (\omega )\longleftrightarrow S(0,2)=\{\}=\varnothing .}
Thus we can writeS(4,3)=⋃k=04S(4−k,2){\displaystyle S(4,3)=\bigcup _{k=0}^{4}S(4-k,2)}or more generally,
S(n,g)=⋃k=0nS(n−k,g−1);{\displaystyle S(n,g)=\bigcup _{k=0}^{n}S(n-k,g-1);}
and since the setsS(i,g−1),fori=0,…,n{\displaystyle S(i,g-1),{\text{ for }}i=0,\dots ,n}are non-intersecting, we thus have
w(n,g)=∑k=0nw(n−k,g−1),{\displaystyle w(n,g)=\sum _{k=0}^{n}w(n-k,g-1),}
with the convention that
w(0,g)=1 ∀g, and w(n,1)=1 ∀n.{\displaystyle w(0,g)=1\ ,\forall g,{\text{ and }}w(n,1)=1\ ,\forall n.}
Continuing the process, we arrive at the following formula w(n,g)=∑k1=0n∑k2=0n−k1w(n−k1−k2,g−2)=∑k1=0n∑k2=0n−k1⋯∑kg−1=0n−∑j=1g−2kjw(n−∑i=1g−1ki,1).{\displaystyle w(n,g)=\sum _{k_{1}=0}^{n}\sum _{k_{2}=0}^{n-k_{1}}w(n-k_{1}-k_{2},g-2)=\sum _{k_{1}=0}^{n}\sum _{k_{2}=0}^{n-k_{1}}\cdots \sum _{k_{g-1}=0}^{n-\sum _{j=1}^{g-2}k_{j}}w\left(n-\sum _{i=1}^{g-1}k_{i},1\right).}Using the convention w(m,1)=1{\displaystyle w(m,1)=1} above, we obtain the formula
w(n,g)=∑k1=0n∑k2=0n−k1⋯∑kg−1=0n−∑j=1g−2kj1,{\displaystyle w(n,g)=\sum _{k_{1}=0}^{n}\sum _{k_{2}=0}^{n-k_{1}}\cdots \sum _{k_{g-1}=0}^{n-\sum _{j=1}^{g-2}k_{j}}1,}
keeping in mind that for q{\displaystyle q} and p{\displaystyle p} being constants, we have ∑k=0qp=p(q+1).{\displaystyle \sum _{k=0}^{q}p=p\,(q+1).}
It can then be verified that this nested-sum formula and the closed-form binomial expression give the same result for w(4,3){\displaystyle w(4,3)}, w(3,3){\displaystyle w(3,3)}, w(3,2){\displaystyle w(3,2)}, etc.
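The recursion itself is also easy to check in code. This sketch implements w(n, g) = Σₖ w(n − k, g − 1), basing the recursion at w(n, 1) = 1, and compares against the binomial closed form:

```python
from math import comb

def w_recursive(n, g):
    """Recursion w(n, g) = sum_{k=0}^{n} w(n - k, g - 1), with w(n, 1) = 1."""
    if g == 1:
        return 1
    return sum(w_recursive(n - k, g - 1) for k in range(n + 1))
```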
Viewed as a pure probability distribution, the Bose–Einstein distribution has found application in other fields.
Fermi–Dirac statisticsis a type ofquantum statisticsthat applies to thephysicsof asystemconsisting of many non-interacting,identical particlesthat obey thePauli exclusion principle. A result is the Fermi–Dirac distribution of particles overenergy states. It is named afterEnrico FermiandPaul Dirac, each of whom derived the distribution independently in 1926.[1][2]Fermi–Dirac statistics is a part of the field ofstatistical mechanicsand uses the principles ofquantum mechanics.
Fermi–Dirac statistics applies to identical and indistinguishable particles withhalf-integerspin(1/2, 3/2, etc.), calledfermions, inthermodynamic equilibrium. For the case of negligible interaction between particles, the system can be described in terms of single-particleenergy states. A result is the Fermi–Dirac distribution of particles over these states where no two particles can occupy the same state, which has a considerable effect on the properties of the system. Fermi–Dirac statistics is most commonly applied toelectrons, a type of fermion withspin 1/2.
A counterpart to Fermi–Dirac statistics isBose–Einstein statistics, which applies to identical and indistinguishable particles with integer spin (0, 1, 2, etc.) calledbosons. In classical physics,Maxwell–Boltzmann statisticsis used to describe particles that are identical and treated as distinguishable. For both Bose–Einstein and Maxwell–Boltzmann statistics, more than one particle can occupy the same state, unlike Fermi–Dirac statistics.
Before the introduction of Fermi–Dirac statistics in 1926, understanding some aspects of electron behavior was difficult due to seemingly contradictory phenomena. For example, the electronicheat capacityof a metal atroom temperatureseemed to come from 100 times fewerelectronsthan were in theelectric current.[3]It was also difficult to understand why theemission currentsgenerated by applying high electric fields to metals at room temperature were almost independent of temperature.
The difficulty encountered by theDrude model, the electronic theory of metals at that time, was due to considering that electrons were (according to classical statistics theory) all equivalent. In other words, it was believed that each electron contributed to the specific heat an amount on the order of theBoltzmann constantkB.
This problem remained unsolved until the development of Fermi–Dirac statistics.
Fermi–Dirac statistics was first published in 1926 byEnrico Fermi[1]andPaul Dirac.[2]According toMax Born,Pascual Jordandeveloped in 1925 the same statistics, which he calledPaulistatistics, but it was not published in a timely manner.[4][5][6]According to Dirac, it was first studied by Fermi, and Dirac called it "Fermi statistics" and the corresponding particles "fermions".[7]
Fermi–Dirac statistics was applied in 1926 byRalph Fowlerto describe the collapse of astarto awhite dwarf.[8]In 1927Arnold Sommerfeldapplied it to electrons in metals and developed thefree electron model,[9]and in 1928 Fowler andLothar Nordheimapplied it tofield electron emissionfrom metals.[10]Fermi–Dirac statistics continue to be an important part of physics.
For a system of identical fermions in thermodynamic equilibrium, the average number of fermions in a single-particle stateiis given by theFermi–Dirac (F–D) distribution:[11][nb 1]
n¯i=1e(εi−μ)/kBT+1,{\displaystyle {\bar {n}}_{i}={\frac {1}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}+1}},}
wherekBis theBoltzmann constant,Tis the absolutetemperature,εiis the energy of the single-particle statei, andμis thetotal chemical potential. The distribution is normalized by the condition
N=∑in¯i,{\displaystyle N=\sum _{i}{\bar {n}}_{i},}which can be used to express μ=μ(T,N){\displaystyle \mu =\mu (T,N)}; μ{\displaystyle \mu } can assume either a positive or negative value.[12]
At zero absolute temperature,μis equal to theFermi energyplus the potential energy per fermion, provided it is in aneighbourhoodof positive spectral density. In the case of a spectral gap, such as for electrons in a semiconductor, the point of symmetryμis typically called theFermi levelor—for electrons—theelectrochemical potential, and will be located in the middle of the gap.[13][14]
The Fermi–Dirac distribution is only valid if the number of fermions in the system is large enough so that adding one more fermion to the system has negligible effect onμ.[15]Since the Fermi–Dirac distribution was derived using thePauli exclusion principle, which allows at most one fermion to occupy each possible state, a result is that0<n¯i<1{\displaystyle 0<{\bar {n}}_{i}<1}.[nb 2]
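The bound 0 < n̄ᵢ < 1 and the particle-hole symmetry of the distribution about ε = μ can be checked numerically. A minimal sketch (illustrative values for ε, μ, and kBT):

```python
import math

def fermi_dirac(eps, mu, kT):
    """Mean occupation of a single-particle state: 1/(exp((eps - mu)/kT) + 1)."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)
```

At ε = μ the occupation is exactly 1/2, and f(μ + d) + f(μ − d) = 1 for any offset d.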
The variance of the number of particles in state i can be calculated from the above expression for n¯i{\displaystyle {\bar {n}}_{i}}:[17][18] V(ni)=kBT∂n¯i∂μ=n¯i(1−n¯i).{\displaystyle V(n_{i})=k_{\text{B}}T{\frac {\partial {\bar {n}}_{i}}{\partial \mu }}={\bar {n}}_{i}(1-{\bar {n}}_{i}).}
From the Fermi–Dirac distribution of particles over states, one can find the distribution of particles over energy.[nb 3] The average number of fermions with energy εi{\displaystyle \varepsilon _{i}} can be found by multiplying the Fermi–Dirac distribution n¯i{\displaystyle {\bar {n}}_{i}} by the degeneracy gi{\displaystyle g_{i}} (i.e. the number of states with energy εi{\displaystyle \varepsilon _{i}}),[19] n¯(εi)=gin¯i=gie(εi−μ)/kBT+1.{\displaystyle {\bar {n}}(\varepsilon _{i})=g_{i}{\bar {n}}_{i}={\frac {g_{i}}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}+1}}.}
Whengi≥2{\displaystyle g_{i}\geq 2}, it is possible thatn¯(εi)>1{\displaystyle {\bar {n}}(\varepsilon _{i})>1}, since there is more than one state that can be occupied by fermions with the same energyεi{\displaystyle \varepsilon _{i}}.
When a quasi-continuum of energies ε{\displaystyle \varepsilon } has an associated density of states g(ε){\displaystyle g(\varepsilon )} (i.e. the number of states per unit energy range per unit volume[20]), the average number of fermions per unit energy range per unit volume is N¯(ε)=g(ε)F(ε),{\displaystyle {\bar {\mathcal {N}}}(\varepsilon )=g(\varepsilon )F(\varepsilon ),}
where F(ε){\displaystyle F(\varepsilon )} is called the Fermi function and is the same function that is used for the Fermi–Dirac distribution n¯i{\displaystyle {\bar {n}}_{i}}:[21] F(ε)=1e(ε−μ)/kBT+1,{\displaystyle F(\varepsilon )={\frac {1}{e^{(\varepsilon -\mu )/k_{\text{B}}T}+1}},}
so that N¯(ε)=g(ε)e(ε−μ)/kBT+1.{\displaystyle {\bar {\mathcal {N}}}(\varepsilon )={\frac {g(\varepsilon )}{e^{(\varepsilon -\mu )/k_{\text{B}}T}+1}}.}
The Fermi–Dirac distribution approaches the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions: n¯i=1e(εi−μ)/kBT+1≈e−(εi−μ)/kBTwhene(εi−μ)/kBT≫1.{\displaystyle {\bar {n}}_{i}={\frac {1}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}+1}}\approx e^{-(\varepsilon _{i}-\mu )/k_{\text{B}}T}\quad {\text{when }}e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}\gg 1.}
The classical regime, where Maxwell–Boltzmann statistics can be used as an approximation to Fermi–Dirac statistics, is found by considering the situation that is far from the limit imposed by the Heisenberg uncertainty principle for a particle's position and momentum. For example, in semiconductor physics, when the density of states of the conduction band is much higher than the doping concentration, the energy gap between the conduction band and the Fermi level can be calculated using Maxwell–Boltzmann statistics. Otherwise, if the doping concentration is not negligible compared to the density of states of the conduction band, the Fermi–Dirac distribution should be used instead for accurate calculation. It can then be shown that the classical situation prevails when the concentration of particles corresponds to an average interparticle separation R¯{\displaystyle {\bar {R}}} that is much greater than the average de Broglie wavelength λ¯{\displaystyle {\bar {\lambda }}} of the particles:[22] R¯≫λ¯≈h3mkBT,{\displaystyle {\bar {R}}\gg {\bar {\lambda }}\approx {\frac {h}{\sqrt {3mk_{\text{B}}T}}},}
wherehis thePlanck constant, andmis themass of a particle.
For the case of conduction electrons in a typical metal atT= 300K(i.e. approximately room temperature), the system is far from the classical regime becauseR¯≈λ¯/25{\displaystyle {\bar {R}}\approx {\bar {\lambda }}/25}. This is due to the small mass of the electron and the high concentration (i.e. smallR¯{\displaystyle {\bar {R}}}) of conduction electrons in the metal. Thus Fermi–Dirac statistics is needed for conduction electrons in a typical metal.[22]
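The metal-electron estimate can be reproduced with a few lines of arithmetic. In this sketch the conduction-electron density is an assumed, copper-like value (about 8.5 × 10²⁸ m⁻³) that does not come from the text:

```python
import math

# Physical constants in SI units.
H = 6.626e-34     # Planck constant, J*s
K_B = 1.381e-23   # Boltzmann constant, J/K
M_E = 9.109e-31   # electron mass, kg

def de_broglie_thermal(m, T):
    """Average thermal de Broglie wavelength, h / sqrt(3 m kB T)."""
    return H / math.sqrt(3 * m * K_B * T)

def mean_spacing(density):
    """Average interparticle separation, n^(-1/3)."""
    return density ** (-1.0 / 3.0)
```

With these numbers the wavelength at 300 K comes out a few nanometres while the spacing is a few ångströms, so the ratio is of order 25, consistent with the text's claim that the system is far from the classical regime.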
Another example of a system that is not in the classical regime is the system that consists of the electrons of a star that has collapsed to a white dwarf. Although the temperature of white dwarf is high (typicallyT=10000Kon its surface[23]), its high electron concentration and the small mass of each electron precludes using a classical approximation, and again Fermi–Dirac statistics is required.[8]
The Fermi–Dirac distribution, which applies only to a quantum system of non-interacting fermions, is easily derived from thegrand canonical ensemble.[24]In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperatureTand chemical potentialμfixed by the reservoir).
Due to the non-interacting quality, each available single-particle level (with energy levelϵ) forms a separate thermodynamic system in contact with the reservoir.
In other words, each single-particle level is a separate, tiny grand canonical ensemble.
By the Pauli exclusion principle, there are only two possible microstates for the single-particle level: no particle (energy E = 0), or one particle (energy E = ε). The resulting partition function for that single-particle level therefore has just two terms: Z=1+e−(ε−μ)/kBT,{\displaystyle {\mathcal {Z}}=1+e^{-(\varepsilon -\mu )/k_{\text{B}}T},}
and the average particle number for that single-particle level substate is given by n¯=kBT1Z(∂Z∂μ)V,T=1e(ε−μ)/kBT+1.{\displaystyle {\bar {n}}=k_{\text{B}}T{\frac {1}{\mathcal {Z}}}\left({\frac {\partial {\mathcal {Z}}}{\partial \mu }}\right)_{V,T}={\frac {1}{e^{(\varepsilon -\mu )/k_{\text{B}}T}+1}}.}
This result applies for each single-particle level, and thus gives the Fermi–Dirac distribution for the entire state of the system.[24]
The variance in particle number (due to thermal fluctuations) may also be derived (the particle number has a simple Bernoulli distribution): ⟨(ΔN)2⟩=n¯(1−n¯).{\displaystyle {\big \langle }(\Delta N)^{2}{\big \rangle }={\bar {n}}(1-{\bar {n}}).}
This quantity is important in transport phenomena such as theMott relationsfor electrical conductivity andthermoelectric coefficientfor anelectron gas,[25]where the ability of an energy level to contribute to transport phenomena is proportional to⟨(ΔN)2⟩{\displaystyle {\big \langle }(\Delta N)^{2}{\big \rangle }}.
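The two-term partition function and the Bernoulli variance can be sketched together (illustrative values; the occupation and variance follow directly from the two microstates):

```python
import math

def fd_occupation_and_variance(eps, mu, kT):
    """Two-term grand partition function for one fermionic level:
    Z = 1 + exp(-(eps - mu)/kT); the mean occupation and the
    Bernoulli variance n(1 - n) follow from it."""
    boltz = math.exp(-(eps - mu) / kT)
    Z = 1.0 + boltz
    n_bar = boltz / Z
    variance = n_bar * (1.0 - n_bar)
    return n_bar, variance
```

At ε = μ this gives occupation 1/2 and the maximum variance 1/4, the point where the level contributes most to transport.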
It is also possible to derive Fermi–Dirac statistics in the canonical ensemble. Consider a many-particle system composed of N identical fermions that have negligible mutual interaction and are in thermal equilibrium.[15] Since there is negligible interaction between the fermions, the energy ER{\displaystyle E_{R}} of a state R{\displaystyle R} of the many-particle system can be expressed as a sum of single-particle energies: ER=∑rnrεr,{\displaystyle E_{R}=\sum _{r}n_{r}\varepsilon _{r},}
wherenr{\displaystyle n_{r}}is called the occupancy number and is the number of particles in the single-particle stater{\displaystyle r}with energyεr{\displaystyle \varepsilon _{r}}. The summation is over all possible single-particle statesr{\displaystyle r}.
The probability that the many-particle system is in the state R{\displaystyle R} is given by the normalized canonical distribution:[26] PR=e−βER∑R′e−βER′,{\displaystyle P_{R}={\frac {e^{-\beta E_{R}}}{\sum _{R'}e^{-\beta E_{R'}}}},}
where β=1/kBT{\displaystyle \beta =1/k_{\text{B}}T}, e−βER{\displaystyle e^{-\beta E_{R}}} is called the Boltzmann factor, and the summation is over all possible states R′{\displaystyle R'} of the many-particle system. The average value for an occupancy number ni{\displaystyle n_{i}} is[26] n¯i=∑RniPR.{\displaystyle {\bar {n}}_{i}=\sum _{R}n_{i}P_{R}.}
Note that the state R{\displaystyle R} of the many-particle system can be specified by the particle occupancy of the single-particle states, i.e. by specifying n1,n2,…,{\displaystyle n_{1},n_{2},\ldots ,} so that ER=n1ε1+n2ε2+⋯ande−βER=e−β(n1ε1+n2ε2+⋯),{\displaystyle E_{R}=n_{1}\varepsilon _{1}+n_{2}\varepsilon _{2}+\cdots \quad {\text{and}}\quad e^{-\beta E_{R}}=e^{-\beta (n_{1}\varepsilon _{1}+n_{2}\varepsilon _{2}+\cdots )},}
and the equation for n¯i{\displaystyle {\bar {n}}_{i}} becomes n¯i=∑n1,n2,…nie−β(n1ε1+n2ε2+⋯)∑n1,n2,…e−β(n1ε1+n2ε2+⋯),{\displaystyle {\bar {n}}_{i}={\frac {\sum _{n_{1},n_{2},\dots }n_{i}\,e^{-\beta (n_{1}\varepsilon _{1}+n_{2}\varepsilon _{2}+\cdots )}}{\sum _{n_{1},n_{2},\dots }e^{-\beta (n_{1}\varepsilon _{1}+n_{2}\varepsilon _{2}+\cdots )}}},}
where the summation is over all combinations of values of n1,n2,…{\displaystyle n_{1},n_{2},\ldots } which obey the Pauli exclusion principle, and nr=0{\displaystyle n_{r}=0} or 1{\displaystyle 1} for each r{\displaystyle r}. Furthermore, each combination of values of n1,n2,…{\displaystyle n_{1},n_{2},\ldots } satisfies the constraint that the total number of particles is N{\displaystyle N}: ∑rnr=N.{\displaystyle \sum _{r}n_{r}=N.}
Rearranging the summations, n¯i=∑ni=01nie−βniεi∑(i)e−β(n1ε1+n2ε2+⋯)∑ni=01e−βniεi∑(i)e−β(n1ε1+n2ε2+⋯),{\displaystyle {\bar {n}}_{i}={\frac {\sum _{n_{i}=0}^{1}n_{i}e^{-\beta n_{i}\varepsilon _{i}}\sum ^{(i)}e^{-\beta (n_{1}\varepsilon _{1}+n_{2}\varepsilon _{2}+\cdots )}}{\sum _{n_{i}=0}^{1}e^{-\beta n_{i}\varepsilon _{i}}\sum ^{(i)}e^{-\beta (n_{1}\varepsilon _{1}+n_{2}\varepsilon _{2}+\cdots )}}},}where the exponents inside ∑(i){\displaystyle \textstyle \sum ^{(i)}} omit the niεi{\displaystyle n_{i}\varepsilon _{i}} term,
where the upper index (i){\displaystyle (i)} on the summation sign indicates that the sum is not over ni{\displaystyle n_{i}} and is subject to the constraint that the total number of particles associated with the summation is Ni=N−ni{\displaystyle N_{i}=N-n_{i}}. Note that ∑(i){\displaystyle \textstyle \sum ^{(i)}} still depends on ni{\displaystyle n_{i}} through the Ni{\displaystyle N_{i}} constraint, since in one case ni=0{\displaystyle n_{i}=0} and ∑(i){\displaystyle \textstyle \sum ^{(i)}} is evaluated with Ni=N,{\displaystyle N_{i}=N,} while in the other case ni=1,{\displaystyle n_{i}=1,} and ∑(i){\displaystyle \textstyle \sum ^{(i)}} is evaluated with Ni=N−1.{\displaystyle N_{i}=N-1.} To simplify the notation and to clearly indicate that ∑(i){\displaystyle \textstyle \sum ^{(i)}} still depends on ni{\displaystyle n_{i}} through N−ni,{\displaystyle N-n_{i},} define Zi(N−ni)≡∑(i)e−β(n1ε1+n2ε2+⋯),{\displaystyle Z_{i}(N-n_{i})\equiv \textstyle \sum ^{(i)}e^{-\beta (n_{1}\varepsilon _{1}+n_{2}\varepsilon _{2}+\cdots )},}
so that the previous expression for n¯i{\displaystyle {\bar {n}}_{i}} can be rewritten and evaluated in terms of the Zi{\displaystyle Z_{i}}: n¯i=e−βεiZi(N−1)Zi(N)+e−βεiZi(N−1)=1[Zi(N)/Zi(N−1)]eβεi+1.{\displaystyle {\bar {n}}_{i}={\frac {e^{-\beta \varepsilon _{i}}Z_{i}(N-1)}{Z_{i}(N)+e^{-\beta \varepsilon _{i}}Z_{i}(N-1)}}={\frac {1}{[Z_{i}(N)/Z_{i}(N-1)]\,e^{\beta \varepsilon _{i}}+1}}.}
The following approximation[27] will be used to find an expression to substitute for Zi(N)/Zi(N−1){\displaystyle Z_{i}(N)/Z_{i}(N-1)}: lnZi(N−1)≃lnZi(N)−αi,{\displaystyle \ln Z_{i}(N-1)\simeq \ln Z_{i}(N)-\alpha _{i},}
whereαi≡∂lnZi(N)∂N.{\displaystyle \alpha _{i}\equiv {\frac {\partial \ln Z_{i}(N)}{\partial N}}.}
If the number of particles N{\displaystyle N} is large enough so that the change in the chemical potential μ{\displaystyle \mu } is very small when a particle is added to the system, then αi≃−μ/kBT.{\displaystyle \alpha _{i}\simeq -\mu /k_{\text{B}}T.}[28] Applying the exponential function to both sides, substituting for αi{\displaystyle \alpha _{i}} and rearranging, Zi(N)Zi(N−1)≃eαi=e−μ/kBT.{\displaystyle {\frac {Z_{i}(N)}{Z_{i}(N-1)}}\simeq e^{\alpha _{i}}=e^{-\mu /k_{\text{B}}T}.}
Substituting the above into the equation for n¯i{\displaystyle {\bar {n}}_{i}} and using a previous definition of β{\displaystyle \beta } to substitute 1/kBT{\displaystyle 1/k_{\text{B}}T} for β{\displaystyle \beta }, results in the Fermi–Dirac distribution: n¯i=1e(εi−μ)/kBT+1.{\displaystyle {\bar {n}}_{i}={\frac {1}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}+1}}.}
Like theMaxwell–Boltzmann distributionand theBose–Einstein distribution, the Fermi–Dirac distribution can also be derived by theDarwin–Fowler methodof mean values.[29]
The same result can be achieved by directly analyzing the multiplicities of the system and using Lagrange multipliers.[30]
Suppose we have a number of energy levels, labeled by indexi, each level having energy εiand containing a total ofniparticles. Suppose each level containsgidistinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta (i.e. their momenta may be along different directions), in which case they are distinguishable from each other, yet they can still have the same energy. The value ofgiassociated with leveliis called the "degeneracy" of that energy level. ThePauli exclusion principlestates that only one fermion can occupy any such sublevel.
The number of ways of distributing ni indistinguishable particles among the gi sublevels of an energy level, with a maximum of one particle per sublevel, is given by the binomial coefficient, using its combinatorial interpretation: w(ni,gi)=gi!ni!(gi−ni)!.{\displaystyle w(n_{i},g_{i})={\frac {g_{i}!}{n_{i}!(g_{i}-n_{i})!}}.}
For example, distributing two particles in three sublevels will give population numbers of 110, 101, or 011 for a total of three ways which equals 3!/(2!1!).
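The fermionic count can be confirmed by brute force just as in the bosonic case, now enumerating subsets (at most one particle per sublevel) rather than multisets:

```python
from itertools import combinations
from math import comb

def w_fermi_bruteforce(n, g):
    """Count the ways to place n fermions in g sublevels,
    at most one particle per sublevel."""
    return sum(1 for _ in combinations(range(g), n))
```

For the example in the text, two particles in three sublevels give 3 arrangements, and placing more particles than sublevels gives 0.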
The number of ways that a set of occupation numbers ni can be realized is the product of the ways that each individual energy level can be populated: W=∏iw(ni,gi)=∏igi!ni!(gi−ni)!.{\displaystyle W=\prod _{i}w(n_{i},g_{i})=\prod _{i}{\frac {g_{i}!}{n_{i}!(g_{i}-n_{i})!}}.}
Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of ni for which W is maximized, subject to the constraint that there be a fixed number of particles and a fixed energy. We constrain our solution using Lagrange multipliers forming the function: f(ni)=ln(W)+α(N−∑ni)+β(E−∑niεi).{\displaystyle f(n_{i})=\ln(W)+\alpha \left(N-\sum n_{i}\right)+\beta \left(E-\sum n_{i}\varepsilon _{i}\right).}
Using Stirling's approximation for the factorials, taking the derivative with respect to ni, setting the result to zero, and solving for ni yields the Fermi–Dirac population numbers: ni=gieα+βεi+1.{\displaystyle n_{i}={\frac {g_{i}}{e^{\alpha +\beta \varepsilon _{i}}+1}}.}
By a process similar to that outlined in the Maxwell–Boltzmann statistics article, it can be shown thermodynamically that β=1kBT{\displaystyle \beta ={\tfrac {1}{k_{\text{B}}T}}} and α=−μkBT{\displaystyle \alpha =-{\tfrac {\mu }{k_{\text{B}}T}}}, so that finally, the probability that a state will be occupied is n¯i=nigi=1e(εi−μ)/kBT+1.{\displaystyle {\bar {n}}_{i}={\frac {n_{i}}{g_{i}}}={\frac {1}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}+1}}.}
In computer architecture, a trace cache or execution trace cache is a specialized instruction cache which stores the dynamic stream of instructions known as a trace. It helps in increasing the instruction fetch bandwidth and decreasing power consumption (in the case of the Intel Pentium 4) by storing traces of instructions that have already been fetched and decoded.[1] A trace processor[2] is an architecture designed around the trace cache and processes instructions at trace-level granularity. The formal mathematical theory of traces is described by trace monoids.
The earliest academic publication on the trace cache was "Trace Cache: a Low Latency Approach to High Bandwidth Instruction Fetching".[1] This widely acknowledged paper was presented by Eric Rotenberg, Steve Bennett, and Jim Smith at the 1996 International Symposium on Microarchitecture (MICRO) conference. An earlier publication is US patent 5381533,[3] by Alex Peleg and Uri Weiser of Intel, "Dynamic flow instruction cache memory organized around trace segments independent of virtual address line", a continuation of an application filed in 1992 and later abandoned.
Wider superscalar processors demand multiple instructions to be fetched in a single cycle for higher performance. Instructions to be fetched are not always in contiguous memory locations (basic blocks) because of branch and jump instructions. So processors need additional logic and hardware support to fetch and align such instructions from non-contiguous basic blocks. If multiple branches are predicted as not-taken, then processors can fetch instructions from multiple contiguous basic blocks in a single cycle. However, if any of the branches is predicted as taken, then the processor should fetch instructions from the taken path in that same cycle. This limits the fetch capability of a processor.
Consider four basic blocks (A, B, C, D), as shown in the figure, that correspond to a simple if-else loop. These blocks will be stored contiguously as ABCD in memory. If the branch D is predicted not-taken, the fetch unit can fetch the basic blocks A, B, C, which are placed contiguously. However, if D is predicted taken, the fetch unit has to fetch A, B, D, which are non-contiguously placed. Hence, fetching these non-contiguously placed blocks in a single cycle is very difficult. In situations like these, the trace cache comes to the processor's aid.
Once fetched, the trace cache stores the instructions in their dynamic sequence. When these instructions are encountered again, the trace cache allows the instruction fetch unit of a processor to fetch several basic blocks from it without having to worry about branches in the execution flow. Instructions are stored in the trace cache either after they have been decoded or as they are retired. However, the instruction sequence is speculative if it is stored just after the decode stage.
A trace, also called a dynamic instruction sequence, is an entry in the trace cache. It can be characterized by a maximum number of instructions and a maximum number of basic blocks. Traces can start at any dynamic instruction. Multiple traces can have the same starting instruction, i.e., the same starting program counter (PC), but instructions from different basic blocks depending on the branch outcomes. For the figure above, ABC and ABD are valid traces. They both start at the same PC (the address of A) and have different basic blocks depending on D's prediction.
Traces usually terminate when one of the following occurs:
A single trace will have the following information:
The following are factors that need to be considered when designing a trace cache.
A trace cache is not on the critical path of instruction fetch.[4]
Trace lines are stored in the trace cache based on the PC of the first instruction in the trace and a set of branch predictions. This allows for storing different trace paths that start at the same address, each representing different branch outcomes. This method of tagging provides path associativity to the trace cache. Another method is to use only the starting PC as the tag in the trace cache. In the instruction fetch stage of a pipeline, the current PC along with a set of branch predictions is checked in the trace cache for a hit. If there is a hit, a trace line is supplied to the fetch unit, which does not have to go to a regular cache or to memory for these instructions. The trace cache continues to feed the fetch unit until the trace line ends or until there is a misprediction in the pipeline. If there is a miss, a new trace starts to be built.
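The lookup scheme described above can be sketched as a map keyed by the starting PC together with the branch predictions. This toy model (class and method names are ours) only illustrates path associativity; it is not any real hardware design:

```python
class TraceCache:
    """Toy path-associative trace cache: lines are tagged by the
    starting PC plus the tuple of branch predictions, so two traces
    starting at the same address (e.g. ABC vs ABD) can coexist."""

    def __init__(self):
        self.lines = {}  # (start_pc, predictions) -> list of instructions

    def fill(self, start_pc, predictions, instructions):
        self.lines[(start_pc, tuple(predictions))] = list(instructions)

    def fetch(self, start_pc, predictions):
        """Return the cached trace on a hit, or None on a miss."""
        return self.lines.get((start_pc, tuple(predictions)))

tc = TraceCache()
tc.fill(0x40, (False,), ["A", "B", "C"])   # branch D predicted not-taken
tc.fill(0x40, (True,),  ["A", "B", "D"])   # branch D predicted taken
print(tc.fetch(0x40, (True,)))   # hit: ['A', 'B', 'D']
print(tc.fetch(0x80, (True,)))   # miss: None
```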
The Pentium 4's execution trace cache stores micro-operations resulting from decoding x86 instructions, providing also the functionality of a micro-operation cache. Having this, the next time an instruction is needed, it does not have to be decoded into micro-ops again.[5]
The disadvantages of the trace cache are:
Within the L1 cache of the NetBurst CPUs, Intel incorporated its execution trace cache.[7][8] It stores decoded micro-operations, so that when executing a new instruction, instead of fetching and decoding the instruction again, the CPU directly accesses the decoded micro-ops from the trace cache, thereby saving considerable time. Moreover, the micro-ops are cached in their predicted path of execution, which means that when instructions are fetched by the CPU from the cache, they are already present in the correct order of execution. Intel later introduced a similar but simpler concept with Sandy Bridge called the micro-operation cache (UOP cache). | https://en.wikipedia.org/wiki/Trace_cache
In mathematical logic, algebraic logic is the reasoning obtained by manipulating equations with free variables.
What is now usually called classical algebraic logic focuses on the identification and algebraic description of models appropriate for the study of various logics (in the form of classes of algebras that constitute the algebraic semantics for these deductive systems) and connected problems like representation and duality. Well-known results like the representation theorem for Boolean algebras and Stone duality fall under the umbrella of classical algebraic logic (Czelakowski 2003).
Works in the more recent abstract algebraic logic (AAL) focus on the process of algebraization itself, like classifying various forms of algebraizability using the Leibniz operator (Czelakowski 2003).
A homogeneous binary relation is found in the power set of X × X for some set X, while a heterogeneous relation is found in the power set of X × Y, where X ≠ Y. Whether a given relation holds for two individuals is one bit of information, so relations are studied with Boolean arithmetic. Elements of the power set are partially ordered by inclusion, and the lattice of these sets becomes an algebra through relative multiplication or composition of relations.
"The basic operations are set-theoretic union, intersection and complementation, the relative multiplication, and conversion."[1]
The conversion refers to the converse relation, which always exists, contrary to function theory. A given relation may be represented by a logical matrix; the converse relation is then represented by the transpose matrix. A relation obtained as the composition of two others is then represented by the logical matrix obtained by matrix multiplication using Boolean arithmetic.
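Using 0/1 matrices, composition and conversion can be sketched in a few lines (the function names are ours):

```python
def compose(R, S):
    """Relative multiplication of logical matrices with Boolean
    arithmetic: (R;S)[i][k] = OR over j of (R[i][j] AND S[j][k])."""
    return [[int(any(R[i][j] and S[j][k] for j in range(len(S))))
             for k in range(len(S[0]))] for i in range(len(R))]

def converse(R):
    """The converse relation is represented by the transpose matrix."""
    return [list(row) for row in zip(*R)]

R = [[1, 0], [1, 1]]
print(converse(R))               # [[1, 1], [0, 1]]
print(compose(R, converse(R)))   # [[1, 1], [1, 1]]
```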
An example of the calculus of relations arises in erotetics, the theory of questions. In the universe of utterances there are statements S and questions Q. There are two relations π and α from Q to S: q α a holds when a is a direct answer to question q. The other relation, q π p, holds when p is a presupposition of question q. The converse relation πT runs from S to Q, so that the composition πTα is a homogeneous relation on S.[2] The art of putting the right question to elicit a sufficient answer is recognized in Socratic method dialogue.
The description of the key binary relation properties has been formulated with the calculus of relations. The univalence property of functions describes a relation R that satisfies the formula RTR⊆I{\displaystyle R^{T}R\subseteq I}, where I is the identity relation on the range of R. The injective property corresponds to univalence of RT{\displaystyle R^{T}}, or the formula RRT⊆I{\displaystyle RR^{T}\subseteq I}, where this time I is the identity on the domain of R.
But a univalent relation is only a partial function, while a univalent total relation is a function. The formula for totality is I⊆RRT{\displaystyle I\subseteq RR^{T}}. Charles Loewner and Gunther Schmidt use the term mapping for a total, univalent relation.[3][4]
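With relations as 0/1 matrices whose rows are indexed by the domain, univalence and totality reduce to simple row conditions; a sketch under that representation (the function names are ours):

```python
def is_univalent(R):
    """R^T R ⊆ I: each domain element relates to at most one target,
    i.e. each row of the matrix contains at most one 1."""
    return all(sum(row) <= 1 for row in R)

def is_total(R):
    """I ⊆ R R^T: each domain element relates to at least one target,
    i.e. no row of the matrix is all zeros."""
    return all(any(row) for row in R)

def is_mapping(R):
    """A mapping in the Loewner/Schmidt sense: total and univalent."""
    return is_univalent(R) and is_total(R)

F = [[0, 1, 0], [0, 0, 1]]   # the function 0 -> 1, 1 -> 2
P = [[0, 1, 0], [0, 0, 0]]   # partial: the second element maps nowhere
print(is_mapping(F), is_mapping(P))  # True False
```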
The facility of complementary relations inspired Augustus De Morgan and Ernst Schröder to introduce equivalences using R¯{\displaystyle {\bar {R}}} for the complement of relation R. These equivalences provide alternative formulas for univalent relations (RI¯⊆R¯{\displaystyle R{\bar {I}}\subseteq {\bar {R}}}) and total relations (R¯⊆RI¯{\displaystyle {\bar {R}}\subseteq R{\bar {I}}}).
Therefore, mappings satisfy the formula R¯=RI¯{\displaystyle {\bar {R}}=R{\bar {I}}}. Schmidt uses this principle as "slipping below negation from the left".[5] For a mapping f, fA¯=fA¯{\displaystyle f{\bar {A}}={\overline {fA}}}.
The relation algebra structure, based in set theory, was transcended by Tarski with axioms describing it. Then he asked if every algebra satisfying the axioms could be represented by a set relation. The negative answer[6] opened the frontier of abstract algebraic logic.[7][8][9]
Algebraic logic treats algebraic structures, often bounded lattices, as models (interpretations) of certain logics, making logic a branch of order theory.
In algebraic logic:
In the table below, the left column contains one or more logical or mathematical systems, and the algebraic structures which are their models are shown on the right in the same row. Some of these structures are either Boolean algebras or proper extensions thereof. Modal and other nonclassical logics are typically modeled by what are called "Boolean algebras with operators".
Algebraic formalisms going beyond first-order logic in at least some respects include:
Algebraic logic is, perhaps, the oldest approach to formal logic, arguably beginning with a number of memoranda Leibniz wrote in the 1680s, some of which were published in the 19th century and translated into English by Clarence Lewis in 1918.[10]: 291–305 But nearly all of Leibniz's known work on algebraic logic was published only in 1903, after Louis Couturat discovered it in Leibniz's Nachlass. Parkinson (1966) and Loemker (1969) translated selections from Couturat's volume into English.
Modern mathematical logic began in 1847, with two pamphlets whose respective authors were George Boole[11] and Augustus De Morgan.[12] In 1870 Charles Sanders Peirce published the first of several works on the logic of relatives. Alexander Macfarlane published his Principles of the Algebra of Logic[13] in 1879, and in 1883, Christine Ladd, a student of Peirce at Johns Hopkins University, published "On the Algebra of Logic".[14] Logic turned more algebraic when binary relations were combined with composition of relations. For sets A and B, a relation over A and B is represented as a member of the power set of A × B, with properties described by Boolean algebra. The "calculus of relations"[9] is arguably the culmination of Leibniz's approach to logic. At the Hochschule Karlsruhe the calculus of relations was described by Ernst Schröder.[15] In particular he formulated Schröder rules, though De Morgan had anticipated them with his Theorem K.
In 1903 Bertrand Russell developed the calculus of relations and logicism as his version of pure mathematics based on the operations of the calculus as primitive notions.[16] The "Boole–Schröder algebra of logic" was developed at the University of California, Berkeley in a textbook by Clarence Lewis in 1918.[10] He treated the logic of relations as derived from the propositional functions of two or more variables.
Hugh MacColl, Gottlob Frege, Giuseppe Peano, and A. N. Whitehead all shared Leibniz's dream of combining symbolic logic, mathematics, and philosophy.
Some writings by Leopold Löwenheim and Thoralf Skolem on algebraic logic appeared after the 1910–13 publication of Principia Mathematica, and Tarski revived interest in relations with his 1941 essay "On the Calculus of Relations".[9]
According to Helena Rasiowa, "The years 1920–40 saw, in particular in the Polish school of logic, researches on non-classical propositional calculi conducted by what is termed the logical matrix method. Since logical matrices are certain abstract algebras, this led to the use of an algebraic method in logic."[17]
Brady (2000) discusses the rich historical connections between algebraic logic and model theory. The founders of model theory, Ernst Schröder and Leopold Löwenheim, were logicians in the algebraic tradition. Alfred Tarski, the founder of set-theoretic model theory as a major branch of contemporary mathematical logic, also:
In the practice of the calculus of relations, Jacques Riguet used algebraic logic to advance useful concepts: he extended the concept of an equivalence relation (on a set) to the heterogeneous case with the notion of a difunctional relation. Riguet also extended ordering to the heterogeneous context by his note that a staircase logical matrix has a complement that is also a staircase, and that the theorem of N. M. Ferrers follows from interpretation of the transpose of a staircase. Riguet generated rectangular relations by taking the outer product of logical vectors; these contribute to the non-enlargeable rectangles of formal concept analysis.
Leibniz had no influence on the rise of algebraic logic because his logical writings were little studied before the Parkinson and Loemker translations. Our present understanding of Leibniz as a logician stems mainly from the work of Wolfgang Lenzen, summarized in Lenzen (2004). To see how present-day work in logic and metaphysics can draw inspiration from, and shed light on, Leibniz's thought, see Zalta (2000).
Historical perspective | https://en.wikipedia.org/wiki/Algebraic_logic |
In mathematical logic, abstract model theory is a generalization of model theory that studies the general properties of extensions of first-order logic and their models.[1]
Abstract model theory provides an approach that allows us to step back and study a wide range of logics and their relationships.[2] The starting point for the study of abstract models, which yielded good examples, was Lindström's theorem.[3]
In 1974 Jon Barwise provided an axiomatization of abstract model theory.[4]
| https://en.wikipedia.org/wiki/Abstract_model_theory
In mathematics, a hierarchy is a set-theoretical object, consisting of a preorder defined on a set. This is often referred to as an ordered set, though that is an ambiguous term that many authors reserve for partially ordered sets or totally ordered sets. The term pre-ordered set is unambiguous, and is always synonymous with a mathematical hierarchy. The term hierarchy is used to stress a hierarchical relation among the elements.
Sometimes, a set comes equipped with a natural hierarchical structure. For example, the set of natural numbers N is equipped with a natural pre-order structure, where n≤n′{\displaystyle n\leq n'} whenever we can find some other number m{\displaystyle m} so that n+m=n′{\displaystyle n+m=n'}. That is, n′{\displaystyle n'} is bigger than n{\displaystyle n} only because we can get to n′{\displaystyle n'} from n{\displaystyle n} using m{\displaystyle m}. This idea can be applied to any commutative monoid. On the other hand, the set of integers Z requires a more sophisticated argument for its hierarchical structure, since we can always solve the equation n+m=n′{\displaystyle n+m=n'} by writing m=(n′−n){\displaystyle m=(n'-n)}.[citation needed]
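This preorder on N can be stated directly as a decidable condition; a small sketch (the helper name `leq` is ours):

```python
def leq(n: int, n_prime: int) -> bool:
    """n <= n' in the additive preorder on the naturals:
    there is some m >= 0 with n + m == n'."""
    return any(n + m == n_prime for m in range(n_prime + 1))

print(leq(3, 5), leq(5, 3))  # True False
```

For the natural numbers this coincides with the usual order, since the witness, when it exists, is m = n′ − n.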
A mathematical hierarchy (a pre-ordered set) should not be confused with the more general concept of a hierarchy in the social realm, particularly when one is constructing computational models that are used to describe real-world social, economic or political systems. These hierarchies, or complex networks, are much too rich to be described in the category Set of sets.[1] This is not just a pedantic claim; there are also mathematical hierarchies, in the general sense, that are not describable using set theory.[citation needed]
Other natural hierarchies arise in computer science, where the word refers to partially ordered sets whose elements are classes of objects of increasing complexity. In that case, the preorder defining the hierarchy is the class-containment relation. Containment hierarchies are thus special cases of hierarchies.
Individual elements of a hierarchy are often called levels. A hierarchy is said to be infinite if it has infinitely many distinct levels, but is said to collapse if it has only finitely many distinct levels.
In theoretical computer science, the time hierarchy is a classification of decision problems according to the amount of time required to solve them.
Tree-related topics:
Effective complexity hierarchies:
Ineffective complexity hierarchies:
In set theory or logic:
| https://en.wikipedia.org/wiki/Hierarchy_(mathematics)
In mathematical logic, model theory is the study of the relationship between formal theories (a collection of sentences in a formal language expressing statements about a mathematical structure), and their models (those structures in which the statements of the theory hold).[1] The aspects investigated include the number and size of models of a theory, the relationship of different models to each other, and their interaction with the formal language itself. In particular, model theorists also investigate the sets that can be defined in a model of a theory, and the relationship of such definable sets to each other.
As a separate discipline, model theory goes back to Alfred Tarski, who first used the term "Theory of Models" in publication in 1954.[2] Since the 1970s, the subject has been shaped decisively by Saharon Shelah's stability theory.
Compared to other areas of mathematical logic such as proof theory, model theory is often less concerned with formal rigour and closer in spirit to classical mathematics.
This has prompted the comment that "if proof theory is about the sacred, then model theory is about the profane".[3] The applications of model theory to algebraic and Diophantine geometry reflect this proximity to classical mathematics, as they often involve an integration of algebraic and model-theoretic results and techniques. Consequently, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature.
The most prominent scholarly organization in the field of model theory is the Association for Symbolic Logic.
This page focuses on finitary first-order model theory of infinite structures.
The relative emphasis placed on the class of models of a theory as opposed to the class of definable sets within a model fluctuated in the history of the subject, and the two directions are summarised by the pithy characterisations from 1973 and 1997 respectively:
where universal algebra stands for mathematical structures and logic for logical theories; and
where logical formulas are to definable sets what equations are to varieties over a field.[4]
Nonetheless, the interplay of classes of models and the sets definable in them has been crucial to the development of model theory throughout its history. For instance, while stability was originally introduced to classify theories by their numbers of models in a given cardinality, stability theory proved crucial to understanding the geometry of definable sets.
A first-order formula is built out of atomic formulas such as R(f(x,y),z){\displaystyle R(f(x,y),z)} or y=x+1{\displaystyle y=x+1} by means of the Boolean connectives ¬,∧,∨,→{\displaystyle \neg ,\land ,\lor ,\rightarrow } and prefixing of quantifiers ∀v{\displaystyle \forall v} or ∃v{\displaystyle \exists v}. A sentence is a formula in which each occurrence of a variable is in the scope of a corresponding quantifier. Examples of formulas are φ{\displaystyle \varphi } (or φ(x){\displaystyle \varphi (x)} to indicate that x{\displaystyle x} is the unbound variable in φ{\displaystyle \varphi }) and ψ{\displaystyle \psi } (or ψ(x){\displaystyle \psi (x)}), defined as follows:
(Note that the equality symbol has a double meaning here.) It is intuitively clear how to translate such formulas into mathematical meaning. In the semiring of natural numbers N{\displaystyle {\mathcal {N}}}, viewed as a structure with binary functions for addition and multiplication and constants for 0 and 1 of the natural numbers, an element n{\displaystyle n} satisfies the formula φ{\displaystyle \varphi } if and only if n{\displaystyle n} is a prime number. The formula ψ{\displaystyle \psi } similarly defines irreducibility. Tarski gave a rigorous definition, sometimes called "Tarski's definition of truth", for the satisfaction relation ⊨{\displaystyle \models }, so that one easily proves:
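To make the satisfaction relation concrete, here is a sketch that evaluates one standard first-order rendering of primality over an initial segment of the naturals (this is a common formulation, not necessarily the exact formula φ referred to above; the function name is ours):

```python
def satisfies_primality(n: int) -> bool:
    """n != 0, n != 1, and every factorisation n = u * v forces
    u == n or v == n; the quantifiers over u, v range over 0..n,
    which suffices for the divisors of n."""
    if n in (0, 1):
        return False
    return all(u == n or v == n
               for u in range(n + 1) for v in range(n + 1)
               if u * v == n)

print([n for n in range(12) if satisfies_primality(n)])  # the primes below 12
```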
A set T{\displaystyle T} of sentences is called a (first-order) theory, which takes the sentences in the set as its axioms. A theory is satisfiable if it has a model M⊨T{\displaystyle {\mathcal {M}}\models T}, i.e. a structure (of the appropriate signature) which satisfies all the sentences in the set T{\displaystyle T}. A complete theory is a theory that contains every sentence or its negation.
The complete theory of all sentences satisfied by a structure is also called the theory of that structure.
It's a consequence of Gödel's completeness theorem (not to be confused with his incompleteness theorems) that a theory has a model if and only if it is consistent, i.e. no contradiction is proved by the theory.
Therefore, model theorists often use "consistent" as a synonym for "satisfiable".
A signature or language is a set of non-logical symbols such that each symbol is either a constant symbol, or a function or relation symbol with a specified arity. Note that in some literature, constant symbols are considered as function symbols with zero arity, and hence are omitted. A structure is a set M{\displaystyle M} together with interpretations of each of the symbols of the signature as relations and functions on M{\displaystyle M} (not to be confused with the formal notion of an "interpretation" of one structure in another).
Example: A common signature for ordered rings is σor=(0,1,+,×,−,<){\displaystyle \sigma _{or}=(0,1,+,\times ,-,<)}, where 0{\displaystyle 0} and 1{\displaystyle 1} are 0-ary function symbols (also known as constant symbols), +{\displaystyle +} and ×{\displaystyle \times } are binary (= 2-ary) function symbols, −{\displaystyle -} is a unary (= 1-ary) function symbol, and <{\displaystyle <} is a binary relation symbol. Then, when these symbols are interpreted to correspond with their usual meaning on Q{\displaystyle \mathbb {Q} } (so that e.g. +{\displaystyle +} is a function from Q2{\displaystyle \mathbb {Q} ^{2}} to Q{\displaystyle \mathbb {Q} } and <{\displaystyle <} is a subset of Q2{\displaystyle \mathbb {Q} ^{2}}), one obtains a structure (Q,σor){\displaystyle (\mathbb {Q} ,\sigma _{or})}.
A structure N{\displaystyle {\mathcal {N}}} is said to model a set of first-order sentences T{\displaystyle T} in the given language if each sentence in T{\displaystyle T} is true in N{\displaystyle {\mathcal {N}}} with respect to the interpretation of the signature previously specified for N{\displaystyle {\mathcal {N}}}. (Again, not to be confused with the formal notion of an "interpretation" of one structure in another.) A model of T{\displaystyle T} is a structure that models T{\displaystyle T}.
A substructure A{\displaystyle {\mathcal {A}}} of a σ-structure B{\displaystyle {\mathcal {B}}} is a subset of its domain, closed under all functions in its signature σ, which is regarded as a σ-structure by restricting all functions and relations in σ to the subset.
This generalises the analogous concepts from algebra; for instance, a subgroup is a substructure in the signature with multiplication and inverse.
A substructure is said to be elementary if for any first-order formula φ{\displaystyle \varphi } and any elements a1, ..., an of A{\displaystyle {\mathcal {A}}}, A⊨φ(a1,…,an){\displaystyle {\mathcal {A}}\models \varphi (a_{1},\dots ,a_{n})} if and only if B⊨φ(a1,…,an){\displaystyle {\mathcal {B}}\models \varphi (a_{1},\dots ,a_{n})}.
In particular, if φ{\displaystyle \varphi } is a sentence and A{\displaystyle {\mathcal {A}}} an elementary substructure of B{\displaystyle {\mathcal {B}}}, then A⊨φ{\displaystyle {\mathcal {A}}\models \varphi } if and only if B⊨φ{\displaystyle {\mathcal {B}}\models \varphi }. Thus, an elementary substructure is a model of a theory exactly when the superstructure is a model.
Example: While the field of algebraic numbers Q¯{\displaystyle {\overline {\mathbb {Q} }}} is an elementary substructure of the field of complex numbers C{\displaystyle \mathbb {C} }, the rational field Q{\displaystyle \mathbb {Q} } is not, as we can express "There is a square root of 2" as a first-order sentence satisfied by C{\displaystyle \mathbb {C} } but not by Q{\displaystyle \mathbb {Q} }.
An embedding of a σ-structure A{\displaystyle {\mathcal {A}}} into another σ-structure B{\displaystyle {\mathcal {B}}} is a map f: A → B between the domains which can be written as an isomorphism of A{\displaystyle {\mathcal {A}}} with a substructure of B{\displaystyle {\mathcal {B}}}. If it can be written as an isomorphism with an elementary substructure, it is called an elementary embedding. Every embedding is an injective homomorphism, but the converse holds only if the signature contains no relation symbols, such as in groups or fields.
A field or a vector space can be regarded as a (commutative) group by simply ignoring some of its structure. The corresponding notion in model theory is that of a reduct of a structure to a subset of the original signature. The opposite relation is called an expansion: for example, the (additive) group of the rational numbers, regarded as a structure in the signature {+,0}, can be expanded to a field with the signature {×,+,1,0} or to an ordered group with the signature {+,0,<}.
Similarly, if σ' is a signature that extends another signature σ, then a complete σ'-theory can be restricted to σ by intersecting the set of its sentences with the set of σ-formulas. Conversely, a complete σ-theory can be regarded as a σ'-theory, and one can extend it (in more than one way) to a complete σ'-theory. The terms reduct and expansion are sometimes applied to this relation as well.
The compactness theorem states that a set of sentences S is satisfiable if every finite subset of S is satisfiable. The analogous statement with consistent instead of satisfiable is trivial, since every proof can have only a finite number of antecedents used in the proof. The completeness theorem allows us to transfer this to satisfiability. However, there are also several direct (semantic) proofs of the compactness theorem.
As a corollary (i.e., its contrapositive), the compactness theorem says that every unsatisfiable first-order theory has a finite unsatisfiable subset. This theorem is of central importance in model theory, where the words "by compactness" are commonplace.[5]
Another cornerstone of first-order model theory is the Löwenheim–Skolem theorem. According to the theorem, every infinite structure in a countable signature has a countable elementary substructure. Conversely, for any infinite cardinal κ, every infinite structure in a countable signature that is of cardinality less than κ can be elementarily embedded in another structure of cardinality κ (there is a straightforward generalisation to uncountable signatures). In particular, the Löwenheim–Skolem theorem implies that any theory in a countable signature with infinite models has a countable model as well as arbitrarily large models.[6]
In a certain sense made precise by Lindström's theorem, first-order logic is the most expressive logic for which both the Löwenheim–Skolem theorem and the compactness theorem hold.[7]
In model theory, definable sets are important objects of study. For instance, in N{\displaystyle \mathbb {N} } the formula
defines the subset of prime numbers, while the formula
defines the subset of even numbers.
In a similar way, formulas with n free variables define subsets of Mn{\displaystyle {\mathcal {M}}^{n}}. For example, in a field, the formula
defines the curve of all (x,y){\displaystyle (x,y)} such that y=x2{\displaystyle y=x^{2}}.
Both of the definitions mentioned here are parameter-free, that is, the defining formulas don't mention any fixed domain elements. However, one can also consider definitions with parameters from the model.
For instance, in R{\displaystyle \mathbb {R} }, the formula
uses the parameter π{\displaystyle \pi } from R{\displaystyle \mathbb {R} } to define a curve.[8]
In general, definable sets without quantifiers are easy to describe, while definable sets involving possibly nested quantifiers can be much more complicated.[9]
This makes quantifier elimination a crucial tool for analysing definable sets:
A theory T has quantifier elimination if every first-order formula φ(x1, ..., xn) over its signature is equivalent modulo T to a first-order formula ψ(x1, ..., xn) without quantifiers, i.e. ∀x1…∀xn(ϕ(x1,…,xn)↔ψ(x1,…,xn)){\displaystyle \forall x_{1}\dots \forall x_{n}(\phi (x_{1},\dots ,x_{n})\leftrightarrow \psi (x_{1},\dots ,x_{n}))} holds in all models of T.[10] If the theory of a structure has quantifier elimination, every set definable in a structure is definable by a quantifier-free formula over the same parameters as the original definition.
For example, the theory of algebraically closed fields in the signature σring = (×,+,−,0,1) has quantifier elimination.[11] This means that in an algebraically closed field, every formula is equivalent to a Boolean combination of equations between polynomials.
If a theory does not have quantifier elimination, one can add additional symbols to its signature so that it does. Axiomatisability and quantifier elimination results for specific theories, especially in algebra, were among the early landmark results of model theory.[12] But often, instead of quantifier elimination, a weaker property suffices:
A theory T is called model-complete if every substructure of a model of T which is itself a model of T is an elementary substructure. There is a useful criterion for testing whether a substructure is an elementary substructure, called the Tarski–Vaught test.[6] It follows from this criterion that a theory T is model-complete if and only if every first-order formula φ(x1, ..., xn) over its signature is equivalent modulo T to an existential first-order formula, i.e. a formula of the following form:
where ψ is quantifier-free. A theory that is not model-complete may have a model completion, which is a related model-complete theory that is not, in general, an extension of the original theory. A more general notion is that of a model companion.[13]
In every structure, every finite subset {a1,…,an}{\displaystyle \{a_{1},\dots ,a_{n}\}} is definable with parameters: simply use the formula x=a1∨⋯∨x=an{\displaystyle x=a_{1}\vee \dots \vee x=a_{n}}.
Since we can negate this formula, every cofinite subset (which includes all but finitely many elements of the domain) is also always definable.
This leads to the concept of a minimal structure.
A structure M{\displaystyle {\mathcal {M}}} is called minimal if every subset A⊆M{\displaystyle A\subseteq {\mathcal {M}}} definable with parameters from M{\displaystyle {\mathcal {M}}} is either finite or cofinite.
The corresponding concept at the level of theories is called strong minimality:
A theory T is called strongly minimal if every model of T is minimal.
A structure is called strongly minimal if the theory of that structure is strongly minimal. Equivalently, a structure is strongly minimal if every elementary extension is minimal.
Since the theory of algebraically closed fields has quantifier elimination, every definable subset of an algebraically closed field is definable by a quantifier-free formula in one variable. Quantifier-free formulas in one variable express Boolean combinations of polynomial equations in one variable, and since a nontrivial polynomial equation in one variable has only a finite number of solutions, the theory of algebraically closed fields is strongly minimal.[14]
On the other hand, the field R{\displaystyle \mathbb {R} } of real numbers is not minimal: consider, for instance, the definable set given by the formula φ(x):∃y(y×y=x){\displaystyle \varphi (x):\exists y\,(y\times y=x)}.
This defines the subset of non-negative real numbers, which is neither finite nor cofinite.
One can in fact useφ{\displaystyle \varphi }to define arbitrary intervals on the real number line.
It turns out that these suffice to represent every definable subset ofR{\displaystyle \mathbb {R} }.[15]This generalisation of minimality has been very useful in the model theory of ordered structures.
Adensely totally orderedstructureM{\displaystyle {\mathcal {M}}}in a signature including a symbol for the order relation is calledo-minimalif every subsetA⊆M{\displaystyle A\subseteq {\mathcal {M}}}definable with parameters fromM{\displaystyle {\mathcal {M}}}is a finite union of points and intervals.[16]
Particularly important are those definable sets that are also substructures, i.e. contain all constants and are closed under function application. For instance, one can study the definable subgroups of a certain group.
However, there is no need to limit oneself to substructures in the same signature. Since formulas withnfree variables define subsets ofMn{\displaystyle {\mathcal {M}}^{n}},n-ary relations can also be definable. Functions are definable if the function graph is a definable relation, and constantsa∈M{\displaystyle a\in {\mathcal {M}}}are definable if there is a formulaφ(x){\displaystyle \varphi (x)}such thatais the only element ofM{\displaystyle {\mathcal {M}}}such thatφ(a){\displaystyle \varphi (a)}is true.
In this way, one can study definable groups and fields in general structures, for instance, which has been important in geometric stability theory.
One can even go one step further, and move beyond immediate substructures.
Given a mathematical structure, there are very often associated structures which can be constructed as a quotient of part of the original structure via an equivalence relation. An important example is a quotient group of a group.
One might say that to understand the full structure one must understand these quotients. When the equivalence relation is definable, we can give the previous sentence a precise meaning. We say that these structures areinterpretable.
A key fact is that one can translate sentences from the language of the interpreted structures to the language of the original structure. Thus one can show that if a structureM{\displaystyle {\mathcal {M}}}interprets another whose theory is undecidable, thenM{\displaystyle {\mathcal {M}}}itself is undecidable.[17]
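The quotient construction behind interpretability can be sketched concretely. The following example (illustrative only, not from the article) forms the quotient of the cyclic group Z/6 by the subgroup {0, 3}: the cosets of the definable congruence are the elements of the interpreted structure, and the group operation descends to them.

```python
# Sketch: the quotient of Z/6 by its subgroup {0, 3}, an example of a
# structure obtained as a quotient of part of the original structure
# via an equivalence relation (congruence modulo the subgroup).

G = list(range(6))                     # Z/6 under addition mod 6
N = {0, 3}                             # a (normal) subgroup

def coset(a):
    return frozenset((a + n) % 6 for n in N)

cosets = {coset(a) for a in G}
assert len(cosets) == 3                # the quotient has 6/2 = 3 elements

# The group operation descends to cosets: coset(a) + coset(b) = coset(a + b).
for a in G:
    for b in G:
        assert coset((a + b) % 6) == frozenset(
            (x + y) % 6 for x in coset(a) for y in coset(b))
```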
For a sequence of elementsa1,…,an{\displaystyle a_{1},\dots ,a_{n}}of a structureM{\displaystyle {\mathcal {M}}}and a subsetAofM{\displaystyle {\mathcal {M}}}, one can consider the set of all first-order formulasφ(x1,…,xn){\displaystyle \varphi (x_{1},\dots ,x_{n})}with parameters inAthat are satisfied bya1,…,an{\displaystyle a_{1},\dots ,a_{n}}. This is called thecomplete (n-)type realised bya1,…,an{\displaystyle a_{1},\dots ,a_{n}}over A.
If there is anautomorphismofM{\displaystyle {\mathcal {M}}}that is constant onAand sendsa1,…,an{\displaystyle a_{1},\dots ,a_{n}}tob1,…,bn{\displaystyle b_{1},\dots ,b_{n}}respectively, thena1,…,an{\displaystyle a_{1},\dots ,a_{n}}andb1,…,bn{\displaystyle b_{1},\dots ,b_{n}}realise the same complete type overA.
The real number lineR{\displaystyle \mathbb {R} }, viewed as a structure with only the order relation {<}, will serve as a running example in this section.
Every elementa∈R{\displaystyle a\in \mathbb {R} }satisfies the same 1-type over the empty set. This is clear since any two real numbersaandbare connected by the order automorphism that shifts all numbers byb-a. The complete 2-type over the empty set realised by a pair of numbersa1,a2{\displaystyle a_{1},a_{2}}depends on their order: eithera1<a2{\displaystyle a_{1}<a_{2}},a1=a2{\displaystyle a_{1}=a_{2}}ora2<a1{\displaystyle a_{2}<a_{1}}.
Over the subsetZ⊆R{\displaystyle \mathbb {Z} \subseteq \mathbb {R} }of integers, the 1-type of a non-integer real numberadepends on its value rounded down to the nearest integer.
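This can be illustrated with a small sketch (not from the article): the atomic formulas n < x with integer parameters n that a non-integer real satisfies are determined by its floor, so two non-integers with the same floor satisfy the same such formulas over Z.

```python
# Sketch: over the parameter set Z in (R, <), the formulas n < x
# satisfied by a non-integer a are determined by floor(a), so two
# non-integers with the same floor look alike over Z.

import math

def atomic_type_over_Z(a, bound=100):
    """Which formulas n < x (n in a finite window of Z) does a satisfy?"""
    return frozenset(n for n in range(-bound, bound + 1) if n < a)

assert atomic_type_over_Z(0.2) == atomic_type_over_Z(0.9)   # same floor: 0
assert atomic_type_over_Z(0.2) != atomic_type_over_Z(1.5)   # floors differ
assert math.floor(0.2) == math.floor(0.9) == 0
```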
More generally, wheneverM{\displaystyle {\mathcal {M}}}is a structure andAa subset ofM{\displaystyle {\mathcal {M}}}, a (partial)n-type over Ais a set of formulaspwith at mostnfree variables that are realised in an elementary extensionN{\displaystyle {\mathcal {N}}}ofM{\displaystyle {\mathcal {M}}}.
Ifpcontains every such formula or its negation, thenpiscomplete. The set of completen-types overAis often written asSnM(A){\displaystyle S_{n}^{\mathcal {M}}(A)}. IfAis the empty set, then the type space only depends on the theoryT{\displaystyle T}ofM{\displaystyle {\mathcal {M}}}. The notationSn(T){\displaystyle S_{n}(T)}is commonly used for the set of types over the empty set consistent withT{\displaystyle T}.
If there is a single satisfiable formulaφ{\displaystyle \varphi }such that the theory ofM{\displaystyle {\mathcal {M}}}impliesφ→ψ{\displaystyle \varphi \rightarrow \psi }for every formulaψ{\displaystyle \psi }inp, thenpis calledisolated.
Since the real numbersR{\displaystyle \mathbb {R} }areArchimedean, there is no real number larger than every integer. However, a compactness argument shows that there is an elementary extension of the real number line in which there is an element larger than any integer.
Therefore, the set of formulas{n<x|n∈Z}{\displaystyle \{n<x|n\in \mathbb {Z} \}}is a 1-type overZ⊆R{\displaystyle \mathbb {Z} \subseteq \mathbb {R} }that is not realised in the real number lineR{\displaystyle \mathbb {R} }.
A subset ofMn{\displaystyle {\mathcal {M}}^{n}}that can be expressed as exactly those elements ofMn{\displaystyle {\mathcal {M}}^{n}}realising a certain type overAis calledtype-definableoverA.
For an algebraic example, supposeM{\displaystyle M}is analgebraically closed field. The theory has quantifier elimination. This allows us to show that a type is determined exactly by the polynomial equations it contains. Thus the set of completen{\displaystyle n}-types over a subfieldA{\displaystyle A}corresponds to the set ofprime idealsof thepolynomial ringA[x1,…,xn]{\displaystyle A[x_{1},\ldots ,x_{n}]}, and the type-definable sets are exactly the affine varieties.[18]
While not every type is realised in every structure, every structure realises its isolated types.
If the only types over the empty set that are realised in a structure are the isolated types, then the structure is calledatomic.
On the other hand, no structure realises every type over every parameter set; if one takes all ofM{\displaystyle {\mathcal {M}}}as the parameter set, then every 1-type overM{\displaystyle {\mathcal {M}}}realised inM{\displaystyle {\mathcal {M}}}is isolated by a formula of the forma = xfor ana∈M{\displaystyle a\in {\mathcal {M}}}. However, any proper elementary extension ofM{\displaystyle {\mathcal {M}}}contains an element that isnotinM{\displaystyle {\mathcal {M}}}. Therefore, a weaker notion has been introduced that captures the idea of a structure realising all types it could be expected to realise.
A structure is calledsaturatedif it realises every type over a parameter setA⊂M{\displaystyle A\subset {\mathcal {M}}}that is of smaller cardinality thanM{\displaystyle {\mathcal {M}}}itself.
While an automorphism that is constant onAwill always preserve types overA, it is generally not true that any two sequencesa1,…,an{\displaystyle a_{1},\dots ,a_{n}}andb1,…,bn{\displaystyle b_{1},\dots ,b_{n}}that satisfy the same type overAcan be mapped to each other by such an automorphism. A structureM{\displaystyle {\mathcal {M}}}in which this converse does hold for allAof smaller cardinality thanM{\displaystyle {\mathcal {M}}}is calledhomogeneous.
The real number line is atomic in the language that contains only the order<{\displaystyle <}, since alln-types over the empty set realised bya1,…,an{\displaystyle a_{1},\dots ,a_{n}}inR{\displaystyle \mathbb {R} }are isolated by the order relations between thea1,…,an{\displaystyle a_{1},\dots ,a_{n}}. It is not saturated, however, since it does not realise any 1-type over the countable setZ{\displaystyle \mathbb {Z} }that impliesxto be larger than any integer.
The rational number lineQ{\displaystyle \mathbb {Q} }is saturated, in contrast, sinceQ{\displaystyle \mathbb {Q} }is itself countable and therefore only has to realise types over finite subsets to be saturated.[19]
The set of definable subsets ofMn{\displaystyle {\mathcal {M}}^{n}}over some parametersA{\displaystyle A}is aBoolean algebra. ByStone's representation theorem for Boolean algebrasthere is a natural dualtopological space, which consists exactly of the completen{\displaystyle n}-types overA{\displaystyle A}. The topology isgeneratedby sets of the form{p|φ∈p}{\displaystyle \{p|\varphi \in p\}}for single formulasφ{\displaystyle \varphi }. This is called theStone space of n-types over A.[20]This topology explains some of the terminology used in model theory: The compactness theorem says that the Stone space is a compact topological space, and a typepis isolated if and only ifpis an isolated point in the Stone topology.
While types in algebraically closed fields correspond to the spectrum of the polynomial ring, the topology on the type space is theconstructible topology: a set of types is basicopeniff it is of the form{p:f(x)=0∈p}{\displaystyle \{p:f(x)=0\in p\}}or of the form{p:f(x)≠0∈p}{\displaystyle \{p:f(x)\neq 0\in p\}}. This is finer than theZariski topology.[21]
Constructing models that realise certain types and do not realise others is an important task in model theory.
Not realising a type is referred to asomittingit, and is generally possible by the(Countable) Omitting types theorem: ifTis a theory in a countable signature andpis a non-isolated type over the empty set, then there is a countable model ofTthat omitsp.
This implies that if a theory in a countable signature has only countably many types over the empty set, then this theory has an atomic model.
On the other hand, there is always an elementary extension in which any set of types over a fixed parameter set is realised: for any modelM{\displaystyle {\mathcal {M}}}, any parameter setA⊆M{\displaystyle A\subseteq {\mathcal {M}}}and any set of types overA, there is an elementary extensionN{\displaystyle {\mathcal {N}}}ofM{\displaystyle {\mathcal {M}}}in which every type in the set is realised.
However, since the parameter set is fixed and there is no mention here of the cardinality ofN{\displaystyle {\mathcal {N}}}, this does not imply that every theory has a saturated model.
In fact, whether every theory has a saturated model is independent of the axioms ofZermelo–Fraenkel set theory, and is true if thegeneralised continuum hypothesisholds.[24]
Ultraproductsare used as a general technique for constructing models that realise certain types.
Anultraproductis obtained from thedirect productof a set of structures over an index setIby identifying those tuples that agree on almost all entries, wherealmost allis made precise by anultrafilterUonI. An ultraproduct of copies of the same structure is known as anultrapower.
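The quotient step of the construction can be sketched in code (an illustration, not from the article). On a finite index set every ultrafilter is principal, so the ultraproduct collapses onto the factor at the principal index; the model-theoretically interesting, non-principal case requires an infinite index set.

```python
# Sketch: quotienting a direct product by "agreement on a set in the
# ultrafilter U".  On a finite index set every ultrafilter is
# principal, so the ultraproduct is isomorphic to the factor at the
# principal index.

from itertools import product

I = [0, 1, 2]
factors = [range(2), range(3), range(4)]   # three finite structures
# U = {S ⊆ I : 1 ∈ S}, the principal ultrafilter at index 1.

def equivalent(s, t):
    """s ~ t iff {i : s[i] == t[i]} belongs to U, i.e. contains index 1."""
    return s[1] == t[1]

assert equivalent((0, 2, 0), (1, 2, 3))

classes = set()
for s in product(*factors):
    classes.add(s[1])          # each class is determined by coordinate 1

# The ultraproduct collapses onto the factor at the principal index.
assert len(classes) == 3
```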
The key to using ultraproducts in model theory isŁoś's theorem: a first-order formula is true in the ultraproduct if and only if the set of indices at which it is true in the corresponding factor structure lies in the ultrafilterU.
In particular, any ultraproduct of models of a theory is itself a model of that theory, and thus if two models have isomorphic ultrapowers, they are elementarily equivalent.
TheKeisler-Shelah theoremprovides a converse: two structures are elementarily equivalent if and only if they have isomorphic ultrapowers.
Therefore, ultraproducts provide a way to talk about elementary equivalence that avoids mentioning first-order theories at all. Basic theorems of model theory such as the compactness theorem have alternative proofs using ultraproducts,[27]and they can be used to construct saturated elementary extensions if they exist.[24]
A theory was originally calledcategoricalif it determines a structure up to isomorphism. It turns out that this definition is not useful, due to serious restrictions in the expressivity of first-order logic. The Löwenheim–Skolem theorem implies that if a theoryThas an infinite model for some infinitecardinal number, then it has a model of sizeκfor any sufficiently largecardinal numberκ. Since two models of different sizes cannot possibly be isomorphic, only finite structures can be described by a categorical theory.
However, the weaker notion ofκ-categoricity for a cardinalκhas become a key concept in model theory. A theoryTis calledκ-categoricalif any two models ofTthat are of cardinalityκare isomorphic. It turns out that the question ofκ-categoricity depends critically on whetherκis bigger than the cardinality of the language (i.e.ℵ0+|σ|{\displaystyle \aleph _{0}+|\sigma |}, where|σ|is the cardinality of the signature). For finite or countable signatures this means that there is a fundamental difference betweenω-categoricity andκ-categoricity for uncountableκ.
ω-categorical theoriescan be characterised by properties of their type space: a complete theoryTin a finite or countable signature isω-categorical if and only if for every natural numbern, the setSn(T){\displaystyle S_{n}(T)}of completen-types over the empty set is finite.
The theory of(Q,<){\displaystyle (\mathbb {Q} ,<)}, which is also the theory of(R,<){\displaystyle (\mathbb {R} ,<)}, isω-categorical, as everyn-typep(x1,…,xn){\displaystyle p(x_{1},\dots ,x_{n})}over the empty set is isolated by the pairwise order relation between thexi{\displaystyle x_{i}}.
This means that every countabledense linear orderis order-isomorphic to the rational number line. On the other hand, the theories of ℚ, ℝ and ℂ as fields are notω{\displaystyle \omega }-categorical. This follows from the fact that in all those fields, any of the infinitely many natural numbers can be defined by a formula of the formx=1+⋯+1{\displaystyle x=1+\dots +1}.
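As a small illustrative sketch (not from the article), the order pattern of a tuple, its pairwise comparisons, captures its complete type over the empty set in a dense linear order, which is the combinatorial heart of the ω-categoricity of the theory of (Q, <).

```python
# Sketch: in (Q, <) the complete n-type of a tuple over the empty set
# is captured by its order pattern -- the pairwise <, = comparisons --
# which is why the theory is omega-categorical.

def order_pattern(tup):
    """The quantifier-free data of a tuple in a pure order."""
    return tuple((a < b, a == b) for a in tup for b in tup)

# Tuples with the same pattern realise the same type over the empty set:
assert order_pattern((1, 7, 7)) == order_pattern((-5, 0, 0))
assert order_pattern((1, 2)) != order_pattern((2, 1))
```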
ℵ0{\displaystyle \aleph _{0}}-categorical theories and their countable models also have strong ties witholigomorphic groups:
The equivalent characterisations of this subsection, due independently toEngeler,Ryll-NardzewskiandSvenonius, are sometimes referred to as the Ryll-Nardzewski theorem.
In combinatorial signatures, a common source ofω-categorical theories areFraïssé limits, which are obtained as the limit of amalgamating all possible configurations of a class of finite relational structures.
Michael Morleyshowed in 1963 that there is only one notion ofuncountable categoricityfor theories in countable languages: if such a theory is categorical in some uncountable cardinality, it is categorical in all uncountable cardinalities.[28]
Morley's proof revealed deep connections between uncountable categoricity and the internal structure of the models, which became the starting point of classification theory and stability theory.
Uncountably categorical theories are from many points of view the most well-behaved theories.
In particular, complete strongly minimal theories are uncountably categorical. This shows that the theory of algebraically closed fields of a given characteristic is uncountably categorical, with the transcendence degree of the field determining its isomorphism type.
A theory that is bothω-categorical and uncountably categorical is calledtotally categorical.
A key factor in the structure of the class of models of a first-order theory is its place in thestability hierarchy.
For an infinite cardinalλ{\displaystyle \lambda }, a theory is calledλ{\displaystyle \lambda }-stableif for every model and every parameter set of cardinality at mostλ{\displaystyle \lambda }, there are at mostλ{\displaystyle \lambda }complete types over that set. A theory is calledstableif it isλ{\displaystyle \lambda }-stable for some infinite cardinalλ{\displaystyle \lambda }. Traditionally, theories that areℵ0{\displaystyle \aleph _{0}}-stable are calledω{\displaystyle \omega }-stable.[29]
A fundamental result in stability theory is thestability spectrum theorem,[30]which implies that every complete theoryTin a countable signature falls in one of the following classes: eitherTisλ{\displaystyle \lambda }-stable for no infinite cardinalλ{\displaystyle \lambda }; orTisλ{\displaystyle \lambda }-stable exactly for the infinite cardinalsλ{\displaystyle \lambda }satisfyingλ=λℵ0{\displaystyle \lambda =\lambda ^{\aleph _{0}}}; orTisλ{\displaystyle \lambda }-stable for every infinite cardinalλ≥2ℵ0{\displaystyle \lambda \geq 2^{\aleph _{0}}}.
A theory of the first type is calledunstable, a theory of the second type is calledstrictly stableand a theory of the third type is calledsuperstable.
Furthermore, if a theory isω{\displaystyle \omega }-stable, it is stable in every infinite cardinal,[31]soω{\displaystyle \omega }-stability is stronger than superstability.
Many constructions in model theory are easier when restricted to stable theories; for instance, every model of a stable theory has a saturated elementary extension, regardless of whether the generalised continuum hypothesis is true.[32]
Shelah's original motivation for studying stable theories was to decide how many models a countable theory has of any uncountable cardinality.[33]If a theory is uncountably categorical, then it isω{\displaystyle \omega }-stable. More generally, theMain gap theoremimplies that if there is an uncountable cardinalλ{\displaystyle \lambda }such that a theoryThas fewer than2λ{\displaystyle 2^{\lambda }}models of cardinalityλ{\displaystyle \lambda }, thenTis superstable.
The stability hierarchy is also crucial for analysing the geometry of definable sets within a model of a theory.
Inω{\displaystyle \omega }-stable theories,Morley rankis an important dimension notion for definable setsSwithin a model. It is defined bytransfinite induction: the Morley rank ofSis at least 0 ifSis nonempty; it is at leastα+ 1 ifScontains infinitely many pairwise disjoint definable subsets of Morley rank at leastα; and for a limit ordinalδ, it is at leastδif it is at leastαfor everyα<δ.
A theoryTin which every definable set has well-defined Morley rank is calledtotally transcendental; ifTis countable, thenTis totally transcendental if and only ifTisω{\displaystyle \omega }-stable.
Morley rank can be extended to types by setting the Morley rank of a type to be the minimum of the Morley ranks of the formulas in the type. Thus, one can also speak of the Morley rank of an elementaover a parameter setA, defined as the Morley rank of the type ofaoverA.
There are also analogues of Morley rank which are well-defined if and only if a theory is superstable (U-rank) or merely stable (Shelah's∞{\displaystyle \infty }-rank).
Those dimension notions can be used to define notions of independence and of generic extensions.
More recently, stability has been decomposed into simplicity and "not the independence property" (NIP).Simple theoriesare those theories in which a well-behaved notion of independence can be defined, whileNIP theoriesgeneralise o-minimal structures.
They are related to stability since a theory is stable if and only if it is NIP and simple,[34]and various aspects of stability theory have been generalised to theories in one of these classes.
Model-theoretic results have been generalised beyondelementary classes, that is, classes axiomatisable by a first-order theory.
Model theory inhigher-order logicsorinfinitary logicsis hampered by the fact thatcompletenessandcompactnessdo not in general hold for these logics. This is made concrete byLindström's theorem, stating roughly that first-order logic is essentially the strongest logic in which both the Löwenheim-Skolem theorems and compactness hold. However, model theoretic techniques have been developed extensively for these logics too.[35]It turns out, however, that much of the model theory of more expressive logical languages is independent ofZermelo–Fraenkel set theory.[36]
More recently, alongside the shift in focus to complete stable and categorical theories, there has been work on classes of models defined semantically rather than axiomatised by a logical theory.
One example ishomogeneous model theory, which studies the class of substructures of arbitrarily large homogeneous models. Fundamental results of stability theory and geometric stability theory generalise to this setting.[37]As a generalisation of strongly minimal theories,quasiminimally excellentclasses are those in which every definable set is either countable or co-countable. They are key to the model theory of thecomplex exponential function.[38]The most general semantic framework in which stability is studied areabstract elementary classes, which are defined by astrong substructurerelation generalising that of an elementary substructure. Even though its definition is purely semantic, every abstract elementary class can be presented as the models of a first-order theory which omit certain types. Generalising stability-theoretic notions to abstract elementary classes is an ongoing research program.[39]
Among the early successes of model theory are Tarski's proofs of quantifier elimination for various algebraically interesting classes, such as thereal closed fields,Boolean algebrasandalgebraically closed fieldsof a givencharacteristic. Quantifier elimination allowed Tarski to show that the first-order theories of real-closed and algebraically closed fields as well as the first-order theory of Boolean algebras are decidable, classify the Boolean algebras up to elementary equivalence and show that the theories of real-closed fields and algebraically closed fields of a given characteristic are complete. Furthermore, quantifier elimination provided a precise description of definable relations on algebraically closed fields asalgebraic varietiesand of the definable relations on real-closed fields assemialgebraic sets.[40][41]
In the 1960s, the introduction of theultraproductconstruction led to new applications in algebra. This includesAx'swork onpseudofinite fields, proving that the theory of finite fields is decidable,[42]and Ax andKochen's proof of a special case of Artin's conjecture on diophantine equations, theAx–Kochen theorem.[43]The ultraproduct construction also led toAbraham Robinson's development ofnonstandard analysis, which aims to provide a rigorous calculus ofinfinitesimals.[44]
More recently, the connection between stability and the geometry of definable sets led to several applications from algebraic and diophantine geometry, includingEhud Hrushovski's 1996 proof of the geometricMordell–Lang conjecturein all characteristics.[45]In 2001, similar methods were used to prove a generalisation of the Manin-Mumford conjecture.
In 2011,Jonathan Pilaapplied techniques aroundo-minimalityto prove theAndré–Oort conjecturefor products of modular curves.[46]
In a separate strand of inquiries that also grew around stable theories, Laskowski showed in 1992 thatNIP theoriesdescribe exactly those definable classes that arePAC-learnablein machine learning theory. This has led to several interactions between these separate areas. In 2018, the correspondence was extended as Hunter and Chase showed that stable theories correspond toonline learnable classes.[47]
Model theory as a subject has existed since approximately the middle of the 20th century, and the name was coined byAlfred Tarski, a member of theLwów–Warsaw school, in 1954.[48]However, some earlier research, especially inmathematical logic, is often regarded as being of a model-theoretical nature in retrospect.[49]The first significant result in what is now model theory was a special case of the downward Löwenheim–Skolem theorem, published byLeopold Löwenheimin 1915. Thecompactness theoremwas implicit in work byThoralf Skolem,[50]but it was first published in 1930, as a lemma inKurt Gödel's proof of hiscompleteness theorem. The Löwenheim–Skolem theorem and the compactness theorem received their respective general forms in 1936 and 1941 fromAnatoly Maltsev.
The development of model theory as an independent discipline was brought on by Alfred Tarski during theinterbellum. Tarski's work includedlogical consequence,deductive systems, the algebra of logic, the theory of definability, and thesemantic definition of truth, among other topics. His semantic methods culminated in the model theory he and a number of hisBerkeleystudents developed in the 1950s and '60s.
In the further history of the discipline, different strands began to emerge, and the focus of the subject shifted. In the 1960s, techniques around ultraproducts became a popular tool in model theory.[51]At the same time, researchers such asJames Axwere investigating the first-order model theory of various algebraic classes, and others such asH. Jerome Keislerwere extending the concepts and results of first-order model theory to other logical systems. Then, inspired byMorley's problem, Shelah developedstability theory. His work around stability changed the complexion of model theory, giving rise to a whole new class of concepts. This is known as the paradigm shift.[52]Over the next decades, it became clear that the resulting stability hierarchy is closely connected to the geometry of sets that are definable in those models; this gave rise to the subdiscipline now known as geometric stability theory. An example of an influential proof from geometric model theory isHrushovski's proof of theMordell–Lang conjecturefor function fields.[53]
Finite model theory, which concentrates on finite structures, diverges significantly from the study of infinite structures in both the problems studied and the techniques used.[54]In particular, many central results of classical model theory fail when restricted to finite structures. This includes thecompactness theorem,Gödel's completeness theorem, and the method ofultraproductsforfirst-order logic. At the interface of finite and infinite model theory are algorithmic orcomputable model theoryand the study of0-1 laws, where the infinite models of a generic theory of a class of structures provide information on the distribution of finite models.[55]Prominent application areas of FMT aredescriptive complexity theory,database theoryandformal language theory.[56]
Anyset theory(which is expressed in acountablelanguage), if it is consistent, has a countable model; this is known asSkolem's paradox, since there are sentences in set theory which postulate the existence of uncountable sets and yet these sentences are true in our countable model. Particularly the proof of the independence of thecontinuum hypothesisrequires considering sets in models which appear to be uncountable when viewed fromwithinthe model, but are countable to someoneoutsidethe model.[57]
The model-theoretic viewpoint has been useful inset theory; for example inKurt Gödel's work on the constructible universe, which, along with the method offorcingdeveloped byPaul Cohencan be shown to prove the (again philosophically interesting)independenceof theaxiom of choiceand the continuum hypothesis from the other axioms of set theory.[58]
In the other direction, model theory is itself formalised within Zermelo–Fraenkel set theory. For instance, the development of the fundamentals of model theory (such as the compactness theorem) relies on the axiom of choice, and is in fact equivalent over Zermelo–Fraenkel set theory without choice to the Boolean prime ideal theorem.[59]Other results in model theory depend on set-theoretic axioms beyond the standard ZFC framework. For example, if the Continuum Hypothesis holds then every countable model has an ultrapower which is saturated (in its own cardinality). Similarly, if the Generalized Continuum Hypothesis holds then every model has a saturated elementary extension. Neither of these results is provable in ZFC alone. Finally, some questions arising from model theory (such as compactness for infinitary logics) have been shown to be equivalent to large cardinal axioms.[60] | https://en.wikipedia.org/wiki/Model_theory
Inuniversal algebra, avariety of algebrasorequational classis theclassof allalgebraic structuresof a givensignaturesatisfying a given set ofidentities. For example, thegroupsform a variety of algebras, as do theabelian groups, therings, themonoidsetc. According toBirkhoff's theorem, a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking ofhomomorphicimages,subalgebras, and(direct) products. In the context ofcategory theory, a variety of algebras, together with its homomorphisms, forms acategory; these are usually calledfinitary algebraic categories.
Acovarietyis the class of allcoalgebraic structuresof a given signature.
A variety of algebras should not be confused with analgebraic variety, which means a set of solutions to asystem of polynomial equations. They are formally quite distinct and their theories have little in common.
The term "variety of algebras" refers to algebras in the general sense ofuniversal algebra; there is also a more specific sense of algebra, namely asalgebra over a field, i.e. avector spaceequipped with abilinearmultiplication.
Asignature(in this context) is a set, whose elements are calledoperations, each of which is assigned anatural number(0, 1, 2, ...) called itsarity. Given a signatureσand a setV, whose elements are calledvariables, awordis a finiterooted treein which each node is labelled by either a variable or an operation, such that every node labelled by a variable has no branches away from the root and every node labelled by an operationohas as many branches away from the root as the arity ofo. Anequational lawis a pair of such words; the axiom consisting of the wordsvandwis written asv=w.
Atheoryconsists of a signature, a set of variables, and a set of equational laws. Any theory gives a variety of algebras as follows. Given a theoryT, analgebraofTconsists of a setAtogether with, for each operationoofTwith arityn, a functionoA:An→Asuch that for each axiomv=wand each assignment of elements ofAto the variables in that axiom, the equation holds that is given by applying the operations to the elements ofAas indicated by the trees definingvandw. The class of algebras of a given theoryTis called avariety of algebras.
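Whether a finite algebra satisfies an equational law can be decided exactly as the definition describes: evaluate both words under every assignment of elements to variables. The following sketch (illustrative only; the tuple encoding of words is an assumption of this example, not the article's) checks associativity of addition modulo 3.

```python
# Sketch: deciding whether a finite algebra satisfies an equational law
# by evaluating both words under every assignment of elements to the
# variables.  Words are encoded as nested tuples: a variable is a
# string, an operation node is (op_name, child, child, ...).

from itertools import product

def evaluate(word, ops, env):
    if isinstance(word, str):
        return env[word]
    name, *args = word
    return ops[name](*(evaluate(a, ops, env) for a in args))

def satisfies(carrier, ops, lhs, rhs, variables):
    return all(evaluate(lhs, ops, dict(zip(variables, vals))) ==
               evaluate(rhs, ops, dict(zip(variables, vals)))
               for vals in product(carrier, repeat=len(variables)))

Z3 = [0, 1, 2]
ops = {'+': lambda a, b: (a + b) % 3}

# associativity: (x + y) + z = x + (y + z)
lhs = ('+', ('+', 'x', 'y'), 'z')
rhs = ('+', 'x', ('+', 'y', 'z'))
assert satisfies(Z3, ops, lhs, rhs, ['x', 'y', 'z'])
```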
Given two algebras of a theoryT, sayAandB, ahomomorphismis a functionf:A→Bsuch that

f(oA(a1, …, an)) = oB(f(a1), …, f(an))

for every operationoof arityn. Any theory gives acategorywhere the objects are algebras of that theory and the morphisms are homomorphisms.
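For finite algebras the homomorphism condition can be checked exhaustively. The sketch below (an illustration with invented helper names, not from the article) verifies that reduction modulo 2 is a homomorphism from Z/4 to Z/2 with respect to addition.

```python
# Sketch: checking f(o_A(a_1, ..., a_n)) = o_B(f(a_1), ..., f(a_n))
# for every operation o and every tuple of arguments, on finite algebras.

from itertools import product

def is_homomorphism(f, carrier_A, ops_A, ops_B, arities):
    return all(f[ops_A[name](*args)] == ops_B[name](*(f[a] for a in args))
               for name, n in arities.items()
               for args in product(carrier_A, repeat=n))

A = [0, 1, 2, 3]                                   # Z/4 under +
ops_A = {'+': lambda a, b: (a + b) % 4}
ops_B = {'+': lambda a, b: (a + b) % 2}            # Z/2 under +
f = {a: a % 2 for a in A}                          # reduction mod 2

assert is_homomorphism(f, A, ops_A, ops_B, {'+': 2})
```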
The class of allsemigroupsforms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law:

x ⋅ (y ⋅ z) = (x ⋅ y) ⋅ z.
The class ofgroupsforms a variety of algebras of signature (2,0,1), the three operations being respectivelymultiplication(binary),identity(nullary, a constant) andinversion(unary). The familiar axioms of associativity, identity and inverse form one suitable set of identities:

x ⋅ (y ⋅ z) = (x ⋅ y) ⋅ z,  1 ⋅ x = x,  x⁻¹ ⋅ x = 1.
The class ofringsalso forms a variety of algebras. The signature here is (2,2,0,0,1) (two binary operations, two constants, and one unary operation).
If we fix a specific ringR, we can consider the class ofleftR-modules. To express the scalar multiplication with elements fromR, we need one unary operation for each element ofR. If the ring is infinite, we will thus have infinitely many operations, which is allowed by the definition of an algebraic structure in universal algebra. We will then also need infinitely many identities to express the module axioms, which is allowed by the definition of a variety of algebras. So the leftR-modules do form a variety of algebras.
Thefieldsdonotform a variety of algebras; the requirement that all non-zero elements be invertible cannot be expressed as a universally satisfied identity (see below).
Thecancellative semigroupsalso do not form a variety of algebras, since the cancellation property is not an equation but an implication, and this implication is not equivalent to any set of equations. However, they do form aquasivarietyas the implication defining the cancellation property is an example of aquasi-identity.
Given a class of algebraic structures of the same signature, we can define the notions of homomorphism,subalgebra, andproduct.Garrett Birkhoffproved that a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras and arbitrary products.[1]This is a result of fundamental importance to universal algebra and known asBirkhoff's variety theoremor as theHSP theorem.H,S, andPstand, respectively, for the operations of homomorphism, subalgebra, and product.
One direction of the equivalence mentioned above, namely that a class of algebras satisfying some set of identities must be closed under the HSP operations, follows immediately from the definitions. Proving theconverse—classes of algebras closed under the HSP operations must be equational—is more difficult.
Using the easy direction of Birkhoff's theorem, we can for example verify the claim made above, that the field axioms are not expressible by any possible set of identities: the product of fields is not a field, so fields do not form a variety.
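The concrete failure can be exhibited in a few lines. This sketch (illustrative only) shows that the direct product of two copies of the field Z/2 has zero divisors, so it cannot be a field.

```python
# Sketch: the direct product of two copies of the field Z/2 has
# nonzero elements whose product is zero, so it is not a field.

def mult(u, v):
    """Componentwise multiplication in Z/2 x Z/2."""
    return ((u[0] * v[0]) % 2, (u[1] * v[1]) % 2)

a, b = (1, 0), (0, 1)
zero = (0, 0)
assert a != zero and b != zero
assert mult(a, b) == zero        # nonzero elements multiplying to zero
```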
Asubvarietyof a variety of algebrasVis a subclass ofVthat has the same signature asVand is itself a variety, i.e., is defined by a set of identities.
Notice that although every group becomes a semigroup when the identity as a constant is omitted (and/or the inverse operation is omitted), the class of groups doesnotform a subvariety of the variety of semigroups because the signatures are different.
Similarly, the class of semigroups that are groups is not a subvariety of the variety of semigroups, since it is not closed under subalgebras: it contains⟨Z,+⟩{\displaystyle \langle \mathbb {Z} ,+\rangle }but does not contain its subalgebra (more precisely, subsemigroup)⟨N,+⟩{\displaystyle \langle \mathbb {N} ,+\rangle }, which is not a group.
However, the class ofabelian groupsis a subvariety of the variety of groups because it consists of those groups satisfyingxy=yx, with no change of signature. Thefinitely generated abelian groupsdo not form a subvariety, since by Birkhoff's theorem they do not form a variety, as an arbitrary product of finitely generated abelian groups is not finitely generated.
Viewing a variety V and its homomorphisms as a category, a subvariety U of V is a full subcategory of V, meaning that for any objects a, b in U, the homomorphisms from a to b in U are exactly those from a to b in V.
Suppose V is a non-trivial variety of algebras, i.e., V contains algebras with more than one element. One can show that for every set S, the variety V contains a free algebra F_S on S. This means that there is an injective set map i : S → F_S that satisfies the following universal property: given any algebra A in V and any map k : S → A, there exists a unique V-homomorphism f : F_S → A such that f ∘ i = k.
This generalizes the notions of free group, free abelian group, free algebra, free module, etc. It has the consequence that every algebra in a variety is a homomorphic image of a free algebra.
Besides varieties, category theorists use two other frameworks that are equivalent in terms of the kinds of algebras they describe: finitary monads and Lawvere theories. We may go from a variety to a finitary monad as follows. A category with some variety of algebras as objects and homomorphisms as morphisms is called a finitary algebraic category. For any finitary algebraic category V, the forgetful functor G : V → Set has a left adjoint F : Set → V, namely the functor that assigns to each set the free algebra on that set. This adjunction is monadic, meaning that the category V is equivalent to the Eilenberg–Moore category Set^T for the monad T = GF. Moreover, the monad T is finitary, meaning it commutes with filtered colimits.
The monadT:Set→Setis thus enough to recover the finitary algebraic category. Indeed, finitary algebraic categories are precisely those categories equivalent to the Eilenberg-Moore categories of finitary monads. Both these, in turn, are equivalent to categories of algebras of Lawvere theories.
Working with monads permits the following generalization. One says a category is an algebraic category if it is monadic over Set. This is a more general notion than "finitary algebraic category" because it admits such categories as CABA (complete atomic Boolean algebras) and CSLat (complete semilattices), whose signatures include infinitary operations. In those two cases the signature is large, meaning that it forms not a set but a proper class, because its operations are of unbounded arity. The algebraic category of sigma-algebras also has infinitary operations, but their arity is countable, whence its signature is small (forms a set).
Every finitary algebraic category is a locally presentable category.
Since varieties are closed under arbitrary direct products, all non-trivial varieties contain infinite algebras. Attempts have been made to develop a finitary analogue of the theory of varieties. This led, e.g., to the notion of variety of finite semigroups. This kind of variety uses only finitary products. However, it uses a more general kind of identities.
A pseudovariety is usually defined to be a class of algebras of a given signature, closed under the taking of homomorphic images, subalgebras, and finitary direct products. Not every author assumes that all algebras of a pseudovariety are finite; if this is the case, one sometimes talks of a variety of finite algebras. For pseudovarieties, there is no general finitary counterpart to Birkhoff's theorem, but in many cases the introduction of a more complex notion of equations allows similar results to be derived.[2] Namely, a class of finite monoids is a variety of finite monoids if and only if it can be defined by a set of profinite identities.[3]
Pseudovarieties are of particular importance in the study of finite semigroups and hence in formal language theory. Eilenberg's theorem, often referred to as the variety theorem, describes a natural correspondence between varieties of regular languages and pseudovarieties of finite semigroups.
Universal logic is an emerging interdisciplinary field involving logic, non-classical logic, categorical logic, set theory, the foundations of logic, and the philosophy and history of logic. The goal of the field is to develop an understanding of the nature of different types of logic. The expression Universal logic was coined by analogy with the expression Universal algebra by Jean-Yves Béziau. The aim was to develop universal logic as a field of logic that studies the features common to all logical systems, aiming to be to logic what universal algebra is to algebra, and guided by the features of "unity, generality, abstraction, and undetermination".[1] A number of approaches to universal logic in this sense have been proposed since the twentieth century, using model-theoretic and categorical approaches.
The roots of universal logic as a general theory of logical systems may go as far back as some work of Alfred Tarski in the early twentieth century and Paul Hertz in 1922, but the modern notion was first presented in the 1990s by Swiss logician Jean-Yves Béziau.[2][3] The term 'universal logic' has also been separately used by logicians such as Richard Sylvan and Ross Brady to refer to a new type of (weak) relevant logic.[4]
In the context defined by Béziau, three main approaches to universal logic have been explored in depth:[5]
While logic has been studied for centuries, Mossakowski et al. commented in 2007 that "it is embarrassing that there is no widely acceptable formal definition of 'a logic'".[9] These approaches to universal logic thus aim to address and formalize the nature of what may be called 'logic' as a form of "sound reasoning".[9]
Since 2005, Béziau has been organizing world congresses and schools on universal logic.
A journal dedicated to the field, Logica Universalis, with Béziau as editor-in-chief, began publication by Birkhäuser Basel (an imprint of Springer) in 2007.[10] Springer also began publishing a book series on the topic, Studies in Universal Logic, with Béziau as series editor.[11]
An anthology titled Universal Logic was published in 2012, shedding new light on the subject.[12]
GF(2) (also denoted F2 or Z/2Z) is the finite field with two elements.[1][a]
GF(2) is the field with the smallest possible number of elements, and is unique if the additive identity and the multiplicative identity are denoted respectively 0 and 1, as usual.
The elements of GF(2) may be identified with the two possible values of a bit and with the Boolean values true and false. It follows that GF(2) is fundamental and ubiquitous in computer science and its logical foundations.
GF(2) is the unique field with two elements, with its additive and multiplicative identities respectively denoted 0 and 1.
Its addition is defined as the usual addition of integers but modulo 2: 0 + 0 = 0, 0 + 1 = 1 + 0 = 1, and 1 + 1 = 0.
If the elements of GF(2) are seen as Boolean values, then the addition is the same as the logical XOR operation.
Since each element equals its opposite, subtraction is the same operation as addition.
The multiplication of GF(2) is again the usual multiplication modulo 2 (0 × 0 = 0 × 1 = 1 × 0 = 0 and 1 × 1 = 1), and on Boolean variables it corresponds to the logical AND operation.
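Both operations can be stated in a few lines of code. A minimal sketch (illustrative Python, not from the article) confirming that addition mod 2 is XOR and multiplication mod 2 is AND:

```python
# GF(2) arithmetic: addition is integer addition mod 2 (logical XOR),
# multiplication is integer multiplication mod 2 (logical AND).

def gf2_add(a, b):
    return (a + b) % 2   # equivalently: a ^ b

def gf2_mul(a, b):
    return (a * b) % 2   # equivalently: a & b

for a in (0, 1):
    for b in (0, 1):
        assert gf2_add(a, b) == (a ^ b)
        assert gf2_mul(a, b) == (a & b)

# Every element is its own additive inverse, so subtraction equals addition:
assert all(gf2_add(a, a) == 0 for a in (0, 1))
print([gf2_add(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```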
GF(2) can be identified with the field of the integers modulo 2, that is, the quotient ring of the ring of integers Z by the ideal 2Z of all even numbers: GF(2) = Z/2Z.
The notation Z2 may be encountered, although it can be confused with the notation for the 2-adic integers.
Because GF(2) is a field, many of the familiar properties of number systems such as therational numbersandreal numbersare retained:
Properties that are not familiar from the real numbers include:
Because of the algebraic properties above, many familiar and powerful tools of mathematics work in GF(2) just as well as in other fields. For example, matrix operations, including matrix inversion, can be applied to matrices with elements in GF(2) (see matrix ring).
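Matrix inversion over GF(2) is a good illustration: Gauss–Jordan elimination works as over any field, but row operations reduce to XOR and no division is ever needed. A minimal sketch (illustrative Python, not from the article):

```python
# Gauss-Jordan inversion of a matrix over GF(2): row operations use XOR,
# and the only nonzero pivot value is 1, so no division is needed.

def gf2_inverse(m):
    """Invert a square 0/1 matrix over GF(2); returns None if singular."""
    n = len(m)
    # Augment with the identity matrix.
    a = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] == 1), None)
        if pivot is None:
            return None  # singular over GF(2)
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(n):
            if r != col and a[r][col] == 1:
                a[r] = [x ^ y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

m = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
inv = gf2_inverse(m)
print(inv)  # [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
```

Note that the inverse of I + N here is I + N + N², since over GF(2) addition and subtraction coincide.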
Any group (V, +) with the property v + v = 0 for every v in V is necessarily abelian and can be turned into a vector space over GF(2) in a natural fashion, by defining 0v = 0 and 1v = v for all v in V. This vector space will have a basis, implying that the number of elements of V must be a power of 2 (or infinite).
In modern computers, data are represented with bit strings of a fixed length, called machine words. These are endowed with the structure of a vector space over GF(2). The addition of this vector space is the bitwise operation called XOR (exclusive or). The bitwise AND is another operation on this vector space, which makes it a Boolean algebra, a structure that underlies all computer science. These spaces can also be augmented with a multiplication operation that makes them into a field GF(2^n), but the multiplication operation cannot be a bitwise operation. When n is itself a power of two, the multiplication operation can be nim-multiplication; alternatively, for any n, one can use multiplication of polynomials over GF(2) modulo an irreducible polynomial (as for instance for the field GF(2^8) in the description of the Advanced Encryption Standard cipher).
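The polynomial construction mentioned above can be sketched directly. The following illustrative Python implements multiplication in GF(2^8) with the AES field polynomial x^8 + x^4 + x^3 + x + 1 (bit pattern 0x11B); the check value {57}·{83} = {C1} is the worked example given in the AES specification:

```python
# Multiplication in GF(2^8): shift-and-XOR polynomial multiplication over
# GF(2), reduced modulo the irreducible AES polynomial x^8+x^4+x^3+x+1
# (0x11B). Polynomial addition over GF(2) is plain XOR.

def gf256_mul(a, b, mod=0x11B):
    result = 0
    while b:
        if b & 1:
            result ^= a        # add (XOR) the current shifted copy of a
        a <<= 1                # multiply a by x
        if a & 0x100:
            a ^= mod           # reduce modulo the field polynomial
        b >>= 1
    return result

print(hex(gf256_mul(0x57, 0x83)))  # 0xc1 (the FIPS-197 worked example)
```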
Vector spaces and polynomial rings over GF(2) are widely used in coding theory, and in particular in error-correcting codes and modern cryptography. For example, many common error-correcting codes (such as BCH codes) are linear codes over GF(2) (codes defined from vector spaces over GF(2)), or polynomial codes (codes defined as quotients of polynomial rings over GF(2)).
Like any field, GF(2) has an algebraic closure. This is a field F which contains GF(2) as a subfield, which is algebraic over GF(2) (i.e., every element of F is a root of a polynomial with coefficients in GF(2)), and which is algebraically closed (any non-constant polynomial with coefficients in F has a root in F). The field F is uniquely determined by these properties, up to a field automorphism (i.e., essentially up to the notation of its elements).
F is countable and contains a single copy of each of the finite fields GF(2^n); the copy of GF(2^n) is contained in the copy of GF(2^m) if and only if n divides m. The field F is the union of all these finite fields.
Conway realized that F can be identified with the ordinal number ω^(ω^ω), where the addition and multiplication operations are defined in a natural manner by transfinite induction (these operations are, however, different from the standard addition and multiplication of ordinal numbers).[2] The addition in this field is simple to perform and is akin to Nim-addition; Lenstra has shown that the multiplication can also be performed efficiently.[3]
Causal research is the investigation of (research into) cause-and-effect relationships.[1][2][3] To determine causality, variation in the variable presumed to influence the difference in another variable(s) must be detected, and then the variations from the other variable(s) must be calculated. Other confounding influences must be controlled for so they don't distort the results, for example by holding them constant in the experimental creation of evidence. This type of research is very complex, and the researcher can never be completely certain that there are no other factors influencing the causal relationship, especially when dealing with people's attitudes and motivations. There are often much deeper psychological considerations that even the respondent may not be aware of.
There are two research methods for exploring the cause-and-effect relationship between variables:
Experiments are typically conducted in laboratories where many or all aspects of the experiment can be tightly controlled to avoid spurious results due to factors other than the hypothesized causative factor(s). Many studies in physics, for example, use this approach. Alternatively, field experiments can be performed, as with medical studies in which subjects may have a great many attributes that cannot be controlled for but in which at least the key hypothesized causative variables can be varied and some of the extraneous attributes can at least be measured. Field experiments also are sometimes used in economics, such as when two different groups of welfare recipients are given two alternative sets of incentives or opportunities to earn income and the resulting effect on their labor supply is investigated.
In areas such as economics, most empirical research is done on pre-existing data, often collected on a regular basis by a government. Multiple regression is a group of related statistical techniques that control for (attempt to avoid spurious influence from) various causative influences other than the ones being studied. If the data show sufficient variation in the hypothesized explanatory variable of interest, its effect if any upon the potentially influenced variable can be measured.
Causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system. The main difference between causal inference and inference of association is that causal inference analyzes the response of an effect variable when a cause of the effect variable is changed.[1][2] The study of why things occur is called etiology, and can be described using the language of scientific causal notation. Causal inference is said to provide the evidence of causality theorized by causal reasoning.
Causal inference is widely studied across all sciences. Several innovations in the development and implementation of methodology designed to determine causality have proliferated in recent decades. Causal inference remains especially difficult where experimentation is difficult or impossible, which is common throughout most sciences.
The approaches to causal inference are broadly applicable across all types of scientific disciplines, and many methods of causal inference that were designed for certain disciplines have found use in other disciplines. This article outlines the basic process behind causal inference and details some of the more conventional tests used across different disciplines; however, this should not be mistaken as a suggestion that these methods apply only to those disciplines, merely that they are the most commonly used in that discipline.
Causal inference is difficult to perform and there is significant debate amongst scientists about the proper way to determine causality. Despite other innovations, there remain concerns of misattribution by scientists of correlative results as causal, of the usage of incorrect methodologies by scientists, and of deliberate manipulation by scientists of analytical results in order to obtain statistically significant estimates. Particular concern is raised in the use of regression models, especially linear regression models.
Inferring the cause of something has been described as:
Causal inference is conducted via the study of systems where the measure of one variable is suspected to affect the measure of another. Causal inference is conducted with regard to the scientific method. The first step of causal inference is to formulate a falsifiable null hypothesis, which is subsequently tested with statistical methods. Frequentist statistical inference is the use of statistical methods to determine the probability that the data occur under the null hypothesis by chance; Bayesian inference is used to determine the effect of an independent variable.[5] Statistical inference is generally used to determine the difference between variations in the original data that are random variation or the effect of a well-specified causal mechanism. Notably, correlation does not imply causation, so the study of causality is as concerned with the study of potential causal mechanisms as it is with variation amongst the data.[citation needed] A frequently sought-after standard of causal inference is an experiment wherein treatment is randomly assigned but all other confounding factors are held constant. Most of the efforts in causal inference are in the attempt to replicate experimental conditions.
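The logic of random assignment can be shown with a small simulation (illustrative Python with made-up parameters, not from the article): because treatment is assigned independently of the confounder, the difference in group means estimates the true treatment effect:

```python
# Minimal simulated randomized experiment: random assignment balances the
# confounder across groups on average, so the difference in group means
# recovers the (hypothetical) true average treatment effect.
import random

random.seed(0)
TRUE_EFFECT = 2.0  # assumed effect size for the simulation
n = 20000

treated, control = [], []
for _ in range(n):
    confounder = random.gauss(0, 1)      # affects every unit's outcome
    noise = random.gauss(0, 1)
    t = random.random() < 0.5            # randomized treatment assignment
    outcome = confounder + noise + (TRUE_EFFECT if t else 0.0)
    (treated if t else control).append(outcome)

ate_hat = sum(treated) / len(treated) - sum(control) / len(control)
print(round(ate_hat, 2))  # close to TRUE_EFFECT
```

Had treatment been assigned as a function of the confounder instead, the same difference in means would be biased; randomization is what licenses the causal reading.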
Epidemiological studies employ differentepidemiological methodsof collecting and measuring evidence of risk factors and effect and different ways of measuring association between the two. Results of a 2020 review of methods for causal inference found that using existing literature for clinical training programs can be challenging. This is because published articles often assume an advanced technical background, they may be written from multiple statistical, epidemiological, computer science, or philosophical perspectives, methodological approaches continue to expand rapidly, and many aspects of causal inference receive limited coverage.[6]
Common frameworks for causal inference include the causal pie model (component-cause), Pearl's structural causal model (causal diagram + do-calculus), structural equation modeling, and the Rubin causal model (potential-outcome), which are often used in areas such as social sciences and epidemiology.[7]
Experimental verification of causal mechanisms is possible using experimental methods. The main motivation behind an experiment is to hold other experimental variables constant while purposefully manipulating the variable of interest. If the experiment produces statistically significant effects as a result of only the treatment variable being manipulated, there is grounds to believe that a causal effect can be assigned to the treatment variable, assuming that other standards for experimental design have been met.
Quasi-experimental verification of causal mechanisms is conducted when traditional experimental methods are unavailable. This may be the result of prohibitive costs of conducting an experiment, or the inherent infeasibility of conducting an experiment, especially experiments that are concerned with large systems such as economies of electoral systems, or for treatments that are considered to present a danger to the well-being of test subjects. Quasi-experiments may also occur where information is withheld for legal reasons.
Epidemiology studies patterns of health and disease in defined populations of living beings in order to infer causes and effects. An association between an exposure to a putative risk factor and a disease may be suggestive of, but is not equivalent to, causality because correlation does not imply causation. Historically, Koch's postulates have been used since the 19th century to decide if a microorganism was the cause of a disease. In the 20th century the Bradford Hill criteria, described in 1965,[8] have been used to assess causality of variables outside microbiology, although even these criteria are not exclusive ways to determine causality.
In molecular epidemiology the phenomena studied are on a molecular biology level, including genetics, where biomarkers are evidence of cause or effects.
A recent trend[when?] is to identify evidence for influence of the exposure on molecular pathology within diseased tissue or cells, in the emerging interdisciplinary field of molecular pathological epidemiology (MPE).[independent source needed] Linking the exposure to molecular pathologic signatures of the disease can help to assess causality.[independent source needed] Considering the inherent nature of heterogeneity of a given disease, the unique disease principle, disease phenotyping and subtyping are trends in biomedical and public health sciences, exemplified as personalized medicine and precision medicine.[independent source needed]
Causal inference has also been used for treatment effect estimation. Assuming a set of observable patient symptoms (X) caused by a set of hidden causes (Z), we can choose whether or not to give a treatment t. The result of giving or not giving the treatment is the effect estimate y. If the treatment is not guaranteed to have a positive effect, then the decision whether the treatment should be applied depends firstly on expert knowledge that encompasses the causal connections. For novel diseases, this expert knowledge may not be available. As a result, we rely solely on past treatment outcomes to make decisions. A modified variational autoencoder can be used to model the causal graph described above.[9] While the above scenario could be modelled without the use of the hidden confounder (Z), we would lose the insight that the symptoms a patient presents, together with other factors, impact both the treatment assignment and the outcome.
Causal inference is an important concept in the field of causal artificial intelligence. Determination of cause and effect from joint observational data for two time-independent variables, say X and Y, has been tackled using asymmetry between evidence for some model in the directions X → Y and Y → X. The primary approaches are based on algorithmic information theory models and noise models.[citation needed]
Noise-model approaches incorporate an independent noise term in the model and compare the evidence for the two directions.
Here are some of the noise models for the hypothesis Y → X with the noise E:
The common assumption in these models are:
On an intuitive level, the idea is that the factorization of the joint distribution P(Cause, Effect) into P(Cause)*P(Effect | Cause) typically yields models of lower total complexity than the factorization into P(Effect)*P(Cause | Effect). Although the notion of "complexity" is intuitively appealing, it is not obvious how it should be precisely defined.[13]A different family of methods attempt to discover causal "footprints" from large amounts of labeled data, and allow the prediction of more flexible causal relations.[14]
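The noise-model asymmetry can be illustrated with a toy simulation (illustrative Python under an assumed linear model with non-Gaussian noise, not from the article): regressing in the true direction leaves a residual independent of the regressor, while regressing in the wrong direction does not, and a simple dependence score detects the difference:

```python
# Toy additive-noise-model direction test: with uniform (non-Gaussian)
# noise, the regression residual is independent of the input only in the
# true causal direction (here X -> Y with Y = 2X + noise).
import random

random.seed(1)
n = 20000
xs = [random.uniform(-1, 1) for _ in range(n)]
ys = [2 * x + random.uniform(-1, 1) for x in xs]  # assumed true model

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

def corr(u, v):
    return cov(u, v) / (cov(u, u) ** 0.5 * cov(v, v) ** 0.5)

def dependence_after_regression(cause, effect):
    """Fit effect ~ slope*cause + intercept, then score how strongly the
    residual still depends on the regressor (correlation of squares)."""
    slope = cov(cause, effect) / cov(cause, cause)
    intercept = mean(effect) - slope * mean(cause)
    resid = [e - slope * c - intercept for c, e in zip(cause, effect)]
    return abs(corr([r * r for r in resid], [c * c for c in cause]))

forward = dependence_after_regression(xs, ys)   # residual ~ independent
backward = dependence_after_regression(ys, xs)  # residual depends on Y
print(forward < backward)  # True: evidence favors X -> Y
```

The score used here (correlation of squared residual with squared regressor) is one simple stand-in for a proper independence test; practical methods use stronger tests such as HSIC.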
The social sciences in general have moved increasingly toward including quantitative frameworks for assessing causality. Much of this has been described as a means of providing greater rigor to social science methodology. Political science was significantly influenced by the publication of Designing Social Inquiry, by Gary King, Robert Keohane, and Sidney Verba, in 1994. King, Keohane, and Verba recommend that researchers apply both quantitative and qualitative methods and adopt the language of statistical inference to be clearer about their subjects of interest and units of analysis.[15][16] Proponents of quantitative methods have also increasingly adopted the potential outcomes framework, developed by Donald Rubin, as a standard for inferring causality.[citation needed]
While much of the emphasis remains on statistical inference in the potential outcomes framework, social science methodologists have developed new tools to conduct causal inference with both qualitative and quantitative methods, sometimes called a "mixed methods" approach.[17][18] Advocates of diverse methodological approaches argue that different methodologies are better suited to different subjects of study. Sociologist Herbert Smith and political scientists James Mahoney and Gary Goertz have cited the observation of Paul W. Holland, a statistician and author of the 1986 article "Statistics and Causal Inference", that statistical inference is most appropriate for assessing the "effects of causes" rather than the "causes of effects".[19][20] Qualitative methodologists have argued that formalized models of causation, including process tracing and fuzzy set theory, provide opportunities to infer causation through the identification of critical factors within case studies or through a process of comparison among several case studies.[16] These methodologies are also valuable for subjects in which a limited number of potential observations or the presence of confounding variables would limit the applicability of statistical inference.[citation needed]
On longer timescales, persistence studies use causal inference to link historical events to later political, economic, and social outcomes.[21]
In the economic sciences and political sciences causal inference is often difficult, owing to the real-world complexity of economic and political realities and the inability to recreate many large-scale phenomena within controlled experiments. Causal inference in the economic and political sciences continues to see improvement in methodology and rigor, due to the increased level of technology available to social scientists, the increase in the number of social scientists and research, and improvements to causal inference methodologies throughout social sciences.[22]
Despite the difficulties inherent in determining causality in economic systems, several widely employed methods exist throughout those fields.
Economists and political scientists can use theory (often studied in theory-driven econometrics) to estimate the magnitude of supposedly causal relationships in cases where they believe a causal relationship exists.[23] Theorists can presuppose a mechanism believed to be causal and describe the effects using data analysis to justify their proposed theory. For example, theorists can use logic to construct a model, such as theorizing that rain causes fluctuations in economic productivity but that the converse is not true.[24] However, using purely theoretical claims that do not offer any predictive insights has been called "pre-scientific" because there is no ability to predict the impact of the supposed causal properties.[5] It is worth reiterating that regression analysis in the social sciences does not inherently imply causality, as many phenomena may correlate in the short run or in particular datasets but demonstrate no correlation in other time periods or other datasets. Thus, the attribution of causality to correlative properties is premature absent a well-defined and reasoned causal mechanism.
The instrumental variables (IV) technique is a method of determining causality that involves the elimination of a correlation between one of a model's explanatory variables and the model's error term. This method presumes that if a model's error term moves similarly with the variation of another variable, then the model's error term is probably an effect of variation in that explanatory variable. The elimination of this correlation through the introduction of a new instrumental variable thus reduces the error present in the model as a whole.[25]
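The mechanics can be sketched with simulated data (illustrative Python with an assumed linear model, not from the article): an unobserved confounder biases the ordinary least-squares slope, while an instrument correlated with the regressor but independent of the confounder recovers the true effect via beta_IV = cov(z, y) / cov(z, x):

```python
# Instrumental-variables sketch on simulated data: u is an unobserved
# confounder of x and y, z is a valid instrument (drives x, independent
# of u). OLS is biased; the IV ratio estimator is consistent.
import random

random.seed(2)
n = 50000
rows = []
for _ in range(n):
    z = random.gauss(0, 1)                 # instrument
    u = random.gauss(0, 1)                 # unobserved confounder
    x = z + u + random.gauss(0, 1)         # endogenous regressor
    y = 2 * x + u + random.gauss(0, 1)     # assumed true causal effect: 2
    rows.append((z, x, y))

def cov(i, j):
    mi = sum(r[i] for r in rows) / n
    mj = sum(r[j] for r in rows) / n
    return sum((r[i] - mi) * (r[j] - mj) for r in rows) / n

beta_ols = cov(1, 2) / cov(1, 1)  # biased upward by the confounder
beta_iv = cov(0, 2) / cov(0, 1)   # consistent IV estimate
print(round(beta_ols, 2), round(beta_iv, 2))  # OLS off target, IV near 2
```

With these variances the OLS slope converges to 7/3 ≈ 2.33 rather than 2, while the IV estimate converges to the true coefficient.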
Model specification is the act of selecting a model to be used in data analysis. Social scientists (and, indeed, all scientists) must determine the correct model to use because different models are good at estimating different relationships.[26]
Model specification can be useful in determining causality that is slow to emerge, where the effects of an action in one period are only felt in a later period. It is worth remembering that correlations only measure whether two variables have similar variance, not whether they affect one another in a particular direction; thus, one cannot determine the direction of a causal relation based on correlations only. Because causal acts are believed to precede causal effects, social scientists can use a model that looks specifically for the effect of one variable on another over a period of time: variables representing phenomena happening earlier are treated as treatment effects, and econometric tests look for later changes in the data that can be attributed to them. A meaningful difference in results following a meaningful difference in treatment effects may indicate causality between the treatment effects and the measured effects (e.g., Granger-causality tests). Such studies are examples of time-series analysis.[27]
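A Granger-style check can be sketched on simulated data (illustrative Python with an assumed lag structure, not from the article): past values of X improve the prediction of Y beyond Y's own past, but not vice versa:

```python
# Granger-style asymmetry on simulated series where (by construction)
# X drives Y with a one-period lag: y_t = 0.5*y_{t-1} + 0.8*x_{t-1} + noise.
import random

random.seed(3)
n = 5000
xs, ys = [0.0], [0.0]
for _ in range(n):
    xs.append(random.gauss(0, 1))
    ys.append(0.5 * ys[-1] + 0.8 * xs[-2] + random.gauss(0, 1))

def mean(v):
    return sum(v) / len(v)

def rss1(y, x):
    """Residual sum of squares of y ~ a + b*x (one lag regressor)."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))

def rss2(y, x1, x2):
    """Residual sum of squares of y ~ a + b1*x1 + b2*x2 (two lags)."""
    c1 = [v - mean(x1) for v in x1]
    c2 = [v - mean(x2) for v in x2]
    cy = [v - mean(y) for v in y]
    s11 = sum(a * a for a in c1); s22 = sum(a * a for a in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    s1y = sum(a * b for a, b in zip(c1, cy))
    s2y = sum(a * b for a, b in zip(c2, cy))
    det = s11 * s22 - s12 * s12
    b1 = (s22 * s1y - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return sum((yv - b1 * a - b2 * b) ** 2 for yv, a, b in zip(cy, c1, c2))

y_now, y_lag = ys[2:], ys[1:-1]
x_now, x_lag = xs[2:], xs[1:-1]

# Relative RSS reduction from adding the other series' lag:
gain_x_to_y = 1 - rss2(y_now, y_lag, x_lag) / rss1(y_now, y_lag)
gain_y_to_x = 1 - rss2(x_now, x_lag, y_lag) / rss1(x_now, x_lag)
print(gain_x_to_y > 0.1 > gain_y_to_x)  # True
```

A full Granger test would formalize this RSS comparison as an F-test; the asymmetry in explanatory gain is the essential idea.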
Other variables, or regressors in regression analysis, are either included or not included across various implementations of the same model to ensure that different sources of variation can be studied more separately from one another. This is a form of sensitivity analysis: it is the study of how sensitive an implementation of a model is to the addition of one or more new variables.[28]
A chief motivating concern in the use of sensitivity analysis is the pursuit of discovering confounding variables. Confounding variables are variables that have a large impact on the results of a statistical test but are not the variable that causal inference is trying to study. Confounding variables may cause a regressor to appear to be significant in one implementation, but not in another.
Another reason for the use of sensitivity analysis is to detect multicollinearity. Multicollinearity is the phenomenon where the correlation between two explanatory variables is very high. A high level of correlation between two such variables can dramatically affect the outcome of a statistical analysis, where small variations in highly correlated data can flip the effect of a variable from a positive direction to a negative direction, or vice versa. This is an inherent property of variance testing. Determining multicollinearity is useful in sensitivity analysis because the elimination of highly correlated variables in different model implementations can prevent the dramatic changes in results that result from the inclusion of such variables.[29]
However, there are limits to sensitivity analysis' ability to prevent the deleterious effects of multicollinearity, especially in the social sciences, where systems are complex. Because it is theoretically impossible to include or even measure all of the confounding factors in a sufficiently complex system, econometric models are susceptible to the common-cause fallacy, where causal effects are incorrectly attributed to the wrong variable because the correct variable was not captured in the original data. This is an example of the failure to account for a lurking variable.[30]
Recently, improved methodology in design-based econometrics has popularized the use of both natural experiments and quasi-experimental research designs to study the causal mechanisms that such experiments are believed to identify.[31]
Despite the advancements in the development of methodologies used to determine causality, significant weaknesses in determining causality remain. These weaknesses can be attributed both to the inherent difficulty of determining causal relations in complex systems but also to cases of scientific malpractice.
Separate from the difficulties of causal inference, the perception that large numbers of scholars in the social sciences engage in non-scientific methodology exists among some large groups of social scientists. Criticisms of economists and social scientists as passing off descriptive studies as causal studies are rife within those fields.[5]
In the sciences, especially in the social sciences, there is concern among scholars that scientific malpractice is widespread. As scientific study is a broad topic, there are theoretically limitless ways to have a causal inference undermined through no fault of a researcher. Nonetheless, there remain concerns among scientists that large numbers of researchers do not perform basic duties or practice sufficiently diverse methods in causal inference.[32][22][33][failed verification][34]
One prominent example of common non-causal methodology is the erroneous assumption of correlative properties as causal properties. There is no inherent causality in phenomena that correlate. Regression models are designed to measure variance within data relative to a theoretical model: there is nothing to suggest that data that present high levels of covariance have any meaningful relationship (absent a proposed causal mechanism with predictive properties or a random assignment of treatment). The use of flawed methodology has been claimed to be widespread, with common examples of such malpractice being the overuse of correlative models, especially the overuse of regression models and particularly linear regression models.[5] The presupposition that two correlated phenomena are inherently related is a logical fallacy known as spurious correlation. Some social scientists claim that widespread use of methodology that attributes causality to spurious correlations has been detrimental to the integrity of the social sciences, although improvements stemming from better methodologies have been noted.[31]
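How easily spurious correlation arises can be demonstrated with a small simulation (illustrative Python, not from the article): pairs of completely independent random walks routinely exhibit large sample correlations, the classic "spurious regression" trap with trending data:

```python
# Spurious correlation demo: independent random walks share no causal or
# structural link, yet their sample correlation is often far from zero.
import random

random.seed(4)

def walk(n):
    """A simple Gaussian random walk of length n."""
    pos, out = 0.0, []
    for _ in range(n):
        pos += random.gauss(0, 1)
        out.append(pos)
    return out

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    suv = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    suu = sum((a - mu) ** 2 for a in u)
    svv = sum((b - mv) ** 2 for b in v)
    return suv / (suu * svv) ** 0.5

# Twenty independent pairs of walks; record the strongest apparent link.
correlations = [abs(corr(walk(1000), walk(1000))) for _ in range(20)]
print(round(max(correlations), 2))  # often large despite independence
```

The sample correlation between independent random walks does not converge to zero as the series grows, which is why naive regressions on trending series so often "find" relationships that are not there.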
A potential effect of scientific studies that erroneously conflate correlation with causality is an increase in the number of scientific findings whose results are not reproducible by third parties. Such non-reproducibility is a logical consequence of findings based on merely temporary correlations being overgeneralized into mechanisms that have no inherent relationship, where new data do not contain the previous, idiosyncratic correlations of the original data. Debates over the effect of malpractice versus the effect of the inherent difficulties of searching for causality are ongoing.[35] Critics of widely practiced methodologies argue that researchers have engaged in statistical manipulation in order to publish articles that supposedly demonstrate evidence of causality but are actually examples of spurious correlation being touted as evidence of causality: such endeavors may be referred to as P-hacking.[36] To prevent this, some have advocated that researchers preregister their research designs prior to conducting their studies so that they do not inadvertently overemphasize a nonreproducible finding that was not the initial subject of inquiry but was found to be statistically significant during data analysis.[37]
Causality: Models, Reasoning, and Inference (2000;[1] updated 2009[2]) is a book by Judea Pearl.[3] It is an exposition and analysis of causality.[4][5] It is considered to have been instrumental in laying the foundations of the modern debate on causal inference in several fields including statistics, computer science and epidemiology.[6] In this book, Pearl espouses the Structural Causal Model (SCM), which uses structural equation modeling.[7] This model is a competing viewpoint to the Rubin causal model. Some of the material from the book was reintroduced in The Book of Why, which targets a more general audience.
Pearl succeeds in bringing together in a general nonparametric framework the counterfactual tradition of causal analysis with the variants of structural equation modeling worth keeping. The graph theory that he uses to accomplish this fusion is often elegant. Thus, Causality is a major statement, which all who claim to know what causality is must read.
The book earned Pearl the 2001 Lakatos Award in Philosophy of Science.[9] | https://en.wikipedia.org/wiki/Causality_(book)
Causation refers to the existence of "cause and effect" relationships between multiple variables.[1] Causation presumes that variables, which act in a predictable manner, can produce change in related variables and that this relationship can be deduced through direct and repeated observation.[2] Theories of causation underpin social research, as it aims to deduce causal relationships between structural phenomena and individuals and to explain these relationships through the application and development of theory.[3] Due to divergence amongst theoretical and methodological approaches, different theories, such as functionalism, maintain varying conceptions of the nature of causality and causal relationships. Similarly, the multiplicity of causes has led to the distinction between necessary and sufficient causes.
- A and B represent some form of phenomenon (either concrete or abstract),
- A is statistically related to B insofar as an observed change in A will produce a proportional change in B,
- If the change to A precedes the change to B and the change is not caused by an intervening variable (spurious relationship) then:
- A is said to have a causal relationship (either sufficient or necessary) to B.[4]
The nature, extent, and scope of this relationship, however, must be defined through further research that accounts for the weaknesses and limitations of preceding works.[3]
Classical conceptions of causation have demonstrably informed the development of social research and different methodological approaches, as the vast majority of research seeks to explain phenomena in terms of cause and effect.[3] Typical criteria for inferring a causal relationship include: (i) a statistical association between the two variables; (ii) the direction of influence (that changes in the causal factor induce change in the dependent variable); and (iii) a requirement that the relationship between variables is non-spurious.[3] The identification of intervening variables and further replications of studies can also strengthen claims of causal inference.[3] Different methodological approaches make tradeoffs between statistical rigor (the ability to confidently attribute change to one variable or cause), qualitative depth, and the finances available for research. Experimental methods, which maximize statistical rigor, are often difficult to conduct, as they are expensive and can be detached from the social processes that researchers seek to study. In contrast, ethnographic methods and surveys, which maximize the qualitative richness of the data, lack the statistical generalizability that experimental studies produce. Causality deduced from social research can therefore be relatively abstract (findings from an ethnography) or exact (statistical research, laboratory studies). As such, care must always be taken when attributing or describing causal relationships from the findings of social research, as this will vary based on methodology and, consequently, the nature of the data.[3]
Causality, within sociology, has been the subject of epistemological debates, particularly concerning the external validity of research findings; one factor driving the tenuous nature of causation within social research is the wide variety of potential "causes" that can be attributed to a particular phenomenon. Max Weber, in The Protestant Ethic and the Spirit of Capitalism, attributed the development of capitalism in Northern Europe to the regional prominence of Protestant religions. Material and geographical variables, however, also played a significant role in the proliferation of Puritan beliefs, and this was a central criticism levied at Weber's study.[4] Talcott Parsons asserted that such an interpretation of Weber's thought was reductive and misdirected from Weber's assertion: that the congruency between the Protestant ethic and modern capitalism was necessary for the unprecedented growth of wealth in Northern Europe, whereas material factors were merely sufficient.[4]
To this end, Weber identified two types of causation:
Several causes, either sufficient or necessary, often intersect and interact with one another to produce a given phenomenon and, as such, theories of single or essential causality are often inadequate for social research. For this reason, statistical models that can account for and control several variables are prevalent in social research.[3]
Normative conceptions of causation, which have served to inform the development of social research standards, are largely associated with Functionalist and Newtonian thought and were introduced to social research through individuals like Comte and Durkheim.[6][7] This broader paradigm shift in social research is often associated with the push for sociology to be recognized amongst the natural sciences.[6] This perspective of causation perceives individuals, structural variables, and the relationships amongst them strictly in terms of their functional and productive outputs. As such, causal relationships must be observed and deduced through scientific observation.
In relation to culture, causality underpins the logic surrounding socio-cultural norms and deviance.[7]Social structures serve the function of establishing, propagating, and enforcing both cultural and legal norms and, as such, play an indispensable role in constituting and maintaining social order; for these standards to be effective, however, they must be applied universally and in a predictable manner. If this holds, norm violations and punishment can be said to have a causal relationship in that the violation of a standard directly produces equivalent sanctions. Through punishment, standards are then visibly reaffirmed throughout the general populace. All humanistic societies, to varying degrees, function on some principle of causality.[7]
The concept of elective affinity was used by Max Weber to describe the relationship between capitalism and the Protestant ethic, and it differs from a purely deterministic account of individual behavior.[8] The Newtonian notion of causality underpins the deterministic camp of the structure-agency debate, whereas interactionist paradigms emphasize the rational choices that more or less free individuals make in light of the broader social forces that guide them.[9] Rather than social forces playing an essentialized role in determining the life course, rational individuals make personal choices based on the knowledge, experiences, and resources they have at their disposal. As such, elective affinity serves to incorporate both structuralist and agent-focused paradigms by incorporating the (admittedly varying) capacity of social actors to make choices in light of their personal experiences and resources. Such a distinction, however, is largely theoretical and is further confounded by Weber's use of the ideal type schema. Furthermore, the level of primacy allotted to agency and structure varies between different social theories and, correspondingly, between different notions of causal relationships. | https://en.wikipedia.org/wiki/Causation_(sociology)
In thephilosophy of religion, acosmological argumentis an argument for the existence ofGodbased uponobservationalandfactualstatements concerning theuniverse(or some general category of itsnaturalcontents) typically in the context ofcausation, change, contingency or finitude.[1][2][3]In referring toreasonand observation alone for itspremises, and precludingrevelation, this category of argument falls within the domain ofnatural theology. A cosmological argument can also sometimes be referred to as anargument from universal causation, anargument from first cause, thecausal argumentor theprime mover argument.
The concept of causation is a principal underpinning idea in all cosmological arguments, particularly in affirming the necessity for aFirst Cause. The latter is typically determined inphilosophical analysisto beGod, as identified withinclassical conceptions of theism.
The origins of the argument date back to at leastAristotle, developed subsequently within the scholarly traditions ofNeoplatonismandearly Christianity, and later under medievalIslamicscholasticismthrough the 9th to 12th centuries. It would eventually be re-introduced to Christian theology in the 13th century byThomas Aquinas. In the 18th century, it would become associated with theprinciple of sufficient reasonformulated byGottfried LeibnizandSamuel Clarke, itself an exposition of theParmenideancausal principle that "nothing comes from nothing".
Contemporarydefenders of cosmological arguments includeWilliam Lane Craig,[4]Robert Koons,[5]John Lennox,Stephen Meyer, andAlexander Pruss.[6]
Plato(c. 427–347 BC) andAristotle(c. 384–322 BC) both posited first cause arguments, though each had certain notable caveats.[7]InThe Laws(Book X), Plato posited that all movement in the world and theCosmoswas "imparted motion". This required a "self-originated motion" to set it in motion and to maintain it. InTimaeus, Plato posited a "demiurge" of supreme wisdom and intelligence as the creator of the Cosmos.
Aristotle argued against the idea of a first cause, often confused with the idea of a "prime mover" or "unmoved mover" (πρῶτον κινοῦν ἀκίνητον or primus motor), in his Physics and Metaphysics.[8] Aristotle argued in favor of the idea of several unmoved movers, one powering each celestial sphere, which he believed lived beyond the sphere of the fixed stars, and explained why motion in the universe (which he believed was eternal) had continued for an infinite period of time. Aristotle argued that the atomists' assertion of a non-eternal universe would require a first uncaused cause – in his terminology, an efficient first cause – an idea he considered a nonsensical flaw in the reasoning of the atomists.
Like Plato, Aristotle believed in an eternal cosmos with no beginning and no end (which in turn follows Parmenides' famous statement that "nothing comes from nothing"). In what he called "first philosophy" or metaphysics, Aristotle did intend a theological correspondence between the prime mover and a deity; functionally, however, he provided an explanation for the apparent motion of the "fixed stars" (now understood as the daily rotation of the Earth). According to his theses, immaterial unmoved movers are eternal unchangeable beings that constantly think about thinking, but, being immaterial, they are incapable of interacting with the cosmos and have no knowledge of what transpires therein. From an "aspiration or desire",[9] the celestial spheres imitate that purely intellectual activity as best they can, by uniform circular motion. The unmoved movers inspiring the planetary spheres are no different in kind from the prime mover; they merely suffer a dependency of relation to the prime mover. Correspondingly, the motions of the planets are subordinate to the motion inspired by the prime mover in the sphere of fixed stars. Aristotle's natural theology admitted no creation or capriciousness from the immortal pantheon, but maintained a defense against dangerous charges of impiety.[10]
Plotinus, a third-centuryPlatonist, taught thatthe Onetranscendent absolute caused the universe to exist simply as a consequence of its existence (creatio ex deo).[11]His discipleProclusstated, "The One is God".[12]In the 6th century,Syriac Christianneo-PlatonistJohn Philoponus(c. 490 – c. 570) examined the contradiction between Greek pagan adherences to the concept of apast-eternal worldand Aristotelian rejection of the existence ofactual infinities. Thereupon, he formulated arguments in defense oftemporal finitism, which underpinned his arguments for the existence of God. Philosopher Steven M. Duncan notes that Philoponus's ideas eventually received their fullest articulation "at the hands of Muslim and Jewish exponents ofkalam", or medieval Islamicscholasticism.[13]
In the 11th century, Islamic philosopherAvicenna(c. 980 – 1037) inquired into the question ofbeing, in which he distinguished betweenessence(māhiyya) andexistence(wuǧūd).[14]He argued that the fact of existence could not be inferred from or accounted for by the essence of existing things, and thatformand matter by themselves could not originate and interact with the movement of the universe or the progressive actualization of existing things. Thus, he reasoned that existence must be due to anagent causethat necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must coexist with its effect and be an existing thing.[15]
Thomas Aquinas(c. 1225 – 1274) adapted and enhanced the argument he found in his reading of Aristotle, Avicenna (theProof of the Truthful) andMaimonidesto formulate one of the most influential versions of the cosmological argument.[16][17]His conception of the first cause was the idea that the universe must be caused by something that is itself uncaused, which he claimed is 'that which we call God':[16]
The second way is from the nature of the efficient cause. In the world of sense we find there is an order of efficient causes. There is no case known (neither is it, indeed, possible) in which a thing is found to be the efficient cause of itself; for so it would be prior to itself, which is impossible. Now in efficient causes it is not possible to go on to infinity, because in all efficient causes following in order, the first is the cause of the intermediate cause, and the intermediate is the cause of the ultimate cause, whether the intermediate cause be several, or only one. Now to take away the cause is to take away the effect. Therefore, if there be no first cause among efficient causes, there will be no ultimate, nor any intermediate cause. But if in efficient causes it is possible to go on to infinity, there will be no first efficient cause, neither will there be an ultimate effect, nor any intermediate efficient causes; all of which is plainly false. Therefore it is necessary to admit a first efficient cause, to which everyone gives the name of God.
Importantly,Aquinas's Five Ways, given the second question of hisSumma Theologica, are not the entirety of Aquinas's demonstration that the Christian God exists. The Five Ways form only the beginning of Aquinas's Treatise on the Divine Nature.
Aregressis a series of related elements, arranged in some type of sequence of succession, examined in backwards succession (regression) from a fixed point of reference. Depending on the type of regress, this retrograde examination may take the form ofrecursiveanalysis, in which the elements in a series are studied as products of prior, often simpler, elements. If there is no 'last member' in a regress (i.e. no 'first member' in the series) it becomes aninfinite regress, continuing in perpetuity.[18]In the context of the cosmological argument the term 'regress' usually refers tocausal regress, in which the series is a chain ofcause and effect, with each element in the series arising from causal activity of the prior member.[19]Some variants of the argument may also refer totemporal regress, wherein the elements are past events (discrete units of time) arranged in atemporalsequence.[4]
Aninfinite regress argumentattempts to establish the falsity of a proposition by showing that itentailsan infinite regress that isvicious.[18][20]The cosmological argument is a type ofpositiveinfinite regress argument given that it defends a proposition (in this case, the existence of afirst cause) by arguing that its negation would lead to a vicious regress.[21]An infinite regress may be vicious due to various reasons:[22][1]
Aquinas refers to the distinction found in Aristotle'sPhysics(8.5) that a series of causes may either beaccidentalor essential,[25][26]though the designation of this terminology would follow later underJohn Duns Scotusat the turn of the 14th century.[27]
In an accidentally ordered series of causes, earlier members need not continue exerting causal activity (having done so to propagate the chain) for the series to continue. For example, in a generational line, ancestors need no longer exist for their offspring to continue the sequence of descent. In an essential series, prior members must maintain causal interrelationship for the series to continue: If a hand grips a stick that moves a rock along the ground, the rock would stop motion once the hand or stick ceases to exist.[28]
Based upon this distinctionFrederick Copleston(1907–1994) characterises two types of causation: Causesin fieri, which cause an effect'sbecoming, or coming into existence, and causesin esse, which causally sustain an effect, inbeing, once it exists.[29]
Two specific properties of an essentially ordered series have significance in the context of the cosmological argument:[28]
Thomistic philosopher R. P. Phillips comments on the characteristics of essential ordering:[30]
In thescholasticera,Aquinasformulated the "argument fromcontingency", followingAristotle, in claiming thatthere must be something to explain the existence of the universe. Since the universe could, under different circumstances, conceivablynotexist (i.e. it is contingent) its existence must have a cause. This cause cannot be embodied in another contingent thing, but something that exists bynecessity(i.e. thatmustexist in order for anything else to exist).[16]It is a form of argument fromuniversal causation, therefore compatible with the conception of a universe that has no beginning in time. In other words, according to Aquinas, even if the universe has always existed, it still owes its continuing existence to anuncaused cause,[31]he states: "... and this we understand to be God."[16]
Aquinas's argument from contingency is formulated as theThird Way(Q2, A3) in theSumma Theologica. It may be expressed as follows:[16]
He concludes thereupon that contingent beings are an insufficient explanation for the existence of other contingent beings, and, furthermore, that there must exist a necessary being, whose non-existence is impossible, to explain the origination of all contingent beings.
In 1714, German philosopherGottfried Leibnizpresented a variation of the cosmological argument based upon theprinciple of sufficient reason. He writes: "There can be found no fact that is true or existent, or any true proposition, without there being a sufficient reason for its being so and not otherwise, although we cannot know these reasons in most cases." Stating his argument succinctly:[32]
Alexander Prussformulates the argument as follows:[33]
Premise 1 expresses theprinciple of sufficient reason. In premise 2, Leibniz proposes the existence of alogical conjunctionof all contingent facts, referred to in later literature as theBig Conjunctive Contingent Fact(BCCF), representing the sum total of contingent reality.[34]Premise 3 applies the principle of sufficient reason to the BCCF, given that it too, as a contingency, has a sufficient explanation. It follows, in statement 4, that the explanation of the BCCF must be necessary, not contingent, given that the BCCF incorporates all contingent facts. Statement 5 proposes that the necessary being explaining the totality of contingent facts is God.
Philosophers Joshua Rasmussen and T. Ryan Byerly have argued in defence of the inference from statement 4 to statement 5.[35][36]
At the turn of the 14th century, medieval Christian theologian John Duns Scotus (1265/66–1308) formulated a metaphysical argument for the existence of God inspired by Aquinas's argument of the unmoved mover.[37] Like other philosophers and theologians, Scotus believed that his statement for God's existence could be considered distinct from that of Aquinas. The form of the argument can be summarised as follows:[27]
Scotus affirms, in premise 5, that anaccidentally ordered series of causesis impossible without higher-order laws and processes that govern the basic principles of accidental causation, which he characterises as essentially ordered causes.[38]
Premise 6 continues, in accordance with Aquinas's discourses on theSecond WayandThird Way, that an essentially ordered series of causes cannot be an infinite regress.[39]On this, Scotus posits that, if it is merely possible that a first agent exists, then it isnecessarilytrue that a first agent exists, given that the non-existence of a first agent entails the impossibility of its own existence (by virtue of being a first cause in the chain).[27]He argues further that it isnot impossiblefor a being to exist that is causeless by virtue ofontologicalperfection.[40]
With the formulation of this argument, Scotus establishes the first component of his 'triple primacy': The characterisation of a being that is first inefficient causality,final causalityand pre-eminence, or maximal excellence, which he ascribes to God.[27]
The Kalam cosmological argument's central thesis is the impossibility of an infinite temporal regress of events (or a past-infinite universe). Though its modern formulation defends the finitude of the past through philosophical and scientific arguments, many of the argument's ideas originate in the writings of early Christian theologian John Philoponus (490–570 AD),[41] were developed within the proceedings of medieval Islamic scholasticism through the 9th to 12th centuries, and eventually returned to Christian theological scholarship in the 13th century.[42]
These ideas were revitalised for modern discourse by philosopher and theologianWilliam Lane Craigthrough publications such asThe Kalām Cosmological Argument(1979) and theBlackwell Companion to Natural Theology(2009). The form of the argument popularised by Craig is expressed in two parts, as an initialdeductivesyllogismfollowed by further philosophical analysis.[4]
Craig argues that the cause of the universenecessarilyembodies specific properties in creating the universeex nihiloand in effecting creation from a timeless state (implyingfree agency). Based upon this analysis, he appends a further premise and conclusion:[43]
For scientific evidence of the finitude of the past, Craig refers to theBorde-Guth-Vilenkin theorem, which posits a past boundary tocosmic inflation, and the general consensus on the standard model of cosmology, which refers to the origin of the universe in theBig Bang.[44][45]
For philosophical evidence, he citesHilbert's paradox of the grand hotelandBertrand Russell'stale of Tristram Shandyto prove (respectively) the impossibility of actual infinites existing in reality and of forming an actual infinite by successive addition. He concludes that past events, in comprising a series of events that are instantiated in reality and formed by successive addition, cannot extend to an infinite past.[46]
Craig remarks upon thetheologicalimplications that follow from the conclusion of the argument:[47]
Objections to the cosmological argument may question why a first cause is unique in that it does not require any causes. Critics contend that the concept of a first cause qualifies asspecial pleading, or that arguing for the first cause's exemption raises the question of why there should be a first cause at all.[48]Defenders maintain that this question is addressed by various formulations of the cosmological argument, emphasizing that none of its major iterations rests on the premise that everything requires a cause.[49]
Andrew Loke refers to theKalam cosmological argument, in which the causal premise ("whatever begins to exist has a cause") stipulates that only things whichbegin to existrequire a cause.[50]William Lane Craigasserts that—even if one posits a plurality of causes for the existence of the universe—a first uncaused cause is necessary, otherwise an infinite regress of causes would arise, which he argues is impossible.[4][1]Similarly,Edward Feserproposes, in accordance with Aquinas's discourses on theSecond Way, that an essentially ordered series of causes cannot regress to infinity, even if it may be theoretically possible for accidentally ordered causes to do so.[51]
Various arguments have been presented to demonstrate the metaphysical impossibility of an actually infinite regress occurring in thereal world, referring tothought experimentssuch asHilbert's Hotel, thetale of Tristram Shandy, and variations.[52][53]
Craig maintains that thecausal principleis predicated in themetaphysicalintuitionthatnothing comes from nothing.If such intuitions are false, he argues it would be inexplicable why anything and everything does not randomly come into existence without a cause.[4]Yet, not all philosophers subscribe to the view of causality asa prioriinjustification.David Humecontends that the principle is rooted inexperience, therefore within the category ofa posterioriknowledge and subject to theproblem of induction.[54]
WhereasJ. L. Mackieargues that cause and effect cannot be extrapolated to the origins of the universe based upon our inductive experiences and intellectual preferences,[55]Craig proposes that causal laws are unrestricted metaphysical truths that are "not contingent upon the properties, causal powers, and dispositions of the natural kinds of substances which happen to exist".[56]
Secular philosophers such asMichael Martinargue that a cosmological argument may establish the existence of a first cause, but falls short of identifying that cause aspersonal, or as God as defined withinclassicalor other specific conceptions oftheism.[57][1]
Defenders of the argument note that most formulations, such as by Aquinas, Duns Scotus and Craig, employ conceptual analysis to establish the identity of the cause. In Aquinas'sSumma Theologica, thePrima Pars(First Part) is devoted predominantly to establishing the attributes of the cause, such as uniqueness, perfection and intelligence.[58]In Scotus'sOrdinatio, his metaphysical argument is the first component of the 'triple primacy' through which he characterises the first cause as a being with the attributes of maximal excellence.[27]
In the topic ofcosmic originsand the standard model ofcosmology, theinitial singularityof theBig Bangis postulated to be the point at whichspaceandtime, as well as allmatterandenergy, came into existence.[59]J. Richard GottandJames E. Gunnassert that the question of "What was there before the Universe?" makes no sense and that the concept ofbeforebecomes meaningless when considering a timeless state. They add that questioning what occurred before the Big Bang is akin to questioning what is north of theNorth Pole.[59]
Craig refers toKant's postulate that a cause can be simultaneous with its effect, denoting that this is true of the moment of creation when time itself came into being.[60]He affirms that the history of 20th century cosmology belies the proposition that researchers have no strong intuition to pursue a causal explanation of the origin of time and the universe.[56]Accordingly, physicists have sought to examine the causal origins of the Big Bang by conjecturing such scenarios as the collision ofmembranes.[61]Feser also notes that versions of the cosmological argument presented by classical philosophers do not require a commitment to the Big Bang, or even to a cosmic origin.[62]
William L. Rowecharacterises the Hume-Edwards principle, referring to arguments presented byDavid Hume, and laterPaul Edwards, in their criticisms of the cosmological argument:[63]
"If the existence of every member of a set is explained, the existence of that set is thereby explained."
The principle stipulates that a causal series—even one that regresses to infinity—requires no explanatory causes beyond those that are members within that series. If every member of a series has a causal explanation within the sequence, the series in itself is explanatorily complete.[63]Thus, it rejects arguments, such as by Duns Scotus, for the existence of higher-order, efficient causes that govern the basic principles of material causation.[27]Notably, it contradicts Hume's ownDialogues Concerning Natural Religion, in which the character Demea reflects that, even if a succession of causes is infinite, the very existence of the chain still requires a cause.[64][65]
Some objections to the cosmological argument refer to the possibility of loops in the structure ofcause and effectthat would avoid the need for a first cause. Gott and Li refer to the curvature ofspacetimeandclosed timelike curvesas possible mechanisms by which the universe may bring about its own existence.[66]Richard Hanleycontends that causal loops are neither logically nor physically impossible, remarking: "[In timed systems] the only possibly objectionable feature that all causal loops share is that coincidence is required to explain them."[67]
Andrew Loke argues that there is insufficient evidence to postulate a causal loop of the type that would avoid a first cause. He proposes that such a mechanism would suffer from the problem ofvicious circularity, rendering itmetaphysicallyimpossible.[68] | https://en.wikipedia.org/wiki/Cosmological_argument |
Adomino effectis the cumulative effect produced when one event sets off a series of similar[1]or related events, a form ofchain reaction. The term is an analogy to afalling row of dominoes. It typically refers to a linked sequence of events where the time between successive events is relatively short. The term can be used literally (about a series of actual collisions) or metaphorically (about causal linkages within systems such as global finance or politics).
The literal, mechanical domino effect is exploited inRube Goldberg machines. In chemistry, the principle applies to adomino reaction, in which one chemical reaction sets up the conditions necessary for a subsequent one that soon follows. In the realm ofprocess safety, adomino-effect accidentis an initial undesirable event triggering additional ones in related equipment or facilities, leading to a total incident effect more severe than the primary accident alone.
The metaphorical usage implies that an outcome is inevitable or highly likely (as it has already started to happen) – a form ofslippery slopeargument. When this outcome is actually unlikely (the argument isfallacious), it has also been called thedomino fallacy.[2] | https://en.wikipedia.org/wiki/Domino_effect |
Timeis the continuous progression ofexistencethat occurs in an apparentlyirreversiblesuccession from thepast, through thepresent, and into thefuture.[1][2][3]It is a component quantity of various measurements used tosequenceevents, to compare the duration of events (or the intervals between them), and to quantifyrates of changeof quantities in material reality or in theconscious experience.[4][5][6][7]Time is often referred to as a fourthdimension, along withthree spatial dimensions.[8]
Time is one of the seven fundamentalphysical quantitiesin both theInternational System of Units(SI) andInternational System of Quantities. The SI baseunit of timeis thesecond, which is defined by measuring theelectronic transitionfrequency ofcaesiumatoms.General relativityis the primary framework for understanding howspacetimeworks.[9]Through advances in both theoretical and experimental investigations of spacetime, it has been shown that time can be distorted anddilated, particularly at the edges ofblack holes.
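The caesium definition of the second can be made concrete: the SI second is defined as exactly 9,192,631,770 periods of the radiation corresponding to the caesium-133 ground-state hyperfine transition. A minimal sketch (variable names are illustrative):

```python
# Exact by definition in the SI: the unperturbed ground-state hyperfine
# transition frequency of caesium-133, in hertz.
CS133_HYPERFINE_HZ = 9_192_631_770

# Duration of a single oscillation period, about a tenth of a nanosecond:
period_s = 1 / CS133_HYPERFINE_HZ
print(f"one period ≈ {period_s:.3e} s")

# Counting that many periods recovers one second (up to float rounding):
seconds = CS133_HYPERFINE_HZ * period_s
print(f"elapsed: {seconds} s")
```

This is why atomic clocks realize the unit: they count oscillations of this fixed-frequency radiation rather than relying on astronomical motion.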
Throughout history, time has been an important subject of study in religion, philosophy, and science. Temporal measurement has occupied scientists and technologists, and has been a prime motivation in navigation and astronomy. Time is also of significant social importance, having economic value ("time is money") as well as personal value, due to an awareness of the limited time in each day ("carpe diem") and in human life spans.
The concept of time can be complex. Multiple notions exist, and defining time in a manner applicable to all fields without circularity has consistently eluded scholars.[7][10][11] Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems.[12][13][14] Traditional definitions of time involved the observation of periodic motion such as the apparent motion of the sun across the sky, the phases of the moon, and the passage of a free-swinging pendulum. More modern systems include the Global Positioning System, other satellite systems, Coordinated Universal Time and mean solar time. Although these systems differ from one another, with careful measurements they can be synchronized.
In physics, time is a fundamental concept used to define other quantities, such as velocity. To avoid a circular definition,[15] time in physics is operationally defined as "what a clock reads", specifically a count of repeating events such as the SI second.[6][16][17] Although this aids in practical measurements, it does not address the essence of time. Physicists developed the concept of the spacetime continuum, where events are assigned four coordinates: three for space and one for time. Events like particle collisions, supernovas, or rocket launches have coordinates that may vary for different observers, making concepts like "now" and "here" relative. In general relativity, these coordinates do not directly correspond to the causal structure of events. Instead, the spacetime interval is calculated and classified as either space-like or time-like, depending on whether an observer exists that would say the events are separated by space or by time.[18] Since the time required for light to travel a specific distance is the same for all observers—a fact first publicly demonstrated by the Michelson–Morley experiment—all observers will consistently agree on this definition of time as a causal relation.[19]
General relativity does not address the nature of time for extremely small intervals where quantum mechanics holds. In quantum mechanics, time is treated as a universal and absolute parameter, differing from general relativity's notion of independent clocks. The problem of time consists of reconciling these two theories.[20] As of 2025, there is no generally accepted theory of quantum general relativity.[21]
Methods of temporal measurement, or chronometry, generally take two forms. The first is a calendar, a mathematical tool for organising intervals of time on Earth,[22] consulted for periods longer than a day. The second is a clock, a physical mechanism that indicates the passage of time, consulted for periods less than a day. The combined measurement marks a specific moment in time from a reference point, or epoch.
Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago.[23] Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months (either 354 or 384 days). Without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months. Lunisolar calendars have a thirteenth month added to some years to make up for the difference between a full year (now known to be about 365.24 days) and a year of just twelve lunar months. The numbers twelve and thirteen came to feature prominently in many cultures, at least partly due to this relationship of months to years.
Other early forms of calendars originated in Mesoamerica, particularly in the ancient Maya civilization, which developed the Maya calendar, consisting of multiple interrelated calendars. These calendars were religiously and astronomically based; the Haab' calendar has 18 months in a year and 20 days in a month, plus five epagomenal days at the end of the year.[24] In conjunction, the Maya also used a 260-day sacred calendar called the Tzolk'in.[25]
The reforms of Julius Caesar in 45 BC put the Roman world on a solar calendar. This Julian calendar was faulty in that its intercalation still allowed the astronomical solstices and equinoxes to advance against it by about 11 minutes per year. Pope Gregory XIII introduced a correction in 1582; the Gregorian calendar was only slowly adopted by different nations over a period of centuries, but it is now by far the most commonly used calendar around the world.
During the French Revolution, a new clock and calendar were invented as part of the dechristianization of France, with the aim of creating a more rational system to replace the Gregorian calendar. The French Republican Calendar's days consisted of ten hours of a hundred minutes of a hundred seconds, which marked a deviation from the base-12 (duodecimal) system used in many other devices by many cultures. The system was abolished in 1806.[26]
A large variety of devices have been invented to measure time. The study of these devices is called horology.[27] They can be driven by a variety of means, including gravity, springs, and various forms of electrical power, and regulated by a variety of means.
A sundial is any device that uses the direction of sunlight to cast shadows from a gnomon onto a set of markings calibrated to indicate the local time, usually to the hour. The idea of separating the day into smaller parts is credited to the Egyptians because of their sundials, which operated on a duodecimal system. The importance of the number 12 is due to the number of lunar cycles in a year and the number of stars used to count the passage of night.[28] Obelisks serving as gnomons were built as early as c. 3500 BC.[29] An Egyptian device that dates to c. 1500 BC, similar in shape to a bent T-square, also measured the passage of time from the shadow cast by its crossbar on a nonlinear rule. The T was oriented eastward in the mornings. At noon, the device was turned around so that it could cast its shadow in the evening direction.[30]
Alarm clocks reportedly first appeared in ancient Greece c. 250 BC with a water clock made by Plato that would set off a whistle.[31] The hydraulic alarm worked by gradually filling a series of vessels with water; after some time, the water emptied out of a siphon.[32] Inventor Ctesibius revised Plato's design: his water clock used a float as the power drive system and a sundial to correct the water flow rate.[33]
In medieval philosophical writings, the atom was a unit of time referred to as the smallest possible division of time. The earliest known occurrence in English is in Byrhtferth's Enchiridion (a science text) of 1010–1012,[34] where it was defined as 1/564 of a momentum (1½ minutes),[35] and thus equal to 15/94 of a second. It was used in the computus, the process of calculating the date of Easter. The most precise timekeeping device of the ancient world was the water clock, or clepsydra, one of which was found in the tomb of Egyptian pharaoh Amenhotep I. Water clocks could be used to measure the hours even at night but required manual upkeep to replenish the flow of water. The ancient Greeks and the people of Chaldea (southeastern Mesopotamia) regularly maintained timekeeping records as an essential part of their astronomical observations. Arab inventors and engineers, in particular, made improvements on the use of water clocks up to the Middle Ages.[36] In the 11th century, Chinese inventors and engineers invented the first mechanical clocks, driven by an escapement mechanism.
Incense sticks and candles were, and are, commonly used to measure time in temples and churches across the globe. Water clocks, and, later, mechanical clocks, were used to mark the events of the abbeys and monasteries of the Middle Ages. The passage of the hours was marked by bells in abbeys as well as at sea. Richard of Wallingford (1292–1336), abbot of St. Alban's abbey, famously built a mechanical clock as an astronomical orrery about 1330.[37][38] The hourglass uses the flow of sand to measure the flow of time; hourglasses were also used in navigation. Ferdinand Magellan used 18 glasses on each ship for his circumnavigation of the globe (1522).[39] The English word clock probably comes from the Middle Dutch word klocke which, in turn, derives from the medieval Latin word clocca, which ultimately derives from Celtic and is cognate with French, Latin, and German words that mean bell.
Great advances in accurate timekeeping were made by Galileo Galilei and especially Christiaan Huygens with the invention of pendulum-driven clocks, along with the invention of the minute hand by Jost Bürgi.[40] There is also a clock designed to keep time for 10,000 years, called the Clock of the Long Now. Alarm clock devices were later mechanized. Levi Hutchins's alarm clock has been credited as the first American alarm clock, though it could only ring at 4 a.m. Antoine Redier was credited as the first person to patent an adjustable mechanical alarm clock, in 1847.[41] Digital forms of alarm clocks became more accessible through digitization and integration with other technologies, such as smartphones.
The most accurate timekeeping devices are atomic clocks, which are accurate to seconds in many millions of years,[43] and are used to calibrate other clocks and timekeeping instruments. Atomic clocks use the frequency of electronic transitions in certain atoms to measure the second. One of the atoms used is caesium; most modern atomic clocks probe caesium with microwaves to determine the frequency of these electron vibrations.[44] Since 1967, the International System of Measurements has based its unit of time, the second, on the properties of caesium atoms. The SI defines the second as 9,192,631,770 cycles of the radiation that corresponds to the transition between two electron spin energy levels of the ground state of the caesium-133 atom. A portable timekeeper that meets certain precision standards is called a chronometer. Initially, the term referred to the marine chronometer, a timepiece used to determine longitude by means of celestial navigation, a precision first achieved by John Harrison. More recently, the term has also been applied to the chronometer watch, a watch that meets precision standards set by the Swiss agency COSC.
In modern times, the Global Positioning System in coordination with the Network Time Protocol can be used to synchronize timekeeping systems across the globe. As of May 2010, the smallest time interval uncertainty in direct measurements was on the order of 12 attoseconds (1.2 × 10⁻¹⁷ seconds), about 3.7 × 10²⁶ Planck times.[45] The time measured was the delay caused by the interference patterns of out-of-sync electron waves.[46]
The second (s) is the SI base unit of time. A minute (min) is 60 seconds in length (or, rarely, 59 or 61 seconds when leap seconds are employed), and an hour is 60 minutes or 3,600 seconds in length. A day is usually 24 hours or 86,400 seconds in length; however, the duration of a calendar day can vary due to daylight saving time and leap seconds.
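The unit arithmetic above can be sketched in a few lines. This is a minimal illustration that deliberately ignores leap seconds and daylight saving time, and `decompose` is a hypothetical helper written for this example, not a standard-library function.

```python
# Decompose a duration given in seconds into days, hours, minutes, and
# seconds, using the conventional lengths: 1 min = 60 s, 1 h = 3,600 s,
# 1 day = 86,400 s. Leap seconds and DST are ignored in this sketch.

def decompose(total_seconds: int) -> tuple[int, int, int, int]:
    days, rem = divmod(total_seconds, 86_400)
    hours, rem = divmod(rem, 3_600)
    minutes, seconds = divmod(rem, 60)
    return days, hours, minutes, seconds

print(decompose(90_061))  # (1, 1, 1, 1): one day, hour, minute, second
```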
A time standard is a specification for measuring time: assigning a number orcalendar dateto aninstant(point in time), quantifying the duration of a time interval, and establishing achronology(ordering of events). In modern times, several time specifications have been officially recognized as standards, where formerly they were matters of custom and practice. The invention in 1955 of the caesiumatomic clockhas led to the replacement of older and purely astronomical time standards such assidereal timeandephemeris time, for most practical purposes, by newer time standards based wholly or partly on atomic time using the SI second.
International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. Universal Time (UT1) is mean solar time at 0° longitude, computed from astronomical observations. It varies from TAI because of the irregularities in Earth's rotation. Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the leap second. The Global Positioning System broadcasts a very precise time signal based on UTC time.
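The integer-second relationship between TAI and UTC can be illustrated with a small sketch. The 37-second offset hard-coded below is the value in effect since the leap second of 31 December 2016, assumed constant here for illustration; real implementations consult a maintained leap-second table rather than a fixed constant.

```python
# Sketch: UTC lags TAI by an integer number of accumulated leap seconds.
# The 37 s offset is assumed for this example; production code would
# look it up from a leap-second table for the date in question.
from datetime import datetime, timedelta

TAI_MINUS_UTC = timedelta(seconds=37)  # assumed fixed offset (post-2016)

def tai_from_utc(utc: datetime) -> datetime:
    """Convert a UTC timestamp to TAI under the assumed fixed offset."""
    return utc + TAI_MINUS_UTC

print(tai_from_utc(datetime(2024, 1, 1, 0, 0, 0)))  # 2024-01-01 00:00:37
```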
The surface of the Earth is split into a number of time zones. Standard time or civil time in a time zone deviates a fixed, round amount, usually a whole number of hours, from some form of Universal Time, usually UTC. Most time zones are exactly one hour apart, and by convention compute their local time as an offset from UTC. For example, time zones at sea are based on UTC. In many locations (but not at sea) these offsets vary twice yearly due to daylight saving time transitions.
Some other time standards are used mainly for scientific work. Terrestrial Time is a theoretical ideal scale realized by TAI. Geocentric Coordinate Time (TCG) and Barycentric Coordinate Time (TCB) are scales defined as coordinate times in the context of the general theory of relativity, with TCG applying to Earth's center and TCB to the solar system's barycenter. Barycentric Dynamical Time is an older relativistic scale related to TCB that is still in use.
Many ancient cultures, particularly in the East, had a cyclical view of time. In these traditions, time was often seen as a recurring pattern of ages or cycles, where events and phenomena repeated themselves in a predictable manner. One of the most famous examples of this concept is found in Hindu philosophy, where time is depicted as a wheel called the "Kalachakra" or "Wheel of Time". According to this belief, the universe undergoes endless cycles of creation, preservation, and destruction.[47]
Similarly, in other ancient cultures such as those of the Mayans, Aztecs, and Chinese, there were also beliefs in cyclical time, often associated with astronomical observations and calendars.[48] These cultures developed complex systems to track time, seasons, and celestial movements, reflecting their understanding of cyclical patterns in nature and the universe.
The cyclical view of time contrasts with the linear concept of time more common in Western thought, where time is seen as progressing in a straight line from past to future without repetition.[49]
In general, the Islamic and Judeo-Christian world-view regards time as linear[50] and directional,[51] beginning with the act of creation by God. The traditional Christian view sees time ending, teleologically,[52] with the eschatological end of the present order of things, the "end time". Some Christian theologians (such as Augustine of Hippo and Aquinas[53]), however, believe that God is outside of time, seeing all events simultaneously; that time did not exist before God; and that God created time.[54][55]
In the Old Testament book Ecclesiastes, traditionally ascribed to Solomon (970–928 BC), time is depicted as cyclical and beyond human control.[56] The book states that there is an appropriate season or time for every activity.[57]
The Greek language denotes two distinct principles, Chronos and Kairos. The former refers to numeric, or chronological, time. The latter, literally "the right or opportune moment", relates specifically to metaphysical or Divine time. In theology, Kairos is qualitative, as opposed to quantitative.[58]
In Greek mythology, Chronos (ancient Greek: Χρόνος) is identified as the personification of time. His name in Greek means "time" and is alternatively spelled Chronus (Latin spelling) or Khronos. Chronos is usually portrayed as an old, wise man with a long, gray beard, such as "Father Time". Some English words whose etymological root is khronos/chronos include chronology, chronometer, chronic, anachronism, synchronise, and chronicle.
Rabbis sometimes saw time like "an accordion that was expanded and collapsed at will."[59] According to Kabbalists, "time" is a paradox[60] and an illusion.[61]
According to Advaita Vedanta, time is integral to the phenomenal world, which lacks independent reality. Time and the phenomenal world are products of maya, influenced by our senses, concepts, and imaginations. The phenomenal world, including time, is seen as impermanent and characterized by plurality, suffering, conflict, and division. Since phenomenal existence is dominated by temporality (kala), everything within time is subject to change and decay. Overcoming pain and death requires knowledge that transcends temporal existence and reveals its eternal foundation.[62]
Two contrasting viewpoints on time divide prominent philosophers. One view is that time is part of the fundamental structure of the universe—a dimension independent of events, in which events occur in sequence. Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian time.[63][64]
The opposing view is that time does not refer to any kind of "container" that events and objects "move through", nor to any entity that "flows", but that it is instead part of a fundamental intellectual structure (together with space and number) within which humans sequence and compare events. This second view, in the tradition of Gottfried Leibniz[16] and Immanuel Kant,[65][66] holds that time is neither an event nor a thing, and thus is not itself measurable, nor can it be travelled. Furthermore, it may be that there is a subjective component to time, but whether or not time itself is "felt", as a sensation, or is a judgment, is a matter of debate.[2][6][7][67][68]
In philosophy, time has been questioned throughout the centuries: what time is, and whether it is real or not. Ancient Greek philosophers asked whether time was linear or cyclical and whether time was endless or finite.[69] Philosophers in different traditions explained time in different ways; ancient Indian philosophers, for instance, had a concept called the Wheel of Time, the belief that ages repeat over the lifespan of the universe.[70] This led to beliefs like cycles of rebirth and reincarnation.[70] Greek philosophers believed that the universe was infinite, and an illusion to humans.[70] Plato believed that time was made by the Creator at the same instant as the heavens.[70] He also said that time is a period of motion of the heavenly bodies.[70] Aristotle believed that time correlated to movement, that time did not exist on its own but was relative to the motion of objects.[70] He also believed that time was related to the motion of celestial bodies; humans could tell time because of orbital periods, and therefore time had a duration.[71]
The Vedas, the earliest texts on Indian philosophy and Hindu philosophy, dating to the late 2nd millennium BC, describe ancient Hindu cosmology, in which the universe goes through repeated cycles of creation, destruction and rebirth, with each cycle lasting 4,320 million years.[72] Ancient Greek philosophers, including Parmenides and Heraclitus, wrote essays on the nature of time.[73] Plato, in the Timaeus, identified time with the period of motion of the heavenly bodies. Aristotle, in Book IV of his Physica, defined time as 'number of movement in respect of the before and after'.[74] In Book 11 of his Confessions, St. Augustine of Hippo ruminates on the nature of time, asking, "What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not." He begins to define time by what it is not rather than what it is,[75] an approach similar to that taken in other negative definitions. However, Augustine ends up calling time a "distention" of the mind (Confessions 11.26) by which we simultaneously grasp the past in memory, the present by attention, and the future by expectation.
Philosophers of the 17th and 18th centuries questioned whether time was real and absolute, or an intellectual concept that humans use to understand and sequence events.[69] These questions led to the debate between realism and anti-realism: the realists believed that time is a fundamental part of the universe, perceived through events happening in sequence within a dimension.[76] Isaac Newton said that we are merely occupying time, and that humans can only understand relative time.[76] Newton believed in absolute space and absolute time; Leibniz believed that time and space are relational.[77] The differences between Leibniz's and Newton's interpretations came to a head in the famous Leibniz–Clarke correspondence. Relative time is a measurement of objects in motion.[76] The anti-realists believed that time is merely a convenient intellectual concept for humans to understand events; time would be meaningless unless there were objects it could interact with, a view called relational time.[76] René Descartes, John Locke, and David Hume said that one's mind needs to acknowledge time in order to understand what time is.[71] Immanuel Kant believed that we cannot know what something is unless we experience it firsthand.[78]
Time is not an empirical concept. For neither co-existence nor succession would be perceived by us, if the representation of time did not exist as a foundation a priori. Without this presupposition, we could not represent to ourselves that things exist together at one and the same time, or at different times, that is, contemporaneously, or in succession.
Immanuel Kant, in the Critique of Pure Reason, described time as an a priori intuition that allows us (together with the other a priori intuition, space) to comprehend sense experience.[79] With Kant, neither space nor time are conceived as substances, but rather both are elements of a systematic mental framework that necessarily structures the experiences of any rational agent, or observing subject. Kant thought of time as a fundamental part of an abstract conceptual framework, together with space and number, within which we sequence events, quantify their duration, and compare the motions of objects. In this view, time does not refer to any kind of entity that "flows," that objects "move through," or that is a "container" for events. Spatial measurements are used to quantify the extent of and distances between objects, and temporal measurements are used to quantify the durations of and between events. Time was designated by Kant as the purest possible schema of a pure concept or category.
Henri Bergson believed that time was neither a real homogeneous medium nor a mental construct, but possesses what he referred to as Duration. Duration, in Bergson's view, was creativity and memory as an essential component of reality.[80]
According to Martin Heidegger, we do not exist inside time, we are time. Hence, the relationship to the past is a present awareness of having been, which allows the past to exist in the present. The relationship to the future is the state of anticipating a potential possibility, task, or engagement. It is related to the human propensity for caring and being concerned, which causes "being ahead of oneself" when thinking of a pending occurrence. Therefore, this concern for a potential occurrence also allows the future to exist in the present. The present becomes an experience, which is qualitative instead of quantitative. Heidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcended.[81] We are not stuck in sequential time. We are able to remember the past and project into the future; we have a kind of random access to our representation of temporal existence; we can, in our thoughts, step out of (ecstasis) sequential time.[82]
Modern-era philosophers have asked: is time real or unreal, does time happen all at once or as a duration, is time tensed or tenseless, and is there a future to be?[69] One theory, the tenseless or B-theory, says that any tensed terminology can be replaced with tenseless terminology.[83] For example, "we will win the game" can be replaced with "we do win the game", taking out the future tense. On the other hand, the tensed or A-theory says that our language has tense verbs for a reason and that the future cannot be determined.[83] There is also the notion of imaginary time, from Stephen Hawking, who said that space and imaginary time are finite but have no boundaries.[83] Imaginary time is neither real nor unreal; it is something that is hard to visualize.[83] Philosophers can agree that physical time exists outside of the human mind and is objective, while psychological time is mind-dependent and subjective.[71]
In 5th century BC Greece, Antiphon the Sophist, in a fragment preserved from his chief work On Truth, held that: "Time is not a reality (hypostasis), but a concept (noêma) or a measure (metron)." Parmenides went further, maintaining that time, motion, and change were illusions, leading to the paradoxes of his follower Zeno.[84] Time as an illusion is also a common theme in Buddhist thought.[85][86]
These arguments often center on what it means for something to be unreal. Modern physicists generally believe that time is as real as space—though others, such as Julian Barbour, argue that the quantum equations of the universe take their true form when expressed in the timeless realm containing every possible now or momentary configuration of the universe.[87] J. M. E. McTaggart's 1908 article The Unreality of Time argues that, since every event has the characteristic of being both present and not present (i.e., future or past), time is a self-contradictory idea.
Another modern philosophical theory called presentism views the past and the future as human-mind interpretations of movement instead of real parts of time (or "dimensions") which coexist with the present. This theory rejects the existence of all direct interaction with the past or the future, holding only the present as tangible. This is one of the philosophical arguments against time travel.[88] This contrasts with eternalism (all time: present, past and future, is real) and the growing block theory (the present and the past are real, but the future is not).
Until Einstein's reinterpretation of the physical concepts associated with time and space in 1907, time was considered to be the same everywhere in the universe, with all observers measuring the same time interval for any event.[89] Non-relativistic classical mechanics is based on this Newtonian idea of time. Einstein, in his special theory of relativity,[90] postulated the constancy and finiteness of the speed of light for all observers. He showed that this postulate, together with a reasonable definition for what it means for two events to be simultaneous, requires that distances appear compressed and time intervals appear lengthened for events associated with objects in motion relative to an inertial observer.
The theory of special relativity finds a convenient formulation in Minkowski spacetime, a mathematical structure that combines three dimensions of space with a single dimension of time. In this formalism, distances in space can be measured by how long light takes to travel that distance, e.g., a light-year is a measure of distance, and a meter is now defined in terms of how far light travels in a certain amount of time. Two events in Minkowski spacetime are separated by an invariant interval, which can be either space-like, light-like, or time-like. Events that have a time-like separation cannot be simultaneous in any frame of reference; there must be a temporal component (and possibly a spatial one) to their separation. Events that have a space-like separation will be simultaneous in some frame of reference, and there is no frame of reference in which they do not have a spatial separation. Different observers may calculate different distances and different time intervals between two events, but the invariant interval between the events is independent of the observer and their velocity.
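The classification of intervals described above can be made concrete in a short sketch, working in units where c = 1 and using the (+, −, −, −) sign convention (one common choice, assumed here): the squared interval is s² = Δt² − (Δx² + Δy² + Δz²), and it is the sign of s², not the individual coordinate differences, that all observers agree on.

```python
# Classify the invariant interval between two events, in units where
# c = 1 and with the (+, -, -, -) metric signature (assumed convention).

def classify_interval(dt: float, dx: float, dy: float, dz: float) -> str:
    s2 = dt**2 - (dx**2 + dy**2 + dz**2)  # squared Minkowski interval
    if s2 > 0:
        return "time-like"   # some frame sees the events at the same place
    if s2 < 0:
        return "space-like"  # some frame sees the events as simultaneous
    return "light-like"      # the events can be joined by a light signal

print(classify_interval(2.0, 1.0, 0.0, 0.0))  # time-like
print(classify_interval(1.0, 2.0, 0.0, 0.0))  # space-like
print(classify_interval(1.0, 1.0, 0.0, 0.0))  # light-like
```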
Unlike space, where an object can travel in opposite directions (and in 3 dimensions), time appears to have only one dimension and only one direction—the past lies behind, fixed and immutable, while the future lies ahead and is not necessarily fixed. Yet most laws of physics allow any process to proceed both forward and in reverse. Only a few physical phenomena violate the reversibility of time. This time directionality is known as the arrow of time. Acknowledged examples include the thermodynamic, cosmological, and psychological arrows of time.[91][92][93]
The relationship between these different arrows of time is a hotly debated topic in theoretical physics.[94]
The second law of thermodynamics states that entropy must increase over time. Brian Greene theorizes that, according to the equations, the change in entropy occurs symmetrically whether going forward or backward in time, so entropy tends to increase in either direction; our current low-entropy universe would then be a statistical aberration, much as tossing a coin often enough will eventually produce heads ten times in a row. However, this theory is not supported empirically in local experiment.[95]
In non-relativistic classical mechanics, Newton's concept of "relative, apparent, and common time" can be used in the formulation of a prescription for the synchronization of clocks. Events seen by two different observers in motion relative to each other produce a mathematical concept of time that works sufficiently well for describing the everyday phenomena of most people's experience. In the late nineteenth century, physicists encountered problems with the classical understanding of time in connection with the behavior of electricity and magnetism. Maxwell's equations of the 1860s described light as always traveling at a constant speed in a vacuum.[96] However, classical mechanics assumed that motion was measured relative to a fixed reference frame. The Michelson–Morley experiment contradicted this assumption. Einstein later proposed a method of synchronizing clocks using the constant, finite speed of light as the maximum signal velocity. This led directly to the conclusion that observers in motion relative to one another measure different elapsed times for the same event.
Time has historically been closely related to space, the two together merging into spacetime in Einstein's special relativity and general relativity. According to these theories, the concept of time depends on the spatial reference frame of the observer, and human perception, as well as measurement by instruments such as clocks, differs for observers in relative motion. For example, if a spaceship carrying a clock flies through space at (very nearly) the speed of light, its crew does not notice a change in the speed of time on board their vessel because everything traveling at the same speed slows down at the same rate (including the clock, the crew's thought processes, and the functions of their bodies). However, to a stationary observer watching the spaceship fly by, the spaceship appears flattened in the direction it is traveling and the clock on board the spaceship appears to move very slowly.
On the other hand, the crew on board the spaceship also perceives the observer as slowed down and flattened along the spaceship's direction of travel, because both are moving at very nearly the speed of light relative to each other. Because the outside universe appears flattened to the spaceship, the crew perceives themselves as quickly traveling between regions of space that (to the stationary observer) are many light years apart. This is reconciled by the fact that the crew's perception of time is different from the stationary observer's; what seems like seconds to the crew might be hundreds of years to the stationary observer. In either case, however, causality remains unchanged: the past is the set of events that can send light signals to an entity and the future is the set of events to which an entity can send light signals.[97][98]
Einstein showed in his thought experiments that people travelling at different speeds, while agreeing on cause and effect, measure different time separations between events, and can even observe different chronological orderings between non-causally related events. Though these effects are typically minute in the human experience, the effect becomes much more pronounced for objects moving at speeds approaching the speed of light. Subatomic particles exist for a well-known average fraction of a second in a lab at rest, but when travelling close to the speed of light they are measured to travel farther and exist for much longer than when at rest.
According to the special theory of relativity, in the high-speed particle's frame of reference, it exists, on the average, for a standard amount of time known as its mean lifetime, and the distance it travels in that time is zero, because its velocity is zero. Relative to a frame of reference at rest, time seems to "slow down" for the particle. Relative to the high-speed particle, distances seem to shorten. Einstein showed how both temporal and spatial dimensions can be altered (or "warped") by high-speed motion.
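The slowdown described above is quantified by the Lorentz factor, γ = 1/√(1 − v²/c²). A minimal Python sketch of this relation follows; the muon lifetime of ~2.2 μs is a standard textbook value and not taken from this article:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2); equals 1 at rest, grows without bound as v -> c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_lifetime(proper_lifetime, v):
    """Mean lifetime measured in the lab frame for a particle moving at speed v."""
    return proper_lifetime * lorentz_factor(v)

# Textbook example (assumed values): a muon's proper mean lifetime is ~2.2 us;
# at 0.995c the lab-frame lifetime is stretched by a factor of about 10.
muon_tau = 2.2e-6
lab_tau = dilated_lifetime(muon_tau, 0.995 * C)
```

At 0.995c the factor is roughly 10, which is why fast muons created in the upper atmosphere survive long enough to reach the ground.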
Einstein (The Meaning of Relativity): "Two events taking place at the points A and B of a system K are simultaneous if they appear at the same instant when observed from the middle point, M, of the interval AB. Time is then defined as the ensemble of the indications of similar clocks, at rest relative to K, which register the same simultaneously." Einstein wrote in his book, Relativity, that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference.
According to general relativity, time also runs slower in stronger gravitational fields; this is gravitational time dilation.[99] The effect becomes more noticeable near denser, more massive objects. A famous example of time dilation is a thought experiment of a subject approaching the event horizon of a black hole. As a consequence of how gravitational fields warp spacetime, the subject will experience gravitational time dilation. From the perspective of the subject itself, time passes normally. Meanwhile, an observer from the outside will see the subject move ever closer to the black hole until, in the extreme, the subject appears 'frozen' in time and eventually fades to nothingness due to the diminishing amount of light returning.[100][101]
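For a non-rotating mass, gravitational time dilation outside the body follows the Schwarzschild factor √(1 − 2GM/(rc²)), which goes to zero at the event horizon. A numerical sketch, where the constant values and the solar example are illustrative assumptions rather than figures from this article:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
C = 299_792_458.0  # speed of light, m/s

def schwarzschild_radius(mass):
    """r_s = 2GM/c^2, the event-horizon radius of a non-rotating black hole."""
    return 2.0 * G * mass / C**2

def gravitational_dilation(mass, r):
    """Ratio of proper time at radius r to time measured far away; tends to 0 at r_s."""
    return math.sqrt(1.0 - schwarzschild_radius(mass) / r)

sun_mass = 1.989e30  # kg (assumed value)
# At Earth's orbital radius the effect is parts in 10^8; just outside a
# horizon the ratio collapses toward zero, the 'frozen in time' limit.
factor_at_earth_orbit = gravitational_dilation(sun_mass, 1.496e11)
```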
The animations visualise the different treatments of time in the Newtonian and the relativistic descriptions. At the heart of these differences are the Galilean and Lorentz transformations applicable in the Newtonian and relativistic theories, respectively. In the figures, the vertical direction indicates time. The horizontal direction indicates distance (only one spatial dimension is taken into account), and the thick dashed curve is the spacetime trajectory ("world line") of the observer. The small dots indicate specific (past and future) events in spacetime. The slope of the world line (deviation from being vertical) gives the relative velocity to the observer.
In the Newtonian description these changes are such that time is absolute:[102] the movements of the observer do not influence whether an event occurs in the 'now' (i.e., whether an event passes the horizontal line through the observer). However, in the relativistic description the observability of events is absolute: the movements of the observer do not influence whether an event passes the "light cone" of the observer. Notice that with the change from a Newtonian to a relativistic description, the concept of absolute time is no longer applicable: events move up and down in the figure depending on the acceleration of the observer.
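The contrast between the two transformations can be made concrete in code. The sketch below (velocities and coordinates are arbitrary illustrative values) shows that the Lorentz transformation preserves the spacetime interval (ct)² − x², while the Galilean transformation leaves time itself untouched:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_transform(t, x, v):
    """Map an event (t, x) into a frame moving at velocity v along the x-axis."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C**2), gamma * (x - v * t)

def galilean_transform(t, x, v):
    """Newtonian counterpart: time is absolute; only position shifts."""
    return t, x - v * t

def interval(t, x):
    """(ct)^2 - x^2, the quantity Lorentz transformations leave invariant."""
    return (C * t) ** 2 - x**2
```

Feeding any event through lorentz_transform changes both t and x yet leaves interval(t, x) unchanged, whereas galilean_transform never changes t at all; that unchanging t is the "absolute time" of the Newtonian picture.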
Time quantization refers to the hypothesis that time has a smallest possible unit. In modern established physical theories, such as the Standard Model of particle physics and general relativity, time is not quantized. Planck time (~5.4 × 10⁻⁴⁴ seconds) is the unit of time in the system of natural units known as Planck units. Current established physical theories are believed to fail at this time scale, and many physicists expect that the Planck time might be the smallest unit of time that could ever be measured, even in principle. Tentative physical theories that attempt to describe phenomena at this scale do exist; an example is loop quantum gravity, which suggests that time is quantized: if gravity is quantized, spacetime is also quantized.[103]
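The Planck time quoted above can be recomputed from its defining relation, t_P = √(ħG/c⁵). The constant values below are standard CODATA-style figures assumed for the sketch, not taken from this article:

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s (assumed value)
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
C = 299_792_458.0         # speed of light, m/s

# Planck time: the unique time scale formed from hbar, G, and c.
planck_time = math.sqrt(HBAR * G / C**5)  # ~5.4e-44 s
```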
Time travel is the concept of moving backwards or forwards to different points in time, in a manner analogous to moving through space, and different from the normal "flow" of time to an earthbound observer. In this view, all points in time (including future times) "persist" in some way. Time travel has been a plot device in fiction since the 19th century. Travelling backwards or forwards in time has never been verified as a process, and doing so presents many theoretical problems and contradictions that to date have not been overcome. Any technological device, whether fictional or hypothetical, that is used to achieve time travel is known as a time machine.
A central problem with time travel to the past is the violation of causality; should an effect precede its cause, it would give rise to the possibility of a temporal paradox. Some interpretations of time travel resolve this by accepting the possibility of travel between branch points, parallel realities, or universes. The many-worlds interpretation has been used as a way to resolve causality paradoxes arising from time travel: any quantum event creates another branching timeline, and all possible outcomes coexist without any wave function collapse.[104] This interpretation stands in contrast to the Copenhagen interpretation, which holds that wave functions do collapse.[105] In science, hypothetical faster-than-light particles are known as tachyons; the mathematics of Einstein's relativity suggests that they would have an imaginary rest mass, and some interpretations suggest that they might move backward in time. General relativity permits the existence of closed timelike curves, which could allow an observer to travel back in time to the same point in space.[106] For the Gödel metric, however, such an occurrence requires a globally rotating universe, which has been contradicted by observations of the redshifts of distant galaxies and the cosmic background radiation.[107] Nonetheless, it has been suggested that a slowly rotating universe model may resolve the Hubble tension, so it cannot yet be ruled out.[108]
Another solution to the problem of causality-based temporal paradoxes is that such paradoxes cannot arise simply because they have not arisen. As illustrated in numerous works of fiction, free will either ceases to exist in the past or the outcomes of such decisions are predetermined. A famous example is the grandfather paradox, in which a person is supposed to travel back in time to kill their own grandfather. This would not be possible to enact because it is a historical fact that one's grandfather was not killed before his child (one's parent) was conceived. This view does not simply hold that history is an unchangeable constant, but that any change made by a hypothetical future time traveller would already have happened in their past, resulting in the reality that the traveller moves from. The Novikov self-consistency principle asserts that a time traveller's actions in the past must already be consistent with the history from which they came, so paradox-creating events cannot occur.
The specious present refers to the time duration wherein one's perceptions are considered to be in the present. The experienced present is said to be 'specious' in that, unlike the objective present, it is an interval and not a durationless instant. The term specious present was first introduced by the psychologist E. R. Clay, and later developed by William James.[109]
The brain's judgment of time is known to be a highly distributed system, including at least the cerebral cortex, cerebellum, and basal ganglia as its components. One particular component, the suprachiasmatic nuclei, is responsible for the circadian (or daily) rhythm, while other cell clusters appear capable of shorter-range (ultradian) timekeeping. Mental chronometry is the use of response time in perceptual-motor tasks to infer the content, duration, and temporal sequencing of cognitive operations. Judgments of time can be altered by temporal illusions (like the kappa effect),[110] age,[111] psychoactive drugs, and hypnosis.[112] The sense of time is impaired in some people with neurological diseases such as Parkinson's disease and attention deficit disorder.
Psychoactive drugs can impair the judgment of time. Stimulants can lead both humans and rats to overestimate time intervals,[113][114] while depressants can have the opposite effect.[115] The level of activity in the brain of neurotransmitters such as dopamine and norepinephrine may be the reason for this.[116] Such chemicals will either excite or inhibit the firing of neurons in the brain, with a greater firing rate allowing the brain to register the occurrence of more events within a given interval (speed up time) and a decreased firing rate reducing the brain's capacity to distinguish events occurring within a given interval (slow down time).[117]
Psychologists assert that time seems to go faster with age, but the literature on this age-related perception of time remains controversial.[118] Those who support this notion argue that young people, having more excitatory neurotransmitters, are able to cope with faster external events.[117] Others argue that the perception of time is also influenced by memory and by how much one has experienced: as one gets older, a given interval such as a month occupies an ever smaller fraction of one's total life.[119] Meanwhile, children's expanding cognitive abilities allow them to understand time in a different way: two- and three-year-olds' understanding of time is mainly limited to "now and not now"; five- and six-year-olds can grasp the ideas of past, present, and future; seven- to ten-year-olds can use clocks and calendars.[120] Socioemotional selectivity theory proposes that when people perceive their time as open-ended and nebulous, they focus more on future-oriented goals.[121]
Although time is regarded as an abstract concept, there is increasing evidence that time is conceptualized in the mind in terms of space.[122] That is, instead of thinking about time in a general, abstract way, humans think about time in a spatial way and mentally organize it as such. Using space to think about time allows humans to mentally organize temporal events in a specific way. This spatial representation of time is often represented in the mind as a mental timeline (MTL).[123] These origins are shaped by many environmental factors.[122] Literacy appears to play a large role in the different types of MTLs, as reading/writing direction provides an everyday temporal orientation that differs from culture to culture.[123] In Western cultures, the MTL may unfold rightward (with the past on the left and the future on the right) since people mostly read and write from left to right.[123] Western calendars also continue this trend by placing the past on the left with the future progressing toward the right. Conversely, speakers of Arabic, Farsi, Urdu, and Hebrew read from right to left, and their MTLs unfold leftward (past on the right with future on the left); evidence suggests these speakers organize time events in their minds like this as well.[123]
There is also evidence that some cultures use an allocentric spatialization, often based on environmental features.[122] A study of the indigenous Yupno people of Papua New Guinea found that they may use an allocentric MTL, in which time flows uphill; when speaking of the past, individuals gestured downhill, where the river of the valley flowed into the ocean. When speaking of the future, they gestured uphill, toward the source of the river. This was common regardless of which direction the person faced.[122] A similar study of the Pormpuraawans, an aboriginal group in Australia, reported that when they were asked to organize photos of a man aging "in order," individuals consistently placed the youngest photos to the east and the oldest photos to the west, regardless of which direction they faced.[124] This directly clashed with an American group that consistently organized the photos from left to right. Therefore, this group also appears to have an allocentric MTL, but based on the cardinal directions instead of geographical features.[124] The wide array of distinctions in the way different groups think about time leads to the broader question of whether different groups may also think about other abstract concepts in different ways, such as causality and number.[122]
In sociology and anthropology, time discipline is the general name given to social and economic rules, conventions, customs, and expectations governing the measurement of time, the social currency and awareness of time measurements, and people's expectations concerning the observance of these customs by others. Arlie Russell Hochschild[125][126] and Norbert Elias[127] have written on the use of time from a sociological perspective.
The use of time is an important issue in understanding human behavior, education, and travel behavior. Time-use research is a developing field of study. The question concerns how time is allocated across a number of activities (such as time spent at home, at work, shopping, etc.). Time use changes with technology, as television and the Internet have created new opportunities to use time in different ways. However, some aspects of time use are relatively stable over long periods; for example, the amount of time spent traveling to work has, despite major changes in transport, been observed to be about 20–30 minutes one-way for a large number of cities over a long period.
Time managementis the organization of tasks or events by first estimating how much time a task requires and when it must be completed, and adjusting events that would interfere with its completion so it is done in the appropriate amount of time. Calendars and day planners are common examples of time management tools.
A sequence of events, or series of events, is a sequence of items, facts, events, actions, changes, or procedural steps, arranged in time order (chronological order), often with causality relationships among the items.[128][129][130] Because of causality, cause precedes effect, or cause and effect may appear together in a single item, but effect never precedes cause. A sequence of events can be presented in text, tables, charts, or timelines. The description of the items or events may include a timestamp. A sequence of events that includes the time along with place or location information to describe a sequential path may be referred to as a world line.
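Arranging timestamped items into chronological order is mechanically simple; the sketch below uses hypothetical event names and timestamps purely for illustration:

```python
from datetime import datetime

# Hypothetical timestamped items (names and times invented for illustration).
events = [
    ("effect observed", datetime(2024, 1, 1, 12, 5)),
    ("cause occurred", datetime(2024, 1, 1, 12, 0)),
    ("follow-up step", datetime(2024, 1, 1, 12, 30)),
]

# Sorting by timestamp yields the chronological order; causality then demands
# that no effect's timestamp precede its cause's.
timeline = sorted(events, key=lambda item: item[1])
```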
Uses of a sequence of events include stories,[131] historical events (chronology), directions and steps in procedures,[132] and timetables for scheduling activities. A sequence of events may also be used to help describe processes in science, technology, and medicine. A sequence of events may be focused on past events (e.g., stories, history, chronology), on future events that must be in a predetermined order (e.g., plans, schedules, procedures, timetables), or on the observation of past events with the expectation that the events will occur in the future (e.g., processes, projections). The use of a sequence of events occurs in fields as diverse as machines (cam timer), documentaries (Seconds From Disaster), law (choice of law), finance (directional-change intrinsic time), computer simulation (discrete event simulation), and electric power transmission[133] (sequence of events recorder). A specific example of a sequence of events is the timeline of the Fukushima Daiichi nuclear disaster.
This Charles Sanders Peirce bibliography consolidates numerous references to the writings of Charles Sanders Peirce, including letters, manuscripts, publications, and Nachlass. For an extensive chronological list of Peirce's works (titled in English), see the Chronologische Übersicht (Chronological Overview) on the Schriften (Writings) page for Charles Sanders Peirce.
Click on an abbreviation to jump down this page to the relevant edition information; click on the abbreviation appearing with that edition information to return here.
The manuscript material now (1997) comes to more than a hundred thousand pages. These contain many pages of no philosophical interest, but the pages on philosophy certainly number much more than half of that. Also, a significant but unknown number of manuscripts have been lost.
Collected Papers of Charles Sanders Peirce, vols. 1–6 (1931–1935), vols. 7–8 (1958).[17][v]
The Writings or the Chronological Edition (W)
Contributions to The Nation (CN or N)
New Elements of Mathematics (NEM or NE)
Some online sources incorrectly list the ISBNs of these volumes, for example, sometimes interchanging those of volumes II and III(1/2).
Review PDF by Arthur W. Burks in the Bulletin of the American Mathematical Society, vol. 84, no. 5, Sept. 1978.
Historical Perspectives on Peirce's Logic of Science (HP)
Semiotic and Significs (SS or PW)
Essential Peirce (EP)
Philosophy of Mathematics (PMSW)
On British Logicians (the 1869–1870 Harvard lectures)
Reasoning and the Logic of Things (RLT) (The 1898 Lectures in Cambridge, MA)
Editorial Procedures, xi–xii
Abbreviations, xiii–xiv
Introduction: The Consequences of Mathematics, 1–54 (Kenneth Laine Ketner and Hilary Putnam)
Comment on the Lectures, 55–102 (Hilary Putnam)
Lecture One: Philosophy and the Conduct of Life, 105–122
Lecture Two: Types of Reasoning, 123–142
[Exordium for Lecture Three], 143–145
Lecture Three: The Logic of Relatives, 146–164
Lecture Four: First Rule of Logic, 165–180
Lecture Five: Training in Reasoning, 181–196
Lecture Six: Causation and Force, 197–217
Lecture Seven: Habit, 218–241
Lecture Eight: The Logic of Continuity, 242–270
Notes, 272–288
Index, 289–297
Lectures on Pragmatism (LOP) and Pragmatism as a Principle and Method of Right Thinking (PPM) (the 1903 Harvard lectures)
Topics of Logic (the 1903 Lowell lectures and syllabus)
Chance, Love, and Logic: Philosophical Essays (CLL)
Preface, xvii
Introduction [based on 1916 memorial essay on Peirce], ix
Proem: The Rules of Philosophy, 1
Supplementary Essay—The Pragmatism of Peirce, by John Dewey 301
Philosophical Writings of Peirce (PWP)
Charles S. Peirce's letters to Lady Welby
Essays in the Philosophy of Science
Selected Writings (SW)
Charles S. Peirce: The Essential Writings
Peirce on Signs: Writings on Semiotic (PSWS)
The Logic of Interdisciplinarity: The Monist-series (LI)
Peirce, C. S. (2009), Charles S. Peirce. The Logic of Interdisciplinarity: The Monist-series, Elize Bisanz, editor. Berlin: Akademie Verlag (now de Gruyter), 2009, 455 pp. Print (ISBN 978-3-05-004410-1). Electronic (ISBN 978-3-05-004733-1). In some places the title is ordered differently, with the phrase "The Logic of Interdisciplinarity" coming first. German publication of Peirce's works in English. Bisanz's introduction may be in German. Includes "a short biography" of Peirce by Kenneth Laine Ketner, actually entitled "Charles Sanders Peirce: Interdisciplinary Scientist", which includes the entire text of Peirce's 1904 manuscript of his intellectual autobiography. Publisher's catalog page (in German). Announcement of the book with table of contents, Google-translated into English, and in the original German (T.O.C. still in English).
This list includes mainly published philosophical and logical works of some note. Papers by Peirce in many fields were published, and he wrote over 300 reviews for The Nation. Sometimes an article below is shown after a special series, but was published during the series. Also note a complicating fact of Peirce scholarship: Peirce sometimes made significant later corrections, modifications, and comments, for which one needs to consult such works as CP, W, EP, and the (online) Commens Dictionary of Peirce's Terms.
NB: In this section, links embedded in page numbers and edition numbers go through Google Book Search. Users outside the US may not yet be able to gain full access to those linked editions.[29] Other links, such as those to PEP and Arisbe, do not go through Google Book Search. Internet Archive links generally go to the book's relevant page; once there, click on the book's title at the top of the pane for other formats (pdf, plaintext, and so forth; unfortunately, Internet Archive does not inform the reader about that).
Publishers of journals with multiple articles by Peirce (when not too varied in name or fact):
The Transactions of the Charles S. Peirce Society, quarterly since spring 1965, contains many Peirce-related articles, most of them not listed anywhere below, and their website has a grand table of contents for all issues (T.O.C.).
Classics in the History of Psychology (Christopher D. Green) has A-O viewable in HTML format (Eprint), with indexes of words linked to their definitions. Listed and linked below are Peirce's entries in A-O. Entries shown here without attribution are Peirce's; mixed attributions are shown here. Boldfaces and parentheses in definition titles are as in the original; the present article's annotations are in brackets. Each link is to the relevant page in Christopher D. Green's online HTML version. Peirce also wrote definitions in P-Z, for instance much of the definition of "Pragmatic (1) and (2) Pragmatism", much of that of "Predication", the whole of "Matter and Form" (over 4,060 words), and the long main entry on "Uniformity".
– Dualism (in philosophy) [1st para. "C.S.P.- A.S.P.P.", the rest "A.S.P.P."]
– Economy (logical principle of) – Empirical Logic [1st para. "R.A.- C.S.P.", the rest "R.A."]
– Equipollence or -cy [1st para. "C.S.P.", while 2nd para. "R.A."]
– Genus (in logic) – Given
– Imaging (in logic) ["C.S.P., H.B.D."] – Implicit (in logic) – Inconsistency – Independence – Index (in exact logic) – Individual (in logic)
– Inference [1st 5 paras. = in logic, "C.S.P.", 2nd 5 paras. = in psych., "J.M.B., G.D.S."] – Insolubilia
– Intention (in logic) – Involution
– Kind – Knowledge (in logic) ["C.S.P., C.L.F."]
– Laws of Thought [1st approx. 2,040 words "C.S.P.", next over 800 words "C.L.F.", and final two sentences "C.S.P."]
– Leading of Proof – Leading Principle – Lemma – Light of Nature – Limitative – Limiting Notion ["J.M.B.- C.S.P."]
– Logic [All 16 paras. "C.S.P., C.L.F."; bracketed sentence by "J.M.B."]
– Logic (exact) [contains over 2,920 words] – Logical – Logical Diagram (or Graph) – Logomachy
– Major and Minor (extreme, term, premise, satz, &c., in logic) – Mark [1st two paras. "C.S.P., C.L.F.", remaining two paras. "C.S.P."] – Material Fallacy – Material Logic – Mathematical Logic ["C.S.P." appears twice, but no others' initials appear] – Matter and Form [contains over 4,050 words] – Maxim (in logic)
– Method and Methodology, or Methodeutic
– Middle Term (and Middle) ["C.S.P., C.L.F."] – Mixed – Mnemonic Verses and Words (in logic) – Modality [contains over 2,900 words] – Modulus ["C.S.P.", & 9 words by "E.M." near start] – Modus ponens and Modus tollens – Monad (Monadism, Monadology) [1st para. "A.S.P.P.- J.M.B.", next four paras. "C.S.P.", the rest by others]
– Multitude (in mathematics) ["C.S.P., H.B.D."]
– Name (in logic) – Necessary (in logic) – Necessity [contains over 1,760 words]
– Negation [1st 1,250 words "C.S.P., C.L.F.", remaining para. "C.L.F., J.M.B."] – Negative ["C.S.P." except "negative term" sub-entry, which is by C.L.F.]
– Nominal – Nomology – Non-A – Non-Contradiction – Non sequitur – Norm (and Normality) [1st sentence "C.S.P.", rest by "J.J."] – Nota Notae – Numerical
– Observation ["C.S.P., J.M.B."] – Obversion – Opposition (in logic)
– Organon
Statistical relational learning (SRL) is a subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure.[1][2] Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming. Significant contributions to the field have been made since the late 1990s.[1]
As is evident from the characterization above, the field is not strictly limited to learning aspects; it is equally concerned with reasoning (specifically probabilistic inference) and knowledge representation. Therefore, alternative terms that reflect the main foci of the field include statistical relational learning and reasoning (emphasizing the importance of reasoning) and first-order probabilistic languages (emphasizing the key properties of the languages with which models are represented).
Another term that is sometimes used in the literature is relational machine learning (RML).
A number of canonical tasks are associated with statistical relational learning, the most common ones being:[3]
One of the fundamental design goals of the representation formalisms developed in SRL is to abstract away from concrete entities and to represent instead general principles that are intended to be universally applicable. Since there are countless ways in which such principles can be represented, many representation formalisms have been proposed in recent years.[1] In the following, some of the more common ones are listed in alphabetical order:
Bayesian probability (/ˈbeɪziən/ BAY-zee-ən or /ˈbeɪʒən/ BAY-zhən)[1] is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation[2] representing a state of knowledge[3] or as quantification of a personal belief.[4]
The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses;[5][6] that is, with propositions whose truth or falsity is unknown. In the Bayesian view, a probability is assigned to a hypothesis, whereas under frequentist inference, a hypothesis is typically tested without being assigned a probability.
Bayesian probability belongs to the category of evidential probabilities; to evaluate the probability of a hypothesis, the Bayesian probabilist specifies a prior probability. This, in turn, is then updated to a posterior probability in the light of new, relevant data (evidence).[7] The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation.
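The prior-to-posterior update is just Bayes' theorem, posterior = likelihood × prior / evidence. A small sketch with made-up numbers for a hypothetical diagnostic test (none of these figures come from the article):

```python
def bayes_update(prior, likelihood, evidence):
    """Posterior = likelihood * prior / evidence (Bayes' theorem)."""
    return likelihood * prior / evidence

# Hypothetical test: 1% prevalence, 90% sensitivity, 5% false-positive rate.
prior = 0.01
p_positive = 0.90 * prior + 0.05 * (1 - prior)     # law of total probability
posterior = bayes_update(prior, 0.90, p_positive)  # P(disease | positive test)
```

Even with a fairly accurate test, the low prior keeps the posterior near 15%, which is the standard illustration of why the prior matters.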
The term Bayesian derives from the 18th-century mathematician and theologian Thomas Bayes, who provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference.[8]: 131 Mathematician Pierre-Simon Laplace pioneered and popularized what is now called Bayesian probability.[8]: 97–98
Bayesian methods are characterized by concepts and procedures as follows:
Broadly speaking, there are two interpretations of Bayesian probability. For objectivists, who interpret probability as an extension of logic, probability quantifies the reasonable expectation that everyone (even a "robot") who shares the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox's theorem.[3][10] For subjectivists, probability corresponds to a personal belief.[4] Rationality and coherence allow for substantial variation within the constraints they pose; the constraints are justified by the Dutch book argument or by decision theory and de Finetti's theorem.[4] The objective and subjective variants of Bayesian probability differ mainly in their interpretation and construction of the prior probability.
The term Bayesian derives from Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem in a paper titled "An Essay Towards Solving a Problem in the Doctrine of Chances".[11] In that special case, the prior and posterior distributions were beta distributions and the data came from Bernoulli trials. It was Pierre-Simon Laplace (1749–1827) who introduced a general version of the theorem and used it to approach problems in celestial mechanics, medical statistics, reliability, and jurisprudence.[12] Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes).[13] After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics.[13]
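The beta-Bernoulli special case mentioned above is a conjugate pair: a Beta(α, β) prior updated with Bernoulli data yields a Beta(α + successes, β + failures) posterior. A minimal sketch, with trial counts invented purely for illustration:

```python
def beta_update(alpha, beta, successes, failures):
    """Beta(alpha, beta) prior + Bernoulli data -> Beta(alpha+s, beta+f) posterior."""
    return alpha + successes, beta + failures

# Uniform Beta(1, 1) prior (Laplace's principle of insufficient reason),
# then 7 successes and 3 failures in 10 Bernoulli trials (invented counts):
a, b = beta_update(1, 1, 7, 3)
posterior_mean = a / (a + b)  # 8/12, i.e. about 0.667
```

The closed-form update is why the beta distribution is the standard conjugate prior for Bernoulli data: no numerical integration is needed.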
In the 20th century, the ideas of Laplace developed in two directions, giving rise to objective and subjective currents in Bayesian practice. Harold Jeffreys' Theory of Probability (first published in 1939) played an important role in the revival of the Bayesian view of probability, followed by works by Abraham Wald (1950) and Leonard J. Savage (1954). The adjective Bayesian itself dates to the 1950s; the derived Bayesianism, neo-Bayesianism is of 1960s coinage.[14][15][16] In the objectivist stream, the statistical analysis depends on only the model assumed and the data analysed.[17] No subjective decisions need to be involved. In contrast, "subjectivist" statisticians deny the possibility of fully objective analysis for the general case.
In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods and the consequent removal of many of the computational problems, and to an increasing interest in nonstandard, complex applications.[18] While frequentist statistics remains strong (as demonstrated by the fact that much of undergraduate teaching is based on it[19]), Bayesian methods are widely accepted and used, e.g., in the field of machine learning.[20]
The use of Bayesian probabilities as the basis of Bayesian inference has been supported by several arguments, such as the Cox axioms, the Dutch book argument, arguments based on decision theory, and de Finetti's theorem.
Richard T. Cox showed that Bayesian updating follows from several axioms, including two functional equations and a hypothesis of differentiability.[10][21] The assumption of differentiability or even continuity is controversial; Halpern found a counterexample based on his observation that the Boolean algebra of statements may be finite.[22] Other axiomatizations have been suggested by various authors with the purpose of making the theory more rigorous.[9]
Bruno de Finetti proposed the Dutch book argument based on betting. A clever bookmaker makes a Dutch book by setting the odds and bets to ensure that the bookmaker profits—at the expense of the gamblers—regardless of the outcome of the event (a horse race, for example) on which the gamblers bet. It is associated with probabilities implied by the odds not being coherent.
However, Ian Hacking noted that traditional Dutch book arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. For example, Hacking writes:[23][24] "And neither the Dutch book argument, nor any other in the personalist arsenal of proofs of the probability axioms, entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour."
In fact, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics"[25] following the publication of Richard C. Jeffrey's rule, which is itself regarded as Bayesian[26]). The additional hypotheses sufficient to (uniquely) specify Bayesian updating are substantial[27] and not universally seen as satisfactory.[28]
A decision-theoretic justification of the use of Bayesian inference (and hence of Bayesian probabilities) was given by Abraham Wald, who proved that every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures.[29] Conversely, every Bayesian procedure is admissible.[30]
Following the work on expected utility theory of Ramsey and von Neumann, decision theorists have accounted for rational behavior using a probability distribution for the agent. Johann Pfanzagl completed the Theory of Games and Economic Behavior by providing an axiomatization of subjective probability and utility, a task left uncompleted by von Neumann and Oskar Morgenstern: their original theory supposed that all the agents had the same probability distribution, as a convenience.[31] Pfanzagl's axiomatization was endorsed by Oskar Morgenstern: "Von Neumann and I have anticipated ... [the question whether probabilities] might, perhaps more typically, be subjective and have stated specifically that in the latter case axioms could be found from which could derive the desired numerical utility together with a number for the probabilities (cf. p. 19 of The Theory of Games and Economic Behavior). We did not carry this out; it was demonstrated by Pfanzagl ... with all the necessary rigor".[32]
Ramsey and Savage noted that the individual agent's probability distribution could be objectively studied in experiments. Procedures for testing hypotheses about probabilities (using finite samples) are due to Ramsey (1931) and de Finetti (1931, 1937, 1964, 1970). Both Bruno de Finetti[33][34] and Frank P. Ramsey[34][35] acknowledge their debts to pragmatic philosophy, particularly (for Ramsey) to Charles S. Peirce.[34][35]
The "Ramsey test" for evaluating probability distributions is implementable in theory, and has kept experimental psychologists occupied for a half century.[36] This work demonstrates that Bayesian-probability propositions can be falsified, and so meet an empirical criterion of Charles S. Peirce, whose work inspired Ramsey. (This falsifiability criterion was popularized by Karl Popper.[37][38])
Modern work on the experimental evaluation of personal probabilities uses the randomization, blinding, and Boolean-decision procedures of the Peirce–Jastrow experiment.[39] Since individuals act according to different probability judgments, these agents' probabilities are "personal" (but amenable to objective study).
Personal probabilities are problematic for science and for some applications where decision-makers lack the knowledge or time to specify an informed probability distribution (on which they are prepared to act). To meet the needs of science and of human limitations, Bayesian statisticians have developed "objective" methods for specifying prior probabilities.
Indeed, some Bayesians have argued that the prior state of knowledge defines the (unique) prior probability distribution for "regular" statistical problems; cf. well-posed problems. Finding the right method for constructing such "objective" priors (for appropriate classes of regular problems) has been the quest of statistical theorists from Laplace to John Maynard Keynes, Harold Jeffreys, and Edwin Thompson Jaynes. These theorists and their successors have suggested several methods for constructing "objective" priors (unfortunately, it is not always clear how to assess the relative "objectivity" of the priors proposed under these methods):
Each of these methods contributes useful priors for "regular" one-parameter problems, and each prior can handle some challenging statistical models (with "irregularity" or several parameters). Each of these methods has been useful in Bayesian practice. Indeed, methods for constructing "objective" (alternatively, "default" or "ignorance") priors have been developed by avowed subjective (or "personal") Bayesians like James Berger (Duke University) and José-Miguel Bernardo (Universitat de València), simply because such priors are needed for Bayesian practice, particularly in science.[40] The quest for "the universal method for constructing priors" continues to attract statistical theorists.[40]
Thus, the Bayesian statistician needs either to use informed priors (using relevant expertise or previous data) or to choose among the competing methods for constructing "objective" priors.
Cox's theorem, named after the physicist Richard Threlkeld Cox, is a derivation of the laws of probability theory from a certain set of postulates.[1][2] This derivation justifies the so-called "logical" interpretation of probability, as the laws of probability derived by Cox's theorem are applicable to any proposition. Logical (also known as objective Bayesian) probability is a type of Bayesian probability. Other forms of Bayesianism, such as the subjective interpretation, are given other justifications.
Cox wanted his system to satisfy the following conditions:
The postulates as stated here are taken from Arnborg and Sjödin.[3][4][5] "Common sense" includes consistency with Aristotelian logic in the sense that logically equivalent propositions shall have the same plausibility.
The postulates as originally stated by Cox were not mathematically rigorous (although more so than the informal description above), as noted by Halpern.[6][7] However, it appears to be possible to augment them with various mathematical assumptions made either implicitly or explicitly by Cox to produce a valid proof.
Cox's notation:
Cox's postulates and functional equations are:
The laws of probability derivable from these postulates are the following.[8] Let A ∣ B be the plausibility of the proposition A given B, satisfying Cox's postulates. Then there is a function w mapping plausibilities to the interval [0, 1] and a positive number m such that
Note that the postulates imply only these general properties. We may recover the usual laws of probability by setting a new function, conventionally denoted P or Pr, equal to w^m. Then we obtain the laws of probability in a more familiar form:
Rule 2 is a rule for negation, and rule 3 is a rule for conjunction. Given that any proposition containing conjunction, disjunction, and negation can be equivalently rephrased using conjunction and negation alone (the conjunctive normal form), we can now handle any compound proposition.
The laws thus derived yield finite additivity of probability, but not countable additivity. The measure-theoretic formulation of Kolmogorov assumes that a probability measure is countably additive. This slightly stronger condition is necessary for certain results. An elementary example (in which this assumption merely simplifies the calculation rather than being necessary for it) is that the probability of seeing heads for the first time after an even number of flips in a sequence of coin flips is 1/3.[9]
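The cited value is easy to verify numerically: the first head occurs on flip n with probability (1/2)^n (n − 1 tails, then a head), so summing over even n gives a geometric series converging to 1/3, and each step of the computation uses only finite additivity.

```python
from fractions import Fraction

# P(first head on flip n) = (1/2)**n  (n - 1 tails, then a head).
# Summing over even n gives the geometric series 1/4 + 1/16 + ... = 1/3.

def p_first_head_even(max_flips):
    """Partial sum of P(first head at flip n) over even n <= max_flips."""
    return sum(Fraction(1, 2**n) for n in range(2, max_flips + 1, 2))

assert p_first_head_even(4) == Fraction(1, 4) + Fraction(1, 16)
# The finite partial sums converge to 1/3 from below.
assert Fraction(1, 3) - p_first_head_even(60) < Fraction(1, 10**15)
```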
Cox's theorem has come to be used as one of the justifications for the use of Bayesian probability theory. For example, in Jaynes it is discussed in detail in chapters 1 and 2 and is a cornerstone for the rest of the book.[8] Probability is interpreted as a formal system of logic, the natural extension of Aristotelian logic (in which every statement is either true or false) into the realm of reasoning in the presence of uncertainty.
It has been debated to what degree the theorem excludes alternative models for reasoning about uncertainty. For example, if certain "unintuitive" mathematical assumptions were dropped then alternatives could be devised, e.g., an example provided by Halpern.[6] However, Arnborg and Sjödin[3][4][5] suggest additional "common sense" postulates, which would allow the assumptions to be relaxed in some cases while still ruling out the Halpern example. Other approaches were devised by Hardy[10] or Dupré and Tipler.[11]
The original formulation of Cox's theorem is in Cox (1946), which is extended with additional results and more discussion in Cox (1961). Jaynes[8] cites Abel[12] for the first known use of the associativity functional equation. János Aczél[13] provides a long proof of the "associativity equation" (pages 256–267). Jaynes[8]: 27 reproduces the shorter proof by Cox in which differentiability is assumed. A guide to Cox's theorem by Van Horn aims at comprehensively introducing the reader to all these references.[14]
Baoding Liu, the founder of uncertainty theory, criticizes Cox's theorem for presuming that the truth value of the conjunction P ∧ Q is a twice differentiable function f of the truth values of the two propositions P and Q, i.e., T(P ∧ Q) = f(T(P), T(Q)), which excludes uncertainty theory's "uncertain measure" from the start, because the function f(x, y) = x ∧ y,[a] used in uncertainty theory, is not differentiable with respect to x and y.[15] According to Liu, "there does not exist any evidence that the truth value of conjunction is completely determined by the truth values of individual propositions, let alone a twice differentiable function."[15]
Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify. Thereby, the theory aims to represent the available knowledge more accurately. Imprecision is useful for dealing with expert elicitation, because:
Uncertainty is traditionally modelled by a probability distribution, as developed by Kolmogorov,[1] Laplace, de Finetti,[2] Ramsey, Cox, Lindley, and many others. However, this has not been unanimously accepted by scientists, statisticians, and probabilists: it has been argued that some modification or broadening of probability theory is required, because one may not always be able to provide a probability for every event, particularly when only little information or data is available—an early example of such criticism is Boole's critique[3] of Laplace's work—or when we wish to model probabilities that a group agrees with, rather than those of a single individual.
Perhaps the most common generalization is to replace a single probability specification with an interval specification. Lower and upper probabilities, denoted by P_(A) and P¯(A), or more generally, lower and upper expectations (previsions),[4][5][6][7] aim to fill this gap.
A lower probability function is superadditive but not necessarily additive, whereas an upper probability is subadditive.
To get a general understanding of the theory, consider:
We then have a flexible continuum of more or less precise models in between.
Some approaches, summarized under the name nonadditive probabilities,[8] directly use one of these set functions, assuming the other one to be naturally defined such that P_(Aᶜ) = 1 − P¯(A), with Aᶜ the complement of A. Other related concepts understand the corresponding intervals [P_(A), P¯(A)] for all events as the basic entity.[9][10]
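A minimal sketch of how lower and upper probabilities arise from a set of candidate distributions (the three distributions below are invented for illustration): the lower probability of an event is the minimum it receives over the set, the upper probability the maximum, and the conjugacy relation above holds by construction.

```python
from itertools import chain, combinations

# Lower/upper probabilities induced by a finite set of candidate
# distributions (a toy "credal set"; the numbers are invented).
OMEGA = ("a", "b", "c")
CREDAL_SET = [
    {"a": 0.2, "b": 0.5, "c": 0.3},
    {"a": 0.4, "b": 0.4, "c": 0.2},
    {"a": 0.3, "b": 0.3, "c": 0.4},
]

def prob(p, event):
    return sum(p[w] for w in event)

def lower(event):
    return min(prob(p, event) for p in CREDAL_SET)

def upper(event):
    return max(prob(p, event) for p in CREDAL_SET)

def all_events():
    return chain.from_iterable(combinations(OMEGA, r) for r in range(4))

# Conjugacy: lower(complement of A) = 1 - upper(A) for every event A.
for A in all_events():
    Ac = tuple(w for w in OMEGA if w not in A)
    assert abs(lower(Ac) - (1 - upper(A))) < 1e-9

# The lower probability is superadditive on disjoint events:
# lower({a, b}) >= lower({a}) + lower({b})  (here 0.6 >= 0.2 + 0.3).
assert lower(("a", "b")) >= lower(("a",)) + lower(("b",))
```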
The idea of using imprecise probability has a long history. The first formal treatment dates back at least to the middle of the nineteenth century, by George Boole,[3] who aimed to reconcile the theories of logic and probability. In the 1920s, in A Treatise on Probability, Keynes[11] formulated and applied an explicit interval estimate approach to probability.
Work on imprecise probability models proceeded fitfully throughout the 20th century, with important contributions by Bernard Koopman, C.A.B. Smith, I.J. Good, Arthur Dempster, Glenn Shafer, Peter M. Williams, Henry Kyburg, Isaac Levi, and Teddy Seidenfeld.[12] At the start of the 1990s, the field started to gather some momentum, with the publication of Peter Walley's book Statistical Reasoning with Imprecise Probabilities[7] (which is also where the term "imprecise probability" originates).
The 1990s also saw important works by Kuznetsov,[13] and by Weichselberger,[9][10] who both use the term interval probability. Walley's theory extends the traditional subjective probability theory via buying and selling prices for gambles, whereas Weichselberger's approach generalizes Kolmogorov's axioms without imposing an interpretation.
Standard consistency conditions relate upper and lower probability assignments to non-empty closed convex sets of probability distributions. Therefore, as a welcome by-product, the theory also provides a formal framework for models used in robust statistics[14] and non-parametric statistics.[15] Also included are concepts based on Choquet integration,[16] and so-called two-monotone and totally monotone capacities,[17] which have become very popular in artificial intelligence under the name (Dempster–Shafer) belief functions.[18][19] Moreover, there is a strong connection[20] to Shafer and Vovk's notion of game-theoretic probability.[21]
The term "imprecise probability" is somewhat misleading in that precision is often mistaken for accuracy, whereas an imprecise representation may be more accurate than a spuriously precise representation. In any case, the term appears to have become established in the 1990s, and covers a wide range of extensions of the theory of probability, including:
A unification of many of the above-mentioned imprecise probability theories was proposed by Walley,[7] although this is in no way the first attempt to formalize imprecise probabilities. In terms of probability interpretations, Walley's formulation of imprecise probabilities is based on the subjective variant of the Bayesian interpretation of probability. Walley defines upper and lower probabilities as special cases of upper and lower previsions and the gambling framework advanced by Bruno de Finetti. In simple terms, a decision maker's lower prevision is the highest price at which the decision maker is sure he or she would buy a gamble, and the upper prevision is the lowest price at which the decision maker is sure he or she would buy the opposite of the gamble (which is equivalent to selling the original gamble). If the upper and lower previsions are equal, then they jointly represent the decision maker's fair price for the gamble, the price at which the decision maker is willing to take either side of the gamble. The existence of a fair price leads to precise probabilities.
The allowance for imprecision, or a gap between a decision maker's upper and lower previsions, is the primary difference between precise and imprecise probability theories. This gap is also given repeatedly by Henry Kyburg for his interval probabilities, though he and Isaac Levi also give other reasons for intervals, or sets of distributions, representing states of belief.
One issue with imprecise probabilities is that there is often an independent degree of caution or boldness inherent in the use of one interval, rather than a wider or narrower one. This may be a degree of confidence, degree of fuzzy membership, or threshold of acceptance. This is not as much of a problem for intervals that are lower and upper bounds derived from a set of probability distributions, e.g., a set of priors followed by conditionalization on each member of the set. However, it can lead to the question of why some distributions are included in the set of priors and some are not.
Another issue is why one can be precise about two numbers, a lower bound and an upper bound, rather than a single number, a point probability. This issue may be merely rhetorical, as the robustness of a model with intervals is inherently greater than that of a model with point-valued probabilities. It does raise concerns about inappropriate claims of precision at endpoints, as well as for point values.
A more practical issue is what kind of decision theory can make use of imprecise probabilities.[31] For fuzzy measures, there is the work of Ronald R. Yager.[32] For convex sets of distributions, Levi's works are instructive.[33] Another approach asks whether the threshold controlling the boldness of the interval matters more to a decision than simply taking the average or using a Hurwicz decision rule.[34] Other approaches appear in the literature.[35][36][37][38]
A non-monotonic logic is a formal logic whose entailment relation is not monotonic. In other words, non-monotonic logics are devised to capture and represent defeasible inferences, i.e., a kind of inference in which reasoners draw tentative conclusions, enabling reasoners to retract their conclusion(s) based on further evidence.[1] Most studied formal logics have a monotonic entailment relation, meaning that adding a formula to the hypotheses never produces a pruning of its set of conclusions. Intuitively, monotonicity indicates that learning a new piece of knowledge cannot reduce the set of what is known. Monotonic logics cannot handle various reasoning tasks such as reasoning by default (conclusions may be derived only because of lack of evidence of the contrary), abductive reasoning (conclusions are only deduced as most likely explanations), some important approaches to reasoning about knowledge (the ignorance of a conclusion must be retracted when the conclusion becomes known), and similarly, belief revision (new knowledge may contradict old beliefs).
Abductive reasoning is the process of deriving a sufficient explanation of the known facts. An abductive logic should not be monotonic because the likely explanations are not necessarily correct. For example, the likely explanation for seeing wet grass is that it rained; however, this explanation has to be retracted when learning that the real cause of the grass being wet was a sprinkler. Since the old explanation (it rained) is retracted because of the addition of a piece of knowledge (a sprinkler was active), any logic that models explanations is non-monotonic.
If a logic includes formulae that mean that something is not known, this logic should not be monotonic. Indeed, learning something that was previously not known leads to the removal of the formula specifying that this piece of knowledge is not known. This second change (a removal caused by an addition) violates the condition of monotonicity. A logic for reasoning about knowledge is autoepistemic logic.
Belief revision is the process of changing beliefs to accommodate a new belief that might be inconsistent with the old ones. Under the assumption that the new belief is correct, some of the old ones have to be retracted in order to maintain consistency. This retraction in response to an addition of a new belief makes any logic for belief revision non-monotonic. The belief revision approach is an alternative to paraconsistent logics, which tolerate inconsistency rather than attempting to remove it.
Proof-theoretic formalization of a non-monotonic logic begins with the adoption of certain non-monotonic rules of inference, and then prescribes contexts in which these non-monotonic rules may be applied in admissible deductions. This is typically accomplished by means of fixed-point equations that relate the sets of premises and the sets of their non-monotonic conclusions. Default logic and autoepistemic logic are the most common examples of non-monotonic logics that have been formalized in that way.[2]
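The fixed-point flavor of such formalizations can be sketched with a single toy default rule (this is an illustrative simplification, not a full default-logic engine): a default conclusion is added only while its justification remains consistent with what is known, and the process iterates to a fixed point.

```python
# A toy fixed-point computation for the default rule
#   bird(X) : flies(X) / flies(X)
# ("birds fly, unless flying is inconsistent with what is known").
# This is an illustrative simplification, not a full default-logic engine.

def extension(facts):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for f in list(known):
            if f.startswith("bird("):
                x = f[len("bird("):-1]
                conclusion, blocker = f"flies({x})", f"-flies({x})"
                if blocker not in known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
    return known

# With no evidence to the contrary, the default fires: Tweety flies.
assert "flies(tweety)" in extension({"bird(tweety)"})

# Adding knowledge retracts the conclusion (non-monotonicity):
assert "flies(tweety)" not in extension({"bird(tweety)", "-flies(tweety)"})
```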
Model-theoretic formalization of a non-monotonic logic begins with the restriction of the semantics of a suitable monotonic logic to some special models, for instance, to minimal models,[3][4] and then derives a set of non-monotonic rules of inference, possibly with some restrictions on the contexts in which these rules may be applied, so that the resulting deductive system is sound and complete with respect to the restricted semantics.[5] Unlike some proof-theoretic formalizations that suffered from well-known paradoxes and were often hard to evaluate with respect to their consistency with the intuitions they were supposed to capture, model-theoretic formalizations were paradox-free and left little, if any, room for confusion about which non-monotonic patterns of reasoning they covered. Proof-theoretic formalizations that revealed undesirable or paradoxical properties, or did not capture the desired intuitive comprehensions, but were later successfully formalized by model-theoretic means (that is, consistently with the respective intuitions and with no paradoxical properties), include first-order circumscription, the closed-world assumption,[5] and autoepistemic logic.[2]
Possibility theory is a mathematical theory for dealing with certain types of uncertainty and is an alternative to probability theory. It uses measures of possibility and necessity between 0 and 1, ranging from impossible to possible and from unnecessary to necessary, respectively. Professor Lotfi Zadeh first introduced possibility theory in 1978 as an extension of his theory of fuzzy sets and fuzzy logic. Didier Dubois and Henri Prade further contributed to its development. Earlier, in the 1950s, economist G. L. S. Shackle proposed the min/max algebra to describe degrees of potential surprise.
For simplicity, assume that the universe of discourse Ω is a finite set. A possibility measure is a function Π from 2^Ω to [0, 1] such that:
It follows that, like probability on finite probability spaces, the possibility measure is determined by its behavior on singletons:
Axiom 1 can be interpreted as the assumption that Ω is an exhaustive description of future states of the world, because it means that no belief weight is given to elements outside Ω.
Axiom 2 could be interpreted as the assumption that the evidence from which Π was constructed is free of any contradiction. Technically, it implies that there is at least one element in Ω with possibility 1.
Axiom 3 corresponds to the additivity axiom in probabilities. However, there is an important practical difference. Possibility theory is computationally more convenient because Axioms 1–3 imply that:
Because one can know the possibility of the union from the possibility of each component, it can be said that possibility is compositional with respect to the union operator. Note however that it is not compositional with respect to the intersection operator. Generally:
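Both the max-compositionality under union and its failure under intersection can be checked on a toy measure (the singleton values below are invented for illustration):

```python
# A toy possibility measure on a finite universe, determined by its
# values on singletons (the numbers are invented for illustration).
PI = {"rain": 1.0, "snow": 0.3, "sun": 0.7}

def poss(event):
    """Π(A) = max of the singleton possibilities in A (with Π(∅) = 0)."""
    return max((PI[w] for w in event), default=0.0)

# Axioms: Π(∅) = 0 and Π(Ω) = 1.
assert poss(set()) == 0.0 and poss(set(PI)) == 1.0

# Compositional under union: Π(A ∪ B) = max(Π(A), Π(B)).
A, B = {"snow"}, {"sun"}
assert poss(A | B) == max(poss(A), poss(B))

# Not compositional under intersection: only Π(A ∩ B) <= min(Π(A), Π(B))
# holds in general, and the inequality can be strict.
C, D = {"rain", "snow"}, {"snow", "sun"}
assert poss(C & D) < min(poss(C), poss(D))   # 0.3 < 0.7
```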
When Ω is not finite, Axiom 3 can be replaced by:
Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, possibility theory uses two concepts, the possibility and the necessity of the event. For any set U, the necessity measure is defined by
In the above formula, Ū denotes the complement of U, that is, the elements of Ω that do not belong to U. It is straightforward to show that:
and that:
Note that contrary to probability theory, possibility is not self-dual. That is, for any event U, we only have the inequality:
However, the following duality rule holds:
Accordingly, beliefs about an event can be represented by a number and a bit.
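The relationship between necessity and possibility can be illustrated on the same kind of toy measure (values invented): necessity is defined from the possibility of the complement, it never exceeds possibility, and while Π(U) + Π(Ū) may exceed 1 (no self-duality), at least one of Π(U), Π(Ū) always equals 1.

```python
# Necessity derived from a toy possibility measure (values invented):
# N(U) = 1 - Π(complement of U).
PI = {"rain": 1.0, "snow": 0.3, "sun": 0.7}
OMEGA = set(PI)

def poss(event):
    return max((PI[w] for w in event), default=0.0)

def nec(event):
    return 1.0 - poss(OMEGA - set(event))

U = {"rain", "sun"}
# Necessity never exceeds possibility: N(U) <= Π(U).
assert nec(U) <= poss(U)

# No self-duality: Π(U) + Π(complement) may exceed 1 ...
assert poss(U) + poss(OMEGA - U) > 1.0
# ... but at least one of Π(U), Π(complement) always equals 1.
for V in [set(), {"rain"}, {"snow"}, {"rain", "sun"}, OMEGA]:
    assert max(poss(V), poss(OMEGA - V)) == 1.0
```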
There are four cases that can be interpreted as follows:
N(U) = 1 means that U is necessary: U is certainly true. It implies that Π(U) = 1.
Π(U) = 0 means that U is impossible: U is certainly false. It implies that N(U) = 0.
Π(U) = 1 means that U is possible: I would not be surprised at all if U occurs. It leaves N(U) unconstrained.
N(U) = 0 means that U is unnecessary: I would not be surprised at all if U does not occur. It leaves Π(U) unconstrained.
The intersection of the last two cases is N(U) = 0 and Π(U) = 1, meaning that I believe nothing at all about U. Because it allows for indeterminacy like this, possibility theory relates to the graduation of a many-valued logic, such as intuitionistic logic, rather than the classical two-valued logic.
Note that unlike possibility, fuzzy logic is compositional with respect to both the union and the intersection operator. The relationship with fuzzy theory can be explained with the following classic example.
There is an extensive formal correspondence between probability and possibility theories, where the addition operator corresponds to the maximum operator.
A possibility measure can be seen as a consonant plausibility measure in the Dempster–Shafer theory of evidence. The operators of possibility theory can be seen as a hyper-cautious version of the operators of the transferable belief model, a modern development of the theory of evidence.
Possibility can be seen as an upper probability: any possibility distribution defines a unique credal set of admissible probability distributions by
This allows one to study possibility theory using the tools of imprecise probabilities.
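This upper-probability reading can be checked by brute force on a two-element universe (the possibility values are invented): among all probability distributions dominated by Π on every event, the largest probability an event can receive is exactly its possibility.

```python
from fractions import Fraction

# Possibility as upper probability, checked by brute force on a
# two-element universe with (invented) possibility π(x) = 1, π(y) = 2/5.
PI = {"x": Fraction(1), "y": Fraction(2, 5)}

def poss(event):
    return max((PI[w] for w in event), default=Fraction(0))

# Credal set: all distributions (p(x), p(y)) with P(A) <= Π(A) for every A.
grid = [Fraction(i, 1000) for i in range(1001)]
credal = [(p, 1 - p) for p in grid
          if p <= poss({"x"}) and 1 - p <= poss({"y"})]

# The upper envelope of the credal set recovers the possibility measure.
assert max(py for _, py in credal) == poss({"y"})   # sup P({y}) = 2/5
assert max(px for px, _ in credal) == poss({"x"})   # sup P({x}) = 1
```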
We call a generalized possibility every function satisfying Axiom 1 and Axiom 3, and a generalized necessity the dual of a generalized possibility. The generalized necessities are related to a very simple and interesting fuzzy logic called necessity logic. In the deduction apparatus of necessity logic the logical axioms are the usual classical tautologies. Also, there is only one fuzzy inference rule, extending the usual modus ponens. Such a rule says that if α and α → β are proved at degree λ and μ, respectively, then we can assert β at degree min{λ, μ}. It is easy to see that the theories of such a logic are the generalized necessities and that the completely consistent theories coincide with the necessities (see for example Gerla 2001).
Most real databases contain data whose correctness is uncertain. In order to work with such data, there is a need to quantify the integrity of the data. This is achieved by using probabilistic databases.
A probabilistic database is an uncertain database in which the possible worlds have associated probabilities. Probabilistic database management systems are currently an active area of research. "While there are currently no commercial probabilistic database systems, several research prototypes exist..."[1]
Probabilistic databases distinguish between the logical data model and the physical representation of the data, much like relational databases do in the ANSI-SPARC Architecture.
In probabilistic databases this is even more crucial, since such databases have to represent very large numbers of possible worlds, often exponential in the size of one world (a classical database), succinctly.[2][3]
In a probabilistic database, each tuple is associated with a probability between 0 and 1, with 0 representing that the data is certainly incorrect, and 1 representing that it is certainly correct.
A probabilistic database could exist in multiple states. For example, if there is uncertainty about the existence of a tuple in the database, then the database could be in two different states with respect to that tuple—the first state contains the tuple, while the second one does not. Similarly, if an attribute can take one of the values x, y or z, then the database can be in three different states with respect to that attribute.
Each of these states is called a possible world.
Consider the following database:
(Here {b3, b3′, b3′′} denotes that the attribute can take any of the values b3, b3′ or b3′′.)
Then the actual state of the database may or may not contain the first tuple (depending on whether it is correct or not). Similarly, the value of the attribute B may be b3, b3′ or b3′′.
Consequently, the possible worlds corresponding to the database are as follows:
There are essentially two kinds of uncertainties that could exist in a probabilistic database, as described in the table below:
By assigning values to random variables associated with the data items, different possible worlds can be represented.
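The possible worlds of an example like the one above can be enumerated mechanically. A sketch, assuming (for illustration only) that a tuple-level and an attribute-level uncertainty are independent, with invented probabilities:

```python
from itertools import product

# Enumerating the possible worlds of a tiny probabilistic database.
# Assumption (for illustration only): the two uncertainties are
# independent, and the probabilities below are invented.
TUPLE1_PRESENT = {True: 0.8, False: 0.2}          # tuple-level uncertainty
TUPLE2_B = {"b3": 0.5, "b3p": 0.3, "b3pp": 0.2}   # attribute-level uncertainty

worlds = {}
for (present, p1), (b, p2) in product(TUPLE1_PRESENT.items(), TUPLE2_B.items()):
    world = (("t1",) if present else ()) + (("t2", b),)
    worlds[world] = p1 * p2

# 2 x 3 = 6 possible worlds whose probabilities sum to 1.
assert len(worlds) == 6
assert abs(sum(worlds.values()) - 1.0) < 1e-9

# A query's marginal probability is the total mass of the worlds in
# which it holds, e.g. P(tuple 1 is present):
p_t1 = sum(p for w, p in worlds.items() if "t1" in w)
assert abs(p_t1 - 0.8) < 1e-9
```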
The first published use of the term "probabilistic database" was probably in the 1987 VLDB conference paper "The theory of probabilistic databases", by Cavallo and Pittarelli.[4] The title (of the 11-page paper) was intended as a bit of a joke, since David Maier's 600-page monograph, The Theory of Relational Databases, would have been familiar at that time to many of the conference participants and readers of the conference proceedings.
Probabilistic Soft Logic (PSL) is a statistical relational learning (SRL) framework for modeling probabilistic and relational domains.[2] It is applicable to a variety of machine learning problems, such as collective classification, entity resolution, link prediction, and ontology alignment.
PSL combines two tools: first-order logic, with its ability to succinctly represent complex phenomena, and probabilistic graphical models, which capture the uncertainty and incompleteness inherent in real-world knowledge.
More specifically, PSL uses "soft" logic as its logical component and Markov random fields as its statistical model.
PSL provides sophisticated inference techniques for finding the most likely answer (i.e. the maximum a posteriori (MAP) state).
The "softening" of the logical formulas makes inference a polynomial-time operation rather than an NP-hard operation.
The SRL community has introduced multiple approaches that combine graphical models and first-order logic to allow the development of complex probabilistic models with relational structures.
A notable example of such approaches is Markov logic networks (MLNs).[3] Like MLNs, PSL is a modelling language (with an accompanying implementation[4]) for learning and predicting in relational domains.
Unlike MLNs, PSL uses soft truth values for predicates in the interval [0, 1].
This allows for the underlying inference to be solved quickly as a convex optimization problem.
This is useful in problems such as collective classification, link prediction, social network modelling, and object identification/entity resolution/record linkage.
Probabilistic Soft Logic was first released in 2009 by Lise Getoor and Matthias Broecheler.[5] This first version focused heavily on reasoning about similarities between entities.
Later versions of PSL would still keep the ability to reason about similarities, but generalize the language to be more expressive.
In 2017, a Journal of Machine Learning Research article detailing PSL and the underlying graphical model was published along with the release of a new major version of PSL (2.0.0).[2] The major new features in PSL 2.0.0 were a new type of rule mainly used in specifying constraints and a command-line interface.
A PSL model is composed of a series of weighted rules and constraints.
PSL supports two types of rules: Logical and Arithmetic.[6]
Logical rules are composed of an implication with only a single atom or a conjunction of atoms in the body and a single atom or a disjunction of atoms in the head.
Since PSL uses soft logic, hard logic operators are replaced with Łukasiewicz soft logical operators.
An example of a logical rule expression is:
This rule can be interpreted to mean: If A and B are similar and A has the label X, then there is evidence that B also has the label X.
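The Łukasiewicz relaxations behind such rules are simple to state (a sketch; the soft truth values below are invented for illustration). In PSL, a rule's "distance to satisfaction" is, roughly, the amount by which the truth of the head falls short of the truth of the body:

```python
# Łukasiewicz relaxations of the hard logical operators: truth values
# live in [0, 1] instead of {0, 1}.
def l_and(a, b):   # Łukasiewicz t-norm (conjunction)
    return max(0.0, a + b - 1.0)

def l_or(a, b):    # Łukasiewicz t-conorm (disjunction)
    return min(1.0, a + b)

def l_not(a):      # negation
    return 1.0 - a

# On Boolean inputs they agree with classical logic ...
assert l_and(1, 1) == 1 and l_and(1, 0) == 0
assert l_or(0, 0) == 0 and l_or(1, 0) == 1

# ... and in between they interpolate.  With soft truth values
# Similar(A,B) = 0.9 and HasLabel(A,X) = 0.7, the rule body has truth
body = l_and(0.9, 0.7)                      # = 0.6 (up to float rounding)
# and the rule is violated to the degree the head falls below the body:
head = 0.4                                  # hypothetical value
distance_to_satisfaction = max(0.0, body - head)
assert abs(distance_to_satisfaction - 0.2) < 1e-9
```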
Arithmetic rules are relations of two linear combinations of atoms.
Restricting each side to a linear combination ensures that the resulting potential is convex.
The following relational operators are supported: =, <=, and >=.
This rule encodes the notion that similarity is symmetric in this model.
A commonly used feature of arithmetic rules is the summation operation.
The summation operation can be used to aggregate multiple atoms.
When used, the atom is replaced with the sum of all possible atoms where the non-summation variables are fixed.
Summation variables are made by prefixing a variable with a+.
For example:
If the possible values for X are label1, label2, and label3, then the above rule is equivalent to:
Both of these rules force the sum of all possible labels for an entity to sum to 1.0.
This type of rule is especially useful for collective classification problems, where only one class can be selected.
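The expansion of a summation atom can be sketched as follows; the Label predicate, the entity name, and the label values are illustrative only:

```python
# Grounding a summation constraint such as  Label(A, +X) = 1.0  (illustrative
# syntax): the summation atom expands into one ground atom per value of X.
labels = ["label1", "label2", "label3"]

def ground_summation(entity):
    # One ground atom per possible label; the constraint requires their
    # soft truth values to sum to 1.0.
    return [f"Label({entity}, {lab})" for lab in labels]

atoms = ground_summation("A")
# A candidate assignment satisfies the constraint iff its values sum to 1.0.
assignment = {atoms[0]: 0.7, atoms[1]: 0.2, atoms[2]: 0.1}
total = sum(assignment.values())
```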
A PSL program defines a family of probabilistic graphical models that are parameterized by data.
More specifically, the family of graphical models it defines belongs to a special class of Markov random fields known as hinge-loss Markov random fields (HL-MRFs).
An HL-MRF determines a density function over a set of continuous variables $\mathbf{y}=(y_{1},\cdots,y_{n})$ with joint domain $[0,1]^{n}$ using a set of evidence $\mathbf{x}=(x_{1},\cdots,x_{m})$, weights $\mathbf{w}=(w_{1},\cdots,w_{k})$, and potential functions $\phi=(\phi_{1},\cdots,\phi_{k})$ of the form $\phi_{i}(\mathbf{x},\mathbf{y})=\max(\ell_{i}(\mathbf{x},\mathbf{y}),0)^{d_{i}}$, where $\ell_{i}$ is a linear function and $d_{i}\in\{1,2\}$.
The conditional distribution of $\mathbf{y}$ given the observed data $\mathbf{x}$ is defined as
$P(\mathbf{y}\mid\mathbf{x})=\frac{1}{Z(\mathbf{x})}\exp\left(-\sum_{i=1}^{k}w_{i}\phi_{i}(\mathbf{x},\mathbf{y})\right)$
where $Z(\mathbf{x})=\int_{\mathbf{y}}\exp\left(-\sum_{i=1}^{k}w_{i}\phi_{i}(\mathbf{x},\mathbf{y})\right)$ is the partition function.
The negated log-density is a convex function of $\mathbf{y}$, so the common inference task in PSL of finding a maximum a posteriori estimate of the joint state of $\mathbf{y}$ is a convex optimization problem.
This allows inference in PSL to be solved in polynomial time.
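A minimal sketch of the hinge-loss potentials and the unnormalized density, using the standard HL-MRF convention in which the weighted potentials enter the exponent with a negative sign (so MAP inference minimizes a convex weighted sum of hinge losses); the numeric values are illustrative:

```python
import math

# Hinge-loss potential: phi(x, y) = max(l(x, y), 0) ** d, with d in {1, 2}.
def hinge_potential(linear_value, d=1):
    return max(linear_value, 0.0) ** d

# Unnormalized HL-MRF density: exp of the negated weighted sum of potentials,
# so larger penalties mean lower probability and MAP minimizes a convex energy.
def unnormalized_density(weights, linear_values, exponents):
    energy = sum(w * hinge_potential(l, d)
                 for w, l, d in zip(weights, linear_values, exponents))
    return math.exp(-energy)

# Only the first potential is active here: energy = 2.0 * 0.3 = 0.6.
p = unnormalized_density([2.0, 1.0], [0.3, -0.5], [1, 2])
```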
Predicates in PSL can be labeled as open or closed.
When a predicate is labeled closed, PSL makes the closed-world assumption: any atoms of that predicate that are not explicitly provided to PSL are assumed to be false.
In other words, the closed-world assumption presumes that a predicate that is partially true is also known to be partially true.
For example, if we had the constants $\{Alice, Bob\}$ in the data for representing people and the constant $\{Avatar\}$ for movies, and we provided PSL with the predicate data $rating(Alice, Avatar)=0.8$ with $rating(\cdot)$ labeled closed, then PSL would assume $rating(Bob, Avatar)=0$ even though this value was never explicitly provided to the system.
If a predicate is labeled as open, then PSL does not make the closed-world assumption. Instead, PSL will attempt to collectively infer the unobserved instances.
Data is used to instantiate several potential functions in a process called grounding.
The resulting potential functions are then used to define the HL-MRF.
Grounding predicates in PSL is the process of making all possible substitutions of the variables in each predicate with the existing constants in the data, resulting in a collection of ground atoms, $\mathbf{y}=\{y_{1},\cdots,y_{n}\}$.
Then, all possible substitutions of the ground atoms for the predicates in the rules are made to create ground rules.
Each of the ground rules is interpreted as either a potential or a hard constraint in the induced HL-MRF.
A logical rule is translated as a continuous relaxation of Boolean connectives using Łukasiewicz logic.
A ground logical rule is transformed into its disjunctive normal form.
Let $I^{+}$ be the set of indices of the variables that correspond to atoms that are not negated, and likewise $I^{-}$ the set of indices corresponding to atoms that are negated, in the disjunctive clause.
Then the logical rule maps to the inequality:
$1-\sum_{i\in I^{+}}y_{i}-\sum_{i\in I^{-}}(1-y_{i})\leq 0$
If the logical rule is weighted with a weight $w$ and exponentiated with $d\in\{1,2\}$, then the potential
$\phi(\mathbf{y})=\Big(\max\Big\{1-\sum_{i\in I^{+}}y_{i}-\sum_{i\in I^{-}}(1-y_{i}),0\Big\}\Big)^{d}$
is added to the HL-MRF with a weight parameter of $w$.
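The distance-to-satisfaction of a ground clause can be computed directly from the index sets above; a minimal sketch:

```python
# Potential of a ground disjunctive clause under the Łukasiewicz relaxation,
# following the inequality  1 - sum_{i in I+} y_i - sum_{i in I-} (1 - y_i) <= 0.
def clause_potential(y, pos_idx, neg_idx, d=1):
    slack = 1.0 - sum(y[i] for i in pos_idx) - sum(1.0 - y[i] for i in neg_idx)
    return max(slack, 0.0) ** d

# Clause  y0 OR NOT y1  with soft values y0 = 0.2, y1 = 0.9:
y = [0.2, 0.9]
phi = clause_potential(y, pos_idx=[0], neg_idx=[1])
# slack = 1 - 0.2 - (1 - 0.9) = 0.7, so the potential (penalty) is 0.7.
```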
An arithmetic rule is rewritten in the form $\ell(\mathbf{y})\leq 0$ and the resulting potential takes the form $\phi(\mathbf{y})=(\max\{\ell(\mathbf{y}),0\})^{d}$.
PSL is available via three different language interfaces: CLI, Java, and Python.
PSL's command-line interface (CLI) is the recommended way to use PSL.[7] It supports all the features commonly used in a reproducible form that does not require compilation.
Since PSL is written in Java, the PSL Java interface is the most expansive, and users can call directly into the core of PSL.[8] The Java interface is available through the Maven central repository.[9] The PSL Python interface is available through PyPI[10] and uses pandas DataFrames to pass data between PSL and the user.[11]
PSL previously provided a Groovy interface.[12] It was deprecated in the 2.2.1 release of PSL and is scheduled to be removed in the 2.3.0 release.[13]
The LINQS lab, developers of the official PSL implementation, maintains a collection of PSL examples.[14] These examples cover both synthetic and real-world datasets and include examples from academic publications that use PSL.
Below is a toy example from this repository that can be used to infer relations in a social network.
Along with each rule is a comment describing the motivating intuition behind the statements.
Probabilistic causation is a concept in a group of philosophical theories that aim to characterize the relationship between cause and effect using the tools of probability theory. The central idea behind these theories is that causes raise the probabilities of their effects, all else being equal.
Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer. As a result, many turn to a notion of probabilistic causation. Informally, A probabilistically causes B if A's occurrence increases the probability of B. This is sometimes interpreted to reflect imperfect knowledge of a deterministic system, but other times interpreted to mean that the causal system under study has an inherently indeterministic nature. (Propensity probability is an analogous idea, according to which probabilities have an objective existence and are not just limitations in a subject's knowledge.)
Philosophers such as Hugh Mellor[1] and Patrick Suppes[2] have defined causation in terms of a cause preceding and increasing the probability of the effect. (Additionally, Mellor claims that cause and effect are both facts, not events, since even a non-event, such as the failure of a train to arrive, can cause effects such as my taking the bus. Suppes, by contrast, relies on events defined set-theoretically, and much of his discussion is informed by this terminology.)[3]
Pearl[4] argues that the entire enterprise of probabilistic causation has been misguided from the very beginning, because the central notion that causes "raise the probabilities" of their effects cannot be expressed in the language of probability theory. In particular, the inequality Pr(effect | cause) > Pr(effect | ~cause), which philosophers invoked to define causation, as well as its many variations and nuances, fails to capture the intuition behind "probability raising", which is inherently a manipulative or counterfactual notion.
The correct formulation, according to Pearl, should read
Pr(E | do(C)) > Pr(E | do(~C)),
where do(C) stands for an external intervention that compels the truth of C. The conditional probability Pr(E | C), in contrast, represents a probability resulting from a passive observation of C, and rarely coincides with Pr(E | do(C)). Indeed, observing the barometer falling increases the probability of a storm coming, but does not "cause" the storm; were the act of manipulating the barometer to change the probability of storms, the falling barometer would qualify as a cause of storms. In general, formulating the notion of "probability raising" within the calculus of do-operators[4] resolves the difficulties that probabilistic causation has encountered in the past half-century,[2][5][6] among them the infamous Simpson's paradox, and clarifies precisely what relationships exist between probabilities and causation.
The establishing of cause and effect, even with this relaxed reading, is notoriously difficult, as expressed by the widely accepted statement "Correlation does not imply causation". For instance, the observation that smokers have a dramatically increased lung cancer rate does not establish that smoking must be a cause of that increased cancer rate: maybe there exists a certain genetic defect which causes both cancer and a yearning for nicotine; or perhaps nicotine craving is a symptom of very early-stage lung cancer which is not otherwise detectable. Scientists are always seeking the exact mechanisms by which event A produces event B. But scientists also are comfortable making a statement like "Smoking probably causes cancer" when the statistical correlation between the two, according to probability theory, is far greater than chance. In this dual approach, scientists accept both deterministic and probabilistic causation in their terminology.
In statistics, it is generally accepted that observational studies (like counting cancer cases among smokers and among non-smokers and then comparing the two) can give hints, but can never establish cause and effect. Often, however, qualitative causal assumptions (e.g., absence of causation between some variables) may permit the derivation of consistent causal effect estimates from observational studies.[4]
The gold standard for causation here is the randomized experiment: take a large number of people, randomly divide them into two groups, force one group to smoke and prohibit the other group from smoking, then determine whether one group develops a significantly higher lung cancer rate. Random assignment plays a crucial role in the inference to causation because, in the long run, it renders the two groups equivalent in terms of all other possible effects on the outcome (cancer), so that any changes in the outcome will reflect only the manipulation (smoking). Obviously, for ethical reasons this experiment cannot be performed, but the method is widely applicable for less damaging experiments. One limitation of experiments, however, is that whereas they do a good job of testing for the presence of some causal effect, they do less well at estimating the size of that effect in a population of interest. (This is a common criticism of studies of the safety of food additives that use doses much higher than people consuming the product would actually ingest.)
In a closed system the data may suggest that cause A * B precedes effect C in a defined interval of time τ. This relationship can determine causality with confidence bounded by τ. However, this same relationship may not be deterministic with confidence in an open system where uncontrolled factors may affect the result.[7]
An example would be a system of A, B and C, where A, B and C are known. Characteristics are below and limited to a given time (such as 50 ms, or 50 hours). "^" means "not", "*" means "and":
^A * ^B => ^C (99.9999998027%)
A * ^B => ^C (99.9999998027%)
^A * B => ^C (99.9999998027%)
A * B => C (99.9999998027%)
One can reasonably claim, within six standard deviations, that A * B causes C within the given time boundary (such as 50 ms, or 50 hours) if and only if A, B and C are the only parts of the system in question. Any result outside of this may be considered a deviation.
Uncertain inference was first described by C. J. van Rijsbergen[1] as a way to formally define a query and document relationship in information retrieval. This formalization is a logical implication with an attached measure of uncertainty.
Rijsbergen proposes that the measure of uncertainty of a document d to a query q be the probability of its logical implication, i.e.:
A user's query can be interpreted as a set of assertions about the desired document. It is the system's task to infer, given a particular document, whether the query assertions are true. If they are, the document is retrieved.
In many cases the contents of documents are not sufficient to assert the queries. A knowledge base of facts and rules is needed, but some of them may be uncertain because there may be a probability associated with using them for inference. Therefore, we can also refer to this as plausible inference. The plausibility of an inference $d\to q$ is a function of the plausibility of each query assertion. Rather than retrieving a document that exactly matches the query, we should rank the documents based on their plausibility with regard to that query.
Since d and q are both generated by users, they are error-prone; thus $d\to q$ is uncertain. This will affect the plausibility of a given query.
By doing this it accomplishes two things:
Multimedia documents, like images or videos, have different inference properties for each datatype. They are also different from text document properties. The framework of plausible inference allows us to measure and combine the probabilities coming from these different properties.
Uncertain inference generalizes the notions of autoepistemic logic, where truth values are either known or unknown, and, when known, are either true or false.
If we have a query of the form:
where A, B and C are query assertions, then for a document D we want the probability:
If we transform this into the conditional probability $P((A\wedge B\wedge C)\mid D)$, and if the query assertions are independent, we can calculate the overall probability of the implication as the product of the probabilities of the individual assertions.
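Under the independence assumption, the probability of the implication is just the product of the per-assertion probabilities; a minimal sketch with illustrative values:

```python
# Probability of  d -> (A and B and C)  under the independence assumption:
# the product of the per-assertion probabilities P(A|D), P(B|D), P(C|D).
def implication_probability(assertion_probs):
    p = 1.0
    for q in assertion_probs:
        p *= q
    return p

# Illustrative per-assertion probabilities for a single document:
score = implication_probability([0.9, 0.8, 0.5])   # 0.9 * 0.8 * 0.5 = 0.36
```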
Croft and Krovetz[2] applied uncertain inference to an information retrieval system for office documents that they called OFFICER. In office documents the independence assumption is valid, since a query will focus on their individual attributes. Besides analysing the content of documents, one can also query about the author, size, topic or collection, for example. They devised methods to compare document and query attributes, infer their plausibility, and combine it into an overall rating for each document. Beyond that, the uncertainty of document and query contents also had to be addressed.
Probabilistic logic networks are a system for performing uncertain inference; crisp true/false truth values are replaced not only by a probability, but also by a confidence level indicating the certitude of the probability.
Markov logic networks allow uncertain inference to be performed; uncertainties are computed using the maximum entropy principle, in analogy to the way that Markov chains describe the uncertainty of finite-state machines.
Upper and lower probabilities are representations of imprecise probability. Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, this method uses two numbers: the upper probability of the event and the lower probability of the event.
Because frequentist statistics disallows metaprobabilities,[citation needed] frequentists have had to propose new solutions. Cedric Smith and Arthur Dempster each developed a theory of upper and lower probabilities. Glenn Shafer developed Dempster's theory further, and it is now known as Dempster–Shafer theory or Choquet (1953).
More precisely, in the work of these authors one considers, in a power set $P(S)$, a mass function $m: P(S)\rightarrow R$ satisfying the conditions
In turn, a mass is associated with two non-additive continuous measures called belief and plausibility, defined as follows:
In the case where $S$ is infinite there can be belief functions $\operatorname{bel}$ such that there is no associated mass function. See p. 36 of Halpern (2003). Probability measures are a special case of belief functions in which the mass function assigns positive mass to singletons of the event space only.
A different notion of upper and lower probabilities is obtained by the lower and upper envelopes obtained from a class C of probability distributions by setting
The upper and lower probabilities are also related to probabilistic logic: see Gerla (1994).
Observe also that a necessity measure can be seen as a lower probability, and a possibility measure can be seen as an upper probability.
In mathematical logic and metalogic, a formal system is called complete with respect to a particular property if every formula having the property can be derived using that system, i.e. is one of its theorems; otherwise the system is said to be incomplete.
The term "complete" is also used without qualification, with differing meanings depending on the context, mostly referring to the property of semantic validity. Intuitively, a system is called complete in this particular sense if it can derive every formula that is true.
The property converse to completeness is called soundness: a system is sound with respect to a property (mostly semantic validity) if each of its theorems has that property.
A formal language is expressively complete if it can express the subject matter for which it is intended.
A set of logical connectives associated with a formal system is functionally complete if it can express all propositional functions.
Semantic completeness is the converse of soundness for formal systems. A formal system is complete with respect to tautologousness, or "semantically complete", when all its tautologies are theorems, whereas a formal system is "sound" when all theorems are tautologies (that is, they are semantically valid formulas: formulas that are true under every interpretation of the language of the system that is consistent with the rules of the system). That is, a formal system is semantically complete if:
For example, Gödel's completeness theorem establishes semantic completeness for first-order logic.
A formal system S is strongly complete or complete in the strong sense if for every set of premises Γ, any formula that semantically follows from Γ is derivable from Γ. That is:
A formal system S is refutation-complete if it is able to derive false from every unsatisfiable set of formulas. That is:
Every strongly complete system is also refutation-complete. Intuitively, strong completeness means that, given a formula set $\Gamma$, it is possible to compute every semantic consequence $\varphi$ of $\Gamma$, while refutation-completeness means that, given a formula set $\Gamma$ and a formula $\varphi$, it is possible to check whether $\varphi$ is a semantic consequence of $\Gamma$.
Examples of refutation-complete systems include: SLD resolution on Horn clauses, superposition on equational clausal first-order logic, and Robinson's resolution on clause sets.[3] The latter is not strongly complete: e.g. $\{a\}\models a\lor b$ holds even in the propositional subset of first-order logic, but $a\lor b$ cannot be derived from $\{a\}$ by resolution. However, $\{a,\lnot(a\lor b)\}\vdash\bot$ can be derived.
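The refutation in the example can be checked mechanically. A small sketch of propositional resolution (clauses as sets of signed literals) that refutes {a, ¬(a ∨ b)}, whose clausal form is {a}, {¬a}, {¬b}, by deriving the empty clause:

```python
# Propositional resolution over clauses represented as frozensets of literals;
# a literal is a pair (name, polarity). Deriving the empty clause means the
# input clause set is unsatisfiable.
def resolve(c1, c2):
    out = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            out.append((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))
    return out

def refutes(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:          # empty clause derived: contradiction
                        return True
                    new.add(r)
        if new <= clauses:             # fixpoint reached without the empty clause
            return False
        clauses |= new

# {a, not (a or b)} in clausal form: {a}, {~a}, {~b}. Resolution finds bottom.
unsat = [frozenset({("a", True)}),
         frozenset({("a", False)}),
         frozenset({("b", False)})]
```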
A formal system S is syntactically complete or deductively complete or maximally complete or negation-complete if for each sentence (closed formula) φ of the language of the system either φ or ¬φ is a theorem of S. Syntactic completeness is a stronger property than semantic completeness. If a formal system is syntactically complete, a corresponding formal theory is called complete if it is a consistent theory. Gödel's incompleteness theorem shows that any computable system that is sufficiently powerful, such as Peano arithmetic, cannot be both consistent and syntactically complete.
"Syntactic completeness" can also refer to another unrelated concept, also called Post completeness or Hilbert–Post completeness. In this sense, a formal system is syntactically complete if and only if no unprovable sentence can be added to it without introducing an inconsistency. Truth-functional propositional logic and first-order predicate logic are semantically complete, but not syntactically complete (for example, the propositional logic statement consisting of a single propositional variable A is not a theorem, and neither is its negation).[citation needed]
In superintuitionistic and modal logics, a logic is structurally complete if every admissible rule is derivable.
A theory is model-complete if and only if every embedding of its models is an elementary embedding.
A NOR gate, or NOT-OR gate, is a logic gate whose output is high (1) only when all of its inputs are low (0).
Like NAND gates, NOR gates are so-called "universal gates" that can be combined to form any other kind of logic gate. For example, the first embedded system, the Apollo Guidance Computer, was built exclusively from NOR gates, about 5,600 in total for the later versions. Today, integrated circuits are not constructed exclusively from a single type of gate. Instead, EDA tools are used to convert the description of a logical circuit to a netlist of complex gates (standard cells) or transistors (full-custom approach).
A NOR gate is logically an inverted OR gate. It has the following truth table:
Q = A NOR B
A NOR gate is a universal gate, meaning that any other gate can be represented as a combination of NOR gates.
This is made by joining the inputs of a NOR gate. As a NOR gate is equivalent to an OR gate leading to NOT gate, joining the inputs makes the output of the "OR" part of the NOR gate the same as the input, eliminating it from consideration and leaving only the NOT part.
An OR gate is made by inverting the output of a NOR gate. Note that we already know that a NOT gate is equivalent to a NOR gate with its inputs joined.
An AND gate gives a 1 output when both inputs are 1. Therefore, an AND gate is made by inverting the inputs of a NOR gate. Again, note that a NOR gate is equivalent to a NOT with its inputs joined.
A NAND gate is made by inverting the output of an AND gate. The word NAND means that it is not AND. As the name suggests, it will give 0 when both the inputs are 1.
An XNOR gate is made by connecting four NOR gates as shown below. This construction entails a propagation delay three times that of a single NOR gate.
Alternatively, an XNOR gate is made by considering the conjunctive normal form $(A+\overline{B})\cdot(\overline{A}+B)$, noting from De Morgan's law that a NOR gate is an inverted-input AND gate. This construction uses five gates instead of four.
An XOR gate is made by considering the conjunctive normal form $(A+B)\cdot(\overline{A}+\overline{B})$, noting from De Morgan's law that a NOR gate is an inverted-input OR gate. This construction entails a propagation delay three times that of a single NOR gate and uses five gates.
Alternatively, the 4-gate version of the XNOR gate can be used with an inverter. This construction has a propagation delay four times (instead of three times) that of a single NOR gate.
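The constructions above can be verified exhaustively over all input combinations; a minimal sketch building every gate from a single NOR primitive (XNOR uses the four-NOR construction, and XOR is that XNOR followed by an inverter):

```python
# Every gate built from a single NOR primitive, mirroring the constructions above.
def NOR(a, b):
    return int(not (a or b))

def NOT(a):            # NOR with both inputs joined
    return NOR(a, a)

def OR(a, b):          # NOR followed by an inverter
    return NOT(NOR(a, b))

def AND(a, b):         # NOR with inverted inputs (De Morgan)
    return NOR(NOT(a), NOT(b))

def NAND(a, b):        # AND followed by an inverter
    return NOT(AND(a, b))

def XNOR(a, b):        # the four-NOR construction
    n1 = NOR(a, b)
    return NOR(NOR(a, n1), NOR(b, n1))

def XOR(a, b):         # four-NOR XNOR plus an inverter (five gates total)
    return NOT(XNOR(a, b))
```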
A one-instruction set computer (OISC), sometimes referred to as an ultimate reduced instruction set computer (URISC), is an abstract machine that uses only one instruction, obviating the need for a machine language opcode.[1][2][3] With a judicious choice for the single instruction and given arbitrarily many resources, an OISC is capable of being a universal computer in the same manner as traditional computers that have multiple instructions.[2]: 55 OISCs have been recommended as aids in teaching computer architecture[1]: 327[2]: 2 and have been used as computational models in structural computing research.[3] The first carbon nanotube computer is a 1-bit one-instruction set computer (and has only 178 transistors).[4]
In aTuring-complete model, each memory location can store an arbitrary integer, and – depending on the mode, there may be arbitrarily many locations. The instructions themselves reside in memory as a sequence of such integers.
There exists a class of universal computers with a single instruction based on bit manipulation such as bit copying or bit inversion. Since their memory model is finite, as is the memory structure used in real computers, those bit-manipulation machines are equivalent to real computers rather than to Turing machines.[5]
Currently known OISCs can be roughly separated into three broad categories:
Bit-manipulating machines are the simplest class.
The FlipJump machine has one instruction, a;b, which flips bit a, then jumps to b. This is the most primitive OISC, but it is still useful. It can successfully do math/logic calculations, branching, pointers, and calling functions with the help of its standard library.
A bit-copying machine,[5] called BitBitJump, copies one bit in memory and passes the execution unconditionally to the address specified by one of the operands of the instruction. This process turns out to be capable of universal computation (i.e. being able to execute any algorithm and to interpret any other universal machine) because copying bits can conditionally modify the copying address that will be subsequently executed.
Another machine, called the Toga Computer, inverts a bit and passes the execution conditionally depending on the result of inversion. The unique instruction is TOGA(a, b), which stands for TOGgle a And branch to b if the result of the toggle operation is true.
Similar to BitBitJump, a multi-bit copying machine copies several bits at the same time. The problem of computational universality is solved in this case by keeping predefined jump tables in the memory.[clarification needed]
Transport triggered architecture(TTA) is a design in which computation is a side effect of data transport. Usually, some memory registers (triggering ports) within common address space perform an assigned operation when the instruction references them. For example, in an OISC using a single memory-to-memory copy instruction, this is done by triggering ports that perform arithmetic and instruction pointer jumps when written to.
Arithmetic-based Turing-complete machines use an arithmetic operation and a conditional jump. Like the two previous universal computers, this class is also Turing-complete. The instruction operates on integers which may also be addresses in memory.
Currently there are several known OISCs of this class, based on different arithmetic operations:
Common choices for the single instruction are:
Only one of these instructions is used in a given implementation. Hence, there is no need for an opcode to identify which instruction to execute; the choice of instruction is inherent in the design of the machine, and an OISC is typically named after the instruction it uses (e.g., an SBN OISC,[2]: 41 the SUBLEQ language,[3]: 4 etc.). Each of the above instructions can be used to construct a Turing-complete OISC.
This article presents only subtraction-based instructions among those that are not transport triggered. However, it is possible to construct Turing-complete machines using an instruction based on other arithmetic operations, e.g., addition. For example, one variation known as DLN (Decrement and jump if not zero) has only two operands and uses decrement as the base operation. For more information see Subleq derivative languages[1].
The SBNZ a, b, c, d instruction ("subtract and branch if not equal to zero") subtracts the contents at address a from the contents at address b, stores the result at address c, and then, if the result is not 0, transfers control to address d (if the result is equal to zero, execution proceeds to the next instruction in sequence).[3]
The subleq instruction ("subtract and branch if less than or equal to zero") subtracts the contents at address a from the contents at address b, stores the result at address b, and then, if the result is not positive, transfers control to address c (if the result is positive, execution proceeds to the next instruction in sequence).[3]: 4–7 Pseudocode:
Conditional branching can be suppressed by setting the third operand equal to the address of the next instruction in sequence. If the third operand is not written, this suppression is implied.
A variant is also possible with two operands and an internal accumulator, where the accumulator is subtracted from the memory location specified by the first operand. The result is stored in both the accumulator and the memory location, and the second operand specifies the branch address:
Although this uses only two (instead of three) operands per instruction, correspondingly more instructions are then needed to effect various logical operations.
It is possible to synthesize many types of higher-order instructions using only the subleq instruction.[3]: 9–10
Unconditional branch:
Addition can be performed by repeated subtraction, with no conditional branching; e.g., the following instructions result in the content at location a being added to the content at location b:
The first instruction subtracts the content at location a from the content at location Z (which is 0) and stores the result (which is the negative of the content at a) in location Z. The second instruction subtracts this result from b, storing in b this difference (which is now the sum of the contents originally at a and b); the third instruction restores the value 0 to Z.
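The three-instruction addition can be traced concretely. A minimal sketch using named cells instead of numeric addresses, with each branch target set to the next instruction so execution always falls through:

```python
# Tracing the three-instruction subleq addition  b += a  (Z held at 0):
#   subleq a, Z   ; Z = Z - a   (Z now holds -a)
#   subleq Z, b   ; b = b - Z   (b now holds b + a)
#   subleq Z, Z   ; Z = Z - Z   (restore the zero constant)
mem = {"a": 7, "b": 5, "Z": 0}

def subleq_step(x, y):
    # Branch targets omitted: each target is the next instruction, so the
    # conditional branch is effectively suppressed.
    mem[y] -= mem[x]

subleq_step("a", "Z")
subleq_step("Z", "b")
subleq_step("Z", "Z")
# mem is now {"a": 7, "b": 12, "Z": 0}
```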
A copy instruction can be implemented similarly; e.g., the following instructions result in the content at location b getting replaced by the content at location a, again assuming the content at location Z is maintained as 0:
Any desired arithmetic test can be built. For example, a branch-if-zero condition can be assembled from the following instructions:
Subleq2 can also be used to synthesize higher-order instructions, although it generally requires more operations for a given task. For example, no fewer than 10 subleq2 instructions are required to flip all the bits in a given byte:
The following program (written in pseudocode) emulates the execution of a subleq-based OISC:
This program assumes that memory[] is indexed by nonnegative integers. Consequently, for a subleq instruction (a, b, c), the program interprets a < 0, b < 0, or an executed branch to c < 0 as a halting condition. Similar interpreters written in a subleq-based language (i.e., self-interpreters, which may use self-modifying code as allowed by the nature of the subleq instruction) can be found in the external links below.
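A minimal executable version of such an interpreter; the program layout below (code followed by data, with a negative branch target used to halt) is illustrative:

```python
# A minimal subleq emulator matching the semantics above: negative addresses
# halt; memory is a flat list of integers holding both code and data.
def run_subleq(mem, ip=0, max_steps=10_000):
    for _ in range(max_steps):
        if ip < 0:
            break
        a, b, c = mem[ip], mem[ip + 1], mem[ip + 2]
        if a < 0 or b < 0:
            break
        mem[b] -= mem[a]
        ip = c if mem[b] <= 0 else ip + 3
    return mem

# Program: add mem[9] into mem[10] using mem[11] as the zero constant Z,
# then halt via a branch to -1. Data at the end: a=7, b=5, Z=0.
prog = [9, 11, 3,     # Z -= a        (result -7 <= 0, branch to 3: fall through)
        11, 10, 6,    # b -= Z        (b += a; result 12 > 0, continue to 6)
        11, 11, -1,   # Z -= Z        (Z = 0 <= 0, branch to -1: halt)
        7, 5, 0]
result = run_subleq(prog)
# result[10] == 12 and result[11] == 0 after the run.
```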
A general-purpose, SMP-capable, 64-bit operating system called Dawn OS has been implemented in an emulated Subleq machine. The OS contains a C-like compiler. Some memory areas in the virtual machine are used for peripherals like the keyboard, mouse, hard drives, network card, etc. Basic applications written for it include a media player, painting tool, document reader and scientific calculator.[13]
A 32-bit Subleq computer with a graphic display and a keyboard called Izhora has been constructed by Yoel Matveyev as a large cellular automaton pattern.[14][15]
There is a compiler called Higher Subleq written by Oleg Mazonka that compiles a simplified C program into subleq code.[16]
Alternatively, there is a self-hosting Forth implementation written by Richard James Howe that runs on top of a Subleq VM and is capable of interactive programming of the Subleq machine.[17]
Thesubneginstruction ("subtract and branch if negative"), also calledSBN, is defined similarly tosubleq:[2]: 41, 51–52
Conditional branching can be suppressed by setting the third operand equal to the address of the next instruction in sequence. If the third operand is not written, this suppression is implied.
It is possible to synthesize many types of higher-order instructions using only the subneg instruction. For simplicity, only one synthesized instruction is shown here to illustrate the difference between subleq and subneg.
Unconditional branch:[2]: 88–89
where Z and POS are locations previously set to contain 0 and a positive integer, respectively;
Unconditional branching is assured only if Z initially contains 0 (or a value less than the integer stored in POS). A follow-up instruction is required to clear Z after the branching, assuming that the content of Z must be maintained as 0.
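The unconditional-branch idiom can be illustrated with a single-step sketch of the subneg semantics (names and addresses here are illustrative):

```python
def subneg_step(mem, a, b, c, next_ip):
    # subneg: mem[b] -= mem[a]; branch to c if the result is negative.
    mem[b] -= mem[a]
    return c if mem[b] < 0 else next_ip

# Unconditional jump to TARGET: subtract POS (a positive constant) from
# Z (which holds 0). The result is negative, so the branch is always
# taken -- but Z must be cleared again by a follow-up instruction.
Z, POS = 0, 1
mem = [0, 5]          # mem[Z] = 0, mem[POS] = 5
TARGET = 42
ip = subneg_step(mem, POS, Z, TARGET, next_ip=1)
print(ip)       # 42  (branch taken)
print(mem[Z])   # -5  (needs restoring to 0, as noted above)
```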
A variant is also possible with four operands – subneg4. The reversal of minuend and subtrahend eases implementation in hardware. The non-destructive result simplifies the synthetic instructions.
In an attempt to make the Turing machine more intuitive, Z. A. Melzak considered the task of computing with positive numbers. The machine has an infinite abacus and an infinite number of counters (pebbles, tally sticks), initially at a special location S. The machine is able to do one operation:
Take from location X as many counters as there are in location Y and transfer them to location Z, and proceed to instruction y.
If this operation is not possible because there are not enough counters in X, then leave the abacus as it is and proceed to instruction n.[18]
In order to keep all numbers positive and mimic a human operator computing on a real-world abacus, the test is performed before any subtraction. Pseudocode:
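A single Melzak operation can be sketched as follows (location names and jump targets are illustrative; the test precedes the transfer, as described above):

```python
def melzak_step(mem, X, Y, Z, jump_yes, jump_no):
    # One Melzak operation: move as many counters as Y holds from X to Z,
    # testing first so that no count ever goes negative.
    if mem[X] >= mem[Y]:
        mem[X] -= mem[Y]
        mem[Z] += mem[Y]
        return jump_yes     # proceed to instruction y
    return jump_no          # not enough counters: proceed to instruction n

mem = {"X": 10, "Y": 4, "Z": 0}
nxt = melzak_step(mem, "X", "Y", "Z", jump_yes=1, jump_no=99)
print(mem, nxt)  # {'X': 6, 'Y': 4, 'Z': 4} 1
```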
After giving a few programs: multiplication, gcd, computing the n-th prime number, representation in base b of an arbitrary number, sorting in order of magnitude, Melzak shows explicitly how to simulate an arbitrary Turing machine on his arithmetic machine.
where memory location P contains p, Q contains q, ONE contains 1, ANS is initially 0 and at the end contains pq, and S is a large number.
He mentions that it can easily be shown, using the elements of recursive functions, that every number calculable on the arithmetic machine is computable. A proof was given by Lambek[19] on an equivalent two-instruction machine: X+ (increment X) and X− else T (decrement X if it is not empty, else jump to T).
In a reverse subtract and skip if borrow (RSSB) instruction, the accumulator is subtracted from the memory location and the next instruction is skipped if there was a borrow (the memory location was smaller than the accumulator). The result is stored in both the accumulator and the memory location. The program counter is mapped to memory location 0. The accumulator is mapped to memory location 1.[2]
To set x to the value of y minus z:
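The semantics of a single RSSB instruction can be sketched as follows (the memory layout is illustrative; only the single-instruction behavior described above is modeled, not a full program):

```python
# RSSB semantics sketch: the accumulator is memory location 1.
def rssb_step(mem, x, ip):
    borrow = mem[x] < mem[1]            # borrow: operand smaller than acc
    mem[x] = mem[1] = mem[x] - mem[1]   # result stored in both places
    return ip + 2 if borrow else ip + 1 # skip next instruction on borrow

mem = {0: 0, 1: 5, "y": 3}              # acc = 5, operand cell "y" = 3
ip = rssb_step(mem, "y", ip=10)
print(mem["y"], mem[1], ip)  # -2 -2 12  (3 - 5 borrowed, so next op skipped)
```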
A transport-triggered architecture uses only the move instruction, hence it was originally called a "move machine". This instruction moves the contents of one memory location to another memory location, combining with the current content of the new location:[2]: 42[20]
The operation performed is defined by the destination memory cell. Some cells are specialized in addition, some others in multiplication, etc. So memory cells are not simple stores but are coupled with an arithmetic logic unit (ALU) set up to perform only one sort of operation with the current value of the cell. Some of the cells are control flow instructions to alter the program execution with jumps, conditional execution, subroutines, if-then-else, for-loops, etc.
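The idea that the destination defines the operation can be sketched with a toy "addition cell": moving a value to its trigger port performs the addition. The class design, port names, and layout are all illustrative assumptions, not part of any real TTA:

```python
# Transport-triggered sketch: writing to a special destination cell
# triggers that cell's operation.
class AddCell:
    def __init__(self):
        self.operand = 0    # a plain data port
        self.result = 0
    def move_in(self, value, port):
        if port == "operand":
            self.operand = value        # an ordinary move: just store
        else:                           # port == "trigger": the move
            self.result = self.operand + value  # fires the ALU operation

adder = AddCell()
adder.move_in(4, "operand")   # move 4 -> adder.operand
adder.move_in(3, "trigger")   # move 3 -> trigger port: performs 4 + 3
print(adder.result)  # 7
```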
A commercial transport-triggered-architecture microcontroller called MAXQ has been produced, which hides the apparent inconvenience of an OISC by using a "transfer map" that represents all possible destinations for the move instructions.[21]
Cryptoleq[22] is a language similar to Subleq. It consists of one eponymous instruction and is capable of performing general-purpose computation on encrypted programs. Cryptoleq works on continuous cells of memory using direct and indirect addressing, and performs two operations O1 and O2 on three values A, B, and C:
where a, b and c are addressed by the instruction pointer, IP, with the value of IP addressing a, IP + 1 pointing to b and IP + 2 to c.
In Cryptoleq, operations O1 and O2 are defined as follows:
The main difference from Subleq is that in Subleq, O1(x, y) simply subtracts y from x, and O2(x) equals x. Cryptoleq is also homomorphic to Subleq: modular inversion and multiplication are homomorphic to subtraction, and the operation of O2 corresponds to the Subleq test if the values were unencrypted. A program written in Subleq can run on a Cryptoleq machine, meaning backward compatibility. However, Cryptoleq implements fully homomorphic calculations and is capable of multiplications. Multiplication on an encrypted domain is assisted by a unique function G that is assumed to be difficult to reverse engineer and allows re-encryption of a value based on the O2 operation:
where y~{\displaystyle {\tilde {y}}} is the re-encrypted value of y and 0~{\displaystyle {\tilde {0}}} is encrypted zero. x is the encrypted value of a variable, let it be m, and x¯{\displaystyle {\bar {x}}} equals Nm+1{\displaystyle Nm+1}.
The multiplication algorithm is based on addition and subtraction, uses the function G, and has no conditional jumps or branches. Cryptoleq encryption is based on the Paillier cryptosystem.
In formal semantics, homogeneity is the phenomenon where plural expressions that seem to mean "all" negate to "none" rather than "not all". For example, the English sentence "Robin read the books" requires Robin to have read all of the books, while "Robin didn't read the books" requires her to have read none of them. Neither sentence is true if she read exactly half of the books. Homogeneity effects have been observed in a variety of languages including Japanese, Russian, and Hungarian. Semanticists have proposed a variety of explanations for homogeneity, often involving a combination of presupposition, plural quantification, and trivalent logics. Because analogous effects have been observed with conditionals and other modal expressions, some semanticists have proposed that these phenomena involve pluralities of possible worlds.
Homogeneous interpretations arise when a plural expression seems to mean "all" when asserted but "none" when negated. For example, the English sentence in (1a) is typically interpreted to mean that Robin read all the books, while (1b) is interpreted to mean that she read none of them. This is a puzzle since (1b) would merely mean that some books went unread if "the books" expressed universal quantification, as it appears to do in the positive sentence.[1][2]
Homogeneous readings are also possible with other expressions including conjunctions and bare plurals. For instance, (2a) means that Robin read both books while (2b) means that she read neither; example (3a) means that in general Robin likes books while (3b) means that in general she does not.[1]
Homogeneity effects have been studied in a variety of languages including English, Russian, Japanese and Hungarian. For instance, the Hungarian example in (4) behaves analogously to the English one in (1b).[3]
Homogeneity can be suspended in certain circumstances. For instance, the definite plurals in (1) lose their homogeneous interpretation when an overt universal quantifier is inserted, as shown in (5).[1]
Additionally, the conjunctions in (3) lose their homogeneous interpretation when the connective receives focus.[3]
Homogeneity is important to semantic theory in part because it results in apparent truth value gaps. For example, neither of the sentences in (1) is assertable if Robin read exactly half of the relevant books. As a result, some linguists have attempted to provide unified analyses with other gappy phenomena such as presupposition, scalar implicature, free choice inferences, and vagueness.[1] Homogeneity effects have been argued to appear with semantic types other than individuals. For instance, negated conditionals and modals have been argued to show similar effects, potentially suggesting that they refer to pluralities of possible worlds.[1][4]
In logic, the law of excluded middle or the principle of excluded middle states that for every proposition, either this proposition or its negation is true.[1][2] It is one of the three laws of thought, along with the law of noncontradiction and the law of identity; however, no system of logic is built on just these laws, and none of these laws provides inference rules, such as modus ponens or De Morgan's laws. The law is also known as the law/principle of the excluded third, in Latin principium tertii exclusi. Another Latin designation for this law is tertium non datur or "no third [possibility] is given". In classical logic, the law is a tautology.
In contemporary logic the principle is distinguished from the semantical principle of bivalence, which states that every proposition is either true or false. The principle of bivalence always implies the law of excluded middle, while the converse is not always true. A commonly cited counterexample uses statements unprovable now, but provable in the future, to show that the law of excluded middle may apply when the principle of bivalence fails.[3]
The earliest known formulation is in Aristotle's discussion of the principle of non-contradiction, first proposed in On Interpretation,[4] where he says that of two contradictory propositions (i.e. where one proposition is the negation of the other) one must be true, and the other false.[5] He also states it as a principle in the Metaphysics book 4, saying that it is necessary in every case to affirm or deny,[6] and that it is impossible that there should be anything between the two parts of a contradiction.[7]
Aristotle wrote that ambiguity can arise from the use of ambiguous names, but cannot exist in the facts themselves:
It is impossible, then, that "being a man" should mean precisely "not being a man", if "man" not only signifies something about one subject but also has one significance. … And it will not be possible to be and not to be the same thing, except in virtue of an ambiguity, just as if one whom we call "man", and others were to call "not-man"; but the point in question is not this, whether the same thing can at the same time be and not be a man in name, but whether it can be in fact. (Metaphysics4.4, W. D. Ross (trans.), GBWW 8, 525–526).
Aristotle's assertion that "it will not be possible to be and not to be the same thing" would be written in propositional logic as ~(P ∧ ~P). In modern so-called classical logic, this statement is equivalent to the law of excluded middle (P ∨ ~P), through distribution of the negation in Aristotle's assertion. The former claims that no statement is both true and false, while the latter requires that any statement is either true or false.
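The classical equivalence between the two formulations can be written out step by step, using De Morgan's law and double negation elimination:

```latex
\neg(P \land \neg P)
  \;\equiv\; \neg P \lor \neg\neg P   % De Morgan's law
  \;\equiv\; \neg P \lor P            % double negation (classically valid)
  \;\equiv\; P \lor \neg P
```

Note that the second step relies on double negation elimination, which is exactly the step intuitionists reject; this is why the two laws come apart in intuitionistic logic.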
But Aristotle also writes, "since it is impossible that contradictories should be at the same time true of the same thing, obviously contraries also cannot belong at the same time to the same thing" (Book IV, CH 6, p. 531). He then proposes that "there cannot be an intermediate between contradictories, but of one subject we must either affirm or deny any one predicate" (Book IV, CH 7, p. 531). In the context of Aristotle's traditional logic, this is a remarkably precise statement of the law of excluded middle, P ∨ ~P.
Yet in On Interpretation Aristotle seems to deny the law of excluded middle in the case of future contingents, in his discussion on the sea battle.
Its usual form, "Every judgment is either true or false" [footnote 9] … (from Kolmogorov in van Heijenoort, p. 421). Footnote 9: "This is Leibniz's very simple formulation (see Nouveaux Essais, IV, 2)" (ibid., p. 421).
The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:
∗2⋅11.⊢.p∨∼p{\displaystyle \mathbf {*2\cdot 11} .\ \ \vdash .\ p\ \vee \thicksim p}.[8]
So just what is "truth" and "falsehood"? At the opening, PM quickly announces some definitions:
Truth-values. The "truth-value" of a proposition is truth if it is true and falsehood if it is false* [*This phrase is due to Frege] … the truth-value of "p ∨ q" is truth if the truth-value of either p or q is truth, and is falsehood otherwise … that of "~p" is the opposite of that of p …" (pp. 7–8)
This is not much help. But later, in a much deeper discussion ("Definition and systematic ambiguity of Truth and Falsehood", Chapter II part III, p. 41 ff), PM defines truth and falsehood in terms of a relationship between the "a" and the "b" and the "percipient". For example, "This 'a' is 'b'" (e.g. "This 'object a' is 'red'") really means "'object a' is a sense-datum" and "'red' is a sense-datum", and they "stand in relation" to one another and in relation to "I". Thus what we really mean is: "I perceive that 'This object a is red'" and this is an undeniable-by-3rd-party "truth".
PM further defines a distinction between a "sense-datum" and a "sensation":
That is, when we judge (say) "this is red", what occurs is a relation of three terms, the mind, and "this", and "red". On the other hand, when we perceive "the redness of this", there is a relation of two terms, namely the mind and the complex object "the redness of this" (pp. 43–44).
Russell reiterated his distinction between "sense-datum" and "sensation" in his book The Problems of Philosophy (1912), published at the same time as PM (1910–1913):
Let us give the name of "sense-data" to the things that are immediately known in sensation: such things as colours, sounds, smells, hardnesses, roughnesses, and so on. We shall give the name "sensation" to the experience of being immediately aware of these things … The colour itself is a sense-datum, not a sensation. (p. 12)
Russell further described his reasoning behind his definitions of "truth" and "falsehood" in the same book (Chapter XII, Truth and Falsehood).
From the law of excluded middle, formula ✸2.1 in Principia Mathematica, Whitehead and Russell derive some of the most powerful tools in the logician's argumentation toolkit. (In Principia Mathematica, formulas and propositions are identified by a leading asterisk and two numbers, such as "✸2.1".)
✸2.1 ~p ∨ p "This is the Law of excluded middle" (PM, p. 101).
The proof of ✸2.1 is roughly as follows: "primitive idea" 1.08 defines p → q = ~p ∨ q. Substituting p for q in this rule yields p → p = ~p ∨ p. Since p → p is true (this is Theorem 2.08, which is proved separately), then ~p ∨ p must be true.
✸2.11 p ∨ ~p (Permutation of the assertions is allowed by axiom 1.4)
✸2.12 p → ~(~p) (Principle of double negation, part 1: if "this rose is red" is true then it's not true that "'this rose is not-red' is true".)
✸2.13 p ∨ ~{~(~p)} (Lemma together with 2.12 used to derive 2.14)
✸2.14 ~(~p) → p (Principle of double negation, part 2)
✸2.15 (~p → q) → (~q → p) (One of the four "Principles of transposition". Similar to 1.03, 1.16 and 1.17. A very long demonstration was required here.)
✸2.16 (p → q) → (~q → ~p) (If it's true that "If this rose is red then this pig flies" then it's true that "If this pig doesn't fly then this rose isn't red.")
✸2.17 (~p → ~q) → (q → p) (Another of the "Principles of transposition".)
✸2.18 (~p → p) → p (Called "The complement of reductio ad absurdum. It states that a proposition which follows from the hypothesis of its own falsehood is true" (PM, pp. 103–104).)
Most of these theorems—in particular ✸2.1, ✸2.11, and ✸2.14—are rejected by intuitionism. These tools are recast into another form that Kolmogorov cites as "Hilbert's four axioms of implication" and "Hilbert's two axioms of negation" (Kolmogorov in van Heijenoort, p. 335).
Propositions ✸2.12 and ✸2.14, "double negation":
The intuitionist writings of L. E. J. Brouwer refer to what he calls "the principle of the reciprocity of the multiple species, that is, the principle that for every system the correctness of a property follows from the impossibility of the impossibility of this property" (Brouwer, ibid., p. 335).
This principle is commonly called "the principle of double negation" (PM, pp. 101–102). From the law of excluded middle (✸2.1 and ✸2.11), PM derives principle ✸2.12 immediately. We substitute ~p for p in 2.11 to yield ~p ∨ ~(~p), and by the definition of implication (i.e. 1.01 p → q = ~p ∨ q) then ~p ∨ ~(~p) = p → ~(~p). QED (The derivation of 2.14 is a bit more involved.)
It is correct, at least for bivalent logic—i.e. it can be seen with a Karnaugh map—that this law removes "the middle" of the inclusive-or used in his law (3). And this is the point of Reichenbach's demonstration that some believe the exclusive-or should take the place of the inclusive-or.
About this issue (in admittedly very technical terms) Reichenbach observes:
In line (30) the "(x)" means "for all" or "for every", a form used by Russell and Reichenbach; today the symbolism is usually ∀{\displaystyle \forall }x. Thus an example of the expression would look like this:
From the late 1800s through the 1930s, a bitter, persistent debate raged between Hilbert and his followers versus Hermann Weyl and L. E. J. Brouwer. Brouwer's philosophy, called intuitionism, started in earnest with Leopold Kronecker in the late 1800s.
Hilbert intensely disliked Kronecker's ideas:
Kronecker insisted that there could be no existence without construction. For him, as for Paul Gordan [another elderly mathematician], Hilbert's proof of the finiteness of the basis of the invariant system was simply not mathematics. Hilbert, on the other hand, throughout his life was to insist that if one can prove that the attributes assigned to a concept will never lead to a contradiction, the mathematical existence of the concept is thereby established (Reid p. 34)
It was his [Kronecker's] contention that nothing could be said to have mathematical existence unless it could actually be constructed with a finite number of positive integers (Reid p. 26)
The debate had a profound effect on Hilbert. Reid indicates that Hilbert's second problem (one of Hilbert's problems from the Second International Conference in Paris in 1900) evolved from this debate (italics in the original):
Thus, Hilbert was saying: "If p and ~p are both shown to be true, then p does not exist", and was thereby invoking the law of excluded middle cast into the form of the law of contradiction.
And finally constructivists … restricted mathematics to the study of concrete operations on finite or potentially (but not actually) infinite structures; completed infinite totalities … were rejected, as were indirect proof based on the Law of Excluded Middle. Most radical among the constructivists were the intuitionists, led by the erstwhile topologist L. E. J. Brouwer (Dawson p. 49)
The rancorous debate continued through the early 1900s into the 1920s; in 1927 Brouwer complained about "polemicizing against it [intuitionism] in sneering tones" (Brouwer in van Heijenoort, p. 492). But the debate was fertile: it resulted in Principia Mathematica (1910–1913), and that work gave a precise definition to the law of excluded middle, and all this provided an intellectual setting and the tools necessary for the mathematicians of the early 20th century:
Out of the rancor, and spawned in part by it, there arose several important logical developments; Zermelo's axiomatization of set theory (1908a), that was followed two years later by the first volume of Principia Mathematica, in which Russell and Whitehead showed how, via the theory of types, much of arithmetic could be developed by logicist means (Dawson p. 49)
Brouwer reduced the debate to the use of proofs designed from "negative" or "non-existence" versus "constructive" proof:
In his lecture in 1941 at Yale and the subsequent paper, Gödel proposed a solution: "that the negation of a universal proposition was to be understood as asserting the existence … of a counterexample" (Dawson, p. 157)
Gödel's approach to the law of excluded middle was to assert that objections against "the use of 'impredicative definitions'" had "carried more weight" than "the law of excluded middle and related theorems of the propositional calculus" (Dawson p. 156). He proposed his "system Σ … and he concluded by mentioning several applications of his interpretation. Among them were a proof of the consistency with intuitionistic logic of the principle ~(∀A: (A ∨ ~A)) (despite the inconsistency of the assumption ∃A: ~(A ∨ ~A))" (Dawson, p. 157)
The debate seemed to weaken: mathematicians, logicians and engineers continue to use the law of excluded middle (and double negation) in their daily work.
The following highlights the deep mathematical and philosophic problem behind what it means to "know", and also helps elucidate what the "law" implies (i.e. what the law really means). Their difficulties with the law emerge: that they do not want to accept as true implications drawn from that which is unverifiable (untestable, unknowable) or from the impossible or the false. (All quotes are from van Heijenoort, italics added).
Brouwer offers his definition of "principle of excluded middle"; we see here also the issue of "testability":
Kolmogorov's definition cites Hilbert's two axioms of negation
where ∨ means "or". The equivalence of the two forms is easily proved (p. 421)
For example, if P is the proposition:
then the law of excluded middle holds that the logical disjunction:
is true by virtue of its form alone. That is, the "middle" position, that Socrates is neither mortal nor not-mortal, is excluded by logic, and therefore either the first possibility (Socrates is mortal) or its negation (it is not the case that Socrates is mortal) must be true.
An example of an argument that depends on the law of excluded middle follows.[10] We seek to prove that
It is known that 2{\displaystyle {\sqrt {2}}} is irrational (see proof). Consider the number
Clearly (excluded middle) this number is either rational or irrational. If it is rational, the proof is complete, and
But if 22{\displaystyle {\sqrt {2}}^{\sqrt {2}}} is irrational, then let
Then
and 2 is certainly rational. This concludes the proof.
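The computation in the irrational case can be written out explicitly using the exponent laws:

```latex
a^{b} \;=\; \left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}
      \;=\; \sqrt{2}^{\,\sqrt{2}\cdot\sqrt{2}}
      \;=\; \sqrt{2}^{\,2}
      \;=\; 2
```

So in either case there exist irrational a and b with a^b rational, even though the argument never determines which case actually holds.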
In the above argument, the assertion "this number is either rational or irrational" invokes the law of excluded middle. An intuitionist, for example, would not accept this argument without further support for that statement. This might come in the form of a proof that the number in question is in fact irrational (or rational, as the case may be); or a finite algorithm that could determine whether the number is rational.
The above proof is an example of a non-constructive proof disallowed by intuitionists:
The proof is non-constructive because it doesn't give specific numbers a{\displaystyle a} and b{\displaystyle b} that satisfy the theorem but only two separate possibilities, one of which must work. (Actually a=22{\displaystyle a={\sqrt {2}}^{\sqrt {2}}} is irrational but there is no known easy proof of that fact.) (Davis 2000:220)
(Constructive proofs of the specific example above are not hard to produce; for example a=2{\displaystyle a={\sqrt {2}}} and b=log29{\displaystyle b=\log _{2}9} are both easily shown to be irrational, and ab=3{\displaystyle a^{b}=3}; a proof allowed by intuitionists.)
By non-constructive Davis means that "a proof that there actually are mathematic entities satisfying certain conditions would not have to provide a method to exhibit explicitly the entities in question." (p. 85). Such proofs presume the existence of a totality that is complete, a notion disallowed by intuitionists when extended to the infinite—for them the infinite can never be completed:
In classical mathematics there occur non-constructive or indirect existence proofs, which intuitionists do not accept. For example, to prove there exists an n such that P(n), the classical mathematician may deduce a contradiction from the assumption for all n, not P(n). Under both the classical and the intuitionistic logic, by reductio ad absurdum this gives not for all n, not P(n). The classical logic allows this result to be transformed into there exists an n such that P(n), but not in general the intuitionistic … the classical meaning, that somewhere in the completed infinite totality of the natural numbers there occurs an n such that P(n), is not available to him, since he does not conceive the natural numbers as a completed totality.[11] (Kleene 1952:49–50)
David Hilbert and Luitzen E. J. Brouwer both give examples of the law of excluded middle extended to the infinite. Hilbert's example: "the assertion that either there are only finitely many prime numbers or there are infinitely many" (quoted in Davis 2000:97); and Brouwer's: "Every mathematical species is either finite or infinite." (Brouwer 1923 in van Heijenoort 1967:336). In general, intuitionists allow the use of the law of excluded middle when it is confined to discourse over finite collections (sets), but not when it is used in discourse over infinite sets (e.g. the natural numbers). Thus intuitionists absolutely disallow the blanket assertion: "For all propositions P concerning infinite sets D: P or ~P" (Kleene 1952:48).[12]
Putative counterexamples to the law of excluded middle include the liar paradox or Quine's paradox. Certain resolutions of these paradoxes, particularly Graham Priest's dialetheism as formalised in LP, have the law of excluded middle as a theorem, but resolve out the Liar as both true and false. In this way, the law of excluded middle is true, but because truth itself, and therefore disjunction, is not exclusive, it says next to nothing if one of the disjuncts is paradoxical, or both true and false.
The Catuṣkoṭi (tetralemma) is an ancient alternative to the law of excluded middle, which examines all four possible assignments of truth values to a proposition and its negation. It has been important in Indian logic and Buddhist logic as well as the ancient Greek philosophical school known as Pyrrhonism.
Many modern logic systems replace the law of excluded middle with the concept of negation as failure. Instead of a proposition's being either true or false, a proposition is either true or not able to be proved true.[13] These two dichotomies only differ in logical systems that are not complete. The principle of negation as failure is used as a foundation for autoepistemic logic, and is widely used in logic programming. In these systems, the programmer is free to assert the law of excluded middle as a true fact, but it is not built in a priori into these systems.
Mathematicians such as L. E. J. Brouwer and Arend Heyting have also contested the usefulness of the law of excluded middle in the context of modern mathematics.[14]
In modern mathematical logic, the excluded middle has been argued to result in possible self-contradiction. It is possible in logic to make well-constructed propositions that can be neither true nor false; a common example of this is the "Liar's paradox",[15] the statement "this statement is false", which is argued to itself be neither true nor false. Arthur Prior has argued that the paradox is not an example of a statement that cannot be true or false. The law of excluded middle still holds here, as the negation of this statement, "This statement is not false", can be assigned true. In set theory, such a self-referential paradox can be constructed by examining the set "the set of all sets that do not contain themselves". This set is unambiguously defined, but leads to Russell's paradox:[16][17] does the set contain, as one of its elements, itself? However, in the modern Zermelo–Fraenkel set theory, this type of contradiction is no longer admitted. Furthermore, paradoxes of self-reference can be constructed without even invoking negation at all, as in Curry's paradox.[citation needed]
Some systems of logic have different but analogous laws. For some finite n-valued logics, there is an analogous law called the law of excluded (n+1)-th. If negation is cyclic and "∨" is a "max operator", then the law can be expressed in the object language by (P ∨ ~P ∨ ~~P ∨ ... ∨ ~...~P), where "~...~" represents n−1 negation signs and "∨ ... ∨" n−1 disjunction signs. It is easy to check that the sentence must receive at least one of the n truth values (and not a value that is not one of the n).
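This check can be carried out mechanically. With truth values 0, …, n−1, cyclic negation ~x = (x+1) mod n, and disjunction as max, the disjunction of a proposition with all of its iterated negations always evaluates to the maximal value, whatever the input (a sketch; the numeric encoding is an assumption):

```python
# Verify the "law of excluded (n+1)-th" for an n-valued logic with
# cyclic negation (~x = (x+1) mod n) and disjunction as max.
def excluded_nth_holds(n):
    top = n - 1  # the maximal truth value
    # P v ~P v ~~P v ... v ~...~P  over every possible value of P:
    return all(
        max((p + k) % n for k in range(n)) == top
        for p in range(n)
    )

print([excluded_nth_holds(n) for n in (2, 3, 4, 5)])  # [True, True, True, True]
```

For n = 2 this reduces to the classical law of excluded middle.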
Other systems reject the law entirely.[specify]
A particularly well-studied intermediate logic is given by De Morgan logic, which adds the axiom ¬P∨¬¬P{\displaystyle \neg P\lor \neg \neg P} to intuitionistic logic; this axiom is sometimes called the law of the weak excluded middle.
This is equivalent to a few other statements:
In logic, a three-valued logic (also trinary logic, trivalent, ternary, or trilean,[1] sometimes abbreviated 3VL) is any of several many-valued logic systems in which there are three truth values indicating true, false, and some third value. This is contrasted with the more commonly known bivalent logics (such as classical sentential or Boolean logic) which provide only for true and false.
Emil Leon Post is credited with first introducing additional logical truth degrees in his 1921 theory of elementary propositions.[2] The conceptual form and basic ideas of three-valued logic were initially published by Jan Łukasiewicz and Clarence Irving Lewis. These were then re-formulated by Grigore Constantin Moisil in an axiomatic algebraic form, and also extended to n-valued logics in 1945.
Around 1910, Charles Sanders Peirce defined a many-valued logic system. He never published it. In fact, he did not even number the three pages of notes where he defined his three-valued operators.[3] Peirce soundly rejected the idea that all propositions must be either true or false; boundary-propositions, he writes, are "at the limit between P and not P."[4] However, as confident as he was that "Triadic Logic is universally true,"[5] he also jotted down that "All this is mighty close to nonsense."[6] Only in 1966, when Max Fisch and Atwell Turquette began publishing what they rediscovered in his unpublished manuscripts, did Peirce's triadic ideas become widely known.[7]
Broadly speaking, the primary motivation for research of three-valued logic is to represent the truth value of a statement that cannot be represented as true or false.[8] Łukasiewicz initially developed three-valued logic for the problem of future contingents, to represent the truth value of statements about the undetermined future.[9][10][11] Bruno de Finetti used a third value to represent when "a given individual does not know the [correct] response, at least at a given moment."[12][8] Hilary Putnam used it to represent values that cannot physically be decided:[13]
For example, if we have verified (by using a speedometer) that the velocity of a motor car is such and such, it might be impossible in such a world to verify or falsify certain statements concerning its position at that moment. If we know by reference to a physical law together with certain observational data that a statement as to the position of a motor car can never be falsified or verified, then there may be some point to not regarding the statement as true or false, but regarding it as "middle". It is only because, in macrocosmic experience, everything that we regard as an empirically meaningful statement seems to be at least potentially verifiable or falsifiable that we prefer the convention according to which we say that every such statement is either true or false, but in many cases we don't know which.
Similarly, Stephen Cole Kleene used a third value to represent predicates that are "undecidable by [any] algorithms whether true or false".[14][8]
As with bivalent logic, truth values in ternary logic may be represented numerically using various representations of the ternary numeral system. A few of the more common examples are:
Inside a ternary computer, ternary values are represented by ternary signals.
This article mainly illustrates a system of ternary propositional logic using the truth values {false, unknown, true}, and extends conventional Boolean connectives to a trivalent context.
Boolean logic allows 2^2 = 4 unary operators; the addition of a third value in ternary logic leads to a total of 3^3 = 27 distinct operators on a single input value. (This may be made clear by considering all possible truth tables for an arbitrary unary operator. Given 2 possible values TF of the single Boolean input, there are four different patterns of output TT, TF, FT, FF, resulting from the following unary operators acting on each value: always T, identity, NOT, always F. Given three possible values of a ternary variable, each times three possible results of a unary operation, there are 27 different output patterns: TTT, TTU, TTF, TUT, TUU, TUF, TFT, TFU, TFF, UTT, UTU, UTF, UUT, UUU, UUF, UFT, UFU, UFF, FTT, FTU, FTF, FUT, FUU, FUF, FFT, FFU, and FFF.) Similarly, where Boolean logic has 2^(2×2) = 16 distinct binary operators (operators with 2 inputs) possible, ternary logic has 3^(3×3) = 19,683 such operators. Where the nontrivial Boolean operators can be named (AND, NAND, OR, NOR, XOR, XNOR (equivalence), and 4 variants of implication or inequality), with six trivial operators considering 0 or 1 inputs only, it is unreasonable to attempt to name all but a small fraction of the possible ternary operators.[18] Just as in bivalent logic, where not all operators are given names and subsets of functionally complete operators are used, there may be functionally complete sets of ternary-valued operators.
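The counting argument generalizes: a k-valued logic has k^(k^n) distinct n-ary operators, since each of the k^n possible input combinations may independently map to any of the k truth values. A quick sketch:

```python
# Number of distinct n-ary operators in a k-valued logic: k ** (k ** n),
# since each of the k**n input combinations may map to any of k outputs.
def num_operators(k, n):
    return k ** (k ** n)

print(num_operators(2, 1))  # 4 Boolean unary operators
print(num_operators(3, 1))  # 27 ternary unary operators
print(num_operators(2, 2))  # 16 Boolean binary operators
print(num_operators(3, 2))  # 19683 ternary binary operators
```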
Below is a set of truth tables showing the logic operations for Stephen Cole Kleene's "strong logic of indeterminacy" and Graham Priest's "logic of paradox".
If the truth values 1, 0, and −1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition, xy uses multiplication, and x² uses exponentiation), or by the minimum/maximum functions:
In these truth tables, the unknown state can be thought of as neither true nor false in Kleene logic, or thought of as both true and false in Priest logic. The difference lies in the definition of tautologies. Where Kleene logic's only designated truth value is T, Priest logic's designated truth values are both T and U. In Kleene logic, the knowledge of whether any particular unknown state secretly represents true or false at any moment in time is not available. However, certain logical operations can yield an unambiguous result, even if they involve an unknown operand. For example, because true OR true equals true, and true OR false also equals true, then true OR unknown equals true as well. In this example, because either bivalent state could be underlying the unknown state, and either state also yields the same result, true results in all three cases.
If numeric values, e.g. balanced ternary values, are assigned to false, unknown and true such that false is less than unknown and unknown is less than true, then A AND B AND C ... = MIN(A, B, C ...) and A OR B OR C ... = MAX(A, B, C ...).
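Under the balanced-ternary encoding (false = −1, unknown = 0, true = 1), the strong Kleene connectives reduce to min, max, and arithmetic negation; a minimal sketch:

```python
# Strong Kleene / Priest connectives on balanced-ternary truth values,
# with false = -1, unknown = 0, true = 1 (matching the arithmetic encoding above).
def NOT(a):   return -a
def AND(*xs): return min(xs)
def OR(*xs):  return max(xs)

T, U, F = 1, 0, -1
print(OR(T, U))   # 1: true OR unknown is true
print(AND(F, U))  # -1: false AND unknown is false
print(AND(T, U))  # 0: remains unknown
```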
Material implication for Kleene logic can be defined as:
A → B =def OR(NOT(A), B), and its truth table is
which differs from that for Łukasiewicz logic (described below).
Kleene logic has no tautologies (valid formulas) because whenever all of the atomic components of a well-formed formula are assigned the value Unknown, the formula itself must also have the value Unknown. (And the only designated truth value for Kleene logic is True.) However, the lack of valid formulas does not mean that it lacks valid arguments or inference rules. An argument is semantically valid in Kleene logic if, whenever (for any interpretation/model) all of its premises are True, the conclusion must also be True. (The Logic of Paradox (LP) has the same truth tables as Kleene logic, but it has two designated truth values instead of one; these are True and Both (the analogue of Unknown), so LP does have tautologies, but it has fewer valid inference rules.)[19]
Łukasiewicz's Ł3 has the same tables for AND, OR, and NOT as the Kleene logic given above, but differs in its definition of implication in that "unknown implies unknown" is true. This section follows the presentation from Malinowski's chapter of the Handbook of the History of Logic, vol. 8.[20]
The truth table for material implication in Łukasiewicz logic is
In fact, using Łukasiewicz's implication and negation, the other usual connectives may be derived as:
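The standard derivations (A ∨ B := (A → B) → B and, via De Morgan, A ∧ B := ¬(¬A ∨ ¬B)) can be checked mechanically; a sketch over the truth values {0, 1/2, 1}, using the Łukasiewicz implication min(1, 1 − a + b):

```python
# Łukasiewicz Ł3 on truth values {0, 1/2, 1}. Implication and negation are
# primitive; the loop checks the standard derivations of OR and AND.
from fractions import Fraction

HALF = Fraction(1, 2)
VALUES = [Fraction(0), HALF, Fraction(1)]

def IMP(a, b): return min(Fraction(1), 1 - a + b)  # Łukasiewicz implication
def NOT(a):    return 1 - a

for a in VALUES:
    for b in VALUES:
        OR_derived = IMP(IMP(a, b), b)                        # A OR B := (A -> B) -> B
        AND_derived = NOT(IMP(IMP(NOT(a), NOT(b)), NOT(b)))   # De Morgan dual
        assert OR_derived == max(a, b)
        assert AND_derived == min(a, b)

print(IMP(HALF, HALF))  # 1: "unknown implies unknown" is true in Ł3
```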
It is also possible to derive a few other useful unary operators (first derived by Tarski in 1921):[citation needed]
They have the following truth tables:
M is read as "it is not false that ..." or, in the (unsuccessful) Tarski–Łukasiewicz attempt to axiomatize modal logic using a three-valued logic, as "it is possible that ..." L is read as "it is true that ..." or "it is necessary that ..." Finally, I is read as "it is unknown that ..." or "it is contingent that ..."
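Assuming Tarski's standard definition Mp := ¬p → p, with L as its dual and I as one way of expressing contingency, these truth tables can be reproduced:

```python
# Tarski's unary operators over Ł3, on truth values {0, 1/2, 1}.
# M ("possibly") is standardly defined as Mp := NOT(p) -> p; L is its dual,
# and I (contingency) is sketched here as M(p) AND NOT(L(p)).
from fractions import Fraction

HALF = Fraction(1, 2)

def IMP(a, b): return min(Fraction(1), 1 - a + b)
def NOT(a):    return 1 - a

def M(p): return IMP(NOT(p), p)        # "it is not false that p"
def L(p): return NOT(M(NOT(p)))        # "it is true that p"
def I(p): return min(M(p), NOT(L(p)))  # "it is unknown that p"

for p in [Fraction(0), HALF, Fraction(1)]:
    print(p, M(p), L(p), I(p))
# columns: M = F,T,T   L = F,F,T   I = F,T,F
```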
In Łukasiewicz's Ł3 the designated value is True, meaning that only a proposition having this value everywhere is considered a tautology. For example, A → A and A ↔ A are tautologies in Ł3 and also in classical logic. Not all tautologies of classical logic lift to Ł3 "as is". For example, the law of excluded middle, A ∨ ¬A, and the law of non-contradiction, ¬(A ∧ ¬A), are not tautologies in Ł3. However, using the operator I defined above, it is possible to state tautologies that are their analogues:
The truth table for the material implication of R-mingle 3 (RM3) is
A defining characteristic of RM3 is the lack of the axiom of Weakening, A → (B → A), which, by adjointness, is equivalent to the projection from the product, A ∧ B → A.
RM3 is a non-cartesian symmetric monoidal closed category; the product, which is left-adjoint to the implication, lacks valid projections, and has U as the monoid identity.
This logic is equivalent to an "ideal" paraconsistent logic which also obeys the contrapositive.
The logic of here and there (HT, also referred to as Smetanov logic SmT or as Gödel G3 logic), introduced by Heyting in 1930[21] as a model for studying intuitionistic logic, is a three-valued intermediate logic where the third truth value NF (not false) has the semantics of a proposition that can be intuitionistically proven not to be false, but does not have an intuitionistic proof of correctness.
It may be defined either by appending one of the two equivalent axioms (¬q → p) → (((p → q) → p) → p) or equivalently p ∨ (¬q) ∨ (p → q) to the axioms of intuitionistic logic, or by explicit truth tables for its operations. In particular, conjunction and disjunction are the same as for Kleene's and Łukasiewicz's logic, while the negation is different.
HT logic is the unique coatom in the lattice of intermediate logics. In this sense it may be viewed as the "second strongest" intermediate logic after classical logic.
This logic is also known as a weak form of Kleene's three-valued logic.
These functions may also be expressed with arithmetic expressions. Given that the set of truth values is {0, 1/2, 1}, they are:
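The arithmetic expressions themselves do not survive in the text above; assuming the usual "contagious unknown" definition of the weak Kleene connectives, they can be sketched directly:

```python
# Weak ("contagious") Kleene connectives on {0, 1/2, 1}: any operation with
# an unknown (1/2) operand yields unknown, regardless of the other operand.
from fractions import Fraction

U = Fraction(1, 2)  # the third value, "unknown"

def weak(op):
    """Lift a classical connective: any unknown operand makes the result unknown."""
    def connective(*xs):
        return U if U in xs else op(*xs)
    return connective

AND = weak(min)
OR  = weak(max)
NOT = weak(lambda a: 1 - a)

print(OR(Fraction(1), U))   # 1/2: unlike strong Kleene, true OR unknown is unknown
print(AND(Fraction(0), U))  # 1/2: false AND unknown is unknown too
```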
Some 3VL modular arithmetics have been introduced more recently, motivated by circuit problems rather than philosophical issues:[22]
The database query language SQL implements ternary logic as a means of handling comparisons with NULL field content. SQL uses a common fragment of the Kleene K3 logic, restricted to the AND, OR, and NOT tables. | https://en.wikipedia.org/wiki/Trivalent_logic
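SQLite exhibits this behavior directly; a sketch using the sqlite3 module from the Python standard library:

```python
# SQL's NULL handling follows the Kleene K3 tables for AND, OR, and NOT.
import sqlite3

conn = sqlite3.connect(":memory:")

def q(expr):
    """Evaluate a scalar SQL expression; SQL NULL comes back as Python None."""
    return conn.execute(f"SELECT {expr}").fetchone()[0]

print(q("NULL AND 0"))  # 0:    false AND unknown = false
print(q("NULL AND 1"))  # None: true AND unknown = unknown
print(q("NULL OR 1"))   # 1:    true OR unknown = true
print(q("NULL OR 0"))   # None: false OR unknown = unknown
print(q("NOT NULL"))    # None: NOT unknown = unknown
```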
In logic, the corresponding conditional of an argument (or derivation) is a material conditional whose antecedent is the conjunction of the argument's (or derivation's) premises and whose consequent is the argument's conclusion. An argument is valid if and only if its corresponding conditional is a logical truth. It follows that an argument is valid if and only if the negation of its corresponding conditional is a contradiction. Therefore, the construction of a corresponding conditional provides a useful technique for determining the validity of an argument.
Consider the argument A:

Either it is hot or it is cold
It is not hot
Therefore, it is cold
This argument is of the form:
Either P or Q
Not P
Therefore Q

or (using standard symbols of propositional calculus):

P ∨ Q
¬P
____________
Q
The corresponding conditional C is:
IF ((P or Q) and not P) THEN Q, or (using standard symbols): ((P ∨ Q) ∧ ¬P) → Q
and the argument A is valid just in case the corresponding conditional C is a logical truth.
If C is a logical truth then ¬C entails Falsity (The False).
Thus, any argument is valid if and only if the denial of its corresponding conditional leads to a contradiction.
If we construct a truth table for C we will find that it comes out T (true) on every row (and of course if we construct a truth table for the negation of C it will come out F (false) in every row). These results confirm the validity of the argument A.
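The truth-table check can be automated; a minimal sketch in Python (variable names are illustrative):

```python
# Brute-force truth table for the corresponding conditional
# C = ((P or Q) and not P) -> Q, with "->" read as material implication.
from itertools import product

def implies(a, b):
    return (not a) or b

def C(p, q):
    return implies((p or q) and not p, q)

rows = [C(p, q) for p, q in product([True, False], repeat=2)]
print(rows)       # [True, True, True, True]
print(all(rows))  # True: C is a tautology, so argument A is valid
```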
Some arguments need first-order predicate logic to reveal their forms, and they cannot be tested properly by truth tables.
Consider the argument A1:
Some mortals are not Greeks
Some Greeks are not men
Not every man is a logician
Therefore, some mortals are not logicians
To test this argument for validity, construct the corresponding conditional C1 (you will need first-order predicate logic), negate it, and see if you can derive a contradiction from it. If you succeed, then the argument is valid.
Instead of attempting to derive the conclusion from the premises, proceed as follows.
To test the validity of an argument: (a) translate, as necessary, each premise and the conclusion into sentential or predicate logic sentences; (b) construct from these the negation of the corresponding conditional; (c) see if a contradiction can be derived from it (or, if feasible, construct a truth table for it and see if it comes out false on every row). Alternatively, construct a truth tree and see if every branch is closed. Success proves the validity of the original argument.
If deriving a contradiction proves difficult, proceed as follows: from the negation of the corresponding conditional, derive a theorem in conjunctive normal form in the methodical fashions described in textbooks. The theorem in conjunctive normal form will be a contradiction if, and only if, the original argument was valid, and if it is a contradiction, this will be apparent. | https://en.wikipedia.org/wiki/Corresponding_conditional
Counterfactual conditionals (also contrafactual, subjunctive or X-marked) are conditional sentences which discuss what would have been true under different circumstances, e.g. "If Peter believed in ghosts, he would be afraid to be here." Counterfactuals are contrasted with indicatives, which are generally restricted to discussing open possibilities. Counterfactuals are characterized grammatically by their use of fake tense morphology, which some languages use in combination with other kinds of morphology including aspect and mood.
Counterfactuals are one of the most studied phenomena in philosophical logic, formal semantics, and philosophy of language. They were first discussed as a problem for the material conditional analysis of conditionals, which treats them all as trivially true. Starting in the 1960s, philosophers and linguists developed the now-classic possible world approach, in which a counterfactual's truth hinges on its consequent holding at certain possible worlds where its antecedent holds. More recent formal analyses have treated them using tools such as causal models and dynamic semantics. Other research has addressed their metaphysical, psychological, and grammatical underpinnings, while applying some of the resultant insights to fields including history, marketing, and epidemiology.
An example of the difference between indicative and counterfactual conditionals is the following English minimal pair:
These conditionals differ in both form and meaning. The indicative conditional uses the present tense form "owns" and therefore conveys that the speaker is agnostic about whether Sally in fact owns a donkey. The counterfactual example uses the fake tense form "owned" in the "if" clause and the past-inflected modal "would" in the "then" clause. As a result, it conveys that Sally does not in fact own a donkey. English has several other grammatical forms whose meanings are sometimes included under the umbrella of counterfactuality. One is the past perfect counterfactual, which contrasts with indicatives and simple past counterfactuals in its use of pluperfect morphology:[5]
Another kind of conditional uses the form "were", generally referred to as the irrealis or subjunctive form.[6]
Past perfect and irrealis counterfactuals can undergo conditional inversion:[7]
The term counterfactual conditional is widely used as an umbrella term for the kinds of sentences shown above. However, not all conditionals of this sort express contrary-to-fact meanings. For instance, the classic example known as the "Anderson Case" has the characteristic grammatical form of a counterfactual conditional, but does not convey that its antecedent is false or unlikely.[8][9]
Such conditionals are also widely referred to as subjunctive conditionals, though this term is likewise acknowledged as a misnomer even by those who use it.[11] Many languages do not have a morphological subjunctive (e.g. Danish and Dutch) and many that do have it do not use it for this sort of conditional (e.g. French, Swahili, all Indo-Aryan languages that have a subjunctive). Moreover, languages that do use the subjunctive for such conditionals only do so if they have a specific past subjunctive form. Thus, subjunctive marking is neither necessary nor sufficient for membership in this class of conditionals.[12][13][9]
The terms counterfactual and subjunctive have sometimes been repurposed for more specific uses. For instance, the term "counterfactual" is sometimes applied to conditionals that express a contrary-to-fact meaning, regardless of their grammatical structure.[14][8] Along similar lines, the term "subjunctive" is sometimes used to refer to conditionals that bear fake past or irrealis marking, regardless of the meaning they convey.[14][15]
Recently the term X-Marked has been proposed as a replacement, evoking the extra marking that these conditionals bear. Those adopting this terminology refer to indicative conditionals as O-Marked conditionals, reflecting their ordinary marking.[16][17][3]
The antecedent of a conditional is sometimes referred to as its "if"-clause or protasis. The consequent of a conditional is sometimes referred to as a "then"-clause or as an apodosis.
Counterfactuals were first discussed by Nelson Goodman as a problem for the material conditional used in classical logic. Because of these problems, early work such as that of W.V. Quine held that counterfactuals are not strictly logical, and do not make true or false claims about the world. However, in the 1960s and 1970s, work by Robert Stalnaker and David Lewis showed that these problems are surmountable given an appropriate intensional logical framework. Work since then in formal semantics, philosophical logic, philosophy of language, and cognitive science has built on this insight, taking it in a variety of different directions.[18]
According to the material conditional analysis, a natural language conditional, a statement of the form "if P then Q", is true whenever its antecedent, P, is false. Since counterfactual conditionals are those whose antecedents are false, this analysis would wrongly predict that all counterfactuals are vacuously true. Goodman illustrates this point using the following pair in a context where it is understood that the piece of butter under discussion had not been heated.[19]
More generally, such examples show that counterfactuals are not truth-functional. In other words, knowing whether the antecedent and consequent are actually true is not sufficient to determine whether the counterfactual itself is true.[18]
Counterfactuals are context dependent and vague. For example, either of the following statements can be reasonably held true, though not at the same time:[20]
Counterfactuals are non-monotonic in the sense that their truth values can be changed by adding extra material to their antecedents. This fact is illustrated by Sobel sequences such as the following:[19][21][22]
One way of formalizing this fact is to say that the principle of Antecedent Strengthening should not hold for any connective > intended as a formalization of natural language conditionals.
The most common logical accounts of counterfactuals are couched in possible world semantics. Broadly speaking, these approaches have in common that they treat a counterfactual A > B as true if B holds across some set of possible worlds where A is true. They vary mainly in how they identify the set of relevant A-worlds.
David Lewis's variably strict conditional is considered the classic analysis within philosophy. The closely related premise semantics proposed by Angelika Kratzer is often taken as the standard within linguistics. However, there are numerous possible worlds approaches on the market, including dynamic variants of the strict conditional analysis originally dismissed by Lewis.
The strict conditional analysis treats natural language counterfactuals as being equivalent to the modal logic formula ◻(P → Q). In this formula, ◻ expresses necessity and → is understood as material implication. This approach was first proposed in 1912 by C.I. Lewis as part of his axiomatic approach to modal logic.[18] In modern relational semantics, this means that the strict conditional is true at w iff the corresponding material conditional is true throughout the worlds accessible from w. More formally:
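The formal clause can be sketched over an explicit finite model; the worlds, the accessibility relation R, and the valuations below are all hypothetical:

```python
# Strict conditional in relational semantics: Box(P -> Q) is true at w iff
# the material conditional P -> Q holds at every world accessible from w.
def strict_conditional(w, R, P, Q):
    return all((not P(v)) or Q(v) for v in R[w])

worlds = ["w0", "w1", "w2"]
R = {"w0": ["w1", "w2"], "w1": [], "w2": ["w2"]}  # hypothetical accessibility
P = lambda v: v in {"w1", "w2"}                   # P true at w1 and w2
Q = lambda v: v in {"w1"}                         # Q true only at w1

print(strict_conditional("w0", R, P, Q))  # False: at w2, P holds but Q fails
print(strict_conditional("w1", R, P, Q))  # True: vacuous (no accessible worlds)
```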
Unlike the material conditional, the strict conditional is not vacuously true when its antecedent is false. To see why, observe that both P and ◻(P → Q) will be false at w if there is some accessible world v where P is true and Q is not. The strict conditional is also context-dependent, at least when given a relational semantics (or something similar). In the relational framework, accessibility relations are parameters of evaluation which encode the range of possibilities which are treated as "live" in the context. Since the truth of a strict conditional can depend on the accessibility relation used to evaluate it, this feature of the strict conditional can be used to capture context-dependence.
The strict conditional analysis encounters many known problems, notably monotonicity. In the classical relational framework, when using a standard notion of entailment, the strict conditional is monotonic, i.e. it validates Antecedent Strengthening. To see why, observe that if P → Q holds at every world accessible from w, the monotonicity of the material conditional guarantees that P ∧ R → Q will be too. Thus, we will have that ◻(P → Q) ⊨ ◻(P ∧ R → Q).
This fact led to widespread abandonment of the strict conditional, in particular in favor of Lewis's variably strict analysis. However, subsequent work has revived the strict conditional analysis by appealing to context sensitivity. This approach was pioneered by Warmbrōd (1981), who argued that Sobel sequences do not demand a non-monotonic logic, but in fact can rather be explained by speakers switching to more permissive accessibility relations as the sequence proceeds. In his system, a counterfactual like "If Hannah had drunk coffee, she would be happy" would normally be evaluated using a model where Hannah's coffee is gasoline-free in all accessible worlds. If this same model were used to evaluate a subsequent utterance of "If Hannah had drunk coffee and the coffee had gasoline in it...", this second conditional would come out as trivially true, since there are no accessible worlds where its antecedent holds. Warmbrōd's idea was that speakers will switch to a model with a more permissive accessibility relation in order to avoid this triviality.
Subsequent work by Kai von Fintel (2001), Thony Gillies (2007), and Malte Willer (2019) has formalized this idea in the framework of dynamic semantics, and given a number of linguistic arguments in its favor. One argument is that conditional antecedents license negative polarity items, which are thought to be licensed only by monotonic operators.
Another argument in favor of the strict conditional comes from Irene Heim's observation that Sobel sequences are generally infelicitous (i.e. sound strange) in reverse.
Sarah Moss (2012) and Karen Lewis (2018) have responded to these arguments, showing that a version of the variably strict analysis can account for these patterns, and arguing that such an account is preferable since it can also account for apparent exceptions. As of 2020, this debate continues in the literature, with accounts such as Willer (2019) arguing that a strict conditional account can cover these exceptions as well.[18]
In the variably strict approach, the semantics of a conditional A > B is given by some function on the relative closeness of worlds where A is true and B is true, on the one hand, and worlds where A is true but B is not, on the other.
On Lewis's account, A > C is (a) vacuously true if and only if there are no worlds where A is true (for example, if A is logically or metaphysically impossible); (b) non-vacuously true if and only if, among the worlds where A is true, some worlds where C is true are closer to the actual world than any world where C is not true; or (c) false otherwise. Although in Lewis's Counterfactuals it was unclear what he meant by 'closeness', in later writings, Lewis made it clear that he did not intend the metric of 'closeness' to be simply our ordinary notion of overall similarity.
Example:
On Lewis's account, the truth of this statement consists in the fact that, among possible worlds where he ate more for breakfast, there is at least one world where he is not hungry at 11 am and which is closer to our world than any world where he ate more for breakfast but is still hungry at 11 am.
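Lewis's clauses (a)-(c) can be sketched over a finite toy model with a numeric "closeness to actuality" measure; the worlds, distances, and propositions below are all hypothetical:

```python
# Variably strict conditional over a finite set of worlds with a numeric
# closeness measure. A > C is vacuously true if A holds nowhere, and
# non-vacuously true iff some A-and-C world is closer than every
# A-and-not-C world.
def lewis(worlds, dist, A, C):
    a_worlds = [w for w in worlds if A(w)]
    if not a_worlds:
        return True                     # clause (a): vacuous truth
    ac = [dist[w] for w in a_worlds if C(w)]
    anc = [dist[w] for w in a_worlds if not C(w)]
    return bool(ac) and (not anc or min(ac) < min(anc))  # clause (b), else (c)

worlds = ["actual", "u", "v"]
dist = {"actual": 0, "u": 1, "v": 2}    # u is closer to actuality than v
A = lambda w: w in {"u", "v"}           # "he ate more breakfast"
C = lambda w: w == "u"                  # "he is not hungry at 11 am"

print(lewis(worlds, dist, A, C))  # True: the closest A-world is a C-world
```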
Stalnaker's account differs from Lewis's most notably in his acceptance of the limit and uniqueness assumptions. The uniqueness assumption is the thesis that, for any antecedent A, among the possible worlds where A is true, there is a single (unique) one that is closest to the actual world. The limit assumption is the thesis that, for a given antecedent A, if there is a chain of possible worlds where A is true, each closer to the actual world than its predecessor, then the chain has a limit: a possible world where A is true that is closer to the actual world than all worlds in the chain. (The uniqueness assumption entails the limit assumption, but the limit assumption does not entail the uniqueness assumption.) On Stalnaker's account, A > C is non-vacuously true if and only if, at the closest world where A is true, C is true. So, the above example is true just in case at the single, closest world where he ate more breakfast, he does not feel hungry at 11 am. Although it is controversial, Lewis rejected the limit assumption (and therefore the uniqueness assumption) because it rules out the possibility that there might be worlds that get closer and closer to the actual world without limit. For example, there might be an infinite series of worlds, each with a coffee cup a smaller fraction of an inch to the left of its actual position, but none of which is uniquely the closest. (See Lewis 1973: 20.)
One consequence of Stalnaker's acceptance of the uniqueness assumption is that, if the law of excluded middle is true, then all instances of the formula (A > C) ∨ (A > ¬C) are true. The law of excluded middle is the thesis that for all propositions p, p ∨ ¬p is true. If the uniqueness assumption is true, then for every antecedent A, there is a uniquely closest world where A is true. If the law of excluded middle is true, any consequent C is either true or false at that world where A is true. So for every counterfactual A > C, either A > C or A > ¬C is true. This is called conditional excluded middle (CEM). Example:
On Stalnaker's analysis, there is a closest world where the fair coin mentioned in (1) and (2) is flipped and at that world either it lands heads or it lands tails. So either (1) is true and (2) is false or (1) is false and (2) true. On Lewis's analysis, however, both (1) and (2) are false, for the worlds where the fair coin lands heads are no more or less close than the worlds where it lands tails. For Lewis, "If the coin had been flipped, it would have landed heads or tails" is true, but this does not entail that "If the coin had been flipped, it would have landed heads, or: If the coin had been flipped it would have landed tails."
The causal models framework analyzes counterfactuals in terms of systems of structural equations. In a system of equations, each variable is assigned a value that is an explicit function of other variables in the system. Given such a model, the sentence "Y would be y had X been x" (formally, X = x > Y = y) is defined as the assertion: if we replace the equation currently determining X with a constant X = x, and solve the set of equations for the variable Y, the solution obtained will be Y = y. This definition has been shown to be compatible with the axioms of possible world semantics and forms the basis for causal inference in the natural and social sciences, since each structural equation in those domains corresponds to a familiar causal mechanism that can be meaningfully reasoned about by investigators. This approach was developed by Judea Pearl (2000) as a means of encoding fine-grained intuitions about causal relations which are difficult to capture in other proposed systems.[23]
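A minimal sketch of this "surgery" on equations, using a hypothetical two-equation model (X := U, Y := 2X, with observed exogenous input U = 1):

```python
# Structural-equations sketch of a Pearl-style counterfactual: replace the
# equation for an intervened variable with a constant, then re-solve.
def solve(equations, intervention=None):
    eqs = dict(equations)
    if intervention:
        for var, val in intervention.items():
            eqs[var] = lambda vals, v=val: v  # surgery: X is now a constant
    vals = {}
    for var in ["U", "X", "Y"]:               # evaluate in dependency order
        vals[var] = eqs[var](vals)
    return vals

equations = {
    "U": lambda vals: 1,                      # exogenous background condition
    "X": lambda vals: vals["U"],
    "Y": lambda vals: 2 * vals["X"],
}

print(solve(equations))                 # {'U': 1, 'X': 1, 'Y': 2}: actual world
print(solve(equations, {"X": 3})["Y"])  # 6: "Y would be 6 had X been 3"
```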
In the belief revision framework, counterfactuals are treated using a formal implementation of the Ramsey test. In these systems, a counterfactual A > B holds if and only if the addition of A to the current body of knowledge has B as a consequence. This condition relates counterfactual conditionals to belief revision, as the evaluation of A > B can be done by first revising the current knowledge with A and then checking whether B is true in what results. Revising is easy when A is consistent with the current beliefs, but can be hard otherwise. Every semantics for belief revision can be used for evaluating conditional statements. Conversely, every method for evaluating conditionals can be seen as a way of performing revision.
Ginsberg (1986) has proposed a semantics for conditionals which assumes that the current beliefs form a set of propositional formulae, considering the maximal sets of these formulae that are consistent with A, and adding A to each. The rationale is that each of these maximal sets represents a possible state of belief in which A is true that is as similar as possible to the original one. The conditional statement A > B therefore holds if and only if B is true in all such sets.[24]
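Ginsberg's procedure can be sketched by brute force over a tiny propositional language; the belief base and formulas below are hypothetical, with formulas represented as functions over truth assignments:

```python
# Ginsberg-style evaluation: find the maximal subsets of the belief base
# consistent with A, add A to each, and check whether B holds throughout.
from itertools import combinations, product

ATOMS = ["p", "q"]

def assignments():
    return [dict(zip(ATOMS, bits))
            for bits in product([True, False], repeat=len(ATOMS))]

def models(formulas):
    return [m for m in assignments() if all(f(m) for f in formulas)]

def counterfactual(beliefs, A, B):
    consistent = [S for r in range(len(beliefs) + 1)
                  for S in combinations(beliefs, r) if models(list(S) + [A])]
    maximal = [S for S in consistent
               if not any(set(S) < set(T) for T in consistent)]
    return all(B(m) for S in maximal for m in models(list(S) + [A]))

# Hypothetical belief base: p, and p -> q. Counterfactual antecedent: not p.
beliefs = [lambda m: m["p"], lambda m: (not m["p"]) or m["q"]]
not_p = lambda m: not m["p"]

print(counterfactual(beliefs, not_p, not_p))            # True
print(counterfactual(beliefs, not_p, lambda m: m["q"])) # False: q is not retained
```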
Languages use different strategies for expressing counterfactuality. Some have a dedicated counterfactual morpheme, while others recruit morphemes which otherwise express tense, aspect, mood, or a combination thereof. Since the early 2000s, linguists, philosophers of language, and philosophical logicians have intensely studied the nature of this grammatical marking, and it continues to be an active area of study.
In many languages, counterfactuality is marked by past tense morphology.[25] Since these uses of the past tense do not convey their typical temporal meaning, they are called fake past or fake tense.[26][27][28] English is one language which uses fake past to mark counterfactuality, as shown in the following minimal pair.[29] In the indicative example, the bolded words are present tense forms. In the counterfactual example, both words take their past tense form. This use of the past tense cannot have its ordinary temporal meaning, since it can be used with the adverb "tomorrow" without creating a contradiction.[25][26][27][28]
Modern Hebrew is another language where counterfactuality is marked with a fake past morpheme:[30]
im Dani haya ba-bayit maχaɾ hayinu mevakRim oto
if Dani be.PST.3S.M in-home tomorrow be.PST.1PL visit.PTC.PL he.ACC
"If Dani had been home tomorrow, we would've visited him."
Palestinian Arabic is another:[30]
iza kaan fi l-bet bukra kunna zurna-a
if be.PST.3S.M in the-house tomorrow be.PST.1PL visit.PST.PFV.1PL-him
"If he had been home tomorrow, we would've visited him."
Fake past is extremely prevalent cross-linguistically, either on its own or in combination with other morphemes. Moreover, theoretical linguists and philosophers of language have argued that other languages' strategies for marking counterfactuality are actually realizations of fake tense along with other morphemes. For this reason, fake tense has often been treated as the locus of the counterfactual meaning itself.[26][31]
In formal semantics and philosophical logic, fake past is regarded as a puzzle, since it is not obvious why so many unrelated languages would repurpose a tense morpheme to mark counterfactuality. Proposed solutions to this puzzle divide into two camps: past as modal and past as past. These approaches differ in whether or not they take the past tense's core meaning to be about time.[32][33]
In the past as modal approach, the denotation of the past tense is not fundamentally about time. Rather, it is an underspecified skeleton which can apply either to modal or to temporal content.[26][32][34] For instance, in the particular past-as-modal proposal of Iatridou (2000), the past tense's core meaning is what is shown schematically below:
Depending on how this denotation composes, x can be a time interval or a possible world. When x is a time, the past tense will convey that the sentence is talking about non-current times, i.e. the past. When x is a world, it will convey that the sentence is talking about a potentially non-actual possibility. The latter is what allows for a counterfactual meaning.
The past as past approach treats the past tense as having an inherently temporal denotation. On this approach, so-called fake tense is not actually fake. It differs from "real" tense only in how it takes scope, i.e. which component of the sentence's meaning is shifted to an earlier time. When a sentence has "real" past marking, it discusses something that happened at an earlier time; when a sentence has so-called fake past marking, it discusses possibilities that were accessible at an earlier time but may no longer be.[35][36][37]
Fake aspect often accompanies fake tense in languages that mark aspect. In some languages (e.g. Modern Greek, Zulu, and the Romance languages) this fake aspect is imperfective. In other languages (e.g. Palestinian Arabic) it is perfective. However, in other languages including Russian and Polish, counterfactuals can have either perfective or imperfective aspect.[31]
Fake imperfective aspect is demonstrated by the two Modern Greek sentences below. These examples form a minimal pair, since they are identical except that the first uses past imperfective marking where the second uses past perfective marking. As a result of this morphological difference, the first has a counterfactual meaning, while the second does not.[26]
An eperne afto to siropi θa γinotan kala
if take.PST.IPFV this {} syrup FUT become.PST.IPFV well
'If he took this syrup, he would get better'
An ipχe afto to siropi θa eγine kala
if take.PST.PFV this {} syrup FUT become.PST.PFV well
"If he took this syrup, he must be better."
This imperfective marking has been argued to be fake on the grounds that it is compatible with completive adverbials such as "in one month":[26]
An eχtizes to spiti (mesa) se ena mina θa prolavenes na to pulisis prin to kalokeri
if build.IPFV the house {} in one month FUT have-time-enough.IPFV to it sell before the summer
'If you built this house in a month, you would be able to sell it before the summer.'
In ordinary non-conditional sentences, such adverbials are compatible with perfective aspect but not with imperfective aspect:[26]
Eχtise afto to spiti (mesa) se ena mina
build.PFV this {} house in {} one month
'She built this house in one month.'
* Eχtize afto to spiti (mesa) se ena mina
{} build.IPFV this {} house in {} one month
'She was building this house in one month.'
People engage in counterfactual thinking frequently. Experimental evidence indicates that people's thoughts about counterfactual conditionals differ in important ways from their thoughts about indicative conditionals.
Participants in experiments were asked to read sentences, including counterfactual conditionals, e.g., "If Mark had left home early, he would have caught the train". Afterwards, they were asked to identify which sentences they had been shown. They often mistakenly believed they had been shown sentences corresponding to the presupposed facts, e.g., "Mark did not leave home early" and "Mark did not catch the train".[38] In other experiments, participants were asked to read short stories that contained counterfactual conditionals, e.g., "If there had been roses in the flower shop then there would have been lilies". Later in the story, they read sentences corresponding to the presupposed facts, e.g., "there were no roses and there were no lilies". The counterfactual conditional primed them to read the sentence corresponding to the presupposed facts very rapidly; no such priming effect occurred for indicative conditionals.[39] They spent different amounts of time 'updating' a story that contains a counterfactual conditional compared to one that contains factual information[40] and focused on different parts of counterfactual conditionals.[41]
Experiments have compared the inferences people make from counterfactual conditionals and indicative conditionals. Given a counterfactual conditional, e.g., "If there had been a circle on the blackboard then there would have been a triangle", and the subsequent information "in fact there was no triangle", participants make the modus tollens inference "there was no circle" more often than they do from an indicative conditional.[42] Given the counterfactual conditional and the subsequent information "in fact there was a circle", participants make the modus ponens inference as often as they do from an indicative conditional.
Byrne argues that people construct mental representations that encompass two possibilities when they understand, and reason from, a counterfactual conditional, e.g., "if Oswald had not shot Kennedy, then someone else would have". They envisage the conjecture "Oswald did not shoot Kennedy and someone else did" and they also think about the presupposed facts "Oswald did shoot Kennedy and someone else did not".[43] According to the mental model theory of reasoning, they construct mental models of the alternative possibilities.[44]
Dynamic semantics is a framework in logic and natural language semantics that treats the meaning of a sentence as its potential to update a context. In static semantics, knowing the meaning of a sentence amounts to knowing when it is true; in dynamic semantics, knowing the meaning of a sentence means knowing "the change it brings about in the information state of anyone who accepts the news conveyed by it."[1] In dynamic semantics, sentences are mapped to functions called context change potentials, which take an input context and return an output context. Dynamic semantics was originally developed by Irene Heim and Hans Kamp in 1981 to model anaphora, but has since been applied widely to phenomena including presupposition, plurals, questions, discourse relations, and modality.[2]
The first systems of dynamic semantics were the closely related File Change Semantics and discourse representation theory, developed simultaneously and independently by Irene Heim and Hans Kamp. These systems were intended to capture donkey anaphora, which resists an elegant compositional treatment in classic approaches to semantics such as Montague grammar.[2][3] Donkey anaphora is exemplified by the infamous donkey sentences, first noticed by the medieval logician Walter Burley and brought to modern attention by Peter Geach.[4][5]
To capture the empirically observed truth conditions of such sentences infirst order logic, one would need to translate theindefinite noun phrase"a donkey" as auniversal quantifierscoping over the variable corresponding to the pronoun "it".
While this translation captures (or approximates) the truth conditions of the natural language sentences, its relationship to the syntactic form of the sentence is puzzling in two ways. First, indefinites in non-donkey contexts normally expressexistentialrather than universal quantification. Second, the syntactic position of the donkey pronoun would not normally allow it to beboundby the indefinite.
To explain these peculiarities, Heim and Kamp proposed that natural language indefinites are special in that they introduce a newdiscourse referentthat remains available outside the syntactic scope of the operator that introduced it. To cash this idea out, they proposed their respective formal systems that capture donkey anaphora because they validateEgli's theoremand its corollary.[6]
Update semanticsis a framework within dynamic semantics that was developed byFrank Veltman.[1][7]In update semantics, each formulaφ{\displaystyle \varphi }is mapped to a function[φ]{\displaystyle [\varphi ]}that takes and returns adiscourse context. Thus, ifC{\displaystyle C}is a context, thenC[φ]{\displaystyle C[\varphi ]}is the context one gets by updatingC{\displaystyle C}withφ{\displaystyle \varphi }. Systems of update semantics vary both in how they define a context and in the semantic entries they assign to formulas. The simplest update systems areintersectiveones, which simply lift static systems into the dynamic framework. However, update semantics includes systems more expressive than what can be defined in the static framework. In particular, it allowsinformation sensitivesemantic entries, in which the information contributed by updating with some formula can depend on the information already present in the context.[8]This property of update semantics has led to its widespread application topresuppositions,modals, andconditionals.
An update withφ{\displaystyle \varphi }is calledintersectiveif it amounts to taking the intersection of the input context with the proposition denoted byφ{\displaystyle \varphi }. Crucially, this definition assumes that there is a single fixed proposition thatφ{\displaystyle \varphi }always denotes, regardless of the context.[8]
Intersective update was proposed byRobert Stalnakerin 1978 as a way of formalizing thespeech actof assertion.[9][8]In Stalnaker's original system, a context (orcontext set) is defined as a set ofpossible worldsrepresenting the information in the common ground of a conversation. For instance, ifC={w,v,u}{\displaystyle C=\{w,v,u\}}this represents a scenario where the information agreed upon by all participants in the conversation indicates that the actual world must be eitherw{\displaystyle w},v{\displaystyle v}, oru{\displaystyle u}. If[[φ]]={w,v}{\displaystyle [\![\varphi ]\!]=\{w,v\}}, then updatingC{\displaystyle C}withφ{\displaystyle \varphi }would return a new contextC[φ]={w,v}{\displaystyle C[\varphi ]=\{w,v\}}. Thus, an assertion ofφ{\displaystyle \varphi }would be understood as an attempt to rule out the possibility that the actual world isu{\displaystyle u}.
From a formal perspective, intersective update can be taken as a recipe for lifting one's preferred static semantics to dynamic semantics. For instance, if we take classical propositional semantics as our starting point, this recipe delivers the following intersective update semantics.[8]
The notion of intersectivity can be decomposed into the two properties known aseliminativityanddistributivity. Eliminativity says that an update can only ever remove worlds from the context—it can't add them. Distributivity says that updatingC{\displaystyle C}withφ{\displaystyle \varphi }is equivalent to updating each singleton subset ofC{\displaystyle C}withφ{\displaystyle \varphi }and then pooling the results.[8]
Intersectivity amounts to the conjunction of these two properties, as proven byJohan van Benthem.[8][10]
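The Stalnaker example and the two properties above can be made concrete in a few lines. The following is a minimal sketch in an encoding of my own choosing (not any author's official formalism): a context is a set of world labels, a proposition is a set of worlds, and an intersective update is plain set intersection.

```python
def intersective_update(context, proposition):
    """Update a context by intersecting it with the proposition."""
    return context & proposition

# The example from the text: the common ground leaves w, v and u open,
# and the proposition [[phi]] = {w, v}.
C = {"w", "v", "u"}
phi = {"w", "v"}
updated = intersective_update(C, phi)
print(updated == {"w", "v"})  # True: asserting phi rules out world u

# Eliminativity: an update only ever removes worlds from the context.
assert updated <= C
# Distributivity: updating each singleton subset of C and pooling the
# results gives the same output as updating the whole context at once.
assert updated == set().union(*(intersective_update({w}, phi) for w in C))
```

The two assertions correspond to eliminativity and distributivity; for intersection they hold for any context, which is the direction of van Benthem's characterization that is easy to check by hand.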
The framework of update semantics is more general than static semantics because it is not limited to intersective meanings. Nonintersective meanings are theoretically useful because they contribute different information depending on what information is already present in the context. For instance, ifφ{\displaystyle \varphi }is intersective, then it will update any input context with the exact same information, namely the information encoded by the proposition[[φ]]{\displaystyle [\![\varphi ]\!]}. On the other hand, ifφ{\displaystyle \varphi }is nonintersective, it could contribute[[φ]]{\displaystyle [\![\varphi ]\!]}when it updates some contexts, but some completely different information when it updates other contexts.[8]
Many natural language expressions have been argued to have nonintersective meanings. The nonintersectivity of epistemic modals can be seen in theinfelicityofepistemic contradictions.[11][8]
These sentences have been argued to be bona fide logical contradictions, unlike superficially similar examples such asMoore sentences, which can be given apragmaticexplanation.[12][8]
These sentences cannot be analysed as logical contradictions within purely intersective frameworks such as therelational semanticsformodal logic. The Epistemic Contradiction Principle only holds on the class ofrelational framessuch thatRwv⇒(w=v){\displaystyle Rwv\Rightarrow (w=v)}. However, such frames also validate an entailment from◊φ{\displaystyle \Diamond \varphi }toφ{\displaystyle \varphi }. Thus, accounting for the infelicity of epistemic contradictions within a classical semantics for modals would bring along the unwelcome prediction that "It might be raining" entails "It is raining".[12][8]Update Semantics skirts this problem by providing a nonintersective denotation for modals. When given such a denotation, the formula◊¬φ{\displaystyle \Diamond \neg \varphi }can update input contexts differently depending on whether they already contain the information thatφ{\displaystyle \varphi }provides. The most widely adopted semantic entry for modals in update semantics is thetest semanticsproposed byFrank Veltman.[1]
On this semantics, ◊φ tests whether the input context could be updated with φ without getting trivialized, i.e. without returning the empty set. If the input context passes the test, it remains unchanged. If it fails the test, the update trivializes the context by returning the empty set. This semantics can handle epistemic contradictions because no matter the input context, updating with φ will always output a context that fails the test imposed by ◊¬φ.[8][13]
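The test semantics can be sketched in the same sets-of-worlds style. This is an illustrative toy, not Veltman's notation: the two-world model and the function names are assumptions made here for concreteness.

```python
def update(context, proposition):
    """Update with a non-modal formula: plain intersection."""
    return context & proposition

def might(proposition):
    """Test semantics for 'might phi': return the context unchanged if
    updating it with phi would be non-trivial, else the empty set."""
    def test(context):
        return context if update(context, proposition) else set()
    return test

worlds = {"rain", "no_rain"}
raining = {"rain"}
not_raining = {"no_rain"}

# 'It might not be raining' passes the test on an uninformed context...
print(might(not_raining)(set(worlds)) == worlds)  # True: context unchanged

# ...but 'It is raining, and it might not be raining' crashes any context,
# which is how the epistemic contradiction comes out as contradictory.
C = update(set(worlds), raining)   # learn that it is raining
print(might(not_raining)(C))       # set()
```

Note that the update is not intersective: whether ◊¬φ contributes anything depends on what the context already contains, which is exactly the information sensitivity discussed above.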
In natural languages, an indicative conditional is a conditional sentence such as "If Leona is at home, she isn't in Paris", whose grammatical form restricts it to discussing what could be true. Indicatives are typically defined in opposition to counterfactual conditionals, which have extra grammatical marking which allows them to discuss eventualities that are no longer possible.
Indicatives are a major topic of research in philosophy of language, philosophical logic, and linguistics. Open questions include which logical operation indicatives denote, how such denotations could be composed from their grammatical form, and the implications of those denotations for areas including metaphysics, psychology of reasoning, and philosophy of mathematics.
Early analyses identified indicative conditionals with the logical operation known as the material conditional. According to the material conditional analysis, an indicative "If A then B" is true unless A is true and B is not. Although this analysis covers many observed cases, it misses some crucial properties of actual conditional speech and reasoning.
One problem for the material conditional analysis is that it allows indicatives to be true even when their antecedent and consequent are unrelated. For instance, the indicative "If Paris is in France then trout are fish" is intuitively strange, since the location of Paris has nothing to do with the classification of trout. However, since its antecedent and consequent are both true, the material conditional analysis treats it as a true statement. Similarly, the material conditional analysis treats conditionals with false antecedents as vacuously true. For instance, since Paris is not in Australia, the conditional "If Paris is in Australia, then trout are fish" would be treated as true on a material conditional analysis. These arguments have been taken to show that no truth-functional operator will suffice as a semantics for indicative conditionals. In the mid-20th century, work by H. P. Grice, Frank Cameron Jackson, and others attempted to maintain the material conditional as an analysis of indicatives' literal semantic denotation, while appealing to pragmatics in order to explain the apparent discrepancies.[1]
Contemporary work in philosophical logic and formal semantics generally proposes alternative denotations for indicative conditionals. Proposed alternatives include analyses based on relevance logic, modal logic, probability theory, Kratzerian modal semantics, and dynamic semantics.[2]
Most behavioral experiments on conditionals in the psychology of reasoning have been carried out with indicative, causal, and counterfactual conditionals. People readily make the modus ponens inference: given if A then B and given A, they conclude B. By contrast, only about half of participants in experiments make the modus tollens inference: given if A then B and given not-B, they conclude not-A, while the remainder say that nothing follows (Evans et al., 1993). When participants are given counterfactual conditionals, they make both the modus ponens and the modus tollens inferences (Byrne, 2005).
Logical consequence (also entailment or logical implication) is a fundamental concept in logic which describes the relationship between statements that hold true when one statement logically follows from one or more statements. A valid logical argument is one in which the conclusion is entailed by the premises, because the conclusion is the consequence of the premises. The philosophical analysis of logical consequence involves the questions: In what sense does a conclusion follow from its premises? and What does it mean for a conclusion to be a consequence of premises?[1] All of philosophical logic is meant to provide accounts of the nature of logical consequence and the nature of logical truth.[2]
Logical consequence is necessary and formal, by way of examples that explain with formal proof and models of interpretation.[1] A sentence is said to be a logical consequence of a set of sentences, for a given language, if and only if, using only logic (i.e., without regard to any personal interpretations of the sentences), the sentence must be true if every sentence in the set is true.[3]
Logicians make precise accounts of logical consequence regarding a given language L, either by constructing a deductive system for L or by giving a formal intended semantics for L. The Polish logician Alfred Tarski identified three features of an adequate characterization of entailment: (1) the logical consequence relation relies on the logical form of the sentences; (2) the relation is a priori, i.e., it can be determined with or without regard to empirical evidence (sense experience); and (3) the relation has a modal component.[3]
The most widely prevailing view on how best to account for logical consequence is to appeal to formality. This is to say that whether statements follow from one another logically depends on the structure orlogical formof the statements without regard to the contents of that form.
Syntactic accounts of logical consequence rely onschemesusinginference rules. For instance, we can express the logical form of a valid argument as:
This argument is formally valid, because everyinstanceof arguments constructed using this scheme is valid.
This is in contrast to an argument like "Fred is Mike's brother's son. Therefore Fred is Mike's nephew." Since this argument depends on the meanings of the words "brother", "son", and "nephew", the statement "Fred is Mike's nephew" is a so-called material consequence of "Fred is Mike's brother's son", not a formal consequence. A formal consequence must be true in all cases; however, this is an incomplete definition of formal consequence, since even the argument "P is Q's brother's son, therefore P is Q's nephew" is valid in all cases, but is not a formal argument.[1]
If it is known thatQ{\displaystyle Q}follows logically fromP{\displaystyle P}, then no information about the possible interpretations ofP{\displaystyle P}orQ{\displaystyle Q}will affect that knowledge. Our knowledge thatQ{\displaystyle Q}is a logical consequence ofP{\displaystyle P}cannot be influenced byempirical knowledge.[1]Deductively valid arguments can be known to be so without recourse to experience, so they must be knowable a priori.[1]However, formality alone does not guarantee that logical consequence is not influenced by empirical knowledge. So the a priori property of logical consequence is considered to be independent of formality.[1]
The two prevailing techniques for providing accounts of logical consequence involve expressing the concept in terms ofproofsand viamodels. The study of the syntactic consequence (of a logic) is called (its)proof theorywhereas the study of (its) semantic consequence is called (its)model theory.[4]
A formulaA{\displaystyle A}is asyntactic consequence[5][6][7][8][9]within someformal systemFS{\displaystyle {\mathcal {FS}}}of a setΓ{\displaystyle \Gamma }of formulas if there is aformal proofinFS{\displaystyle {\mathcal {FS}}}ofA{\displaystyle A}from the setΓ{\displaystyle \Gamma }. This is denotedΓ⊢FSA{\displaystyle \Gamma \vdash _{\mathcal {FS}}A}. The turnstile symbol⊢{\displaystyle \vdash }was originally introduced by Frege in 1879, but its current use only dates back to Rosser and Kleene (1934–1935).[9]
Syntactic consequence does not depend on anyinterpretationof the formal system.[10]
A formulaA{\displaystyle A}is asemantic consequencewithin some formal systemFS{\displaystyle {\mathcal {FS}}}of a set of statementsΓ{\displaystyle \Gamma }if and only if there is no modelI{\displaystyle {\mathcal {I}}}in which all members ofΓ{\displaystyle \Gamma }are true andA{\displaystyle A}is false.[11]This is denotedΓ⊨FSA{\displaystyle \Gamma \models _{\mathcal {FS}}A}. Or, in other words, the set of the interpretations that make all members ofΓ{\displaystyle \Gamma }true is a subset of the set of the interpretations that makeA{\displaystyle A}true.
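For propositional logic, the semantic definition can be checked mechanically by enumerating interpretations. The following is an illustrative sketch; the encoding (formulas as Python predicates over valuations, and the function name `entails`) is an assumption made here, not standard notation.

```python
# Brute-force semantic consequence for propositional logic:
# Gamma |= A iff no valuation makes every member of Gamma true and A false.
from itertools import product

def entails(premises, conclusion, atoms):
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a countermodel
    return True

P = lambda v: v["P"]
Q = lambda v: v["Q"]
P_implies_Q = lambda v: (not v["P"]) or v["Q"]

# {P, P -> Q} |= Q (modus ponens), but {P -> Q} alone does not entail Q.
print(entails([P, P_implies_Q], Q, ["P", "Q"]))  # True
print(entails([P_implies_Q], Q, ["P", "Q"]))     # False
```

The inner loop is exactly the definition above: the function returns `False` precisely when some interpretation makes all members of Γ true while making A false.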
Modalaccounts of logical consequence are variations on the following basic idea:
Alternatively (and, most would say, equivalently):
Such accounts are called "modal" because they appeal to the modal notions oflogical necessityandlogical possibility. 'It is necessary that' is often expressed as auniversal quantifieroverpossible worlds, so that the accounts above translate as:
Consider the modal account in terms of the argument given as an example above:
The conclusion is a logical consequence of the premises because we can not imagine a possible world where (a) all frogs are green; (b) Kermit is a frog; and (c) Kermit is not green.
Modal-formal accounts of logical consequence combine the modal and formal accounts above, yielding variations on the following basic idea:
The accounts considered above are all "truth-preservational", in that they all assume that the characteristic feature of a good inference is that it never allows one to move from true premises to an untrue conclusion. As an alternative, some have proposed "warrant-preservational" accounts, according to which the characteristic feature of a good inference is that it never allows one to move from justifiably assertible premises to a conclusion that is not justifiably assertible. This is (roughly) the account favored byintuitionists.
The accounts discussed above all yieldmonotonicconsequence relations, i.e. ones such that ifA{\displaystyle A}is a consequence ofΓ{\displaystyle \Gamma }, thenA{\displaystyle A}is a consequence of any superset ofΓ{\displaystyle \Gamma }. It is also possible to specify non-monotonic consequence relations to capture the idea that, e.g., 'Tweety can fly' is a logical consequence of
but not of
In Boolean algebra, the algebraic normal form (ANF), ring sum normal form (RSNF or RNF), Zhegalkin normal form, or Reed–Muller expansion is a way of writing propositional logic formulas in one of three subforms:
Formulas written in ANF are also known as Zhegalkin polynomials and Positive Polarity (or Parity) Reed–Muller expressions (PPRM).[1]
ANF is a canonical form: two logically equivalent formulas convert to the same ANF, which makes it easy to show whether two formulas are equivalent, as needed in automated theorem proving. Unlike other normal forms, it can be represented as a simple list of lists of variable names; conjunctive and disjunctive normal forms also require recording whether each variable is negated or not. Negation normal form is unsuitable for determining equivalence, since on negation normal forms, equivalence does not imply equality: a ∨ ¬a is not reduced to the same thing as 1, even though they are logically equivalent.
Putting a formula into ANF also makes it easy to identify linear functions (used, for example, in linear-feedback shift registers): a linear function is one that is a sum of single literals. Properties of nonlinear-feedback shift registers can also be deduced from certain properties of the feedback function in ANF.
There are straightforward ways to perform the standard Boolean operations on ANF inputs in order to get ANF results.
XOR (logical exclusive disjunction) is performed directly:
NOT (logical negation) is XORing 1:[2]
AND (logical conjunction) is distributed algebraically:[3]
OR (logical disjunction) uses either 1 ⊕ (1 ⊕ a)(1 ⊕ b)[4] (easier when both operands have purely true terms) or a ⊕ b ⊕ ab[5] (easier otherwise):
Each variable in a formula is already in pure ANF, so one only needs to perform the formula's Boolean operations as shown above to get the entire formula into ANF. For example:
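The four operations above can be implemented directly on a set-of-monomials representation of ANF. The encoding below is an assumption made here for illustration: a formula is a set of monomials, each monomial a frozenset of variable names, with the empty monomial standing for the constant 1 and the empty set of monomials for the constant 0.

```python
ONE = frozenset()  # the empty monomial is the constant 1

def xor(a, b):
    """XOR is symmetric difference: duplicated monomials cancel mod 2."""
    return a ^ b

def land(a, b):
    """AND distributes over XOR; equal monomials produced twice cancel."""
    result = set()
    for m in a:
        for n in b:
            result ^= {m | n}
    return result

def lnot(a):
    """NOT a = 1 XOR a."""
    return xor(a, {ONE})

def lor(a, b):
    """a OR b = a XOR b XOR (a AND b)."""
    return xor(xor(a, b), land(a, b))

x = {frozenset({"x"})}
y = {frozenset({"y"})}
print(sorted(map(sorted, lor(x, y))))  # [['x'], ['x', 'y'], ['y']], i.e. x ⊕ xy ⊕ y
```

Because XOR is symmetric difference and AND cancels repeated monomials, the result of any sequence of operations is automatically in ANF, reflecting the canonicity noted above.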
ANF is sometimes described in an equivalent way:
There are only four functions with one argument:
To represent a function with multiple arguments one can use the following equality:
Indeed,
Since both g and h have fewer arguments than f, it follows that using this process recursively we will finish with functions of one variable. For example, let us construct the ANF of f(x, y) = x ∨ y (logical or):
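The recursion just described bottoms out in a table of ANF coefficients; equivalently, the coefficient of each monomial is the XOR of the function's values on all assignments below it (the binary Möbius transform of the truth table). This is a hedged sketch, with function names of my own choosing:

```python
from itertools import product

def anf_coefficients(f, n):
    """Return {monomial: 1}, a monomial being a tuple of variable indices."""
    table = {bits: f(*bits) & 1 for bits in product([0, 1], repeat=n)}
    coeffs = {}
    for bits in product([0, 1], repeat=n):
        # Coefficient of prod_{i : bits[i]=1} x_i: XOR of f over all
        # assignments pointwise below `bits`.
        c = 0
        for sub in product([0, 1], repeat=n):
            if all(s <= b for s, b in zip(sub, bits)):
                c ^= table[sub]
        if c:
            coeffs[tuple(i for i, b in enumerate(bits) if b)] = 1
    return coeffs

# ANF of logical or: x ∨ y = x ⊕ y ⊕ xy.
print(sorted(anf_coefficients(lambda x, y: x | y, 2)))  # [(0,), (0, 1), (1,)]
```

For f(x, y) = x ∨ y this recovers exactly the three monomials x, y and xy, matching the expansion x ⊕ y ⊕ xy that the recursive construction produces.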
In Boolean algebra, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of one or more clauses, where a clause is a disjunction of literals; otherwise put, it is a product of sums or an AND of ORs.
In automated theorem proving, the notion "clausal normal form" is often used in a narrower sense, meaning a particular representation of a CNF formula as a set of sets of literals.
A logical formula is considered to be in CNF if it is a conjunction of one or more disjunctions of one or more literals. As in disjunctive normal form (DNF), the only propositional operators in CNF are or (∨), and (∧), and not (¬). The not operator can only be used as part of a literal, which means that it can only precede a propositional variable.
The following is acontext-free grammarfor CNF:
WhereVariableis any variable.
All of the following formulas in the variables A, B, C, D, E, and F are in conjunctive normal form:
The following formulas are not in conjunctive normal form:
In classical logic, each propositional formula can be converted to an equivalent formula that is in CNF.[1] This transformation is based on rules about logical equivalences: double negation elimination, De Morgan's laws, and the distributive law.
The algorithm to compute a CNF-equivalent of a given propositional formula ϕ first converts its negation ¬ϕ into disjunctive normal form (DNF) (step 1).[2] Then ¬ϕDNF is converted to ϕCNF by swapping ANDs with ORs and vice versa while negating all the literals, and finally removing all double negations ¬¬.[1]
Convert to CNF the propositional formula ϕ.
Step 1: Convert its negation to disjunctive normal form.[2]
¬ϕDNF = (C1 ∨ C2 ∨ … ∨ Ci ∨ … ∨ Cm),[3]
where each Ci is a conjunction of literals li1 ∧ li2 ∧ … ∧ lini.[4]
Step 2: Negate ¬ϕDNF.
Then shift ¬ inwards by applying the (generalized) De Morgan equivalences until no longer possible:

ϕ ↔ ¬¬ϕDNF
  = ¬(C1 ∨ C2 ∨ … ∨ Ci ∨ … ∨ Cm)
  ↔ ¬C1 ∧ ¬C2 ∧ … ∧ ¬Ci ∧ … ∧ ¬Cm    // (generalized) D.M.

where

¬Ci = ¬(li1 ∧ li2 ∧ … ∧ lini)
    ↔ (¬li1 ∨ ¬li2 ∨ … ∨ ¬lini)    // (generalized) D.M.
Step 3: Remove all double negations.
Example
Convert to CNF the propositional formula ϕ = ((¬(p ∧ q)) ↔ (¬r ↑ (p ⊕ q))).[5]
The (full) DNF equivalent of its negation is[2]

¬ϕDNF = (p ∧ q ∧ r) ∨ (p ∧ q ∧ ¬r) ∨ (p ∧ ¬q ∧ ¬r) ∨ (¬p ∧ q ∧ ¬r)

ϕ ↔ ¬¬ϕDNF
  = ¬[(p ∧ q ∧ r) ∨ (p ∧ q ∧ ¬r) ∨ (p ∧ ¬q ∧ ¬r) ∨ (¬p ∧ q ∧ ¬r)]
  ↔ ¬(p ∧ q ∧ r) ∧ ¬(p ∧ q ∧ ¬r) ∧ ¬(p ∧ ¬q ∧ ¬r) ∧ ¬(¬p ∧ q ∧ ¬r)    // generalized D.M.
  ↔ (¬p ∨ ¬q ∨ ¬r) ∧ (¬p ∨ ¬q ∨ ¬¬r) ∧ (¬p ∨ ¬¬q ∨ ¬¬r) ∧ (¬¬p ∨ ¬q ∨ ¬¬r)    // generalized D.M. (4×)
  ↔ (¬p ∨ ¬q ∨ ¬r) ∧ (¬p ∨ ¬q ∨ r) ∧ (¬p ∨ q ∨ r) ∧ (p ∨ ¬q ∨ r)    // remove all ¬¬
  = ϕCNF
A CNF equivalent of a formula can be derived from its truth table. Again, consider the formula ϕ = ((¬(p ∧ q)) ↔ (¬r ↑ (p ⊕ q))).[5]
The corresponding truth table is
A CNF equivalent of ϕ is (¬p ∨ ¬q ∨ ¬r) ∧ (¬p ∨ ¬q ∨ r) ∧ (¬p ∨ q ∨ r) ∧ (p ∨ ¬q ∨ r).
Each disjunction reflects an assignment of variables for which ϕ evaluates to F(alse). If in such an assignment a variable V is assigned true, the disjunction contains the literal ¬V; if V is assigned false, it contains V.
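This rule can be mechanized. The sketch below uses an encoding of my own choosing (formulas as Python predicates over valuations, literals as strings): it collects one clause per row of the truth table on which ϕ is false.

```python
from itertools import product

def cnf_from_truth_table(f, names):
    """One clause per falsifying row: negate exactly the variables
    that are true in that row."""
    clauses = []
    for values in product([True, False], repeat=len(names)):
        if not f(dict(zip(names, values))):
            clause = tuple(("¬" + n) if v else n
                           for n, v in zip(names, values))
            clauses.append(clause)
    return clauses

# The running example: phi = ((¬(p ∧ q)) ↔ (¬r NAND (p ⊕ q))).
def phi(v):
    p, q, r = v["p"], v["q"], v["r"]
    return (not (p and q)) == (not ((not r) and (p != q)))

for clause in cnf_from_truth_table(phi, ["p", "q", "r"]):
    print(" ∨ ".join(clause))
# ¬p ∨ ¬q ∨ ¬r
# ¬p ∨ ¬q ∨ r
# ¬p ∨ q ∨ r
# p ∨ ¬q ∨ r
```

The four printed clauses are exactly the CNF equivalent derived in the text, confirming that ϕ has four falsifying rows.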
Since all propositional formulas can be converted into an equivalent formula in conjunctive normal form, proofs are often based on the assumption that all formulae are CNF. However, in some cases this conversion to CNF can lead to an exponential explosion of the formula. For example, translating the non-CNF formula
(X1 ∧ Y1) ∨ (X2 ∧ Y2) ∨ … ∨ (Xn ∧ Yn)
into CNF produces a formula with 2^n clauses:
(X1 ∨ X2 ∨ … ∨ Xn) ∧ (Y1 ∨ X2 ∨ … ∨ Xn) ∧ (X1 ∨ Y2 ∨ … ∨ Xn) ∧ (Y1 ∨ Y2 ∨ … ∨ Xn) ∧ … ∧ (Y1 ∨ Y2 ∨ … ∨ Yn).
Each clause contains either Xi or Yi for each i.
There exist transformations into CNF that avoid an exponential increase in size by preserving satisfiability rather than equivalence.[6][7] These transformations are guaranteed to only linearly increase the size of the formula, but introduce new variables. For example, the above formula can be transformed into CNF by adding variables Z1, …, Zn as follows:
(Z1 ∨ … ∨ Zn) ∧ (¬Z1 ∨ X1) ∧ (¬Z1 ∨ Y1) ∧ … ∧ (¬Zn ∨ Xn) ∧ (¬Zn ∨ Yn).
An interpretation satisfies this formula only if at least one of the new variables is true. If this variable is Zi, then both Xi and Yi are true as well. This means that every model that satisfies this formula also satisfies the original one. On the other hand, only some of the models of the original formula satisfy this one: since the Zi are not mentioned in the original formula, their values are irrelevant to its satisfaction, which is not the case for the last formula. This means that the original formula and the result of the translation are equisatisfiable but not equivalent.
An alternative translation, the Tseitin transformation, also includes the clauses Zi ∨ ¬Xi ∨ ¬Yi. With these clauses, the formula implies Zi ≡ Xi ∧ Yi; this formula is often regarded as "defining" Zi to be a name for Xi ∧ Yi.
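The relationship between the original formula and its Z-variable translation can be checked by brute force for a small n. This is an illustrative sketch; the variable-naming scheme is an assumption made here.

```python
from itertools import product

n = 3
xy_atoms = [f"{p}{i}" for i in range(1, n + 1) for p in "XY"]
z_atoms = [f"Z{i}" for i in range(1, n + 1)]

def original(v):
    # (X1 ∧ Y1) ∨ ... ∨ (Xn ∧ Yn)
    return any(v[f"X{i}"] and v[f"Y{i}"] for i in range(1, n + 1))

def translated(v):
    # (Z1 ∨ ... ∨ Zn) ∧ (¬Zi ∨ Xi) ∧ (¬Zi ∨ Yi) for each i
    return (any(v[z] for z in z_atoms)
            and all(not v[f"Z{i}"] or v[f"X{i}"] for i in range(1, n + 1))
            and all(not v[f"Z{i}"] or v[f"Y{i}"] for i in range(1, n + 1)))

# Every model of the translation also satisfies the original formula...
for bits in product([True, False], repeat=3 * n):
    v = dict(zip(xy_atoms + z_atoms, bits))
    if translated(v):
        assert original(v)

# ...but not conversely: here the original holds while the translation fails
# (all Zi false), so the two are equisatisfiable without being equivalent.
v = {a: True for a in xy_atoms}
v.update({z: False for z in z_atoms})
print(original(v), translated(v))  # True False
```

The one-directional entailment checked in the loop, together with the fact that any model of the original extends to a model of the translation by setting a suitable Zi true, is the equisatisfiability claim from the text.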
Consider a propositional formula with n variables, n ≥ 1.
There are 2n possible literals: L = {p1, ¬p1, p2, ¬p2, …, pn, ¬pn}.
L has 2^(2n) − 1 non-empty subsets.[8]
This is the maximum number of disjunctions a CNF can have.[9]
All truth-functional combinations can be expressed with $2^n$ disjunctions, one for each row of the truth table. In the example below they are underlined.
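These counts can be verified directly for the two-variable example that follows:

```python
# Counting sketch: for n = 2 variables there are 2n = 4 literals, hence
# 2^(2n) - 1 = 15 non-empty literal subsets (the candidate disjunctions),
# and 2^n = 4 disjunctions suffice, one per truth-table row.
n = 2
literals = 2 * n
assert literals == 4
assert 2**literals - 1 == 15
assert 2**n == 4
```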
Example
Consider a formula with two variables $p$ and $q$.
The longest possible CNF has $2^{(2\times 2)}-1=15$ disjunctions:[9]
$$\begin{array}{l}(\lnot p)\land(p)\land(\lnot q)\land(q)\land{}\\(\lnot p\lor p)\land\underline{(\lnot p\lor\lnot q)}\land\underline{(\lnot p\lor q)}\land\underline{(p\lor\lnot q)}\land\underline{(p\lor q)}\land(\lnot q\lor q)\land{}\\(\lnot p\lor p\lor\lnot q)\land(\lnot p\lor p\lor q)\land(\lnot p\lor\lnot q\lor q)\land(p\lor\lnot q\lor q)\land{}\\(\lnot p\lor p\lor\lnot q\lor q)\end{array}$$
This formula is a contradiction. It can be simplified to $(\neg p\land p)$ or to $(\neg q\land q)$, which are also contradictions, as well as valid CNFs.
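That the 15-clause formula above is a contradiction can be confirmed by brute force; it contains the unit clauses $(\lnot p)$ and $(p)$, so no assignment satisfies all clauses:

```python
from itertools import product

# The 15 clauses over p, q from the example; a literal is (variable, polarity).
clauses = [
    [("p", False)], [("p", True)], [("q", False)], [("q", True)],
    [("p", False), ("p", True)],
    [("p", False), ("q", False)], [("p", False), ("q", True)],
    [("p", True), ("q", False)], [("p", True), ("q", True)],
    [("q", False), ("q", True)],
    [("p", False), ("p", True), ("q", False)],
    [("p", False), ("p", True), ("q", True)],
    [("p", False), ("q", False), ("q", True)],
    [("p", True), ("q", False), ("q", True)],
    [("p", False), ("p", True), ("q", False), ("q", True)],
]

def satisfies(assignment, clauses):
    # CNF semantics: every clause must contain at least one true literal.
    return all(any(assignment[v] == pol for v, pol in c) for c in clauses)

assert not any(satisfies(dict(zip("pq", bits)), clauses)
               for bits in product([False, True], repeat=2))
```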
An important set of problems in computational complexity involves finding assignments to the variables of a Boolean formula expressed in conjunctive normal form such that the formula is true. The k-SAT problem is the problem of finding a satisfying assignment to a Boolean formula expressed in CNF in which each disjunction contains at most k variables. 3-SAT is NP-complete (like any other k-SAT problem with k > 2), while 2-SAT is known to be solvable in polynomial time. As a consequence,[10] the task of converting a formula into a DNF while preserving satisfiability is NP-hard; dually, converting into CNF while preserving validity is also NP-hard; hence equivalence-preserving conversion into DNF or CNF is again NP-hard.
Typical problems in this case involve formulas in "3CNF": conjunctive normal form with no more than three variables per conjunct. Examples of such formulas encountered in practice can be very large, for example with 100,000 variables and 1,000,000 conjuncts.
A formula in CNF can be converted into an equisatisfiable formula in "kCNF" (for $k\geq 3$) by replacing each conjunct with more than $k$ variables, $X_1\vee\ldots\vee X_k\vee\ldots\vee X_n$, by the two conjuncts $X_1\vee\ldots\vee X_{k-1}\vee Z$ and $\neg Z\vee X_k\vee\ldots\vee X_n$, with $Z$ a new variable, and repeating as often as necessary.
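The clause-splitting step can be sketched directly. In this sketch (the literal encoding and the `_Z` naming scheme are assumptions, not notation from the text), literals are strings and `~` marks negation:

```python
import itertools

# Repeatedly replace a clause with more than k literals by two shorter
# clauses linked through a fresh variable Z.
_fresh = itertools.count()

def to_kcnf(clauses, k=3):
    out = []
    work = [list(c) for c in clauses]
    while work:
        clause = work.pop()
        if len(clause) <= k:
            out.append(clause)
        else:
            z = f"_Z{next(_fresh)}"
            out.append(clause[:k - 1] + [z])          # X1 ∨ … ∨ X(k-1) ∨ Z
            work.append([f"~{z}"] + clause[k - 1:])   # ¬Z ∨ Xk ∨ … ∨ Xn
    return out

result = to_kcnf([["a", "b", "c", "d", "e"]], k=3)
assert all(len(c) <= 3 for c in result)
```

Each split removes $k-1$ literals from the long clause at the cost of one new variable, so the total size grows only linearly.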
In first-order logic, conjunctive normal form can be taken further to yield the clausal normal form of a logical formula, which can then be used to perform first-order resolution.
In resolution-based automated theorem proving, a CNF formula is commonly represented as a set of clauses, each clause being a set of literals.
See below for an example.
To convert first-order logic to CNF:[12]
Example
As an example, the formula saying "Anyone who loves all animals is, in turn, loved by someone" is converted into CNF (and subsequently into clause form in the last line) as follows (highlighting replacement-rule redexes in red):
Informally, the Skolem function $g(x)$ can be thought of as yielding the person by whom $x$ is loved, while $f(x)$ yields the animal (if any) that $x$ doesn't love. The third-to-last line then reads "$x$ doesn't love the animal $f(x)$, or else $x$ is loved by $g(x)$".
The second-to-last line, $(\mathrm{Animal}(f(x))\lor\mathrm{Loves}(g(x),x))\land(\lnot\mathrm{Loves}(x,f(x))\lor\mathrm{Loves}(g(x),x))$, is the CNF. | https://en.wikipedia.org/wiki/Conjunctive_normal_form
In Boolean logic, a disjunctive normal form (DNF) is a canonical normal form of a logical formula consisting of a disjunction of conjunctions; it can also be described as an OR of ANDs, a sum of products, or, in philosophical logic, a cluster concept.[1] As a normal form, it is useful in automated theorem proving.
A logical formula is considered to be in DNF if it is a disjunction of one or more conjunctions of one or more literals.[2][3][4] A DNF formula is in full disjunctive normal form if each of its variables appears exactly once in every conjunction and each conjunction appears at most once (up to the order of variables). As in conjunctive normal form (CNF), the only propositional operators in DNF are and ($\wedge$), or ($\vee$), and not ($\neg$). The not operator can only be used as part of a literal, which means that it can only precede a propositional variable.
The following is a context-free grammar for DNF:
Where Variable is any variable.
For example, all of the following formulas are in DNF:
The formula $A\lor B$ is in DNF, but not in full DNF; an equivalent full-DNF version is $(A\land B)\lor(A\land\lnot B)\lor(\lnot A\land B)$.
The following formulas are not in DNF:
In classical logic, each propositional formula can be converted to DNF[6]...
The conversion involves using logical equivalences, such as double negation elimination, De Morgan's laws, and the distributive law. Formulas built from the primitive connectives $\{\land,\lor,\lnot\}$[7] can be converted to DNF by the following canonical term rewriting system:[8]
The full DNF of a formula can be read off its truth table.[9][10] For example, consider the formula
The corresponding truth table is
A propositional formula can be represented by one and only one full DNF.[13] In contrast, several plain DNFs may be possible. For example, by applying the rule $((a\land b)\lor(\lnot a\land b))\rightsquigarrow b$ three times, the full DNF of the above $\phi$ can be simplified to $(\lnot p\land\lnot q)\lor(\lnot p\land r)\lor(\lnot q\land r)$. However, there are also equivalent DNF formulas that cannot be transformed into one another by this rule; see the pictures for an example.
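The truth-table procedure can be sketched for an arbitrary function. The three-variable function below is a stand-in (the article's $\phi$ is not reproduced here); the construction is the same for any Boolean function of named variables:

```python
from itertools import product

def phi(p, q, r):
    # stand-in function: the simplified DNF quoted in the text
    return (not p and not q) or (not p and r) or (not q and r)

names = ("p", "q", "r")
minterms = []
for bits in product([False, True], repeat=3):
    if phi(*bits):
        # one conjunction per true row: the variable if True, its negation if False
        minterms.append(" ∧ ".join(v if b else f"¬{v}" for v, b in zip(names, bits)))

full_dnf = " ∨ ".join(f"({m})" for m in minterms)
print(full_dnf)  # agrees with phi on every row by construction
```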
It is a theorem that all consistent formulas in propositional logic can be converted to disjunctive normal form.[14][15][16][17] This is called the Disjunctive Normal Form Theorem.[14][15][16][17] The formal statement is as follows:
Disjunctive Normal Form Theorem: Suppose $X$ is a sentence in a propositional language $\mathcal{L}$ with $n$ sentence letters, which we shall denote by $A_1,\ldots,A_n$. If $X$ is not a contradiction, then it is truth-functionally equivalent to a disjunction of conjunctions of the form $\pm A_1\land\ldots\land\pm A_n$, where $+A_i=A_i$ and $-A_i=\neg A_i$.[15]
The proof follows from the procedure given above for generating DNFs fromtruth tables. Formally, the proof is as follows:
Suppose $X$ is a sentence in a propositional language whose sentence letters are $A,B,C,\ldots$. For each row of $X$'s truth table, write out a corresponding conjunction $\pm A\land\pm B\land\pm C\land\ldots$, where $\pm A$ is defined to be $A$ if $A$ takes the value $T$ at that row, and is $\neg A$ if $A$ takes the value $F$ at that row; similarly for $\pm B$, $\pm C$, etc. (the alphabetical ordering of $A,B,C,\ldots$ in the conjunctions is quite arbitrary; any other could be chosen instead). Now form the disjunction of all those conjunctions which correspond to $T$ rows of $X$'s truth table. This disjunction is a sentence in $\mathcal{L}[A,B,C,\ldots;\land,\lor,\neg]$,[18] which by the reasoning above is truth-functionally equivalent to $X$. This construction obviously presupposes that $X$ takes the value $T$ on at least one row of its truth table; if $X$ doesn't, i.e., if $X$ is a contradiction, then $X$ is equivalent to $A\land\neg A$, which is, of course, also a sentence in $\mathcal{L}[A,B,C,\ldots;\land,\lor,\neg]$.[15]
This theorem is a convenient way to derive many useful metalogical results in propositional logic, such as, trivially, the result that the set of connectives $\{\land,\lor,\neg\}$ is functionally complete.[15]
Any propositional formula is built from $n$ variables, where $n\geq 1$.
There are $2n$ possible literals: $L=\{p_1,\lnot p_1,p_2,\lnot p_2,\ldots,p_n,\lnot p_n\}$.
$L$ has $2^{2n}-1$ non-empty subsets.[19]
This is the maximum number of conjunctions a DNF can have.[13]
A full DNF can have up to $2^n$ conjunctions, one for each row of the truth table.
Example 1
Consider a formula with two variables $p$ and $q$.
The longest possible DNF has $2^{(2\times 2)}-1=15$ conjunctions:[13]
The longest possible full DNF has 4 conjunctions: they are underlined.
This formula is a tautology. It can be simplified to $(\neg p\lor p)$ or to $(\neg q\lor q)$, which are also tautologies, as well as valid DNFs.
Example 2
Each DNF of the formula $(X_1\lor Y_1)\land(X_2\lor Y_2)\land\dots\land(X_n\lor Y_n)$ has $2^n$ conjunctions.
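The blow-up comes from distributing $\land$ over $\lor$: each conjunction of the resulting DNF picks either $X_i$ or $Y_i$ from the $i$-th clause, giving one conjunction per choice. A small counting sketch:

```python
from itertools import product

# Distributing ∧ over ∨ in (X1 ∨ Y1) ∧ … ∧ (Xn ∨ Yn) yields one
# conjunction per way of picking Xi or Yi from each clause: 2^n in total.
def distribute(n):
    clauses = [(f"X{i}", f"Y{i}") for i in range(1, n + 1)]
    return list(product(*clauses))

for n in (1, 2, 3, 4):
    assert len(distribute(n)) == 2**n
```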
The Boolean satisfiability problem on conjunctive normal form formulas is NP-complete. By the duality principle, so is the falsifiability problem on DNF formulas. Therefore, it is co-NP-hard to decide if a DNF formula is a tautology.
Conversely, a DNF formula is satisfiable if, and only if, one of its conjunctions is satisfiable. This can be decided in polynomial time simply by checking that at least one conjunction does not contain conflicting literals.
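The polynomial-time check can be sketched in a few lines. In this sketch literals are strings, with `~` marking negation (a convention of the sketch, not of the text):

```python
# A conjunction is satisfiable iff it contains no complementary pair of
# literals; a DNF is satisfiable iff some conjunction is.
def dnf_satisfiable(conjunctions):
    for conj in conjunctions:
        positive = {lit for lit in conj if not lit.startswith("~")}
        negative = {lit[1:] for lit in conj if lit.startswith("~")}
        if not (positive & negative):
            return True  # this conjunction alone can be made true
    return False

assert dnf_satisfiable([["a", "~a"], ["b", "~c"]])      # second conjunct works
assert not dnf_satisfiable([["a", "~a"], ["b", "~b"]])  # every conjunct conflicts
```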
An important variation used in the study of computational complexity is k-DNF. A formula is in k-DNF if it is in DNF and each conjunction contains at most k literals.[20] | https://en.wikipedia.org/wiki/Disjunctive_normal_form
Logic optimization is a process of finding an equivalent representation of the specified logic circuit under one or more specified constraints. This process is a part of logic synthesis applied in digital electronics and integrated circuit design.
Generally, the circuit is constrained to a minimum chip area meeting a predefined response delay. The goal of logic optimization of a given circuit is to obtain the smallest logic circuit that evaluates to the same values as the original one.[1] Usually, the smaller circuit with the same function is cheaper,[2] takes less space, consumes less power, has shorter latency, and minimizes risks of unexpected cross-talk, hazards of delayed signal processing, and other issues present at the nano-scale level of metallic structures on an integrated circuit.
In terms of Boolean algebra, the optimization of a complex Boolean expression is a process of finding a simpler one, which would upon evaluation ultimately produce the same results as the original one.
The problem with having a complicated circuit (i.e. one with many elements, such as logic gates) is that each element takes up physical space and costs time and money to produce. Circuit minimization may be one form of logic optimization used to reduce the area of complex logic in integrated circuits.
With the advent of logic synthesis, one of the biggest challenges faced by the electronic design automation (EDA) industry was to find the simplest circuit representation of the given design description.[nb 1] While two-level logic optimization had long existed in the form of the Quine–McCluskey algorithm, later followed by the Espresso heuristic logic minimizer, the rapidly improving chip densities and the wide adoption of hardware description languages for circuit description formalized the logic optimization domain as it exists today, including Logic Friday (graphical interface), Minilog, and ESPRESSO-IISOJS (many-valued logic).[3]
The methods of logic circuit simplification are equally applicable to Boolean expression minimization.
Today, logic optimization is divided into various categories:
Graphical methods represent the required logical function by a diagram representing the logic variables and value of the function. By manipulating or inspecting a diagram, much tedious calculation may be eliminated.
Graphical minimization methods for two-level logic include:
The same methods of Boolean expression minimization (simplification) listed below may be applied to circuit optimization.
For the case when the Boolean function is specified by a circuit (that is, we want to find an equivalent circuit of minimum size possible), the unbounded circuit minimization problem was long conjectured to be $\Sigma_2^P$-complete in time complexity, a result finally proved in 2008,[4] but there are effective heuristics such as Karnaugh maps and the Quine–McCluskey algorithm that facilitate the process.
Boolean function minimizing methods include:
Methods that find optimal circuit representations of Boolean functions are often referred to as exact synthesis in the literature. Due to the computational complexity, exact synthesis is tractable only for small Boolean functions. Recent approaches map the optimization problem to a Boolean satisfiability problem.[5][6] This allows finding optimal circuit representations using a SAT solver.
A heuristic method uses established rules that solve a practical, useful subset of the much larger possible set of problems. The heuristic method may not produce the theoretically optimal solution, but if useful, will provide most of the optimization desired with a minimum of effort. An example of a computer system that uses heuristic methods for logic optimization is the Espresso heuristic logic minimizer.
While a two-level representation strictly refers to the flattened view of a circuit in terms of SOPs (sums of products), which is more applicable to a PLA implementation of the design,[clarification needed] a multi-level representation is a more generic view of the circuit in terms of arbitrarily connected SOPs, POSs (products of sums), factored forms, etc. Logic optimization algorithms generally work either on the structural (SOPs, factored form) or the functional representation (binary decision diagrams, algebraic decision diagrams) of the circuit. In sum-of-products (SOP) form, AND gates form the smallest unit and are stitched together using ORs, whereas in product-of-sums (POS) form it is the opposite. POS form requires parentheses to group the OR terms together under AND gates, because OR has lower precedence than AND. Both SOP and POS forms translate nicely into circuit logic.
If we have two functions F1 and F2:
The above two-level representation takes six product terms and 24 transistors in a CMOS realization.
A functionally equivalent multi-level representation can be:
While the number of levels here is 3, the total number of product terms and literals is reduced[quantify] because of the sharing of the term B + C.
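Since the concrete F1/F2 expressions are not reproduced above, the sketch below uses a hypothetical pair (an assumption of this sketch, not the article's example) to show how factoring out a shared subterm P = B + C preserves the functions:

```python
from itertools import product

# Hypothetical two-level SOP functions sharing the subterm B + C:
def f1(a, b, c, d):
    return (a and b) or (a and c) or (a and d)

def f2(a, b, c, e):
    return (not a and b) or (not a and c) or (not a and e)

# Multi-level versions that compute P = B + C once and share it:
def f1_ml(a, b, c, d):
    p = b or c
    return a and (p or d)

def f2_ml(a, b, c, e):
    p = b or c
    return (not a) and (p or e)

# Exhaustive equivalence check over all input combinations.
for v in product([False, True], repeat=4):
    assert f1(*v) == f1_ml(*v)
    assert f2(*v) == f2_ml(*v)
```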
Similarly, we distinguish between combinational circuits and sequential circuits. Combinational circuits produce their outputs based only on the current inputs. They can be represented by Boolean relations. Some examples are priority encoders, binary decoders, multiplexers, and demultiplexers.
Sequential circuits produce their outputs based on both current and past inputs, depending on a clock signal to distinguish the previous inputs from the current inputs. They can be represented by finite state machines. Some examples are flip-flops and counters.
While there are many ways to minimize a circuit, this is an example that minimizes (or simplifies) a Boolean function. The Boolean function carried out by the circuit is directly related to the algebraic expression from which the function is implemented.[7] Consider the circuit used to represent $(A\wedge{\bar{B}})\vee({\bar{A}}\wedge B)$. It is evident that two negations, two conjunctions, and a disjunction are used in this statement. This means that to build the circuit one would need two inverters, two AND gates, and an OR gate.
The circuit can be simplified (minimized) by applying laws of Boolean algebra or using intuition. Since the example states that $A$ is true when $B$ is false and the other way around, one can conclude that this simply means $A\neq B$. In terms of logic gates, inequality simply means an XOR gate (exclusive or). Therefore, $(A\wedge{\bar{B}})\vee({\bar{A}}\wedge B)\iff A\neq B$. Then the two circuits shown below are equivalent, as can be checked using a truth table: | https://en.wikipedia.org/wiki/Logic_optimization
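The truth-table check of this equivalence is a four-row computation:

```python
from itertools import product

# (A ∧ ¬B) ∨ (¬A ∧ B) is equivalent to A ≠ B (XOR), so the
# two-inverter/two-AND/one-OR circuit reduces to a single XOR gate.
for a, b in product([False, True], repeat=2):
    assert ((a and not b) or (not a and b)) == (a != b)
```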
Q.E.D. or QED is an initialism of the Latin phrase quod erat demonstrandum, meaning "that which was to be demonstrated". Literally, it states "what was to be shown".[1] Traditionally, the abbreviation is placed at the end of mathematical proofs and philosophical arguments in print publications, to indicate that the proof or the argument is complete.
The phrase quod erat demonstrandum is a translation into Latin from the Greek ὅπερ ἔδει δεῖξαι (hoper edei deixai; abbreviated as ΟΕΔ). The meaning of the Latin phrase is "that [thing] which was to be demonstrated" (with demonstrandum in the gerundive).
The Greek phrase was used by many early Greek mathematicians, including Euclid[2] and Archimedes.
The Latin phrase is attested in a 1501 Euclid translation of Giorgio Valla.[3] Its abbreviation q.e.d. is used once in 1598 by Johannes Praetorius,[4] more in 1643 by Anton Deusing,[5] extensively in 1655 by Isaac Barrow in the form Q.E.D.,[6] and subsequently by many post-Renaissance mathematicians and philosophers.[7]
During the European Renaissance, scholars often wrote in Latin, and phrases such as Q.E.D. were often used to conclude proofs.
Perhaps the most famous use of Q.E.D. in a philosophical argument is found in the Ethics of Baruch Spinoza, published posthumously in 1677.[9] Written in Latin, it is considered by many to be Spinoza's magnum opus. The style and system of the book are, as Spinoza says, "demonstrated in geometrical order", with axioms and definitions followed by propositions. For Spinoza, this is a considerable improvement over René Descartes's writing style in the Meditations, which follows the form of a diary.[10]
There is another Latin phrase with a slightly different meaning, usually shortened similarly, but less common in use. Quod erat faciendum, originating from the Greek geometers' closing ὅπερ ἔδει ποιῆσαι (hoper edei poiēsai), means "which had to be done".[11] Because of the difference in meaning, the two phrases should not be confused.
Euclid used the Greek original of quod erat faciendum (Q.E.F.) to close propositions that were not proofs of theorems, but constructions of geometric objects.[12] For example, Euclid's first proposition, showing how to construct an equilateral triangle given one side, is concluded this way.[13]
There is no common formal English equivalent, although the end of a proof may be announced with a simple statement such as "thus it is proved", "this completes the proof", "as required", "as desired", "as expected", "hence proved", "ergo", "so correct", or other similar phrases.
Due to the paramount importance of proofs in mathematics, mathematicians since the time of Euclid have developed conventions to demarcate the beginning and end of proofs. In printed English-language texts, the formal statements of theorems, lemmas, and propositions are set in italics by tradition. The beginning of a proof usually follows immediately thereafter, and is indicated by the word "proof" in boldface or italics. On the other hand, several symbolic conventions exist to indicate the end of a proof.
While some authors still use the classical abbreviation Q.E.D., it is relatively uncommon in modern mathematical texts. Paul Halmos claims to have pioneered the use of a solid black square (or rectangle) at the end of a proof as a Q.E.D. symbol,[14] a practice which has become standard, although not universal. Halmos noted that he adopted this use of a symbol from magazine typography customs, in which simple geometric shapes had been used to indicate the end of an article, so-called end marks.[15][16] This symbol was later called the tombstone, the Halmos symbol, or even a halmos by mathematicians. Often the Halmos symbol is drawn on a chalkboard to signal the end of a proof during a lecture, although this practice is not so common as its use in printed text.
The tombstone symbol appears in TeX as the character $\blacksquare$ (filled square, \blacksquare) and sometimes as a $\square$ (hollow square, \square or \Box).[17] In the AMS theorem environment for LaTeX, the hollow square is the default end-of-proof symbol. Unicode explicitly provides the "end of proof" character, U+220E (∎). Some authors use other Unicode symbols to note the end of a proof, including ▮ (U+25AE, a black vertical rectangle) and ‣ (U+2023, a triangular bullet). Other authors have adopted two forward slashes (//) or four forward slashes (////).[18] In other cases, authors have elected to segregate proofs typographically, by displaying them as indented blocks.[19]
In Joseph Heller's 1961 novel Catch-22, the Chaplain, having been told to examine a forged letter allegedly signed by him (which he knew he didn't sign), verified that his name was in fact there. His investigator replied, "Then you wrote it. Q.E.D." The chaplain said he did not write it and that it was not his handwriting, to which the investigator replied, "Then you signed your name in somebody else's handwriting again."[20] | https://en.wikipedia.org/wiki/Q.E.D.
A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula. As formulas are entirely constituted of symbols of various types, many symbols are needed for expressing all mathematics.
The most basic symbols are the decimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) and the letters of the Latin alphabet. The decimal digits are used for representing numbers through the Hindu–Arabic numeral system. Historically, upper-case letters were used for representing points in geometry, and lower-case letters were used for variables and constants. Letters are used for representing many other types of mathematical object. As the number of these types has increased, the Greek alphabet and some Hebrew letters have also come to be used. For more symbols, other typefaces are also used, mainly boldface $\mathbf{a,A,b,B},\ldots$, script typeface $\mathcal{A,B},\ldots$ (the lower-case script face is rarely used because of the possible confusion with the standard face), German fraktur $\mathfrak{a,A,b,B},\ldots$, and blackboard bold $\mathbb{N,Z,Q,R,C,H,F}_q$ (the other letters are rarely used in this face, or their use is unconventional). It is commonplace to use alphabets, fonts and typefaces to group symbols by type.
The use of specific Latin and Greek letters as symbols for denoting mathematical objects is not described in this article. For such uses, see Variable § Conventional variable names and List of mathematical constants. However, some symbols that are described here have the same shape as the letter from which they are derived, such as $\textstyle\prod$ and $\textstyle\sum$.
These letters alone are not sufficient for the needs of mathematicians, and many other symbols are used. Some take their origin in punctuation marks and diacritics traditionally used in typography; others have been created by deforming letter forms, as in the cases of $\in$ and $\forall$. Others, such as + and =, were specially designed for mathematics.
Several logical symbols are widely used in all mathematics, and are listed here. For symbols that are used only in mathematical logic, or are rarely used, see List of logic symbols.
The blackboard bold typeface is widely used for denoting the basic number systems. These systems are often also denoted by the corresponding uppercase bold letter. A clear advantage of blackboard bold is that these symbols cannot be confused with anything else. This allows using them in any area of mathematics, without having to recall their definition. For example, if one encounters $\mathbb{R}$ in combinatorics, one should immediately know that this denotes the real numbers, although combinatorics does not study the real numbers (but it uses them for many proofs).
Many types of bracket are used in mathematics. Their meanings depend not only on their shapes, but also on the nature and the arrangement of what is delimited by them, and sometimes on what appears between or before them. For this reason, in the entry titles, the symbol □ is used as a placeholder for schematizing the syntax that underlies the meaning.
In this section, the symbols that are listed are used as some sort of punctuation marks in mathematical reasoning, or as abbreviations of natural-language phrases. They are generally not used inside a formula. Some were used in classical logic for indicating the logical dependence between sentences written in plain language. Except for the first two, they are normally not used in printed mathematical texts since, for readability, it is generally recommended to have at least one word between two formulas. However, they are still used on a blackboard for indicating relationships between formulas. | https://en.wikipedia.org/wiki/List_of_mathematical_symbols
In mathematics, a constructive proof is a method of proof that demonstrates the existence of a mathematical object by creating or providing a method for creating the object. This is in contrast to a non-constructive proof (also known as an existence proof or pure existence theorem), which proves the existence of a particular kind of object without providing an example. To avoid confusion with the stronger concept that follows, such a constructive proof is sometimes called an effective proof.
A constructive proof may also refer to the stronger concept of a proof that is valid in constructive mathematics. Constructivism is a mathematical philosophy that rejects all proof methods that involve the existence of objects that are not explicitly built. This excludes, in particular, the use of the law of the excluded middle, the axiom of infinity, and the axiom of choice. Constructivism also induces a different meaning for some terminology (for example, the term "or" has a stronger meaning in constructive mathematics than in classical mathematics).[1]
Some non-constructive proofs show that if a certain proposition is false, a contradiction ensues; consequently the proposition must be true (proof by contradiction). However, the principle of explosion (ex falso quodlibet) has been accepted in some varieties of constructive mathematics, including intuitionism.
Constructive proofs can be seen as defining certified mathematical algorithms: this idea is explored in the Brouwer–Heyting–Kolmogorov interpretation of constructive logic, the Curry–Howard correspondence between proofs and programs, and such logical systems as Per Martin-Löf's intuitionistic type theory, and Thierry Coquand and Gérard Huet's calculus of constructions.
Until the end of the 19th century, all mathematical proofs were essentially constructive. The first non-constructive constructions appeared with Georg Cantor's theory of infinite sets, and the formal definition of real numbers.
The first use of non-constructive proofs for solving previously considered problems seems to be Hilbert's Nullstellensatz and Hilbert's basis theorem. From a philosophical point of view, the former is especially interesting, as it implies the existence of a well-specified object.
The Nullstellensatz may be stated as follows: if $f_1,\ldots,f_k$ are polynomials in $n$ indeterminates with complex coefficients, which have no common complex zeros, then there are polynomials $g_1,\ldots,g_k$ such that $f_1g_1+\cdots+f_kg_k=1.$
Such a non-constructive existence theorem was such a surprise for mathematicians of that time that one of them, Paul Gordan, wrote: "this is not mathematics, it is theology".[2]
Twenty-five years later, Grete Hermann provided an algorithm for computing $g_1,\ldots,g_k$, which is not a constructive proof in the strong sense, as she used Hilbert's result. She proved that, if $g_1,\ldots,g_k$ exist, they can be found with degrees less than $2^{2^n}$.[3]
This provides an algorithm, as the problem is reduced to solving a system of linear equations, by considering as unknowns the finite number of coefficients of the $g_i$.
First consider the theorem that there are infinitely many prime numbers. Euclid's proof is constructive. But a common way of simplifying Euclid's proof postulates that, contrary to the assertion in the theorem, there are only a finite number of them, in which case there is a largest one, denoted $n$. Then consider the number $n!+1$ (one plus the product of the first $n$ numbers). Either this number is prime, or all of its prime factors are greater than $n$. Without establishing a specific prime number, this proves that one exists that is greater than $n$, contrary to the original postulate.
Now consider the theorem "there exist irrational numbers $a$ and $b$ such that $a^b$ is rational." This theorem can be proven by using both a constructive proof and a non-constructive proof.
The following 1953 proof by Dov Jarden has been widely used as an example of a non-constructive proof since at least 1970:[4][5]
CURIOSA 339. A Simple Proof That a Power of an Irrational Number to an Irrational Exponent May Be Rational. $\sqrt{2}^{\sqrt{2}}$ is either rational or irrational. If it is rational, our statement is proved. If it is irrational, $(\sqrt{2}^{\sqrt{2}})^{\sqrt{2}}=2$ proves our statement. Dov Jarden, Jerusalem
In a bit more detail:
At its core, this proof is non-constructive because it relies on the statement "Either $q$ is rational or it is irrational", an instance of the law of excluded middle, which is not valid within a constructive proof. The non-constructive proof does not construct an example $a$ and $b$; it merely gives a number of possibilities (in this case, two mutually exclusive possibilities) and shows that one of them (but does not show which one) must yield the desired example.
As it turns out, $\sqrt{2}^{\sqrt{2}}$ is irrational because of the Gelfond–Schneider theorem, but this fact is irrelevant to the correctness of the non-constructive proof.
A constructive proof of the theorem that a power of an irrational number to an irrational exponent may be rational gives an actual example, such as:
The square root of 2 is irrational, and 3 is rational. $\log_2 9$ is also irrational: if it were equal to $\frac{m}{n}$, then, by the properties of logarithms, $9^n$ would be equal to $2^m$, but the former is odd and the latter is even.
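A numeric illustration (not a proof) of both routes, using the identity $\sqrt{2}^{\log_2 9}=2^{(\log_2 9)/2}=9^{1/2}=3$:

```python
import math

# Constructive example: a = sqrt(2) and b = log2(9) are both irrational,
# yet a**b is the rational number 3.
a = math.sqrt(2)
b = math.log2(9)
assert abs(a**b - 3) < 1e-9

# Non-constructive route from Jarden's proof: (sqrt(2)**sqrt(2))**sqrt(2) = 2.
assert abs((a**a)**a - 2) < 1e-9
```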
A more substantial example is the graph minor theorem. A consequence of this theorem is that a graph can be drawn on the torus if, and only if, none of its minors belong to a certain finite set of "forbidden minors". However, the proof of the existence of this finite set is not constructive, and the forbidden minors are not actually specified.[6] They are still unknown.
In constructive mathematics, a statement may be disproved by giving a counterexample, as in classical mathematics. However, it is also possible to give a Brouwerian counterexample to show that the statement is non-constructive.[7] This sort of counterexample shows that the statement implies some principle that is known to be non-constructive. If it can be proved constructively that the statement implies some principle that is not constructively provable, then the statement itself cannot be constructively provable.
For example, a particular statement may be shown to imply the law of the excluded middle. An example of a Brouwerian counterexample of this type is Diaconescu's theorem, which shows that the full axiom of choice is non-constructive in systems of constructive set theory, since the axiom of choice implies the law of excluded middle in such systems. The field of constructive reverse mathematics develops this idea further by classifying various principles in terms of "how nonconstructive" they are, by showing they are equivalent to various fragments of the law of the excluded middle.
Brouwer also provided "weak" counterexamples.[8] Such counterexamples do not disprove a statement, however; they only show that, at present, no constructive proof of the statement is known. One weak counterexample begins by taking some unsolved problem of mathematics, such as Goldbach's conjecture, which asks whether every even natural number larger than 4 is the sum of two primes. Define a sequence a(n) of rational numbers as follows:[9] a(n) = (1/2)^n if every even number m with 4 < m ≤ n is the sum of two primes, and a(n) = (1/2)^k if k is the least even number with 4 < k ≤ n that is not.
For each n, the value of a(n) can be determined by exhaustive search, and so a is a well-defined sequence, constructively. Moreover, because a is a Cauchy sequence with a fixed rate of convergence, a converges to some real number α, according to the usual treatment of real numbers in constructive mathematics.
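A minimal Python sketch of such a sequence, assuming the standard form (the exact definition in [9] may differ in detail): a(n) = (1/2)^n if every even number in (4, n] is a sum of two primes, and a(n) = (1/2)^k for the least counterexample k otherwise.

```python
from fractions import Fraction

def is_prime(n):
    # Trial division: adequate for an exhaustive small-number search.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_primes(m):
    return any(is_prime(p) and is_prime(m - p) for p in range(2, m // 2 + 1))

def a(n):
    """Weak-counterexample sequence tied to Goldbach's conjecture."""
    for k in range(6, n + 1, 2):
        if not is_sum_of_two_primes(k):
            return Fraction(1, 2 ** k)  # locked to the first counterexample
    return Fraction(1, 2 ** n)

# Every value is computed by exhaustive search, so a is constructively
# well defined; deciding whether its limit α equals 0 would settle Goldbach.
print(a(30))  # 1/2**30, as long as no counterexample <= 30 exists
```

If a Goldbach counterexample k ever appeared, the sequence would freeze at (1/2)^k and α would be positive; otherwise α = 0, which is why "α = 0 or α ≠ 0" is at least as hard as the conjecture itself.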
Several facts about the real number α can be proved constructively. However, based on the different meaning of the words in constructive mathematics, if there is a constructive proof that "α = 0 or α ≠ 0", then this would mean that there is a constructive proof of Goldbach's conjecture (in the former case) or a constructive proof that Goldbach's conjecture is false (in the latter case). Because no such proof is known, the quoted statement must also not have a known constructive proof. However, it is entirely possible that Goldbach's conjecture may have a constructive proof (as we do not know at present whether it does), in which case the quoted statement would have a constructive proof as well, albeit one that is unknown at present. The main practical use of weak counterexamples is to identify the "hardness" of a problem. For example, the counterexample just given shows that the quoted statement is "at least as hard to prove" as Goldbach's conjecture. Weak counterexamples of this sort are often related to the limited principle of omniscience.
In the philosophy of mathematics, constructivism asserts that it is necessary to find (or "construct") a specific example of a mathematical object in order to prove that an example exists. By contrast, in classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. Such a proof by contradiction might be called non-constructive, and a constructivist might reject it. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation.
There are many forms of constructivism.[1] These include the program of intuitionism founded by Brouwer, the finitism of Hilbert and Bernays, the constructive recursive mathematics of Shanin and Markov, and Bishop's program of constructive analysis.[2] Constructivism also includes the study of constructive set theories such as CZF and the study of topos theory.
Constructivism is often identified with intuitionism, although intuitionism is only one constructivist program. Intuitionism maintains that the foundations of mathematics lie in the individual mathematician's intuition, thereby making mathematics into an intrinsically subjective activity.[3] Other forms of constructivism are not based on this viewpoint of intuition, and are compatible with an objective viewpoint on mathematics.
Much constructive mathematics uses intuitionistic logic, which is essentially classical logic without the law of the excluded middle. This law states that, for any proposition, either that proposition is true or its negation is. This is not to say that the law of the excluded middle is denied entirely; special cases of the law will be provable. It is just that the general law is not assumed as an axiom. The law of non-contradiction (which states that contradictory statements cannot both be true at the same time) is still valid.
For instance, in Heyting arithmetic, one can prove that for any proposition p that does not contain quantifiers, {\displaystyle \forall x,y,z,\ldots \in \mathbb {N} :p\vee \neg p} is a theorem (where x, y, z, ... are the free variables in the proposition p). In this sense, propositions restricted to the finite are still regarded as being either true or false, as they are in classical mathematics, but this bivalence does not extend to propositions that refer to infinite collections.
In fact, L. E. J. Brouwer, founder of the intuitionist school, viewed the law of the excluded middle as abstracted from finite experience, and then applied to the infinite without justification. For instance, Goldbach's conjecture is the assertion that every even number greater than 2 is the sum of two prime numbers. It is possible to test for any particular even number whether it is the sum of two primes (for instance by exhaustive search), so any one of them is either the sum of two primes or it is not. And so far, every one thus tested has in fact been the sum of two primes.
But there is no known proof that all of them are so, nor any known proof that not all of them are so; nor is it even known whether either a proof or a disproof of Goldbach's conjecture must exist (the conjecture may be undecidable in traditional ZF set theory). Thus to Brouwer, we are not justified in asserting "either Goldbach's conjecture is true, or it is not." And while the conjecture may one day be solved, the argument applies to similar unsolved problems. To Brouwer, the law of the excluded middle is tantamount to assuming that every mathematical problem has a solution.
With the omission of the law of the excluded middle as an axiom, the remaining logical system has an existence property that classical logic does not have: whenever {\displaystyle \exists _{x\in X}P(x)} is proven constructively, then in fact {\displaystyle P(a)} is proven constructively for (at least) one particular {\displaystyle a\in X}, often called a witness. Thus the proof of the existence of a mathematical object is tied to the possibility of its construction.
In classical real analysis, one way to define a real number is as an equivalence class of Cauchy sequences of rational numbers.
In constructive mathematics, one way to construct a real number is as a function ƒ that takes a positive integer n and outputs a rational ƒ(n), together with a function g that takes a positive integer n and outputs a positive integer g(n), such that |ƒ(i) − ƒ(j)| ≤ 1/n for all i, j ≥ g(n),
so that as n increases, the values of ƒ(n) get closer and closer together. We can use ƒ and g together to compute as close a rational approximation as we like to the real number they represent.
Under this definition, a simple representation of the real number e is: ƒ(n) = 1 + 1/1! + 1/2! + ... + 1/n!, with g(n) = n.
This definition corresponds to the classical definition using Cauchy sequences, except with a constructive twist: for a classical Cauchy sequence, it is required that, for any given distance, there exists (in a classical sense) a member in the sequence after which all members are closer together than that distance. In the constructive version, it is required that, for any given distance, it is possible to actually specify a point in the sequence where this happens (this required specification is often called the modulus of convergence). In fact, the standard constructive interpretation of the mathematical statement {\displaystyle \forall n\,\exists m\,\forall i,j\geq m:|f(i)-f(j)|\leq {1 \over n}}
is precisely the existence of the function computing the modulus of convergence. Thus the difference between the two definitions of real numbers can be thought of as the difference in the interpretation of the statement "for all... there exists..."
This then opens the question as to what sort of function from a countable set to a countable set, such as f and g above, can actually be constructed. Different versions of constructivism diverge on this point. Constructions can be defined as broadly as free choice sequences, which is the intuitionistic view, or as narrowly as algorithms (or more technically, the computable functions), or even left unspecified. If, for instance, the algorithmic view is taken, then the reals as constructed here are essentially what classically would be called the computable numbers.
To take the algorithmic interpretation above would seem at odds with classical notions of cardinality. By enumerating algorithms, we can show that the computable numbers are classically countable. And yet Cantor's diagonal argument here shows that real numbers have uncountable cardinality. To identify the real numbers with the computable numbers would then be a contradiction. Furthermore, the diagonal argument seems perfectly constructive.
Indeed, Cantor's diagonal argument can be presented constructively, in the sense that, given a bijection between the natural numbers and real numbers, one constructs a real number not in the function's range, and thereby establishes a contradiction. One can enumerate algorithms to construct a function T, about which we initially assume that it is a function from the natural numbers onto the reals. But, to each algorithm, there may or may not correspond a real number, as the algorithm may fail to satisfy the constraints, or even be non-terminating (T is a partial function), so this fails to produce the required bijection. In short, one who takes the view that real numbers are (individually) effectively computable interprets Cantor's result as showing that the real numbers (collectively) are not recursively enumerable.
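The constructive core of the diagonal argument can be sketched as follows: from any list of digit sequences (reals given digit by digit), one explicitly computes a number differing from the i-th entry in its i-th digit. The encoding below, flipping between digits 4 and 5 to sidestep trailing-9 ambiguity, is a standard presentation choice, not taken from the text:

```python
def diagonal(rows):
    """Given a list of digit sequences (each a function from position to
    decimal digit), return the digits of a number that provably differs
    from row i at position i."""
    return [4 if rows[i](i) == 5 else 5 for i in range(len(rows))]

# Three "enumerated reals", each given by its decimal-digit function.
rows = [
    lambda i: 5,        # 0.5555...
    lambda i: i % 10,   # 0.0123...
    lambda i: 4,        # 0.4444...
]

d = diagonal(rows)
# By construction, the new number's i-th digit differs from row i's.
assert all(d[i] != rows[i](i) for i in range(len(rows)))
print(d)  # [4, 5, 5]
```

Nothing here is non-constructive: the diagonal number is computed outright. The non-constructive step in the classical countability reasoning lies elsewhere, as the surrounding paragraphs explain.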
Still, one might expect that, since T is a partial function from the natural numbers onto the real numbers, the real numbers are no more than countable. And, since every natural number can be trivially represented as a real number, the real numbers are no less than countable. They are, therefore, exactly countable. However, this reasoning is not constructive, as it still does not construct the required bijection. The classical theorem proving the existence of a bijection in such circumstances, namely the Cantor–Bernstein–Schroeder theorem, is non-constructive. It has recently been shown that the Cantor–Bernstein–Schroeder theorem implies the law of the excluded middle, hence there can be no constructive proof of the theorem.[4]
The status of the axiom of choice in constructive mathematics is complicated by the different approaches of different constructivist programs. One trivial meaning of "constructive", used informally by mathematicians, is "provable in ZF set theory without the axiom of choice". However, proponents of more limited forms of constructive mathematics would assert that ZF itself is not a constructive system.
In intuitionistic theories of type theory (especially higher-type arithmetic), many forms of the axiom of choice are permitted. For example, the axiom AC11 can be paraphrased to say that, for any relation R on the set of real numbers, if you have proved that for each real number x there is a real number y such that R(x, y) holds, then there is actually a function F such that R(x, F(x)) holds for all real numbers. Similar choice principles are accepted for all finite types. The motivation for accepting these seemingly nonconstructive principles is the intuitionistic understanding of the proof that "for each real number x there is a real number y such that R(x, y) holds". According to the BHK interpretation, this proof itself is essentially the function F that is desired. The choice principles that intuitionists accept do not imply the law of the excluded middle.
However, in certain axiom systems for constructive set theory, the axiom of choice does imply the law of the excluded middle (in the presence of other axioms), as shown by the Diaconescu–Goodman–Myhill theorem. Some constructive set theories include weaker forms of the axiom of choice, such as the axiom of dependent choice in Myhill's set theory.
Classical measure theory is fundamentally non-constructive, since the classical definition of Lebesgue measure does not describe any way to compute the measure of a set or the integral of a function. In fact, if one thinks of a function just as a rule that "inputs a real number and outputs a real number", then there cannot be any algorithm to compute the integral of a function, since any algorithm would only be able to call finitely many values of the function at a time, and finitely many values are not enough to compute the integral to any nontrivial accuracy. The solution to this conundrum, carried out first in Bishop (1967), is to consider only functions that are written as the pointwise limit of continuous functions (with known modulus of continuity), with information about the rate of convergence. An advantage of constructivizing measure theory is that if one can prove that a set is constructively of full measure, then there is an algorithm for finding a point in that set (again see Bishop (1967)).
Traditionally, some mathematicians have been suspicious, if not antagonistic, towards mathematical constructivism, largely because of limitations they believed it to pose for constructive analysis.
These views were forcefully expressed by David Hilbert in 1928, when he wrote in Grundlagen der Mathematik, "Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists".[5]
Errett Bishop, in his 1967 work Foundations of Constructive Analysis,[2] worked to dispel these fears by developing a great deal of traditional analysis in a constructive framework.
Even though most mathematicians do not accept the constructivist's thesis that only mathematics done based on constructive methods is sound, constructive methods are increasingly of interest on non-ideological grounds.[citation needed] For example, constructive proofs in analysis may ensure witness extraction, in such a way that working within the constraints of the constructive methods may make finding witnesses to theories easier than using classical methods.[citation needed] Applications for constructive mathematics have also been found in typed lambda calculi, topos theory and categorical logic, which are notable subjects in foundational mathematics and computer science. In algebra, for such entities as topoi and Hopf algebras, the structure supports an internal language that is a constructive theory; working within the constraints of that language is often more intuitive and flexible than working externally by such means as reasoning about the set of possible concrete algebras and their homomorphisms.[citation needed]
Physicist Lee Smolin writes in Three Roads to Quantum Gravity that topos theory is "the right form of logic for cosmology" (page 30) and "In its first forms it was called 'intuitionistic logic'" (page 31). "In this kind of logic, the statements an observer can make about the universe are divided into at least three groups: those that we can judge to be true, those that we can judge to be false and those whose truth we cannot decide upon at the present time" (page 28).
Ordinary language philosophy (OLP[1]) is a philosophical methodology that sees traditional philosophical problems as rooted in misunderstandings philosophers develop by distorting or forgetting how words are ordinarily used to convey meaning in non-philosophical contexts. "Such 'philosophical' uses of language, on this view, create the very philosophical problems they are employed to solve."[2]
This approach typically involves eschewing philosophical "theories" in favor of close attention to the details of the use of everyday "ordinary" language. Its earliest forms are associated with the later work of Ludwig Wittgenstein and a number of mid-20th century philosophers who can be split into two main groups, neither of which could be described as an organized "school".[3] In its earlier stages, contemporaries of Wittgenstein at Cambridge University such as Norman Malcolm, Alice Ambrose, Friedrich Waismann, Oets Kolk Bouwsma and Morris Lazerowitz started to develop ideas recognisable as ordinary language philosophy. These ideas were further elaborated from 1945 onwards through the work of some Oxford University philosophers led initially by Gilbert Ryle, then followed by J. L. Austin and Paul Grice. This Oxford group also included H. L. A. Hart, Geoffrey Warnock, J. O. Urmson and P. F. Strawson. The close association between ordinary language philosophy and these later thinkers has led to it sometimes being called "Oxford philosophy". The posthumous publication of Wittgenstein's Philosophical Investigations in 1953 further solidified the notion of ordinary language philosophy. Philosophers a generation after Austin who made use of the method of ordinary language philosophy include Antony Flew, Stanley Cavell, John Searle and Oswald Hanfling. Today, Alice Crary, Nancy Bauer, Sandra Laugier, as well as literary theorists Toril Moi, Rita Felski, and Shoshana Felman have adopted the teachings of Cavell in particular, generating a resurgence of interest in ordinary language philosophy.
The later Wittgenstein held that the meanings of words reside in their ordinary uses and that this is why philosophers trip over words taken in abstraction. From this came the idea that philosophy had gotten into trouble by trying to use words outside of the context of their use in ordinary language. For example, "understanding" is what you mean when you say "I understand". "Knowledge" is what you mean when you say "I know". The point is that you already know what "understanding" or "knowledge" are, at least implicitly. Philosophers are ill-advised to construct new definitions of these terms, because this is necessarily a redefinition, and the argument may unravel into self-referential nonsense. Rather, philosophers must explore the definitions these terms already have, without forcing convenient redefinitions onto them.
The controversy really begins when ordinary language philosophers apply the same leveling tendency to questions such as What is Truth? or What is Consciousness? Philosophers in this school would insist that we cannot assume that (for example) truth 'is' a 'thing' (in the same sense that tables and chairs are 'things') that the word 'truth' represents. Instead, we must look at the differing ways in which the words 'truth' and 'conscious' actually function in ordinary language. We may well discover, after investigation, that there is no single entity to which the word 'truth' corresponds, something Wittgenstein attempts to get across via his concept of a 'family resemblance' (cf. Philosophical Investigations). Therefore, ordinary language philosophers tend to be anti-essentialist.
Early analytic philosophy had a less positive view of ordinary language. Bertrand Russell tended to dismiss language as being of little philosophical significance, and ordinary language as just too confused to help solve metaphysical and epistemological problems. Gottlob Frege, the Vienna Circle (especially Rudolf Carnap), the young Wittgenstein, and W. V. O. Quine all attempted to improve upon it, in particular using the resources of modern logic. In his Tractatus Logico-Philosophicus, Wittgenstein more or less agreed with Russell that language ought to be reformulated so as to be unambiguous, so as to accurately represent the world, so that we can better deal with philosophical questions.
By contrast, Wittgenstein later described his task as bringing "words back from their metaphysical to their everyday use".[4] The sea change brought on by his unpublished work in the 1930s centered largely on the idea that there is nothing wrong with ordinary language as it stands, and that many traditional philosophical problems are only illusions brought on by misunderstandings about language and related subjects. The former idea led to rejecting the approaches of earlier analytic philosophy—arguably, of any earlier philosophy—and the latter led to replacing them with careful attention to language in its normal use, in order to "dissolve" the appearance of philosophical problems, rather than attempt to solve them. At its inception, ordinary language philosophy (also called linguistic philosophy) was taken as either an extension of or as an alternative to analytic philosophy.
Ordinary language analysis largely flourished and developed at Oxford University in the 1940s, under Austin and Ryle, and was quite widespread for a time before declining rapidly in popularity in the late 1960s and early 1970s. Despite this decline, Stanley Cavell and John Searle (both students of Austin) published seminal texts in 1969 which draw significantly from the ordinary language tradition.[5][6] Cavell more explicitly adopted the banner of ordinary language philosophy and inspired a generation of philosophers and literary theorists to reexamine the merits of this philosophical approach, all the while distancing himself from the limitations of traditional analytic philosophy. The resulting, relatively recent resurgence of interest in this methodology, with some updates due particularly to the literature and teachings of Cavell, has also become a mainstay of what might be called postanalytic philosophy. Seeking to avoid the increasingly metaphysical and abstruse language found in mainstream analytic philosophy, posthumanism, and post-structuralism, a number of feminist philosophers have adopted the methods of ordinary language philosophy.[7] Many of these philosophers were students or colleagues of Cavell.
There are some affinities between contemporary ordinary language philosophy and philosophical pragmatism (or neopragmatism). The pragmatist philosopher F. C. S. Schiller might be seen as a forerunner to ordinary language philosophy, especially in his noted publication Riddles of the Sphinx.[8]
Seneca the Younger described the activities of other philosophers in ways that reflect some of the same concerns as ordinary language philosophers.[9]
For these men, too, have left to us, not positive discoveries, but problems whose solution is still to be sought. They might perhaps have discovered the essentials, had they not sought the superfluous also. They lost much time in quibbling about words and in sophistical argumentation; all that sort of thing exercises the wit to no purpose. We tie knots and bind up words in double meanings, and then try to untie them. Have we leisure enough for this? Do we already know how to live, or die? We should rather proceed with our whole souls towards the point where it is our duty to take heed lest things, as well as words, deceive us. Why, pray, do you discriminate between similar words, when nobody is ever deceived by them except during the discussion? It is things that lead us astray: it is between things that you must discriminate.
One of the most ardent critics of ordinary language philosophy was a student at Oxford (and later a philosopher himself), Ernest Gellner, who said:[10]
"[A]t that time the orthodoxy best described as linguistic philosophy, inspired by Wittgenstein, was crystallizing and seemed to me totally and utterly misguided. Wittgenstein's basic idea was that there is no general solution to issues other than the custom of the community. Communities are ultimate. He didn't put it this way, but that was what it amounted to. And this doesn't make sense in a world in which communities are not stable and are not clearly isolated from each other. Nevertheless, Wittgenstein managed to sell this idea, and it was enthusiastically adopted as an unquestionable revelation. It is very hard nowadays for people to understand what the atmosphere was like then. This was the Revelation. It wasn't doubted. But it was quite obvious to me it was wrong. It was obvious to me the moment I came across it, although initially, if your entire environment, and all the bright people in it, hold something to be true, you assume you must be wrong, not understanding it properly, and they must be right. And so I explored it further and finally came to the conclusion that I did understand it right, and it was rubbish, which indeed it is."
Gellner criticized ordinary language philosophy in his book Words and Things, published in 1959.
Metaphilosophy, sometimes called the philosophy of philosophy, is "the investigation of the nature of philosophy".[1] Its subject matter includes the aims of philosophy, the boundaries of philosophy, and its methods.[2][3] Thus, while philosophy characteristically inquires into the nature of being, the reality of objects, the possibility of knowledge, the nature of truth, and so on, metaphilosophy is the self-reflective inquiry into the nature, aims, and methods of the activity that makes these kinds of inquiries, by asking what is philosophy itself, what sorts of questions it should ask, how it might pose and answer them, and what it can achieve in doing so. It is considered by some to be a subject prior and preparatory to philosophy,[4] while others see it as inherently a part of philosophy,[5] or automatically a part of philosophy,[6] while still others adopt some combination of these views.[2]
The interest in metaphilosophy led to the establishment of the journal Metaphilosophy in January 1970.[7]
Many sub-disciplines of philosophy have their own branch of 'metaphilosophy', examples being meta-aesthetics, meta-epistemology, meta-ethics, and metametaphysics (meta-ontology).[8]
Although the term metaphilosophy and explicit attention to metaphilosophy as a specific domain within philosophy arose in the 20th century, the topic is likely as old as philosophy itself, and can be traced back at least as far as the works of the Ancient Greeks and Ancient Indian Nyaya.[9]
Some philosophers consider metaphilosophy to be a subject apart from philosophy, above or beyond it,[4] while others object to that idea.[5] Timothy Williamson argues that the philosophy of philosophy is "automatically part of philosophy", as is the philosophy of anything else.[6] Ludwig Wittgenstein argued that there is no "second-order philosophy", in the same way that an explanation of the spelling of "spelling" is not second-order spelling,[10] or the orthography of the word 'orthography' is not second-order orthography.[11] Nicholas Bunnin and Jiyuan Yu write that the separation of first- from second-order study has lost popularity as philosophers find it hard to observe the distinction.[12] As evidenced by these contrasting opinions, debate persists as to whether the evaluation of the nature of philosophy is 'second-order philosophy' or simply 'plain philosophy'.
Many philosophers have expressed doubts over the value of metaphilosophy.[13] Among them is Gilbert Ryle: "preoccupation with questions about methods tends to distract us from prosecuting the methods themselves. We run, as a rule, worse, not better, if we think a lot about our feet. So let us ... not speak of it all but just do it."[14]
The designations metaphilosophy and philosophy of philosophy have a variety of meanings, sometimes taken to be synonyms, and sometimes seen as distinct.
Morris Lazerowitz claims to have coined the term 'metaphilosophy' around 1940 and used it in print in 1942.[1] Lazerowitz proposed that metaphilosophy is 'the investigation of the nature of philosophy'.[1] Earlier uses have been found in translations from French.[15] The term is derived from the Greek words meta μετά ("after", "beyond", "with") and philosophía φιλοσοφία ("love of wisdom").
The term 'metaphilosophy' is used by Paul Moser[16] in the sense of a 'second-order' or more fundamental undertaking than philosophy itself, in the manner suggested by Charles Griswold:[4]
"The distinction between philosophy and metaphilosophy has an analogue in the familiar distinction between mathematics and metamathematics."[16]
Some other philosophers treat the prefix meta as simply meaning 'about...', rather than as referring to a metatheoretical 'second-order' form of philosophy, among them Rescher[17] and Double.[18] Others, such as Williamson, prefer the term 'philosophy of philosophy' instead of 'metaphilosophy', as it avoids the connotation of a 'second-order' discipline that looks down on philosophy, and instead denotes something that is a part of it.[19] Joll suggests that to take metaphilosophy as 'the application of the methods of philosophy to philosophy itself' is too vague, while the view that sees metaphilosophy as a 'second-order' or more abstract discipline, outside philosophy, "is narrow and tendentious".[20]
In the analytic tradition, the term "metaphilosophy" is mostly used to tag commenting and research on previous works, as opposed to original contributions towards solving philosophical problems.[21]
Ludwig Wittgenstein wrote about the nature of philosophical puzzles and philosophical understanding. He suggested philosophical errors arose from confusions about the nature of philosophical inquiry.
C. D. Broad distinguished Critical from Speculative philosophy in his "The Subject-matter of Philosophy, and its Relations to the special Sciences", in Introduction to Scientific Thought, 1923. Curt Ducasse, in Philosophy as a Science, examines several views of the nature of philosophy, and concludes that philosophy has a distinct subject matter: appraisals. Ducasse's view has been among the first to be described as 'metaphilosophy'.[22]
Henri Lefebvre, in Métaphilosophie (1965), argued, from a Marxian standpoint, in favor of an "ontological break" as a necessary methodological approach for critical social theory (whilst criticizing Louis Althusser's "epistemological break" with subjective Marxism, which represented a fundamental theoretical tool for the school of Marxist structuralism).
Paul Moser writes that typical metaphilosophical discussion includes determining the conditions under which a claim can be said to be a philosophical one. He regards meta-ethics, the study of ethics, to be a form of metaphilosophy, as well as meta-epistemology, the study of epistemology.[16]
Many sub-disciplines of philosophy have their own branch of 'metaphilosophy'.[8] However, some topics within 'metaphilosophy' cut across the various subdivisions of philosophy to consider fundamentals important to all its sub-disciplines.
Some philosophers (e.g. existentialists, pragmatists) think philosophy is ultimately a practical discipline that should help us lead meaningful lives by showing us who we are, how we relate to the world around us and what we should do.[citation needed] Others (e.g. analytic philosophers) see philosophy as a technical, formal, and entirely theoretical discipline, with goals such as "the disinterested pursuit of knowledge for its own sake".[23] Other proposed goals of philosophy include discovering the absolutely fundamental reason of everything it investigates, making explicit the nature and significance of ordinary and scientific beliefs,[24] and unifying and transcending the insights given by science and religion.[25] Others have proposed that philosophy is a complex discipline because it has four or six different dimensions.[26][27]
Defining philosophy and its boundaries is itself problematic; Nigel Warburton has called it "notoriously difficult".[28] There is no straightforward definition,[25] and most interesting definitions are controversial.[29] As Bertrand Russell wrote:
"We may note one peculiar feature of philosophy. If someone asks the question what is mathematics, we can give him a dictionary definition, let us say the science of number, for the sake of argument. As far as it goes this is an uncontroversial statement... Definitions may be given in this way of any field where a body of definite knowledge exists. But philosophy cannot be so defined. Any definition is controversial and already embodies a philosophic attitude. The only way to find out what philosophy is, is to do philosophy."[30]
While there is some agreement that philosophy involves general or fundamental topics,[23][31] there is no clear agreement about a series of demarcation issues, including:
Philosophical method (or philosophical methodology) is the study of how to do philosophy. A common view among philosophers is that philosophy is distinguished by the methods philosophers use in addressing philosophical questions. There is not just one method that philosophers use to answer philosophical questions.
C. D. Broad classified philosophy into two methods: he distinguished between critical philosophy and speculative philosophy. He described critical philosophy as analysing "unanalysed concepts in daily life and in science" and then "expos[ing] them to every objection that we can think of", while speculative philosophy's role is to "take over all aspects of human experience, to reflect upon them, and to try to think out a view of Reality as a whole which shall do justice to all of them".[36]
Recently, some philosophers have cast doubt on intuition as a basic tool in philosophical inquiry, from Socrates up to contemporary philosophy of language. In Rethinking Intuition,[37] various thinkers discard intuition as a valid source of knowledge and thereby call into question 'a priori' philosophy. Experimental philosophy is a form of philosophical inquiry that makes at least partial use of empirical research, especially opinion polling, in order to address persistent philosophical questions. This is in contrast with the methods found in analytic philosophy, whereby some say a philosopher will sometimes begin by appealing to his or her intuitions on an issue and then form an argument with those intuitions as premises. However, disagreement about what experimental philosophy can accomplish is widespread, and several philosophers have offered criticisms. One claim is that the empirical data gathered by experimental philosophers can have an indirect effect on philosophical questions by allowing for a better understanding of the underlying psychological processes which lead to philosophical intuitions.[38] Some analytic philosophers like Timothy Williamson[39] have rejected such a move against 'armchair' philosophy (philosophical inquiry that is undergirded by intuition) by construing 'intuition' (which they believe to be a misnomer) as merely referring to common cognitive faculties: if one is calling into question 'intuition', one is, they would say, harboring a skeptical attitude towards common cognitive faculties, a consequence that seems philosophically unappealing. For Williamson, instances of intuition are instances of our cognitive faculties processing counterfactuals[40] (or subjunctive conditionals) that are specific to the thought experiment or example in question.
A prominent question in metaphilosophy is whether philosophical progress occurs and, moreover, whether such progress in philosophy is even possible.[41]
David Chalmersdivides inquiry into philosophical progress in metaphilosophy into three questions.
Ludwig Wittgenstein, in Culture and Value, remarked, "Philosophy hasn't made any progress? - If somebody scratches the spot where he has an itch, do we have to see some progress?...And can't this reaction to an irritation continue in the same way for a long time before the cure for an itching is discovered?".[43]
According to Hilary Putnam, philosophy is more adept at showing people that specific ideas or arguments are wrong than that specific ideas or arguments are right.[44]
The internal–external distinction is a distinction used in philosophy to divide an ontology into two parts: an internal part concerning observations related to philosophy, and an external part concerning questions related to philosophy.
Rudolf Carnap introduced the idea of a 'linguistic framework' or a 'form of language' that uses a precise specification of the definitions of and the relations between ontological entities. The discussion of a proposition within a framework can take on a logical or an empirical (that is, factual) aspect. The logical aspect concerns whether the proposition respects the definitions and rules set up in the framework. The empirical aspect concerns the application of the framework in some or another practical situation.
“If someone wishes to speak in his language about a new kind of entities, he has to introduce a system of new ways of speaking, subject to new rules; we shall call this procedure the construction of a linguistic framework for the new entities in question.”
“After the new forms are introduced into the language, it is possible to formulate with their help internal questions and possible answers to them. A question of this kind may be either empirical or logical; accordingly a true answer is either factually true or analytic.”
The utility of a linguistic framework raises the issues that Carnap calls 'external' or 'pragmatic'.
“To be sure, we have to face at this point an important question; but it is a practical, not a theoretical question; it is the question of whether or not to accept the new linguistic forms. The acceptance cannot be judged as being either true or false because it is not an assertion. It can only be judged as being more or less expedient, fruitful, conducive to the aim for which the language is intended. Judgments of this kind supply the motivation for the decision of accepting or rejecting the kind of entities.”
“the decisive question is not the alleged ontological question of the existence of abstract entities but rather the question whether the rise of abstract linguistic forms or, in technical terms, the use of variables beyond those for things (or phenomenal data), is expedient and fruitful for the purposes for which semantical analyses are made, viz. the analysis, interpretation, clarification, or construction of languages of communication, especially languages of science.”
The distinction between 'internal' and 'external' arguments is not as obvious as it might appear. For example, discussion of the imaginary unit √−1 might be an internal question framed in the language of complex numbers about the correct usage of √−1, or it might be a question about the utility of complex numbers: whether there is any practical advantage in using √−1.[1] Clearly the question of utility is not completely separable from the way a linguistic framework is organized. A more formal statement of the internal-external difference is provided by Myhill:
"A question...is internal relative to [a linguistic framework] T if the asker accepts T at the time of his asking, and is prepared to use T in order to obtain an answer; external otherwise, in particular if the question is part of a chain of reflections and discussions aimed at choosing between T and some rival theory."[2]
Quine disputed Carnap's position from several points of view. His most famous criticism of Carnap was Two Dogmas of Empiricism, but this work is not directed at the internal-external distinction but at the analytic–synthetic distinction brought up by Carnap in his work on logic: Meaning and Necessity.[3][4] Quine's criticism of the internal-external distinction is found in his works On Carnap's Views on Ontology and Word and Object.[5][6]
Quine's approach to the internal-external division was to cast internal questions as subclass questions and external questions as category questions. What Quine meant by 'subclass' questions were questions like "what are so-and-so's?" where the answers are restricted to lie within a specific linguistic framework. On the other hand, 'category' questions were questions like "what are so-and-so's?" asked outside any specific language, where the answers are not so restricted.[7] The term subclass arises as follows: Quine supposes that a particular linguistic framework selects from a broad category of meanings for a term, say furniture, a particular subclass of meanings, say chairs.
Quine argued that an overarching language encompassing both types of question is always possible, and that the distinction between the two types is artificial.
It is evident that the question whether there are numbers will be a category question only with respect to languages which appropriate a separate style of variables for the exclusive purpose of referring to numbers. If our language refers to numbers through variables that also take classes other than numbers as values, then the question whether there are numbers becomes a subclass question...Even the question whether there are classes, or whether there are physical objects becomes a subclass question if our language uses a single style of variables to range over both sorts of entities. Whether the statement that there are physical objects and the statement that there are black swans should be put on the same side of the dichotomy, or on opposite sides, comes to depend upon the rather trivial consideration of whether we use one style of variables or two for physical objects and classes.
So we can switch back and forth from internal to external questions just by a shift of vocabulary. As Thomasson puts it,[7] if our language refers to 'things' we can ask of all the things there are, are any of them numbers; while if our language includes only 'numbers', we can ask only narrower questions like whether any numbers are prime numbers. In other words, Quine's position is that "Carnap's main objection to metaphysics rests on an unsupported premise, namely the assumption that there is some sort of principled plurality in language which blocks Quine's move to homogenize the existential quantifier."[8] "What is to stop us treating all ontological issues as internal questions within a single grand framework?"[8]
A view close to Quine's subclass/category description is called 'conceptual relativity'.[9] To describe conceptual relativity, Putnam points out that while the pages of a book are regarded as part of the book when they are attached, they are things-in-themselves if they are detached. My nose is only part of an object, my person. On the other hand, is my nose the same as the collection of atoms or molecules forming it? This arbitrariness of language is called conceptual relativity, a matter of conventions.[10] The point is made that if one wishes to refer only to 'pages', then books may not exist, and vice versa if one wishes to admit only books. Thus, in this view, the Carnapian multiplicity of possible linguistic frameworks proposes a variety of 'realities' and the prospect of choosing between them, a form of what is called ontological pluralism, or multiple realities.[7][11][12] The notion of 'one reality' behind our everyday perceptions is common in everyday life, and some find it unsettling that what 'exists' might be a matter of what language one chooses to use.
A related idea is quantifier variance.[13] Loosely speaking, a 'quantifier expression' is just a function that says there exists at least one such-and-such. Then 'quantifier variance' combines the notion that the same object can have different names, so the quantifier may refer to the same thing even though different names are employed by it, and the notion that quantifier expressions can be formed in a variety of ways. Hirsch says this arbitrariness over what 'exists' is a quandary only due to Putnam's formulation, and it is resolved by turning things upside down and saying that things that exist can have different names. In other words, Hirsch agrees with Quine that there is an overarching language that we can adapt to different situations. The Carnapian internal/external distinction in this view, as with the subclass/category distinction, is just a matter of language, and has nothing to do with 'reality'.[14]
More recently, some philosophers have stressed that the real issue is not one of language as such, but the difference between questions asked using a linguistic framework and those asked somehow before the adoption of a linguistic framework; that is, the difference between questions about the construction and rules of a framework, and questions about the decision whether to use a framework.[7] This distinction is called by Thomasson and Price the difference between using a term and mentioning a term.[7][8] As Price notes, Carnap holds that there is a mistake involved in "assimilating issues of the existence of numbers (say) and of the existence of physical objects...the distinctions in question are not grounded at the syntactical level."[8] Price suggests a connection with Ryle's view of different functions of language:
Ryle's functional orientation (his attention to the question as to what a linguistic category does) will instead lead us to focus on the difference between the functions of talk of beliefs and talk of tables; on the issue of what the two kinds of talk are for, rather than that of what they are about.[15]
Although not supporting an entire lack of distinction like the subclass/category view of Quine, as a pragmatic issue, the use/mention distinction still does not provide a sharp division between the issues of forming and conceptualizing a framework and deciding whether to adopt it: each informs the other.[16] An example is the well-known tension between mathematicians and physicists, the one group very concerned over questions of logic and rigor, and the other inclined to sacrifice a bit of rigor to explain observations.[17]
But the poor mathematician translates it into equations, and as the symbols do not mean anything to him he has no guide but precise mathematical rigor and care in the argument. The physicist, who knows more or less how the answer is going to come out, can sort of guess part way, and so go along rather rapidly. The mathematical rigor of great precision is not very useful in physics. But one should not criticize the mathematicians on this score...They are doing their own job.[18]
One approach to selecting a framework is based upon an examination of the conceptual relations between entities in a framework: which entities are more 'fundamental'. One framework may then 'include' another because the entities in one framework apparently can be derived from, or 'supervene' upon, those in the more fundamental one.[19] While Carnap claims such decisions are pragmatic in nature, external questions with no philosophical importance, Schaffer suggests we avoid this formulation. Instead, we should go back to Aristotle and look upon nature as hierarchical, and pursue philosophical diagnostics: that is, examination of criteria for what is fundamental and what relations exist between all entities and these fundamental ones.[20] But "how can we discover what grounds what?...questions regarding not only what grounds what, but also what the grounding consists in, and how one may discover or discern grounding facts, seem to be part of an emerging set of relational research problems in metaphysics."[21]
In philosophical logic, metaphysics, and the philosophy of language, the problem of absolute generality is the problem of referring to absolutely everything.[1] Historically, philosophers have assumed that some of their statements are absolutely general, referring to truly everything.[1] In recent years, logicians working in the logic of quantification and paradox have challenged this view, arguing that it is impossible for the logical quantifiers to range over an absolutely unrestricted domain.[2]
Philosophers who deny the possibility of absolutely unrestricted quantification (often called generality relativists) argue that attempting to speak absolutely generally generates paradoxes such as Russell's or Grelling's, that absolute generality leads to indeterminacy due to the Löwenheim–Skolem theorem, or that absolute generality fails because the notion of "object" is relative.[3]
Philosophers who believe that we can indeed quantify over absolutely everything (known as generality absolutists), such as Timothy Williamson, may respond by noting that it is difficult to see how a skeptic of absolute generality can frame this view without invoking the concept of absolute generality.[4]
A 2006 book, Absolute Generality, published by Oxford University Press, contains essays on the subject written by both the leading proponents and opponents of absolutely unrestricted quantification.[5]
In mathematics, a classification theorem answers the classification problem: "What are the objects of a given type, up to some equivalence?". It gives a non-redundant enumeration: each object is equivalent to exactly one class.
A few issues related to classification are the following.
There exist many classification theorems in mathematics, as described below.
In mathematics, the term modulo ("with respect to a modulus of", the Latin ablative of modulus, which itself means "a small measure") is often used to assert that two distinct mathematical objects can be regarded as equivalent if their difference is accounted for by an additional factor. It was initially introduced into mathematics in the context of modular arithmetic by Carl Friedrich Gauss in 1801.[1] Since then, the term has gained many meanings, some exact and some imprecise (such as equating "modulo" with "except for").[2] For the most part, the term often occurs in statements of the form:
which is often equivalent to "A is the same as B up to C", and means
Modulo is mathematical jargon that was introduced into mathematics in the book Disquisitiones Arithmeticae by Carl Friedrich Gauss in 1801.[3] Given the integers a, b and n, the expression "a ≡ b (mod n)", pronounced "a is congruent to b modulo n", means that a − b is an integer multiple of n, or equivalently, a and b both share the same remainder when divided by n. It is the Latin ablative of modulus, which itself means "a small measure."[4]
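Gauss's definition can be checked directly in code. The sketch below (the function name `congruent` is illustrative, not from the article) tests both equivalent formulations: n divides a − b, and a and b leave the same remainder.

```python
# Gauss's congruence: a ≡ b (mod n) iff n divides a - b,
# equivalently a and b leave the same remainder on division by n.

def congruent(a: int, b: int, n: int) -> bool:
    """True when a - b is an integer multiple of n."""
    return (a - b) % n == 0

assert congruent(38, 14, 12)        # 38 - 14 = 24 = 2 * 12
assert not congruent(38, 15, 12)    # 38 - 15 = 23 is not a multiple of 12
# The same-remainder formulation agrees:
assert (38 % 12) == (14 % 12)
```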
The term has gained many meanings over the years, some exact and some imprecise. The most general precise definition is simply in terms of an equivalence relation R, where a is equivalent (or congruent) to b modulo R if aRb.
Gauss originally intended to use "modulo" as follows: given the integers a, b and n, the expression a ≡ b (mod n) (pronounced "a is congruent to b modulo n") means that a − b is an integer multiple of n, or equivalently, a and b both leave the same remainder when divided by n. For example:
means that
In computing and computer science, the term can be used in several ways:
The term "modulo" can be used differently when referring to different mathematical structures. For example:
In general, modding out is a somewhat informal term that means declaring things equivalent that otherwise would be considered distinct. For example, suppose the sequence 1 4 2 8 5 7 is to be regarded as the same as the sequence 7 1 4 2 8 5, because each is a cyclically shifted version of the other:
In that case, one is "modding out by cyclic shifts".
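Modding out by cyclic shifts can be sketched in code: two sequences are equivalent when one is a rotation of the other, and each equivalence class can be named by a canonical representative. The helper names below (`rotations`, `canonical`) and the choice of the lexicographically least rotation as representative are illustrative assumptions, not from the article.

```python
# "Modding out by cyclic shifts": sequences are equivalent iff one is a
# cyclic shift of the other; the lexicographically least rotation serves
# as a canonical representative of each equivalence class.

def rotations(seq):
    """All cyclic shifts of seq, as tuples."""
    s = list(seq)
    return [tuple(s[i:] + s[:i]) for i in range(len(s))]

def canonical(seq):
    """Canonical representative: the lexicographically least rotation."""
    return min(rotations(seq))

a = [1, 4, 2, 8, 5, 7]
b = [7, 1, 4, 2, 8, 5]   # the article's shifted version of a

assert canonical(a) == canonical(b)   # same class modulo cyclic shifts
assert canonical(a) != canonical([1, 4, 2, 8, 7, 5])  # a genuinely different class
```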
In mathematics, more specifically in category theory, a universal property is a property that characterizes, up to an isomorphism, the result of some construction. Thus, universal properties can be used for defining some objects independently from the method chosen for constructing them. For example, the definitions of the integers from the natural numbers, of the rational numbers from the integers, of the real numbers from the rational numbers, and of polynomial rings from the field of their coefficients can all be done in terms of universal properties. In particular, the concept of universal property allows a simple proof that all constructions of real numbers are equivalent: it suffices to prove that they satisfy the same universal property.
Technically, a universal property is defined in terms of categories and functors by means of a universal morphism (see § Formal definition, below). Universal morphisms can also be thought of more abstractly as initial or terminal objects of a comma category (see § Connection with comma categories, below).
Universal properties occur almost everywhere in mathematics, and the use of the concept allows the use of general properties of universal properties for easily proving some properties that would otherwise need tedious verifications. For example, given a commutative ring R, the field of fractions of the quotient ring of R by a prime ideal p can be identified with the residue field of the localization of R at p; that is, R_p/pR_p ≅ Frac(R/p) (all these constructions can be defined by universal properties).
Other objects that can be defined by universal properties include: all free objects, direct products and direct sums, free groups, free lattices, the Grothendieck group, the completion of a metric space, the completion of a ring, the Dedekind–MacNeille completion, product topologies, the Stone–Čech compactification, tensor products, inverse limits and direct limits, kernels and cokernels, quotient groups, quotient vector spaces, and other quotient spaces.
Before giving a formal definition of universal properties, we offer some motivation for studying such constructions.
To understand the definition of a universal construction, it is important to look at examples. Universal constructions were not defined out of thin air, but were rather defined after mathematicians began noticing a pattern in many mathematical constructions (see Examples below). Hence, the definition may not make sense to one at first, but will become clear when one reconciles it with concrete examples.
Let F : C → D be a functor between categories C and D. In what follows, let X be an object of D, let A and A′ be objects of C, and let h : A → A′ be a morphism in C.
Then, the functor F maps A, A′ and h in C to F(A), F(A′) and F(h) in D.
A universal morphism from X to F is a pair (A, u : X → F(A)), consisting of an object A of C and a morphism u of D, which has the following property, commonly referred to as a universal property:
For any morphism of the form f : X → F(A′) in D, there exists a unique morphism h : A → A′ in C such that the following diagram commutes:
We can dualize this categorical concept. A universal morphism from F to X is a pair (A, u : F(A) → X) that satisfies the following universal property:
For any morphism of the form f : F(A′) → X in D, there exists a unique morphism h : A′ → A in C such that the following diagram commutes:
Note that in each definition, the arrows are reversed. Both definitions are necessary to describe universal constructions which appear in mathematics; but they also arise due to the inherent duality present in category theory.
In either case, we say that the pair (A, u) which behaves as above satisfies a universal property.
Universal morphisms can be described more concisely as initial and terminal objects in a comma category (i.e. one where morphisms are seen as objects in their own right).
Let F : C → D be a functor and X an object of D. Then recall that the comma category (X ↓ F) is the category where
Now suppose that the object (A, u : X → F(A)) in (X ↓ F) is initial. Then
for every object (A′, f : X → F(A′)), there exists a unique morphism h : A → A′ such that the following diagram commutes.
Note that the equality here simply means the diagrams are the same. Also note that the diagram on the right side of the equality is exactly the same as the one offered in defining a universal morphism from X to F. Therefore, we see that a universal morphism from X to F is equivalent to an initial object in the comma category (X ↓ F).
Conversely, recall that the comma category (F ↓ X) is the category where
Suppose (A, u : F(A) → X) is a terminal object in (F ↓ X). Then for every object (A′, f : F(A′) → X),
there exists a unique morphism h : A′ → A such that the following diagrams commute.
The diagram on the right side of the equality is the same diagram pictured when defining a universal morphism from F to X. Hence, a universal morphism from F to X corresponds with a terminal object in the comma category (F ↓ X).
Below are a few examples, to highlight the general idea. The reader can construct numerous other examples by consulting the articles mentioned in the introduction.
Let C be the category of vector spaces K-Vect over a field K and let D be the category of algebras K-Alg over K (assumed to be unital and associative). Let
U : K-Alg → K-Vect
be the forgetful functor which assigns to each algebra its underlying vector space.
Given any vector space V over K we can construct the tensor algebra T(V). The tensor algebra is characterized by the fact:
This statement is an initial property of the tensor algebra since it expresses the fact that the pair (T(V), i), where i : V → U(T(V)) is the inclusion map, is a universal morphism from the vector space V to the functor U.
Since this construction works for any vector space V, we conclude that T is a functor from K-Vect to K-Alg. This means that T is left adjoint to the forgetful functor U (see the section below on relation to adjoint functors).
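The same free-object pattern can be made executable in a simpler setting. Instead of the tensor algebra (the free algebra on a vector space), the sketch below uses the free monoid on a set, which in code is just lists: any map f from a set X into a monoid M extends uniquely to a monoid homomorphism from lists over X to M, by folding. All names here (`extend`, the sample monoid) are illustrative assumptions, not from the article.

```python
# Analogue of the tensor algebra's universal property, for the free monoid:
# lists over X form the free monoid on X, and any map f : X -> M into a
# monoid (M, op, unit) extends uniquely to a monoid homomorphism
# List[X] -> M, defined by folding op over the images under f.

def extend(f, unit, op):
    """The unique monoid homomorphism List[X] -> M extending f."""
    def hom(xs):
        result = unit
        for x in xs:
            result = op(result, f(x))
        return result
    return hom

# Target monoid: (int, +, 0); f sends each string to its length.
hom = extend(len, 0, lambda a, b: a + b)

assert hom([]) == 0                    # the empty list maps to the unit
assert hom(["ab", "c"]) == 3           # extends f along concatenation
# Homomorphism law: hom(xs + ys) == hom(xs) + hom(ys)
assert hom(["ab"] + ["c", "d"]) == hom(["ab"]) + hom(["c", "d"])
```

The universal property guarantees that `hom` is the only homomorphism agreeing with `f` on singletons, which is why the fold needs no further choices.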
A categorical product can be characterized by a universal construction. For concreteness, one may consider the Cartesian product in Set, the direct product in Grp, or the product topology in Top, where products exist.
Let X and Y be objects of a category C with finite products. The product of X and Y is an object X × Y together with two morphisms
such that for any other object Z of C and morphisms f : Z → X and g : Z → Y there exists a unique morphism h : Z → X × Y such that f = π1 ∘ h and g = π2 ∘ h.
To understand this characterization as a universal property, take the category D to be the product category C × C and define the diagonal functor
by Δ(X) = (X, X) and Δ(f : X → Y) = (f, f). Then (X × Y, (π1, π2)) is a universal morphism from Δ to the object (X, Y) of C × C: if (f, g) is any morphism from (Z, Z) to (X, Y), then it must equal
a morphism Δ(h : Z → X × Y) = (h, h) from Δ(Z) = (Z, Z) to Δ(X × Y) = (X × Y, X × Y) followed by (π1, π2). As a commutative diagram:
For the example of the Cartesian product in Set, the morphism (π1, π2) comprises the two projections π1(x, y) = x and π2(x, y) = y. Given any set Z and functions f, g, the unique map such that the required diagram commutes is given by h(z) = ⟨f, g⟩(z) = (f(z), g(z)).[3]
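The Set case can be sketched directly: the mediating map pairs the outputs of f and g, and composing it with the projections recovers f and g. The function names (`pi1`, `pi2`, `pair`) are illustrative.

```python
# The universal property of the Cartesian product in Set:
# for any f : Z -> X and g : Z -> Y there is a unique h = <f, g> : Z -> X x Y
# with pi1 . h == f and pi2 . h == g.

def pi1(p):
    return p[0]   # projection X x Y -> X

def pi2(p):
    return p[1]   # projection X x Y -> Y

def pair(f, g):
    """The unique mediating map h = <f, g> : Z -> X x Y."""
    return lambda z: (f(z), g(z))

f = lambda z: z * z       # f : Z -> X
g = lambda z: str(z)      # g : Z -> Y
h = pair(f, g)

for z in range(5):
    # The diagram commutes: pi1(h(z)) == f(z) and pi2(h(z)) == g(z).
    assert pi1(h(z)) == f(z)
    assert pi2(h(z)) == g(z)
```

Uniqueness holds because any map into a pair is determined by its two components, which is exactly what the commuting conditions pin down.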
Categorical products are a particular kind of limit in category theory. One can generalize the above example to arbitrary limits and colimits.
Let J and C be categories with J a small index category and let C^J be the corresponding functor category. The diagonal functor
is the functor that maps each object N in C to the constant functor Δ(N) : J → C (i.e. Δ(N)(X) = N for each X in J and Δ(N)(f) = 1_N for each f : X → Y in J) and each morphism f : N → M in C to the natural transformation Δ(f) : Δ(N) → Δ(M) in C^J whose component at each object X of J is Δ(f)(X) = f : N → M. In other words, the natural transformation is the one defined by having constant component f : N → M for every object of J.
Given a functor F : J → C (thought of as an object in C^J), the limit of F, if it exists, is nothing but a universal morphism from Δ to F. Dually, the colimit of F is a universal morphism from F to Δ.
Defining a quantity does not guarantee its existence. Given a functor F : C → D and an object X of D,
there may or may not exist a universal morphism from X to F. If, however, a universal morphism (A, u) does exist, then it is essentially unique.
Specifically, it is unique up to a unique isomorphism: if (A′, u′) is another such pair, then there exists a unique isomorphism k : A → A′ such that u′ = F(k) ∘ u.
This is easily seen by substituting (A′, u′) in the definition of a universal morphism.
It is the pair (A, u) which is essentially unique in this fashion. The object A itself is only unique up to isomorphism. Indeed, if (A, u) is a universal morphism and k : A → A′ is any isomorphism, then the pair (A′, u′), where u′ = F(k) ∘ u, is also a universal morphism.
The definition of a universal morphism can be rephrased in a variety of ways. Let F : C → D be a functor and let X be an object of D. Then the following statements are equivalent:
(F(•) ∘ u)_B(f : A → B) = F(f) ∘ u : X → F(B)
for each object B in C.
The dual statements are also equivalent:
(u∘F(∙))B(f:B→A):F(B)→X=u∘F(f):F(B)→X{\displaystyle (u\circ F(\bullet ))_{B}(f:B\to A):F(B)\to X=u\circ F(f):F(B)\to X}
for each objectB{\displaystyle B}inC.{\displaystyle {\mathcal {C}}.}
Suppose(A1,u1){\displaystyle (A_{1},u_{1})}is a universal morphism fromX1{\displaystyle X_{1}}toF{\displaystyle F}and(A2,u2){\displaystyle (A_{2},u_{2})}is a universal morphism fromX2{\displaystyle X_{2}}toF{\displaystyle F}.
By the universal property of universal morphisms, given any morphismh:X1→X2{\displaystyle h:X_{1}\to X_{2}}there exists a unique morphismg:A1→A2{\displaystyle g:A_{1}\to A_{2}}such that the following diagram commutes:
IfeveryobjectXi{\displaystyle X_{i}}ofD{\displaystyle {\mathcal {D}}}admits a universal morphism toF{\displaystyle F}, then the assignmentXi↦Ai{\displaystyle X_{i}\mapsto A_{i}}andh↦g{\displaystyle h\mapsto g}defines a functorG:D→C{\displaystyle G:{\mathcal {D}}\to {\mathcal {C}}}. The mapsui{\displaystyle u_{i}}then define anatural transformationfrom1D{\displaystyle 1_{\mathcal {D}}}(the identity functor onD{\displaystyle {\mathcal {D}}}) toF∘G{\displaystyle F\circ G}. The functors(F,G){\displaystyle (F,G)}are then a pair ofadjoint functors, withG{\displaystyle G}left-adjoint toF{\displaystyle F}andF{\displaystyle F}right-adjoint toG{\displaystyle G}.
Similar statements apply to the dual situation of terminal morphisms fromF{\displaystyle F}. If such morphisms exist for everyX{\displaystyle X}inC{\displaystyle {\mathcal {C}}}one obtains a functorG:C→D{\displaystyle G:{\mathcal {C}}\to {\mathcal {D}}}which is right-adjoint toF{\displaystyle F}(soF{\displaystyle F}is left-adjoint toG{\displaystyle G}).
Indeed, all pairs of adjoint functors arise from universal constructions in this manner. LetF{\displaystyle F}andG{\displaystyle G}be a pair of adjoint functors with unitη{\displaystyle \eta }and co-unitϵ{\displaystyle \epsilon }(see the article onadjoint functorsfor the definitions). Then we have a universal morphism for each object inC{\displaystyle {\mathcal {C}}}andD{\displaystyle {\mathcal {D}}}:
Universal constructions are more general than adjoint functor pairs: a universal construction is like an optimization problem; it gives rise to an adjoint pair if and only if this problem has a solution for every object ofC{\displaystyle {\mathcal {C}}}(equivalently, every object ofD{\displaystyle {\mathcal {D}}}).
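A classical instance of a universal morphism that gives rise to an adjunction is the free monoid on a set X, with u : X → X* sending each generator to the one-letter word: any map φ from X into a monoid (M, ·, e) extends uniquely to a monoid homomorphism X* → M. The sketch below is illustrative; the helper names `u` and `extend` are ours, not from the text.

```python
def u(x):
    """Universal morphism: embed a generator as a one-letter word in the free monoid."""
    return [x]

def extend(phi, op, e):
    """The unique monoid homomorphism from the free monoid (lists under
    concatenation) into (M, op, e) satisfying hom(u(x)) == phi(x)."""
    def hom(word):
        acc = e
        for x in word:
            acc = op(acc, phi(x))
        return acc
    return hom

# Example: extend the generator map s -> len(s) into the monoid (int, +, 0).
hom = extend(len, lambda a, b: a + b, 0)
assert hom(u("abc")) == len("abc")                       # hom ∘ u = phi
assert hom(["ab", "c"] + ["de"]) == hom(["ab", "c"]) + hom(["de"])  # homomorphism law
```

Uniqueness of `hom` is exactly the universal property: its value on every word is forced by the values on one-letter words and the homomorphism law.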
Universal properties of various topological constructions were presented byPierre Samuelin 1948. They were later used extensively byBourbaki. The closely related concept of adjoint functors was introduced independently byDaniel Kanin 1958. | https://en.wikipedia.org/wiki/Universal_property |
Inset theory, a branch ofmathematics, anurelementorur-element(from theGermanprefixur-, 'primordial') is an object that is not aset(has no elements), but that may be anelementof a set. It is also referred to as anatomorindividual. Ur-elements are also not identical with the empty set.
There are several different but essentially equivalent ways to treat urelements in afirst-order theory.
One way is to work in a first-order theory with two sorts, sets and urelements, witha∈bonly defined whenbis a set.
In this case, ifUis an urelement, it makes no sense to sayX∈U{\displaystyle X\in U}, althoughU∈X{\displaystyle U\in X}is perfectly legitimate.
Another way is to work in aone-sortedtheory with aunary relationused to distinguish sets and urelements. As non-empty sets contain members while urelements do not, the unary relation is only needed to distinguish the empty set from urelements. Note that in this case, theaxiom of extensionalitymust be formulated to apply only to objects that are not urelements.
This situation is analogous to the treatments of theories of sets andclasses. Indeed, urelements are in some sensedualtoproper classes: urelements cannot have members whereas proper classes cannot be members. Put differently, urelements areminimalobjects while proper classes are maximal objects by the membership relation (which, of course, is not an order relation, so this analogy is not to be taken literally).
TheZermelo set theoryof 1908 included urelements, and hence is a version now called ZFA or ZFCA (i.e. ZFA withaxiom of choice).[1]It was soon realized that in the context of this and closely relatedaxiomatic set theories, the urelements were not needed because they can easily be modeled in a set theory without urelements.[2]Thus, standard expositions of the canonicalaxiomatic set theoriesZFandZFCdo not mention urelements (for an exception, see Suppes[3]).Axiomatizationsof set theory that do invoke urelements includeKripke–Platek set theory with urelementsand the variant ofVon Neumann–Bernays–Gödel set theorydescribed by Mendelson.[4]Intype theory, an object of type 0 can be called an urelement; hence the name "atom".
Adding urelements to the systemNew Foundations(NF) to produce NFU has surprising consequences. In particular, Jensen proved[5]theconsistencyof NFU relative toPeano arithmetic; meanwhile, the consistency of NF relative to anything remains an open problem, pending verification of Holmes's proof of its consistency relative to ZF. Moreover, NFU remainsrelatively consistentwhen augmented with anaxiom of infinityand theaxiom of choice. Meanwhile, the negation of the axiom of choice is, curiously, an NF theorem. Holmes (1998) takes these facts as evidence that NFU is a more successful foundation for mathematics than NF. Holmes further argues that set theory is more natural with than without urelements, since we may take as urelements the objects of any theory or of the physicaluniverse.[6]In finitist set theory, urelements are mapped to the lowest-level components of the target phenomenon, such as atomic constituents of a physical object or members of an organisation.
An alternative approach to urelements is to consider them, instead of as a type of object other than sets, as a particular type of set.Quine atoms(named afterWillard Van Orman Quine) are sets that only contain themselves, that is, sets that satisfy the formulax= {x}.[7]
Quine atoms cannot exist in systems of set theory that include theaxiom of regularity, but they can exist innon-well-founded set theory. ZF set theory with the axiom of regularity removed cannot prove that any non-well-founded sets exist (unless it is inconsistent, in which case it willprove any arbitrary statement), but it is compatible with the existence of Quine atoms.Aczel's anti-foundation axiomimplies that there is a unique Quine atom. Other non-well-founded theories may admit many distinct Quine atoms; at the opposite end of the spectrum lies Boffa'saxiom of superuniversality, which implies that the distinct Quine atoms form aproper class.[8]
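Python's built-in sets are well-founded (elements must be hashable, and a set can never contain itself), which loosely mirrors the axiom of regularity. A Quine atom's membership behaviour can nevertheless be simulated with a custom class; this is only a toy illustration, not a model of any set theory.

```python
class QuineAtom:
    """A toy object behaving like a set x with x = {x}.

    Not a Python set: it merely implements membership and iteration so
    that its only 'element' is itself.
    """
    def __contains__(self, other):
        return other is self      # self-membership, and nothing else
    def __iter__(self):
        yield self                # iterating yields exactly one element: itself
    def __len__(self):
        return 1

q = QuineAtom()
assert q in q                     # q ∈ q
assert list(q) == [q] and len(q) == 1   # q behaves like the singleton {q}
```

Distinct instances are distinct atoms: `q in QuineAtom()` is false, echoing theories in which many non-identical Quine atoms coexist.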
Quine atoms also appear in Quine'sNew Foundations, which allows more than one such set to exist.[9]
Quine atoms are the only sets calledreflexive setsbyPeter Aczel,[8]although other authors, e.g.Jon Barwiseand Lawrence Moss, use the latter term to denote the larger class of sets with the propertyx∈x.[10] | https://en.wikipedia.org/wiki/Urelement |
Inmathematics, thehorizontal line testis a test used to determine whether afunctionisinjective(i.e., one-to-one).[1]
Ahorizontal lineis a straight, flat line that goes from left to right. Given a functionf:R→R{\displaystyle f\colon \mathbb {R} \to \mathbb {R} }(i.e. from thereal numbersto the real numbers), we can decide if it isinjectiveby looking at horizontal lines that intersect the function'sgraph. If any horizontal liney=c{\displaystyle y=c}intersects the graph in more than one point, the function is not injective. To see this, note that the points of intersection have the same y-value (because they lie on the liney=c{\displaystyle y=c}) but different x values, which by definition means the function cannot be injective.[1]
Passes the test (injective)
Fails the test (not injective)
Variations of the horizontal line test can be used to determine whether a function issurjectiveorbijective:
Consider a functionf:X→Y{\displaystyle f\colon X\to Y}with its correspondinggraphas a subset of theCartesian productX×Y{\displaystyle X\times Y}. Consider the horizontal lines inX×Y{\displaystyle X\times Y}:{(x,y0)∈X×Y:y0is constant}=X×{y0}{\displaystyle \{(x,y_{0})\in X\times Y:y_{0}{\text{ is constant}}\}=X\times \{y_{0}\}}. The functionfisinjectiveif and only ifeach horizontal line intersects the graph at most once. In this case the graph is said to pass the horizontal line test. If any horizontal line intersects the graph more than once, the function fails the horizontal line test and is not injective.[2] | https://en.wikipedia.org/wiki/Horizontal_line_test |
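On a finite sample the horizontal line test can only refute injectivity: if two sample points share a y-value, some horizontal line meets the graph at least twice; passing on a grid is evidence, not proof. A minimal sketch (the function name and tolerance are our choices):

```python
def injective_on_sample(f, xs, tol=1e-9):
    """Discrete horizontal line test: return False iff two sampled points
    (x1, f(x1)) and (x2, f(x2)) lie on (numerically) the same horizontal line."""
    pts = sorted((f(x), x) for x in xs)          # sort by y-value
    return all(abs(y2 - y1) > tol                # adjacent equal y-values => failure
               for (y1, _), (y2, _) in zip(pts, pts[1:]))

grid = [i / 2 for i in range(-10, 11)]
assert injective_on_sample(lambda x: x ** 3, grid)       # strictly monotone: passes
assert not injective_on_sample(lambda x: x * x, grid)    # (-a)^2 == a^2: fails
```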
Inmathematics, especially in the area ofabstract algebraknown asmodule theory, aninjective moduleis amoduleQthat shares certain desirable properties with theZ-moduleQof allrational numbers. Specifically, ifQis asubmoduleof some other module, then it is already adirect summandof that module; also, given a submodule of a moduleY, anymodule homomorphismfrom this submodule toQcan be extended to a homomorphism from all ofYtoQ. This concept isdualto that ofprojective modules. Injective modules were introduced in (Baer 1940) and are discussed in some detail in the textbook (Lam 1999, §3).
Injective modules have been heavily studied, and a variety of additional notions are defined in terms of them:Injective cogeneratorsare injective modules that faithfully represent the entire category of modules. Injective resolutions measure how far from injective a module is in terms of theinjective dimensionand represent modules in thederived category.Injective hullsare maximalessential extensions, and turn out to be minimal injective extensions. Over aNoetherian ring, every injective module is uniquely a direct sum ofindecomposablemodules, and their structure is well understood. An injective module over one ring may be not injective over another, but there are well-understood methods of changing rings which handle special cases. Rings which are themselves injective modules have a number of interesting properties and include rings such asgroup ringsoffinite groupsoverfields. Injective modules includedivisible groupsand are generalized by the notion ofinjective objectsincategory theory.
A left moduleQover theringRis injective if it satisfies one (and therefore all) of the following equivalent conditions:
Injective rightR-modules are defined in complete analogy.
Trivially, the zero module {0} is injective.
Given afieldk, everyk-vector spaceQis an injectivek-module. Reason: ifQis a subspace ofV, we can find abasisofQand extend it to a basis ofV. The new extending basis vectorsspana subspaceKofVandVis the internal direct sum ofQandK. Note that the direct complementKofQis not uniquely determined byQ, and likewise the extending maphin the above definition is typically not unique.
The rationalsQ(with addition) form an injective abelian group (i.e. an injectiveZ-module). Thefactor groupQ/Zand thecircle groupare also injectiveZ-modules. The factor groupZ/nZforn> 1 is injective as aZ/nZ-module, butnotinjective as an abelian group.
More generally, for anyintegral domainRwith field of fractionsK, theR-moduleKis an injectiveR-module, and indeed the smallest injectiveR-module containingR. For anyDedekind domain, thequotient moduleK/Ris also injective, and itsindecomposablesummands are thelocalizationsRp/R{\displaystyle R_{\mathfrak {p}}/R}for the nonzeroprime idealsp{\displaystyle {\mathfrak {p}}}. Thezero idealis also prime and corresponds to the injectiveK. In this way there is a 1-1 correspondence between prime ideals and indecomposable injective modules.
A particularly rich theory is available forcommutativenoetherian ringsdue toEben Matlis, (Lam 1999, §3I). Every injective module is uniquely a direct sum of indecomposable injective modules, and the indecomposable injective modules are uniquely identified as the injective hulls of the quotientsR/PwherePvaries over theprime spectrumof the ring. The injective hull ofR/Pas anR-module is canonically anRPmodule, and is theRP-injective hull ofR/P. In other words, it suffices to considerlocal rings. Theendomorphism ringof the injective hull ofR/Pis thecompletionR^P{\displaystyle {\hat {R}}_{P}}ofRatP.[1]
Two examples are the injective hull of theZ-moduleZ/pZ(thePrüfer group), and the injective hull of thek[x]-modulek(the ring of inverse polynomials). The latter is easily described ask[x,x−1]/xk[x]. This module has a basis consisting of "inverse monomials", that isx−nforn= 0, 1, 2, …. Multiplication by scalars is as expected, and multiplication byxbehaves normally except thatx·1 = 0. The endomorphism ring is simply the ring offormal power series.
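The x-action on the ring of inverse polynomials can be sketched directly. In the snippet below (an illustration with an ad-hoc name), an element Σ cₙ·x⁻ⁿ of k[x, x⁻¹]/x·k[x] is stored as a dict mapping n ↦ cₙ; multiplication by x shifts each inverse monomial down one degree and annihilates the constant term, matching x·1 = 0 above.

```python
def x_action(coeffs):
    """Multiplication by x on k[x, 1/x] / x*k[x], with basis x^(-n), n >= 0.

    coeffs: dict mapping n to the coefficient of x^(-n).
    x * x^(-n) = x^(-(n-1)) for n >= 1, while x * 1 = 0 (the n = 0 term dies).
    """
    return {n - 1: c for n, c in coeffs.items() if n >= 1}

assert x_action({0: 5}) == {}                 # x * (5·1) = 0
assert x_action({3: 2, 1: 7}) == {2: 2, 0: 7} # shift each inverse monomial
```

Repeated application eventually kills every element, reflecting that every element of this module is annihilated by a power of x.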
IfGis afinite groupandka field withcharacteristic0, then one shows in the theory ofgroup representationsthat any subrepresentation of a given one is already a direct summand of the given one. Translated into module language, this means that all modules over thegroup algebrakGare injective. If the characteristic ofkis not zero, the following example may help.
IfAis a unitalassociative algebraover the fieldkwith finitedimensionoverk, then Homk(−,k) is adualitybetween finitely generated leftA-modules and finitely generated rightA-modules. Therefore, the finitely generated injective leftA-modules are precisely the modules of the form Homk(P,k) wherePis a finitely generated projective rightA-module. Forsymmetric algebras, the duality is particularly well-behaved and projective modules and injective modules coincide.
For anyArtinian ring, just as forcommutative rings, there is a 1-1 correspondence between prime ideals and indecomposable injective modules. The correspondence in this case is perhaps even simpler: a prime ideal is an annihilator of a unique simple module, and the corresponding indecomposable injective module is itsinjective hull. For finite-dimensional algebras over fields, these injective hulls arefinitely-generated modules(Lam 1999, §3G, §3J).
IfR{\displaystyle R}is a Noetherian ring andp{\displaystyle {\mathfrak {p}}}is a prime ideal, setE=E(R/p){\displaystyle E=E(R/{\mathfrak {p}})}as the injective hull. The injective hull ofR/p{\displaystyle R/{\mathfrak {p}}}over the Artinian ringR/pk{\displaystyle R/{\mathfrak {p}}^{k}}can be computed as the module(0:Epk){\displaystyle (0:_{E}{\mathfrak {p}}^{k})}. It is a module of the same length asR/pk{\displaystyle R/{\mathfrak {p}}^{k}}.[2]In particular, for the standard graded ringR∙=k[x1,…,xn]∙{\displaystyle R_{\bullet }=k[x_{1},\ldots ,x_{n}]_{\bullet }}andp=(x1,…,xn){\displaystyle {\mathfrak {p}}=(x_{1},\ldots ,x_{n})},E=⊕iHom(Ri,k){\displaystyle E=\oplus _{i}{\text{Hom}}(R_{i},k)}is an injective module, giving the tools for computing the indecomposable injective modules for artinian rings overk{\displaystyle k}.
An Artin local ring(R,m,K){\displaystyle (R,{\mathfrak {m}},K)}is injective over itself if and only ifsoc(R){\displaystyle soc(R)}is a 1-dimensional vector space overK{\displaystyle K}. This implies every local Gorenstein ring which is also Artin is injective over itself, since it has a 1-dimensional socle.[3]A simple non-example is the ringR=C[x,y]/(x2,xy,y2){\displaystyle R=\mathbb {C} [x,y]/(x^{2},xy,y^{2})}which has maximal ideal(x,y){\displaystyle (x,y)}and residue fieldC{\displaystyle \mathbb {C} }. Its socle isC⋅x⊕C⋅y{\displaystyle \mathbb {C} \cdot x\oplus \mathbb {C} \cdot y}, which is 2-dimensional. The residue field has the injective hullHomC(C⋅x⊕C⋅y,C){\displaystyle {\text{Hom}}_{\mathbb {C} }(\mathbb {C} \cdot x\oplus \mathbb {C} \cdot y,\mathbb {C} )}.
For a Lie algebrag{\displaystyle {\mathfrak {g}}}over a fieldk{\displaystyle k}of characteristic 0, the category of modulesM(g){\displaystyle {\mathcal {M}}({\mathfrak {g}})}has a relatively straightforward description of its injective modules.[4]Using the universal enveloping algebra any injectiveg{\displaystyle {\mathfrak {g}}}-module can be constructed from theg{\displaystyle {\mathfrak {g}}}-module
Homk(U(g),V){\displaystyle {\text{Hom}}_{k}(U({\mathfrak {g}}),V)}
for somek{\displaystyle k}-vector spaceV{\displaystyle V}. Note this vector space has ag{\displaystyle {\mathfrak {g}}}-module structure from the injection
g↪U(g){\displaystyle {\mathfrak {g}}\hookrightarrow U({\mathfrak {g}})}
In fact, everyg{\displaystyle {\mathfrak {g}}}-module has an injection into someHomk(U(g),V){\displaystyle {\text{Hom}}_{k}(U({\mathfrak {g}}),V)}and every injectiveg{\displaystyle {\mathfrak {g}}}-module is a direct summand of someHomk(U(g),V){\displaystyle {\text{Hom}}_{k}(U({\mathfrak {g}}),V)}.
Over a commutativeNoetherian ringR{\displaystyle R}, every injective module is a direct sum of indecomposable injective modules and every indecomposable injective module is the injective hull of the residue field at a primep{\displaystyle {\mathfrak {p}}}. That is, for an injectiveI∈Mod(R){\displaystyle I\in {\text{Mod}}(R)}, there is an isomorphism
I≅⨁iE(R/pi){\displaystyle I\cong \bigoplus _{i}E(R/{\mathfrak {p}}_{i})}
whereE(R/pi){\displaystyle E(R/{\mathfrak {p}}_{i})}are the injective hulls of the modulesR/pi{\displaystyle R/{\mathfrak {p}}_{i}}.[5]In addition, ifI{\displaystyle I}is the injective hull of some moduleM{\displaystyle M}then thepi{\displaystyle {\mathfrak {p}}_{i}}are the associated primes ofM{\displaystyle M}.[2]
Anyproductof (even infinitely many) injective modules is injective; conversely, if a direct product of modules is injective, then each module is injective (Lam 1999, p. 61). Every direct sum of finitely many injective modules is injective. In general, submodules, factor modules, or infinitedirect sumsof injective modules need not be injective. Every submodule of every injective module is injective if and only if the ring isArtiniansemisimple(Golan & Head 1991, p. 152); every factor module of every injective module is injective if and only if the ring ishereditary, (Lam 1999, Th. 3.22).
The Bass–Papp theorem states that every infinite direct sum of right (left) injective modules is injective if and only if the ring is right (left)Noetherian, (Lam 1999, pp. 80–81, Th. 3.46).[6]
In Baer's original paper, he proved a useful result, usually known as Baer's Criterion, for checking whether a module is injective: a leftR-moduleQis injective if and only if any homomorphismg:I→Qdefined on aleft idealIofRcan be extended to all ofR.
Using this criterion, one can show thatQis an injectiveabelian group(i.e. an injective module overZ). More generally, an abelian group is injective if and only if it isdivisible. More generally still: a module over aprincipal ideal domainis injective if and only if it is divisible (the case of vector spaces is an example of this theorem, as every field is a principal ideal domain and every vector space is divisible). Over a general integral domain, we still have one implication: every injective module over an integral domain is divisible.
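The divisibility criterion over Z can be checked by brute force for finite abelian groups (a hedged sketch; the helper name is ours): an abelian group M is divisible when d·M = M for every nonzero integer d, and for Z/nZ this fails exactly when gcd(d, n) > 1.

```python
def mult_by_d_surjective(n, d):
    """Is x -> d*x surjective on Z/nZ?  Equivalently: does d*(Z/nZ) = Z/nZ hold?"""
    return {(d * x) % n for x in range(n)} == set(range(n))

# Z/4Z is not divisible as an abelian group: doubling misses the odd classes,
# so by the divisibility criterion it is not an injective Z-module.
assert not mult_by_d_surjective(4, 2)
# Multiplication by any d coprime to n is a bijection, hence surjective.
assert mult_by_d_surjective(4, 3)
```

This is consistent with the text: Z/nZ (n > 1) fails to be injective over Z, even though it is injective over itself as a Z/nZ-module.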
Baer's criterion has been refined in many ways (Golan & Head 1991, p. 119), including a result of (Smith 1981) and (Vámos 1983) that for a commutative Noetherian ring, it suffices to consider onlyprime idealsI. The dual of Baer's criterion, which would give a test for projectivity, is false in general. For instance, theZ-moduleQsatisfies the dual of Baer's criterion but is not projective.
Maybe the most important injective module is the abelian groupQ/Z. It is aninjective cogeneratorin thecategory of abelian groups, which means that it is injective and any other module is contained in a suitably large product of copies ofQ/Z. So in particular, every abelian group is a subgroup of an injective one. It is quite significant that this is also true over any ring: every module is a submodule of an injective one, or "the category of leftR-modules has enough injectives." To prove this, one uses the peculiar properties of the abelian groupQ/Zto construct an injective cogenerator in the category of leftR-modules.
For a leftR-moduleM, the so-called "character module"M+= HomZ(M,Q/Z) is a rightR-module that exhibits an interesting duality, not between injective modules andprojective modules, but between injective modules andflat modules(Enochs & Jenda 2000, pp. 78–80). For any ringR, a leftR-module is flat if and only if its character module is injective. IfRis left noetherian, then a leftR-module is injective if and only if its character module is flat.
Theinjective hullof a module is the smallest injective module containing the given one and was described in (Eckmann & Schopf 1953).
One can use injective hulls to define a minimal injective resolution (see below). If each term of the injective resolution is the injective hull of the cokernel of the previous map, then the injective resolution has minimal length.
Every moduleMalso has an injectiveresolution: anexact sequenceof the form

0 → M → I^0 → I^1 → I^2 → ⋯

where theI^jare injective modules. Injective resolutions can be used to definederived functorssuch as theExt functor.
Thelengthof a finite injective resolution is the first indexnsuch thatInis nonzero andIi= 0 forigreater thann. If a moduleMadmits a finite injective resolution, the minimal length among all finite injective resolutions ofMis called its injective dimension and denoted id(M). IfMdoes not admit a finite injective resolution, then by convention the injective dimension is said to be infinite. (Lam 1999, §5C) As an example, consider a moduleMsuch that id(M) = 0. In this situation, the exactness of the sequence 0 →M→I0→ 0 indicates that the arrow in the center is an isomorphism, and henceMitself is injective.[7]
Equivalently, the injective dimension ofMis the minimal integer (if there is such, otherwise ∞)nsuch that Ext^N_A(–,M) = 0 for allN>n.
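A standard example over Z: the divisible groups Q and Q/Z are injective abelian groups, so Z admits the minimal injective resolution

```latex
% Minimal injective resolution of the Z-module Z:
% Q and Q/Z are divisible, hence injective abelian groups.
0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Q}
  \longrightarrow \mathbb{Q}/\mathbb{Z} \longrightarrow 0
```

Since Z is not divisible, it is not itself injective, so no shorter resolution exists and id(Z) = 1.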
Every injective submodule of an injective module is a direct summand, so it is important to understandindecomposableinjective modules, (Lam 1999, §3F).
Every indecomposable injective module has alocalendomorphism ring. A module is called auniform moduleif every two nonzero submodules have nonzero intersection. For an injective moduleMthe following are equivalent:
Over a Noetherian ring, every injective module is the direct sum of (uniquely determined) indecomposable injective modules. Over a commutative Noetherian ring, this gives a particularly nice understanding of all injective modules, described in (Matlis 1958). The indecomposable injective modules are the injective hulls of the modulesR/pforpa prime ideal of the ringR. Moreover, the injective hullMofR/phas an increasing filtration by modulesMngiven by the annihilators of the idealspn, andMn+1/Mnis isomorphic as finite-dimensional vector space over the quotient fieldk(p) ofR/pto HomR/p(pn/pn+1,k(p)).
It is important to be able to consider modules oversubringsorquotient rings, especially for instancepolynomial rings. In general, this is difficult, but a number of results are known, (Lam 1999, p. 62).
LetSandRbe rings, andPbe a left-R, right-Sbimodulethat isflatas a left-Rmodule. For any injective rightS-moduleM, the set ofmodule homomorphismsHomS(P,M) is an injective rightR-module. The same statement holds of course after interchanging left- and right- attributes.
For instance, ifRis a subring ofSsuch thatSis a flatR-module, then every injectiveS-module is an injectiveR-module. In particular, ifRis an integral domain andSitsfield of fractions, then every vector space overSis an injectiveR-module. Similarly, every injectiveR[x]-module is an injectiveR-module.
In the opposite direction, a ring homomorphismf:S→R{\displaystyle f:S\to R}makesRinto a left-R, right-Sbimodule, by left and right multiplication. Beingfreeover itselfRis alsoflatas a leftR-module. Specializing the above statement forP = R, it says that whenMis an injective rightS-module thecoinduced modulef∗M=HomS(R,M){\displaystyle f_{*}M=\mathrm {Hom} _{S}(R,M)}is an injective rightR-module. Thus, coinduction overfproduces injectiveR-modules from injectiveS-modules.
For quotient ringsR/I, the change of rings is also very clear. AnR-module is anR/I-module precisely when it is annihilated byI. The submodule annI(M) = {minM:im= 0 for alliinI} is a left submodule of the leftR-moduleM, and is the largest submodule ofMthat is anR/I-module. IfMis an injective leftR-module, then annI(M) is an injective leftR/I-module. Applying this toR=Z,I=nZandM=Q/Z, one gets the familiar fact thatZ/nZis injective as a module over itself. While it is easy to convert injectiveR-modules into injectiveR/I-modules, this process does not convert injectiveR-resolutions into injectiveR/I-resolutions, and the homology of the resulting complex is one of the early and fundamental areas of study of relative homological algebra.
The textbook (Rotman 1979, p. 103) has an erroneous proof thatlocalizationpreserves injectives, but a counterexample was given in (Dade 1981).
Every ring with unity is afree moduleand hence is aprojectiveas a module over itself, but it is rarer for a ring to be injective as a module over itself, (Lam 1999, §3B). If a ring is injective over itself as a right module, then it is called a right self-injective ring. EveryFrobenius algebrais self-injective, but nointegral domainthat is not afieldis self-injective. Every properquotientof aDedekind domainis self-injective.
A rightNoetherian, right self-injective ring is called aquasi-Frobenius ring, and is two-sidedArtinianand two-sided injective, (Lam 1999, Th. 15.1). An important module theoretic property of quasi-Frobenius rings is that the projective modules are exactly the injective modules.
One also talks aboutinjective objectsincategoriesmore general than module categories, for instance infunctor categoriesor in categories ofsheavesof OX-modules over someringed space(X,OX). The following general definition is used: an objectQof the categoryCis injective if for anymonomorphismf:X→YinCand any morphismg:X→Qthere exists a morphismh:Y→Qwithhf=g.
The notion of injective object in the category of abelian groups was studied somewhat independently of injective modules under the termdivisible group. Here aZ-moduleMis injective if and only ifn⋅M=Mfor every nonzero integern. Here the relationships betweenflat modules,pure submodules, and injective modules is more clear, as it simply refers to certain divisibility properties of module elements by integers.
In relative homological algebra, the extension property of homomorphisms may be required only for certain submodules, rather than for all. For instance, apure injective moduleis a module in which a homomorphism from apure submodulecan be extended to the whole module. | https://en.wikipedia.org/wiki/Injective_module |
In themathematicalfield ofnumerical analysis,monotone cubic interpolationis a variant ofcubic interpolationthat preservesmonotonicityof thedata setbeing interpolated.
Monotonicity is preserved bylinear interpolationbut not guaranteed bycubic interpolation.
Monotone interpolation can be accomplished usingcubic Hermite splinewith the tangentsmi{\displaystyle m_{i}}modified to ensure the monotonicity of the resulting Hermite spline.
An algorithm is also available for monotonequinticHermite interpolation.
There are several ways of selecting interpolating tangents for each data point. This section outlines the Fritsch–Carlson method; only one pass of the algorithm is required.

Let the data points be(xk,yk){\displaystyle (x_{k},y_{k})}indexed in sorted order fork=1,…n{\displaystyle k=1,\,\dots \,n}.

1. Compute the slopes of the secant lines between successive points:

δk=yk+1−ykxk+1−xk{\displaystyle \delta _{k}={\frac {y_{k+1}-y_{k}}{x_{k+1}-x_{k}}}}

for k = 1, …, n − 1.

2. Initialize the tangent at every interior data point as the average of the adjacent secants,

mk=δk−1+δk2{\displaystyle m_{k}={\frac {\delta _{k-1}+\delta _{k}}{2}}}

for k = 2, …, n − 1, and at the endpoints use the one-sided differences

m1=δ1andmn=δn−1{\displaystyle m_{1}=\delta _{1}\quad {\text{ and }}\quad m_{n}=\delta _{n-1}\,}.

3. If δ_k = 0 (two successive y-values are equal), set m_k = m_{k+1} = 0, since the spline must be flat on that interval, and skip the remaining steps for that k. Likewise, if δ_{k−1} and δ_k have opposite signs, the data have a local extremum at x_k, and m_k is set to 0.

4. For each interval with δ_k ≠ 0, let

αk=mk/δkandβk=mk+1/δk{\displaystyle \alpha _{k}=m_{k}/\delta _{k}\quad {\text{ and }}\quad \beta _{k}=m_{k+1}/\delta _{k}}.

Monotonicity requires α_k ≥ 0 and β_k ≥ 0, i.e. the tangents must share the sign of the secant.

5. Monotonicity of the interpolant on the interval is guaranteed if either

ϕk=αk−(2αk+βk−3)23(αk+βk−2)>0{\displaystyle \phi _{k}=\alpha _{k}-{\frac {(2\alpha _{k}+\beta _{k}-3)^{2}}{3(\alpha _{k}+\beta _{k}-2)}}>0\,},

or α_k² + β_k² ≤ 9. A simple sufficient scheme uses the latter: if α_k² + β_k² > 9, set

τk=3αk2+βk2{\displaystyle \tau _{k}={\frac {3}{\sqrt {\alpha _{k}^{2}+\beta _{k}^{2}}}}\,},

and rescale the tangents:

mk=τkαkδkandmk+1=τkβkδk{\displaystyle m_{k}=\tau _{k}\,\alpha _{k}\,\delta _{k}\quad {\text{ and }}\quad m_{k+1}=\tau _{k}\,\beta _{k}\,\delta _{k}\,}.
After the preprocessing above, evaluation of the interpolated spline is equivalent to acubic Hermite spline, using the dataxk{\displaystyle x_{k}},yk{\displaystyle y_{k}}, andmk{\displaystyle m_{k}}fork=1,…n{\displaystyle k=1,\,\dots \,n}.
To evaluate atx{\displaystyle x}, find the indexk{\displaystyle k}in the sequence such thatx{\displaystyle x}lies betweenxk{\displaystyle x_{k}}andxk+1{\displaystyle x_{k+1}}, that is:xk≤x≤xk+1{\displaystyle x_{k}\leq x\leq x_{k+1}}. Calculate

h = x_{k+1} − x_k and t = (x − x_k)/h,

then the interpolated value is

f(x) = y_k·h₀₀(t) + h·m_k·h₁₀(t) + y_{k+1}·h₀₁(t) + h·m_{k+1}·h₁₁(t),

where h₀₀(t) = 2t³ − 3t² + 1, h₁₀(t) = t³ − 2t² + t, h₀₁(t) = −2t³ + 3t², and h₁₁(t) = t³ − t² are the basis functions for thecubic Hermite spline.
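The preprocessing and evaluation steps above can be combined into a short sketch; treat it as illustrative rather than a reference implementation (it uses the α² + β² ≤ 9 restriction from step 5, and the function names are ours).

```python
import math
from bisect import bisect_right

def monotone_cubic(xs, ys):
    """Fritsch-Carlson monotone cubic Hermite interpolant (illustrative sketch)."""
    n = len(xs)
    # Step 1: secant slopes between successive points.
    delta = [(ys[k + 1] - ys[k]) / (xs[k + 1] - xs[k]) for k in range(n - 1)]
    # Step 2: average-of-secants tangents, one-sided at the endpoints.
    m = [delta[0]] + [(delta[k - 1] + delta[k]) / 2 for k in range(1, n - 1)] + [delta[-1]]
    for k in range(n - 1):
        if delta[k] == 0.0:               # Step 3: flat segment => zero tangents
            m[k] = m[k + 1] = 0.0
            continue
        a, b = m[k] / delta[k], m[k + 1] / delta[k]
        if a < 0.0:                       # Step 4: tangent opposes the secant
            m[k], a = 0.0, 0.0
        if b < 0.0:
            m[k + 1], b = 0.0, 0.0
        r = math.hypot(a, b)
        if r > 3.0:                       # Step 5: restrict (alpha, beta) to radius 3
            t = 3.0 / r
            m[k] = t * a * delta[k]
            m[k + 1] = t * b * delta[k]

    def f(x):
        # Locate the interval [x_k, x_{k+1}] containing x (clamped at the ends).
        k = min(max(bisect_right(xs, x) - 1, 0), n - 2)
        h = xs[k + 1] - xs[k]
        t = (x - xs[k]) / h
        h00 = (1 + 2 * t) * (1 - t) ** 2
        h10 = t * (1 - t) ** 2
        h01 = t * t * (3 - 2 * t)
        h11 = t * t * (t - 1)
        return h00 * ys[k] + h * h10 * m[k] + h01 * ys[k + 1] + h * h11 * m[k + 1]

    return f
```

For monotone input data the returned interpolant passes through every data point and is itself monotone, unlike an unconstrained cubic spline.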
The followingJavaScriptimplementation takes a data set and produces a monotone cubic spline interpolant function: | https://en.wikipedia.org/wiki/Monotone_cubic_interpolation |
Inmathematics, apseudo-monotone operatorfrom areflexiveBanach spaceinto itscontinuous dual spaceis one that is, in some sense, almost aswell-behavedas amonotone operator. Many problems in thecalculus of variationscan be expressed using operators that are pseudo-monotone, and pseudo-monotonicity in turn implies the existence of solutions to these problems.
Let (X, || ||) be a reflexive Banach space. A mapT:X→X∗fromXinto its continuous dual spaceX∗is said to bepseudo-monotoneifTis abounded operator(not necessarily continuous) and if whenever

u_j ⇀ u in X as j → ∞

(i.e.ujconverges weaklytou) and

lim sup_{j→∞} ⟨T(u_j), u_j − u⟩ ≤ 0,

it follows that, for allv∈X,

lim inf_{j→∞} ⟨T(u_j), u_j − v⟩ ≥ ⟨T(u), u − v⟩.
Using a very similar proof to that of theBrowder–Minty theorem, one can show the following:
Let (X, || ||) be areal, reflexive Banach space and suppose thatT:X→X∗isbounded,coerciveand pseudo-monotone. Then, for eachcontinuous linear functionalg∈X∗, there exists a solutionu∈Xof the equationT(u) =g. | https://en.wikipedia.org/wiki/Pseudo-monotone_operator |
Instatistics,Spearman's rank correlation coefficientorSpearman'sρ, named afterCharles Spearman[1]and often denoted by the Greek letterρ{\displaystyle \rho }(rho) or asrs{\displaystyle r_{s}}, is anonparametricmeasure ofrank correlation(statistical dependencebetween therankingsof twovariables). It assesses how well the relationship between two variables can be described using amonotonic function.
The Spearman correlation between two variables is equal to thePearson correlationbetween the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.
Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1)rank(i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables.
Spearman's coefficient is appropriate for bothcontinuousand discreteordinal variables.[2][3]Both Spearman'sρ{\displaystyle \rho }andKendall'sτ{\displaystyle \tau }can be formulated as special cases of a moregeneral correlation coefficient.
The coefficient can be used to determine how well data fits a model,[4]like when determining the similarity of text documents.[5]
The Spearman correlation coefficient is defined as thePearson correlation coefficientbetween therank variables.[6]
For a sample of sizen,{\displaystyle \ n\ ,}then{\displaystyle \ n\ }pairs ofraw scores(Xi,Yi){\displaystyle \ \left(X_{i},Y_{i}\right)\ }are converted to ranksR[Xi],R[Yi],{\displaystyle \ \operatorname {R} [{X_{i}}],\operatorname {R} [{Y_{i}}]\ ,}andrs{\displaystyle \ r_{s}\ }is computed as

r_s = ρ_{R[X],R[Y]} = cov(R[X], R[Y]) / (σ_{R[X]} σ_{R[Y]}),

where ρ denotes the usualPearson correlation coefficient, here applied to the rank variables, cov(R[X], R[Y]) is thecovarianceof the rank variables, and σ_{R[X]} and σ_{R[Y]} are thestandard deviationsof the rank variables.
Only when alln{\displaystyle \ n\ }ranks aredistinct integers(no ties) can it be computed using the popular formula

r_s = 1 − (6 Σ_i d_i²) / (n(n² − 1)),

where d_i = R[X_i] − R[Y_i] is the difference between the two ranks of each observation, andnis the number of observations.
Consider a bivariate sample (Xi, Yi), i = 1, …, n with corresponding rank pairs (R[Xi], R[Yi]) = (Ri, Si). Then the Spearman correlation coefficient of (X, Y) is

{\displaystyle r_{s}={\frac {{\tfrac {1}{n}}\sum _{i=1}^{n}\left(R_{i}-{\overline {R}}\right)\left(S_{i}-{\overline {S}}\right)}{\sigma _{R}\ \sigma _{S}}},}

where, as usual,

{\displaystyle {\overline {R}}={\tfrac {1}{n}}\sum _{i=1}^{n}R_{i},\qquad {\overline {S}}={\tfrac {1}{n}}\sum _{i=1}^{n}S_{i},}

and

{\displaystyle \sigma _{R}^{2}={\tfrac {1}{n}}\sum _{i=1}^{n}\left(R_{i}-{\overline {R}}\right)^{2},\qquad \sigma _{S}^{2}={\tfrac {1}{n}}\sum _{i=1}^{n}\left(S_{i}-{\overline {S}}\right)^{2}.}
We shall show thatrs{\displaystyle \ r_{s}\ }can be expressed purely in terms ofdi≡Ri−Si,{\displaystyle \ d_{i}\equiv R_{i}-S_{i}\ ,}provided we assume that there be no ties within each sample.
Under this assumption, R and S are each a permutation of {1, 2, …, n}, so both can be viewed as random variables distributed like a uniformly distributed discrete random variable U on {1, 2, …, n}. Hence R̄ = S̄ = E[U] and σR² = σS² = Var[U] = E[U²] − E[U]², where

{\displaystyle \operatorname {\mathbb {E} } [U]={\tfrac {1}{n}}\sum _{i=1}^{n}i={\frac {n+1}{2}},\qquad \operatorname {\mathbb {E} } [U^{2}]={\tfrac {1}{n}}\sum _{i=1}^{n}i^{2}={\frac {(n+1)(2n+1)}{6}},}

and thus

{\displaystyle \operatorname {\mathsf {Var}} [U]={\frac {(n+1)(2n+1)}{6}}-\left({\frac {n+1}{2}}\right)^{2}={\frac {n^{2}-1}{12}}.}

(These sums can be computed using the formulas for the triangular numbers and square pyramidal numbers, or basic summation results from umbral calculus.)

Observe now that, since di = Ri − Si,

{\displaystyle R_{i}S_{i}={\tfrac {1}{2}}\left(R_{i}^{2}+S_{i}^{2}-d_{i}^{2}\right),}

so that

{\displaystyle {\tfrac {1}{n}}\sum _{i=1}^{n}\left(R_{i}-{\overline {R}}\right)\left(S_{i}-{\overline {S}}\right)={\tfrac {1}{n}}\sum _{i=1}^{n}R_{i}S_{i}-{\overline {R}}\,{\overline {S}}=\operatorname {\mathbb {E} } [U^{2}]-{\frac {1}{2n}}\sum _{i=1}^{n}d_{i}^{2}-\operatorname {\mathbb {E} } [U]^{2}=\operatorname {\mathsf {Var}} [U]-{\frac {1}{2n}}\sum _{i=1}^{n}d_{i}^{2}.}

Putting this all together thus yields

{\displaystyle r_{s}={\frac {\operatorname {\mathsf {Var}} [U]-{\frac {1}{2n}}\sum _{i}d_{i}^{2}}{\operatorname {\mathsf {Var}} [U]}}=1-{\frac {\sum _{i}d_{i}^{2}}{2n\cdot {\frac {n^{2}-1}{12}}}}=1-{\frac {6\sum _{i}d_{i}^{2}}{n(n^{2}-1)}}.}
Identical values are usually[7]each assignedfractional ranksequal to the average of their positions in the ascending order of the values, which is equivalent to averaging over all possible permutations.
If ties are present in the data set, the simplified formula above yields incorrect results: Only if in both variables all ranks are distinct, thenσR[X]σR[Y]={\displaystyle \ \sigma _{\operatorname {R} [X]}\ \sigma _{\operatorname {R} [Y]}=}var[R[X]]={\displaystyle \ \operatorname {{\mathsf {v}}ar} {\bigl [}\ \operatorname {R} [X]\ {\bigr ]}=}var[R[Y]]={\displaystyle \ \operatorname {{\mathsf {v}}ar} {\bigl [}\ \operatorname {R} [Y]\ {\bigr ]}=}112(n2−1){\displaystyle \ {\tfrac {\ 1\ }{12}}\left(n^{2}-1\right)\ }(calculated according to biased variance).
The first equation — normalizing by the standard deviation — may be used even when ranks are normalized to [0, 1] ("relative ranks") because it is insensitive both to translation and linear scaling.
The simplified method should also not be used in cases where the data set is truncated; that is, when the Spearman's correlation coefficient is desired for the topXrecords (whether by pre-change rank or post-change rank, or both), the user should use the Pearson correlation coefficient formula given above.[8]
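The fractional ("mid-rank") convention described above can be sketched directly; with ties, Spearman's coefficient is then the Pearson correlation computed on these fractional ranks (a minimal sketch, helper name is ours):

```python
def fractional_ranks(xs):
    # Tied values share the average of the positions they occupy in the
    # sorted order (the "mid-rank" / fractional-rank convention).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # positions are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

print(fractional_ranks([10, 20, 20, 30]))   # [1.0, 2.5, 2.5, 4.0]
```

The two tied values occupy positions 2 and 3, so each receives rank 2.5.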
There are several other numerical measures that quantify the extent ofstatistical dependencebetween pairs of observations. The most common of these is thePearson product-moment correlation coefficient, which is a similar correlation method to Spearman's rank, that measures the “linear” relationships between the raw numbers rather than between their ranks.
An alternative name for the Spearmanrank correlationis the “grade correlation”;[9]in this, the “rank” of an observation is replaced by the “grade”. In continuous distributions, the grade of an observation is, by convention, always one half less than the rank, and hence the grade and rank correlations are the same in this case. More generally, the “grade” of an observation is proportional to an estimate of the fraction of a population less than a given value, with the half-observation adjustment at observed values. Thus this corresponds to one possible treatment of tied ranks. While unusual, the term “grade correlation” is still in use.[10]
The sign of the Spearman correlation indicates the direction of association betweenX(the independent variable) andY(the dependent variable). IfYtends to increase whenXincreases, the Spearman correlation coefficient is positive. IfYtends to decrease whenXincreases, the Spearman correlation coefficient is negative. A Spearman correlation of zero indicates that there is no tendency forYto either increase or decrease whenXincreases. The Spearman correlation increases in magnitude asXandYbecome closer to being perfectly monotonic functions of each other. WhenXandYare perfectly monotonically related, the Spearman correlation coefficient becomes 1. A perfectly monotonic increasing relationship implies that for any two pairs of data valuesXi,YiandXj,Yj, thatXi−XjandYi−Yjalways have the same sign. A perfectly monotonic decreasing relationship implies that these differences always have opposite signs.
The Spearman correlation coefficient is often described as being "nonparametric". This can have two meanings. First, a perfect Spearman correlation results whenXandYare related by anymonotonic function. Contrast this with the Pearson correlation, which only gives a perfect value whenXandYare related by alinearfunction. The other sense in which the Spearman correlation is nonparametric is that its exact sampling distribution can be obtained without requiring knowledge (i.e., knowing the parameters) of thejoint probability distributionofXandY.
In this example, the arbitrary raw data in the table below is used to calculate the correlation between theIQof a person with the number of hours spent in front ofTVper week [fictitious values used].
Firstly, evaluatedi2{\displaystyle d_{i}^{2}}. To do so use the following steps, reflected in the table below.
Withdi2{\displaystyle d_{i}^{2}}found, add them to find∑di2=194{\displaystyle \sum d_{i}^{2}=194}. The value ofnis 10. These values can now be substituted back into the equation
to give
which evaluates toρ= −29/165 = −0.175757575...with ap-value= 0.627188 (using thet-distribution).
That the value is close to zero shows that the correlation between IQ and hours spent watching TV is very low, although the negative value suggests that the longer the time spent watching television the lower the IQ. In the case of ties in the original values, this formula should not be used; instead, the Pearson correlation coefficient should be calculated on the ranks (where ties are given ranks, as described above).
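The arithmetic of this worked example can be replayed in a few lines (a minimal sketch using the totals quoted above):

```python
# Re-deriving the worked example: n = 10 observations with a sum of
# squared rank differences of 194.
n, sum_d2 = 10, 194
rho = 1 - 6 * sum_d2 / (n * (n ** 2 - 1))
print(rho)   # -0.17575757... == -29/165
```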
Confidence intervals for Spearman'sρcan be easily obtained using the Jackknife Euclidean likelihood approach in de Carvalho and Marques (2012).[11]The confidence interval with levelα{\displaystyle \alpha }is based on a Wilks' theorem given in the latter paper, and is given by
whereχ1,α2{\displaystyle \chi _{1,\alpha }^{2}}is theα{\displaystyle \alpha }quantile of a chi-square distribution with one degree of freedom, and theZi{\displaystyle Z_{i}}are jackknife pseudo-values. This approach is implemented in the R packagespearmanCI.
One approach to test whether an observed value of ρ is significantly different from zero (r always satisfies −1 ≤ r ≤ 1) is to calculate the probability that it would be greater than or equal to the observed r, given the null hypothesis, by using a permutation test. An advantage of this approach is that it automatically takes into account the number of tied data values in the sample and the way they are treated in computing the rank correlation.
Another approach parallels the use of theFisher transformationin the case of the Pearson product-moment correlation coefficient. That is,confidence intervalsandhypothesis testsrelating to the population valueρcan be carried out using the Fisher transformation:
IfF(r) is the Fisher transformation ofr, the sample Spearman rank correlation coefficient, andnis the sample size, then
is az-scoreforr, which approximately follows a standardnormal distributionunder thenull hypothesisofstatistical independence(ρ= 0).[12][13]
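This z-score can be sketched as follows (the 1.06 factor is the standard variance adjustment for Spearman's rs; treat this as an illustrative sketch):

```python
from math import atanh, sqrt

def spearman_fisher_z(r, n):
    # z = sqrt((n - 3) / 1.06) * artanh(r); approximately standard normal
    # under the null hypothesis of independence (rho = 0).
    return sqrt((n - 3) / 1.06) * atanh(r)

print(round(spearman_fisher_z(0.5, 30), 3))   # ~2.772
```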
One can also test for significance using
which is distributed approximately asStudent'st-distributionwithn− 2degrees of freedom under thenull hypothesis.[14]A justification for this result relies on a permutation argument.[15]
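The t statistic referred to here is t = r·√((n − 2)/(1 − r²)) with n − 2 degrees of freedom; plugging in r = −29/165 and n = 10 from the worked example reproduces the p-value quoted there (a minimal sketch):

```python
from math import sqrt

def spearman_t(r, n):
    # t = r * sqrt((n - 2) / (1 - r**2)), compared against Student's t
    # with n - 2 degrees of freedom under the null hypothesis.
    return r * sqrt((n - 2) / (1 - r * r))

# With r = -29/165 and n = 10, t is about -0.505, which yields the
# two-sided p-value of about 0.627 quoted in the worked example.
print(round(spearman_t(-29 / 165, 10), 3))   # -0.505
```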
A generalization of the Spearman coefficient is useful in the situation where there are three or more conditions, a number of subjects are all observed in each of them, and it is predicted that the observations will have a particular order. For example, a number of subjects might each be given three trials at the same task, and it is predicted that performance will improve from trial to trial. A test of the significance of the trend between conditions in this situation was developed by E. B. Page[16]and is usually referred to asPage's trend testfor ordered alternatives.
Classiccorrespondence analysisis a statistical method that gives a score to every value of two nominal variables. In this way the Pearsoncorrelation coefficientbetween them is maximized.
There exists an equivalent of this method, calledgrade correspondence analysis, which maximizes Spearman'sρorKendall's τ.[17]
There are two existing approaches to approximating the Spearman's rank correlation coefficient from streaming data.[18][19]The first approach[18]involves coarsening the joint distribution of(X,Y){\displaystyle (X,Y)}. For continuousX,Y{\displaystyle X,Y}values:m1,m2{\displaystyle m_{1},m_{2}}cutpoints are selected forX{\displaystyle X}andY{\displaystyle Y}respectively, discretizing
these random variables. Default cutpoints are added at−∞{\displaystyle -\infty }and∞{\displaystyle \infty }. A count matrix of size(m1+1)×(m2+1){\displaystyle (m_{1}+1)\times (m_{2}+1)}, denotedM{\displaystyle M}, is then constructed whereM[i,j]{\displaystyle M[i,j]}stores the number of observations that
fall into the two-dimensional cell indexed by(i,j){\displaystyle (i,j)}. For streaming data, when a new observation arrives, the appropriateM[i,j]{\displaystyle M[i,j]}element is incremented. The Spearman's rank
correlation can then be computed, based on the count matrixM{\displaystyle M}, using linear algebra operations (Algorithm 2[18]). Note that for discrete random
variables, no discretization procedure is necessary. This method is applicable to stationary streaming data as well as large data sets. For non-stationary streaming data, where the Spearman's rank correlation coefficient may change over time, the same procedure can be applied, but to a moving window of observations. When using a moving window, memory requirements grow linearly with chosen window size.
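The moving-window variant mentioned above can be sketched naively — keep the last w pairs and recompute rs on demand (this is *not* the count-matrix algorithm of the cited paper, just an illustration of the linear-memory baseline it improves on; class and method names are ours):

```python
from collections import deque

class MovingWindowSpearman:
    # Naive moving-window estimator: memory grows linearly with the
    # window size w, as noted in the text; the count-matrix and
    # Hermite-series methods avoid this at the cost of approximation.
    def __init__(self, w):
        self.buf = deque(maxlen=w)

    def update(self, x, y):
        self.buf.append((x, y))

    def correlation(self):
        xs, ys = zip(*self.buf)
        n = len(xs)
        rx, ry = self._ranks(xs), self._ranks(ys)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n * n - 1))   # assumes no ties in window

    @staticmethod
    def _ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order, 1):
            r[i] = rank
        return r
```

Feeding a perfectly monotone stream keeps the windowed correlation at 1.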
The second approach to approximating the Spearman's rank correlation coefficient from streaming data involves the use of Hermite series based estimators.[19]These estimators, based onHermite polynomials,
allow sequential estimation of the probability density function and cumulative distribution function in univariate and bivariate cases. Bivariate Hermite series density
estimators and univariate Hermite series based cumulative distribution function estimators are plugged into a large sample version of the
Spearman's rank correlation coefficient estimator, to give a sequential Spearman's correlation estimator. This estimator is phrased in
terms of linear algebra operations for computational efficiency (equation (8) and algorithm 1 and 2[19]). These algorithms are only applicable to continuous random variable data, but have
certain advantages over the count matrix approach in this setting. The first advantage is improved accuracy when applied to large numbers of observations. The second advantage is that the Spearman's rank correlation coefficient can be
computed on non-stationary streams without relying on a moving window. Instead, the Hermite series based estimator uses an exponential weighting scheme to track time-varying Spearman's rank correlation from streaming data,
which has constant memory requirements with respect to "effective" moving window size. A software implementation of these Hermite series based algorithms exists[20]and is discussed in Software implementations. | https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient |
In real analysis, a branch of mathematics, Bernstein's theorem states that every real-valued function on the half-line [0, ∞) that is totally monotone is a mixture of exponential functions. In one important special case the mixture is a weighted average, or expected value.
Total monotonicity (sometimes alsocomplete monotonicity) of a functionfmeans thatfiscontinuouson[0, ∞),infinitely differentiableon(0, ∞), and satisfies(−1)ndndtnf(t)≥0{\displaystyle (-1)^{n}{\frac {d^{n}}{dt^{n}}}f(t)\geq 0}for all nonnegativeintegersnand for allt> 0. Another convention puts the oppositeinequalityin the above definition.
The "weighted average" statement can be characterized thus: there is a non-negative finiteBorel measureon[0, ∞)withcumulative distribution functiongsuch thatf(t)=∫0∞e−txdg(x),{\displaystyle f(t)=\int _{0}^{\infty }e^{-tx}\,dg(x),}theintegralbeing aRiemann–Stieltjes integral.
In more abstract language, the theorem characterisesLaplace transformsof positiveBorel measureson[0, ∞). In this form it is known as theBernstein–Widder theorem, orHausdorff–Bernstein–Widder theorem.Felix Hausdorffhad earlier characterisedcompletely monotone sequences. These are the sequences occurring in theHausdorff moment problem.
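As a standard illustration (our example, not taken from the text above): f(t) = 1/(1 + t) is totally monotone on [0, ∞), and Bernstein's theorem exhibits it as a Laplace transform of the measure e^{−x} dx:

```latex
% Total monotonicity: the derivatives alternate in sign as required,
%   (-1)^n \frac{d^n}{dt^n}\,\frac{1}{1+t} = \frac{n!}{(1+t)^{n+1}} \ge 0,
%   \qquad t > 0,
% and the representing measure is dg(x) = e^{-x}\,dx:
f(t) \;=\; \frac{1}{1+t} \;=\; \int_0^\infty e^{-tx}\, e^{-x}\, dx .
```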
Nonnegative functions whosederivativeis completely monotone are calledBernstein functions. Every Bernstein function has theLévy–Khintchine representation:f(t)=a+bt+∫0∞(1−e−tx)μ(dx),{\displaystyle f(t)=a+bt+\int _{0}^{\infty }\left(1-e^{-tx}\right)\mu (dx),}wherea,b≥0{\displaystyle a,b\geq 0}andμ{\displaystyle \mu }is a measure on the positive real half-line such that∫0∞(1∧x)μ(dx)<∞.{\displaystyle \int _{0}^{\infty }\left(1\wedge x\right)\mu (dx)<\infty .} | https://en.wikipedia.org/wiki/Total_monotonicity |
In mathematics, cyclical monotonicity is a generalization of the notion of monotonicity to the case of vector-valued functions.[1][2]
Let⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }denote the inner product on aninner product spaceX{\displaystyle X}and letU{\displaystyle U}be anonemptysubset ofX{\displaystyle X}. Acorrespondencef:U⇉X{\displaystyle f:U\rightrightarrows X}is calledcyclically monotoneif for every set of pointsx1,…,xm+1∈U{\displaystyle x_{1},\dots ,x_{m+1}\in U}withxm+1=x1{\displaystyle x_{m+1}=x_{1}}it holds that∑k=1m⟨xk+1,f(xk+1)−f(xk)⟩≥0.{\displaystyle \sum _{k=1}^{m}\langle x_{k+1},f(x_{k+1})-f(x_{k})\rangle \geq 0.}[3]
For the case of scalar functions of one variable the definition above is equivalent to usualmonotonicity.Gradientsofconvex functionsare cyclically monotone. In fact, theconverseis true. SupposeU{\displaystyle U}isconvexandf:U⇉Rn{\displaystyle f:U\rightrightarrows \mathbb {R} ^{n}}is a correspondence with nonempty values. Then iff{\displaystyle f}is cyclically monotone, there exists anupper semicontinuousconvex functionF:U→R{\displaystyle F:U\to \mathbb {R} }such thatf(x)⊂∂F(x){\displaystyle f(x)\subset \partial F(x)}for everyx∈U{\displaystyle x\in U}, where∂F(x){\displaystyle \partial F(x)}denotes thesubgradientofF{\displaystyle F}atx{\displaystyle x}.[4]
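The fact that gradients of convex functions are cyclically monotone can be checked numerically for the identity map, which is the gradient of the convex function F(x) = ‖x‖²/2 (a minimal sketch; helper names are ours):

```python
import random

def cycle_sum(f, points):
    # sum_{k=1}^{m} <x_{k+1}, f(x_{k+1}) - f(x_k)>, with x_{m+1} = x_1.
    total = 0.0
    m = len(points)
    for k in range(m):
        nxt, cur = points[(k + 1) % m], points[k]
        fn, fc = f(nxt), f(cur)
        total += sum(a * (b - c) for a, b, c in zip(nxt, fn, fc))
    return total

# The identity map is the gradient of F(x) = ||x||^2 / 2, so every
# cycle sum should be non-negative.
identity = lambda v: v
random.seed(0)
pts = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(6)]
print(cycle_sum(identity, pts) >= 0)   # True
```

For the identity map the cycle sum equals ½ Σₖ ‖x_{k+1} − x_k‖², which is visibly non-negative.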
Thismathematical analysis–related article is astub. You can help Wikipedia byexpanding it. | https://en.wikipedia.org/wiki/Cyclical_monotonicity |
In linear algebra, the operator monotone function is an important type of real-valued function, fully classified by Charles Löwner in 1934.[1] It is closely allied to the operator concave and operator convex functions, and is encountered in operator theory and in matrix theory, and led to the Löwner–Heinz inequality.[2][3]
A functionf:I→R{\displaystyle f:I\to \mathbb {R} }defined on an intervalI⊆R{\displaystyle I\subseteq \mathbb {R} }is said to beoperator monotoneif wheneverA{\displaystyle A}andB{\displaystyle B}areHermitian matrices(of any size/dimensions) whoseeigenvaluesall belong to the domain off{\displaystyle f}and whose differenceA−B{\displaystyle A-B}is apositive semi-definite matrix, then necessarilyf(A)−f(B)≥0{\displaystyle f(A)-f(B)\geq 0}wheref(A){\displaystyle f(A)}andf(B){\displaystyle f(B)}are the values of thematrix functioninduced byf{\displaystyle f}(which are matrices of the same size asA{\displaystyle A}andB{\displaystyle B}).
Notation
This definition is frequently expressed with the notation that is now defined.
WriteA≥0{\displaystyle A\geq 0}to indicate that a matrixA{\displaystyle A}ispositive semi-definiteand writeA≥B{\displaystyle A\geq B}to indicate that the differenceA−B{\displaystyle A-B}of two matricesA{\displaystyle A}andB{\displaystyle B}satisfiesA−B≥0{\displaystyle A-B\geq 0}(that is,A−B{\displaystyle A-B}is positive semi-definite).
With f:I→R{\displaystyle f:I\to \mathbb {R} } and A{\displaystyle A} as in the theorem's statement, the value of the matrix function f(A){\displaystyle f(A)} is the matrix (of the same size as A{\displaystyle A}) defined in terms of A{\displaystyle A}'s spectral decomposition A=∑jλjPj{\displaystyle A=\sum _{j}\lambda _{j}P_{j}} by f(A)=∑jf(λj)Pj,{\displaystyle f(A)=\sum _{j}f(\lambda _{j})P_{j}~,} where the λj{\displaystyle \lambda _{j}} are the eigenvalues of A{\displaystyle A} with corresponding projectors Pj.{\displaystyle P_{j}.}
The definition of an operator monotone function may now be restated as:
A function f:I→R{\displaystyle f:I\to \mathbb {R} } defined on an interval I⊆R{\displaystyle I\subseteq \mathbb {R} } is said to be operator monotone if (and only if) for all positive integers n,{\displaystyle n,} and all n×n{\displaystyle n\times n} Hermitian matrices A{\displaystyle A} and B{\displaystyle B} with eigenvalues in I,{\displaystyle I,} if A≥B{\displaystyle A\geq B} then f(A)≥f(B).{\displaystyle f(A)\geq f(B).}
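The definitions above can be probed numerically. The square root is operator monotone (this is the Löwner–Heinz inequality), while t ↦ t² is not; the 2×2 pair below is a classic counterexample (a sketch using NumPy; assumes NumPy is available):

```python
import numpy as np

def matrix_fn(f, A):
    # Apply f to a Hermitian matrix via its spectral decomposition.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(f(w)) @ V.T

def is_psd(M, tol=1e-10):
    return np.linalg.eigvalsh(M).min() >= -tol

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 1.0], [1.0, 1.0]])
# A - B = diag(1, 0) >= 0, and both A and B are positive semi-definite.

sqrt_ok = is_psd(matrix_fn(np.sqrt, A) - matrix_fn(np.sqrt, B))     # True
square_ok = is_psd(matrix_fn(np.square, A) - matrix_fn(np.square, B))  # False
print(sqrt_ok, square_ok)
```

Here A² − B² = [[3, 1], [1, 0]] has negative determinant, so it is not positive semi-definite even though A ≥ B ≥ 0.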
Thislinear algebra-related article is astub. You can help Wikipedia byexpanding it. | https://en.wikipedia.org/wiki/Operator_monotone_function |
In mathematics, a submodular set function (also known as a submodular function) is a set function that, informally, describes the relationship between a set of inputs and an output, where adding more of one input has a decreasing additional benefit (diminishing returns). This natural diminishing-returns property makes them suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences) and electrical networks. Recently, submodular functions have also found utility in several real-world problems in machine learning and artificial intelligence, including automatic summarization, multi-document summarization, feature selection, active learning, sensor placement, image collection summarization and many other domains.[1][2][3][4]
If Ω{\displaystyle \Omega } is a finite set, a submodular function is a set function f:2Ω→R{\displaystyle f:2^{\Omega }\rightarrow \mathbb {R} }, where 2Ω{\displaystyle 2^{\Omega }} denotes the power set of Ω{\displaystyle \Omega }, which satisfies one of the following equivalent conditions:[5]

1. (Diminishing returns) For every X ⊆ Y ⊆ Ω and every x ∈ Ω ∖ Y: f(X ∪ {x}) − f(X) ≥ f(Y ∪ {x}) − f(Y).
2. For every S, T ⊆ Ω: f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T).
3. For every X ⊆ Ω and every pair of distinct x₁, x₂ ∈ Ω ∖ X: f(X ∪ {x₁}) + f(X ∪ {x₂}) ≥ f(X ∪ {x₁, x₂}) + f(X).
A nonnegative submodular function is also asubadditivefunction, but a subadditive function need not be submodular.
IfΩ{\displaystyle \Omega }is not assumed finite, then the above conditions are not equivalent. In particular a functionf{\displaystyle f}defined byf(S)=1{\displaystyle f(S)=1}ifS{\displaystyle S}is finite andf(S)=0{\displaystyle f(S)=0}ifS{\displaystyle S}is infinite
satisfies the first condition above, but the second condition fails whenS{\displaystyle S}andT{\displaystyle T}are infinite sets with finite intersection.
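For a finite ground set, submodularity can be checked by brute force against the union/intersection inequality; coverage functions are a standard submodular example, while |S|² is supermodular and fails the check (a minimal sketch, exponential in the ground set, so illustration only; names are ours):

```python
from itertools import chain, combinations

def subsets(universe):
    s = list(universe)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_submodular(f, universe):
    # Brute-force check of f(S) + f(T) >= f(S | T) + f(S & T) for all S, T.
    subs = subsets(universe)
    return all(f(S) + f(T) + 1e-12 >= f(S | T) + f(S & T)
               for S in subs for T in subs)

# Coverage functions f(S) = |union of the sets indexed by S| are submodular.
SETS = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}

def cover(S):
    return len(set().union(*(SETS[i] for i in S)))

print(is_submodular(cover, SETS))                      # True
print(is_submodular(lambda S: len(S) ** 2, {1, 2}))    # False
```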
A set functionf{\displaystyle f}ismonotoneif for everyT⊆S{\displaystyle T\subseteq S}we have thatf(T)≤f(S){\displaystyle f(T)\leq f(S)}. Examples of monotone submodular functions include:
A submodular function that is not monotone is callednon-monotone. In particular, a function is called non-monotone if it has the property that adding more elements to a set can decrease the value of the function. More formally, the functionf{\displaystyle f}is non-monotone if there are setsS,T{\displaystyle S,T}in its domain s.t.S⊂T{\displaystyle S\subset T}andf(S)>f(T){\displaystyle f(S)>f(T)}.
A non-monotone submodular functionf{\displaystyle f}is calledsymmetricif for everyS⊆Ω{\displaystyle S\subseteq \Omega }we have thatf(S)=f(Ω−S){\displaystyle f(S)=f(\Omega -S)}.
Examples of symmetric non-monotone submodular functions include:
A non-monotone submodular function which is not symmetric is called asymmetric.
Often, given a submodular set function that describes the values of various sets, we need to compute the values offractionalsets. For example: we know that the value of receiving house A and house B is V, and we want to know the value of receiving 40% of house A and 60% of house B. To this end, we need acontinuous extensionof the submodular set function.
Formally, a set functionf:2Ω→R{\displaystyle f:2^{\Omega }\rightarrow \mathbb {R} }with|Ω|=n{\displaystyle |\Omega |=n}can be represented as a function on{0,1}n{\displaystyle \{0,1\}^{n}}, by associating eachS⊆Ω{\displaystyle S\subseteq \Omega }with a binary vectorxS∈{0,1}n{\displaystyle x^{S}\in \{0,1\}^{n}}such thatxiS=1{\displaystyle x_{i}^{S}=1}wheni∈S{\displaystyle i\in S}, andxiS=0{\displaystyle x_{i}^{S}=0}otherwise. Acontinuousextensionoff{\displaystyle f}is a continuous functionF:[0,1]n→R{\displaystyle F:[0,1]^{n}\rightarrow \mathbb {R} }, that matches the value off{\displaystyle f}onx∈{0,1}n{\displaystyle x\in \{0,1\}^{n}}, i.e.F(xS)=f(S){\displaystyle F(x^{S})=f(S)}.
Several kinds of continuous extensions of submodular functions are commonly used, which are described below.
This extension is named after mathematicianLászló Lovász.[9]Consider any vectorx={x1,x2,…,xn}{\displaystyle \mathbf {x} =\{x_{1},x_{2},\dots ,x_{n}\}}such that each0≤xi≤1{\displaystyle 0\leq x_{i}\leq 1}. Then the Lovász extension is defined as
fL(x)=E(f({i|xi≥λ})){\displaystyle f^{L}(\mathbf {x} )=\mathbb {E} (f(\{i|x_{i}\geq \lambda \}))}
where the expectation is overλ{\displaystyle \lambda }chosen from theuniform distributionon the interval[0,1]{\displaystyle [0,1]}. The Lovász extension is a convex function if and only iff{\displaystyle f}is a submodular function.
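The expectation over λ is piecewise constant between the sorted coordinate values, which gives the usual "chain" formula for the Lovász extension (a minimal sketch assuming f(∅) = 0 and 0 ≤ xᵢ ≤ 1; names are ours):

```python
def lovasz_extension(f, x):
    # Chain formula: with x_(1) >= ... >= x_(n) and x_(n+1) := 0,
    #   f_L(x) = sum_j (x_(j) - x_(j+1)) * f(S_j),
    # where S_j is the set of the j largest coordinates.  Equivalent to
    # E[f({i : x_i >= lambda})] for lambda ~ Uniform[0, 1].
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])
    total, prefix = 0.0, frozenset()
    for j, i in enumerate(order):
        prefix = prefix | {i}
        nxt = x[order[j + 1]] if j + 1 < n else 0.0
        total += (x[i] - nxt) * f(prefix)
    return total
```

At a 0/1 vertex it reproduces f itself, e.g. `lovasz_extension(len, [1, 0, 1])` is 2, and for the cardinality function it reduces to Σᵢ xᵢ.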
Consider any vectorx={x1,x2,…,xn}{\displaystyle \mathbf {x} =\{x_{1},x_{2},\ldots ,x_{n}\}}such that each0≤xi≤1{\displaystyle 0\leq x_{i}\leq 1}. Then the multilinear extension is defined as[10][11]F(x)=∑S⊆Ωf(S)∏i∈Sxi∏i∉S(1−xi){\displaystyle F(\mathbf {x} )=\sum _{S\subseteq \Omega }f(S)\prod _{i\in S}x_{i}\prod _{i\notin S}(1-x_{i})}.
Intuitively, xi represents the probability that item i is chosen for the set. For every set S, the two products represent the probability that the chosen set is exactly S. Therefore, the sum is the expected value of f for the set formed by choosing each item i at random with probability xi, independently of the other items.
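That expectation can be computed directly by summing over all subsets (a minimal sketch, exponential in n, so illustration only):

```python
from itertools import chain, combinations

def multilinear_extension(f, x):
    # F(x) = sum_S f(S) * prod_{i in S} x_i * prod_{i not in S} (1 - x_i),
    # i.e. the expected value of f(S) when each item i enters S
    # independently with probability x_i.
    n = len(x)
    total = 0.0
    for S in chain.from_iterable(combinations(range(n), r)
                                 for r in range(n + 1)):
        p = 1.0
        for i in range(n):
            p *= x[i] if i in S else 1.0 - x[i]
        total += f(frozenset(S)) * p
    return total
```

For the cardinality function f = |·| linearity of expectation gives F(x) = Σᵢ xᵢ, a quick sanity check.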
Consider any vectorx={x1,x2,…,xn}{\displaystyle \mathbf {x} =\{x_{1},x_{2},\dots ,x_{n}\}}such that each0≤xi≤1{\displaystyle 0\leq x_{i}\leq 1}. Then the convex closure is defined asf−(x)=min(∑SαSf(S):∑SαS1S=x,∑SαS=1,αS≥0){\displaystyle f^{-}(\mathbf {x} )=\min \left(\sum _{S}\alpha _{S}f(S):\sum _{S}\alpha _{S}1_{S}=\mathbf {x} ,\sum _{S}\alpha _{S}=1,\alpha _{S}\geq 0\right)}.
The convex closure of any set function is convex over[0,1]n{\displaystyle [0,1]^{n}}.
Consider any vectorx={x1,x2,…,xn}{\displaystyle \mathbf {x} =\{x_{1},x_{2},\dots ,x_{n}\}}such that each0≤xi≤1{\displaystyle 0\leq x_{i}\leq 1}. Then the concave closure is defined asf+(x)=max(∑SαSf(S):∑SαS1S=x,∑SαS=1,αS≥0){\displaystyle f^{+}(\mathbf {x} )=\max \left(\sum _{S}\alpha _{S}f(S):\sum _{S}\alpha _{S}1_{S}=\mathbf {x} ,\sum _{S}\alpha _{S}=1,\alpha _{S}\geq 0\right)}.
For the extensions discussed above, it can be shown thatf+(x)≥F(x)≥f−(x)=fL(x){\displaystyle f^{+}(\mathbf {x} )\geq F(\mathbf {x} )\geq f^{-}(\mathbf {x} )=f^{L}(\mathbf {x} )}whenf{\displaystyle f}is submodular.[12]
Submodular functions have properties which are very similar toconvexandconcave functions. For this reason, anoptimization problemwhich concerns optimizing a convex or concave function can also be described as the problem of maximizing or minimizing a submodular function subject to some constraints.
The hardness of minimizing a submodular set function depends on constraints imposed on the problem.
Unlike the case of minimization, maximizing a generic submodular function isNP-hardeven in the unconstrained setting. Thus, most of the works in this field are concerned with polynomial-time approximation algorithms, includinggreedy algorithmsorlocal search algorithms.
Many of these algorithms can be unified within a semi-differential based framework of algorithms.[18]
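The simplest of these is the greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, which achieves a (1 − 1/e)-approximation (Nemhauser, Wolsey and Fisher, 1978). A minimal sketch, with an illustrative coverage instance of our own choosing:

```python
def greedy_max(f, universe, k):
    # Repeatedly add the element with the largest marginal gain until
    # |S| = k or no element improves f.
    S = frozenset()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in universe if e not in S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S |= {best}
    return S

# Example: greedy maximum coverage with k = 2.
SETS = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"d"}}

def coverage(S):
    return len(set().union(*(SETS[i] for i in S)))

print(greedy_max(coverage, [1, 2, 3, 4], 2))   # frozenset({1, 3})
```

After picking set 1 (gain 2), set 3 has the largest remaining marginal gain, so the greedy solution covers all four elements.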
Apart from submodular minimization and maximization, there are several other natural optimization problems related to submodular functions.
Submodular functions naturally occur in several real world applications, ineconomics,game theory,machine learningandcomputer vision.[4][29]Owing to the diminishing returns property, submodular functions naturally model costs of items, since there is often a larger discount, with an increase in the items one buys. Submodular functions model notions of complexity, similarity and cooperation when they appear in minimization problems. In maximization problems, on the other hand, they model notions of diversity, information and coverage. | https://en.wikipedia.org/wiki/Monotone_set_function |
In mathematics, the notions of an absolutely monotonic function and a completely monotonic function are two very closely related concepts. Both imply very strong monotonicity properties, and both types of functions have derivatives of all orders. An absolutely monotonic function, together with its derivatives of all orders, must be non-negative throughout its domain of definition, which implies that the function and all of its derivatives are monotonically increasing there. For a completely monotonic function, the function and its derivatives must be alternately non-negative and non-positive on its domain, so the function and its derivatives are alternately monotonically decreasing and monotonically increasing.
Such functions were first studied by S. Bernshtein in 1914 and the terminology is also due to him.[1][2][3]There are several other related notions like the concepts of almost completely monotonic function, logarithmically completely monotonic function, strongly logarithmically completely monotonic function, strongly completely monotonic function and almost strongly completely monotonic function.[4][5]Another related concept is that of acompletely/absolutely monotonic sequence. This notion was introduced by Hausdorff in 1921.
The notions of completely and absolutely monotone function/sequence play an important role in several areas of mathematics. For example, in classical analysis they occur in the proof of the positivity of integrals involving Bessel functions or the positivity of Cesàro means of certain
Jacobi series.[6]Such functions occur in other areas of mathematics such as probability theory, numerical analysis, and elasticity.[7]
A real valued functionf(x){\displaystyle f(x)}defined over an intervalI{\displaystyle I}in the real line is called an absolutely monotonic function if it has derivativesf(n)(x){\displaystyle f^{(n)}(x)}of all ordersn=0,1,2,…{\displaystyle n=0,1,2,\ldots }andf(n)(x)≥0{\displaystyle f^{(n)}(x)\geq 0}for allx{\displaystyle x}inI{\displaystyle I}.[1]The functionf(x){\displaystyle f(x)}is called a completely monotonic function if(−1)nf(n)(x)≥0{\displaystyle (-1)^{n}f^{(n)}(x)\geq 0}for allx{\displaystyle x}inI{\displaystyle I}.[1]
The two notions are mutually related. The functionf(x){\displaystyle f(x)}is completely monotonic if and only iff(−x){\displaystyle f(-x)}is absolutely monotonic on−I{\displaystyle -I}where−I{\displaystyle -I}the interval obtained by reflectingI{\displaystyle I}with respect to the origin. (Thus, ifI{\displaystyle I}is the interval(a,b){\displaystyle (a,b)}then−I{\displaystyle -I}is the interval(−b,−a){\displaystyle (-b,-a)}.)
In applications, the interval on the real line that is usually considered is the closed-open right half of the real line, that is, the interval[0,∞){\displaystyle [0,\infty )}.
The following functions are absolutely monotonic in the specified regions.[8]: 142–143 For instance, every power series with non-negative coefficients, such as ex=∑n≥0xn/n!{\displaystyle e^{x}=\sum _{n\geq 0}x^{n}/n!}, is absolutely monotonic on [0,∞){\displaystyle [0,\infty )}, as is 1/(1−x){\displaystyle 1/(1-x)} on [0,1){\displaystyle [0,1)}.
A sequence{μn}n=0∞{\displaystyle \{\mu _{n}\}_{n=0}^{\infty }}is called an absolutely monotonic sequence if its elements are non-negative and its successive differences are all non-negative, that is, if
whereΔkμn=∑m=0k(−1)m(km)μn+k−m{\displaystyle \Delta ^{k}\mu _{n}=\sum _{m=0}^{k}(-1)^{m}{k \choose m}\mu _{n+k-m}}.
A sequence{μn}n=0∞{\displaystyle \{\mu _{n}\}_{n=0}^{\infty }}is called a completely monotonic sequence if its elements are non-negative and its successive differences are alternately non-positive and non-negative,[8]: 101that is, if
The sequences{1n+1}0∞{\displaystyle \left\{{\frac {1}{n+1}}\right\}_{0}^{\infty }}and{cn}0∞{\displaystyle \{c^{n}\}_{0}^{\infty }}for0≤c≤1{\displaystyle 0\leq c\leq 1}are completely monotonic sequences.
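The defining sign pattern of a completely monotonic sequence can be verified numerically for both examples above (a minimal sketch over a finite range of k and n; names are ours):

```python
from math import comb

def diff(mu, k, n):
    # Delta^k mu_n = sum_{m=0}^{k} (-1)^m C(k, m) mu_{n+k-m}
    return sum((-1) ** m * comb(k, m) * mu(n + k - m) for m in range(k + 1))

def is_completely_monotone(mu, kmax, nmax):
    # Successive differences alternate in sign: (-1)^k Delta^k mu_n >= 0.
    return all((-1) ** k * diff(mu, k, n) >= -1e-12
               for k in range(kmax + 1) for n in range(nmax + 1))

print(is_completely_monotone(lambda n: 1 / (n + 1), 6, 10))   # True
print(is_completely_monotone(lambda n: 0.5 ** n, 6, 10))      # True
```

For μₙ = cⁿ with 0 ≤ c ≤ 1 the check is transparent: Δᵏμₙ = cⁿ(c − 1)ᵏ, so (−1)ᵏΔᵏμₙ = cⁿ(1 − c)ᵏ ≥ 0.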
Both the extensions and applications of the theory of absolutely monotonic functions derive from theorems.
The following is a selection from the large body of literature on absolutely/completely monotonic functions/sequences. | https://en.wikipedia.org/wiki/Absolutely_and_completely_monotonic_functions_and_sequences |
In the mathematical theory of functions of one or more complex variables, and also in complex algebraic geometry, a biholomorphism or biholomorphic function is a bijective holomorphic function whose inverse is also holomorphic.
Formally, abiholomorphic functionis a functionϕ{\displaystyle \phi }defined on anopen subsetUof then{\displaystyle n}-dimensional complex spaceCnwith values inCnwhich isholomorphicandone-to-one, such that itsimageis an open setV{\displaystyle V}inCnand the inverseϕ−1:V→U{\displaystyle \phi ^{-1}:V\to U}is alsoholomorphic. More generally,UandVcan becomplex manifolds. As in the case of functions of a single complex variable, a sufficient condition for a holomorphic map to be biholomorphic onto its image is that the map is injective, in which case the inverse is also holomorphic (e.g., see Gunning 1990, Theorem I.11 or Corollary E.10 pg. 57).
If there exists a biholomorphismϕ:U→V{\displaystyle \phi \colon U\to V}, we say thatUandVarebiholomorphically equivalentor that they arebiholomorphic.
Ifn=1,{\displaystyle n=1,}everysimply connectedopen set other than the whole complex plane is biholomorphic to theunit disc(this is theRiemann mapping theorem). The situation is very different in higher dimensions. For example, openunit ballsand open unitpolydiscsare not biholomorphically equivalent forn>1.{\displaystyle n>1.}In fact, there does not exist even aproperholomorphic function from one to the other.
In the case of mapsf:U→Cdefined on an open subsetUof the complex planeC, some authors (e.g., Freitag 2009, Definition IV.4.1) define aconformal mapto be an injective map with nonzero derivative i.e.,f’(z)≠ 0 for everyzinU. According to this definition, a mapf:U→Cis conformal if and only iff:U→f(U) is biholomorphic. Notice that per definition of biholomorphisms, nothing is assumed about their derivatives, so, this equivalence contains the claim that a homeomorphism that is complex differentiable must actually have nonzero derivative everywhere. Other authors (e.g., Conway 1978) define a conformal map as one with nonzero derivative, but without requiring that the map be injective. According to this weaker definition, a conformal map need not be biholomorphic, even though it is locally biholomorphic, for example, by the inverse function theorem. For example, iff:U→Uis defined byf(z) =z2withU=C–{0}, thenfis conformal onU, since its derivativef’(z) = 2z≠ 0, but it is not biholomorphic, since it is 2-1.
This article incorporates material frombiholomorphically equivalentonPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License. | https://en.wikipedia.org/wiki/Biholomorphic_mapping |
In complex analysis, de Branges's theorem, or the Bieberbach conjecture, is a theorem that gives a necessary condition on a holomorphic function in order for it to map the open unit disk of the complex plane injectively to the complex plane. It was posed by Ludwig Bieberbach (1916) and finally proven by Louis de Branges (1985).
The statement concerns the Taylor coefficients an{\displaystyle a_{n}} of a univalent function, i.e. a one-to-one holomorphic function that maps the unit disk into the complex plane, normalized as is always possible so that a0=0{\displaystyle a_{0}=0} and a1=1{\displaystyle a_{1}=1}. That is, we consider a function defined on the open unit disk which is holomorphic and injective (univalent) with Taylor series of the form

{\displaystyle f(z)=z+\sum _{n\geq 2}a_{n}z^{n}.}

Such functions are called schlicht. The theorem then states that

{\displaystyle \left|a_{n}\right|\leq n\quad {\text{for all }}n\geq 2.}
The Koebe function (see below) is a function for which a_n = n for all n, and it is schlicht, so we cannot find a stricter bound on the absolute value of the nth coefficient.
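The sharpness claim is easy to verify symbolically. The following sketch (SymPy is this illustration's choice, not part of the article) expands the Koebe function and reads off its coefficients:

```python
from sympy import symbols, series

z = symbols("z")
# Koebe function k(z) = z/(1 - z)^2; its Maclaurin coefficients are a_n = n,
# so the bound |a_n| <= n of de Branges's theorem cannot be tightened.
koebe = z / (1 - z) ** 2
# coeffs[n] is the coefficient of z**n in the expansion up to order 7
coeffs = series(koebe, z, 0, 8).removeO().as_poly(z).all_coeffs()[::-1]
```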
The normalizations a_0 = 0 and a_1 = 1 mean that f(0) = 0 and f′(0) = 1. This can always be obtained by an affine transformation: starting with an arbitrary injective holomorphic function g defined on the open unit disk and setting

f(z) = (g(z) − g(0)) / g′(0).
Such functions g are of interest because they appear in the Riemann mapping theorem.
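The affine normalization can be checked directly. A minimal sketch, assuming g = exp(z) purely as an illustrative injective holomorphic function on the disk:

```python
from sympy import symbols, exp, diff

z = symbols("z")
g = exp(z)  # illustrative choice: exp is injective on the unit disk
# Affine normalization f(z) = (g(z) - g(0)) / g'(0) from the text
f = (g - g.subs(z, 0)) / diff(g, z).subs(z, 0)
f0 = f.subs(z, 0)           # should be 0
f1 = diff(f, z).subs(z, 0)  # should be 1
```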
A schlicht function is defined as an analytic function f that is one-to-one and satisfies f(0) = 0 and f′(0) = 1. A family of schlicht functions are the rotated Koebe functions

f_α(z) = z / (1 − αz)²

with α a complex number of absolute value 1. If f is a schlicht function and |a_n| = n for some n ≥ 2, then f is a rotated Koebe function.
The condition of de Branges's theorem is not sufficient to show the function is schlicht, as the function f(z) = z + z² shows: it is holomorphic on the unit disc and satisfies |a_n| ≤ n for all n, but it is not injective since f(−1/2 + z) = f(−1/2 − z).
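The displayed formula for the counterexample did not survive extraction; a standard example with exactly this symmetry is f(z) = z + z², which can be checked symbolically (SymPy here is an assumption of this sketch):

```python
from sympy import symbols, Rational, expand

z = symbols("z")
# Hypothetical stand-in for the article's displayed example: f(z) = z + z**2
# has |a_n| <= n yet f(-1/2 + z) = f(-1/2 - z), so it is not injective.
f = z + z**2
lhs = expand(f.subs(z, Rational(-1, 2) + z))
rhs = expand(f.subs(z, Rational(-1, 2) - z))
```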
A survey of the history is given by Koepf (2007).
Bieberbach (1916) proved |a_2| ≤ 2, and stated the conjecture that |a_n| ≤ n. Löwner (1917) and Nevanlinna (1921) independently proved the conjecture for starlike functions.
Then Charles Loewner (Löwner (1923)) proved |a_3| ≤ 3, using the Löwner equation. His work was used by most later attempts, and is also applied in the theory of Schramm–Loewner evolution.
Littlewood (1925, theorem 20) proved that |a_n| ≤ e·n for all n, showing that the Bieberbach conjecture is true up to a factor of e = 2.718… Several authors later reduced the constant in the inequality below e.
If f(z) = z + ⋯ is a schlicht function, then φ(z) = z(f(z²)/z²)^{1/2} is an odd schlicht function. Littlewood and Paley (1932) showed that its Taylor coefficients satisfy |b_k| ≤ 14 for all k. They conjectured that 14 can be replaced by 1, as a natural generalization of the Bieberbach conjecture. The Littlewood–Paley conjecture easily implies the Bieberbach conjecture using the Cauchy inequality, but it was soon disproved by Fekete & Szegő (1933), who showed there is an odd schlicht function with b_5 = 1/2 + exp(−2/3) = 1.013…, and that this is the maximum possible value of b_5. Isaak Milin later showed that 14 can be replaced by 1.14, and Hayman showed that the numbers b_k have a limit less than 1 if f is not a Koebe function (for which the b_{2k+1} are all 1). So the limit is always less than or equal to 1, meaning that Littlewood and Paley's conjecture is true for all but a finite number of coefficients. A weaker form of Littlewood and Paley's conjecture was found by Robertson (1936).
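For the Koebe function the odd transform can be worked out in closed form and its coefficients checked; a sketch (the algebraic simplification below is this edit's own step, using SymPy):

```python
from sympy import symbols, series

z = symbols("z")
# For k(z) = z/(1-z)^2 the odd transform phi(z) = z*(k(z^2)/z^2)^(1/2)
# simplifies algebraically: k(z^2)/z^2 = 1/(1-z^2)^2, with principal square
# root 1/(1-z^2), so phi(z) = z/(1-z^2) = z + z^3 + z^5 + ...  Every odd
# coefficient b_{2k+1} equals 1 -- the extremal case mentioned in the text.
phi = z / (1 - z**2)
coeffs = series(phi, z, 0, 10).removeO().as_poly(z).all_coeffs()[::-1]
```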
The Robertson conjecture states that if

φ(z) = b_1 z + b_3 z³ + b_5 z⁵ + ⋯

is an odd schlicht function in the unit disk with b_1 = 1, then for all positive integers n,

|b_1|² + |b_3|² + ⋯ + |b_{2n−1}|² ≤ n.
Robertson observed that his conjecture is still strong enough to imply the Bieberbach conjecture, and proved it for n = 3. This conjecture introduced the key idea of bounding various quadratic functions of the coefficients rather than the coefficients themselves, which is equivalent to bounding norms of elements in certain Hilbert spaces of schlicht functions.
There were several proofs of the Bieberbach conjecture for certain higher values of n; in particular Garabedian & Schiffer (1955) proved |a_4| ≤ 4, Ozawa (1969) and Pederson (1968) proved |a_6| ≤ 6, and Pederson & Schiffer (1972) proved |a_5| ≤ 5.
Hayman (1955) proved that the limit of a_n/n exists, and has absolute value less than 1 unless f is a Koebe function. In particular this showed that for any f there can be at most a finite number of exceptions to the Bieberbach conjecture.
The Milin conjecture states that for each schlicht function on the unit disk, and for all positive integers n,

Σ_{k=1}^{n} (n − k + 1)(k|γ_k|² − 1/k) ≤ 0,

where the logarithmic coefficients γ_k of f are given by

log(f(z)/z) = 2 Σ_{k=1}^{∞} γ_k z^k.
Milin (1977) showed, using the Lebedev–Milin inequality, that the Milin conjecture (later proved by de Branges) implies the Robertson conjecture and therefore the Bieberbach conjecture.
Finally de Branges (1987) proved |a_n| ≤ n for all n.
The proof uses a type of Hilbert space of entire functions. The study of these spaces grew into a sub-field of complex analysis and the spaces have come to be called de Branges spaces. De Branges proved the stronger Milin conjecture (Milin 1977) on logarithmic coefficients. This was already known to imply the Robertson conjecture (Robertson 1936) about odd univalent functions, which in turn was known to imply the Bieberbach conjecture about schlicht functions (Bieberbach 1916). His proof uses the Loewner equation, the Askey–Gasper inequality about Jacobi polynomials, and the Lebedev–Milin inequality on exponentiated power series.
De Branges reduced the conjecture to some inequalities for Jacobi polynomials, and verified the first few by hand. Walter Gautschi verified more of these inequalities by computer for de Branges (proving the Bieberbach conjecture for the first 30 or so coefficients) and then asked Richard Askey whether he knew of any similar inequalities. Askey pointed out that Askey & Gasper (1976) had proved the necessary inequalities eight years before, which allowed de Branges to complete his proof. The first version was very long and had some minor mistakes which caused some skepticism about it, but these were corrected with the help of members of the Leningrad seminar on Geometric Function Theory (Leningrad Department of Steklov Mathematical Institute) when de Branges visited in 1984.
De Branges proved the following result, which for ν = 0 implies the Milin conjecture (and therefore the Bieberbach conjecture).
Suppose that ν > −3/2 and σ_n are real numbers for positive integers n with limit 0 and such that
is non-negative, non-increasing, and has limit 0. Then for all Riemann mapping functions F(z) = z + ⋯ univalent in the unit disk with
the maximum value of
is achieved by the Koebe function z/(1 − z)².
A simplified version of the proof was published in 1985 by Carl FitzGerald and Christian Pommerenke (FitzGerald & Pommerenke (1985)), and an even shorter description by Jacob Korevaar (Korevaar (1986)). A very short proof avoiding use of the inequalities of Askey and Gasper was later found by Lenard Weinstein (Weinstein (1991)).

Source: https://en.wikipedia.org/wiki/De_Branges%27s_theorem
In complex analysis, a branch of mathematics, the Koebe 1/4 theorem states the following:
Koebe quarter theorem. The image of an injective analytic function f : D → C from the unit disk D onto a subset of the complex plane contains the disk whose center is f(0) and whose radius is |f′(0)|/4.
The theorem is named after Paul Koebe, who conjectured the result in 1907. The theorem was proven by Ludwig Bieberbach in 1916. The example of the Koebe function shows that the constant 1/4 in the theorem cannot be improved (increased).
A related result is the Schwarz lemma, and a notion related to both is conformal radius.
Suppose that

g(z) = z + b_0 + b_1 z^{−1} + b_2 z^{−2} + ⋯

is univalent in |z| > 1. Then

Σ_{n=1}^{∞} n|b_n|² ≤ 1.

In fact, if r > 1, the complement of the image of the disk |z| > r is a bounded domain X(r). Its area is given by

π(r² − Σ_{n=1}^{∞} n|b_n|² r^{−2n}).

Since the area is positive, the result follows by letting r decrease to 1. The above proof shows equality holds if and only if the complement of the image of g has zero area, i.e. Lebesgue measure zero.
This result was proved in 1914 by the Swedish mathematician Thomas Hakon Grönwall.
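The area formula can be sanity-checked in the simplest case g(z) = z + b₁/z with real b₁, where the image of |z| = r is an ellipse whose area we can compare against the formula. A pure-Python sketch (the numbers r and b₁ are illustrative choices, not from the article):

```python
import math

# For g(z) = z + b1/z, univalent in |z| > 1 when |b1| <= 1, the complement of
# the image of |z| > r has area pi*(r**2 - b1**2 / r**2).  With real b1 the
# image of |z| = r is the ellipse with semi-axes r + b1/r and r - b1/r, so we
# can check the formula against the shoelace area of a fine polygonal sample.
r, b1, n = 1.5, 0.4, 10000
xs = [(r + b1 / r) * math.cos(2 * math.pi * k / n) for k in range(n + 1)]
ys = [(r - b1 / r) * math.sin(2 * math.pi * k / n) for k in range(n + 1)]
area = 0.5 * abs(sum(xs[k] * ys[k + 1] - xs[k + 1] * ys[k] for k in range(n)))
predicted = math.pi * (r ** 2 - b1 ** 2 / r ** 2)
```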
The Koebe function is defined by

f(z) = z / (1 − z)².

Application of the theorem to this function shows that the constant 1/4 in the theorem cannot be improved, as the image domain f(D) does not contain the point z = −1/4 and so cannot contain any disk centred at 0 with radius larger than 1/4.
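Numerically, the point of the image circle f({|z| = r}) closest to the origin approaches distance 1/4 as r → 1, which is exactly the sharpness claim. A small pure-Python sketch (sampling resolution and r are illustrative choices):

```python
import cmath
import math

def koebe(z):
    # Koebe function k(z) = z/(1 - z)^2
    return z / (1 - z) ** 2

# On |z| = r the image point closest to the origin is k(-r) = -r/(1+r)**2,
# which tends to -1/4 as r -> 1: no disk about 0 of radius > 1/4 fits.
r, n = 0.999, 4000
min_mod = min(abs(koebe(r * cmath.exp(2j * math.pi * k / n))) for k in range(n))
```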
The rotated Koebe function is

f_α(z) = z / (1 − αz)²

with α a complex number of absolute value 1. The Koebe function and its rotations are schlicht: that is, univalent (analytic and one-to-one) and satisfying f(0) = 0 and f′(0) = 1.
Let

f(z) = z + a_2 z² + a_3 z³ + ⋯

be univalent in |z| < 1. Then

|a_2| ≤ 2.

This follows by applying Gronwall's area theorem to the odd univalent function

f(z^{−2})^{−1/2} = z − (a_2/2) z^{−1} + ⋯.

Equality holds if and only if f is a rotated Koebe function.
This result was proved by Ludwig Bieberbach in 1916 and provided the basis for his celebrated conjecture that |a_n| ≤ n, proved in 1985 by Louis de Branges.
Applying an affine map, it can be assumed that

f(0) = 0 and f′(0) = 1,

so that

f(z) = z + a_2 z² + ⋯.

In particular, the coefficient inequality gives that |a_2| ≤ 2.
If w is not in f(D), then

h(z) = f(z) / (1 − f(z)/w)

is univalent in |z| < 1.
Applying the coefficient inequality to h gives

|a_2 + 1/w| ≤ 2,

so that

|1/w| ≤ |a_2| + 2 ≤ 4, i.e. |w| ≥ 1/4.
The Koebe distortion theorem gives a series of bounds for a univalent function and its derivative. It is a direct consequence of Bieberbach's inequality for the second coefficient and the Koebe quarter theorem.[1]
Let f(z) be a univalent function on |z| < 1, normalized so that f(0) = 0 and f′(0) = 1, and let r = |z|. Then

r/(1 + r)² ≤ |f(z)| ≤ r/(1 − r)²,
(1 − r)/(1 + r)³ ≤ |f′(z)| ≤ (1 + r)/(1 − r)³,

with equality if and only if f is a Koebe function.

Source: https://en.wikipedia.org/wiki/Koebe_quarter_theorem
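Since the Koebe function is the extremal case, its values should always sit inside the distortion bounds. A quick numerical sketch (random sampling is this illustration's choice):

```python
import cmath
import math
import random

def koebe(z):
    # Koebe function k(z) = z/(1 - z)^2, the extremal case of the theorem
    return z / (1 - z) ** 2

random.seed(0)
ok = True
for _ in range(1000):
    r = random.uniform(0.01, 0.95)
    z = r * cmath.exp(1j * random.uniform(0.0, 2 * math.pi))
    w = abs(koebe(z))
    # Distortion bounds r/(1+r)^2 <= |f(z)| <= r/(1-r)^2 at r = |z|
    ok = ok and (r / (1 + r) ** 2 - 1e-12 <= w <= r / (1 - r) ** 2 + 1e-12)
```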
In complex analysis, the Riemann mapping theorem states that if U is a non-empty simply connected open subset of the complex plane C which is not all of C, then there exists a biholomorphic mapping f (i.e. a bijective holomorphic mapping whose inverse is also holomorphic) from U onto the open unit disk

D = {z ∈ C : |z| < 1}.

This mapping is known as a Riemann mapping.[1]
Intuitively, the condition that U be simply connected means that U does not contain any “holes”. The fact that f is biholomorphic implies that it is a conformal map and therefore angle-preserving. Such a map may be interpreted as preserving the shape of any sufficiently small figure, while possibly rotating and scaling (but not reflecting) it.
Henri Poincaré proved that the map f is unique up to rotation and recentering: if z_0 is an element of U and φ is an arbitrary angle, then there exists precisely one f as above such that f(z_0) = 0 and such that the argument of the derivative of f at the point z_0 is equal to φ. This is an easy consequence of the Schwarz lemma.
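The uniqueness statement rests on the fact that any two Riemann maps of the same domain differ by a disk automorphism, a Möbius map sending a chosen point to 0 followed by a rotation. A small sketch of such an automorphism (the sample points are illustrative):

```python
import cmath

def disk_automorphism(z, z0, phi):
    # Moebius automorphism of the unit disk sending z0 to 0, composed with a
    # rotation by phi; every automorphism of the disk has this form.
    return cmath.exp(1j * phi) * (z - z0) / (1 - z0.conjugate() * z)

z0 = 0.3 + 0.4j
center_image = disk_automorphism(z0, z0, 1.0)            # z0 -> 0
boundary_image = disk_automorphism(cmath.exp(0.7j), z0, 1.0)  # circle -> circle
```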
As a corollary of the theorem, any two simply connected open subsets of theRiemann spherewhich both lack at least two points of the sphere can be conformally mapped into each other.
The theorem was stated (under the assumption that the boundary of U is piecewise smooth) by Bernhard Riemann in 1851 in his PhD thesis. Lars Ahlfors wrote once, concerning the original formulation of the theorem, that it was “ultimately formulated in terms which would defy any attempt of proof, even with modern methods”.[2] Riemann's flawed proof depended on the Dirichlet principle (which was named by Riemann himself), which was considered sound at the time. However, Karl Weierstrass found that this principle was not universally valid. Later, David Hilbert was able to prove that, to a large extent, the Dirichlet principle is valid under the hypothesis that Riemann was working with. However, in order to be valid, the Dirichlet principle needs certain hypotheses concerning the boundary of U (namely, that it is a Jordan curve) which are not valid for simply connected domains in general.
The first rigorous proof of the theorem was given by William Fogg Osgood in 1900. He proved the existence of Green's function on arbitrary simply connected domains other than C itself; this established the Riemann mapping theorem.[3]
Constantin Carathéodory gave another proof of the theorem in 1912, which was the first to rely purely on the methods of function theory rather than potential theory.[4] His proof used Montel's concept of normal families, which became the standard method of proof in textbooks.[5] Carathéodory continued in 1913 by resolving the additional question of whether the Riemann mapping between the domains can be extended to a homeomorphism of the boundaries (see Carathéodory's theorem).[6]
Carathéodory's proof used Riemann surfaces and it was simplified by Paul Koebe two years later in a way that did not require them. Another proof, due to Lipót Fejér and to Frigyes Riesz, was published in 1922 and it was rather shorter than the previous ones. In this proof, like in Riemann's proof, the desired mapping was obtained as the solution of an extremal problem. The Fejér–Riesz proof was further simplified by Alexander Ostrowski and by Carathéodory.[7]
The following points detail the uniqueness and power of the Riemann mapping theorem:
Theorem. For an open domain G ⊂ C the following conditions are equivalent:[10]
(1) ⇒ (2) because any continuous closed curve, with base point a ∈ G, can be continuously deformed to the constant curve a. So the line integral of f dz over the curve is 0.
(2) ⇒ (3) because the integral over any piecewise smooth path γ from a to z can be used to define a primitive.
(3) ⇒ (4) by integrating f^{−1} df/dz along γ from a to z to give a branch of the logarithm.
(4) ⇒ (5) by taking the square root as g(z) = exp(f(z)/2), where f is a holomorphic choice of logarithm.
(5) ⇒ (6) because if γ is a piecewise smooth closed curve and f_n are successive square roots of z − w for w outside G, then the winding number of γ about w is 2^n times the winding number of f_n ∘ γ about 0. Hence the winding number of γ about w must be divisible by 2^n for all n, so it must equal 0.
(6) ⇒ (7): for otherwise the extended plane C ∪ {∞} ∖ G can be written as the disjoint union of two open and closed sets A and B with ∞ ∈ B and A bounded. Let δ > 0 be the shortest Euclidean distance between A and B and build a square grid on C with length δ/4, with a point a of A at the centre of a square. Let C be the compact set of the union of all squares with distance ≤ δ/4 from A. Then C ∩ B = ∅ and ∂C does not meet A or B: it consists of finitely many horizontal and vertical segments in G forming a finite number of closed rectangular paths γ_j ∈ G. Taking C_i to be all the squares covering A, the integral (1/2π)∫_{∂C} d arg(z − a) equals the sum of the winding numbers of the ∂C_i about a, thus giving 1. On the other hand, the sum of the winding numbers of the γ_j about a equals 1. Hence the winding number of at least one of the γ_j about a is non-zero.
(7) ⇒ (1): This is a purely topological argument. Let γ be a piecewise smooth closed curve based at z_0 ∈ G. By approximation, γ is in the same homotopy class as a rectangular path on the square grid of length δ > 0 based at z_0; such a rectangular path is determined by a succession of N consecutive directed vertical and horizontal sides. By induction on N, such a path can be deformed to a constant path at a corner of the grid. If the path intersects itself at a point z_1, then it breaks up into two rectangular paths of length < N, and thus can be deformed to the constant path at z_1 by the induction hypothesis and elementary properties of the fundamental group. The reasoning follows a "northeast argument":[11][12] in the non-self-intersecting path there will be a corner z_0 with largest real part (easterly) and then amongst those one with largest imaginary part (northerly). Reversing direction if need be, the path goes from z_0 − δ to z_0 and then to w_0 = z_0 − inδ for some n ≥ 1, and then goes leftwards to w_0 − δ. Let R be the open rectangle with these vertices. The winding number of the path is 0 for points to the right of the vertical segment from z_0 to w_0 and −1 for points to the left, hence inside R. Since the winding number is 0 off G, R lies in G.
If z is a point of the path, it must lie in G; if z is on ∂R but not on the path, by continuity the winding number of the path about z is −1, so z must also lie in G. Hence R ∪ ∂R ⊂ G. But in this case the path can be deformed by replacing the three sides of the rectangle by the fourth, resulting in two fewer sides (with self-intersections permitted).
Definitions. A family F of holomorphic functions on an open domain is said to be normal if any sequence of functions in F has a subsequence that converges to a holomorphic function uniformly on compacta.
A family F is compact if, whenever a sequence f_n lies in F and converges uniformly to f on compacta, then f also lies in F. A family F is said to be locally bounded if its functions are uniformly bounded on each compact disk. Differentiating the Cauchy integral formula, it follows that the derivatives of a locally bounded family are also locally bounded.[15][16]
Remark. As a consequence of the Riemann mapping theorem, every simply connected domain in the plane is homeomorphic to the unit disk. If points are omitted, this follows from the theorem. For the whole plane, the homeomorphism φ(z) = z/(1 + |z|) gives a homeomorphism of C onto D.
Koebe's uniformization theorem for normal families also generalizes to yield uniformizers f for multiply-connected domains to finite parallel slit domains, where the slits have angle θ to the x-axis. Thus if G is a domain in C ∪ {∞} containing ∞ and bounded by finitely many Jordan contours, there is a unique univalent function f on G with

f(z) = z + a_1 z^{−1} + a_2 z^{−2} + ⋯

near ∞, maximizing Re(e^{−2iθ} a_1) and having image f(G) a parallel slit domain with angle θ to the x-axis.[23][24][25]
The first proof that parallel slit domains were canonical domains in the multiply connected case was given by David Hilbert in 1909. Jenkins (1958), in his book on univalent functions and conformal mappings, gave a treatment based on the work of Herbert Grötzsch and René de Possel from the early 1930s; it was the precursor of quasiconformal mappings and quadratic differentials, later developed as the technique of extremal metric due to Oswald Teichmüller.[26] Menahem Schiffer gave a treatment based on very general variational principles, summarised in addresses he gave to the International Congress of Mathematicians in 1950 and 1958. In a theorem on "boundary variation" (to distinguish it from "interior variation"), he derived a differential equation and inequality that relied on a measure-theoretic characterisation of straight-line segments due to Ughtred Shuttleworth Haslam-Jones from 1936. Haslam-Jones' proof was regarded as difficult and was only given a satisfactory proof in the mid-1970s by Schober and Campbell–Lamoureux.[27][28][29]
Schiff (1993) gave a proof of uniformization for parallel slit domains which was similar to the Riemann mapping theorem. To simplify notation, horizontal slits will be taken. Firstly, by Bieberbach's inequality, any univalent function

f(z) = z + c z² + ⋯

with z in the open unit disk must satisfy |c| ≤ 2. As a consequence, if

f(z) = z + a_0 + a_1 z^{−1} + ⋯

is univalent in |z| > R, then |f(z) − a_0| ≤ 2|z|. To see this, take S > R and set

for z in the unit disk, choosing b so the denominator is nowhere-vanishing, and apply the Schwarz lemma. Next, the function f_R(z) = z + R²/z is characterized by an "extremal condition" as the unique univalent function in |z| > R of the form z + a_1 z^{−1} + ⋯ that maximises Re(a_1): this is an immediate consequence of Grönwall's area theorem, applied to the family of univalent functions f(zR)/R in |z| > 1.[30][31]
To prove now that the multiply connected domain G ⊂ C ∪ {∞} can be uniformized by a horizontal parallel slit conformal mapping

f(z) = z + a_1 z^{−1} + ⋯,

take R large enough that ∂G lies in the open disk |z| < R. For S > R, univalency and the estimate |f(z)| ≤ 2|z| imply that, if z lies in G with |z| ≤ S, then |f(z)| ≤ 2S. Since the family of univalent f are locally bounded in G ∖ {∞}, by Montel's theorem they form a normal family. Furthermore, if f_n is in the family and tends to f uniformly on compacta, then f is also in the family and each coefficient of the Laurent expansion at ∞ of the f_n tends to the corresponding coefficient of f. This applies in particular to the coefficient a_1: so by compactness there is a univalent f which maximizes Re(a_1). To check that

f(z) = z + a_1 z^{−1} + ⋯

is the required parallel slit transformation, suppose reductio ad absurdum that f(G) = G_1 has a compact and connected component K of its boundary which is not a horizontal slit. Then the complement G_2 of K in C ∪ {∞} is simply connected with G_2 ⊃ G_1. By the Riemann mapping theorem there is a conformal mapping

h(w) = w + b_1 w^{−1} + ⋯

such that h(G_2) is C with a horizontal slit removed. So we have that

h(f(z)) = z + (a_1 + b_1) z^{−1} + ⋯,

and thus Re(a_1 + b_1) ≤ Re(a_1) by the extremality of f. Therefore, Re(b_1) ≤ 0. On the other hand, by the Riemann mapping theorem there is a conformal mapping

k(w) = w + c_0 + c_1 w^{−1} + ⋯

mapping from |w| > S onto G_2. Then

h(k(w)) = w + c_0 + (b_1 + c_1) w^{−1} + ⋯.

By the strict maximality for the slit mapping in the previous paragraph, we can see that Re(c_1) < Re(b_1 + c_1), so that Re(b_1) > 0. The two inequalities for Re(b_1) are contradictory.[32][33][34]
The proof of the uniqueness of the conformal parallel slit transformation is given in Goluzin (1969) and Grunsky (1978). Applying the inverse of the Joukowsky transform h to the horizontal slit domain, it can be assumed that G is a domain bounded by the unit circle C_0 and contains analytic arcs C_i and isolated points (the images under the inverse of the Joukowsky transform of the other parallel horizontal slits). Thus, taking a fixed a ∈ G, there is a univalent mapping
with its image a horizontal slit domain. Suppose that F_1(w) is another uniformizer with
The images under F_0 or F_1 of each C_i have a fixed y-coordinate, so are horizontal segments. On the other hand, F_2(w) = F_0(w) − F_1(w) is holomorphic in G. If it is constant, then it must be identically zero since F_2(a) = 0. Suppose F_2 is non-constant; then by assumption F_2(C_i) are all horizontal lines. If t is not in one of these lines, Cauchy's argument principle shows that the number of solutions of F_2(w) = t in G is zero (any t will eventually be encircled by contours in G close to the C_i's). This contradicts the fact that the non-constant holomorphic function F_2 is an open mapping.[35]
Given U and a point z_0 ∈ U, we want to construct a function f which maps U to the unit disk and z_0 to 0. For this sketch, we will assume that U is bounded and its boundary is smooth, much as Riemann did. Write

f(z) = (z − z_0) e^{g(z)},

where g = u + iv is some (to be determined) holomorphic function with real part u and imaginary part v. It is then clear that z_0 is the only zero of f. We require |f(z)| = 1 for z ∈ ∂U, so we need

u(z) = −log|z − z_0|

on the boundary. Since u is the real part of a holomorphic function, we know that u is necessarily a harmonic function; i.e., it satisfies Laplace's equation.
The question then becomes: does a real-valued harmonic function u exist that is defined on all of U and has the given boundary condition? The positive answer is provided by the Dirichlet principle. Once the existence of u has been established, the Cauchy–Riemann equations for the holomorphic function g allow us to find v (this argument depends on the assumption that U be simply connected). Once u and v have been constructed, one has to check that the resulting function f does indeed have all the required properties.[36]
The Riemann mapping theorem can be generalized to the context of Riemann surfaces: if U is a non-empty simply-connected open subset of a Riemann surface, then U is biholomorphic to one of the following: the Riemann sphere, the complex plane C, or the unit disk D. This is known as the uniformization theorem.
In the case of a simply connected bounded domain with smooth boundary, the Riemann mapping function and all its derivatives extend by continuity to the closure of the domain. This can be proved using regularity properties of solutions of the Dirichlet boundary value problem, which follow either from the theory of Sobolev spaces for planar domains or from classical potential theory. Other methods for proving the smooth Riemann mapping theorem include the theory of kernel functions[37] or the Beltrami equation.
Computational conformal mapping is prominently featured in problems of applied analysis and mathematical physics, as well as in engineering disciplines, such as image processing.
In the early 1980s an elementary algorithm for computing conformal maps was discovered. Given points z_0, …, z_n in the plane, the algorithm computes an explicit conformal map of the unit disk onto a region bounded by a Jordan curve γ with z_0, …, z_n ∈ γ. This algorithm converges for Jordan regions[38] in the sense of uniformly close boundaries. There are corresponding uniform estimates on the closed region and the closed disc for the mapping functions and their inverses. Improved estimates are obtained if the data points lie on a C¹ curve or a K-quasicircle. The algorithm was discovered as an approximate method for conformal welding; however, it can also be viewed as a discretization of the Loewner differential equation.[39]
The following is known about numerically approximating the conformal mapping between two planar domains.[40]
Positive results:
Negative results:

Source: https://en.wikipedia.org/wiki/Riemann_mapping_theorem
In mathematics, Nevanlinna's criterion in complex analysis, proved in 1920 by the Finnish mathematician Rolf Nevanlinna, characterizes holomorphic univalent functions on the unit disk which are starlike. Nevanlinna used this criterion to prove the Bieberbach conjecture for starlike univalent functions.
A univalent function h on the unit disk satisfying h(0) = 0 and h′(0) = 1 is starlike, i.e. has image invariant under multiplication by real numbers in [0, 1], if and only if z h′(z)/h(z) has positive real part for |z| < 1 and takes the value 1 at 0.
Note that, by applying the result to a·h(rz), the criterion applies on any disc |z| < r, with only the requirement that f(0) = 0 and f′(0) ≠ 0.
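The criterion is easy to test numerically on a concrete starlike function. For the Koebe function, z k′(z)/k(z) works out to (1 + z)/(1 − z), which maps the disk onto the right half-plane; a sampling sketch (random points are this illustration's choice):

```python
import cmath
import math
import random

def koebe(z):
    return z / (1 - z) ** 2

def koebe_deriv(z):
    # d/dz [z/(1-z)^2] = (1 + z)/(1 - z)^3
    return (1 + z) / (1 - z) ** 3

random.seed(1)
# Nevanlinna's criterion: starlike iff Re(z h'(z)/h(z)) > 0 on the disk.
# Here z k'/k = (1+z)/(1-z), whose real part is (1-|z|^2)/|1-z|^2 > 0.
samples = [
    random.uniform(0.01, 0.99) * cmath.exp(1j * random.uniform(0.0, 2 * math.pi))
    for _ in range(1000)
]
min_re = min((z * koebe_deriv(z) / koebe(z)).real for z in samples)
```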
Let h(z) be a starlike univalent function on |z| < 1 with h(0) = 0 and h′(0) = 1.
For t < 0, define[1]

f_t(z) = h^{−1}(e^t h(z)),

a semigroup of holomorphic mappings of D into itself fixing 0.
Moreover h is the Koenigs function for the semigroup f_t.
By the Schwarz lemma, |f_t(z)| decreases as t decreases.
Hence
But, settingw=ft(z),
where
Hence
and so, dividing by |w|2,
Taking reciprocals and lettingtgo to 0 gives
for all |z| < 1. Since the left hand side is aharmonic function, themaximum principleimplies the inequality is strict.
Conversely if
has positive real part and g(0) = 1, then h can vanish only at 0, where it must have a simple zero.
Now
Thus as z traces the circle z = re^{iθ}, the argument of the image h(re^{iθ}) increases strictly. By the argument principle, since h has a simple zero at 0,
it circles the origin just once. The interior of the region bounded by the curve it traces is therefore starlike. If a is a point in the interior then the number of solutions N(a) of h(z) = a with |z| < r is given by
Since this is an integer, depends continuously on a, and N(0) = 1, it is identically 1. So h is univalent and starlike in each disk |z| < r and hence everywhere.
Constantin Carathéodory proved in 1907 that if
is a holomorphic function on the unit disk D with positive real part, then[2][3]
In fact it suffices to show the result with g replaced by g_r(z) = g(rz) for any r < 1 and then pass to the limit r = 1.
In that case g extends to a continuous function on the closed disc with positive real part and by the Schwarz formula
Using the identity
it follows that
so defines a probability measure, and
Hence
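The displayed formulas in this passage were lost in extraction; the statement being proved is Carathéodory's coefficient lemma: if g(z) = 1 + c₁z + c₂z² + ⋯ is holomorphic on the unit disk with positive real part, then |c_n| ≤ 2 for all n. As an illustrative sanity check (not part of the source), the sketch below numerically recovers the Taylor coefficients of the extremal function g(z) = (1 + z)/(1 − z), for which every |c_n| with n ≥ 1 equals 2:

```python
import cmath

def taylor_coeffs(g, n_max, r=0.5, samples=1024):
    # Approximate the Taylor coefficients c_n of g at 0 by discretizing the
    # Cauchy integral c_n = (1/2*pi*i) * integral of g(z) z^{-n-1} dz on |z| = r.
    coeffs = []
    for n in range(n_max + 1):
        s = 0j
        for k in range(samples):
            z = r * cmath.exp(2j * cmath.pi * k / samples)
            s += g(z) * cmath.exp(-2j * cmath.pi * k * n / samples)
        coeffs.append(s / (samples * r ** n))
    return coeffs

# Extremal function with positive real part: (1+z)/(1-z) = 1 + 2z + 2z^2 + ...
g = lambda z: (1 + z) / (1 - z)
c = taylor_coeffs(g, 5)
print([round(abs(x), 6) for x in c])  # [1.0, 2.0, 2.0, 2.0, 2.0, 2.0]
```

The trapezoidal discretization of the contour integral is spectrally accurate here because g is analytic on a neighbourhood of the circle |z| = r.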
Let
be a univalent starlike function in |z| < 1. Nevanlinna (1921) proved that
In fact by Nevanlinna's criterion
has positive real part for |z| < 1. So by Carathéodory's lemma
On the other hand
gives the recurrence relation
where a_1 = 1. Thus
so it follows by induction that
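The displayed bound here (lost in extraction) is Nevanlinna's coefficient estimate |a_n| ≤ n for a starlike univalent h(z) = z + a₂z² + ⋯. The Koebe function k(z) = z/(1 − z)² = ∑ n zⁿ attains it with equality. As a hedged illustration (not from the source), the sketch below checks both that its coefficients are exactly n and that z k′(z)/k(z) = (1 + z)/(1 − z) has positive real part, as Nevanlinna's criterion requires:

```python
import cmath, random

# Coefficients of z/(1-z)^2: square the geometric series 1/(1-z), then shift.
N = 10
geom = [1] * (N + 1)
sq = [sum(geom[i] * geom[k - i] for i in range(k + 1)) for k in range(N + 1)]
coeffs = [0] + sq[:N]  # coefficient of z^n in z/(1-z)^2 for n = 0..N
assert all(coeffs[n] == n for n in range(1, N + 1))  # |a_n| = n: bound is sharp

# Nevanlinna's criterion: z k'(z)/k(z) = (1+z)/(1-z) has positive real part
# on the open unit disk (it maps the disk onto the right half-plane).
random.seed(0)
for _ in range(1000):
    r, t = random.random() * 0.999, random.random() * 2 * cmath.pi
    z = r * cmath.exp(1j * t)
    assert ((1 + z) / (1 - z)).real > 0
```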
In mathematics, an embedding (or imbedding[1]) is one instance of some mathematical structure contained within another instance, such as a group that is a subgroup.
When some object X is said to be embedded in another object Y, the embedding is given by some injective and structure-preserving map f : X → Y. The precise meaning of "structure-preserving" depends on the kind of mathematical structure of which X and Y are instances. In the terminology of category theory, a structure-preserving map is called a morphism.
The fact that a mapf:X→Y{\displaystyle f:X\rightarrow Y}is an embedding is often indicated by the use of a "hooked arrow" (U+21AA↪RIGHTWARDS ARROW WITH HOOK);[2]thus:f:X↪Y.{\displaystyle f:X\hookrightarrow Y.}(On the other hand, this notation is sometimes reserved forinclusion maps.)
GivenX{\displaystyle X}andY{\displaystyle Y}, several different embeddings ofX{\displaystyle X}inY{\displaystyle Y}may be possible. In many cases of interest there is a standard (or "canonical") embedding, like those of thenatural numbersin theintegers, the integers in therational numbers, the rational numbers in thereal numbers, and the real numbers in thecomplex numbers. In such cases it is common to identify thedomainX{\displaystyle X}with itsimagef(X){\displaystyle f(X)}contained inY{\displaystyle Y}, so thatX⊆Y{\displaystyle X\subseteq Y}.
Ingeneral topology, an embedding is ahomeomorphismonto its image.[3]More explicitly, an injectivecontinuousmapf:X→Y{\displaystyle f:X\to Y}betweentopological spacesX{\displaystyle X}andY{\displaystyle Y}is atopological embeddingiff{\displaystyle f}yields a homeomorphism betweenX{\displaystyle X}andf(X){\displaystyle f(X)}(wheref(X){\displaystyle f(X)}carries thesubspace topologyinherited fromY{\displaystyle Y}). Intuitively then, the embeddingf:X→Y{\displaystyle f:X\to Y}lets us treatX{\displaystyle X}as asubspaceofY{\displaystyle Y}. Every embedding is injective andcontinuous. Every map that is injective, continuous and eitheropenorclosedis an embedding; however there are also embeddings that are neither open nor closed. The latter happens if the imagef(X){\displaystyle f(X)}is neither anopen setnor aclosed setinY{\displaystyle Y}.
For a given spaceY{\displaystyle Y}, the existence of an embeddingX→Y{\displaystyle X\to Y}is atopological invariantofX{\displaystyle X}. This allows two spaces to be distinguished if one is able to be embedded in a space while the other is not.
If the domain of a functionf:X→Y{\displaystyle f:X\to Y}is atopological spacethen the function is said to belocally injective at a pointif there exists someneighborhoodU{\displaystyle U}of this point such that the restrictionf|U:U→Y{\displaystyle f{\big \vert }_{U}:U\to Y}is injective. It is calledlocally injectiveif it is locally injective around every point of its domain. Similarly, alocal (topological, resp. smooth) embeddingis a function for which every point in its domain has some neighborhood to which its restriction is a (topological, resp. smooth) embedding.
Every injective function is locally injective but not conversely.Local diffeomorphisms,local homeomorphisms, and smoothimmersionsare all locally injective functions that are not necessarily injective. Theinverse function theoremgives a sufficient condition for a continuously differentiable function to be (among other things) locally injective. Everyfiberof a locally injective functionf:X→Y{\displaystyle f:X\to Y}is necessarily adiscrete subspaceof itsdomainX.{\displaystyle X.}
Indifferential topology:
LetM{\displaystyle M}andN{\displaystyle N}be smoothmanifoldsandf:M→N{\displaystyle f:M\to N}be a smooth map. Thenf{\displaystyle f}is called animmersionif itsderivativeis everywhere injective. Anembedding, or asmooth embedding, is defined to be an immersion that is an embedding in the topological sense mentioned above (i.e.homeomorphismonto its image).[4]
In other words, the domain of an embedding isdiffeomorphicto its image, and in particular the image of an embedding must be asubmanifold. An immersion is precisely alocal embedding, i.e. for any pointx∈M{\displaystyle x\in M}there is a neighborhoodx∈U⊂M{\displaystyle x\in U\subset M}such thatf:U→N{\displaystyle f:U\to N}is an embedding.
When the domain manifold is compact, the notion of a smooth embedding is equivalent to that of an injective immersion.
An important case isN=Rn{\displaystyle N=\mathbb {R} ^{n}}. The interest here is in how largen{\displaystyle n}must be for an embedding, in terms of the dimensionm{\displaystyle m}ofM{\displaystyle M}. TheWhitney embedding theorem[5]states thatn=2m{\displaystyle n=2m}is enough, and is the best possible linear bound. For example, thereal projective spaceRPm{\displaystyle \mathbb {R} \mathrm {P} ^{m}}of dimensionm{\displaystyle m}, wherem{\displaystyle m}is a power of two, requiresn=2m{\displaystyle n=2m}for an embedding. However, this does not apply to immersions; for instance,RP2{\displaystyle \mathbb {R} \mathrm {P} ^{2}}can be immersed inR3{\displaystyle \mathbb {R} ^{3}}as is explicitly shown byBoy's surface—which has self-intersections. TheRoman surfacefails to be an immersion as it containscross-caps.
An embedding isproperif it behaves well with respect toboundaries: one requires the mapf:X→Y{\displaystyle f:X\rightarrow Y}to be such that
The first condition is equivalent to havingf(∂X)⊆∂Y{\displaystyle f(\partial X)\subseteq \partial Y}andf(X∖∂X)⊆Y∖∂Y{\displaystyle f(X\setminus \partial X)\subseteq Y\setminus \partial Y}. The second condition, roughly speaking, says thatf(X){\displaystyle f(X)}is not tangent to the boundary ofY{\displaystyle Y}.
InRiemannian geometryand pseudo-Riemannian geometry:
Let(M,g){\displaystyle (M,g)}and(N,h){\displaystyle (N,h)}beRiemannian manifoldsor more generallypseudo-Riemannian manifolds.
Anisometric embeddingis a smooth embeddingf:M→N{\displaystyle f:M\rightarrow N}that preserves the (pseudo-)metricin the sense thatg{\displaystyle g}is equal to thepullbackofh{\displaystyle h}byf{\displaystyle f}, i.e.g=f∗h{\displaystyle g=f^{*}h}. Explicitly, for any two tangent vectorsv,w∈Tx(M){\displaystyle v,w\in T_{x}(M)}we have
Analogously,isometric immersionis an immersion between (pseudo)-Riemannian manifolds that preserves the (pseudo)-Riemannian metrics.
Equivalently, in Riemannian geometry, an isometric embedding (immersion) is a smooth embedding (immersion) that preserves length ofcurves(cf.Nash embedding theorem).[6]
In general, for analgebraic categoryC{\displaystyle C}, an embedding between twoC{\displaystyle C}-algebraic structuresX{\displaystyle X}andY{\displaystyle Y}is aC{\displaystyle C}-morphisme:X→Y{\displaystyle e:X\rightarrow Y}that is injective.
Infield theory, anembeddingof afieldE{\displaystyle E}in a fieldF{\displaystyle F}is aring homomorphismσ:E→F{\displaystyle \sigma :E\rightarrow F}.
Thekernelofσ{\displaystyle \sigma }is anidealofE{\displaystyle E}, which cannot be the whole fieldE{\displaystyle E}, because of the condition1=σ(1)=1{\displaystyle 1=\sigma (1)=1}. Furthermore, any field has as ideals only the zero ideal and the whole field itself (because if there is any non-zero field element in an ideal, it is invertible, showing the ideal is the whole field). Therefore, the kernel is0{\displaystyle 0}, so any embedding of fields is amonomorphism. Hence,E{\displaystyle E}isisomorphicto thesubfieldσ(E){\displaystyle \sigma (E)}ofF{\displaystyle F}. This justifies the nameembeddingfor an arbitrary homomorphism of fields.
Ifσ{\displaystyle \sigma }is asignatureandA,B{\displaystyle A,B}areσ{\displaystyle \sigma }-structures(also calledσ{\displaystyle \sigma }-algebras inuniversal algebraor models inmodel theory), then a maph:A→B{\displaystyle h:A\to B}is aσ{\displaystyle \sigma }-embedding exactly if all of the following hold:
HereA⊨R(a1,…,an){\displaystyle A\models R(a_{1},\ldots ,a_{n})}is a model theoretical notation equivalent to(a1,…,an)∈RA{\displaystyle (a_{1},\ldots ,a_{n})\in R^{A}}. In model theory there is also a stronger notion ofelementary embedding.
Inorder theory, an embedding ofpartially ordered setsis a functionF{\displaystyle F}between partially ordered setsX{\displaystyle X}andY{\displaystyle Y}such that
Injectivity ofF{\displaystyle F}follows quickly from this definition. Indomain theory, an additional requirement is that
A mappingϕ:X→Y{\displaystyle \phi :X\to Y}ofmetric spacesis called anembedding(withdistortionC>0{\displaystyle C>0}) if
for everyx,y∈X{\displaystyle x,y\in X}and some constantL>0{\displaystyle L>0}.
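The displayed inequality above was lost in extraction; one common formulation requires (1/C)·L·d_X(x, y) ≤ d_Y(ϕ(x), ϕ(y)) ≤ L·d_X(x, y) for all x, y, and the distortion of a map between finite metric spaces is then the ratio of its largest to its smallest expansion factor. A small illustrative computation (the example and helper names are my own, not from the source) for the 4-cycle's shortest-path metric embedded as the corners of a unit square:

```python
import itertools, math

def distortion(points_X, points_Y, dX, dY):
    # Distortion of the correspondence X[i] -> Y[i]: ratio of the largest to
    # the smallest expansion factor dY/dX over all pairs of distinct points.
    ratios = [dY(points_Y[i], points_Y[j]) / dX(points_X[i], points_X[j])
              for i, j in itertools.combinations(range(len(points_X)), 2)]
    return max(ratios) / min(ratios)

cycle = [0, 1, 2, 3]                       # 4-cycle, shortest-path metric
square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # corners of the unit square
d_cycle = lambda a, b: min((a - b) % 4, (b - a) % 4)
d_plane = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])

# Adjacent vertices keep distance 1, but opposite vertices (graph distance 2)
# land at Euclidean distance sqrt(2), so the distortion is sqrt(2).
print(distortion(cycle, square, d_cycle, d_plane))  # 1.414...
```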
An important special case is that ofnormed spaces; in this case it is natural to consider linear embeddings.
One of the basic questions that can be asked about a finite-dimensionalnormed space(X,‖⋅‖){\displaystyle (X,\|\cdot \|)}is,what is the maximal dimensionk{\displaystyle k}such that theHilbert spaceℓ2k{\displaystyle \ell _{2}^{k}}can be linearly embedded intoX{\displaystyle X}with constant distortion?
The answer is given byDvoretzky's theorem.
Incategory theory, there is no satisfactory and generally accepted definition of embeddings that is applicable in all categories. One would expect that all isomorphisms and all compositions of embeddings are embeddings, and that all embeddings are monomorphisms. Other typical requirements are: anyextremal monomorphismis an embedding and embeddings are stable underpullbacks.
Ideally the class of all embeddedsubobjectsof a given object, up to isomorphism, should also besmall, and thus anordered set. In this case, the category is said to be well powered with respect to the class of embeddings. This allows defining new local structures in the category (such as aclosure operator).
In aconcrete category, anembeddingis a morphismf:A→B{\displaystyle f:A\rightarrow B}that is an injective function from the underlying set ofA{\displaystyle A}to the underlying set ofB{\displaystyle B}and is also aninitial morphismin the following sense:
Ifg{\displaystyle g}is a function from the underlying set of an objectC{\displaystyle C}to the underlying set ofA{\displaystyle A}, and if its composition withf{\displaystyle f}is a morphismfg:C→B{\displaystyle fg:C\rightarrow B}, theng{\displaystyle g}itself is a morphism.
Afactorization systemfor a category also gives rise to a notion of embedding. If(E,M){\displaystyle (E,M)}is a factorization system, then the morphisms inM{\displaystyle M}may be regarded as the embeddings, especially when the category is well powered with respect toM{\displaystyle M}. Concrete theories often have a factorization system in whichM{\displaystyle M}consists of the embeddings in the previous sense. This is the case of the majority of the examples given in this article.
As usual in category theory, there is adualconcept, known as quotient. All the preceding properties can be dualized.
An embedding can also refer to an embedding functor.
In statistical mechanics and mathematics, the Bethe lattice (also called a regular tree) is an infinite symmetric regular tree where all vertices have the same number of neighbors. The Bethe lattice was introduced into the physics literature by Hans Bethe in 1935. In such a graph, each node is connected to z neighbors; the number z is called either the coordination number or the degree, depending on the field.
Due to its distinctive topological structure, the statistical mechanics oflattice modelson this graph are often easier to solve than on other lattices. The solutions are related to the often usedBethe ansatzfor these systems.
When working with the Bethe lattice, it is often convenient to mark a given vertex as the root, to be used as a reference point when considering local properties of the graph.
Once a vertex is marked as the root, we can group the other vertices into layers based on their distance from the root. The number of vertices at a distanced>0{\displaystyle d>0}from the root isz(z−1)d−1{\displaystyle z(z-1)^{d-1}}, as each vertex other than the root is adjacent toz−1{\displaystyle z-1}vertices at a distance one greater from the root, and the root is adjacent toz{\displaystyle z}vertices at a distance 1.
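As a small illustrative sketch (not from the source), the layer counts just described, and the resulting ball sizes, can be tabulated directly:

```python
def layer_size(z, d):
    # Number of vertices at distance d from the root of a degree-z Bethe lattice:
    # 1 at the root, z at distance 1, then z*(z-1)^(d-1) in general.
    if d == 0:
        return 1
    return z * (z - 1) ** (d - 1)

def ball_size(z, radius):
    # Vertices within graph distance `radius` of the root.  The exponential
    # growth is one way the Bethe lattice differs sharply from Z^2 or Z^3.
    return sum(layer_size(z, d) for d in range(radius + 1))

print([layer_size(3, d) for d in range(5)])  # [1, 3, 6, 12, 24]
print(ball_size(3, 4))                       # 46
```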
The Bethe lattice is of interest in statistical mechanics mainly because lattice models on the Bethe lattice are often easier to solve than on other lattices, such as thetwo-dimensional square lattice. This is because the lack of cycles removes some of the more complicated interactions. While the Bethe lattice does not as closely approximate the interactions in physical materials as other lattices, it can still provide useful insight.
TheIsing modelis a mathematical model offerromagnetism, in which the magnetic properties of a material are represented by a "spin" at each node in the lattice, which is either +1 or -1. The model is also equipped with a constantK{\displaystyle K}representing the strength of the interaction between adjacent nodes, and a constanth{\displaystyle h}representing an external magnetic field.
The Ising model on the Bethe lattice is defined by the partition function
In order to compute the local magnetization, we can break the lattice up into several identical parts by removing a vertex. This gives us a recurrence relation which allows us to compute the magnetization of a Cayley tree withnshells (the finite analog to the Bethe lattice) as
wherex0=1{\displaystyle x_{0}=1}and the values ofxi{\displaystyle x_{i}}satisfy the recurrence relation
In theK>0{\displaystyle K>0}case when the system is ferromagnetic, the above sequence converges, so we may take the limit to evaluate the magnetization on the Bethe lattice. We get
There are either 1 or 3 solutions to this equation. In the case where there are 3, the sequencexn{\displaystyle x_{n}}will converge to the smallest whenh>0{\displaystyle h>0}and the largest whenh<0{\displaystyle h<0}.
The free energyfat each site of the lattice in the Ising Model is given by
wherez=exp(−2K){\displaystyle z=\exp(-2K)}andx{\displaystyle x}is as before.[1]
The probability that arandom walkon a Bethe lattice of degreez{\displaystyle z}starting at a given vertex eventually returns to that vertex is given by1z−1{\displaystyle {\frac {1}{z-1}}}. To show this, letP(k){\displaystyle P(k)}be the probability of returning to our starting point if we are a distancek{\displaystyle k}away. We have the recurrence relation
for allk>1{\displaystyle k>1}, as at each location other than the starting vertex there arez−1{\displaystyle z-1}edges going away from the starting vertex and 1 edge going towards it. Summing this equation over allk>1{\displaystyle k>1}, we get
We haveP(0)=1{\displaystyle P(0)=1}, as this indicates that we have just returned to the starting vertex, soP(1)=1/(z−1){\displaystyle P(1)=1/(z-1)}, which is the value we want.
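This derivation can be checked by simulation. Because the lattice is a tree, only the distance from the starting vertex matters, so the walk reduces to a biased walk on the nonnegative integers. The following Monte Carlo sketch (an illustration; the cutoff and trial counts are my own choices) estimates the return probability for z = 4:

```python
import random

def return_probability(z, trials=100_000, escape=40, seed=1):
    # From distance k > 0 there is 1 edge inward and z-1 edges outward, so the
    # distance moves down with probability 1/z and up with probability (z-1)/z.
    # Once the walk is `escape` steps out, the chance of ever returning is
    # (1/(z-1))^escape, which we neglect.
    rng = random.Random(seed)
    returns = 0
    for _ in range(trials):
        k = 1  # the first step always moves to distance 1
        while 0 < k < escape:
            k += -1 if rng.random() < 1 / z else 1
        if k == 0:
            returns += 1
    return returns / trials

print(return_probability(4))  # close to 1/3, the exact value 1/(z-1)
```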
Note that this is in stark contrast to the case of random walks on the two-dimensional square lattice, which famously has a return probability of 1.[2] Such a lattice is 4-regular, but the 4-regular Bethe lattice has a return probability of 1/3.
One can easily bound the number of closed walks of length2k{\displaystyle 2k}starting at a given vertex of the Bethe Lattice with degreez{\displaystyle z}from below. By considering each step as either an outward step (away from the starting vertex) or an inward step (toward the starting vertex), we see that any closed walk of length2k{\displaystyle 2k}must have exactlyk{\displaystyle k}outward steps andk{\displaystyle k}inward steps. We also may not have taken more inward steps than outward steps at any point, so the number of sequences of step directions (either inward or outward) is given by thek{\displaystyle k}thCatalan numberCk{\displaystyle C_{k}}. There are at leastz−1{\displaystyle z-1}choices for each outward step, and always exactly 1 choice for each inward step, so the number of closed walks is at least(z−1)kCk{\displaystyle (z-1)^{k}C_{k}}.
This bound is not tight, as there are actuallyz{\displaystyle z}choices for an outward step from the starting vertex, which happens at the beginning and any number of times during the walk. The exact number of walks is trickier to compute, and is given by the formula
where2F1(α,β,γ,z){\displaystyle _{2}F_{1}(\alpha ,\beta ,\gamma ,z)}is theGauss hypergeometric function.[3]
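The lower bound can be compared with the exact count by dynamic programming on the distance from the starting vertex (an illustrative sketch, not from the source):

```python
from math import comb

def closed_walks(z, k):
    # Exact number of closed walks of length 2k from a fixed vertex of the
    # degree-z Bethe lattice: track how many walks end at each distance.
    w = {0: 1}
    for _ in range(2 * k):
        nxt = {}
        for d, count in w.items():
            if d == 0:
                nxt[1] = nxt.get(1, 0) + z * count               # z outward edges
            else:
                nxt[d + 1] = nxt.get(d + 1, 0) + (z - 1) * count  # outward
                nxt[d - 1] = nxt.get(d - 1, 0) + count            # inward
        w = nxt
    return w.get(0, 0)

def catalan(k):
    return comb(2 * k, k) // (k + 1)

z, k = 3, 2
print(closed_walks(z, k), (z - 1) ** k * catalan(k))  # 15 8: count exceeds bound
```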
We may use this fact to bound the second largest eigenvalue of ad{\displaystyle d}-regular graph. LetG{\displaystyle G}be ad{\displaystyle d}-regular graph withn{\displaystyle n}vertices, and letA{\displaystyle A}be itsadjacency matrix. ThentrA2k{\displaystyle {\text{tr }}A^{2k}}is the number of closed walks of length2k{\displaystyle 2k}. The number of closed walks onG{\displaystyle G}is at leastn{\displaystyle n}times the number of closed walks on the Bethe lattice with degreed{\displaystyle d}starting at a particular vertex, as we can map the walks on the Bethe lattice to the walks onG{\displaystyle G}that start at a given vertex and only go back on paths that were already tread. There are often more walks onG{\displaystyle G}, as we can make use of cycles to create additional walks. The largest eigenvalue ofA{\displaystyle A}isd{\displaystyle d}, and lettingλ2{\displaystyle \lambda _{2}}be the second largest absolute value of an eigenvalue, we have
This givesλ22k≥1n−1(n(d−1)kCk−d2k){\displaystyle \lambda _{2}^{2k}\geq {\frac {1}{n-1}}(n(d-1)^{k}C_{k}-d^{2k})}. Noting thatCk=(4−o(1))k{\displaystyle C_{k}=(4-o(1))^{k}}ask{\displaystyle k}grows, we can letn{\displaystyle n}grow much faster thank{\displaystyle k}to see that there are only finitely manyd{\displaystyle d}-regular graphsG{\displaystyle G}for which the second largest absolute value of an eigenvalue is at mostλ{\displaystyle \lambda }, for anyλ<2d−1.{\displaystyle \lambda <2{\sqrt {d-1}}.}This is a rather interesting result in the study of(n,d,λ)-graphs.
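As an illustration of the trace argument (my own example, not from the source), the Petersen graph is 3-regular with girth 5, so its closed walks of length 4 are exactly tree-like, and the bound tr A^{2k} ≥ n(d − 1)^k C_k can be checked directly for k = 2:

```python
def petersen_adj():
    # Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
    n = 10
    A = [[0] * n for _ in range(n)]
    def add(u, v):
        A[u][v] = A[v][u] = 1
    for i in range(5):
        add(i, (i + 1) % 5)          # outer cycle
        add(i, i + 5)                # spoke
        add(5 + i, 5 + (i + 2) % 5)  # inner pentagram
    return A

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = petersen_adj()
A2 = matmul(A, A)
A4 = matmul(A2, A2)
trace = sum(A4[i][i] for i in range(10))  # number of closed walks of length 4

# Lower bound n * (d-1)^k * C_k with n = 10, d = 3, k = 2: 10 * 4 * 2 = 80.
print(trace)  # 150, comfortably above the bound of 80
```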
A Bethe graph of even coordination number 2n is isomorphic to the unoriented Cayley graph of a free group of rank n with respect to a free generating set.
Bethe lattices also occur as the discrete subgroups of certain hyperbolic Lie groups, such as the Fuchsian groups. As such, they are also lattices in the sense of a lattice in a Lie group.
The vertices and edges of an order-k apeirogonal tiling of the hyperbolic plane form a Bethe lattice of degree k.[4]
In the mathematical discipline of graph theory, a graph C is a covering graph of another graph G if there is a covering map from the vertex set of C to the vertex set of G. A covering map f is a surjection and a local isomorphism: the neighbourhood of a vertex v in C is mapped bijectively onto the neighbourhood of f(v) in G.
The term lift is often used as a synonym for a covering graph of a connected graph.
Though it may be misleading, there is no (obvious) relationship between covering graphs and vertex covers or edge covers.
The combinatorial formulation of covering graphs is immediately generalized to the case of multigraphs. A covering graph is a special case of a covering complex.[1] Both covering complexes and multigraphs (viewed as 1-dimensional cell complexes) are examples of covering spaces of topological spaces, so the terminology of the theory of covering spaces is available: covering transformation group, universal covering, abelian covering, and maximal abelian covering.[2]
Let G = (V1, E1) and C = (V2, E2) be two graphs, and let f : V2 → V1 be a surjection. Then f is a covering map from C to G if for each v ∈ V2, the restriction of f to the neighbourhood of v is a bijection onto the neighbourhood of f(v) in G. Put otherwise, f maps edges incident to v one-to-one onto edges incident to f(v).
If there exists a covering map from C to G, then C is a covering graph, or a lift, of G. An h-lift is a lift such that the covering map f has the property that for every vertex v of G, its fiber f⁻¹(v) has exactly h elements.
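The local-bijection condition can be checked mechanically. The following sketch (the function name and examples are my own, not from the source) tests a candidate map between graphs given as adjacency lists; the 6-cycle double-covers the triangle, while the 4-cycle does not cover it:

```python
def is_covering_map(C_adj, G_adj, f):
    # f must be a surjection on vertices, and for each vertex v of C the
    # restriction of f to the neighbourhood of v must be a bijection onto
    # the neighbourhood of f(v) in G.
    if {f[v] for v in C_adj} != set(G_adj):
        return False
    for v, nbrs in C_adj.items():
        images = [f[u] for u in nbrs]
        if len(set(images)) != len(images):   # not locally injective
            return False
        if set(images) != set(G_adj[f[v]]):   # not onto the neighbourhood
            return False
    return True

C6 = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
C4 = {v: [(v - 1) % 4, (v + 1) % 4] for v in range(4)}
C3 = {v: [(v - 1) % 3, (v + 1) % 3] for v in range(3)}

print(is_covering_map(C6, C3, {v: v % 3 for v in range(6)}))  # True
print(is_covering_map(C4, C3, {v: v % 3 for v in range(4)}))  # False
```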
In the following figure, the graphCis a covering graph of the graphH.
The covering mapffromCtoHis indicated with the colours. For example, both blue vertices ofCare mapped to the blue vertex ofH. The mapfis a surjection: each vertex ofHhas a preimage inC. Furthermore,fmaps bijectively each neighbourhood of a vertexvinConto the neighbourhood of the vertexf(v) inH.
For example, letvbe one of the purple vertices inC; it has two neighbours inC, a green vertexuand a blue vertext. Similarly, letv′be the purple vertex inH; it has two neighbours inH, the green vertexu′and the blue vertext′. The mappingfrestricted to {t,u,v} is a bijection onto {t′,u′,v′}. This is illustrated in the following figure:
Similarly, we can check that the neighbourhood of a blue vertex inCis mapped one-to-one onto the neighbourhood of the blue vertex inH:
In the above example, each vertex ofHhas exactly 2 preimages inC. HenceCis a2-fold coveror adouble coverofH.
For any graphG, it is possible to construct thebipartite double coverofG, which is abipartite graphand a double cover ofG. The bipartite double cover ofGis thetensor product of graphsG×K2:
IfGis already bipartite, its bipartite double cover consists of two disjoint copies ofG. A graph may have many different double covers other than the bipartite double cover.
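Constructing the bipartite double cover is a direct tensor product with K2; the sketch below (helper names are my own) also verifies the dichotomy just stated on an odd and an even cycle:

```python
def bipartite_double_cover(edges):
    # Tensor product G x K2: each vertex v splits into (v, 0) and (v, 1),
    # and each edge {u, v} becomes {(u,0),(v,1)} and {(u,1),(v,0)}.
    return ([((u, 0), (v, 1)) for u, v in edges] +
            [((u, 1), (v, 0)) for u, v in edges])

def components(edge_list):
    # Count connected components with union-find over the vertices that appear.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edge_list:
        parent[find(u)] = find(v)
    return len({find(x) for x in parent})

triangle = [(0, 1), (1, 2), (2, 0)]        # odd cycle: not bipartite
square = [(0, 1), (1, 2), (2, 3), (3, 0)]  # even cycle: bipartite

print(components(bipartite_double_cover(triangle)))  # 1 (a single 6-cycle)
print(components(bipartite_double_cover(square)))    # 2 (two disjoint 4-cycles)
```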
For any connected graphG, it is possible to construct itsuniversal covering graph.[3]This is an instance of the more generaluniversal coverconcept from topology; the topological requirement that a universal cover besimply connectedtranslates in graph-theoretic terms to a requirement that it be acyclic and connected; that is, atree.
The universal covering graph is unique (up to isomorphism). IfGis a tree, thenGitself is the universal covering graph ofG. For any other finite connected graphG, the universal covering graph ofGis a countably infinite (but locally finite) tree.
The universal covering graphTof a connected graphGcan be constructed as follows. Choose an arbitrary vertexrofGas a starting point. Each vertex ofTis a non-backtracking walk that begins fromr, that is, a sequencew= (r,v1,v2, ...,vn) of vertices ofGsuch that
Then, two vertices ofTare adjacent if one is a simple extension of another: the vertex (r,v1,v2, ...,vn) is adjacent to the vertex (r,v1,v2, ...,vn-1). Up to isomorphism, the same treeTis constructed regardless of the choice of the starting pointr.
The covering mapfmaps the vertex (r) inTto the vertexrinG, and a vertex (r,v1,v2, ...,vn) inTto the vertexvninG.
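The construction above can be carried out to any finite depth. A sketch (illustrative; the function name is my own) that enumerates the non-backtracking walks from a chosen root:

```python
def universal_cover_ball(adj, root, depth):
    # Vertices of the universal covering tree out to `depth`: all
    # non-backtracking walks from `root`, each represented as a tuple of
    # vertices.  A walk is adjacent to its one-step extensions.
    walks = [(root,)]
    frontier = [(root,)]
    for _ in range(depth):
        nxt = []
        for w in frontier:
            for u in adj[w[-1]]:
                if len(w) >= 2 and u == w[-2]:
                    continue  # immediate backtracking is not allowed
                nxt.append(w + (u,))
        walks += nxt
        frontier = nxt
    return walks

# The triangle is 2-regular, so its universal cover is the two-sided infinite
# path: exactly 2 non-backtracking walks at every positive depth.
C3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(len(universal_cover_ball(C3, 0, 3)))  # 1 + 2 + 2 + 2 = 7
```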
The following figure illustrates the universal covering graphTof a graphH; the colours indicate the covering map.
For anyk, allk-regular graphshave the same universal cover: the infinitek-regular tree.
An infinite-fold abelian covering graph of a finite (multi)graph is called a topological crystal, an abstraction of crystal structures. For example, the diamond crystal as a graph is the maximal abelian covering graph of the four-edgedipole graph. This view combined with the idea of "standard realizations" turns out to be useful in a systematic design of (hypothetical) crystals.[2]
A planar cover of a graph is a finite covering graph that is itself a planar graph. The property of having a planar cover may be characterized by forbidden minors, but the exact characterization of this form remains unknown. Every graph with an embedding in the projective plane has a planar cover coming from the orientable double cover of the projective plane; in 1988, Seiya Negami conjectured that these are the only graphs with planar covers, but this remains unproven.[4]
A common way to form covering graphs usesvoltage graphs, in which the darts of the given graphG(that is, pairs of directed edges corresponding to the undirected edges ofG) are labeled with inverse pairs of elements from somegroup. The derived graph of the voltage graph has as its vertices the pairs (v,x) wherevis a vertex ofGandxis a group element; a dart fromvtowlabeled with the group elementyinGcorresponds to an edge from (v,x) to (w,xy) in the derived graph.
The universal cover can be seen in this way as a derived graph of a voltage graph in which the edges of a spanning tree of the graph are labeled by the identity element of the group, and each remaining pair of darts is labeled by a distinct generating element of a free group. The bipartite double cover can be seen in this way as a derived graph of a voltage graph in which each dart is labeled by the nonzero element of the group of order two.
In discrete mathematics, particularly in graph theory, a graph is a structure consisting of a set of objects where some pairs of the objects are in some sense "related". The objects are represented by abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called a link or line).[1] Typically, a graph is depicted in diagrammatic form as a set of dots or circles for the vertices, joined by lines or curves for the edges.
The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this graph is undirected because any personAcan shake hands with a personBonly ifBalso shakes hands withA. In contrast, if an edge from a personAto a personBmeans thatAowes money toB, then this graph is directed, because owing money is not necessarily reciprocated.
Graphs are the basic subject studied by graph theory. The word "graph" was first used in this sense byJ. J. Sylvesterin 1878 due to a direct relation between mathematics andchemical structure(what he called a chemico-graphical image).[2][3]
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and relatedmathematical structures.
A graph (sometimes called an undirected graph to distinguish it from a directed graph, or a simple graph to distinguish it from a multigraph)[4][5] is a pair G = (V, E), where V is a set whose elements are called vertices (singular: vertex), and E is a set of unordered pairs {v1, v2} of vertices, whose elements are called edges (sometimes links or lines).
The verticesuandvof an edge{u,v}are called the edge'sendpoints. The edge is said tojoinuandvand to beincidenton them. A vertex may belong to no edge, in which case it is not joined to any other vertex and is calledisolated. When an edge{u,v}{\displaystyle \{u,v\}}exists, the verticesuandvare calledadjacent.
Amultigraphis a generalization that allows multiple edges to have the same pair of endpoints. In some texts, multigraphs are simply called graphs.[6][7]
Sometimes, graphs are allowed to containloops, which are edges that join a vertex to itself. To allow loops, the pairs of vertices inEmust be allowed to have the same node twice. Such generalized graphs are calledgraphs with loopsor simplygraphswhen it is clear from the context that loops are allowed.
Generally, the vertex setVis taken to be finite (which implies that the edge setEis also finite). Sometimesinfinite graphsare considered, but they are usually viewed as a special kind ofbinary relation, because most results on finite graphs either do not extend to the infinite case or need a rather different proof.
Anempty graphis a graph that has anempty setof vertices (and thus an empty set of edges). Theorderof a graph is its number|V|of vertices, usually denoted byn. Thesizeof a graph is its number|E|of edges, typically denoted bym. However, in some contexts, such as for expressing thecomputational complexityof algorithms, the termsizeis used for the quantity|V| + |E|(otherwise, a non-empty graph could have size 0). Thedegreeorvalencyof a vertex is the number of edges that are incident to it; for graphs with loops, a loop is counted twice.
In a graph of ordern, the maximum degree of each vertex isn− 1(orn+ 1if loops are allowed, because a loop contributes 2 to the degree), and the maximum number of edges isn(n− 1)/2(orn(n+ 1)/2if loops are allowed).
The edges of a graph define a symmetric relation on the vertices, called the adjacency relation. Specifically, two vertices x and y are adjacent if {x, y} is an edge. A graph is fully determined by its adjacency matrix A, which is an n × n square matrix, with A_ij specifying the number of connections from vertex i to vertex j. For a simple graph, A_ij is either 0, indicating disconnection, or 1, indicating connection; moreover A_ii = 0 because an edge in a simple graph cannot start and end at the same vertex. Graphs with self-loops will be characterized by some or all A_ii being equal to a positive integer, and multigraphs (with multiple edges between vertices) will be characterized by some or all A_ij being equal to a positive integer. Undirected graphs will have a symmetric adjacency matrix (meaning A_ij = A_ji).
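A minimal sketch of these conventions (illustrative, not from the source): build the adjacency matrix of a simple undirected graph from an edge list, then check symmetry and the degree sum (each edge contributes 2 to the total degree):

```python
def adjacency_matrix(n, edges):
    # Simple undirected graph: A is symmetric with zero diagonal.
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a 4-cycle plus one chord
A = adjacency_matrix(n, edges)

degrees = [sum(row) for row in A]                # row sums give vertex degrees
assert A == [list(col) for col in zip(*A)]       # symmetric: A_ij == A_ji
assert sum(degrees) == 2 * len(edges)            # handshake lemma
print(degrees)  # [3, 2, 3, 2]
```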
Adirected graphordigraphis a graph in which edges have orientations.
In one restricted but very common sense of the term,[8]adirected graphis a pairG= (V,E)comprising:
To avoid ambiguity, this type of object may be called precisely adirected simple graph.
In the edge(x,y)directed fromxtoy, the verticesxandyare called theendpointsof the edge,xthetailof the edge andytheheadof the edge. The edge is said tojoinxandyand to beincidentonxand ony. A vertex may exist in a graph and not belong to an edge. The edge(y,x)is called theinverted edgeof(x,y).Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head.
In one more general sense of the term allowing multiple edges,[8]a directed graph is sometimes defined to be an ordered tripleG= (V,E,ϕ)comprising:
To avoid ambiguity, this type of object may be called precisely adirected multigraph.
Aloopis an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertexx{\displaystyle x}to itself is the edge (for a directed simple graph) or is incident on (for a directed multigraph)(x,x){\displaystyle (x,x)}which is not in{(x,y)∣(x,y)∈V2andx≠y}{\displaystyle \{(x,y)\mid (x,y)\in V^{2}\;{\textrm {and}}\;x\neq y\}}. So to allow loops the definitions must be expanded. For directed simple graphs, the definition ofE{\displaystyle E}should be modified toE⊆V2{\displaystyle E\subseteq V^{2}}. For directed multigraphs, the definition ofϕ{\displaystyle \phi }should be modified toϕ:E→V2{\displaystyle \phi :E\to V^{2}}. To avoid ambiguity, these types of objects may be called precisely adirected simple graph permitting loopsand adirected multigraph permitting loops(or aquiver) respectively.
The edges of a directed simple graph permitting loops G form a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.
Amixed graphis a graph in which some edges may be directed and some may be undirected. It is an ordered tripleG= (V,E,A)for amixed simple graphandG= (V,E,A,ϕE,ϕA)for amixed multigraphwithV,E(the undirected edges),A(the directed edges),ϕEandϕAdefined as above. Directed and undirected graphs are special cases.
Aweighted graphor anetwork[9][10]is a graph in which a number (the weight) is assigned to each edge.[11]Such weights might represent for example costs, lengths or capacities, depending on the problem at hand. Such graphs arise in many contexts, for example inshortest path problemssuch as thetraveling salesman problem.
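A weighted graph can be represented simply as a mapping from edges to numbers; the following Python sketch (names and values are illustrative, not from the article) computes the total weight of a path, the quantity minimized in shortest-path problems:

```python
# Edge weights for a small undirected weighted graph (illustrative values).
weights = {("a", "b"): 4.0, ("b", "c"): 1.5, ("a", "c"): 7.0}

def path_weight(path, weights):
    """Total weight of a path given as a sequence of vertices."""
    return sum(
        weights.get((u, v), weights.get((v, u)))  # undirected: try both orders
        for u, v in zip(path, path[1:])
    )

# Going a -> b -> c (weight 5.5) is cheaper than the direct edge a -> c (7.0).
assert path_weight(["a", "b", "c"], weights) == 5.5
```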
One definition of anoriented graphis that it is a directed graph in which at most one of(x,y)and(y,x)may be edges of the graph. That is, it is a directed graph that can be formed as anorientationof an undirected (simple) graph.
Some authors use "oriented graph" to mean the same as "directed graph". Some authors use "oriented graph" to mean any orientation of a given undirected graph or multigraph.
Aregular graphis a graph in which each vertex has the same number of neighbours, i.e., every vertex has the same degree. A regular graph with vertices of degreekis called ak‑regular graph or regular graph of degreek.
Acomplete graphis a graph in which each pair of vertices is joined by an edge. A complete graph contains all possible edges.
Afinite graphis a graph in which the vertex set and the edge set arefinite sets. Otherwise, it is called aninfinite graph.
Most commonly in graph theory it is implied that the graphs discussed are finite. If the graphs are infinite, that is usually specifically stated.
In an undirected graph, an unordered pair of vertices{x,y}is calledconnectedif a path leads fromxtoy. Otherwise, the unordered pair is calleddisconnected.
Aconnected graphis an undirected graph in which every unordered pair of vertices in the graph is connected. Otherwise, it is called adisconnected graph.
In a directed graph, an ordered pair of vertices(x,y)is calledstrongly connectedif a directed path leads fromxtoy. Otherwise, the ordered pair is calledweakly connectedif an undirected path leads fromxtoyafter replacing all of its directed edges with undirected edges. Otherwise, the ordered pair is calleddisconnected.
Astrongly connected graphis a directed graph in which every ordered pair of vertices in the graph is strongly connected. Otherwise, it is called aweakly connected graphif every ordered pair of vertices in the graph is weakly connected. Otherwise it is called adisconnected graph.
Ak-vertex-connected graphork-edge-connected graphis a graph in which no set ofk− 1vertices (respectively, edges) exists that, when removed, disconnects the graph. Ak-vertex-connected graph is often called simply ak-connected graph.
Abipartite graphis a simple graph in which the vertex set can bepartitionedinto two sets,WandX, so that no two vertices inWshare a common edge and no two vertices inXshare a common edge. Alternatively, it is a graph with achromatic numberof 2.
In acomplete bipartite graph, the vertex set is the union of two disjoint sets,WandX, so that every vertex inWis adjacent to every vertex inXbut there are no edges withinWorX.
Apath graphorlinear graphof ordern≥ 2is a graph in which the vertices can be listed in an orderv1,v2, …,vnsuch that the edges are the{vi,vi+1}wherei= 1, 2, …,n− 1. Path graphs can be characterized as connected graphs in which the degree of all but two vertices is 2 and the degree of the two remaining vertices is 1. If a path graph occurs as asubgraphof another graph, it is apathin that graph.
Aplanar graphis a graph whose vertices and edges can be drawn in a plane such that no two of the edges intersect.
Acycle graphorcircular graphof ordern≥ 3is a graph in which the vertices can be listed in an orderv1,v2, …,vnsuch that the edges are the{vi,vi+1}wherei= 1, 2, …,n− 1, plus the edge{vn,v1}. Cycle graphs can be characterized as connected graphs in which the degree of all vertices is 2. If a cycle graph occurs as a subgraph of another graph, it is a cycle or circuit in that graph.
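The degree characterizations of path and cycle graphs stated above can be tested directly (this Python sketch is ours; note that the degree sequence alone does not certify connectivity, which must be checked separately):

```python
from collections import Counter

def degree_sequence(edges):
    """Sorted list of vertex degrees of an undirected graph."""
    deg = Counter()
    for x, y in edges:
        deg[x] += 1
        deg[y] += 1
    return sorted(deg.values())

path_edges = [(1, 2), (2, 3), (3, 4)]           # path graph P4
cycle_edges = [(1, 2), (2, 3), (3, 4), (4, 1)]  # cycle graph C4

# Path: all but two vertices have degree 2, the two endpoints degree 1.
assert degree_sequence(path_edges) == [1, 1, 2, 2]
# Cycle: every vertex has degree 2.
assert degree_sequence(cycle_edges) == [2, 2, 2, 2]
```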
Atreeis an undirected graph in which any twoverticesare connected byexactly onepath, or equivalently aconnectedacyclicundirected graph.
Aforestis an undirected graph in which any two vertices are connected byat most onepath, or equivalently an acyclic undirected graph, or equivalently adisjoint unionof trees.
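The equivalent definitions above (tree = connected acyclic graph, forest = acyclic graph) suggest a standard test via union-find: an edge whose endpoints already lie in the same component closes a cycle. A Python sketch (ours, not from the article):

```python
def is_forest(vertices, edges):
    """Acyclicity check via union-find."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for x, y in edges:
        rx, ry = find(x), find(y)
        if rx == ry:
            return False  # endpoints already connected: a cycle would form
        parent[rx] = ry
    return True

def is_tree(vertices, edges):
    # A tree is a connected forest; a forest is connected
    # exactly when |E| = |V| - 1.
    return is_forest(vertices, edges) and len(edges) == len(vertices) - 1

assert is_tree({1, 2, 3, 4}, [(1, 2), (2, 3), (2, 4)])
assert is_forest({1, 2, 3, 4}, [(1, 2), (3, 4)])       # two trees, not one
assert not is_forest({1, 2, 3}, [(1, 2), (2, 3), (3, 1)])  # triangle has a cycle
```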
Apolytree(ordirected treeororiented treeorsingly connected network) is adirected acyclic graph(DAG) whose underlying undirected graph is a tree.
Apolyforest(ordirected forestororiented forest) is a directed acyclic graph whose underlying undirected graph is a forest.
More advanced kinds of graphs are:
Two edges of a graph are calledadjacentif they share a common vertex. Two edges of a directed graph are calledconsecutiveif the head of the first one is the tail of the second one. Similarly, two vertices are calledadjacentif they share a common edge (consecutiveif the first one is the tail and the second one is the head of an edge), in which case the common edge is said tojointhe two vertices. An edge and a vertex on that edge are calledincident.
The graph with only one vertex and no edges is called thetrivial graph. A graph with only vertices and no edges is known as anedgeless graph. The graph with no vertices and no edges is sometimes called thenull graphorempty graph, but the terminology is not consistent and not all mathematicians allow this object.
Normally, the vertices of a graph, by their nature as elements of a set, are distinguishable. This kind of graph may be calledvertex-labeled. However, for many questions it is better to treat vertices as indistinguishable. (Of course, the vertices may be still distinguishable by the properties of the graph itself, e.g., by the numbers of incident edges.) The same remarks apply to edges, so graphs with labeled edges are callededge-labeled. Graphs with labels attached to edges or vertices are more generally designated aslabeled. Consequently, graphs in which vertices are indistinguishable and edges are indistinguishable are calledunlabeled. (In the literature, the termlabeledmay apply to other kinds of labeling, besides that which serves only to distinguish different vertices or edges.)
The category of directed multigraphs permitting loops is the comma category Set ↓ D where D : Set → Set is the functor taking a set s to s × s.
There are several operations that produce new graphs from initial ones, which might be classified into the following categories:
In ahypergraph, an edge can join any positive number of vertices.
An undirected graph can be seen as asimplicial complexconsisting of 1-simplices(the edges) and 0-simplices (the vertices). As such, complexes are generalizations of graphs since they allow for higher-dimensional simplices.
Every graph gives rise to amatroid.
Inmodel theory, a graph is just astructure. But in that case, there is no limitation on the number of edges: it can be anycardinal number, seecontinuous graph.
Incomputational biology,power graph analysisintroduces power graphs as an alternative representation of undirected graphs.
In geographic information systems, geometric networks are closely modeled after graphs, and borrow many concepts from graph theory to perform spatial analysis on road networks or utility grids.
Ingraph theory, thebipartite double coverof anundirected graphGis abipartite,covering graphofG, with twice as manyverticesasG. It can be constructed as thetensor product of graphs,G×K2. It is also called theKronecker double cover,canonical double coveror simply thebipartite doubleofG.
It should not be confused with acycle double coverof a graph, a family of cycles that includes each edge twice.
The bipartite double cover ofGhas two verticesuiandwifor each vertexviofG. Two verticesuiandwjare connected by an edge in the double cover if and only ifviandvjare connected by an edge inG. For instance, below is an illustration of a bipartite double cover of a non-bipartite graphG. In the illustration, each vertex in the tensor product is shown using a color from the first term of the product (G) and a shape from the second term of the product (K2); therefore, the verticesuiin the double cover are shown as circles while the verticeswiare shown as squares.
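The construction just described is easy to carry out in code. The Python sketch below (ours, not from the article) forms the tensor product with K2 and checks it on an odd cycle, where the double cover is a single cycle of twice the length, and on an even cycle, where it splits into two disjoint copies:

```python
def bipartite_double_cover(edges):
    """Tensor product G × K2: each vertex v splits into (v, 0) and (v, 1);
    each edge {u, v} of G yields {(u, 0), (v, 1)} and {(u, 1), (v, 0)}."""
    return ([((u, 0), (v, 1)) for u, v in edges]
            + [((u, 1), (v, 0)) for u, v in edges])

def is_connected(edges):
    """Depth-first search over the union of edge endpoints."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

triangle = [(0, 1), (1, 2), (2, 0)]        # C3: odd cycle, not bipartite
cover = bipartite_double_cover(triangle)
assert len(cover) == 6 and is_connected(cover)   # a single 6-cycle

square = [(0, 1), (1, 2), (2, 3), (3, 0)]  # C4: already bipartite
assert not is_connected(bipartite_double_cover(square))  # two disjoint copies
```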
The bipartite double cover may also be constructed using adjacency matrices (as described below) or as the derived graph of avoltage graphin which each edge ofGis labeled by the nonzero element of the two-elementgroup.
The bipartite double cover of thePetersen graphis theDesargues graph:K2×G(5,2) =G(10,3).
The bipartite double cover of acomplete graphKnis acrown graph(acomplete bipartite graphKn,nminus aperfect matching). In particular, the bipartite double cover of the graph of atetrahedron,K4, is the graph of acube.
The bipartite double cover of an odd-lengthcycle graphis a cycle of twice the length, while the bipartite double of any bipartite graph (such as an even length cycle, shown in the following example) is formed by two disjoint copies of the original graph.
If an undirected graph G has a matrix A as its adjacency matrix, then the adjacency matrix of the double cover of G is the block matrix {\displaystyle {\begin{pmatrix}0&A\\A&0\end{pmatrix}}} (here using that A is symmetric for an undirected graph),
and the biadjacency matrix of the double cover of G is just A itself. That is, the conversion from a graph to its double cover can be performed simply by reinterpreting A as a biadjacency matrix instead of as an adjacency matrix. More generally, the reinterpretation of the adjacency matrices of directed graphs as biadjacency matrices provides a combinatorial equivalence between directed graphs and balanced bipartite graphs.[1]
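The block-matrix construction can be written out in a few lines; in this Python sketch (ours), assembling [[0, A], [A, 0]] and reading the off-diagonal block back recovers A as the biadjacency matrix:

```python
def double_cover_adjacency(A):
    """Block matrix [[0, A], [A, 0]] for an undirected graph
    (so A is assumed symmetric)."""
    n = len(A)
    zero = [[0] * n for _ in range(n)]
    return ([zero[i] + A[i] for i in range(n)]
            + [A[i] + zero[i] for i in range(n)])

A = [[0, 1, 1],   # adjacency matrix of the triangle C3
     [1, 0, 1],
     [1, 1, 0]]
B = double_cover_adjacency(A)

# The off-diagonal blocks — the biadjacency matrix — are A itself:
assert [row[3:] for row in B[:3]] == A
assert [row[:3] for row in B[3:]] == A
```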
The bipartite double cover of any graphGis abipartite graph; both parts of the bipartite graph have one vertex for each vertex ofG. A bipartite double cover isconnectedif and only ifGis connected and non-bipartite.[2]
The bipartite double cover is a special case of adouble cover(a 2-foldcovering graph). A double cover in graph theory can be viewed as a special case of atopological double cover.
IfGis a non-bipartitesymmetric graph, the double cover ofGis also a symmetric graph; several knowncubicsymmetric graphs may be obtained in this way. For instance, the double cover ofK4is the graph of a cube; the double cover of the Petersen graph is the Desargues graph; and the double cover of the graph of thedodecahedronis a 40-vertex symmetric cubic graph.[3]
It is possible for two different graphs to have isomorphic bipartite double covers. For instance, the Desargues graph is not only the bipartite double cover of the Petersen graph, but is also the bipartite double cover of a different graph that is not isomorphic to the Petersen graph.[4] Not every bipartite graph is a bipartite double cover of another graph; for a bipartite graph G to be the bipartite double cover of another graph, it is necessary and sufficient that the automorphisms of G include an involution that maps each vertex to a distinct and non-adjacent vertex.[4] For instance, the graph with two vertices and one edge is bipartite but is not a bipartite double cover, because it has no non-adjacent pairs of vertices to be mapped to each other by such an involution; on the other hand, the graph of the cube is a bipartite double cover, and has an involution that maps each vertex to the diametrically opposite vertex. An alternative characterization of the bipartite graphs that may be formed by the bipartite double cover construction was obtained by Sampathkumar (1975).
In aconnected graphthat is not bipartite, only one double cover is bipartite, but when the graph is bipartite or disconnected there may be more than one. For this reason,Tomaž Pisanskihas argued that the name "bipartite double cover" should be deprecated in favor of the "canonical double cover" or "Kronecker cover", names which are unambiguous.[5]
In general, a graph may have multiple double covers that are different from the bipartite double cover.[6]
In the following figure, the graphCis a double cover of the graphH:
However,Cis not thebipartite double coverofHor any other graph; it is not a bipartite graph.
If we replace one triangle by a square in H, the resulting graph has four distinct double covers. Two of them are bipartite, but only one of them is the Kronecker cover.
As another example, the graph of theicosahedronis a double cover of the complete graphK6; to obtain a covering map from the icosahedron toK6, map each pair of opposite vertices of the icosahedron to a single vertex ofK6. However, the icosahedron is not bipartite, so it is not the bipartite double cover ofK6. Instead, it can be obtained as theorientable double coverof anembeddingofK6on theprojective plane.
The double covers of a graph correspond to the different ways to sign the edges of the graph.
Inmathematics, acovering groupof atopological groupHis acovering spaceGofHsuch thatGis a topological group and the covering mapp:G→His acontinuousgroup homomorphism. The mappis called thecovering homomorphism. A frequently occurring case is adouble covering group, atopological double coverin whichHhasindex2 inG; examples include thespin groups,pin groups, andmetaplectic groups.
Roughly explained, saying that for example the metaplectic group Mp2nis adouble coverof thesymplectic groupSp2nmeans that there are always two elements in the metaplectic group representing one element in the symplectic group.
LetGbe a covering group ofH. ThekernelKof the covering homomorphism is just the fiber over the identity inHand is adiscretenormal subgroupofG. The kernelKisclosedinGif and only ifGisHausdorff(and if and only ifHis Hausdorff). Going in the other direction, ifGis any topological group andKis a discrete normal subgroup ofGthen the quotient mapp:G→G/Kis a covering homomorphism.
If G is connected then K, being a discrete normal subgroup, necessarily lies in the center of G and is therefore abelian. In this case, the center of H = G/K is given by Z(H) ≅ Z(G)/K.
As with all covering spaces, thefundamental groupofGinjects into the fundamental group ofH. Since the fundamental group of a topological group is always abelian, every covering group is a normal covering space. In particular, ifGispath-connectedthen thequotient groupπ1(H) /π1(G)is isomorphic toK. The groupKactssimply transitively on the fibers (which are just leftcosets) by right multiplication. The groupGis then aprincipalK-bundleoverH.
IfGis a covering group ofHthen the groupsGandHarelocally isomorphic. Moreover, given any two connected locally isomorphic groupsH1andH2, there exists a topological groupGwith discrete normal subgroupsK1andK2such thatH1is isomorphic toG/K1andH2is isomorphic toG/K2.
LetHbe a topological group and letGbe a covering space ofH. IfGandHare bothpath-connectedandlocally path-connected, then for any choice of elemente* in the fiber overe∈H, there exists a unique topological group structure onG, withe* as the identity, for which the covering mapp:G→His a homomorphism.
The construction is as follows. Letaandbbe elements ofGand letfandgbepathsinGstarting ate* and terminating ataandbrespectively. Define a pathh:I→Hbyh(t) =p(f(t))p(g(t)). By the path-lifting property of covering spaces there is a unique lift ofhtoGwith initial pointe*. The productabis defined as the endpoint of this path. By construction we havep(ab) =p(a)p(b). One must show that this definition is independent of the choice of pathsfandg, and also that the group operations are continuous.
Alternatively, the group law onGcan be constructed by lifting the group lawH×H→HtoG, using the lifting property of the covering mapG×G→H×H.
The non-connected case is interesting and is studied in the papers by Taylor and by Brown-Mucuk cited below. Essentially there is an obstruction to the existence of a universal cover that is also a topological group such that the covering map is a morphism: this obstruction lies in the third cohomology group of the group of components ofGwith coefficients in the fundamental group ofGat the identity.
IfHis a path-connected, locally path-connected, andsemilocally simply connectedgroup then it has auniversal cover. By the previous construction the universal cover can be made into a topological group with the covering map a continuous homomorphism. This group is called theuniversal covering groupofH. There is also a more direct construction, which we give below.
Let PH be the path group of H. That is, PH is the space of paths in H based at the identity together with the compact-open topology. The product of paths is given by pointwise multiplication, i.e. (fg)(t) = f(t)g(t). This gives PH the structure of a topological group. There is a natural group homomorphism PH → H that sends each path to its endpoint. The universal cover of H is given as the quotient of PH by the normal subgroup of null-homotopic loops. The projection PH → H descends to the quotient giving the covering map. One can show that the universal cover is simply connected and the kernel is just the fundamental group of H. That is, we have a short exact sequence 1 → π1(H) → ~H → H → 1,
where~His the universal cover ofH. Concretely, the universal covering group ofHis the space of homotopy classes of paths inHwith pointwise multiplication of paths. The covering map sends each path class to its endpoint.
As the above suggest, if a group has a universal covering group (if it is path-connected, locally path-connected, and semilocally simply connected), with discrete center, then the set of all topological groups that are covered by the universal covering group form a lattice, corresponding to the lattice of subgroups of the center of the universal covering group: inclusion of subgroups corresponds to covering of quotient groups. The maximal element is the universal covering group~H, while the minimal element is the universal covering group mod its center,~H/ Z(~H).
This corresponds algebraically to theuniversal perfect central extension(called "covering group", by analogy) as the maximal element, and a group mod its center as minimal element.
This is particularly important for Lie groups, as these groups are all the (connected) realizations of a particular Lie algebra. For many Lie groups the center is the group of scalar matrices, and thus the group mod its center is the projectivization of the Lie group. These covers are important in studyingprojective representationsof Lie groups, andspin representationslead to the discovery ofspin groups: a projective representation of a Lie group need not come from a linear representation of the group, but does come from a linear representation of some covering group, in particular the universal covering group. The finite analog led to the covering group or Schur cover, as discussed above.
A key example arises from SL2(R), which has center {±1} and fundamental group Z. It is a double cover of the centerless projective special linear group PSL2(R), which is obtained by taking the quotient by the center. By Iwasawa decomposition, both groups are circle bundles over the complex upper half-plane, and their universal cover {\displaystyle {\widetilde {\mathrm {SL} _{2}}}(\mathbf {R} )} is a real line bundle over the half-plane that forms one of Thurston's eight geometries. Since the half-plane is contractible, all bundle structures are trivial. The preimage of SL2(Z) in the universal cover is isomorphic to the braid group on three strands.
The above definitions and constructions all apply to the special case ofLie groups. In particular, every covering of amanifoldis a manifold, and the covering homomorphism becomes asmooth map. Likewise, given any discrete normal subgroup of a Lie group the quotient group is a Lie group and the quotient map is a covering homomorphism.
Two Lie groups are locally isomorphic if and only if their Lie algebras are isomorphic. This implies that a homomorphism φ : G → H of Lie groups is a covering homomorphism if and only if the induced map on Lie algebras φ∗ : 𝔤 → 𝔥
is an isomorphism.
Since for every Lie algebra 𝔤 there is a unique simply connected Lie group G with Lie algebra 𝔤, it follows that the universal covering group of a connected Lie group H is the (unique) simply connected Lie group G having the same Lie algebra as H.
Inmathematics, especially inorder theory, aGalois connectionis a particular correspondence (typically) between twopartially ordered sets(posets). Galois connections find applications in various mathematical theories. They generalize thefundamental theorem of Galois theoryabout the correspondence betweensubgroupsandsubfields, discovered by the French mathematicianÉvariste Galois.
A Galois connection can also be defined onpreordered setsorclasses; this article presents the common case of posets.
The literature contains two closely related notions of "Galois connection". In this article, we will refer to them as(monotone) Galois connectionsandantitone Galois connections.
A Galois connection is rather weak compared to anorder isomorphismbetween the involved posets, but every Galois connection gives rise to an isomorphism of certain sub-posets, as will be explained below.
The termGalois correspondenceis sometimes used to mean abijectiveGalois connection; this is simply anorder isomorphism(or dual order isomorphism, depending on whether we take monotone or antitone Galois connections).
Let (A, ≤) and (B, ≤) be two partially ordered sets. A monotone Galois connection between these posets consists of two monotone[1] functions, F : A → B and G : B → A, such that for all a in A and b in B, we have F(a) ≤ b if and only if a ≤ G(b).
In this situation,Fis called thelower adjointofGandGis called theupper adjointofF. Mnemonically, the upper/lower terminology refers to where the function application appears relative to ≤.[2]The term "adjoint" refers to the fact that monotone Galois connections are special cases of pairs ofadjoint functorsincategory theoryas discussed further below. Other terminology encountered here isleft adjoint(respectivelyright adjoint) for the lower (respectively upper) adjoint.
An essential property of a Galois connection is that an upper/lower adjoint of a Galois connection uniquely determines the other: F(a) is the least element b with a ≤ G(b), and G(b) is the largest element a with F(a) ≤ b.
A consequence of this is that ifForGisbijectivethen each is theinverseof the other, i.e.F=G−1.
Given a Galois connection with lower adjointFand upper adjointG, we can consider thecompositionsGF:A→A, known as the associatedclosure operator, andFG:B→B, known as the associated kernel operator. Both are monotone andidempotent, and we havea≤GF(a)for allainAandFG(b) ≤bfor allbinB.
AGalois insertionofBintoAis a Galois connection in which the kernel operatorFGis theidentityonB, and henceGis an order isomorphism ofBontothe set of closed elementsGF[A] ofA.[3]
The above definition is common in many applications today, and prominent in lattice and domain theory. However the original notion in Galois theory is slightly different. In this alternative definition, a Galois connection is a pair of antitone, i.e. order-reversing, functions F : A → B and G : B → A between two posets A and B, such that b ≤ F(a) if and only if a ≤ G(b).
The symmetry of F and G in this version erases the distinction between upper and lower, and the two functions are then called polarities rather than adjoints.[4] Each polarity uniquely determines the other, since F(a) is the largest element b with a ≤ G(b), and G(b) is the largest element a with b ≤ F(a).
The compositionsGF:A→AandFG:B→Bare the associated closure operators; they are monotone idempotent maps with the propertya≤GF(a)for allainAandb≤FG(b)for allbinB.
The implications of the two definitions of Galois connections are very similar, since an antitone Galois connection betweenAandBis just a monotone Galois connection betweenAand theorder dualBopofB. All of the below statements on Galois connections can thus easily be converted into statements about antitone Galois connections.
A pair of functions f : X → Y and g : Y → X, each the other's inverse, forms a (trivial) Galois connection, as follows. Because the equality relation is reflexive, transitive and antisymmetric, it is, trivially, a partial order, making (X, =) and (Y, =) partially ordered sets. Since f(x) = y if and only if x = g(y), we have a Galois connection.
A monotone Galois connection between Z, the set of integers, and R, the set of real numbers, each with its usual ordering, is given by the usual embedding function of the integers into the reals and the floor function truncating a real number to the greatest integer less than or equal to it. The embedding of integers is customarily done implicitly, but to show the Galois connection we make it explicit. So let F : Z → R denote the embedding function, with F(n) = n ∈ R, while G : R → Z denotes the floor function, so G(x) = ⌊x⌋. The equivalence F(n) ≤ x ⇔ n ≤ G(x) then translates to n ≤ x if and only if n ≤ ⌊x⌋.
This is valid because the variablen{\displaystyle n}is restricted to the integers. The well-known properties of the floor function, such as⌊x+n⌋=⌊x⌋+n,{\displaystyle \lfloor x+n\rfloor =\lfloor x\rfloor +n,}can be derived by elementary reasoning from this Galois connection.
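The defining equivalence, and one of the floor-function identities it yields, can be checked numerically; this Python sketch (ours, not from the article) verifies them on a small grid:

```python
import math

# Defining equivalence F(n) <= x  <=>  n <= G(x), with F the embedding
# Z -> R and G the floor function, checked exhaustively on a grid.
for n in range(-5, 6):
    for x in [k / 4 for k in range(-20, 21)]:
        assert (n <= x) == (n <= math.floor(x))

# A property derivable from the connection: floor(x + n) = floor(x) + n
# for integer n.
assert math.floor(2.7 + 3) == math.floor(2.7) + 3
```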
The dual orderings give another monotone Galois connection, now with the ceiling function: x ≤ n if and only if ⌈x⌉ ≤ n.
For an order-theoretic example, letUbe someset, and letAandBboth be thepower setofU, ordered byinclusion. Pick a fixedsubsetLofU. Then the mapsFandG, whereF(M) =L∩M, andG(N) =N∪ (U\L), form a monotone Galois connection, withFbeing the lower adjoint. A similar Galois connection whose lower adjoint is given by the meet (infimum) operation can be found in anyHeyting algebra. Especially, it is present in anyBoolean algebra, where the two mappings can be described byF(x) = (a∧x)andG(y) = (y∨ ¬a) = (a⇒y). Inlogicalterms: "implication froma" is the upper adjoint of "conjunction witha".
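The power-set adjunction F(M) = L ∩ M, G(N) = N ∪ (U \ L) can be verified exhaustively over a small universe; the Python sketch below is ours, not from the article:

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

U = frozenset({1, 2, 3})
L = frozenset({1, 2})
F = lambda M: L & M            # lower adjoint: meet with L
G = lambda N: N | (U - L)      # upper adjoint

# Galois condition: F(M) ⊆ N  ⇔  M ⊆ G(N), for all M, N ⊆ U.
for M in subsets(U):
    for N in subsets(U):
        assert (F(M) <= N) == (M <= G(N))
```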
Further interesting examples for Galois connections are described in the article oncompleteness properties. Roughly speaking, the usual functions ∨ and ∧ are lower and upper adjoints to thediagonal mapX→X×X. The least and greatest elements of a partial order are given by lower and upper adjoints to the unique functionX→ {1}.Going further, evencomplete latticescan be characterized by the existence of suitable adjoints. These considerations give some impression of the ubiquity of Galois connections in order theory.
Let G act transitively on X and pick some point x in X. Consider ℬ, the set of blocks containing x. Further, let 𝒢 consist of the subgroups of G containing the stabilizer of x.

Then, the correspondence ℬ → 𝒢, B ↦ G_B = {g ∈ G : gB = B}, sending each block to its setwise stabilizer,
is a monotone,one-to-oneGalois connection.[5]As acorollary, one can establish that doubly transitive actions have no blocks other than the trivial ones (singletons or the whole ofX): this follows from the stabilizers being maximal inGin that case. SeeDoubly transitive groupfor further discussion.
Iff:X→Yis afunction, then for any subsetMofXwe can form theimageF(M) =fM= {f(m) |m∈M}and for any subsetNofYwe can form theinverse imageG(N) =f−1N= {x∈X|f(x) ∈N}.ThenFandGform a monotone Galois connection between the power set ofXand the power set ofY, both ordered by inclusion ⊆. There is a further adjoint pair in this situation: for a subsetMofX, defineH(M) = {y∈Y|f−1{y} ⊆M}.ThenGandHform a monotone Galois connection between the power set ofYand the power set ofX. In the first Galois connection,Gis the upper adjoint, while in the second Galois connection it serves as the lower adjoint.
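Both adjunctions in the preceding paragraph can be verified exhaustively for a small function; this Python sketch (ours, not from the article) checks that image is lower adjoint to inverse image, and inverse image is lower adjoint to H:

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

X, Y = frozenset({1, 2, 3}), frozenset({"a", "b"})
f = {1: "a", 2: "a", 3: "b"}

F = lambda M: frozenset(f[m] for m in M)               # image fM
G = lambda N: frozenset(x for x in X if f[x] in N)     # inverse image f^-1 N
H = lambda M: frozenset(y for y in Y
                        if all(f[x] != y or x in M for x in X))  # f^-1{y} ⊆ M

for M in subsets(X):
    for N in subsets(Y):
        assert (F(M) <= N) == (M <= G(N))   # image ⊣ inverse image
        assert (G(N) <= M) == (N <= H(M))   # inverse image ⊣ H
```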
In the case of aquotient mapbetween algebraic objects (such asgroups), this connection is called thelattice theorem: subgroups ofGconnect to subgroups ofG/N, and the closure operator on subgroups ofGis given byH=HN.
Pick some mathematical objectXthat has anunderlying set, for instance a group,ring,vector space, etc. For any subsetSofX, letF(S)be the smallestsubobjectofXthat containsS, i.e. thesubgroup,subringorsubspacegenerated byS. For any subobjectUofX, letG(U)be the underlying set ofU. (We can even takeXto be atopological space, letF(S)theclosureofS, and take as "subobjects ofX"theclosed subsetsofX.) NowFandGform a monotone Galois connection between subsets ofXand subobjects ofX, if both are ordered by inclusion.Fis the lower adjoint.
A very general comment ofWilliam Lawvere[6]is thatsyntax and semanticsare adjoint: takeAto be the set of alllogical theories(axiomatizations) reverse ordered by strength, andBthe power set of the set of all mathematical structures. For a theoryT∈A, letMod(T)be the set of all structures that satisfy theaxiomsT; for a set of mathematical structuresS∈B, letTh(S)be the minimum of the axiomatizations that approximateS(infirst-order logic, this is the set of sentences that are true in all structures inS). We can then say thatSis a subset ofMod(T)if and only ifTh(S)logically entailsT: the "semantics functor"Modand the "syntax functor"Thform a monotone Galois connection, with semantics being the upper adjoint.
The motivating example comes from Galois theory: supposeL/Kis afield extension. LetAbe the set of all subfields ofLthat containK, ordered by inclusion ⊆. IfEis such a subfield, writeGal(L/E)for the group offield automorphismsofLthat holdEfixed. LetBbe the set of subgroups ofGal(L/K), ordered by inclusion ⊆. For such a subgroupG, defineFix(G)to be the field consisting of all elements ofLthat are held fixed by all elements ofG. Then the mapsE↦ Gal(L/E)andG↦ Fix(G)form an antitone Galois connection.
Analogously, given apath-connectedtopological spaceX, there is an antitone Galois connection between subgroups of thefundamental groupπ1(X)and path-connectedcovering spacesofX. In particular, ifXissemi-locally simply connected, then for every subgroupGofπ1(X), there is a covering space withGas its fundamental group.
Given aninner product spaceV, we can form theorthogonal complementF(X)of any subspaceXofV. This yields an antitone Galois connection between the set of subspaces ofVand itself, ordered by inclusion; both polarities are equal toF.
Given avector spaceVand a subsetXofVwe can define its annihilatorF(X), consisting of all elements of thedual spaceV∗ofVthat vanish onX. Similarly, given a subsetYofV∗, we define its annihilatorG(Y) = {x∈V|φ(x) = 0 ∀φ∈Y}.This gives an antitone Galois connection between the subsets ofVand the subsets ofV∗.
Inalgebraic geometry, the relation between sets ofpolynomialsand their zero sets is an antitone Galois connection.
Fix a natural number n and a field K and let A be the set of all subsets of the polynomial ring K[X1, ..., Xn] ordered by inclusion ⊆, and let B be the set of all subsets of Kn ordered by inclusion ⊆. If S is a set of polynomials, define the variety of zeros as V(S) = {x ∈ Kn : f(x) = 0 for all f ∈ S},
the set of common zeros of the polynomials in S. If U is a subset of Kn, define I(U) as the ideal of polynomials vanishing on U, that is I(U) = {f ∈ K[X1, ..., Xn] : f(x) = 0 for all x ∈ U}.
ThenVandIform an antitone Galois connection.
The closure onKnis the closure in theZariski topology, and if the fieldKisalgebraically closed, then the closure on the polynomial ring is theradicalof ideal generated byS.
More generally, given acommutative ringR(not necessarily a polynomial ring), there is an antitone Galois connection between radical ideals in the ring and Zariski closed subsets of theaffine varietySpec(R).
More generally, there is an antitone Galois connection between ideals in the ring andsubschemesof the correspondingaffine variety.
SupposeXandYare arbitrary sets and abinary relationRoverXandYis given. For any subsetMofX, we defineF(M) = {y∈Y|mRy∀m∈M}.Similarly, for any subsetNofY, defineG(N) = {x∈X|xRn∀n∈N}.ThenFandGyield an antitone Galois connection between the power sets ofXandY, both ordered by inclusion ⊆.[7]
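The pair of polarity operators induced by a binary relation, and the antitone Galois condition they satisfy, can be checked exhaustively for a small relation; the Python sketch below is ours (the relation R is an arbitrary example):

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

X = {1, 2, 3}
Y = {"a", "b", "c"}
R = {(1, "a"), (1, "b"), (2, "b"), (3, "c")}   # an arbitrary example relation

# The polarities of formal concept analysis:
F = lambda M: {y for y in Y if all((m, y) in R for m in M)}
G = lambda N: {x for x in X if all((x, n) in R for n in N)}

# Antitone Galois condition: N ⊆ F(M)  ⇔  M ⊆ G(N).
for M in subsets(X):
    for N in subsets(Y):
        assert (N <= F(M)) == (M <= G(N))
```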
Up to isomorphism, all antitone Galois connections between power sets arise in this way. This follows from the "Basic Theorem on Concept Lattices".[8] Theory and applications of Galois connections arising from binary relations are studied in formal concept analysis; that field uses Galois connections for mathematical data analysis. Many algorithms for Galois connections can be found in the respective literature.[9]
The general concept lattice in its primitive version incorporates both the monotone and antitone Galois connections to furnish the upper and lower bounds of nodes for the concept lattice, respectively.[10]
In the following, we consider a (monotone) Galois connection f = (f_*, f^*), where f_* : A → B is the lower adjoint as introduced above. Some helpful and instructive basic properties can be obtained immediately. By the defining property of Galois connections, f_*(x) ≤ f_*(x) is equivalent to x ≤ f^*(f_*(x)), for all x in A. By a similar reasoning (or just by applying the duality principle for order theory), one finds that f_*(f^*(y)) ≤ y, for all y in B. These properties can be described by saying that the composite f_* ∘ f^* is deflationary, while f^* ∘ f_* is inflationary (or extensive).
Now consider x, y ∈ A such that x ≤ y. Then using the above one obtains x ≤ f^*(f_*(y)). Applying the basic property of Galois connections, one can now conclude that f_*(x) ≤ f_*(y). But this just shows that f_* preserves the order of any two elements, i.e. it is monotone. Again, a similar reasoning yields monotonicity of f^*. Thus monotonicity does not have to be included in the definition explicitly. However, mentioning monotonicity helps to avoid confusion about the two alternative notions of Galois connections.
Another basic property of Galois connections is the fact that f_*(f^*(f_*(x))) = f_*(x), for all x in A. Clearly we find that
f_*(x) ≤ f_*(f^*(f_*(x))),
because f^* ∘ f_* is inflationary as shown above and f_* is monotone. On the other hand, since f_* ∘ f^* is deflationary, applying it to the element f_*(x) of B yields
f_*(f^*(f_*(x))) ≤ f_*(x).
This shows the desired equality. Furthermore, we can use this property to conclude that
f^*(f_*(f^*(f_*(x)))) = f^*(f_*(x))
and
f_*(f^*(f_*(f^*(y)))) = f_*(f^*(y)),
i.e., f^* ∘ f_* and f_* ∘ f^* are idempotent.
It can be shown (see Blyth or Erné for proofs) that a function f is a lower (respectively upper) adjoint if and only if f is a residuated mapping (respectively residual mapping). Therefore, the notions of residuated mapping and monotone Galois connection are essentially the same.
The above findings can be summarized as follows: for a Galois connection, the composite f^* ∘ f_* is monotone (being the composite of monotone functions), inflationary, and idempotent. This states that f^* ∘ f_* is in fact a closure operator on A. Dually, f_* ∘ f^* is monotone, deflationary, and idempotent. Such mappings are sometimes called kernel operators. In the context of frames and locales, the composite f^* ∘ f_* is called the nucleus induced by f. Nuclei induce frame homomorphisms; a subset of a locale is called a sublocale if it is given by a nucleus.
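A concrete monotone instance makes these composites tangible: for any function h : X → Y, the direct image is lower adjoint to the preimage between the power sets (h(S) ⊆ T iff S ⊆ h⁻¹(T)). The sketch below, a finite example of my own rather than one from the text, shows the closure and kernel behaviour of the two composites.

```python
# Image/preimage adjunction: h(S) ⊆ T  iff  S ⊆ h^{-1}(T).
# The example function h is illustrative only.

def image(h, S):
    """Lower adjoint: direct image of S under h."""
    return {h[x] for x in S}

def preimage(h, T, X):
    """Upper adjoint: preimage of T under h."""
    return {x for x in X if h[x] in T}

X = {0, 1, 2, 3}
h = {0: 'a', 1: 'a', 2: 'b', 3: 'c'}      # a function X -> Y as a dict

S = {0}
closure = preimage(h, image(h, S), X)      # upper ∘ lower on subsets of X
print(closure)                             # {0, 1}: contains S (inflationary)
print(preimage(h, image(h, closure), X) == closure)   # True (idempotent)

T = {'a', 'd'}
kernel = image(h, preimage(h, T, X))       # lower ∘ upper on subsets of Y
print(kernel)                              # {'a'}: contained in T (deflationary)
```

The kernel operator here simply intersects T with the range of h, illustrating how the composite "discards" what the connection cannot see.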
Conversely, any closure operator c on some poset A gives rise to a Galois connection with lower adjoint f_* being just the corestriction of c to the image of c (i.e. as a surjective mapping onto the closure system c(A)). The upper adjoint f^* is then given by the inclusion of c(A) into A, which maps each closed element to itself, considered as an element of A. In this way, closure operators and Galois connections are seen to be closely related, each specifying an instance of the other. Similar conclusions hold true for kernel operators.
The above considerations also show that closed elements of A (elements x with f^*(f_*(x)) = x) are mapped by f_* to elements within the range of the kernel operator f_* ∘ f^*, and vice versa.
Another important property of Galois connections is that lower adjoints preserve all suprema that exist within their domain. Dually, upper adjoints preserve all existing infima. From these properties, one can also conclude monotonicity of the adjoints immediately. The adjoint functor theorem for order theory states that the converse implication is also valid in certain cases: especially, any mapping between complete lattices that preserves all suprema is the lower adjoint of a Galois connection.
In this situation, an important feature of Galois connections is that one adjoint uniquely determines the other. Hence one can strengthen the above statement to guarantee that any supremum-preserving map between complete lattices is the lower adjoint of a unique Galois connection. The main property for deriving this uniqueness is the following: for every x in A, f_*(x) is the least element y of B such that x ≤ f^*(y). Dually, for every y in B, f^*(y) is the greatest x in A such that f_*(x) ≤ y. The existence of a certain Galois connection now implies the existence of the respective least or greatest elements, no matter whether the corresponding posets satisfy any completeness properties. Thus, when one adjoint of a Galois connection is given, the other adjoint can be defined via this same property.
On the other hand, a monotone function f is a lower adjoint if and only if each set of the form {x ∈ A | f(x) ≤ b}, for b in B, contains a greatest element. Again, this can be dualized for the upper adjoint.
Galois connections also provide an interesting class of mappings between posets which can be used to obtain categories of posets. In particular, it is possible to compose Galois connections: given Galois connections (f_*, f^*) between posets A and B and (g_*, g^*) between B and C, the composite (g_* ∘ f_*, f^* ∘ g^*) is also a Galois connection. When considering categories of complete lattices, this can be simplified to considering just mappings that preserve all suprema (or, alternatively, infima). Mapping complete lattices to their duals, these categories display autodualities that are quite fundamental for obtaining other duality theorems. More special kinds of morphisms that induce adjoint mappings in the other direction are the morphisms usually considered for frames (or locales).
Every partially ordered set can be viewed as a category in a natural way: there is a unique morphism from x to y if and only if x ≤ y. A monotone Galois connection is then nothing but a pair of adjoint functors between two categories that arise from partially ordered sets. In this context, the upper adjoint is the right adjoint while the lower adjoint is the left adjoint. However, this terminology is avoided for Galois connections, since there was a time when posets were transformed into categories in a dual fashion, i.e. with morphisms pointing in the opposite direction. This led to a complementary notation concerning left and right adjoints, which today is ambiguous.
Galois connections may be used to describe many forms of abstraction in the theory of abstract interpretation of programming languages.[11][12]
The following books and survey articles include Galois connections using the monotone definition:
Some publications using the original (antitone) definition:
In topology and related areas of mathematics, the quotient space of a topological space under a given equivalence relation is a new topological space constructed by endowing the quotient set of the original topological space with the quotient topology, that is, with the finest topology that makes the canonical projection map (the function that maps points to their equivalence classes) continuous. In other words, a subset of a quotient space is open if and only if its preimage under the canonical projection map is open in the original topological space.
Intuitively speaking, the points of each equivalence class are identified or "glued together" to form a new topological space. For example, identifying the points of a sphere that belong to the same diameter produces the projective plane as a quotient space.
Let X be a topological space, and let ∼ be an equivalence relation on X. The quotient set Y = X/∼ is the set of equivalence classes of elements of X. The equivalence class of x ∈ X is denoted [x].
The construction of Y defines a canonical surjection q : X ∋ x ↦ [x] ∈ Y. As discussed below, q is a quotient mapping, commonly called the canonical quotient map, or canonical projection map, associated to X/∼.
The quotient space under ∼ is the set Y equipped with the quotient topology, whose open sets are those subsets U ⊆ Y whose preimage q⁻¹(U) is open. In other words, U is open in the quotient topology on X/∼ if and only if {x ∈ X : [x] ∈ U} is open in X. Similarly, a subset S ⊆ Y is closed if and only if {x ∈ X : [x] ∈ S} is closed in X.
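On a finite space this definition can be computed directly: a set of classes is open exactly when its preimage is open. A minimal sketch follows; the space, its topology, and the gluing are invented for illustration.

```python
from itertools import chain, combinations

# Glue 0~1 and 2~3 in X = {0, 1, 2, 3}.
classes = [frozenset({0, 1}), frozenset({2, 3})]
q = {x: c for c in classes for x in c}          # canonical projection

X = {0, 1, 2, 3}
# A topology on X (a nested chain, so it is closed under unions/intersections).
topology_X = {frozenset(), frozenset({0}), frozenset({0, 1}), frozenset(X)}

def preimage(U):
    """q^{-1}(U) for a set U of equivalence classes."""
    return frozenset(x for x in X if q[x] in U)

# U ⊆ X/~ is open  iff  q^{-1}(U) is open in X.
subsets = chain.from_iterable(
    combinations(classes, r) for r in range(len(classes) + 1))
quotient_topology = [frozenset(U) for U in subsets
                     if preimage(frozenset(U)) in topology_X]
print(len(quotient_topology))   # 3
```

Here {{2,3}} fails the test (its preimage {2, 3} is not open), so the quotient is a two-point space with only three open sets, the Sierpiński space.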
The quotient topology is the final topology on the quotient set, with respect to the map x ↦ [x].
A map f : X → Y is a quotient map (sometimes called an identification map[1]) if it is surjective and Y is equipped with the final topology induced by f. The latter condition admits two more elementary formulations: a subset V ⊆ Y is open if and only if f⁻¹(V) is open, and V is closed if and only if f⁻¹(V) is closed. Every quotient map is continuous, but not every continuous map is a quotient map.
Saturated sets
A subset S of X is called saturated (with respect to f) if it is of the form S = f⁻¹(T) for some set T, which is true if and only if f⁻¹(f(S)) = S. The assignment T ↦ f⁻¹(T) establishes a one-to-one correspondence (whose inverse is S ↦ f(S)) between subsets T of Y = f(X) and saturated subsets of X. With this terminology, a surjection f : X → Y is a quotient map if and only if, for every saturated subset S of X, S is open in X if and only if f(S) is open in Y. In particular, open subsets of X that are not saturated have no impact on whether the function f is a quotient map (or, indeed, continuous: a function f : X → Y is continuous if and only if, for every saturated S ⊆ X such that f(S) is open in f(X), the set S is open in X).
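The saturation condition S = f⁻¹(f(S)) is directly checkable on finite examples. A small sketch, with a map f invented for illustration:

```python
# Check whether S is saturated with respect to f, i.e. S = f^{-1}(f(S)).
# The map f here is an arbitrary illustrative example.

def saturate(f, S, X):
    """Return f^{-1}(f(S)), the smallest saturated set containing S."""
    fS = {f[x] for x in S}
    return {x for x in X if f[x] in fS}

def is_saturated(f, S, X):
    return saturate(f, S, X) == set(S)

X = {0, 1, 2, 3}
f = {0: 'a', 1: 'a', 2: 'b', 3: 'c'}

print(is_saturated(f, {0, 1}, X))   # True:  {0, 1} = f^{-1}({'a'})
print(is_saturated(f, {0}, X))      # False: f^{-1}(f({0})) = {0, 1}
print(saturate(f, {0}, X))          # {0, 1}
```

Saturated sets are exactly the unions of fibers of f, which is why only they matter for the quotient-map property.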
Indeed, if τ is a topology on X and f : X → Y is any map, then the set τ_f of all U ∈ τ that are saturated subsets of X forms a topology on X. If Y is also a topological space, then f : (X, τ) → Y is a quotient map (respectively, continuous) if and only if the same is true of f : (X, τ_f) → Y.
Quotient space of fibers characterization
Given an equivalence relation ∼ on X, denote the equivalence class of a point x ∈ X by [x] := {z ∈ X : z ∼ x} and let X/∼ := {[x] : x ∈ X} denote the set of equivalence classes. The map q : X → X/∼ that sends points to their equivalence classes (that is, defined by q(x) := [x] for every x ∈ X) is called the canonical map. It is a surjective map, and for all a, b ∈ X, a ∼ b if and only if q(a) = q(b); consequently, q(x) = q⁻¹(q(x)) for all x ∈ X. In particular, this shows that the set of equivalence classes X/∼ is exactly the set of fibers of the canonical map q. If X is a topological space, giving X/∼ the quotient topology induced by q makes it into a quotient space and makes q : X → X/∼ into a quotient map. Up to a homeomorphism, this construction is representative of all quotient spaces; the precise meaning of this is now explained.
Let f : X → Y be a surjection between topological spaces (not yet assumed to be continuous or a quotient map) and declare, for all a, b ∈ X, that a ∼ b if and only if f(a) = f(b). Then ∼ is an equivalence relation on X such that for every x ∈ X, [x] = f⁻¹(f(x)), which implies that f([x]) (defined by f([x]) = {f(z) : z ∈ [x]}) is a singleton set; denote the unique element of f([x]) by f̂([x]) (so by definition, f([x]) = {f̂([x])}).
The assignment [x] ↦ f̂([x]) defines a bijection f̂ : X/∼ → Y between the fibers of f and the points of Y. Define the map q : X → X/∼ as above (by q(x) := [x]) and give X/∼ the quotient topology induced by q (which makes q a quotient map). These maps are related by f = f̂ ∘ q and q = f̂⁻¹ ∘ f. From this and the fact that q : X → X/∼ is a quotient map, it follows that f : X → Y is continuous if and only if this is true of f̂ : X/∼ → Y. Furthermore, f : X → Y is a quotient map if and only if f̂ : X/∼ → Y is a homeomorphism (or equivalently, if and only if both f̂ and its inverse are continuous).
A hereditarily quotient map is a surjective map f : X → Y with the property that for every subset T ⊆ Y, the restriction f|_{f⁻¹(T)} : f⁻¹(T) → T is also a quotient map.
There exist quotient maps that are not hereditarily quotient.
Quotient maps q : X → Y are characterized among surjective maps by the following property: if Z is any topological space and f : Y → Z is any function, then f is continuous if and only if f ∘ q is continuous.
The quotient space X/∼ together with the quotient map q : X → X/∼ is characterized by the following universal property: if g : X → Z is a continuous map such that a ∼ b implies g(a) = g(b) for all a, b ∈ X, then there exists a unique continuous map f : X/∼ → Z such that g = f ∘ q. In other words, the following diagram commutes:
One says that g descends to the quotient to express this, that is, that it factors through the quotient space. The continuous maps defined on X/∼ are therefore precisely those maps that arise from continuous maps defined on X that respect the equivalence relation (in the sense that they send equivalent elements to the same image). This criterion is used copiously when studying quotient spaces.
Given a continuous surjection q : X → Y, it is useful to have criteria by which one can determine whether q is a quotient map. Two sufficient criteria are that q be open or closed. Note that these conditions are only sufficient, not necessary. It is easy to construct examples of quotient maps that are neither open nor closed. For topological groups, the quotient map is open.
In set theory, an ordinal number, or ordinal, is a generalization of ordinal numerals (first, second, nth, etc.) aimed at extending enumeration to infinite sets.[1]
A finite set can be enumerated by successively labeling each element with the least natural number that has not been previously used. To extend this process to various infinite sets, ordinal numbers are defined more generally using linearly ordered Greek letter variables that include the natural numbers and have the property that every set of ordinals has a least or "smallest" element (this is needed for giving a meaning to "the least unused element").[2] This more general definition allows us to define an ordinal number ω (omega) to be the least element that is greater than every natural number, along with ordinal numbers ω + 1, ω + 2, etc., which are even greater than ω.
A linear order such that every non-empty subset has a least element is called a well-order. The axiom of choice implies that every set can be well-ordered, and given two well-ordered sets, one is isomorphic to an initial segment of the other. So ordinal numbers exist and are essentially unique.
Ordinal numbers are distinct from cardinal numbers, which measure the size of sets. Although the distinction between ordinals and cardinals is not always apparent on finite sets (one can go from one to the other just by counting labels), they are very different in the infinite case, where different infinite ordinals can correspond to sets having the same cardinal. Like other kinds of numbers, ordinals can be added, multiplied, and exponentiated, although none of these operations is commutative.
Ordinals were introduced by Georg Cantor in 1883[3] in order to accommodate infinite sequences and classify derived sets, which he had previously introduced in 1872 while studying the uniqueness of trigonometric series.[4]
A natural number (which, in this context, includes the number 0) can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. When restricted to finite sets, these two concepts coincide, since all linear orders of a finite set are isomorphic.
When dealing with infinite sets, however, one has to distinguish between the notion of size, which leads to cardinal numbers, and the notion of position, which leads to the ordinal numbers described here. This is because while any set has only one size (its cardinality), there are many nonisomorphic well-orderings of any infinite set, as explained below.
Whereas the notion of cardinal number is associated with a set with no particular structure on it, the ordinals are intimately linked with the special kind of sets that are called well-ordered. A well-ordered set is a totally ordered set (an ordered set such that, given two distinct elements, one is less than the other) in which every non-empty subset has a least element. Equivalently, assuming the axiom of dependent choice, it is a totally ordered set without any infinite decreasing sequence — though there may be infinite increasing sequences. Ordinals may be used to label the elements of any given well-ordered set (the smallest element being labelled 0, the one after that 1, the next one 2, "and so on"), and to measure the "length" of the whole set by the least ordinal that is not a label for an element of the set. This "length" is called the order type of the set.
Any ordinal is defined by the set of ordinals that precede it. In fact, the most common definition of ordinals identifies each ordinal as the set of ordinals that precede it. For example, the ordinal 42 is generally identified as the set {0, 1, 2, ..., 41}. Conversely, any set S of ordinals that is downward closed — meaning that for any ordinal α in S and any ordinal β < α, β is also in S — is (or can be identified with) an ordinal.
This definition of ordinals in terms of sets allows for infinite ordinals. The smallest infinite ordinal is ω, which can be identified with the set of natural numbers (so that the ordinal associated with every natural number precedes ω). Indeed, the set of natural numbers is well-ordered — as is any set of ordinals — and since it is downward closed, it can be identified with the ordinal associated with it.
Perhaps a clearer intuition of ordinals can be formed by examining the first few of them: as mentioned above, they start with the natural numbers, 0, 1, 2, 3, 4, 5, ... After all natural numbers comes the first infinite ordinal, ω, and after that come ω+1, ω+2, ω+3, and so on. (Exactly what addition means will be defined later on: just consider them as names.) After all of these come ω·2 (which is ω+ω), ω·2+1, ω·2+2, and so on, then ω·3, and then later on ω·4. Now the set of ordinals formed in this way (the ω·m+n, where m and n are natural numbers) must itself have an ordinal associated with it: and that is ω². Further on, there will be ω³, then ω⁴, and so on, and ω^ω, then ω^ω^ω, then later ω^ω^ω^ω, and even later ε₀ (epsilon nought) (to give a few examples of relatively small — countable — ordinals). This can be continued indefinitely (as every time one says "and so on" when enumerating ordinals, it defines a larger ordinal). The smallest uncountable ordinal is the set of all countable ordinals, expressed as ω₁ or Ω.[5]
In a well-ordered set, every non-empty subset contains a distinct smallest element. Given the axiom of dependent choice, this is equivalent to saying that the set is totally ordered and there is no infinite decreasing sequence (the latter being easier to visualize). In practice, the importance of well-ordering is justified by the possibility of applying transfinite induction, which says, essentially, that any property that passes on from the predecessors of an element to that element itself must be true of all elements (of the given well-ordered set). If the states of a computation (computer program or game) can be well-ordered — in such a way that each step is followed by a "lower" step — then the computation will terminate.
It is inappropriate to distinguish between two well-ordered sets if they only differ in the "labeling of their elements", or more formally: if the elements of the first set can be paired off with the elements of the second set such that if one element is smaller than another in the first set, then the partner of the first element is smaller than the partner of the second element in the second set, and vice versa. Such a one-to-one correspondence is called an order isomorphism, and the two well-ordered sets are said to be order-isomorphic or similar (with the understanding that this is an equivalence relation).
Formally, if a partial order ≤ is defined on the set S, and a partial order ≤′ is defined on the set S′, then the posets (S, ≤) and (S′, ≤′) are order isomorphic if there is a bijection f that preserves the ordering. That is, f(a) ≤′ f(b) if and only if a ≤ b. Provided there exists an order isomorphism between two well-ordered sets, the order isomorphism is unique: this makes it quite justifiable to consider the two sets as essentially identical, and to seek a "canonical" representative of the isomorphism type (class). This is exactly what the ordinals provide, and it also provides a canonical labeling of the elements of any well-ordered set. Every well-ordered set (S, <) is order-isomorphic to the set of ordinals less than one specific ordinal number under their natural ordering. This canonical set is the order type of (S, <).
Essentially, an ordinal is intended to be defined as an isomorphism class of well-ordered sets: that is, as an equivalence class for the equivalence relation of "being order-isomorphic". There is a technical difficulty involved, however, in the fact that the equivalence class is too large to be a set in the usual Zermelo–Fraenkel (ZF) formalization of set theory. But this is not a serious difficulty. The ordinal can be said to be the order type of any set in the class.
The original definition of ordinal numbers, found for example in the Principia Mathematica, defines the order type of a well-ordering as the set of all well-orderings similar (order-isomorphic) to that well-ordering: in other words, an ordinal number is genuinely an equivalence class of well-ordered sets. This definition must be abandoned in ZF and related systems of axiomatic set theory because these equivalence classes are too large to form a set. However, this definition still can be used in type theory and in Quine's axiomatic set theory New Foundations and related systems (where it affords a rather surprising alternative solution to the Burali-Forti paradox of the largest ordinal).
Rather than defining an ordinal as an equivalence class of well-ordered sets, it will be defined as a particular well-ordered set that (canonically) represents the class. Thus, an ordinal number will be a well-ordered set; and every well-ordered set will be order-isomorphic to exactly one ordinal number.
For each well-ordered set T, a ↦ T_{<a} defines an order isomorphism between T and the set of all subsets of T having the form T_{<a} := {x ∈ T | x < a}, ordered by inclusion. This motivates the standard definition, suggested by John von Neumann at the age of 19, now called the definition of von Neumann ordinals: "each ordinal is the well-ordered set of all smaller ordinals". In symbols, λ = [0, λ).[6][7] Formally:
The natural numbers are thus ordinals by this definition. For instance, 2 is an element of 4 = {0, 1, 2, 3}, and 2 is equal to {0, 1} and so it is a subset of {0, 1, 2, 3}.
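The finite von Neumann ordinals can be built literally as nested sets; Python's frozenset makes the membership and proper-subset checks from the text direct. A small sketch (the helper name ordinal is my own):

```python
# Finite von Neumann ordinals: n = {0, 1, ..., n-1} as nested frozensets.

def ordinal(n):
    """Return the von Neumann ordinal n."""
    smaller = []                      # smaller[k] is the ordinal k
    for _ in range(n):
        smaller.append(frozenset(smaller))
    return frozenset(smaller)

two, four = ordinal(2), ordinal(4)
print(ordinal(0) == frozenset())     # True: 0 is the empty set
print(two in four)                   # True: 2 ∈ 4
print(two < four)                    # True: 2 is a proper subset of 4
print(two == frozenset({ordinal(0), ordinal(1)}))   # True: 2 = {0, 1}
```

Since frozensets compare by value, membership and proper inclusion coincide here, mirroring the fact that for von Neumann ordinals S ∈ T iff S ⊊ T.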
It can be shown by transfinite induction that every well-ordered set is order-isomorphic to exactly one of these ordinals, that is, there is an order-preserving bijective function between them.
Furthermore, the elements of every ordinal are ordinals themselves. Given two ordinals S and T, S is an element of T if and only if S is a proper subset of T. Moreover, either S is an element of T, or T is an element of S, or they are equal. So every set of ordinals is totally ordered. Further, every set of ordinals is well-ordered. This generalizes the fact that every set of natural numbers is well-ordered.
Consequently, every ordinal S is a set having as elements precisely the ordinals smaller than S. For example, every set of ordinals has a supremum, the ordinal obtained by taking the union of all the ordinals in the set. This union exists regardless of the set's size, by the axiom of union.
The class of all ordinals is not a set. If it were a set, one could show that it was an ordinal and thus a member of itself, which would contradict its strict ordering by membership. This is the Burali-Forti paradox. The class of all ordinals is variously called "Ord", "ON", or "∞".
An ordinal is finite if and only if the opposite order is also well-ordered, which is the case if and only if each of its non-empty subsets has a greatest element.
There are other modern formulations of the definition of ordinal. For example, assuming the axiom of regularity, the following are equivalent for a set x:
These definitions cannot be used in non-well-founded set theories. In set theories with urelements, one has to further make sure that the definition excludes urelements from appearing in ordinals.
If α is any ordinal and X is a set, an α-indexed sequence of elements of X is a function from α to X. This concept, a transfinite sequence (if α is infinite) or ordinal-indexed sequence, is a generalization of the concept of a sequence. An ordinary sequence corresponds to the case α = ω, while a finite α corresponds to a tuple, a.k.a. string.
Transfinite induction holds in any well-ordered set, but it is so important in relation to ordinals that it is worth restating here.
That is, if P(α) is true whenever P(β) is true for all β < α, then P(α) is true for all α. Or, more practically: in order to prove a property P for all ordinals α, one can assume that it is already known for all smaller β < α.
Transfinite induction can be used not only to prove things, but also to define them. Such a definition is normally said to be by transfinite recursion – the proof that the result is well-defined uses transfinite induction. Let F denote a (class) function to be defined on the ordinals. The idea now is that, in defining F(α) for an unspecified ordinal α, one may assume that F(β) is already defined for all β < α and thus give a formula for F(α) in terms of these F(β). It then follows by transfinite induction that there is one and only one function satisfying the recursion formula up to and including α.
Here is an example of definition by transfinite recursion on the ordinals (more will be given later): define the function F by letting F(α) be the smallest ordinal not in the set {F(β) | β < α}, that is, the set consisting of all F(β) for β < α. This definition assumes the F(β) known in the very process of defining F; this apparent vicious circle is exactly what definition by transfinite recursion permits. In fact, F(0) makes sense since there is no ordinal β < 0, and the set {F(β) | β < 0} is empty. So F(0) is equal to 0 (the smallest ordinal of all). Now that F(0) is known, the definition applied to F(1) makes sense (it is the smallest ordinal not in the singleton set {F(0)} = {0}), and so on (the and so on is exactly transfinite induction). It turns out that this example is not very exciting, since provably F(α) = α for all ordinals α, which can be shown, precisely, by transfinite induction.
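Restricted to the finite ordinals, this recursion can be run directly, and it visibly produces F(n) = n. A small sketch (the helper name F_values is my own):

```python
# Finite analogue of the recursion above: F(n) is the least natural number
# not among the previously computed values {F(m) : m < n}.

def F_values(n):
    values = []                 # values[m] = F(m) for m < n
    for _ in range(n):
        k = 0
        while k in values:      # find the least unused natural number
            k += 1
        values.append(k)
    return values

print(F_values(6))   # [0, 1, 2, 3, 4, 5] -- indeed F(n) = n
```

The apparent circularity disappears exactly as in the text: each F(n) is computed only from the already-built list of earlier values.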
Any nonzero ordinal has a minimum element, zero. It may or may not have a maximum element. For example, 42 has maximum 41 and ω+6 has maximum ω+5. On the other hand, ω does not have a maximum since there is no largest natural number. If an ordinal has a maximum α, then it is the next ordinal after α, and it is called a successor ordinal, namely the successor of α, written α+1. In the von Neumann definition of ordinals, the successor of α is α ∪ {α}, since its elements are those of α and α itself.[6]
A nonzero ordinal that is not a successor is called a limit ordinal. One justification for this term is that a limit ordinal is the limit in a topological sense of all smaller ordinals (under the order topology).
When ⟨α_ι | ι < γ⟩ is an ordinal-indexed sequence, indexed by a limit γ, and the sequence is increasing, i.e. α_ι < α_ρ whenever ι < ρ, its limit is defined as the least upper bound of the set {α_ι | ι < γ}, that is, the smallest ordinal (it always exists) greater than any term of the sequence. In this sense, a limit ordinal is the limit of all smaller ordinals (indexed by itself). Put more directly, it is the supremum of the set of smaller ordinals.
Another way of defining a limit ordinal is to say that α is a limit ordinal if and only if:
So in the following sequence:
ω is a limit ordinal because for any smaller ordinal (in this example, a natural number) there is another ordinal (natural number) larger than it, but still less than ω.
Thus, every ordinal is either zero, or a successor (of a well-defined predecessor), or a limit. This distinction is important, because many definitions by transfinite recursion rely upon it. Very often, when defining a function F by transfinite recursion on all ordinals, one defines F(0), and F(α+1) assuming F(α) is defined, and then, for limit ordinals δ, one defines F(δ) as the limit of the F(β) for all β < δ (either in the sense of ordinal limits, as previously explained, or for some other notion of limit if F does not take ordinal values). Thus, the interesting step in the definition is the successor step, not the limit ordinals. Such functions (especially for F nondecreasing and taking ordinal values) are called continuous. Ordinal addition, multiplication and exponentiation are continuous as functions of their second argument (but can be defined non-recursively).
Any well-ordered set is similar (order-isomorphic) to a unique ordinal number α; in other words, its elements can be indexed in increasing fashion by the ordinals less than α. This applies, in particular, to any set of ordinals: any set of ordinals is naturally indexed by the ordinals less than some α. The same holds, with a slight modification, for classes of ordinals (a collection of ordinals, possibly too large to form a set, defined by some property): any class of ordinals can be indexed by ordinals (and, when the class is unbounded in the class of all ordinals, this puts it in class-bijection with the class of all ordinals). So the γ-th element in the class (with the convention that the "0-th" is the smallest, the "1-st" is the next smallest, and so on) can be freely spoken of. Formally, the definition is by transfinite induction: the γ-th element of the class is defined (provided it has already been defined for all β < γ) as the smallest element greater than the β-th element for all β < γ.
This could be applied, for example, to the class of limit ordinals: the γ-th ordinal that is either a limit or zero is ω·γ (see ordinal arithmetic for the definition of multiplication of ordinals). Similarly, one can consider additively indecomposable ordinals (meaning a nonzero ordinal that is not the sum of two strictly smaller ordinals): the γ-th additively indecomposable ordinal is indexed as ω^γ. The technique of indexing classes of ordinals is often useful in the context of fixed points: for example, the γ-th ordinal α such that ω^α = α is written ε_γ. These are called the "epsilon numbers".
A class C of ordinals is said to be unbounded, or cofinal, when given any ordinal α, there is a β in C such that α < β (then the class must be a proper class, i.e., it cannot be a set). It is said to be closed when the limit of a sequence of ordinals in the class is again in the class: or, equivalently, when the indexing (class-)function F is continuous in the sense that, for δ a limit ordinal, F(δ) (the δ-th ordinal in the class) is the limit of all F(γ) for γ < δ; this is also the same as being closed, in the topological sense, for the order topology (to avoid talking of topology on proper classes, one can demand that the intersection of the class with any given ordinal is closed for the order topology on that ordinal; this is again equivalent).
Of particular importance are those classes of ordinals that are closed and unbounded, sometimes called clubs. For example, the class of all limit ordinals is closed and unbounded: this translates the fact that there is always a limit ordinal greater than a given ordinal, and that a limit of limit ordinals is a limit ordinal (a fortunate fact if the terminology is to make any sense at all!). The class of additively indecomposable ordinals, the class of ε_· ordinals, and the class of cardinals are all closed unbounded; the class of regular cardinals, however, is unbounded but not closed, and any finite set of ordinals is closed but not unbounded.
A class is stationary if it has a nonempty intersection with every closed unbounded class. All superclasses of closed unbounded classes are stationary, and stationary classes are unbounded, but there are stationary classes that are not closed and stationary classes that have no closed unbounded subclass (such as the class of all limit ordinals with countable cofinality). Since the intersection of two closed unbounded classes is closed and unbounded, the intersection of a stationary class and a closed unbounded class is stationary. But the intersection of two stationary classes may be empty: for example, the class of ordinals with cofinality ω and the class of ordinals with uncountable cofinality are both stationary, yet they do not intersect.
Rather than formulating these definitions for (proper) classes of ordinals, one can formulate them for sets of ordinals below a given ordinal α: a subset of a limit ordinal α is said to be unbounded (or cofinal) under α provided any ordinal less than α is less than some ordinal in the set. More generally, one can call a subset of any ordinal α cofinal in α provided every ordinal less than α is less than or equal to some ordinal in the set. The subset is said to be closed under α provided it is closed for the order topology in α, i.e. a limit of ordinals in the set is either in the set or equal to α itself.
There are three usual operations on ordinals: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the operation or by using transfinite recursion. The Cantor normal form provides a standardized way of writing ordinals. It uniquely represents each ordinal as a finite sum of ordinal powers of ω. However, this cannot form the basis of a universal ordinal notation due to such self-referential representations as ε₀ = ω^ε₀.
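Ordinal addition on Cantor normal forms can be sketched in code. The following is a minimal illustration, not a standard library or a full implementation: it is restricted to ordinals below ω^ω, represented (an illustrative choice) as lists of (exponent, coefficient) pairs with strictly decreasing natural-number exponents.

```python
def ordinal_add(a, b):
    """Add two ordinals below ω^ω given in Cantor normal form.

    An ordinal ω^e1·c1 + ... + ω^ek·ck is encoded as [(e1, c1), ..., (ek, ck)]
    with e1 > e2 > ... > ek.  When computing a + b, every term of a whose
    exponent is smaller than b's leading exponent is absorbed by b; a term
    of a with the same exponent as b's leading term merges coefficients.
    """
    if not b:
        return list(a)
    lead = b[0][0]                            # leading exponent of b
    kept = [(e, c) for e, c in a if e > lead] # terms of a that survive
    merged = list(b)
    for e, c in a:
        if e == lead:                         # equal exponents: coefficients add
            merged[0] = (lead, merged[0][1] + c)
    return kept + merged

omega = [(1, 1)]   # ω
one = [(0, 1)]     # 1

# Ordinal addition is not commutative: 1 + ω = ω, but ω + 1 > ω.
assert ordinal_add(one, omega) == [(1, 1)]            # 1 + ω = ω
assert ordinal_add(omega, one) == [(1, 1), (0, 1)]    # ω + 1
```

The absorption step is exactly why 1 + ω collapses to ω while ω + 1 does not.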
Ordinals are a subclass of the class of surreal numbers, and the so-called "natural" arithmetical operations for surreal numbers are an alternative way to combine ordinals arithmetically. They retain commutativity at the expense of continuity.
Interpreted as nimbers, a game-theoretic variant of numbers, ordinals can also be combined via nimber arithmetic operations. These operations are commutative, but the restriction to natural numbers is generally not the same as ordinary addition of natural numbers.
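For natural numbers, nimber addition has a simple concrete description: it is bitwise XOR, i.e. binary addition without carrying. A small sketch:

```python
def nim_add(a, b):
    # Nimber addition of natural numbers is bitwise XOR: binary addition
    # without carrying.  It is commutative and associative, and every
    # element is its own additive inverse (a ⊕ a = 0).
    return a ^ b

# This differs from ordinary addition: 2 + 3 = 5, but 2 ⊕ 3 = 1.
```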
Each ordinal associates with one cardinal, its cardinality. If there is a bijection between two ordinals (e.g. ω = 1 + ω and ω + 1 > ω), then they associate with the same cardinal. Any well-ordered set having an ordinal as its order type has the same cardinality as that ordinal. The least ordinal associated with a given cardinal is called the initial ordinal of that cardinal. Every finite ordinal (natural number) is initial, and no other ordinal associates with its cardinal. But most infinite ordinals are not initial, as many infinite ordinals associate with the same cardinal. The axiom of choice is equivalent to the statement that every set can be well-ordered, i.e. that every cardinal has an initial ordinal. In theories with the axiom of choice, the cardinal number of any set has an initial ordinal, and one may employ the von Neumann cardinal assignment as the cardinal's representation. (However, we must then be careful to distinguish between cardinal arithmetic and ordinal arithmetic.) In set theories without the axiom of choice, a cardinal may be represented by the set of sets with that cardinality having minimal rank (see Scott's trick).
One issue with Scott's trick is that it identifies the cardinal number 0 with {∅}, which in some formulations is the ordinal number 1. It may be clearer to apply the von Neumann cardinal assignment to finite cases and to use Scott's trick for sets which are infinite or do not admit well-orderings. Note that cardinal and ordinal arithmetic agree for finite numbers.
The α-th infinite initial ordinal is written ω_α; it is always a limit ordinal. Its cardinality is written ℵ_α. For example, the cardinality of ω₀ = ω is ℵ₀, which is also the cardinality of ω² or of ε₀ (all are countable ordinals). So ω can be identified with ℵ₀, except that the notation ℵ₀ is used when writing cardinals, and ω when writing ordinals (this is important since, for example, ℵ₀² = ℵ₀, whereas ω² > ω). Also, ω₁ is the smallest uncountable ordinal (to see that it exists, consider the set of equivalence classes of well-orderings of the natural numbers: each such well-ordering defines a countable ordinal, and ω₁ is the order type of that set), ω₂ is the smallest ordinal whose cardinality is greater than ℵ₁, and so on, and ω_ω is the limit of the ω_n for natural numbers n (any limit of cardinals is a cardinal, so this limit is indeed the first cardinal after all the ω_n).
The cofinality of an ordinal α is the smallest ordinal δ that is the order type of a cofinal subset of α. Notice that a number of authors define cofinality or use it only for limit ordinals. The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set.
Thus for a limit ordinal α, there exists a δ-indexed strictly increasing sequence with limit α. For example, the cofinality of ω² is ω, because the sequence ω·m (where m ranges over the natural numbers) tends to ω²; but, more generally, any countable limit ordinal has cofinality ω. An uncountable limit ordinal may have either cofinality ω, as does ω_ω, or an uncountable cofinality.
The cofinality of 0 is 0, and the cofinality of any successor ordinal is 1. The cofinality of any limit ordinal is at least ω.
An ordinal that is equal to its cofinality is called regular, and it is always an initial ordinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial, even if it is not regular (which it usually is not). If the axiom of choice holds, then ω_{α+1} is regular for each α. In this case, the ordinals 0, 1, ω, ω₁, and ω₂ are regular, whereas 2, 3, ω_ω, and ω_{ω·2} are initial ordinals that are not regular.
The cofinality of any ordinal α is a regular ordinal, i.e. the cofinality of the cofinality of α is the same as the cofinality of α. So the cofinality operation is idempotent.
As mentioned above (see Cantor normal form), the ordinal ε₀ is the smallest satisfying the equation ω^α = α, so it is the limit of the sequence 0, 1, ω, ω^ω, ω^(ω^ω), etc. Many ordinals can be defined in such a manner as fixed points of certain ordinal functions (the ι-th ordinal such that ω^α = α is called ε_ι; one could then go on trying to find the ι-th ordinal such that ε_α = α, "and so on", but all the subtlety lies in the "and so on"). One could try to do this systematically, but no matter what system is used to define and construct ordinals, there is always an ordinal that lies just above all the ordinals constructed by the system. Perhaps the most important ordinal that limits a system of construction in this manner is the Church–Kleene ordinal, ω₁^CK (despite the ω₁ in the name, this ordinal is countable), which is the smallest ordinal that cannot in any way be represented by a computable function (this can be made rigorous, of course). Considerably large ordinals can be defined below ω₁^CK, however, which measure the "proof-theoretic strength" of certain formal systems (for example, ε₀ measures the strength of Peano arithmetic). Large countable ordinals such as countable admissible ordinals can also be defined above the Church–Kleene ordinal, which are of interest in various parts of logic.[citation needed]
Any ordinal number can be made into a topological space by endowing it with the order topology; this topology is discrete if and only if the ordinal is less than or equal to ω. A subset of ω + 1 is open in the order topology if and only if either it is cofinite or it does not contain ω as an element.
See the "Topology and ordinals" section of the "Order topology" article.
The transfinite ordinal numbers, which first appeared in 1883,[8] originated in Cantor's work with derived sets. If P is a set of real numbers, the derived set P′ is the set of limit points of P. In 1872, Cantor generated the sets P^(n) by applying the derived set operation n times to P. In 1880, he pointed out that these sets form the sequence P′ ⊇ ··· ⊇ P^(n) ⊇ P^(n+1) ⊇ ···, and he continued the derivation process by defining P^(∞) as the intersection of these sets. Then he iterated the derived set operation and intersections to extend his sequence of sets into the infinite: P^(∞) ⊇ P^(∞+1) ⊇ P^(∞+2) ⊇ ··· ⊇ P^(2∞) ⊇ ··· ⊇ P^(∞²) ⊇ ···.[9] The superscripts containing ∞ are just indices defined by the derivation process.[10]
Cantor used these sets in the theorems: (1) if P^(α) is empty for some index α, then P′ is countable; (2) conversely, if P′ is countable, then there is an index α such that P^(α) is empty.
These theorems are proved by partitioning P′ into pairwise disjoint sets: P′ = (P′ \ P^(2)) ∪ (P^(2) \ P^(3)) ∪ ··· ∪ (P^(∞) \ P^(∞+1)) ∪ ··· ∪ P^(α). For β < α: since P^(β+1) contains the limit points of P^(β), the sets P^(β) \ P^(β+1) have no limit points. Hence, they are discrete sets, so they are countable. Proof of the first theorem: if P^(α) = ∅ for some index α, then P′ is the countable union of countable sets. Therefore, P′ is countable.[11]
The second theorem requires proving the existence of an α such that P^(α) = ∅. To prove this, Cantor considered the set of all α having countably many predecessors. To define this set, he defined the transfinite ordinal numbers and transformed the infinite indices into ordinals by replacing ∞ with ω, the first transfinite ordinal number. Cantor called the set of finite ordinals the first number class. The second number class is the set of ordinals whose predecessors form a countably infinite set. The set of all α having countably many predecessors—that is, the set of countable ordinals—is the union of these two number classes. Cantor proved that the cardinality of the second number class is the first uncountable cardinality.[12]
Cantor's second theorem becomes: if P′ is countable, then there is a countable ordinal α such that P^(α) = ∅. Its proof uses proof by contradiction. Let P′ be countable, and assume there is no such α. This assumption produces two cases.
In both cases, P′ is uncountable, which contradicts P′ being countable. Therefore, there is a countable ordinal α such that P^(α) = ∅. Cantor's work with derived sets and ordinal numbers led to the Cantor–Bendixson theorem.[14]
Using successors, limits, and cardinality, Cantor generated an unbounded sequence of ordinal numbers and number classes.[15] The (α + 1)-th number class is the set of ordinals whose predecessors form a set of the same cardinality as the α-th number class. The cardinality of the (α + 1)-th number class is the cardinality immediately following that of the α-th number class.[16] For a limit ordinal α, the α-th number class is the union of the β-th number classes for β < α.[17] Its cardinality is the limit of the cardinalities of these number classes.
If n is finite, the n-th number class has cardinality ℵ_{n−1}. If α ≥ ω, the α-th number class has cardinality ℵ_α.[18] Therefore, the cardinalities of the number classes correspond one-to-one with the aleph numbers. Also, the α-th number class consists of ordinals different from those in the preceding number classes if and only if α is a non-limit ordinal. Therefore, the non-limit number classes partition the ordinals into pairwise disjoint sets.
An enumerative definition of a concept or term is a special type of extensional definition that gives an explicit and exhaustive listing of all the objects that fall under the concept or term in question. Enumerative definitions are only possible for finite sets and only practical for relatively small sets.
An example of an enumerative definition for the set of extant monotreme species (for which the intensional definition is "species of currently living mammals that lay eggs") would be:
In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order matters. Like a set, it contains members (also called elements, or terms). The number of elements (possibly infinite) is called the length of the sequence. Unlike a set, the same elements can appear multiple times at different positions in a sequence, and unlike a set, the order does matter. Formally, a sequence can be defined as a function from natural numbers (the positions of elements in the sequence) to the elements at each position. The notion of a sequence can be generalized to an indexed family, defined as a function from an arbitrary index set.
For example, (M, A, R, Y) is a sequence of letters with the letter "M" first and "Y" last. This sequence differs from (A, R, M, Y). Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence. Sequences can be finite, as in these examples, or infinite, such as the sequence of all even positive integers (2, 4, 6, ...).
The position of an element in a sequence is its rank or index; it is the natural number for which the element is the image. The first element has index 0 or 1, depending on the context or a specific convention. In mathematical analysis, a sequence is often denoted by letters in the form of a_n, b_n and c_n, where the subscript n refers to the n-th element of the sequence; for example, the n-th element of the Fibonacci sequence F is generally denoted as F_n.
In computing and computer science, finite sequences are usually called strings, words or lists, with the specific technical term chosen depending on the type of object the sequence enumerates and the different ways to represent the sequence in computer memory. Infinite sequences are called streams.
The empty sequence ( ) is included in most notions of sequence. It may be excluded depending on the context.
A sequence can be thought of as a list of elements with a particular order.[1][2]Sequences are useful in a number of mathematical disciplines for studyingfunctions,spaces, and other mathematical structures using theconvergenceproperties of sequences. In particular, sequences are the basis forseries, which are important indifferential equationsandanalysis. Sequences are also of interest in their own right, and can be studied as patterns or puzzles, such as in the study ofprime numbers.
There are a number of ways to denote a sequence, some of which are more useful for specific types of sequences. One way to specify a sequence is to list all its elements. For example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation is used for infinite sequences as well. For instance, the infinite sequence of positive odd integers is written as (1, 3, 5, 7, ...). Because notating sequences withellipsisleads to ambiguity, listing is most useful for customary infinite sequences which can be easily recognized from their first few elements. Other ways of denoting a sequence are discussed after the examples.
The prime numbers are the natural numbers greater than 1 that have no divisors but 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, 17, ...). The prime numbers are widely used in mathematics, particularly in number theory, where many results related to them exist.
The Fibonacci numbers comprise the integer sequence in which each element is the sum of the previous two elements. The first two elements are either 0 and 1 or 1 and 1 so that the sequence is (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...).[1]
Other examples of sequences include those made up ofrational numbers,real numbersandcomplex numbers. The sequence (.9, .99, .999, .9999, ...), for instance, approaches the number 1. In fact, every real number can be written as thelimitof a sequence of rational numbers (e.g. via itsdecimal expansion, also seecompleteness of the real numbers). As another example,πis the limit of the sequence (3, 3.1, 3.14, 3.141, 3.1415, ...), which is increasing. A related sequence is the sequence of decimal digits ofπ, that is, (3, 1, 4, 1, 5, 9, ...). Unlike the preceding sequence, this sequence does not have any pattern that is easily discernible by inspection.
Other examples are sequences offunctions, whose elements are functions instead of numbers.
TheOn-Line Encyclopedia of Integer Sequencescomprises a large list of examples of integer sequences.[3]
Other notations can be useful for sequences whose pattern cannot be easily guessed or for sequences that do not have a pattern such as the digits ofπ. One such notation is to write down a general formula for computing thenth term as a function ofn, enclose it in parentheses, and include a subscript indicating the set of values thatncan take. For example, in this notation the sequence of even numbers could be written as(2n)n∈N{\textstyle (2n)_{n\in \mathbb {N} }}. The sequence of squares could be written as(n2)n∈N{\textstyle (n^{2})_{n\in \mathbb {N} }}. The variablenis called anindex, and the set of values that it can take is called theindex set.
It is often useful to combine this notation with the technique of treating the elements of a sequence as individual variables. This yields expressions like(an)n∈N{\textstyle (a_{n})_{n\in \mathbb {N} }}, which denotes a sequence whosenth element is given by the variablean{\displaystyle a_{n}}. For example:
One can consider multiple sequences at the same time by using different variables; e.g.(bn)n∈N{\textstyle (b_{n})_{n\in \mathbb {N} }}could be a different sequence than(an)n∈N{\textstyle (a_{n})_{n\in \mathbb {N} }}. One can even consider a sequence of sequences:((am,n)n∈N)m∈N{\textstyle ((a_{m,n})_{n\in \mathbb {N} })_{m\in \mathbb {N} }}denotes a sequence whosemth term is the sequence(am,n)n∈N{\textstyle (a_{m,n})_{n\in \mathbb {N} }}.
An alternative to writing the domain of a sequence in the subscript is to indicate the range of values that the index can take by listing its highest and lowest legal values. For example, the notation (k²)_{k=1}^{10} denotes the ten-term sequence of squares (1, 4, 9, ..., 100). The limits ∞ and −∞ are allowed, but they do not represent valid values for the index, only the supremum or infimum of such values, respectively. For example, the sequence (a_n)_{n=1}^{∞} is the same as the sequence (a_n)_{n∈ℕ}, and does not contain an additional term "at infinity". The sequence (a_n)_{n=−∞}^{∞} is a bi-infinite sequence, and can also be written as (..., a₋₁, a₀, a₁, a₂, ...).
In cases where the set of indexing numbers is understood, the subscripts and superscripts are often left off. That is, one simply writes (a_k) for an arbitrary sequence. Often, the index k is understood to run from 1 to ∞. However, sequences are frequently indexed starting from zero.
In some cases, the elements of the sequence are related naturally to a sequence of integers whose pattern can be easily inferred. In these cases, the index set may be implied by a listing of the first few abstract elements. For instance, the sequence of squares ofodd numberscould be denoted in any of the following ways.
Moreover, the subscripts and superscripts could have been left off in the third, fourth, and fifth notations, if the indexing set was understood to be thenatural numbers. In the second and third bullets, there is a well-defined sequence(ak)k=1∞{\textstyle {(a_{k})}_{k=1}^{\infty }}, but it is not the same as the sequence denoted by the expression.
Sequences whose elements are related to the previous elements in a straightforward way are often defined using recursion. This is in contrast to the definition of sequences of elements as functions of their positions.
To define a sequence by recursion, one needs a rule, called a recurrence relation, to construct each element in terms of the ones before it. In addition, enough initial elements must be provided so that all subsequent elements of the sequence can be computed by successive applications of the recurrence relation.
The Fibonacci sequence is a simple classical example, defined by the recurrence relation a_n = a_{n−1} + a_{n−2} for n ≥ 2,
with initial terms a_0 = 0 and a_1 = 1. From this, a simple computation shows that the first ten terms of this sequence are 0, 1, 1, 2, 3, 5, 8, 13, 21, and 34.
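Iterating the recurrence directly reproduces these terms; a short sketch:

```python
def fibonacci(n):
    """Return the first n terms of the Fibonacci sequence, computed by
    iterating the recurrence a_k = a_{k-1} + a_{k-2} from a_0 = 0, a_1 = 1."""
    a, b = 0, 1
    terms = []
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms

# fibonacci(10) gives the ten terms listed above.
```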
A complicated example of a sequence defined by a recurrence relation is Recamán's sequence,[4] defined by the recurrence relation a_n = a_{n−1} − n if that value is positive and not already in the sequence, and a_n = a_{n−1} + n otherwise,
with initial term a_0 = 0.
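The branching rule makes this recurrence awkward to state but easy to iterate; a sketch (the function name is an illustrative choice):

```python
def recaman(n):
    """First n terms of Recamán's sequence: from the previous term,
    subtract k when the result is positive and not yet in the sequence,
    otherwise add k."""
    seen = {0}
    a = 0
    terms = [0]
    for k in range(1, n):
        candidate = a - k
        a = candidate if candidate > 0 and candidate not in seen else a + k
        seen.add(a)
        terms.append(a)
    return terms
```

Note that checking "not already in the sequence" requires remembering all earlier terms, which is why the sketch keeps a set alongside the list.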
A linear recurrence with constant coefficients is a recurrence relation of the form a_n = c_0 + c_1·a_{n−1} + ··· + c_k·a_{n−k},
where c_0, ..., c_k are constants. There is a general method for expressing the general term a_n of such a sequence as a function of n; see Linear recurrence. In the case of the Fibonacci sequence, one has c_0 = 0, c_1 = c_2 = 1, and the resulting function of n is given by Binet's formula.
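As a concrete instance, Binet's formula F(n) = (φⁿ − ψⁿ)/√5, where φ and ψ are the two roots of x² = x + 1, gives the n-th Fibonacci number in closed form. A floating-point sketch (exact only for moderate n, after which rounding error exceeds 1/2):

```python
import math

def fib_binet(n):
    """n-th Fibonacci number via Binet's formula
    F(n) = (phi**n - psi**n) / sqrt(5), with phi, psi = (1 ± sqrt(5)) / 2."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return round((phi ** n - psi ** n) / sqrt5)
```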
A holonomic sequence is a sequence defined by a recurrence relation of the form a_{n+k} = c_1(n)·a_{n+k−1} + ··· + c_k(n)·a_n,
where c_1, ..., c_k are polynomials in n. For most holonomic sequences, there is no explicit formula for expressing a_n as a function of n. Nevertheless, holonomic sequences play an important role in various areas of mathematics. For example, many special functions have a Taylor series whose sequence of coefficients is holonomic. The use of the recurrence relation allows a fast computation of values of such special functions.
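For instance, the factorials form a holonomic sequence: they satisfy a_{n+1} = (n + 1)·a_n, a first-order recurrence whose single coefficient n + 1 is a polynomial in the index. Iterating such a recurrence gives the fast term-by-term computation mentioned above; a sketch:

```python
def factorial_terms(n_terms):
    # The factorial sequence satisfies the holonomic recurrence
    # a_{n+1} = (n + 1) * a_n with a_0 = 1; the coefficient n + 1
    # is a polynomial in n, so each new term costs one multiplication.
    a = 1
    terms = []
    for n in range(n_terms):
        terms.append(a)
        a = (n + 1) * a
    return terms
```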
Not all sequences can be specified by a recurrence relation. An example is the sequence of prime numbers in their natural order (2, 3, 5, 7, 11, 13, 17, ...).
There are many different notions of sequences in mathematics, some of which (e.g.,exact sequence) are not covered by the definitions and notations introduced below.
In this article, a sequence is formally defined as a function whose domain is an interval of integers. This definition covers several different uses of the word "sequence", including one-sided infinite sequences, bi-infinite sequences, and finite sequences (see below for definitions of these kinds of sequences). However, many authors use a narrower definition by requiring the domain of a sequence to be the set of natural numbers. This narrower definition has the disadvantage that it rules out finite sequences and bi-infinite sequences, both of which are usually called sequences in standard mathematical practice. Another disadvantage is that, if one removes the first terms of a sequence, one needs to reindex the remaining terms to fit this definition. In some contexts, to shorten exposition, the codomain of the sequence is fixed by context, for example by requiring it to be the set R of real numbers,[5] the set C of complex numbers,[6] or a topological space.[7]
Although sequences are a type of function, they are usually distinguished notationally from functions in that the input is written as a subscript rather than in parentheses, that is, a_n rather than a(n). There are terminological differences as well: the value of a sequence at the lowest input (often 1) is called the "first element" of the sequence, the value at the second smallest input (often 2) is called the "second element", etc. Also, while a function abstracted from its input is usually denoted by a single letter, e.g. f, a sequence abstracted from its input is usually written by a notation such as (a_n)_{n∈A}, or just as (a_n). Here A is the domain, or index set, of the sequence.
Sequences and their limits (see below) are important concepts for studying topological spaces. An important generalization of sequences is the concept ofnets. Anetis a function from a (possiblyuncountable)directed setto a topological space. The notational conventions for sequences normally apply to nets as well.
The length of a sequence is defined as the number of terms in the sequence.
A sequence of a finite length n is also called an n-tuple. Finite sequences include the empty sequence ( ) that has no elements.
Normally, the term infinite sequence refers to a sequence that is infinite in one direction, and finite in the other—the sequence has a first element, but no final element. Such a sequence is called a singly infinite sequence or a one-sided infinite sequence when disambiguation is necessary. In contrast, a sequence that is infinite in both directions—i.e. that has neither a first nor a final element—is called a bi-infinite sequence, two-way infinite sequence, or doubly infinite sequence. A function from the set Z of all integers into a set, such as for instance the sequence of all even integers ( ..., −4, −2, 0, 2, 4, 6, 8, ... ), is bi-infinite. This sequence could be denoted (2n)_{n=−∞}^{∞}.
A sequence is said to be monotonically increasing if each term is greater than or equal to the one before it. For example, the sequence (a_n)_{n=1}^{∞} is monotonically increasing if and only if a_{n+1} ≥ a_n for all n ∈ N. If each consecutive term is strictly greater than (>) the previous term then the sequence is called strictly monotonically increasing. A sequence is monotonically decreasing if each consecutive term is less than or equal to the previous one, and is strictly monotonically decreasing if each is strictly less than the previous. If a sequence is either increasing or decreasing it is called a monotone sequence. This is a special case of the more general notion of a monotonic function.
The terms nondecreasing and nonincreasing are often used in place of increasing and decreasing in order to avoid any possible confusion with strictly increasing and strictly decreasing, respectively.
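Over a finite prefix, these conditions can be checked directly by comparing consecutive terms; a small sketch (which, of course, can only inspect finitely many terms):

```python
def is_increasing(seq, strict=False):
    # Check a_{n+1} >= a_n for every consecutive pair in a finite
    # sequence (or a_{n+1} > a_n in the strict case).
    return all(b > a if strict else b >= a for a, b in zip(seq, seq[1:]))

# (1, 1, 2) is monotonically increasing but not strictly so.
```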
If the sequence of real numbers (a_n) is such that all the terms are less than some real number M, then the sequence is said to be bounded from above. In other words, this means that there exists M such that for all n, a_n ≤ M. Any such M is called an upper bound. Likewise, if, for some real m, a_n ≥ m for all n greater than some N, then the sequence is bounded from below and any such m is called a lower bound. If a sequence is both bounded from above and bounded from below, then the sequence is said to be bounded.
A subsequence of a given sequence is a sequence formed from the given sequence by deleting some of the elements without disturbing the relative positions of the remaining elements. For instance, the sequence of positive even integers (2, 4, 6, ...) is a subsequence of the positive integers (1, 2, 3, ...). The positions of some elements change when other elements are deleted. However, the relative positions are preserved.
Formally, a subsequence of the sequence (a_n)_{n∈ℕ} is any sequence of the form (a_{n_k})_{k∈ℕ}, where (n_k)_{k∈ℕ} is a strictly increasing sequence of positive integers.
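For a finite sequence, this definition can be sketched directly: picking entries at a strictly increasing list of positions preserves their relative order.

```python
def subsequence(seq, positions):
    """Form (a_{n_k}) from a finite sequence, given strictly increasing
    0-based positions n_0 < n_1 < ... into the sequence."""
    assert all(i < j for i, j in zip(positions, positions[1:])), \
        "positions must be strictly increasing"
    return [seq[i] for i in positions]

# The even entries of (1, 2, 3, 4, 5, 6) as a subsequence:
# subsequence([1, 2, 3, 4, 5, 6], [1, 3, 5]) picks out 2, 4, 6.
```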
Some other types of sequences that are easy to define include:
An important property of a sequence is convergence. If a sequence converges, it converges to a particular value known as the limit. If a sequence converges to some limit, then it is convergent. A sequence that does not converge is divergent.
Informally, a sequence has a limit if the elements of the sequence become closer and closer to some value L{\displaystyle L} (called the limit of the sequence), and they become and remain arbitrarily close to L{\displaystyle L}, meaning that given a real number d{\displaystyle d} greater than zero, all but a finite number of the elements of the sequence have a distance from L{\displaystyle L} less than d{\displaystyle d}.
For example, the sequence an=n+12n2{\textstyle a_{n}={\frac {n+1}{2n^{2}}}} shown to the right converges to the value 0. On the other hand, the sequences bn=n3{\textstyle b_{n}=n^{3}} (which begins 1, 8, 27, ...) and cn=(−1)n{\displaystyle c_{n}=(-1)^{n}} (which begins −1, 1, −1, 1, ...) are both divergent.
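Evaluating a few terms makes the contrast visible; this is a rough numerical check, not a proof of convergence or divergence:

```python
def a(n):
    return (n + 1) / (2 * n ** 2)   # converges to 0

def c(n):
    return (-1) ** n                # oscillates and never settles down

print([a(n) for n in (1, 10, 100, 1000)])  # [1.0, 0.055, 0.00505, 0.0005005]
print([c(n) for n in (1, 2, 3, 4)])        # [-1, 1, -1, 1]
```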
If a sequence converges, then the value it converges to is unique. This value is called the limit of the sequence. The limit of a convergent sequence (an){\displaystyle (a_{n})} is normally denoted limn→∞an{\textstyle \lim _{n\to \infty }a_{n}}. If (an){\displaystyle (a_{n})} is a divergent sequence, then the expression limn→∞an{\textstyle \lim _{n\to \infty }a_{n}} is meaningless.
A sequence of real numbers (an){\displaystyle (a_{n})} converges to a real number L{\displaystyle L} if, for all ε>0{\displaystyle \varepsilon >0}, there exists a natural number N{\displaystyle N} such that for all n≥N{\displaystyle n\geq N} we have[5] |an−L|<ε.{\displaystyle |a_{n}-L|<\varepsilon .}
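The ε–N condition (that |a_n − L| < ε for all n ≥ N) can be explored numerically. The hypothetical helper below searches a finite range for the last index at which the condition fails; a finite search illustrates the definition rather than proving it:

```python
def witness_N(a, L, eps, limit=1000):
    # returns one more than the last n <= limit with |a(n) - L| >= eps;
    # from that index on, every checked term lies within eps of L
    last_bad = 0
    for n in range(1, limit + 1):
        if abs(a(n) - L) >= eps:
            last_bad = n
    return last_bad + 1

# For a_n = (n+1)/(2n^2), L = 0 and eps = 0.01: a_50 = 0.0102 still fails,
# but a_51 ~ 0.009996 succeeds, and the sequence keeps decreasing.
print(witness_N(lambda n: (n + 1) / (2 * n ** 2), 0.0, 0.01))  # 51
```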
If (an){\displaystyle (a_{n})} is a sequence of complex numbers rather than a sequence of real numbers, this last formula can still be used to define convergence, with the provision that |⋅|{\displaystyle |\cdot |} denotes the complex modulus, i.e. |z|=z∗z{\displaystyle |z|={\sqrt {z^{*}z}}}. If (an){\displaystyle (a_{n})} is a sequence of points in a metric space, then the formula can be used to define convergence, if the expression |an−L|{\displaystyle |a_{n}-L|} is replaced by the expression dist(an,L){\displaystyle \operatorname {dist} (a_{n},L)}, which denotes the distance between an{\displaystyle a_{n}} and L{\displaystyle L}.
If(an){\displaystyle (a_{n})}and(bn){\displaystyle (b_{n})}are convergent sequences, then the following limits exist, and can be computed as follows:[5][10]
Moreover:
A Cauchy sequence is a sequence whose terms become arbitrarily close together as n gets very large. The notion of a Cauchy sequence is important in the study of sequences in metric spaces, and, in particular, in real analysis. One particularly important result in real analysis is the Cauchy characterization of convergence for sequences: a sequence of real numbers is convergent (in the reals) if and only if it is Cauchy.
In contrast, there are Cauchy sequences of rational numbers that are not convergent in the rationals, e.g. the sequence defined by x1=1{\displaystyle x_{1}=1} and xn+1=12(xn+2xn){\displaystyle x_{n+1}={\tfrac {1}{2}}{\bigl (}x_{n}+{\tfrac {2}{x_{n}}}{\bigr )}} is Cauchy, but has no rational limit (cf. Cauchy sequence § Non-example: rational numbers). More generally, any sequence of rational numbers that converges to an irrational number is Cauchy, but not convergent when interpreted as a sequence in the set of rational numbers.
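The iteration above can be run in floating point to watch the Cauchy behaviour: consecutive terms get close extremely fast, even though within the rationals the sequence has no limit (it approaches the irrational √2):

```python
x = 1.0                      # x_1 = 1
terms = [x]
for _ in range(6):
    x = (x + 2 / x) / 2      # x_{n+1} = (x_n + 2/x_n) / 2
    terms.append(x)

gaps = [abs(b - a) for a, b in zip(terms, terms[1:])]
print(terms[-1])   # ~1.4142135..., i.e. sqrt(2)
print(gaps)        # consecutive gaps shrink rapidly: the sequence is Cauchy
```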
Metric spaces that satisfy the Cauchy characterization of convergence for sequences are called complete metric spaces and are particularly nice for analysis.
In calculus, it is common to define notation for sequences which do not converge in the sense discussed above, but which instead become and remain arbitrarily large, or become and remain arbitrarily negative. If an{\displaystyle a_{n}} becomes arbitrarily large as n→∞{\displaystyle n\to \infty }, we write limn→∞an=∞.{\textstyle \lim _{n\to \infty }a_{n}=\infty .}
In this case we say that the sequence diverges, or that it converges to infinity. An example of such a sequence is an = n.
If an{\displaystyle a_{n}} becomes arbitrarily negative (i.e. negative and large in magnitude) as n→∞{\displaystyle n\to \infty }, we write limn→∞an=−∞{\textstyle \lim _{n\to \infty }a_{n}=-\infty }
and say that the sequence diverges or converges to negative infinity.
A series is, informally speaking, the sum of the terms of a sequence. That is, it is an expression of the form ∑n=1∞an{\textstyle \sum _{n=1}^{\infty }a_{n}} or a1+a2+⋯{\displaystyle a_{1}+a_{2}+\cdots }, where (an){\displaystyle (a_{n})} is a sequence of real or complex numbers. The partial sums of a series are the expressions resulting from replacing the infinity symbol with a finite number, i.e. the Nth partial sum of the series ∑n=1∞an{\textstyle \sum _{n=1}^{\infty }a_{n}} is the number SN=∑n=1Nan.{\textstyle S_{N}=\sum _{n=1}^{N}a_{n}.}
The partial sums themselves form a sequence (SN)N∈N{\displaystyle (S_{N})_{N\in \mathbb {N} }}, which is called the sequence of partial sums of the series ∑n=1∞an{\textstyle \sum _{n=1}^{\infty }a_{n}}. If the sequence of partial sums converges, then we say that the series ∑n=1∞an{\textstyle \sum _{n=1}^{\infty }a_{n}} is convergent, and the limit limN→∞SN{\textstyle \lim _{N\to \infty }S_{N}} is called the value of the series. The same notation is used to denote a series and its value, i.e. we write ∑n=1∞an=limN→∞SN{\textstyle \sum _{n=1}^{\infty }a_{n}=\lim _{N\to \infty }S_{N}}.
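A short computation shows the sequence of partial sums in action for the geometric series ∑ 2^(−n), whose value is 1 (helper name is my own):

```python
def partial_sums(a, N):
    # S_1, S_2, ..., S_N for the series sum_{n >= 1} a(n)
    s, out = 0.0, []
    for n in range(1, N + 1):
        s += a(n)
        out.append(s)
    return out

S = partial_sums(lambda n: 1 / 2 ** n, 20)
print(S[:4])   # [0.5, 0.75, 0.875, 0.9375]
print(S[-1])   # ~0.999999: the partial sums converge to the value 1
```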
Sequences play an important role in topology, especially in the study of metric spaces. For instance:
Sequences can be generalized to nets or filters. These generalizations allow one to extend some of the above theorems to spaces without metrics.
The topological product of a sequence of topological spaces is the cartesian product of those spaces, equipped with a natural topology called the product topology.
More formally, given a sequence of spaces(Xi)i∈N{\displaystyle (X_{i})_{i\in \mathbb {N} }}, the product space
is defined as the set of all sequences (xi)i∈N{\displaystyle (x_{i})_{i\in \mathbb {N} }} such that for each i, xi{\displaystyle x_{i}} is an element of Xi{\displaystyle X_{i}}. The canonical projections are the maps pi : X → Xi defined by the equation pi((xj)j∈N)=xi{\displaystyle p_{i}((x_{j})_{j\in \mathbb {N} })=x_{i}}. Then the product topology on X is defined to be the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections pi are continuous. The product topology is sometimes called the Tychonoff topology.
When discussing sequences in analysis, one will generally consider sequences of the form (a1,a2,a3,…),{\displaystyle (a_{1},a_{2},a_{3},\dots ),}
which is to say, infinite sequences of elements indexed by natural numbers.
A sequence may start with an index different from 1 or 0. For example, the sequence defined by xn = 1/log(n) would be defined only for n ≥ 2. When talking about such infinite sequences, it is usually sufficient (and does not change much for most considerations) to assume that the members of the sequence are defined at least for all indices large enough, that is, greater than some given N.
The most elementary type of sequences are numerical ones, that is, sequences of real or complex numbers. This type can be generalized to sequences of elements of some vector space. In analysis, the vector spaces considered are often function spaces. Even more generally, one can study sequences with elements in some topological space.
A sequence space is a vector space whose elements are infinite sequences of real or complex numbers. Equivalently, it is a function space whose elements are functions from the natural numbers to the field K, where K is either the field of real numbers or the field of complex numbers. The set of all such functions is naturally identified with the set of all possible infinite sequences with elements in K, and can be turned into a vector space under the operations of pointwise addition of functions and pointwise scalar multiplication. All sequence spaces are linear subspaces of this space. Sequence spaces are typically equipped with a norm, or at least the structure of a topological vector space.
The most important sequence spaces in analysis are the ℓp spaces, consisting of the p-power summable sequences, with the p-norm. These are special cases of Lp spaces for the counting measure on the set of natural numbers. Other important classes of sequences like convergent sequences or null sequences form sequence spaces, respectively denoted c and c0, with the sup norm. Any sequence space can also be equipped with the topology of pointwise convergence, under which it becomes a special kind of Fréchet space called an FK-space.
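For a concrete feel for the p-norms, one can truncate a summable sequence and compute (∑ |x_n|^p)^(1/p). For x_n = 2^(−n) this gives 1 when p = 1 (a geometric series) and √(1/3) when p = 2, since ∑ 4^(−n) = 1/3. A numerical sketch on a finite truncation:

```python
def lp_norm(xs, p):
    # p-norm of a truncated sequence: (sum |x_n|^p)^(1/p)
    return sum(abs(x) ** p for x in xs) ** (1.0 / p)

xs = [1 / 2 ** n for n in range(1, 30)]   # truncation of x_n = 2^(-n)
print(lp_norm(xs, 1))   # ~1.0      (sum of the geometric series)
print(lp_norm(xs, 2))   # ~0.57735  (sqrt(1/3))
```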
Sequences over a field may also be viewed as vectors in a vector space. Specifically, the set of F-valued sequences (where F is a field) is a function space (in fact, a product space) of F-valued functions over the set of natural numbers.
Abstract algebra employs several types of sequences, including sequences of mathematical objects such as groups or rings.
If A is a set, the free monoid over A (denoted A*, also called the Kleene star of A) is a monoid containing all the finite sequences (or strings) of zero or more elements of A, with the binary operation of concatenation. The free semigroup A+ is the subsemigroup of A* containing all elements except the empty sequence.
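Strings in most programming languages model the free monoid directly: concatenation is the monoid operation and the empty string is the identity element. A quick check of the monoid laws:

```python
a, b, c = "ab", "ba", "bb"
e = ""          # the empty sequence: identity element of the free monoid A*

assert (a + b) + c == a + (b + c)   # concatenation is associative
assert e + a == a + e == a          # the empty string is a two-sided identity
print(a + b)                        # 'abba'
```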
In the context of group theory, a sequence
G0⟶f1G1⟶f2G2⟶f3⋯⟶fnGn{\displaystyle G_{0}\;{\xrightarrow {f_{1}}}\;G_{1}\;{\xrightarrow {f_{2}}}\;G_{2}\;{\xrightarrow {f_{3}}}\;\cdots \;{\xrightarrow {f_{n}}}\;G_{n}}
of groups and group homomorphisms is called exact if the image (or range) of each homomorphism is equal to the kernel of the next: im(fi)=ker(fi+1).{\displaystyle \mathrm {im} (f_{i})=\ker(f_{i+1}).}
The sequence of groups and homomorphisms may be either finite or infinite.
A similar definition can be made for certain other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms.
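A standard illustration (not taken from the text above) is the short exact sequence in which multiplication by 2 embeds the integers into themselves and the quotient map reduces modulo 2:

```latex
0 \longrightarrow \mathbb{Z} \xrightarrow{\;\times 2\;} \mathbb{Z}
  \xrightarrow{\;\bmod 2\;} \mathbb{Z}/2\mathbb{Z} \longrightarrow 0
```

Here the image of multiplication by 2 is the subgroup of even integers 2Z, which is exactly the kernel of reduction mod 2, so the sequence is exact at the middle term.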
In homological algebra and algebraic topology, a spectral sequence is a means of computing homology groups by taking successive approximations. Spectral sequences are a generalization of exact sequences, and since their introduction by Jean Leray (1946), they have become an important research tool, particularly in homotopy theory.
An ordinal-indexed sequence is a generalization of a sequence. If α is a limit ordinal and X is a set, an α-indexed sequence of elements of X is a function from α to X. In this terminology an ω-indexed sequence is an ordinary sequence.
In computer science, finite sequences are called lists. Potentially infinite sequences are called streams. Finite sequences of characters or digits are called strings.
Infinite sequences of digits (or characters) drawn from a finite alphabet are of particular interest in theoretical computer science. They are often referred to simply as sequences or streams, as opposed to finite strings. Infinite binary sequences, for instance, are infinite sequences of bits (characters drawn from the alphabet {0, 1}). The set C = {0, 1}∞ of all infinite binary sequences is sometimes called the Cantor space.
An infinite binary sequence can represent a formal language (a set of strings) by setting the nth bit of the sequence to 1 if and only if the nth string (in shortlex order) is in the language. This representation is useful in the diagonalization method for proofs.[11] | https://en.wikipedia.org/wiki/Sequence
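This encoding can be made concrete: enumerate binary strings in shortlex order (by length, then lexicographically) and set the nth bit to 1 exactly when the nth string belongs to the language. A sketch for the language of even-length strings (helper names are my own):

```python
from itertools import count, product

def shortlex(alphabet, n):
    # n-th string (0-based) over `alphabet` in shortlex order:
    # ordered first by length, then lexicographically
    k = 0
    for length in count(0):
        for tup in product(alphabet, repeat=length):
            if k == n:
                return "".join(tup)
            k += 1

# shortlex order over {0,1}: "", "0", "1", "00", "01", "10", "11", ...
lang = lambda w: len(w) % 2 == 0   # example language: even-length strings
bits = [1 if lang(shortlex("01", n)) else 0 for n in range(7)]
print(bits)   # [1, 0, 0, 1, 1, 1, 1]
```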
In mathematics, an affine bundle is a fiber bundle whose typical fiber, fibers, trivialization morphisms and transition functions are affine.[1]
Let π¯:Y¯→X{\displaystyle {\overline {\pi }}:{\overline {Y}}\to X} be a vector bundle whose typical fiber is a vector space F¯{\displaystyle {\overline {F}}}. An affine bundle modelled on a vector bundle π¯:Y¯→X{\displaystyle {\overline {\pi }}:{\overline {Y}}\to X} is a fiber bundle π:Y→X{\displaystyle \pi :Y\to X} whose typical fiber F{\displaystyle F} is an affine space modelled on F¯{\displaystyle {\overline {F}}} so that the following conditions hold:
(i) Every fiber Yx{\displaystyle Y_{x}} of Y{\displaystyle Y} is an affine space modelled on the corresponding fiber Y¯x{\displaystyle {\overline {Y}}_{x}} of the vector bundle Y¯{\displaystyle {\overline {Y}}}.
(ii) There is an affine bundle atlas of Y→X{\displaystyle Y\to X} whose local trivialization morphisms and transition functions are affine isomorphisms.
Dealing with affine bundles, one uses only affine bundle coordinates(xμ,yi){\displaystyle (x^{\mu },y^{i})}possessing affine transition functions
There are thebundle morphisms
where(y¯i){\displaystyle ({\overline {y}}^{i})}are linear bundle coordinates on a vector bundleY¯{\displaystyle {\overline {Y}}}, possessing linear transition functionsy¯′i=Aji(xν)y¯j{\displaystyle {\overline {y}}'^{i}=A_{j}^{i}(x^{\nu }){\overline {y}}^{j}}.
An affine bundle has a global section, but in contrast with vector bundles, there is no canonical global section of an affine bundle. Let π:Y→X{\displaystyle \pi :Y\to X} be an affine bundle modelled on a vector bundle π¯:Y¯→X{\displaystyle {\overline {\pi }}:{\overline {Y}}\to X}. Every global section s{\displaystyle s} of an affine bundle Y→X{\displaystyle Y\to X} yields the bundle morphisms
In particular, every vector bundleY{\displaystyle Y}has a natural structure of an affine bundle due to these morphisms wheres=0{\displaystyle s=0}is the canonical zero-valued section ofY{\displaystyle Y}. For instance, thetangent bundleTX{\displaystyle TX}of a manifoldX{\displaystyle X}naturally is an affine bundle.
An affine bundle Y→X{\displaystyle Y\to X} is a fiber bundle with a general affine structure group GA(m,R){\displaystyle GA(m,\mathbb {R} )} of affine transformations of its typical fiber V{\displaystyle V} of dimension m{\displaystyle m}. This structure group is always reducible to a general linear group GL(m,R){\displaystyle GL(m,\mathbb {R} )}, i.e., an affine bundle admits an atlas with linear transition functions.
By a morphism of affine bundles is meant a bundle morphismΦ:Y→Y′{\displaystyle \Phi :Y\to Y'}whose restriction to each fiber ofY{\displaystyle Y}is an affine map. Every affine bundle morphismΦ:Y→Y′{\displaystyle \Phi :Y\to Y'}of an affine bundleY{\displaystyle Y}modelled on a vector bundleY¯{\displaystyle {\overline {Y}}}to an affine bundleY′{\displaystyle Y'}modelled on a vector bundleY¯′{\displaystyle {\overline {Y}}'}yields a unique linear bundle morphism
called thelinear derivativeofΦ{\displaystyle \Phi }. | https://en.wikipedia.org/wiki/Affine_bundle |
In mathematics, an algebra bundle is a fiber bundle whose fibers are algebras and local trivializations respect the algebra structure. It follows that the transition functions are algebra isomorphisms. Since algebras are also vector spaces, every algebra bundle is a vector bundle.
Examples include the tensor-algebra bundle, exterior bundle, and symmetric bundle associated to a given vector bundle, as well as the Clifford bundle associated to any Riemannian vector bundle.
| https://en.wikipedia.org/wiki/Algebra_bundle
In mathematics, a characteristic class is a way of associating to each principal bundle of X a cohomology class of X. The cohomology class measures the extent to which the bundle is "twisted" and whether it possesses sections. Characteristic classes are global invariants that measure the deviation of a local product structure from a global product structure. They are one of the unifying geometric concepts in algebraic topology, differential geometry, and algebraic geometry.
The notion of characteristic class arose in 1935 in the work of Eduard Stiefel and Hassler Whitney about vector fields on manifolds.
LetGbe atopological group, and for a topological spaceX{\displaystyle X}, writebG(X){\displaystyle b_{G}(X)}for the set ofisomorphism classesofprincipalG-bundlesoverX{\displaystyle X}. ThisbG{\displaystyle b_{G}}is acontravariant functorfromTop(thecategoryof topological spaces andcontinuous functions) toSet(the category ofsetsandfunctions), sending a mapf:X→Y{\displaystyle f\colon X\to Y}to thepullbackoperationf∗:bG(Y)→bG(X){\displaystyle f^{*}\colon b_{G}(Y)\to b_{G}(X)}.
Acharacteristic classcof principalG-bundles is then anatural transformationfrombG{\displaystyle b_{G}}to a cohomology functorH∗{\displaystyle H^{*}}, regarded also as a functor toSet.
In other words, a characteristic class associates to each principal G-bundle P→X{\displaystyle P\to X} in bG(X){\displaystyle b_{G}(X)} an element c(P) in H*(X) such that, if f : Y → X is a continuous map, then c(f*P) = f*c(P). On the left is the class of the pullback of P to Y; on the right is the image of the class of P under the induced map in cohomology.
Characteristic classes are elements of cohomology groups;[1] one can obtain integers from characteristic classes, called characteristic numbers. Some important examples of characteristic numbers are Stiefel–Whitney numbers, Chern numbers, Pontryagin numbers, and the Euler characteristic.
Given an oriented manifoldMof dimensionnwithfundamental class[M]∈Hn(M){\displaystyle [M]\in H_{n}(M)}, and aG-bundle with characteristic classesc1,…,ck{\displaystyle c_{1},\dots ,c_{k}}, one can pair a product of characteristic classes of total degreenwith the fundamental class. The number of distinct characteristic numbers is the number ofmonomialsof degreenin the characteristic classes, or equivalently the partitions ofnintodegci{\displaystyle {\mbox{deg}}\,c_{i}}.
Formally, giveni1,…,il{\displaystyle i_{1},\dots ,i_{l}}such that∑degcij=n{\displaystyle \sum {\mbox{deg}}\,c_{i_{j}}=n}, the corresponding characteristic number is:
where⌣{\displaystyle \smile }denotes thecup productof cohomology classes.
These are notated variously as either the product of characteristic classes, such asc12{\displaystyle c_{1}^{2}}, or by some alternative notation, such asP1,1{\displaystyle P_{1,1}}for thePontryagin numbercorresponding top12{\displaystyle p_{1}^{2}}, orχ{\displaystyle \chi }for the Euler characteristic.
From the point of view ofde Rham cohomology, one can takedifferential formsrepresenting the characteristic classes,[2]take a wedge product so that one obtains a top dimensional form, then integrate over the manifold; this is analogous to taking the product in cohomology and pairing with the fundamental class.
This also works for non-orientable manifolds, which have aZ/2Z{\displaystyle \mathbf {Z} /2\mathbf {Z} }-orientation, in which case one obtainsZ/2Z{\displaystyle \mathbf {Z} /2\mathbf {Z} }-valued characteristic numbers, such as the Stiefel-Whitney numbers.
Characteristic numbers solve the oriented and unoriented bordism questions: two manifolds are (respectively oriented or unoriented) cobordant if and only if their characteristic numbers are equal.
Characteristic classes are phenomena of cohomology theory in an essential way: they are contravariant constructions, in the way that a section is a kind of function on a space, and to lead to a contradiction from the existence of a section one does need that variance. In fact cohomology theory grew up after homology and homotopy theory, which are both covariant theories based on mapping into a space; and characteristic class theory in its infancy in the 1930s (as part of obstruction theory) was one major reason why a 'dual' theory to homology was sought. The characteristic class approach to curvature invariants was a particular reason to make a theory, to prove a general Gauss–Bonnet theorem.
When the theory was put on an organised basis around 1950 (with the definitions reduced to homotopy theory) it became clear that the most fundamental characteristic classes known at that time (the Stiefel–Whitney class, the Chern class, and the Pontryagin classes) were reflections of the classical linear groups and their maximal torus structure. What is more, the Chern class itself was not so new, having been reflected in the Schubert calculus on Grassmannians, and the work of the Italian school of algebraic geometry. On the other hand, there was now a framework which produced families of classes, whenever there was a vector bundle involved.
The prime mechanism then appeared to be this: given a space X carrying a vector bundle, that implied in the homotopy category a mapping from X to a classifying space BG, for the relevant linear group G. For the homotopy theory the relevant information is carried by compact subgroups such as the orthogonal groups and unitary groups of G. Once the cohomology H∗(BG){\displaystyle H^{*}(BG)} was calculated, once and for all, the contravariance property of cohomology meant that characteristic classes for the bundle would be defined in H∗(X){\displaystyle H^{*}(X)} in the same dimensions. For example, the Chern class is really one class with graded components in each even dimension.
This is still the classic explanation, though in a given geometric theory it is profitable to take extra structure into account. When cohomology became 'extraordinary' with the arrival of K-theory and cobordism theory from 1955 onwards, it was really only necessary to change the letter H everywhere to say what the characteristic classes were.
Characteristic classes were later found for foliations of manifolds; they have (in a modified sense, for foliations with some allowed singularities) a classifying space theory in homotopy theory.
In later work after the rapprochement of mathematics and physics, new characteristic classes were found by Simon Donaldson and Dieter Kotschick in the instanton theory. The work and point of view of Chern have also proved important: see Chern–Simons theory.
In the language of stable homotopy theory, the Chern class, Stiefel–Whitney class, and Pontryagin class are stable, while the Euler class is unstable.
Concretely, a stable class is one that does not change when one adds a trivial bundle: c(V⊕1)=c(V){\displaystyle c(V\oplus 1)=c(V)}. More abstractly, it means that the cohomology class in the classifying space BG(n){\displaystyle BG(n)} pulls back from the cohomology class in BG(n+1){\displaystyle BG(n+1)} under the inclusion BG(n)→BG(n+1){\displaystyle BG(n)\to BG(n+1)} (which corresponds to the inclusion Rn→Rn+1{\displaystyle \mathbf {R} ^{n}\to \mathbf {R} ^{n+1}} and similar). Equivalently, all finite characteristic classes pull back from a stable class in BG{\displaystyle BG}.
This is not the case for the Euler class, as detailed there, not least because the Euler class of a k-dimensional bundle lives in Hk(X){\displaystyle H^{k}(X)} (hence pulls back from Hk(BO(k)){\displaystyle H^{k}(BO(k))}), so it cannot pull back from a class in Hk+1{\displaystyle H^{k+1}}, as the dimensions differ. | https://en.wikipedia.org/wiki/Characteristic_class
In geometry and topology, given a group G (which may be a topological or Lie group), an equivariant bundle is a fiber bundle π:E→B{\displaystyle \pi \colon E\to B} such that the total space E{\displaystyle E} and the base space B{\displaystyle B} are both G-spaces (continuous or smooth, depending on the setting) and the projection map π{\displaystyle \pi } between them is equivariant: π∘g=g∘π{\displaystyle \pi \circ g=g\circ \pi } with some extra requirement depending on a typical fiber.
For example, an equivariant vector bundle is an equivariant bundle such that the action of G restricts to a linear isomorphism between fibres.
| https://en.wikipedia.org/wiki/Equivariant_bundle
In differential geometry, in the category of differentiable manifolds, a fibered manifold is a surjective submersion π:E→B{\displaystyle \pi :E\to B\,}, that is, a surjective differentiable mapping such that at each point y∈E{\displaystyle y\in E} the tangent mapping Tyπ:TyE→Tπ(y)B{\displaystyle T_{y}\pi :T_{y}E\to T_{\pi (y)}B} is surjective, or, equivalently, its rank equals dimB.{\displaystyle \dim B.}[1]
In topology, the words fiber (Faser in German) and fiber space (gefaserter Raum) appeared for the first time in a paper by Herbert Seifert in 1932, but his definitions are limited to a very special case.[2] The main difference from the present day conception of a fiber space, however, was that for Seifert what is now called the base space (topological space) of a fiber (topological) space E{\displaystyle E} was not part of the structure, but derived from it as a quotient space of E.{\displaystyle E.} The first definition of fiber space was given by Hassler Whitney in 1935 under the name sphere space, but in 1940 Whitney changed the name to sphere bundle.[3][4]
The theory of fibered spaces, of which vector bundles, principal bundles, topological fibrations and fibered manifolds are special cases, is attributed to Seifert, Hopf, Feldbau, Whitney, Steenrod, Ehresmann, Serre, and others.[5][6][7][8][9]
A triple (E,π,B){\displaystyle (E,\pi ,B)} where E{\displaystyle E} and B{\displaystyle B} are differentiable manifolds and π:E→B{\displaystyle \pi :E\to B} is a surjective submersion, is called a fibered manifold.[10] E{\displaystyle E} is called the total space, B{\displaystyle B} is called the base.
LetB{\displaystyle B}(resp.E{\displaystyle E}) be ann{\displaystyle n}-dimensional (resp.p{\displaystyle p}-dimensional) manifold. A fibered manifold(E,π,B){\displaystyle (E,\pi ,B)}admitsfiber charts. We say that achart(V,ψ){\displaystyle (V,\psi )}onE{\displaystyle E}is afiber chart, or isadaptedto the surjective submersionπ:E→B{\displaystyle \pi :E\to B}if there exists a chart(U,φ){\displaystyle (U,\varphi )}onB{\displaystyle B}such thatU=π(V){\displaystyle U=\pi (V)}andu1=x1∘π,u2=x2∘π,…,un=xn∘π,{\displaystyle u^{1}=x^{1}\circ \pi ,\,u^{2}=x^{2}\circ \pi ,\,\dots ,\,u^{n}=x^{n}\circ \pi \,,}whereψ=(u1,…,un,y1,…,yp−n).y0∈V,φ=(x1,…,xn),π(y0)∈U.{\displaystyle {\begin{aligned}\psi &=\left(u^{1},\dots ,u^{n},y^{1},\dots ,y^{p-n}\right).\quad y_{0}\in V,\\\varphi &=\left(x^{1},\dots ,x^{n}\right),\quad \pi \left(y_{0}\right)\in U.\end{aligned}}}
The above fiber chart condition may be equivalently expressed byφ∘π=pr1∘ψ,{\displaystyle \varphi \circ \pi =\mathrm {pr} _{1}\circ \psi ,}wherepr1:Rn×Rp−n→Rn{\displaystyle {\mathrm {pr} _{1}}:{\mathbb {R} ^{n}}\times {\mathbb {R} ^{p-n}}\to {\mathbb {R} ^{n}}\,}is the projection onto the firstn{\displaystyle n}coordinates. The chart(U,φ){\displaystyle (U,\varphi )}is then obviously unique. In view of the above property, thefibered coordinatesof a fiber chart(V,ψ){\displaystyle (V,\psi )}are usually denoted byψ=(xi,yσ){\displaystyle \psi =\left(x^{i},y^{\sigma }\right)}wherei∈{1,…,n},{\displaystyle i\in \{1,\ldots ,n\},}σ∈{1,…,m},{\displaystyle \sigma \in \{1,\ldots ,m\},}m=p−n{\displaystyle m=p-n}the coordinates of the corresponding chart(U,φ){\displaystyle (U,\varphi )}onB{\displaystyle B}are then denoted, with the obvious convention, byφ=(xi){\displaystyle \varphi =\left(x_{i}\right)}wherei∈{1,…,n}.{\displaystyle i\in \{1,\ldots ,n\}.}
Conversely, if a surjection π:E→B{\displaystyle \pi :E\to B} admits a fibered atlas, then π:E→B{\displaystyle \pi :E\to B} is a fibered manifold.
Let E→B{\displaystyle E\to B} be a fibered manifold and V{\displaystyle V} any manifold. Then an open covering {Uα}{\displaystyle \left\{U_{\alpha }\right\}} of B{\displaystyle B} together with maps ψα:π−1(Uα)→Uα×V,{\displaystyle \psi _{\alpha }:\pi ^{-1}\left(U_{\alpha }\right)\to U_{\alpha }\times V,} called trivialization maps, such that pr1∘ψα=π, for all α{\displaystyle \mathrm {pr} _{1}\circ \psi _{\alpha }=\pi ,{\text{ for all }}\alpha } is a local trivialization with respect to V.{\displaystyle V.}[13]
A fibered manifold together with a manifold V{\displaystyle V} is a fiber bundle with typical fiber (or just fiber) V{\displaystyle V} if it admits a local trivialization with respect to V.{\displaystyle V.} The atlas Ψ={(Uα,ψα)}{\displaystyle \Psi =\left\{\left(U_{\alpha },\psi _{\alpha }\right)\right\}} is then called a bundle atlas. | https://en.wikipedia.org/wiki/Fibered_manifold
The notion of a fibration generalizes the notion of a fiber bundle and plays an important role in algebraic topology, a branch of mathematics.
Fibrations are used, for example, in Postnikov systems or obstruction theory.
In this article, all mappings are continuous mappings between topological spaces.
A mapping p:E→B{\displaystyle p\colon E\to B} satisfies the homotopy lifting property for a space X{\displaystyle X} if, for every homotopy h:X×[0,1]→B{\displaystyle h\colon X\times [0,1]\to B} and every mapping h~0:X→E{\displaystyle {\tilde {h}}_{0}\colon X\to E} lifting h|X×0{\displaystyle h|_{X\times 0}} (i.e. h|X×0=p∘h~0{\displaystyle h|_{X\times 0}=p\circ {\tilde {h}}_{0}}),
there exists a (not necessarily unique) homotopy h~:X×[0,1]→E{\displaystyle {\tilde {h}}\colon X\times [0,1]\to E} lifting h{\displaystyle h} (i.e. h=p∘h~{\displaystyle h=p\circ {\tilde {h}}}) with h~0=h~|X×0.{\displaystyle {\tilde {h}}_{0}={\tilde {h}}|_{X\times 0}.}
The followingcommutative diagramshows the situation:[1]: 66
A fibration (also called Hurewicz fibration) is a mapping p:E→B{\displaystyle p\colon E\to B} satisfying the homotopy lifting property for all spaces X.{\displaystyle X.} The space B{\displaystyle B} is called the base space and the space E{\displaystyle E} is called the total space. The fiber over b∈B{\displaystyle b\in B} is the subspace Fb=p−1(b)⊆E.{\displaystyle F_{b}=p^{-1}(b)\subseteq E.}[1]: 66
A Serre fibration (also called weak fibration) is a mapping p:E→B{\displaystyle p\colon E\to B} satisfying the homotopy lifting property for all CW-complexes.[2]: 375-376
Every Hurewicz fibration is a Serre fibration.
A mapping p:E→B{\displaystyle p\colon E\to B} is called a quasifibration if for every b∈B,{\displaystyle b\in B,} e∈p−1(b){\displaystyle e\in p^{-1}(b)} and i≥0{\displaystyle i\geq 0} the induced mapping p∗:πi(E,p−1(b),e)→πi(B,b){\displaystyle p_{*}\colon \pi _{i}(E,p^{-1}(b),e)\to \pi _{i}(B,b)} is an isomorphism.
Every Serre fibration is a quasifibration.[3]: 241-242
A mappingf:E1→E2{\displaystyle f\colon E_{1}\to E_{2}}between total spaces of two fibrationsp1:E1→B{\displaystyle p_{1}\colon E_{1}\to B}andp2:E2→B{\displaystyle p_{2}\colon E_{2}\to B}with the same base space is afibration homomorphismif the following diagram commutes:
The mappingf{\displaystyle f}is afiber homotopy equivalenceif in addition a fibration homomorphismg:E2→E1{\displaystyle g\colon E_{2}\to E_{1}}exists, such that the mappingsf∘g{\displaystyle f\circ g}andg∘f{\displaystyle g\circ f}are homotopic, by fibration homomorphisms, to the identitiesIdE2{\displaystyle \operatorname {Id} _{E_{2}}}andIdE1.{\displaystyle \operatorname {Id} _{E_{1}}.}[2]: 405-406
Given a fibrationp:E→B{\displaystyle p\colon E\to B}and a mappingf:A→B{\displaystyle f\colon A\to B}, the mappingpf:f∗(E)→A{\displaystyle p_{f}\colon f^{*}(E)\to A}is a fibration, wheref∗(E)={(a,e)∈A×E|f(a)=p(e)}{\displaystyle f^{*}(E)=\{(a,e)\in A\times E|f(a)=p(e)\}}is thepullbackand the projections off∗(E){\displaystyle f^{*}(E)}ontoA{\displaystyle A}andE{\displaystyle E}yield the following commutative diagram:
The fibration pf{\displaystyle p_{f}} is called the pullback fibration or induced fibration.[2]: 405-406
With the pathspace construction, any continuous mapping can be extended to a fibration by enlarging its domain to a homotopy equivalent space. This fibration is called the pathspace fibration.
The total spaceEf{\displaystyle E_{f}}of thepathspace fibrationfor a continuous mappingf:A→B{\displaystyle f\colon A\to B}between topological spaces consists of pairs(a,γ){\displaystyle (a,\gamma )}witha∈A{\displaystyle a\in A}andpathsγ:I→B{\displaystyle \gamma \colon I\to B}with starting pointγ(0)=f(a),{\displaystyle \gamma (0)=f(a),}whereI=[0,1]{\displaystyle I=[0,1]}is theunit interval. The spaceEf={(a,γ)∈A×BI|γ(0)=f(a)}{\displaystyle E_{f}=\{(a,\gamma )\in A\times B^{I}|\gamma (0)=f(a)\}}carries thesubspace topologyofA×BI,{\displaystyle A\times B^{I},}whereBI{\displaystyle B^{I}}describes the space of all mappingsI→B{\displaystyle I\to B}and carries thecompact-open topology.
The pathspace fibration is given by the mappingp:Ef→B{\displaystyle p\colon E_{f}\to B}withp(a,γ)=γ(1).{\displaystyle p(a,\gamma )=\gamma (1).}The fiberFf{\displaystyle F_{f}}is also called thehomotopy fiberoff{\displaystyle f}and consists of the pairs(a,γ){\displaystyle (a,\gamma )}witha∈A{\displaystyle a\in A}and pathsγ:[0,1]→B,{\displaystyle \gamma \colon [0,1]\to B,}whereγ(0)=f(a){\displaystyle \gamma (0)=f(a)}andγ(1)=b0∈B{\displaystyle \gamma (1)=b_{0}\in B}holds.
For the special case of the inclusion of the base point i:b0→B{\displaystyle i\colon b_{0}\to B}, an important example of the pathspace fibration emerges. The total space Ei{\displaystyle E_{i}} consists of all paths in B{\displaystyle B} which start at b0.{\displaystyle b_{0}.} This space is denoted by PB{\displaystyle PB} and is called the path space. The pathspace fibration p:PB→B{\displaystyle p\colon PB\to B} maps each path to its endpoint, hence the fiber p−1(b0){\displaystyle p^{-1}(b_{0})} consists of all closed paths. The fiber is denoted by ΩB{\displaystyle \Omega B} and is called the loop space.[2]: 407-408
For a fibrationp:E→B{\displaystyle p\colon E\to B}with fiberF{\displaystyle F}and base pointb0∈B{\displaystyle b_{0}\in B}the inclusionF↪Fp{\displaystyle F\hookrightarrow F_{p}}of the fiber into the homotopy fiber is ahomotopy equivalence. The mappingi:Fp→E{\displaystyle i\colon F_{p}\to E}withi(e,γ)=e{\displaystyle i(e,\gamma )=e}, wheree∈E{\displaystyle e\in E}andγ:I→B{\displaystyle \gamma \colon I\to B}is a path fromp(e){\displaystyle p(e)}tob0{\displaystyle b_{0}}in the base space, is a fibration. Specifically it is the pullback fibration of the pathspace fibrationPB→B{\displaystyle PB\to B}alongp{\displaystyle p}. This procedure can now be applied again to the fibrationi{\displaystyle i}and so on. This leads to a long sequence:
⋯→Fj→Fi→jFp→iE→pB.{\displaystyle \cdots \to F_{j}\to F_{i}\xrightarrow {j} F_{p}\xrightarrow {i} E\xrightarrow {p} B.}
The fiber ofi{\displaystyle i}over a pointe0∈p−1(b0){\displaystyle e_{0}\in p^{-1}(b_{0})}consists of the pairs(e0,γ){\displaystyle (e_{0},\gamma )}whereγ{\displaystyle \gamma }is a path fromp(e0)=b0{\displaystyle p(e_{0})=b_{0}}tob0{\displaystyle b_{0}}, i.e. the loop spaceΩB{\displaystyle \Omega B}. The inclusionΩB↪Fi{\displaystyle \Omega B\hookrightarrow F_{i}}of the fiber ofi{\displaystyle i}into the homotopy fiber ofi{\displaystyle i}is again a homotopy equivalence and iteration yields the sequence:
{\displaystyle \cdots \to \Omega ^{2}B\to \Omega F\to \Omega E\to \Omega B\to F\to E\to B.}
Due to the duality of fibration andcofibration, there also exists a sequence of cofibrations. These two sequences are known as thePuppe sequencesor the sequences of fibrations and cofibrations.[2]: 407-409
A fibrationp:E→B{\displaystyle p\colon E\to B}with fiberF{\displaystyle F}is calledprincipal, if there exists a commutative diagram:
The bottom row is a sequence of fibrations and the vertical mappings are weak homotopy equivalences. Principal fibrations play an important role inPostnikov towers.[2]: 412
For a Serre fibrationp:E→B{\displaystyle p\colon E\to B}there exists a long exact sequence ofhomotopy groups. For base pointsb0∈B{\displaystyle b_{0}\in B}andx0∈F=p−1(b0){\displaystyle x_{0}\in F=p^{-1}(b_{0})}this is given by:
⋯→πn(F,x0)→πn(E,x0)→πn(B,b0)→πn−1(F,x0)→{\displaystyle \cdots \rightarrow \pi _{n}(F,x_{0})\rightarrow \pi _{n}(E,x_{0})\rightarrow \pi _{n}(B,b_{0})\rightarrow \pi _{n-1}(F,x_{0})\rightarrow }⋯→π0(F,x0)→π0(E,x0).{\displaystyle \cdots \rightarrow \pi _{0}(F,x_{0})\rightarrow \pi _{0}(E,x_{0}).}
Thehomomorphismsπn(F,x0)→πn(E,x0){\displaystyle \pi _{n}(F,x_{0})\rightarrow \pi _{n}(E,x_{0})}andπn(E,x0)→πn(B,b0){\displaystyle \pi _{n}(E,x_{0})\rightarrow \pi _{n}(B,b_{0})}are the induced homomorphisms of the inclusioni:F↪E{\displaystyle i\colon F\hookrightarrow E}and the projectionp:E→B.{\displaystyle p\colon E\rightarrow B.}[2]: 376
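As an example (standard, though not spelled out in the text above): applying this sequence to the pathspace fibration {\displaystyle p\colon PB\to B} with fiber {\displaystyle \Omega B}, and using that the path space {\displaystyle PB} is contractible, identifies the homotopy groups of the loop space with those of the base:

```latex
% Segment of the long exact sequence for \Omega B \to PB \to B:
\cdots \to \pi_{n+1}(PB) \to \pi_{n+1}(B,b_0) \to \pi_n(\Omega B) \to \pi_n(PB) \to \cdots
% Since PB is contractible, \pi_k(PB) = 0 for all k, so
\pi_n(\Omega B) \cong \pi_{n+1}(B,b_0) \qquad (n \geq 1).
```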
Hopf fibrationsare a family offiber bundleswhose fiber, total space and base space arespheres:
S0↪S1→S1,{\displaystyle S^{0}\hookrightarrow S^{1}\rightarrow S^{1},}
S1↪S3→S2,{\displaystyle S^{1}\hookrightarrow S^{3}\rightarrow S^{2},}
S3↪S7→S4,{\displaystyle S^{3}\hookrightarrow S^{7}\rightarrow S^{4},}
S7↪S15→S8.{\displaystyle S^{7}\hookrightarrow S^{15}\rightarrow S^{8}.}
The long exact sequence of homotopy groups of the Hopf fibration {\displaystyle S^{1}\hookrightarrow S^{3}\rightarrow S^{2}} yields:
⋯→πn(S1,x0)→πn(S3,x0)→πn(S2,b0)→πn−1(S1,x0)→{\displaystyle \cdots \rightarrow \pi _{n}(S^{1},x_{0})\rightarrow \pi _{n}(S^{3},x_{0})\rightarrow \pi _{n}(S^{2},b_{0})\rightarrow \pi _{n-1}(S^{1},x_{0})\rightarrow }⋯→π1(S1,x0)→π1(S3,x0)→π1(S2,b0).{\displaystyle \cdots \rightarrow \pi _{1}(S^{1},x_{0})\rightarrow \pi _{1}(S^{3},x_{0})\rightarrow \pi _{1}(S^{2},b_{0}).}
This sequence splits into short exact sequences, since the inclusion of the fiber {\displaystyle S^{1}} into {\displaystyle S^{3}} is nullhomotopic (the fiber is contractible to a point within {\displaystyle S^{3}}), so the induced maps {\displaystyle \pi _{i}(S^{1})\to \pi _{i}(S^{3})} are trivial:
0→πi(S3)→πi(S2)→πi−1(S1)→0.{\displaystyle 0\rightarrow \pi _{i}(S^{3})\rightarrow \pi _{i}(S^{2})\rightarrow \pi _{i-1}(S^{1})\rightarrow 0.}
This short exact sequencesplitsbecause of thesuspensionhomomorphismϕ:πi−1(S1)→πi(S2){\displaystyle \phi \colon \pi _{i-1}(S^{1})\to \pi _{i}(S^{2})}and there areisomorphisms:
πi(S2)≅πi(S3)⊕πi−1(S1).{\displaystyle \pi _{i}(S^{2})\cong \pi _{i}(S^{3})\oplus \pi _{i-1}(S^{1}).}
The homotopy groupsπi−1(S1){\displaystyle \pi _{i-1}(S^{1})}are trivial fori≥3,{\displaystyle i\geq 3,}so there exist isomorphisms betweenπi(S2){\displaystyle \pi _{i}(S^{2})}andπi(S3){\displaystyle \pi _{i}(S^{3})}fori≥3.{\displaystyle i\geq 3.}
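Setting {\displaystyle i=3} in this family of isomorphisms gives a classical computation, the generator being the Hopf map itself:

```latex
\pi_3(S^2) \;\cong\; \pi_3(S^3) \oplus \pi_2(S^1) \;\cong\; \mathbb{Z} \oplus 0 \;\cong\; \mathbb{Z}.
```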
Analogously, the inclusions of the fibers {\displaystyle S^{3}} into {\displaystyle S^{7}} and {\displaystyle S^{7}} into {\displaystyle S^{15}} are nullhomotopic. Again the resulting short exact sequences split, and there are families of isomorphisms:[6]: 111
πi(S4)≅πi(S7)⊕πi−1(S3){\displaystyle \pi _{i}(S^{4})\cong \pi _{i}(S^{7})\oplus \pi _{i-1}(S^{3})}andπi(S8)≅πi(S15)⊕πi−1(S7).{\displaystyle \pi _{i}(S^{8})\cong \pi _{i}(S^{15})\oplus \pi _{i-1}(S^{7}).}
Spectral sequencesare important tools in algebraic topology for computing (co-)homology groups.
The Leray–Serre spectral sequence connects the (co-)homology of the total space and the fiber with the (co-)homology of the base space of a fibration. For a fibration {\displaystyle p\colon E\to B} with fiber {\displaystyle F,} where the base space is a path connected CW-complex, and an additive homology theory {\displaystyle G_{*}} there exists a spectral sequence:[7]: 242

{\displaystyle E_{p,q}^{2}=H_{p}(B;G_{q}(F))\Rightarrow G_{p+q}(E).}
Fibrations do not in general yield long exact sequences in homology, as they do in homotopy. But under certain conditions they do. Let {\displaystyle p\colon E\to B} be a fibration with fiber {\displaystyle F,} where base space and fiber are path connected and the fundamental group {\displaystyle \pi _{1}(B)} acts trivially on {\displaystyle H_{*}(F).} If in addition the conditions {\displaystyle H_{p}(B)=0} for {\displaystyle 0<p<m} and {\displaystyle H_{q}(F)=0} for {\displaystyle 0<q<n} hold, then there exists an exact sequence (also known under the name Serre exact sequence):
{\displaystyle H_{m+n-1}(F)\xrightarrow {i_{*}} H_{m+n-1}(E)\xrightarrow {f_{*}} H_{m+n-1}(B)\xrightarrow {\tau } H_{m+n-2}(F)\xrightarrow {i_{*}} \cdots \xrightarrow {f_{*}} H_{1}(B)\to 0.}[7]: 250
This sequence can be used, for example, to prove Hurewicz's theorem or to compute the homology of loop spaces of the form {\displaystyle \Omega S^{n}\colon }[8]: 162
Hk(ΩSn)={Z∃q∈Z:k=q(n−1)0otherwise.{\displaystyle H_{k}(\Omega S^{n})={\begin{cases}\mathbb {Z} &\exists q\in \mathbb {Z} \colon k=q(n-1)\\0&{\text{otherwise}}\end{cases}}.}
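For instance, for {\displaystyle n=3} the formula places a copy of {\displaystyle \mathbb {Z} } in every even degree:

```latex
H_k(\Omega S^3) \;\cong\;
\begin{cases}
\mathbb{Z} & k = 0, 2, 4, 6, \ldots \\
0 & \text{otherwise.}
\end{cases}
```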
For the special case of a fibrationp:E→Sn{\displaystyle p\colon E\to S^{n}}where the base space is an{\displaystyle n}-sphere with fiberF,{\displaystyle F,}there exist exact sequences (also calledWang sequences) for homology and cohomology:[1]: 456
⋯→Hq(F)→i∗Hq(E)→Hq−n(F)→Hq−1(F)→⋯{\displaystyle \cdots \to H_{q}(F)\xrightarrow {i_{*}} H_{q}(E)\to H_{q-n}(F)\to H_{q-1}(F)\to \cdots }⋯→Hq(E)→i∗Hq(F)→Hq−n+1(F)→Hq+1(E)→⋯{\displaystyle \cdots \to H^{q}(E)\xrightarrow {i^{*}} H^{q}(F)\to H^{q-n+1}(F)\to H^{q+1}(E)\to \cdots }
For a fibrationp:E→B{\displaystyle p\colon E\to B}with fiberF{\displaystyle F}and a fixed commutativeringR{\displaystyle R}with a unit, there exists a contravariantfunctorfrom thefundamental groupoidofB{\displaystyle B}to the category of gradedR{\displaystyle R}-modules, which assigns tob∈B{\displaystyle b\in B}the moduleH∗(Fb,R){\displaystyle H_{*}(F_{b},R)}and to the path class[ω]{\displaystyle [\omega ]}the homomorphismh[ω]∗:H∗(Fω(0),R)→H∗(Fω(1),R),{\displaystyle h[\omega ]_{*}\colon H_{*}(F_{\omega (0)},R)\to H_{*}(F_{\omega (1)},R),}whereh[ω]{\displaystyle h[\omega ]}is a homotopy class in[Fω(0),Fω(1)].{\displaystyle [F_{\omega (0)},F_{\omega (1)}].}
A fibration is calledorientableoverR{\displaystyle R}if for any closed pathω{\displaystyle \omega }inB{\displaystyle B}the following holds:h[ω]∗=1.{\displaystyle h[\omega ]_{*}=1.}[1]: 476
For an orientable fibrationp:E→B{\displaystyle p\colon E\to B}over thefieldK{\displaystyle \mathbb {K} }with fiberF{\displaystyle F}and path connected base space, theEuler characteristicof the total space is given by:
χ(E)=χ(B)χ(F).{\displaystyle \chi (E)=\chi (B)\chi (F).}
Here the Euler characteristics of the base space and the fiber are defined over the fieldK{\displaystyle \mathbb {K} }.[1]: 481 | https://en.wikipedia.org/wiki/Fibration |
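As an illustrative check (not taken from the source), the multiplicativity of the Euler characteristic is consistent with the Hopf fibration {\displaystyle S^{1}\hookrightarrow S^{3}\to S^{2}}:

```latex
\chi(S^3) \;=\; \chi(S^2)\,\chi(S^1) \;=\; 2 \cdot 0 \;=\; 0.
```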
Inphysics, agauge theoryis a type offield theoryin which theLagrangian, and hence the dynamics of the system itself, does not change underlocal transformationsaccording to certain smooth families of operations (Lie groups). Formally, the Lagrangian isinvariantunder these transformations.
The term "gauge" refers to any specific mathematical formalism to regulate redundantdegrees of freedomin the Lagrangian of a physical system. The transformations between possible gauges, calledgauge transformations, form a Lie group—referred to as thesymmetry groupor thegauge groupof the theory. Associated with any Lie group is theLie algebraofgroup generators. For each group generator there necessarily arises a correspondingfield(usually avector field) called thegauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations (calledgauge invariance). When such a theory isquantized, thequantaof the gauge fields are calledgauge bosons. If the symmetry group is non-commutative, then the gauge theory is referred to asnon-abeliangauge theory, the usual example being theYang–Mills theory.
Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed ateverypointin thespacetimein which the physical processes occur, they are said to have aglobal symmetry.Local symmetry, the cornerstone of gauge theories, is a stronger constraint. In fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime (the same way a constant value can be understood as a function of a certain parameter, the output of which is always the same).
Gauge theories are important as the successful field theories explaining the dynamics ofelementary particles.Quantum electrodynamicsis anabeliangauge theory with the symmetry groupU(1)and has one gauge field, theelectromagnetic four-potential, with thephotonbeing the gauge boson. TheStandard Modelis a non-abelian gauge theory with the symmetry group U(1) ×SU(2)×SU(3)and has a total of twelve gauge bosons: thephoton, threeweak bosonsand eightgluons.
Gauge theories are also important in explaininggravitationin the theory ofgeneral relativity. Its case is somewhat unusual in that the gauge field is atensor, theLanczos tensor. Theories ofquantum gravity, beginning withgauge gravitation theory, also postulate the existence of a gauge boson known as thegraviton. Gauge symmetries can be viewed as analogues of theprinciple of general covarianceof general relativity in which the coordinate system can be chosen freely under arbitrarydiffeomorphismsof spacetime. Both gauge invariance and diffeomorphism invariance reflect a redundancy in the description of the system. An alternative theory of gravitation,gauge theory gravity, replaces the principle of general covariance with a true gauge principle with new gauge fields.
Historically, these ideas were first stated in the context ofclassical electromagnetismand later ingeneral relativity. However, the modern importance of gauge symmetries appeared first in therelativistic quantum mechanicsofelectrons–quantum electrodynamics, elaborated on below. Today, gauge theories are useful incondensed matter,nuclearandhigh energy physicsamong other subfields.
The concept and the name of gauge theory derives from the work ofHermann Weylin 1918.[1]Weyl, in an attempt to generalize the geometrical ideas ofgeneral relativityto includeelectromagnetism, conjectured thatEichinvarianzor invariance under the change ofscale(or "gauge") might also be a local symmetry of general relativity. After the development ofquantum mechanics, Weyl,Vladimir Fock[2]andFritz Londonreplaced the simple scale factor with acomplexquantity and turned the scale transformation into a change ofphase, which is aU(1)gauge symmetry. This explained theelectromagnetic fieldeffect on thewave functionof achargedquantum mechanicalparticle. Weyl's 1929 paper introduced the modern concept of gauge invariance[3]subsequently popularized byWolfgang Pauliin his 1941 review.[4]In retrospect,James Clerk Maxwell's formulation, in 1864–65, ofelectrodynamicsin "A Dynamical Theory of the Electromagnetic Field" suggested the possibility of invariance, when he stated that any vector field whose curl vanishes—and can therefore normally be written as agradientof a function—could be added to the vector potential without affecting themagnetic field. Similarly unnoticed,David Hilberthad derived theEinstein field equationsby postulating the invariance of theactionunder a general coordinate transformation. The importance of these symmetry invariances remained unnoticed until Weyl's work.
Inspired by Pauli's descriptions of connection between charge conservation and field theory driven by invariance,Chen Ning Yangsought a field theory foratomic nucleibinding based on conservation of nuclearisospin.[5]: 202In 1954, Yang andRobert Millsgeneralized the gauge invariance of electromagnetism, constructing a theory based on the action of the (non-abelian) SU(2) symmetrygroupon theisospindoublet ofprotonsandneutrons.[6]This is similar to the action of theU(1)group on thespinorfieldsofquantum electrodynamics.
TheYang–Mills theorybecame the prototype theory to resolve some of the confusion inelementary particle physics.
This idea later found application in thequantum field theoryof theweak force, and its unification with electromagnetism in theelectroweaktheory. Gauge theories became even more attractive when it was realized that non-abelian gauge theories reproduced a feature calledasymptotic freedom. Asymptotic freedom was believed to be an important characteristic of strong interactions. This motivated searching for a strong force gauge theory. This theory, now known asquantum chromodynamics, is a gauge theory with the action of the SU(3) group on thecolortriplet ofquarks. TheStandard Modelunifies the description of electromagnetism, weak interactions and strong interactions in the language of gauge theory.
In the 1970s,Michael Atiyahbegan studying the mathematics of solutions to the classicalYang–Millsequations. In 1983, Atiyah's studentSimon Donaldsonbuilt on this work to show that thedifferentiableclassification ofsmooth4-manifoldsis very different from their classificationup tohomeomorphism.[7]Michael Freedmanused Donaldson's work to exhibitexoticR4s, that is, exoticdifferentiable structuresonEuclidean4-dimensional space. This led to an increasing interest in gauge theory for its own sake, independent of its successes in fundamental physics. In 1994,Edward WittenandNathan Seiberginvented gauge-theoretic techniques based onsupersymmetrythat enabled the calculation of certaintopologicalinvariants[8][9](theSeiberg–Witten invariants). These contributions to mathematics from gauge theory have led to a renewed interest in this area.
The importance of gauge theories in physics is exemplified in the success of the mathematical formalism in providing a unified framework to describe thequantum field theoriesofelectromagnetism, theweak forceand thestrong force. This theory, known as theStandard Model, accurately describes experimental predictions regarding three of the fourfundamental forcesof nature, and is a gauge theory with the gauge groupSU(3) × SU(2) × U(1). Modern theories likestring theory, as well asgeneral relativity, are, in one way or another, gauge theories.
Inphysics, the mathematical description of any physical situation usually contains excessdegrees of freedom; the same physical situation is equally well described by many equivalent mathematical configurations. For instance, inNewtonian dynamics, if two configurations are related by aGalilean transformation(aninertialchange of reference frame) they represent the same physical situation. These transformations form agroupof "symmetries" of the theory, and a physical situation corresponds not to an individual mathematical configuration but to a class of configurations related to one another by this symmetry group.
This idea can be generalized to include local as well as global symmetries, analogous to much more abstract "changes of coordinates" in a situation where there is no preferred "inertial" coordinate system that covers the entire physical system. A gauge theory is a mathematical model that has symmetries of this kind, together with a set of techniques for making physical predictions consistent with the symmetries of the model.
When a quantity occurring in the mathematical configuration is not just a number but has some geometrical significance, such as a velocity or an axis of rotation, its representation as numbers arranged in a vector or matrix is also changed by a coordinate transformation. For instance, if one description of a pattern of fluid flow states that the fluid velocity in the neighborhood of (x= 1,y= 0) is 1 m/s in the positivexdirection, then a description of the same situation in which the coordinate system has been rotated clockwise by 90 degrees states that the fluid velocity in the neighborhood of (x= 0,y= −1) is 1 m/s in the negativeydirection. The coordinate transformation has affected both the coordinate system used to identify thelocationof the measurement and the basis in which itsvalueis expressed. As long as this transformation is performed globally (affecting the coordinate basis in the same way at every point), the effect on values that represent therate of changeof some quantity along some path in space and time as it passes through pointPis the same as the effect on values that are truly local toP.
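The rotation described above can be sketched numerically (a minimal illustration; the matrix `R` below encodes the clockwise frame rotation and is an assumption of this sketch, not from the text):

```python
import numpy as np

# Rotating the coordinate system 90 degrees clockwise acts on the
# coordinates of the point and on the components of the vector with
# the same matrix R.
R = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

point = np.array([1.0, 0.0])      # (x, y) = (1, 0)
velocity = np.array([1.0, 0.0])   # 1 m/s in the +x direction

new_point = R @ point             # location in the rotated frame
new_velocity = R @ velocity       # components in the rotated frame

# Matches the text: the point becomes (0, -1) and the velocity
# becomes 1 m/s in the -y direction.
assert np.allclose(new_point, [0.0, -1.0])
assert np.allclose(new_velocity, [0.0, -1.0])
```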
In order to adequately describe physical situations in more complex theories, it is often necessary to introduce a "coordinate basis" for some of the objects of the theory that do not have this simple relationship to the coordinates used to label points in space and time. (In mathematical terms, the theory involves afiber bundlein which the fiber at each point of the base space consists of possible coordinate bases for use when describing the values of objects at that point.) In order to spell out a mathematical configuration, one must choose a particular coordinate basis at each point (alocal sectionof the fiber bundle) and express the values of the objects of the theory (usually "fields" in the physicist's sense) using this basis. Two such mathematical configurations are equivalent (describe the same physical situation) if they are related by a transformation of this abstract coordinate basis (a change of local section, orgauge transformation).
In most gauge theories, the set of possible transformations of the abstract gauge basis at an individual point in space and time is a finite-dimensional Lie group. The simplest such group isU(1), which appears in the modern formulation ofquantum electrodynamics (QED)via its use ofcomplex numbers. QED is generally regarded as the first, and simplest, physical gauge theory. The set of possible gauge transformations of the entire configuration of a given gauge theory also forms a group, thegauge groupof the theory. An element of the gauge group can be parameterized by a smoothly varying function from the points of spacetime to the (finite-dimensional) Lie group, such that the value of the function and its derivatives at each point represents the action of the gauge transformation on the fiber over that point.
A gauge transformation with constant parameter at every point in space and time is analogous to a rigid rotation of the geometric coordinate system; it represents aglobal symmetryof the gauge representation. As in the case of a rigid rotation, this gauge transformation affects expressions that represent the rate of change along a path of some gauge-dependent quantity in the same way as those that represent a truly local quantity. A gauge transformation whose parameter isnota constant function is referred to as alocal symmetry; its effect on expressions that involve aderivativeis qualitatively different from that on expressions that do not. (This is analogous to a non-inertial change of reference frame, which can produce aCoriolis effect.)
The "gauge covariant" version of a gauge theory accounts for this effect by introducing agauge field(in mathematical language, anEhresmann connection) and formulating all rates of change in terms of thecovariant derivativewith respect to this connection. The gauge field becomes an essential part of the description of a mathematical configuration. A configuration in which the gauge field can be eliminated by a gauge transformation has the property that itsfield strength(in mathematical language, itscurvature) is zero everywhere; a gauge theory isnotlimited to these configurations. In other words, the distinguishing characteristic of a gauge theory is that the gauge field does not merely compensate for a poor choice of coordinate system; there is generally no gauge transformation that makes the gauge field vanish.
When analyzing thedynamicsof a gauge theory, the gauge field must be treated as a dynamical variable, similar to other objects in the description of a physical situation. In addition to itsinteractionwith other objects via the covariant derivative, the gauge field typically contributesenergyin the form of a "self-energy" term. One can obtain the equations for the gauge theory by:
This is the sense in which a gauge theory "extends" a global symmetry to a local symmetry, and closely resembles the historical development of the gauge theory of gravity known asgeneral relativity.
Gauge theories used to model the results of physical experiments engage in:
The mathematical descriptions of the "setup information" and the "possible measurement outcomes", or the "boundary conditions" of the experiment, cannot be expressed without reference to a particular coordinate system, including a choice of gauge. Even the supposition that an adequate experiment can be isolated from "external" influence is itself a gauge-dependent statement. Mishandling gauge-dependence calculations in boundary conditions is a frequent source of anomalies, and approaches to anomaly avoidance classify gauge theories[clarification needed].
The two gauge theories mentioned above, continuum electrodynamics and general relativity, are continuum field theories. The techniques of calculation in acontinuum theoryimplicitly assume that:
Determination of the likelihood of possible measurement outcomes proceeds by:
These assumptions have enough validity across a wide range of energy scales and experimental conditions to allow these theories to make accurate predictions about almost all of the phenomena encountered in daily life: light, heat, and electricity, eclipses, spaceflight, etc. They fail only at the smallest and largest scales due to omissions in the theories themselves, and when the mathematical techniques themselves break down, most notably in the case ofturbulenceand otherchaoticphenomena.
Other than these classical continuum field theories, the most widely known gauge theories arequantum field theories, includingquantum electrodynamicsand theStandard Modelof elementary particle physics. The starting point of a quantum field theory is much like that of its continuum analog: a gauge-covariantaction integralthat characterizes "allowable" physical situations according to theprinciple of least action. However, continuum and quantum theories differ significantly in how they handle the excess degrees of freedom represented by gauge transformations. Continuum theories, and most pedagogical treatments of the simplest quantum field theories, use agauge fixingprescription to reduce the orbit of mathematical configurations that represent a given physical situation to a smaller orbit related by a smaller gauge group (the global symmetry group, or perhaps even the trivial group).
More sophisticated quantum field theories, in particular those that involve a non-abelian gauge group, break the gauge symmetry within the techniques ofperturbation theoryby introducing additional fields (theFaddeev–Popov ghosts) and counterterms motivated byanomaly cancellation, in an approach known asBRST quantization. While these concerns are in one sense highly technical, they are also closely related to the nature of measurement, the limits on knowledge of a physical situation, and the interactions between incompletely specified experimental conditions and incompletely understood physical theory.[citation needed]The mathematical techniques that have been developed in order to make gauge theories tractable have found many other applications, fromsolid-state physicsandcrystallographytolow-dimensional topology.
In electrostatics, one can either discuss the electric field, E, or its corresponding electric potential, V. Knowledge of one makes it possible to find the other, except that potentials differing by a constant, {\displaystyle V\mapsto V+C}, correspond to the same electric field. This is because the electric field relates to changes in the potential from one point in space to another, and the constant C would cancel out when subtracting to find the change in potential. In terms of vector calculus, the electric field is the gradient of the potential, {\displaystyle \mathbf {E} =-\nabla V}. Generalizing from static electricity to electromagnetism, we have a second potential, the vector potential A, with

{\displaystyle \mathbf {E} =-\nabla V-{\frac {\partial \mathbf {A} }{\partial t}},\qquad \mathbf {B} =\nabla \times \mathbf {A} .}
The general gauge transformations now become not just {\displaystyle V\mapsto V+C} but

{\displaystyle V\mapsto V-{\frac {\partial f}{\partial t}},\qquad \mathbf {A} \mapsto \mathbf {A} +\nabla f,}

where f is any twice continuously differentiable function that depends on position and time. The electromagnetic fields remain the same under this gauge transformation.
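This invariance can be verified symbolically. The sketch below uses sympy; the particular potentials and the gauge function f are arbitrary illustrative choices, not from the text:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

def fields(V, A):
    """Compute E = -grad V - dA/dt and B = curl A from the potentials."""
    E = sp.Matrix([-sp.diff(V, c) - sp.diff(A[i], t)
                   for i, c in enumerate((x, y, z))])
    B = sp.Matrix([
        sp.diff(A[2], y) - sp.diff(A[1], z),
        sp.diff(A[0], z) - sp.diff(A[2], x),
        sp.diff(A[1], x) - sp.diff(A[0], y),
    ])
    return E, B

# Arbitrary potentials and an arbitrary gauge function f(t, x, y, z)
V = x*y*sp.exp(-t)
A = sp.Matrix([y*z, x*t, sp.sin(x)*t])
f = x**2*t + sp.cos(y)*z

# Gauge transformation: V -> V - df/dt, A -> A + grad f
V2 = V - sp.diff(f, t)
A2 = A + sp.Matrix([sp.diff(f, c) for c in (x, y, z)])

E1, B1 = fields(V, A)
E2, B2 = fields(V2, A2)

# The physical fields are unchanged
assert all(sp.simplify(c) == 0 for c in E1 - E2)
assert all(sp.simplify(c) == 0 for c in B1 - B2)
```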
The following illustrates how local gauge invariance can be "motivated" heuristically starting from global symmetry properties, and how it leads to an interaction between originally non-interacting fields.
Consider a set of {\displaystyle n} non-interacting real scalar fields with equal masses m. This system is described by an action that is the sum of the (usual) action for each scalar field {\displaystyle \varphi _{i}}:

{\displaystyle {\mathcal {S}}=\int \,d^{4}x\sum _{i=1}^{n}\left[{\frac {1}{2}}\partial _{\mu }\varphi _{i}\partial ^{\mu }\varphi _{i}-{\frac {1}{2}}m^{2}\varphi _{i}^{2}\right].}

The Lagrangian (density) can be compactly written as

{\displaystyle {\mathcal {L}}={\frac {1}{2}}(\partial _{\mu }\Phi )^{\mathsf {T}}\partial ^{\mu }\Phi -{\frac {1}{2}}m^{2}\Phi ^{\mathsf {T}}\Phi }

by introducing a vector of fields

{\displaystyle \Phi =(\varphi _{1},\varphi _{2},\ldots ,\varphi _{n})^{\mathsf {T}}.}
The term∂μΦ{\displaystyle \partial _{\mu }\Phi }is thepartial derivativeofΦ{\displaystyle \Phi }along dimensionμ{\displaystyle \mu }.
It is now transparent that the Lagrangian is invariant under the transformation

{\displaystyle \Phi \mapsto \Phi '=G\Phi }
wheneverGis aconstantmatrixbelonging to then-by-northogonal groupO(n). This is seen to preserve the Lagrangian, since the derivative ofΦ′{\displaystyle \Phi '}transforms identically toΦ{\displaystyle \Phi }and both quantities appear insidedot productsin the Lagrangian (orthogonal transformations preserve the dot product).
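This invariance can be checked numerically at a single spacetime point. The sketch below uses illustrative sample values; the names `Phi`, `dPhi` and the mostly-minus metric convention are assumptions of the sketch, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random orthogonal matrix G in O(n), via QR decomposition
G, _ = np.linalg.qr(rng.normal(size=(n, n)))

# Sample values of the field vector Phi and its four-gradient
Phi = rng.normal(size=n)
dPhi = rng.normal(size=(4, n))   # row mu holds partial_mu Phi
m = 1.3

def lagrangian(Phi, dPhi):
    # L = 1/2 d_mu Phi . d^mu Phi - 1/2 m^2 Phi.Phi, metric (+,-,-,-)
    eta = np.diag([1.0, -1.0, -1.0, -1.0])
    kinetic = 0.5 * np.einsum('mi,mn,ni->', dPhi, eta, dPhi)
    return kinetic - 0.5 * m**2 * Phi @ Phi

# A constant G commutes with the derivative: partial_mu(G Phi) = G partial_mu Phi,
# and orthogonal G preserves all the dot products in L.
L1 = lagrangian(Phi, dPhi)
L2 = lagrangian(G @ Phi, dPhi @ G.T)
assert np.isclose(L1, L2)
```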
This characterizes the global symmetry of this particular Lagrangian, and the symmetry group is often called the gauge group; the mathematical term is structure group, especially in the theory of G-structures. Incidentally, Noether's theorem implies that invariance under this group of transformations leads to the conservation of the currents

{\displaystyle J_{\mu }^{a}=i\,\partial _{\mu }\Phi ^{\mathsf {T}}T^{a}\Phi ,}

where the {\displaystyle T^{a}} matrices are generators of the SO(n) group. There is one conserved current for every generator.
Now, demanding that this Lagrangian should havelocalO(n)-invariance requires that theGmatrices (which were earlier constant) should be allowed to become functions of thespacetimecoordinatesx.
In this case, the G matrices do not "pass through" the derivatives: when G = G(x),

{\displaystyle \partial _{\mu }(G\Phi )=G\,\partial _{\mu }\Phi +(\partial _{\mu }G)\Phi \neq G\,\partial _{\mu }\Phi .}

The failure of the derivative to commute with G introduces an additional term (in keeping with the product rule), which spoils the invariance of the Lagrangian. In order to rectify this we define a new derivative operator such that the derivative of {\displaystyle \Phi '} again transforms identically with {\displaystyle \Phi }:

{\displaystyle (D_{\mu }\Phi )'=G\,D_{\mu }\Phi .}
This new "derivative" is called a(gauge) covariant derivativeand takes the form
wheregis called the coupling constant; a quantity defining the strength of an interaction.
After a simple calculation we can see that the gauge field A(x) must transform as follows (with the convention {\displaystyle D_{\mu }=\partial _{\mu }-igA_{\mu }}):

{\displaystyle A_{\mu }'=GA_{\mu }G^{-1}-{\frac {i}{g}}(\partial _{\mu }G)G^{-1}.}
The gauge field is an element of the Lie algebra, and can therefore be expanded as

{\displaystyle A_{\mu }(x)=\sum _{a}A_{\mu }^{a}(x)T^{a}.}
There are therefore as many gauge fields as there are generators of the Lie algebra.
Finally, we now have a locally gauge invariant Lagrangian

{\displaystyle {\mathcal {L}}_{\mathrm {loc} }={\frac {1}{2}}(D_{\mu }\Phi )^{\mathsf {T}}D^{\mu }\Phi -{\frac {1}{2}}m^{2}\Phi ^{\mathsf {T}}\Phi .}
Pauli uses the termgauge transformation of the first typeto mean the transformation ofΦ{\displaystyle \Phi }, while the compensating transformation inA{\displaystyle A}is called agauge transformation of the second type.
The difference between this Lagrangian and the originalglobally gauge-invariantLagrangian is seen to be theinteraction Lagrangian
This term introducesinteractionsbetween thenscalar fields just as a consequence of the demand for local gauge invariance. However, to make this interaction physical and not completely arbitrary, the mediatorA(x) needs to propagate in space. That is dealt with in the next section by adding yet another term,Lgf{\displaystyle {\mathcal {L}}_{\mathrm {gf} }}, to the Lagrangian. In thequantizedversion of the obtainedclassical field theory, thequantaof the gauge fieldA(x) are calledgauge bosons. The interpretation of the interaction Lagrangian in quantum field theory is ofscalarbosonsinteracting by the exchange of these gauge bosons.
The picture of a classical gauge theory developed in the previous section is almost complete, except for the fact that to define the covariant derivatives D, one needs to know the value of the gauge field {\displaystyle A(x)} at all spacetime points. Instead of manually specifying the values of this field, it can be given as the solution to a field equation. Further requiring that the Lagrangian that generates this field equation is locally gauge invariant as well, one possible form for the gauge field Lagrangian is

{\displaystyle {\mathcal {L}}_{\mathrm {gf} }=-{\frac {1}{2}}\operatorname {Tr} (F^{\mu \nu }F_{\mu \nu })=-{\frac {1}{4}}F^{a\,\mu \nu }F_{\mu \nu }^{a},}

where the {\displaystyle F_{\mu \nu }^{a}} are obtained from the potentials {\displaystyle A_{\mu }^{a}}, being the components of {\displaystyle A(x)}, by

{\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c},}
and thefabc{\displaystyle f^{abc}}are thestructure constantsof the Lie algebra of the generators of the gauge group. This formulation of the Lagrangian is called aYang–Mills action. Other gauge invariant actions also exist (e.g.,nonlinear electrodynamics,Born–Infeld action,Chern–Simons model,theta term, etc.).
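As a concrete illustration of structure constants, the sketch below uses su(2) with Pauli-matrix generators as a hypothetical example gauge algebra (an assumption of this sketch, not a choice made in the text) and extracts the {\displaystyle f^{abc}} numerically from the commutators of the generators:

```python
import itertools
import numpy as np

# Pauli matrices; the generators are T^a = sigma^a / 2
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
T = [s / 2 for s in sigma]

# From [T^a, T^b] = i f^{abc} T^c and Tr(T^a T^b) = delta^{ab}/2,
# the structure constants are f^{abc} = -2i Tr([T^a, T^b] T^c).
f = np.zeros((3, 3, 3))
for a, b, c in itertools.product(range(3), repeat=3):
    comm = T[a] @ T[b] - T[b] @ T[a]
    f[a, b, c] = np.real(-2j * np.trace(comm @ T[c]))

# For su(2) the structure constants are the Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
assert np.allclose(f, eps)
```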
In this Lagrangian term there is no field whose transformation counterweighs the one ofA{\displaystyle A}. Invariance of this term under gauge transformations is a particular case ofa prioriclassical (geometrical) symmetry. This symmetry must be restricted in order to perform quantization, the procedure being denominatedgauge fixing, but even after restriction, gauge transformations may be possible.[12]
The complete Lagrangian for the gauge theory is now the sum of the locally gauge invariant matter Lagrangian and the gauge field term:

{\displaystyle {\mathcal {L}}={\mathcal {L}}_{\mathrm {loc} }+{\mathcal {L}}_{\mathrm {gf} }.}
As a simple application of the formalism developed in the previous sections, consider the case of electrodynamics, with only the electron field. The bare-bones action that generates the electron field's Dirac equation is

{\displaystyle {\mathcal {S}}=\int {\bar {\psi }}\left(i\hbar c\,\gamma ^{\mu }\partial _{\mu }-mc^{2}\right)\psi \,\mathrm {d} ^{4}x.}

The global symmetry for this system is

{\displaystyle \psi \mapsto e^{i\theta }\psi .}
The gauge group here isU(1), just rotations of thephase angleof the field, with the particular rotation determined by the constantθ.
"Localising" this symmetry implies the replacement of θ by θ(x). An appropriate covariant derivative is then (in one common sign convention)

{\displaystyle D_{\mu }=\partial _{\mu }-i{\frac {e}{\hbar }}A_{\mu }(x).}
Identifying the "charge" e (not to be confused with the mathematical constant e in the symmetry description) with the usual electric charge (this is the origin of the usage of the term in gauge theories), and the gauge field A(x) with the four-vector potential of the electromagnetic field results in an interaction Lagrangian

{\displaystyle {\mathcal {L}}_{\mathrm {int} }={\frac {e}{\hbar }}{\bar {\psi }}(x)\gamma ^{\mu }\psi (x)A_{\mu }(x)=J^{\mu }(x)A_{\mu }(x),}
whereJμ(x)=eℏψ¯(x)γμψ(x){\displaystyle J^{\mu }(x)={\frac {e}{\hbar }}{\bar {\psi }}(x)\gamma ^{\mu }\psi (x)}is the electric currentfour vectorin theDirac field. Thegauge principleis therefore seen to naturally introduce the so-calledminimal couplingof the electromagnetic field to the electron field.
Adding a Lagrangian for the gauge fieldAμ(x){\displaystyle A_{\mu }(x)}in terms of thefield strength tensorexactly as in electrodynamics, one obtains the Lagrangian used as the starting point inquantum electrodynamics.
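Schematically, the result takes the familiar form (a sketch in the same ħ, c conventions as the action above; signs and normalizations depend on convention):

```latex
\mathcal{L}_{\mathrm{QED}}
  = \bar{\psi}\left(i\hbar c\,\gamma^{\mu}D_{\mu} - mc^{2}\right)\psi
  - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu},
\qquad
F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}.
```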
Gauge theories are usually discussed in the language ofdifferential geometry. Mathematically, agaugeis just a choice of a (local)sectionof someprincipal bundle. Agauge transformationis just a transformation between two such sections.
Although gauge theory is dominated by the study ofconnections(primarily because it's mainly studied byhigh-energy physicists), the idea of a connection is not central to gauge theory in general. In fact, a result in general gauge theory shows thataffine representations(i.e., affinemodules) of the gauge transformations can be classified as sections of ajet bundlesatisfying certain properties. There are representations that transform covariantly pointwise (called by physicists gauge transformations of the first kind), representations that transform as aconnection form(called by physicists gauge transformations of the second kind, an affine representation)—and other more general representations, such as the B field inBF theory. There are more generalnonlinear representations(realizations), but these are extremely complicated. Still,nonlinear sigma modelstransform nonlinearly, so there are applications.
If there is a principal bundle P whose base space is space or spacetime and structure group is a Lie group, then the sections of P form a principal homogeneous space of the group of gauge transformations.
Connections (gauge connections) define this principal bundle, yielding a covariant derivative ∇ in each associated vector bundle. If a local frame is chosen (a local basis of sections), then this covariant derivative is represented by the connection form A, a Lie-algebra-valued 1-form, which is called the gauge potential in physics. This is evidently not an intrinsic but a frame-dependent quantity. The curvature form F, a Lie-algebra-valued 2-form that is an intrinsic quantity, is constructed from a connection form by
where d stands for the exterior derivative and ∧{\displaystyle \wedge } stands for the wedge product. (A{\displaystyle \mathbf {A} } is an element of the vector space spanned by the generators Ta{\displaystyle T^{a}}, and so the components of A{\displaystyle \mathbf {A} } do not commute with one another. Hence the wedge product A∧A{\displaystyle \mathbf {A} \wedge \mathbf {A} } does not vanish.)
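Since the commutator term is what distinguishes the non-abelian curvature from the abelian one, a small numeric sketch can make this concrete. The following is illustrative only: it takes two constant matrix-valued connection components (Pauli matrices, a standard choice related to su(2) generators) and shows that the wedge/commutator term survives.

```python
import numpy as np

# Illustration: constant su(2)-type connection components built from Pauli
# matrices. For constant A_mu the derivative terms of F = dA + A ^ A vanish,
# so the curvature reduces to the commutator [A_1, A_2].
A1 = np.array([[0, 1], [1, 0]], dtype=complex)      # sigma_1
A2 = np.array([[0, -1j], [1j, 0]], dtype=complex)   # sigma_2

F_12 = A1 @ A2 - A2 @ A1                            # the A ^ A (commutator) term
print(np.allclose(F_12, 2j * np.diag([1, -1])))     # [sigma_1, sigma_2] = 2i sigma_3: True
```

For an abelian group the components would be ordinary numbers, the commutator would vanish, and F would reduce to dA as in electrodynamics.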
Infinitesimal gauge transformations form a Lie algebra, which is characterized by a smooth Lie-algebra-valued scalar, ε. Under such an infinitesimal gauge transformation,
where [⋅,⋅]{\displaystyle [\cdot ,\cdot ]} is the Lie bracket.
A useful property is that if δεX=εX{\displaystyle \delta _{\varepsilon }X=\varepsilon X}, then δεDX=εDX{\displaystyle \delta _{\varepsilon }DX=\varepsilon DX}, where D is the covariant derivative.
Also, δεF=[ε,F]{\displaystyle \delta _{\varepsilon }\mathbf {F} =[\varepsilon ,\mathbf {F} ]}, which means F{\displaystyle \mathbf {F} } transforms covariantly.
In general, not all gauge transformations can be generated by infinitesimal gauge transformations. An example is when the base manifold is a compact manifold without boundary such that the homotopy class of mappings from that manifold to the Lie group is nontrivial. See instanton for an example.
The Yang–Mills action is now given by
where ⋆{\displaystyle {\star }} is the Hodge star operator and the integral is defined as in differential geometry.
A quantity which is gauge-invariant (i.e., invariant under gauge transformations) is the Wilson loop, which is defined over any closed path, γ, as follows:
where χ is the character of a complex representation ρ and P{\displaystyle {\mathcal {P}}} represents the path-ordered operator.
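In the abelian case the path ordering is trivial and the Wilson loop reduces to the exponential of the ordinary line integral of the gauge field around γ. The sketch below is a hedged numeric illustration of that special case only: for a pure-gauge field A = ∇f, the holonomy around a closed loop is 1. The gauge function f and the loop are arbitrary choices, not anything from the text.

```python
import numpy as np

# Abelian Wilson loop sketch: W = exp(i * line integral of A around gamma).
# For pure gauge A = grad(f), the closed-loop integral vanishes, so W = 1.
f = lambda x, y: x**2 * y - np.cos(y)            # arbitrary smooth gauge function

def grad(x, y, h=1e-6):                          # central-difference gradient of f
    return np.array([(f(x + h, y) - f(x - h, y)) / (2 * h),
                     (f(x, y + h) - f(x, y - h)) / (2 * h)])

t = np.linspace(0.0, 2 * np.pi, 20001)
path = np.column_stack([np.cos(t), np.sin(t)])   # closed unit-circle path
mid = 0.5 * (path[1:] + path[:-1])               # midpoint-rule sample points
A_mid = np.array([grad(x, y) for x, y in mid])
integral = np.sum(np.einsum('ij,ij->i', A_mid, np.diff(path, axis=0)))
W = np.exp(1j * integral)
print(np.isclose(W, 1.0))                        # trivial holonomy: True
```

For a non-abelian group one would instead multiply path-ordered matrix exponentials link by link, as in lattice gauge theory.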
The formalism of gauge theory carries over to a general setting. For example, it is sufficient to ask that a vector bundle have a metric connection; when one does so, one finds that the metric connection satisfies the Yang–Mills equations of motion.
Gauge theories may be quantized by specialization of methods which are applicable to any quantum field theory. However, because of the subtleties imposed by the gauge constraints (see the section on mathematical formalism above), there are many technical problems to be solved which do not arise in other field theories. At the same time, the richer structure of gauge theories allows simplification of some computations: for example, Ward identities connect different renormalization constants.
The first gauge theory quantized was quantum electrodynamics (QED). The first methods developed for this involved gauge fixing and then applying canonical quantization. The Gupta–Bleuler method was also developed to handle this problem. Non-abelian gauge theories are now handled by a variety of means. Methods for quantization are covered in the article on quantization.
The main point of quantization is to be able to compute quantum amplitudes for various processes allowed by the theory. Technically, they reduce to the computations of certain correlation functions in the vacuum state. This involves a renormalization of the theory.
When the running coupling of the theory is small enough, then all required quantities may be computed in perturbation theory. Quantization schemes intended to simplify such computations (such as canonical quantization) may be called perturbative quantization schemes. At present some of these methods lead to the most precise experimental tests of gauge theories.
However, in most gauge theories, there are many interesting questions which are non-perturbative. Quantization schemes suited to these problems (such as lattice gauge theory) may be called non-perturbative quantization schemes. Precise computations in such schemes often require supercomputing, and they are therefore currently less well developed than other schemes.
Some of the symmetries of the classical theory are then seen not to hold in the quantum theory, a phenomenon called an anomaly. Among the most well known are:
A pure gauge is the set of field configurations obtained by a gauge transformation of the null-field configuration, i.e., a gauge transform of zero. It is thus a particular "gauge orbit" in the space of field configurations.
Thus, in the abelian case, where Aμ(x)→Aμ′(x)=Aμ(x)+∂μf(x){\displaystyle A_{\mu }(x)\rightarrow A'_{\mu }(x)=A_{\mu }(x)+\partial _{\mu }f(x)}, the pure gauge is just the set of field configurations Aμ′(x)=∂μf(x){\displaystyle A'_{\mu }(x)=\partial _{\mu }f(x)} for all f(x).
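That a pure-gauge configuration carries no field strength can be checked numerically: for any smooth gauge function f, the abelian field strength of Aμ = ∂μf vanishes because mixed partial derivatives commute. The sketch below, with an arbitrarily chosen sample f, is an illustration and not part of the original text.

```python
import numpy as np

# Numeric check: for abelian pure gauge A_mu = d_mu f, the field strength
# F_{mu nu} = d_mu A_nu - d_nu A_mu vanishes (mixed partials commute).
h = 1e-4
f = lambda x, y: np.sin(x) * np.cos(2.0 * y)     # arbitrary smooth gauge function

def d(g, axis, x, y):
    """Central finite difference of g along the given axis."""
    ex, ey = ((h, 0.0), (0.0, h))[axis]
    return (g(x + ex, y + ey) - g(x - ex, y - ey)) / (2.0 * h)

A0 = lambda x, y: d(f, 0, x, y)                  # A_0 = d_0 f
A1 = lambda x, y: d(f, 1, x, y)                  # A_1 = d_1 f

F_01 = d(A1, 0, 0.3, 0.7) - d(A0, 1, 0.3, 0.7)   # F_01 at a sample point
print(abs(F_01) < 1e-6)                          # True: pure gauge has F = 0
```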
In differential topology, the Hopf fibration (also known as the Hopf bundle or Hopf map) describes a 3-sphere (a hypersphere in four-dimensional space) in terms of circles and an ordinary sphere. Discovered by Heinz Hopf in 1931, it is an influential early example of a fiber bundle. Technically, Hopf found a many-to-one continuous function (or "map") from the 3-sphere onto the 2-sphere such that each distinct point of the 2-sphere is mapped from a distinct great circle of the 3-sphere (Hopf 1931).[1] Thus the 3-sphere is composed of fibers, where each fiber is a circle, one for each point of the 2-sphere.
This fiber bundle structure is denoted
meaning that the fiber space S1 (a circle) is embedded in the total space S3 (the 3-sphere), and p: S3 → S2 (Hopf's map) projects S3 onto the base space S2 (the ordinary 2-sphere). The Hopf fibration, like any fiber bundle, has the important property that it is locally a product space. However, it is not a trivial fiber bundle, i.e., S3 is not globally a product of S2 and S1, although locally it is indistinguishable from it.
This has many implications: for example, the existence of this bundle shows that the higher homotopy groups of spheres are not trivial in general. It also provides a basic example of a principal bundle, by identifying the fiber with the circle group.
Stereographic projection of the Hopf fibration induces a remarkable structure on R3, in which all of 3-dimensional space, except for the z-axis, is filled with nested tori made of linking Villarceau circles. Here each fiber projects to a circle in space (one of which is a line, thought of as a "circle through infinity"). Each torus is the stereographic projection of the inverse image of a circle of latitude of the 2-sphere. (Topologically, a torus is the product of two circles.) These tori are illustrated in the images at right. When R3 is compressed to the boundary of a ball, some geometric structure is lost although the topological structure is retained (see Topology and geometry). The loops are homeomorphic to circles, although they are not geometric circles.
There are numerous generalizations of the Hopf fibration. The unit sphere in complex coordinate space Cn+1 fibers naturally over the complex projective space CPn with circles as fibers, and there are also real, quaternionic,[2] and octonionic versions of these fibrations. In particular, the Hopf fibration belongs to a family of four fiber bundles in which the total space, base space, and fiber space are all spheres:
By Adams's theorem such fibrations can occur only in these dimensions.
For any natural number n, an n-dimensional sphere, or n-sphere, can be defined as the set of points in an (n+1){\displaystyle (n+1)}-dimensional space which are a fixed distance from a central point. For concreteness, the central point can be taken to be the origin, and the distance of the points on the sphere from this origin can be assumed to be a unit length. With this convention, the n-sphere, Sn{\displaystyle S^{n}}, consists of the points (x1,x2,…,xn+1){\displaystyle (x_{1},x_{2},\ldots ,x_{n+1})} in Rn+1{\displaystyle \mathbb {R} ^{n+1}} with x₁² + x₂² + ⋯ + xₙ₊₁² = 1. For example, the 3-sphere consists of the points (x1, x2, x3, x4) in R4 with x₁² + x₂² + x₃² + x₄² = 1.
The Hopf fibration p: S3 → S2 of the 3-sphere over the 2-sphere can be defined in several ways.
Identify R4 with C2 (where C denotes the complex numbers) by writing:
and identify R3 with C×R by writing
Thus S3 is identified with the subset of all (z0, z1) in C2 such that |z0|² + |z1|² = 1, and S2 is identified with the subset of all (z, x) in C×R such that |z|² + x² = 1. (Here, for a complex number z = x + iy, its squared absolute value is |z|² = z z∗ = x² + y², where the star denotes the complex conjugate.) Then the Hopf fibration p is defined by
The first component is a complex number, whereas the second component is real. Any point on the 3-sphere must have the property that |z0|² + |z1|² = 1. If that is so, then p(z0, z1) lies on the unit 2-sphere in C×R, as may be shown by adding the squares of the absolute values of the complex and real components of p.
Furthermore, if two points on the 3-sphere map to the same point on the 2-sphere, i.e., if p(z0, z1) = p(w0, w1), then (w0, w1) must equal (λz0, λz1) for some complex number λ with |λ|² = 1. The converse is also true; any two points on the 3-sphere that differ by a common complex factor λ map to the same point on the 2-sphere. These conclusions follow because the complex factor λ cancels with its complex conjugate λ∗ in both parts of p: in the complex component 2z0z1∗ and in the real component |z0|² − |z1|².
Since the set of complex numbers λ with |λ|² = 1 forms the unit circle in the complex plane, it follows that for each point m in S2, the inverse image p−1(m) is a circle, i.e., p−1(m) ≅ S1. Thus the 3-sphere is realized as a disjoint union of these circular fibers.
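The algebra above is easy to confirm numerically. The sketch below, with an arbitrarily chosen point, checks that p maps S3 into S2 and that multiplying (z0, z1) by a unit complex number λ does not change the image, so the whole circle {(λz0, λz1)} lies in one fiber.

```python
import numpy as np

rng = np.random.default_rng(0)

def hopf(z0, z1):
    # p(z0, z1) = (2 z0 conj(z1), |z0|^2 - |z1|^2) in C x R
    return 2 * z0 * np.conj(z1), abs(z0) ** 2 - abs(z1) ** 2

# a random point on S^3, viewed inside C^2
v = rng.normal(size=4)
v /= np.linalg.norm(v)
z0, z1 = complex(v[0], v[1]), complex(v[2], v[3])

z, x = hopf(z0, z1)
print(np.isclose(abs(z) ** 2 + x ** 2, 1.0))    # image lies on S^2: True

# multiplying by a unit complex number lam moves along the fiber
lam = np.exp(1j * 0.83)
z2, x2 = hopf(lam * z0, lam * z1)
print(np.isclose(z, z2) and np.isclose(x, x2))  # same image point: True
```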
A direct parametrization of the 3-sphere employing the Hopf map is as follows.[3]
or in Euclidean R4
where η runs over the range from 0 to π/2, ξ1 runs over the range from 0 to 2π, and ξ2 can take any value from 0 to 4π. Every value of η, except 0 and π/2, which specify circles, specifies a separate flat torus in the 3-sphere, and one round trip (0 to 4π) of either ξ1 or ξ2 traces one full circle around both limbs of the torus.
A mapping of the above parametrization to the 2-sphere is as follows, with points on the circles parametrized by ξ2.
A geometric interpretation of the fibration may be obtained using the complex projective line, CP1, which is defined to be the set of all complex one-dimensional subspaces of C2. Equivalently, CP1 is the quotient of C2∖{0} by the equivalence relation which identifies (z0, z1) with (λz0, λz1) for any nonzero complex number λ. On any complex line in C2 there is a circle of unit norm, and so the restriction of the quotient map to the points of unit norm is a fibration of S3 over CP1.
CP1 is diffeomorphic to a 2-sphere: indeed it can be identified with the Riemann sphere C∞ = C ∪ {∞}, which is the one-point compactification of C (obtained by adding a point at infinity). The formula given for p above defines an explicit diffeomorphism between the complex projective line and the ordinary 2-sphere in 3-dimensional space. Alternatively, the point (z0, z1) can be mapped to the ratio z1/z0 in the Riemann sphere C∞.
The Hopf fibration defines a fiber bundle, with bundle projection p. This means that it has a "local product structure", in the sense that every point of the 2-sphere has some neighborhood U whose inverse image in the 3-sphere can be identified with the product of U and a circle: p−1(U) ≅ U×S1. Such a fibration is said to be locally trivial.
For the Hopf fibration, it is enough to remove a single point m from S2 and the corresponding circle p−1(m) from S3; thus one can take U = S2∖{m}, and any point in S2 has a neighborhood of this form.
Another geometric interpretation of the Hopf fibration can be obtained by considering rotations of the 2-sphere in ordinary 3-dimensional space. The rotation group SO(3) has a double cover, the spin group Spin(3), diffeomorphic to the 3-sphere. The spin group acts transitively on S2 by rotations. The stabilizer of a point is isomorphic to the circle group; its elements are angles of rotation leaving the given point unmoved, all sharing the axis connecting that point to the sphere's center. It follows easily that the 3-sphere is a principal circle bundle over the 2-sphere, and this is the Hopf fibration.
To make this more explicit, there are two approaches: the group Spin(3) can either be identified with the group Sp(1) of unit quaternions, or with the special unitary group SU(2).
In the first approach, a vector (x1, x2, x3, x4) in R4 is interpreted as a quaternion q ∈ H by writing
The 3-sphere is then identified with the versors, the quaternions of unit norm, those q ∈ H for which |q|² = 1, where |q|² = q q∗, which is equal to x₁² + x₂² + x₃² + x₄² for q as above.
On the other hand, a vector (y1, y2, y3) in R3 can be interpreted as a pure quaternion
Then, as is well known since Cayley (1845), the mapping
is a rotation in R3: indeed it is clearly an isometry, since |q p q∗|² = q p q∗ q p∗ q∗ = q p p∗ q∗ = |p|², and it is not hard to check that it preserves orientation.
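The isometry property can be checked directly with a small quaternion-arithmetic sketch; the Hamilton product is written out by hand, and the sample vectors are arbitrary choices.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions, components ordered (w, x, y, z)."""
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

conj = lambda q: q * np.array([1, -1, -1, -1])   # quaternion conjugate q*

rng = np.random.default_rng(1)
q = rng.normal(size=4); q /= np.linalg.norm(q)   # a random versor (|q| = 1)
p = np.array([0.0, 2.0, -1.0, 0.5])              # pure quaternion = vector in R^3

r = qmul(qmul(q, p), conj(q))                    # the map p -> q p q*
print(np.isclose(r[0], 0.0))                     # result is again pure: True
print(np.isclose(np.linalg.norm(r), np.linalg.norm(p)))  # isometry: True
```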
In fact, this identifies the group of versors with the group of rotations of R3, modulo the fact that the versors q and −q determine the same rotation. As noted above, the rotations act transitively on S2, and the set of versors q which fix a given right versor p have the form q = u + vp, where u and v are real numbers with u² + v² = 1. This is a circle subgroup. For concreteness, one can take p = k, and then the Hopf fibration can be defined as the map sending a versor ω to ωkω∗. All the quaternions ωq, where q is one of the circle of versors that fix k, get mapped to the same thing (which happens to be one of the two 180° rotations rotating k to the same place as ω does).
Another way to look at this fibration is that every versor ω moves the plane spanned by {1, k} to a new plane spanned by {ω, ωk}. Any quaternion ωq, where q is one of the circle of versors that fix k, will have the same effect. We put all these into one fibre, and the fibres can be mapped one-to-one onto the 2-sphere of 180° rotations, which is the range of ωkω∗.
This approach is related to the direct construction by identifying a quaternion q = x1 + ix2 + jx3 + kx4 with the 2×2 matrix:
This identifies the group of versors with SU(2), and the imaginary quaternions with the skew-hermitian 2×2 matrices (isomorphic to C×R).
The rotation induced by a unit quaternion q = w + ix + jy + kz is given explicitly by the orthogonal matrix
Here we find an explicit real formula for the bundle projection, by noting that the fixed unit vector along the z axis, (0,0,1), rotates to another unit vector,
which is a continuous function of (w, x, y, z). That is, the image of q is the point on the 2-sphere where it sends the unit vector along the z axis. The fiber for a given point on S2 consists of all those unit quaternions that send the unit vector there.
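A hedged numeric sketch of this projection: writing the Hamilton product out by hand, the code below sends q to the vector part of q k q∗, checks that the image is a unit vector, and verifies that right-multiplying q by a rotation about the z axis (an element of the stabilizer circle) leaves the image unchanged, so such quaternions lie in the same fiber.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions, components ordered (w, x, y, z)."""
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

conj = lambda q: q * np.array([1, -1, -1, -1])
k = np.array([0.0, 0.0, 0.0, 1.0])               # the pure quaternion k ~ (0, 0, 1)

def project(q):
    """Bundle projection: the vector part of q k q*."""
    return qmul(qmul(q, k), conj(q))[1:]

rng = np.random.default_rng(2)
q = rng.normal(size=4); q /= np.linalg.norm(q)   # a random versor

m = project(q)
print(np.isclose(np.linalg.norm(m), 1.0))        # image is a unit vector: True

# right-multiplying by a rotation about the z axis stays in the same fiber
theta = 0.6
q_theta = np.array([np.cos(theta), 0.0, 0.0, np.sin(theta)])
print(np.allclose(project(qmul(q, q_theta)), m)) # same image point: True
```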
We can also write an explicit formula for the fiber over a point (a, b, c) in S2. Multiplication of unit quaternions produces composition of rotations, and
is a rotation by 2θ around the z axis. As θ varies, this sweeps out a great circle of S3, our prototypical fiber. So long as the base point, (a, b, c), is not the antipode, (0, 0, −1), the quaternion
will send (0, 0, 1) to (a, b, c). Thus the fiber of (a, b, c) is given by quaternions of the form q(a,b,c)qθ, which are the S3 points
Since multiplication by q(a,b,c) acts as a rotation of quaternion space, the fiber is not merely a topological circle; it is a geometric circle.
The final fiber, for (0, 0, −1), can be given by defining q(0,0,−1) to equal i, producing
which completes the bundle. But note that this one-to-one mapping between S3 and S2×S1 is not continuous on this circle, reflecting the fact that S3 is not topologically equivalent to S2×S1.
Thus, a simple way of visualizing the Hopf fibration is as follows. Any point on the 3-sphere is equivalent to a quaternion, which in turn is equivalent to a particular rotation of a Cartesian coordinate frame in three dimensions. The set of all possible quaternions produces the set of all possible rotations, which moves the tip of one unit vector of such a coordinate frame (say, the z vector) to all possible points on a unit 2-sphere. However, fixing the tip of the z vector does not specify the rotation fully; a further rotation is possible about the z-axis. Thus, the 3-sphere is mapped onto the 2-sphere, plus a single rotation.
The rotation can be represented using the Euler angles θ, φ, and ψ. The Hopf mapping maps the rotation to the point on the 2-sphere given by θ and φ, and the associated circle is parametrized by ψ. Note that when θ = π the Euler angles φ and ψ are not well defined individually, so we do not have a one-to-one mapping (or a one-to-two mapping) between the 3-torus of (θ, φ, ψ) and S3.
If the Hopf fibration is treated as a vector field in 3-dimensional space, then there is a solution to the (compressible, non-viscous) Navier–Stokes equations of fluid dynamics in which the fluid flows along the circles of the projection of the Hopf fibration into 3-dimensional space. The magnitudes of the velocities, the density, and the pressure can be chosen at each point to satisfy the equations. All these quantities fall to zero going away from the centre. If a is the distance to the inner ring, the velocity, pressure, and density fields are given by:
for arbitrary constants A and B. Similar patterns of fields are found as soliton solutions of magnetohydrodynamics:[4]
The Hopf construction, viewed as a fiber bundle p: S3 → CP1, admits several generalizations, which are also often known as Hopf fibrations. First, one can replace the projective line by an n-dimensional projective space. Second, one can replace the complex numbers by any (real) division algebra, including (for n = 1) the octonions.
A real version of the Hopf fibration is obtained by regarding the circle S1 as a subset of R2 in the usual way and by identifying antipodal points. This gives a fiber bundle S1 → RP1 over the real projective line with fiber S0 = {1, −1}. Just as CP1 is diffeomorphic to a sphere, RP1 is diffeomorphic to a circle.
More generally, the n-sphere Sn fibers over real projective space RPn with fiber S0.
The Hopf construction gives circle bundles p: S2n+1 → CPn over complex projective space. This is actually the restriction of the tautological line bundle over CPn to the unit sphere in Cn+1.
Similarly, one can regard S4n+3 as lying in Hn+1 (quaternionic n-space) and factor out by unit quaternion (= S3) multiplication to get the quaternionic projective space HPn. In particular, since S4 = HP1, there is a bundle S7 → S4 with fiber S3.
A similar construction with the octonions yields a bundle S15 → S8 with fiber S7. But the sphere S31 does not fiber over S16 with fiber S15. One can regard S8 as the octonionic projective line OP1. Although one can also define an octonionic projective plane OP2, the sphere S23 does not fiber over OP2 with fiber S7.[5][6]
Sometimes the term "Hopf fibration" is restricted to the fibrations between spheres obtained above, which are
As a consequence of Adams's theorem, fiber bundles with spheres as total space, base space, and fiber can occur only in these dimensions.
Fiber bundles with similar properties, but different from the Hopf fibrations, were used by John Milnor to construct exotic spheres.
The Hopf fibration has many implications, some purely attractive, others deeper. For example, stereographic projection S3 → R3 induces a remarkable structure in R3, which in turn illuminates the topology of the bundle (Lyons 2003). Stereographic projection preserves circles and maps the Hopf fibers to geometrically perfect circles in R3 which fill space. Here there is one exception: the Hopf circle containing the projection point maps to a straight line in R3, a "circle through infinity".
The fibers over a circle of latitude on S2 form a torus in S3 (topologically, a torus is the product of two circles), and these project to nested tori in R3 which also fill space. The individual fibers map to linking Villarceau circles on these tori, with the exception of the circle through the projection point and the one through its opposite point: the former maps to a straight line, the latter to a unit circle perpendicular to, and centered on, this line, which may be viewed as a degenerate torus whose minor radius has shrunk to zero. Every other fiber image encircles the line as well, and so, by symmetry, each circle is linked through every circle, both in R3 and in S3. Two such linking circles form a Hopf link in R3.
Hopf proved that the Hopf map has Hopf invariant 1, and therefore is not null-homotopic. In fact it generates the homotopy group π3(S2) and has infinite order.
In quantum mechanics, the Riemann sphere is known as the Bloch sphere, and the Hopf fibration describes the topological structure of a quantum mechanical two-level system or qubit. Similarly, the topology of a pair of entangled two-level systems is given by the Hopf fibration
(Mosseri & Dandoloff 2001). Moreover, the Hopf fibration is equivalent to the fiber bundle structure of the Dirac monopole.[7]
The Hopf fibration has also found applications in robotics, where it was used to generate uniform samples on SO(3) for the probabilistic roadmap algorithm in motion planning.[8] It has found application as well in the automatic control of quadrotors.[9][10]
In mathematics, an I-bundle is a fiber bundle whose fiber is an interval and whose base is a manifold. Any kind of interval (open, closed, semi-open, semi-closed, bounded or unbounded, compact, even a ray) can be the fiber. An I-bundle is said to be twisted if it is not trivial.
Two simple examples of I-bundles are the annulus and the Möbius band, the only two possible I-bundles over the circle S1{\displaystyle S^{1}}. The annulus is a trivial or untwisted bundle because it corresponds to the Cartesian product S1×I{\displaystyle S^{1}\times I}, and the Möbius band is a non-trivial or twisted bundle. Both bundles are 2-manifolds, but the annulus is an orientable manifold while the Möbius band is a non-orientable manifold.
Curiously, there are only two kinds of I-bundles when the base manifold is any surface but the Klein bottle K{\displaystyle K}. That surface has three I-bundles: the trivial bundle K×I{\displaystyle K\times I} and two twisted bundles.
Together with the Seifert fiber spaces, I-bundles are fundamental elementary building blocks for the description of three-dimensional spaces. These observations are simple, well-known facts about elementary 3-manifolds.
Line bundles are both I-bundles and vector bundles of rank one. When considering I-bundles, one is interested mostly in their topological properties and not their possible vector properties, as one might be for line bundles.
In differential geometry, a field in mathematics, a natural bundle is any fiber bundle associated to the s-frame bundle Fs(M){\displaystyle F^{s}(M)} for some s≥1{\displaystyle s\geq 1}. It turns out that its transition functions depend functionally on local changes of coordinates in the base manifold M{\displaystyle M}, together with their partial derivatives up to order at most s{\displaystyle s}.[1]
The concept of a natural bundle was introduced by Albert Nijenhuis as a modern reformulation of the classical concept of an arbitrary bundle of geometric objects.[2]
Let Mf{\displaystyle Mf} denote the category of smooth manifolds and smooth maps, and Mfn{\displaystyle Mf_{n}} the category of smooth n{\displaystyle n}-dimensional manifolds and local diffeomorphisms. Consider also the category FM{\displaystyle {\mathcal {FM}}} of fibred manifolds and bundle morphisms, and the functor B:FM→Mf{\displaystyle B:{\mathcal {FM}}\to {\mathcal {M}}f} associating to any fibred manifold its base manifold.
A natural bundle (or bundle functor) is a functor F:Mfn→FM{\displaystyle F:{\mathcal {M}}f_{n}\to {\mathcal {FM}}} satisfying the following three properties:
As a consequence of the first condition, one has a natural transformation p:F→B{\displaystyle p:F\to B}.
A natural bundle F:Mfn→Mf{\displaystyle F:Mf_{n}\to Mf} is called of finite order r{\displaystyle r} if, for every local diffeomorphism f:M→N{\displaystyle f:M\to N} and every point x∈M{\displaystyle x\in M}, the map F(f)x:F(M)x→F(N)f(x){\displaystyle F(f)_{x}:F(M)_{x}\to F(N)_{f(x)}} depends only on the jet jxrf{\displaystyle j_{x}^{r}f}. Equivalently, for every pair of local diffeomorphisms f,g:M→N{\displaystyle f,g:M\to N} and every point x∈M{\displaystyle x\in M}, one has jxrf=jxrg⇒F(f)|F(M)x=F(g)|F(M)x.{\displaystyle j_{x}^{r}f=j_{x}^{r}g\Rightarrow F(f)|_{F(M)_{x}}=F(g)|_{F(M)_{x}}.} Natural bundles of order r{\displaystyle r} coincide with the fibre bundles associated to the r{\displaystyle r}-th order frame bundles Fr(M){\displaystyle F^{r}(M)}.
A classical result by Epstein and Thurston shows that all natural bundles have finite order.[3]
An example of a natural bundle (of first order) is the tangent bundle TM{\displaystyle TM} of a manifold M{\displaystyle M}.
Other examples include the cotangent bundles, the bundles of metrics of signature (r,s){\displaystyle (r,s)}, and the bundle of linear connections.[4]
In mathematics, a principal bundle[1][2][3][4] is a mathematical object that formalizes some of the essential features of the Cartesian product X×G{\displaystyle X\times G} of a space X{\displaystyle X} with a group G{\displaystyle G}. In the same way as with the Cartesian product, a principal bundle P{\displaystyle P} is equipped with
Unless it is the product space X×G{\displaystyle X\times G}, a principal bundle lacks a preferred choice of identity cross-section; it has no preferred analog of x↦(x,e){\displaystyle x\mapsto (x,e)}. Likewise, there is not generally a projection onto G{\displaystyle G} generalizing the projection onto the second factor, X×G→G{\displaystyle X\times G\to G}, that exists for the Cartesian product. Principal bundles may also have a complicated topology that prevents them from being realized as a product space, even if a number of arbitrary choices are made to try to define such a structure on smaller pieces of the space.
A common example of a principal bundle is the frame bundle F(E){\displaystyle F(E)} of a vector bundle E{\displaystyle E}, which consists of all ordered bases of the vector space attached to each point. The group G{\displaystyle G}, in this case, is the general linear group, which acts on the right in the usual way: by changes of basis. Since there is no natural way to choose an ordered basis of a vector space, a frame bundle lacks a canonical choice of identity cross-section.
Principal bundles have important applications in topology, differential geometry, and mathematical gauge theory. They have also found application in physics, where they form part of the foundational framework of physical gauge theories.
A principal G{\displaystyle G}-bundle, where G{\displaystyle G} denotes any topological group, is a fiber bundle π:P→X{\displaystyle \pi :P\to X} together with a continuous right action P×G→P{\displaystyle P\times G\to P} such that G{\displaystyle G} preserves the fibers of P{\displaystyle P} (i.e., if y∈Px{\displaystyle y\in P_{x}} then yg∈Px{\displaystyle yg\in P_{x}} for all g∈G{\displaystyle g\in G}) and acts freely and transitively (meaning each fiber is a G-torsor) on them, in such a way that for each x∈X{\displaystyle x\in X} and y∈Px{\displaystyle y\in P_{x}}, the map G→Px{\displaystyle G\to P_{x}} sending g{\displaystyle g} to yg{\displaystyle yg} is a homeomorphism. In particular each fiber of the bundle is homeomorphic to the group G{\displaystyle G} itself. Frequently, one requires the base space X{\displaystyle X} to be Hausdorff and possibly paracompact.
Since the group action preserves the fibers of π:P→X{\displaystyle \pi :P\to X} and acts transitively, it follows that the orbits of the G{\displaystyle G}-action are precisely these fibers and the orbit space P/G{\displaystyle P/G} is homeomorphic to the base space X{\displaystyle X}. Because the action is free and transitive, the fibers have the structure of G-torsors. A G{\displaystyle G}-torsor is a space that is homeomorphic to G{\displaystyle G} but lacks a group structure since there is no preferred choice of an identity element.
An equivalent definition of a principal G{\displaystyle G}-bundle is as a G{\displaystyle G}-bundle π:P→X{\displaystyle \pi :P\to X} with fiber G{\displaystyle G} where the structure group acts on the fiber by left multiplication. Since right multiplication by G{\displaystyle G} on the fiber commutes with the action of the structure group, there exists an invariant notion of right multiplication by G{\displaystyle G} on P{\displaystyle P}. The fibers of π{\displaystyle \pi } then become right G{\displaystyle G}-torsors for this action.
The definitions above are for arbitrary topological spaces. One can also define principal G{\displaystyle G}-bundles in the category of smooth manifolds. Here π:P→X{\displaystyle \pi :P\to X} is required to be a smooth map between smooth manifolds, G{\displaystyle G} is required to be a Lie group, and the corresponding action on P{\displaystyle P} should be smooth.
Over an open ball U⊂Rn{\displaystyle U\subset \mathbb {R} ^{n}}, or Rn{\displaystyle \mathbb {R} ^{n}}, with induced coordinates x1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}, any principal G{\displaystyle G}-bundle is isomorphic to a trivial bundle
π:U×G→U{\displaystyle \pi :U\times G\to U}
and a smooth section s∈Γ(π){\displaystyle s\in \Gamma (\pi )} is equivalently given by a (smooth) function s^:U→G{\displaystyle {\hat {s}}:U\to G} since
s(u)=(u,s^(u))∈U×G{\displaystyle s(u)=(u,{\hat {s}}(u))\in U\times G}
for some smooth function. For example, if G=U(2){\displaystyle G=U(2)}, the Lie group of 2×2{\displaystyle 2\times 2} unitary matrices, then a section can be constructed by considering four real-valued functions
ϕ(x),ψ(x),Δ(x),θ(x):U→R{\displaystyle \phi (x),\psi (x),\Delta (x),\theta (x):U\to \mathbb {R} }
and applying them to the parameterization
s^(x)=eiϕ(x)[eiψ(x)00e−iψ(x)][cosθ(x)sinθ(x)−sinθ(x)cosθ(x)][eiΔ(x)00e−iΔ(x)].{\displaystyle {\hat {s}}(x)=e^{i\phi (x)}{\begin{bmatrix}e^{i\psi (x)}&0\\0&e^{-i\psi (x)}\end{bmatrix}}{\begin{bmatrix}\cos \theta (x)&\sin \theta (x)\\-\sin \theta (x)&\cos \theta (x)\\\end{bmatrix}}{\begin{bmatrix}e^{i\Delta (x)}&0\\0&e^{-i\Delta (x)}\end{bmatrix}}.} The same procedure is valid in general: take a parameterization of a collection of matrices defining a Lie group G{\displaystyle G}, consider a set of functions from a patch U⊂X{\displaystyle U\subset X} of the base space to R{\displaystyle \mathbb {R} }, and insert them into the parameterization.
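As a sanity check on such a parameterization, one can verify numerically that it lands in U(2) for sample values of the four functions at a point; the sample values below are arbitrary choices, not from the text.

```python
import numpy as np

# Sample values of phi, psi, Delta, theta at one point x (arbitrary choices).
phi, psi, Delta, theta = 0.4, 1.1, -0.7, 0.9

# The parameterized section value: phase times diagonal-rotation-diagonal.
s = np.exp(1j * phi) * (
    np.diag([np.exp(1j * psi), np.exp(-1j * psi)])
    @ np.array([[np.cos(theta), np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])
    @ np.diag([np.exp(1j * Delta), np.exp(-1j * Delta)])
)

print(np.allclose(s @ s.conj().T, np.eye(2)))   # s is unitary, i.e. in U(2): True
```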
One of the most important questions regarding any fiber bundle is whether or not it is trivial, i.e., isomorphic to a product bundle. For principal bundles there is a convenient characterization of triviality:
The same is not true in general for other fiber bundles. For instance, vector bundles always have a zero section whether they are trivial or not, and sphere bundles may admit many global sections without being trivial.
The same fact applies to local trivializations of principal bundles. Let π : P → X be a principal G-bundle. An open set U in X admits a local trivialization if and only if there exists a local section on U. Given a local trivialization
one can define an associated local section
where e is the identity in G. Conversely, given a section s, one defines a trivialization Φ by
The simple transitivity of the G action on the fibers of P guarantees that this map is a bijection; it is also a homeomorphism. The local trivializations defined by local sections are G-equivariant in the following sense. If we write
in the form
then the map
satisfies
Equivariant trivializations therefore preserve the G-torsor structure of the fibers. In terms of the associated local section s, the map φ is given by
The local version of the cross-section theorem then states that the equivariant local trivializations of a principal bundle are in one-to-one correspondence with local sections.
Given an equivariant local trivialization ({Ui}, {Φi}) of P, we have local sections si on each Ui. On overlaps these must be related by the action of the structure group G. In fact, the relationship is provided by the transition functions
By gluing the local trivializations together using these transition functions, one may reconstruct the original principal bundle. This is an example of the fiber bundle construction theorem.
For anyx∈Ui∩Ujwe have
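The relation between overlapping local sections and transition functions can be checked in a toy numeric model. The sketch below (numpy; the particular phase functions are arbitrary choices for illustration) models a trivial U(1)-bundle, records three local sections by their fiber coordinates, and verifies both the defining relation sj = si·gij and the cocycle condition on a triple overlap:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)   # sample points of an overlap U_i ∩ U_j ∩ U_k

# Three local sections of a trivial U(1)-bundle P = X × U(1),
# recorded by their fiber coordinates u(x) ∈ U(1) (arbitrary phases).
u_i = np.exp(1j * 2 * np.pi * x)
u_j = np.exp(1j * (np.pi * x + 0.5))
u_k = np.exp(1j * (-3.0 * x + 1.0))

# Transition functions defined by s_j(x) = s_i(x) · g_ij(x);
# for a trivial bundle this reads g_ij = u_i^{-1} u_j.
g_ij = u_i.conj() * u_j
g_jk = u_j.conj() * u_k
g_ik = u_i.conj() * u_k

assert np.allclose(u_j, u_i * g_ij)    # sections related by the G-action
assert np.allclose(g_ij * g_jk, g_ik)  # cocycle condition on the triple overlap
```

The cocycle identity is exactly the compatibility needed by the fiber bundle construction theorem to glue the local pieces back into a bundle.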
Ifπ:P→X{\displaystyle \pi :P\to X}is a smooth principalG{\displaystyle G}-bundle thenG{\displaystyle G}acts freely andproperlyonP{\displaystyle P}so that the orbit spaceP/G{\displaystyle P/G}isdiffeomorphicto the base spaceX{\displaystyle X}. It turns out that these properties completely characterize smooth principal bundles. That is, ifP{\displaystyle P}is a smooth manifold,G{\displaystyle G}a Lie group andμ:P×G→P{\displaystyle \mu :P\times G\to P}a smooth, free, and proper right action thenP/G{\displaystyle P/G}is a smooth manifold and the quotient mapπ:P→P/G{\displaystyle \pi :P\to P/G}is a smooth principalG{\displaystyle G}-bundle.
Given a subgroup H of G one may consider the bundleP/H{\displaystyle P/H}whose fibers are homeomorphic to thecoset spaceG/H{\displaystyle G/H}. If the new bundle admits a global section, then one says that the section is areduction of the structure group fromG{\displaystyle G}toH{\displaystyle H}. The reason for this name is that the (fiberwise) inverse image of the values of this section form a subbundle ofP{\displaystyle P}that is a principalH{\displaystyle H}-bundle. IfH{\displaystyle H}is the identity, then a section ofP{\displaystyle P}itself is a reduction of the structure group to the identity. Reductions of the structure group do not in general exist.
Many topological questions about the structure of a manifold or the structure of bundles over it that are associated to a principalG{\displaystyle G}-bundle may be rephrased as questions about the admissibility of the reduction of the structure group (fromG{\displaystyle G}toH{\displaystyle H}). For example:
Also note: ann{\displaystyle n}-dimensional manifold admitsn{\displaystyle n}vector fields that are linearly independent at each point if and only if itsframe bundleadmits a global section. In this case, the manifold is calledparallelizable.
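A concrete parallelizable example is the 2-torus: its two coordinate vector fields are everywhere linearly independent, giving a global section of the frame bundle. The sketch below (numpy, with arbitrary radii chosen for the standard embedding in R³) checks this independence on a grid of sample points:

```python
import numpy as np

R, r = 2.0, 0.5   # radii of the embedded torus (arbitrary, with R > r)

def frame(theta, phi):
    """The two coordinate tangent vector fields of the standard torus
    (theta around the central axis, phi around the tube) in R^3."""
    v1 = np.array([-(R + r * np.cos(phi)) * np.sin(theta),
                    (R + r * np.cos(phi)) * np.cos(theta),
                    0.0])
    v2 = np.array([-r * np.sin(phi) * np.cos(theta),
                   -r * np.sin(phi) * np.sin(theta),
                    r * np.cos(phi)])
    return v1, v2

# Linear independence at every sample point: the pair (v1, v2) is a
# global frame, i.e. a section of the frame bundle of the torus.
for theta in np.linspace(0, 2 * np.pi, 12, endpoint=False):
    for phi in np.linspace(0, 2 * np.pi, 12, endpoint=False):
        v1, v2 = frame(theta, phi)
        assert np.linalg.matrix_rank(np.stack([v1, v2])) == 2
```

By contrast, the hairy ball theorem shows the 2-sphere admits no such frame, so its frame bundle has no global section.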
IfP{\displaystyle P}is a principalG{\displaystyle G}-bundle andV{\displaystyle V}is alinear representationofG{\displaystyle G}, then one can construct a vector bundleE=P×GV{\displaystyle E=P\times _{G}V}with fibreV{\displaystyle V}, as the quotient of the productP{\displaystyle P}×V{\displaystyle V}by the diagonal action ofG{\displaystyle G}. This is a special case of theassociated bundleconstruction, andE{\displaystyle E}is called anassociated vector bundletoP{\displaystyle P}. If the representation ofG{\displaystyle G}onV{\displaystyle V}isfaithful, so thatG{\displaystyle G}is a subgroup of the general linear group GL(V{\displaystyle V}), thenE{\displaystyle E}is aG{\displaystyle G}-bundle andP{\displaystyle P}provides a reduction of structure group of the frame bundle ofE{\displaystyle E}fromGL(V){\displaystyle GL(V)}toG{\displaystyle G}. This is the sense in which principal bundles provide an abstract formulation of the theory of frame bundles.
Any topological groupGadmits aclassifying spaceBG: the quotient by the action ofGof someweakly contractiblespace,e.g., a topological space with vanishinghomotopy groups. The classifying space has the property that anyGprincipal bundle over aparacompactmanifoldBis isomorphic to apullbackof the principal bundleEG→BG.[5]In fact, more is true, as the set of isomorphism classes of principalGbundles over the baseBidentifies with the set of homotopy classes of mapsB→BG. | https://en.wikipedia.org/wiki/Principal_bundle |
Inmathematics, aprojective bundleis afiber bundlewhose fibers areprojective spaces.
By definition, aschemeXover aNoetherian schemeSis aPn-bundle if it is locally a projectiven-space; i.e.,X×SU≃PUn{\displaystyle X\times _{S}U\simeq \mathbb {P} _{U}^{n}}and transition automorphisms are linear. Over a regular schemeSsuch as asmooth variety, every projective bundle is of the formP(E){\displaystyle \mathbb {P} (E)}for some vector bundle (locally free sheaf)E.[1]
Everyvector bundleover avarietyXgives a projective bundle by taking the projective spaces of the fibers, but not all projective bundles arise in this way: there is anobstructionin thecohomology groupH2(X,O*). To see why, recall that a projective bundle comes equipped with transition functions on double intersections of a suitable open cover. On triple overlaps, any lift of these transition functions satisfies the cocycle condition up to an invertible function. The collection of these functions forms a 2-cocycle which vanishes inH2(X,O*) only if the projective bundle is the projectivization of a vector bundle. In particular, ifXis a compactRiemann surfacethenH2(X,O*)=0, and so this obstruction vanishes.
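The obstruction class can be sketched explicitly (a standard construction, elaborating on the paragraph above rather than quoted from it). Choose lifts ĝij of the projective transition functions gij to invertible matrices on each overlap. On a triple overlap the product of lifts projects to the identity, so it is a scalar:

```latex
c_{ijk} \;=\; \hat{g}_{ij}\,\hat{g}_{jk}\,\hat{g}_{ik}^{-1}
\;\in\; \mathcal{O}^{*}(U_i \cap U_j \cap U_k),
\qquad [\,c_{ijk}\,] \in H^{2}(X,\mathcal{O}^{*}).
```

The class [cijk] vanishes exactly when the lifts can be rescaled to satisfy the cocycle condition, i.e. when the projective bundle is the projectivization P(E) of a vector bundle E.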
The projective bundle of a vector bundleEis the same thing as theGrassmann bundleG1(E){\displaystyle G_{1}(E)}of 1-planes inE.
The projective bundle P(E) of a vector bundle E is characterized by the universal property that says:[2] given a morphism f : T → X, to factor f through the projection p : P(E) → X is the same as to specify a line subbundle of f*E.
For example, takingfto bep, one gets the line subbundleO(-1) ofp*E, called thetautological line bundleonP(E). Moreover, thisO(-1) is auniversal bundlein the sense that when a line bundleLgives a factorizationf=p∘g,Lis the pullback ofO(-1) alongg. See alsoCone#O(1)for a more explicit construction ofO(-1).
On P(E), there is a natural exact sequence (called the tautological exact sequence):
0 → O(−1) → p*E → Q → 0,
whereQis called the tautological quotient-bundle.
LetE⊂Fbe vector bundles (locally free sheaves of finite rank) onXandG=F/E. Letq:P(F) →Xbe the projection. Then the natural mapO(-1) →q*F→q*Gis a global section of thesheaf homHom(O(-1),q*G) =q*G⊗O(1). Moreover, this natural map vanishes at a point exactly when the point is a line inE; in other words, the zero-locus of this section isP(E).
A particularly useful instance of this construction is whenFis the direct sumE⊕ 1 ofEand the trivial line bundle (i.e., the structure sheaf). ThenP(E) is a hyperplane inP(E⊕ 1), called the hyperplane at infinity, and the complement ofP(E) can be identified withE. In this way,P(E⊕ 1) is referred to as the projective completion (or "compactification") ofE.
The projective bundle P(E) is stable under twisting E by a line bundle; precisely, given a line bundle L, there is the natural isomorphism:
g : P(E) ≃ P(E ⊗ L)
such thatg∗(O(−1))≃O(−1)⊗p∗L.{\displaystyle g^{*}({\mathcal {O}}(-1))\simeq {\mathcal {O}}(-1)\otimes p^{*}L.}[3](In fact, one getsgby the universal property applied to the line bundle on the right.)
Many non-trivial examples of projective bundles can be found using fibrations overP1{\displaystyle \mathbb {P} ^{1}}such asLefschetz fibrations. For example, an ellipticK3 surfaceX{\displaystyle X}is a K3 surface with a fibration
π:X→P1{\displaystyle \pi :X\to \mathbb {P} ^{1}}
such that the fibersEp{\displaystyle E_{p}}forp∈P1{\displaystyle p\in \mathbb {P} ^{1}}are generically elliptic curves. Because every elliptic curve is a genus 1 curve with a distinguished point, there exists a global section of the fibration. Because of this global section, there exists a model ofX{\displaystyle X}giving a morphism to the projective bundle[4]
X→P(OP1(4)⊕OP1(6)⊕OP1){\displaystyle X\to \mathbb {P} ({\mathcal {O}}_{\mathbb {P} ^{1}}(4)\oplus {\mathcal {O}}_{\mathbb {P} ^{1}}(6)\oplus {\mathcal {O}}_{\mathbb {P} ^{1}})}
defined by theWeierstrass equation
y2z+a1xyz+a3yz2=x3+a2x2z+a4xz2+a6z3{\displaystyle y^{2}z+a_{1}xyz+a_{3}yz^{2}=x^{3}+a_{2}x^{2}z+a_{4}xz^{2}+a_{6}z^{3}}
wherex,y,z{\displaystyle x,y,z}represent the local coordinates ofOP1(4),OP1(6),OP1{\displaystyle {\mathcal {O}}_{\mathbb {P} ^{1}}(4),{\mathcal {O}}_{\mathbb {P} ^{1}}(6),{\mathcal {O}}_{\mathbb {P} ^{1}}}, respectively, and the coefficients
ai∈H0(P1,OP1(2i)){\displaystyle a_{i}\in H^{0}(\mathbb {P} ^{1},{\mathcal {O}}_{\mathbb {P} ^{1}}(2i))}
are sections of sheaves onP1{\displaystyle \mathbb {P} ^{1}}. Note this equation is well-defined because each term in the Weierstrass equation has total degree12{\displaystyle 12}(the degree of the coefficient plus the degree of the monomial; for example,deg(a1xyz)=2+(4+6+0)=12{\displaystyle {\text{deg}}(a_{1}xyz)=2+(4+6+0)=12}).
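The degree bookkeeping above can be checked mechanically. The following sketch assigns x, y, z the degrees of the summands O(4), O(6), O and each coefficient ai the degree 2i, then verifies that every term of the Weierstrass equation has total degree 12:

```python
# Weights of the local coordinates, from the summands O(4) ⊕ O(6) ⊕ O.
weights = {"x": 4, "y": 6, "z": 0}

def term_degree(coeff_index, monomial):
    """Degree of the coefficient a_i (which lies in O(2i)) plus the
    weighted degree of the monomial (given as a string of variables)."""
    return 2 * coeff_index + sum(weights[v] for v in monomial)

# Terms of y^2 z + a1 xyz + a3 y z^2 = x^3 + a2 x^2 z + a4 x z^2 + a6 z^3,
# written as (i, monomial), with i = 0 for coefficient-free terms.
terms = [(0, "yyz"), (1, "xyz"), (3, "yzz"),
         (0, "xxx"), (2, "xxz"), (4, "xzz"), (6, "zzz")]
degrees = [term_degree(i, m) for i, m in terms]
assert all(d == 12 for d in degrees)   # every term is homogeneous of degree 12
```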
Let X be a complex smooth projective variety and E a complex vector bundle of rank r on it. Let p : P(E) → X be the projective bundle of E. Then the cohomology ring H*(P(E)) is an algebra over H*(X) through the pullback p*. The first Chern class ζ = c1(O(1)) generates H*(P(E)) with the relation
ζr + c1(E)ζr−1 + ⋯ + cr(E) = 0,
whereci(E) is thei-th Chern class ofE. One interesting feature of this description is that one candefineChern classes as the coefficients in the relation; this is the approach taken by Grothendieck.
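For a rank-2 bundle the relation reads ζ² + c1(E)ζ + c2(E) = 0, and if E splits with Chern roots a·h and b·h (so c1 = (a+b)h, c2 = ab·h²), the relation factors as the product over the Chern roots. A quick symbolic check of this algebraic identity, using sympy with formal symbols a, b, h, ζ:

```python
import sympy as sp

h, z, a, b = sp.symbols("h zeta a b")

# Chern classes of a split rank-2 bundle with Chern roots a*h and b*h.
c1, c2 = a + b, a * b

# Product over the Chern roots...
chern_root_product = sp.expand((z + a * h) * (z + b * h))

# ...agrees term-by-term with the rank-2 Grothendieck relation.
grothendieck = z**2 + c1 * h * z + c2 * h**2
assert sp.simplify(chern_root_product - grothendieck) == 0
```

This is the splitting-principle form of Grothendieck's observation that the Chern classes can be *defined* as the coefficients in the relation.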
Over fields other than the complex field, the same description remains true withChow ringin place of cohomology ring (still assumingXis smooth). In particular, for Chow groups, there is the direct sum decomposition
As it turned out, this decomposition remains valid even if X is neither smooth nor projective.[5] In contrast, Ak(E) = Ak−r(X), via the Gysin homomorphism, morally because the fibers of E, the vector spaces, are contractible.
Inmathematics, apullback bundleorinduced bundle[1][2][3]is thefiber bundlethat is induced by a map of its base-space. Given a fiber bundleπ:E→Band acontinuous mapf:B′ →Bone can define a "pullback" ofEbyfas a bundlef*EoverB′. The fiber off*Eover a pointb′inB′is just the fiber ofEoverf(b′). Thusf*Eis thedisjoint unionof all these fibers equipped with a suitabletopology.
Let π : E → B be a fiber bundle with abstract fiber F and let f : B′ → B be a continuous map. Define the pullback bundle by
f*E = {(b′, e) ∈ B′ × E : f(b′) = π(e)}
and equip it with the subspace topology and the projection map π′ : f*E → B′ given by the projection onto the first factor, i.e.,
π′(b′, e) = b′.
The projection onto the second factor gives a map
h : f*E → E, h(b′, e) = e
such that the following diagramcommutes:
If (U, φ) is a local trivialization of E then (f−1U, ψ) is a local trivialization of f*E where
ψ(b′, e) = (b′, pr2(φ(e))), with pr2 : U × F → F the projection onto the second factor.
It then follows thatf*Eis a fiber bundle overB′with fiberF. The bundlef*Eis called thepullback ofEbyfor thebundle induced byf. The maphis then abundle morphismcoveringf.
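The fiber-product description of f*E can be modelled literally on finite sets (a toy illustration with made-up spaces, not a claim about any particular bundle). The sketch builds f*E as the subset of B′ × E where f and π agree, and checks that the fiber of f*E over b′ is the fiber of E over f(b′):

```python
# Finite toy model: the "bundle" E -> B is just a surjection given by the
# first coordinate, and f*E is the fiber product inside B' × E.
B = {0, 1}
E = {(0, "a"), (0, "b"), (1, "c")}    # pi = first coordinate
Bp = {"p", "q", "r"}
f = {"p": 0, "q": 0, "r": 1}          # the map B' -> B

pi = lambda e: e[0]
fE = {(bp, e) for bp in Bp for e in E if f[bp] == pi(e)}  # pullback total space

def fiber_fE(bp):
    return {e for (b, e) in fE if b == bp}

def fiber_E(b):
    return {e for e in E if pi(e) == b}

# The fiber of f*E over b' is exactly the fiber of E over f(b').
for bp in Bp:
    assert fiber_fE(bp) == fiber_E(f[bp])

# A section s of E pulls back to f*s(b') = (b', s(f(b'))),
# which indeed lands inside the pullback total space.
s = {0: (0, "a"), 1: (1, "c")}
fstar_s = {bp: (bp, s[f[bp]]) for bp in Bp}
assert all(fstar_s[bp] in fE for bp in Bp)
```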
Any section s of E over B induces a section of f*E, called the pullback section f*s, simply by defining
(f*s)(b′) = (b′, s(f(b′))) for all b′ ∈ B′.
If the bundle E → B has structure group G with transition functions tij (with respect to a family of local trivializations {(Ui, φi)}) then the pullback bundle f*E also has structure group G. The transition functions in f*E are given by
f*tij = tij ∘ f.
If E → B is a vector bundle or principal bundle then so is the pullback f*E. In the case of a principal bundle the right action of G on f*E is given by
(b′, e)⋅g = (b′, e⋅g).
It then follows that the maphcoveringfisequivariantand so defines a morphism of principal bundles.
In the language ofcategory theory, the pullback bundle construction is an example of the more generalcategorical pullback. As such it satisfies the correspondinguniversal property.
The construction of the pullback bundle can be carried out in subcategories of the category oftopological spaces, such as the category ofsmooth manifolds. The latter construction is useful indifferential geometry and topology.
Bundles may also be described by theirsheaves of sections. The pullback of bundles then corresponds to theinverse image of sheaves, which is acontravariantfunctor. A sheaf, however, is more naturally acovariantobject, since it has apushforward, called thedirect image of a sheaf. The tension and interplay between bundles and sheaves, or inverse and direct image, can be advantageous in many areas of geometry. However, the direct image of a sheaf of sections of a bundle isnotin general the sheaf of sections of some direct image bundle, so that although the notion of a 'pushforward of a bundle' is defined in some contexts (for example, the pushforward by a diffeomorphism), in general it is better understood in the category of sheaves, because the objects it creates cannot in general be bundles. | https://en.wikipedia.org/wiki/Pullback_bundle |
Inalgebraic topology, aquasifibrationis a generalisation offibre bundlesandfibrationsintroduced byAlbrecht DoldandRené Thom. Roughly speaking, it is a continuous mapp:E→Bhaving the same behaviour as a fibration regarding the (relative)homotopy groupsofE,Bandp−1(x). Equivalently, one can define a quasifibration to be a continuous map such that the inclusion of each fibre into its homotopy fibre is aweak equivalence. One of the main applications of quasifibrations lies in proving theDold-Thom theorem.
A continuous surjective map of topological spaces p : E → B is called a quasifibration if it induces isomorphisms
p∗ : πi(E, p−1(x), y) → πi(B, x)
for all x ∈ B, y ∈ p−1(x) and i ≥ 0. For i = 0, 1 one can only speak of bijections between the two sets.
By definition, quasifibrations share a key property of fibrations, namely that a quasifibration p : E → B induces a long exact sequence of homotopy groups
⋯ → πi(p−1(x), y) → πi(E, y) → πi(B, x) → πi−1(p−1(x), y) → ⋯ → π0(E, y) → 0
as follows directly from the long exact sequence for the pair (E,p−1(x)).
This long exact sequence is also functorial in the following sense: Any fibrewise mapf:E→E′induces a morphism between the exact sequences of the pairs (E,p−1(x)) and (E′,p′−1(x)) and therefore a morphism between the exact sequences of a quasifibration. Hence, the diagram
commutes withf0being the restriction offtop−1(x) andx′being an element of the formp′(f(e)) for ane∈p−1(x).
An equivalent definition is saying that a surjective map p : E → B is a quasifibration if the inclusion of the fibre p−1(b) into the homotopy fibre Fb of p over b is a weak equivalence for all b ∈ B. To see this, recall that Fb is the fibre of q under b where q : Ep → B is the usual path fibration construction. Thus, one has
Ep = {(e, γ) ∈ E × BI : γ(0) = p(e)}
andqis given byq(e, γ) = γ(1). Now consider the natural homotopy equivalence φ :E→Ep, given by φ(e) = (e,p(e)), wherep(e) denotes the corresponding constant path. By definition,pfactors throughEpsuch that one gets a commutative diagram
Applying πnyields the alternative definition.
The following is a direct consequence of the alternative definition of a quasifibration using the homotopy fibre:
A corollary of this theorem is that all fibres of a quasifibration are weakly homotopy equivalent if the base space ispath-connected, as this is the case for fibrations.
Checking whether a given map is a quasifibration tends to be quite tedious. The following two theorems are designed to make this problem easier. They will make use of the following notion: Letp:E→Bbe a continuous map. A subsetU⊂p(E) is calleddistinguished(with respect top) ifp:p−1(U) →Uis a quasifibration.
To see that the latter statement holds, one only needs to bear in mind that continuous images ofcompactsets inBalready lie in someBn. That way, one can reduce it to the case where the assertion is known.
These two theorems mean that it suffices to show that a given map is a quasifibration on certain subsets. Then one can patch these together in order to see that it holds on bigger subsets and finally, using a limiting argument, one sees that the map is a quasifibration on the whole space. This procedure has e.g. been used in the proof of the Dold-Thom theorem. | https://en.wikipedia.org/wiki/Quasifibration |
Inmathematics, theuniversal bundlein the theory offiber bundleswith structure group a giventopological groupG, is a specific bundle over aclassifying spaceBG, such that every bundle with the givenstructure groupGoverMis apullbackby means of acontinuous mapM→BG.
When the definition of the classifying space takes place within the homotopycategoryofCW complexes, existence theorems for universal bundles arise fromBrown's representability theorem.
We will first prove:
Proof.There exists an injection ofGinto aunitary groupU(n)fornbig enough.[1]If we findEU(n)then we can takeEGto beEU(n). The construction ofEU(n)is given inclassifying space forU(n).
The following Theorem is a corollary of the above Proposition.
Proof. On one hand, the pull-back of the bundle π : EG → BG by the natural projection P ×G EG → BG is the bundle P × EG. On the other hand, the pull-back of the principal G-bundle P → M by the projection p : P ×G EG → M is also P × EG.
Since p is a fibration with contractible fibre EG, sections of p exist.[2] To such a section s we associate the composition with the projection P ×G EG → BG. The map we get is the f we were looking for.
For the uniqueness up to homotopy, notice that there exists a one-to-one correspondence between mapsf:M→BGsuch thatf∗(EG) →Mis isomorphic toP→Mand sections ofp. We have just seen how to associate afto a section. Inversely, assume thatfis given. LetΦ :f∗(EG) →Pbe an isomorphism:
Now, simply define a section by
Because all sections ofpare homotopic, the homotopy class offis unique.
The total space of a universal bundle is usually written EG. These spaces are of interest in their own right, despite typically being contractible. They appear, for example, in defining the homotopy quotient or homotopy orbit space of a group action of G, in cases where the orbit space is pathological (in the sense of being a non-Hausdorff space, for example). The idea, if G acts on the space X, is to consider instead the action on Y = X × EG, and the corresponding quotient. See equivariant cohomology for more detailed discussion.
IfEGis contractible thenXandYarehomotopy equivalentspaces. But the diagonal action onY, i.e. whereGacts on bothXandEGcoordinates, may bewell-behavedwhen the action onXis not. | https://en.wikipedia.org/wiki/Universal_bundle |