In mathematics, a vector bundle is a topological construction that makes precise the idea of a family of vector spaces parameterized by another space X (for example X could be a topological space, a manifold, or an algebraic variety): to every point x of the space X we associate (or "attach") a vector space V(x) in such a way that these vector spaces fit together to form another space of the same kind as X (e.g. a topological space, manifold, or algebraic variety), which is then called a vector bundle over X. The simplest example is the case that the family of vector spaces is constant, i.e., there is a fixed vector space V such that V(x) = V for all x in X: in this case there is a copy of V for each x in X and these copies fit together to form the vector bundle X × V over X. Such vector bundles are said to be trivial. A more complicated (and prototypical) class of examples are the tangent bundles of smooth (or differentiable) manifolds: to every point of such a manifold we attach the tangent space to the manifold at that point. Tangent bundles are not, in general, trivial bundles. For example, the tangent bundle of the sphere is non-trivial by the hairy ball theorem. In general, a manifold is said to be parallelizable if, and only if, its tangent bundle is trivial. Vector bundles are almost always required to be locally trivial, which means they are examples of fiber bundles. Also, the vector spaces are usually required to be over the real or complex numbers, in which case the vector bundle is said to be a real or complex vector bundle (respectively). Complex vector bundles can be viewed as real vector bundles with additional structure. In the following, we focus on real vector bundles in the category of topological spaces.
A real vector bundle consists of:

- a topological space X (the base space) and a topological space E (the total space),
- a continuous surjection π : E → X (the bundle projection), and
- for every x in X, the structure of a finite-dimensional real vector space on the fiber π⁻¹({x}),

where the following compatibility condition is satisfied: for every point p in X, there is an open neighborhood U ⊆ X of p, a natural number k, and a homeomorphism

φ : U × R^k → π⁻¹(U)

such that for all x in U, we have (π ∘ φ)(x, v) = x for all vectors v in R^k, and the map v ↦ φ(x, v) is a linear isomorphism between the vector spaces R^k and π⁻¹({x}). The open neighborhood U together with the homeomorphism φ is called a local trivialization of the vector bundle. The local trivialization shows that locally the map π "looks like" the projection of U × R^k on U. Every fiber π⁻¹({x}) is a finite-dimensional real vector space and hence has a dimension k_x. The local trivializations show that the function x ↦ k_x is locally constant, and is therefore constant on each connected component of X. If k_x is equal to a constant k on all of X, then k is called the rank of the vector bundle, and E is said to be a vector bundle of rank k. Often the definition of a vector bundle includes that the rank is well defined, so that k_x is constant. Vector bundles of rank 1 are called line bundles, while those of rank 2 are less commonly called plane bundles. The Cartesian product X × R^k, equipped with the projection X × R^k → X, is called the trivial bundle of rank k over X.
Given a vector bundle E → X of rank k, and a pair of neighborhoods U and V over which the bundle trivializes via

φ_U : U × R^k → π⁻¹(U),  φ_V : V × R^k → π⁻¹(V),

the composite function

φ_U⁻¹ ∘ φ_V : (U ∩ V) × R^k → (U ∩ V) × R^k

is well-defined on the overlap, and satisfies

(φ_U⁻¹ ∘ φ_V)(x, v) = (x, g_UV(x) v)

for some GL(k)-valued function

g_UV : U ∩ V → GL(k).

These are called the transition functions (or the coordinate transformations) of the vector bundle. The set of transition functions forms a Čech cocycle in the sense that

g_UU(x) = I,  g_UV(x) g_VW(x) g_WU(x) = I

for all U, V, W over which the bundle trivializes satisfying U ∩ V ∩ W ≠ ∅. Thus the data (E, X, π, R^k) defines a fiber bundle; the additional data of the g_UV specifies a GL(k) structure group in which the action on the fiber is the standard action of GL(k). Conversely, given a fiber bundle (E, X, π, R^k) with a GL(k) cocycle acting in the standard way on the fiber R^k, there is associated a vector bundle. This is an example of the fibre bundle construction theorem for vector bundles, and can be taken as an alternative definition of a vector bundle. One simple method of constructing vector bundles is by taking subbundles of other vector bundles. Given a vector bundle π : E → X over a topological space, a subbundle is simply a subspace F ⊂ E for which the restriction π|_F of π to F gives π|_F : F → X the structure of a vector bundle also. In this case the fibre F_x ⊂ E_x is a vector subspace for every x ∈ X.
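The cocycle identity is automatic for transition functions that come from trivializations, since each g_UV(x) is a change of frame φ_U⁻¹ ∘ φ_V on the fiber. A minimal numerical sketch of this fact (the chart names U, V, W and the frame matrices h are hypothetical, all taken at a single fixed point x):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3  # rank of the (hypothetical) bundle

def random_glk():
    # draw an invertible k x k matrix, i.e. an element of GL(k)
    while True:
        m = rng.normal(size=(k, k))
        if abs(np.linalg.det(m)) > 1e-6:
            return m

# each trivialization identifies the fiber over x with R^k via a frame h_U(x)
h = {name: random_glk() for name in "UVW"}

def g(a, b):
    # transition function g_ab(x) = h_a(x)^{-1} h_b(x)
    return np.linalg.inv(h[a]) @ h[b]

# Cech cocycle conditions: g_UU = I and g_UV g_VW g_WU = I
I = np.eye(k)
assert np.allclose(g("U", "U"), I)
assert np.allclose(g("U", "V") @ g("V", "W") @ g("W", "U"), I)
```

The point of the sketch is that any data of the form h_a⁻¹ h_b satisfies the cocycle conditions identically; the fibre bundle construction theorem runs this observation in reverse.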
A subbundle of a trivial bundle need not be trivial, and indeed every real vector bundle over a compact space can be viewed as a subbundle of a trivial bundle of sufficiently high rank. For example, the Möbius band, a non-trivial line bundle over the circle, can be seen as a subbundle of the trivial rank 2 bundle over the circle. A morphism from the vector bundle π₁ : E₁ → X₁ to the vector bundle π₂ : E₂ → X₂ is given by a pair of continuous maps f : E₁ → E₂ and g : X₁ → X₂ such that g ∘ π₁ = π₂ ∘ f and, for every x in X₁, the map π₁⁻¹({x}) → π₂⁻¹({g(x)}) induced by f is linear. Note that g is determined by f (because π₁ is surjective), and f is then said to cover g. The class of all vector bundles together with bundle morphisms forms a category. Restricting to vector bundles for which the spaces are manifolds (and the bundle projections are smooth maps) and smooth bundle morphisms, we obtain the category of smooth vector bundles. Vector bundle morphisms are a special case of the notion of a bundle map between fiber bundles, and are sometimes called (vector) bundle homomorphisms. A bundle homomorphism from E₁ to E₂ with an inverse which is also a bundle homomorphism (from E₂ to E₁) is called a (vector) bundle isomorphism, and then E₁ and E₂ are said to be isomorphic vector bundles. An isomorphism of a (rank k) vector bundle E over X with the trivial bundle (of rank k over X) is called a trivialization of E, and E is then said to be trivial (or trivializable). The definition of a vector bundle shows that any vector bundle is locally trivial. We can also consider the category of all vector bundles over a fixed base space X. As morphisms in this category we take those morphisms of vector bundles whose map on the base space is the identity map on X, that is, bundle morphisms f : E₁ → E₂ with π₂ ∘ f = π₁. (Note that this category is not abelian; the kernel of a morphism of vector bundles is in general not a vector bundle in any natural way.) A vector bundle morphism between vector bundles π₁ : E₁ → X₁ and π₂ : E₂ → X₂ covering a map g from X₁ to X₂ can also be viewed as a vector bundle morphism over X₁ from E₁ to the pullback bundle g*E₂.
Given a vector bundle π : E → X and an open subset U of X, we can consider sections of π on U, i.e. continuous functions s : U → E where the composite π ∘ s is such that (π ∘ s)(u) = u for all u in U. Essentially, a section assigns to every point of U a vector from the attached vector space, in a continuous manner. As an example, sections of the tangent bundle of a differential manifold are nothing but vector fields on that manifold. Let F(U) be the set of all sections on U. F(U) always contains at least one element, namely the zero section: the function s that maps every element x of U to the zero element of the vector space π⁻¹({x}). With the pointwise addition and scalar multiplication of sections, F(U) becomes itself a real vector space. The collection of these vector spaces is a sheaf of vector spaces on X. If s is an element of F(U) and α : U → R is a continuous map, then αs (pointwise scalar multiplication) is in F(U). We see that F(U) is a module over the ring of continuous real-valued functions on U. Furthermore, if O_X denotes the structure sheaf of continuous real-valued functions on X, then F becomes a sheaf of O_X-modules. Not every sheaf of O_X-modules arises in this fashion from a vector bundle: only the locally free ones do. (The reason: locally we are looking for sections of a projection U × R^k → U; these are precisely the continuous functions U → R^k, and such a function is a k-tuple of continuous functions U → R.) Even more: the category of real vector bundles on X is equivalent to the category of locally free and finitely generated sheaves of O_X-modules. So we can think of the category of real vector bundles on X as sitting inside the category of sheaves of O_X-modules; this latter category is abelian, so this is where we can compute kernels and cokernels of morphisms of vector bundles. A rank n vector bundle is trivial if and only if it has n linearly independent global sections. Most operations on vector spaces can be extended to vector bundles by performing the vector space operation fiberwise.
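For the trivial bundle U × R^k → U, sections are exactly continuous maps U → R^k, and the module structure over continuous functions is pointwise. A minimal numerical sketch (the interval U = (0, 1), the sample grid, and the particular sections are all illustrative choices):

```python
import numpy as np

u = np.linspace(0.01, 0.99, 50)           # sample points of U = (0, 1)
s1 = np.stack([np.sin(u), np.cos(u)], 1)  # a section of U x R^2, sampled
s2 = np.stack([u, u**2], 1)               # another section
alpha = np.exp(u)                         # a continuous function U -> R
zero = np.zeros_like(s1)                  # the zero section

# module operations are performed pointwise over U
combo = alpha[:, None] * s1 + s2          # the section alpha*s1 + s2
assert combo.shape == s1.shape            # still a map U -> R^2
assert np.allclose(s1 + zero, s1)         # the zero section is neutral
```

This is only the local picture; for a general bundle the same computation happens chart by chart, glued by the transition functions.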
For example, if E is a vector bundle over X, then there is a bundle E* over X, called the dual bundle, whose fiber at x ∈ X is the dual vector space (E_x)*. Formally E* can be defined as the set of pairs (x, φ), where x ∈ X and φ ∈ (E_x)*. The dual bundle is locally trivial because the dual space of the inverse of a local trivialization of E is a local trivialization of E*: the key point here is that the operation of taking the dual vector space is functorial. There are many functorial operations which can be performed on pairs of vector spaces (over the same field), and these extend straightforwardly to pairs of vector bundles E, F on X (over the given field); examples include the direct (Whitney) sum E ⊕ F, the tensor product bundle E ⊗ F, and the Hom-bundle Hom(E, F), each defined fiberwise. Each of these operations is a particular example of a general feature of bundles: many operations that can be performed on the category of vector spaces can also be performed on the category of vector bundles in a functorial manner. This is made precise in the language of smooth functors. An operation of a different nature is the pullback bundle construction. Given a vector bundle E → Y and a continuous map f : X → Y one can "pull back" E to a vector bundle f*E over X. The fiber over a point x ∈ X is essentially just the fiber over f(x) ∈ Y. Hence, the Whitney sum E ⊕ F can be defined as the pullback bundle of the diagonal map from X to X × X, where the bundle over X × X is E × F. Remark: Let X be a compact space. Any vector bundle E over X is a direct summand of a trivial bundle; i.e., there exists a bundle E′ such that E ⊕ E′ is trivial. This fails if X is not compact: for example, the tautological line bundle over the infinite real projective space does not have this property.[1] Vector bundles are often given more structure. For instance, vector bundles may be equipped with a vector bundle metric. Usually this metric is required to be positive definite, in which case each fibre of E becomes a Euclidean space.
A vector bundle with a complex structure corresponds to a complex vector bundle, which may also be obtained by replacing real vector spaces in the definition with complex ones and requiring that all mappings be complex-linear in the fibers. More generally, one can typically understand the additional structure imposed on a vector bundle in terms of the resulting reduction of the structure group of the bundle. Vector bundles over more general topological fields may also be used. If instead of a finite-dimensional vector space, the fiber F is taken to be a Banach space, then a Banach bundle is obtained.[2] Specifically, one must require that the local trivializations are Banach space isomorphisms (rather than just linear isomorphisms) on each of the fibers and that, furthermore, the transitions are continuous mappings of Banach manifolds. In the corresponding theory for C^p bundles, all mappings are required to be C^p. Vector bundles are special fiber bundles, those whose fibers are vector spaces and whose cocycle respects the vector space structure. More general fiber bundles can be constructed in which the fiber may have other structures; for example sphere bundles are fibered by spheres. A vector bundle (E, p, M) is smooth if E and M are smooth manifolds, p : E → M is a smooth map, and the local trivializations are diffeomorphisms. Depending on the required degree of smoothness, there are different corresponding notions of C^p bundles, infinitely differentiable C^∞-bundles and real analytic C^ω-bundles. In this section we will concentrate on C^∞-bundles. The most important example of a C^∞-vector bundle is the tangent bundle (TM, π_TM, M) of a C^∞-manifold M. A smooth vector bundle can be characterized by the fact that it admits transition functions as described above which are smooth functions on overlaps of trivializing charts U and V.
That is, a vector bundle E is smooth if it admits a covering by trivializing open sets such that for any two such sets U and V, the transition function

g_UV : U ∩ V → GL(k, R)

is a smooth function into the matrix group GL(k, R), which is a Lie group. Similarly, if the transition functions are C^p, real analytic, or merely continuous, one obtains C^p, real analytic, or topological vector bundles, respectively. The C^∞-vector bundles (E, p, M) have a very important property not shared by more general C^∞-fibre bundles. Namely, the tangent space T_v(E_x) at any v ∈ E_x can be naturally identified with the fibre E_x itself. This identification is obtained through the vertical lift vl_v : E_x → T_v(E_x), defined as the derivative of the curve t ↦ v + tw at t = 0, for w ∈ E_x. The vertical lift can also be seen as a natural C^∞-vector bundle isomorphism p*E → VE, where (p*E, p*p, E) is the pull-back bundle of (E, p, M) over E through p : E → M, and VE := Ker(p_*) ⊂ TE is the vertical tangent bundle, a natural vector subbundle of the tangent bundle (TE, π_TE, E) of the total space E. The total space E of any smooth vector bundle carries a natural vector field V_v := vl_v v, known as the canonical vector field. More formally, V is a smooth section of (TE, π_TE, E), and it can also be defined as the infinitesimal generator of the Lie-group action (t, v) ↦ e^t v given by the fibrewise scalar multiplication. The canonical vector field V characterizes completely the smooth vector bundle structure in the following manner. As a preparation, note that when X is a smooth vector field on a smooth manifold M and x ∈ M is such that X_x = 0, the linear mapping

C_x(X) : T_x M → T_x M;  C_x(X)(Y) = (∇_Y X)_x

does not depend on the choice of the linear covariant derivative ∇ on M. The canonical vector field V on E satisfies axioms 1–4. Conversely, if E is any smooth manifold and V is a smooth vector field on E satisfying 1–4, then there is a unique vector bundle structure on E whose canonical vector field is V. For any smooth vector bundle (E, p, M) the total space TE of its tangent bundle (TE, π_TE, E) has a natural secondary vector bundle structure (TE, p_*, TM), where p_* is the push-forward of the canonical projection p : E → M.
The vector bundle operations in this secondary vector bundle structure are the push-forwards +_* : T(E × E) → TE and λ_* : TE → TE of the original addition + : E × E → E and scalar multiplication λ : E → E. The K-theory group, K(X), of a compact Hausdorff topological space is defined as the abelian group generated by isomorphism classes [E] of complex vector bundles modulo the relation that, whenever we have an exact sequence

0 → A → B → C → 0,

then [B] = [A] + [C] in topological K-theory. KO-theory is a version of this construction which considers real vector bundles. K-theory with compact supports can also be defined, as well as higher K-theory groups. The famous periodicity theorem of Raoul Bott asserts that the K-theory of any space X is isomorphic to that of S²X, the double suspension of X. In algebraic geometry, one considers the K-theory groups consisting of coherent sheaves on a scheme X, as well as the K-theory groups of vector bundles on the scheme with the above equivalence relation. The two constructs are naturally isomorphic provided that the underlying scheme is smooth.
https://en.wikipedia.org/wiki/Vector_bundle
In topology and high energy physics, the Wu–Yang dictionary refers to the mathematical identification that allows back-and-forth translation between the concepts of gauge theory and those of differential geometry. The dictionary appeared in 1975 in an article by Tai Tsun Wu and C. N. Yang comparing electromagnetism and fiber bundle theory.[1] This dictionary has been credited as bringing mathematics and theoretical physics closer together.[2] A crucial example of the success of the dictionary is that it allowed the understanding of monopole quantization in terms of Hopf fibrations.[3][4] Equivalences between fiber bundle theory and gauge theory were hinted at the end of the 1960s. In 1967, mathematician Andrzej Trautman started a series of lectures aimed at physicists and mathematicians at King's College London regarding these connections.[4] Theoretical physicists Tai Tsun Wu and C. N. Yang, working at Stony Brook University, published a paper in 1975 on the mathematical framework of electromagnetism and the Aharonov–Bohm effect in terms of fiber bundles. A year later, mathematician Isadore Singer came to visit and brought a copy back to the University of Oxford.[2][5][6] Singer showed the paper to Michael Atiyah and other mathematicians, sparking a close collaboration between physicists and mathematicians.[2] Yang also recounts a conversation that he had with one of the mathematicians that founded fiber bundle theory, Shiing-Shen Chern:[2] In 1975, impressed with the fact that gauge fields are connections on fiber bundles, I drove to the house of Shiing-Shen Chern in El Cerrito, near Berkeley. (I had taken courses with him in the early 1940s when he was a young professor and I an undergraduate student at the National Southwest Associated University in Kunming, China. That was before fiber bundles had become important in differential geometry and before Chern had made history with his contributions to the generalized Gauss–Bonnet theorem and the Chern classes.) We had much to talk about: friends, relatives, China.
When our conversation turned to fiber bundles, I told him that I had finally learned from Jim Simons the beauty of fiber-bundle theory and the profound Chern–Weil theorem. I said I found it amazing that gauge fields are exactly connections on fiber bundles, which the mathematicians developed without reference to the physical world. I added, 'this is both thrilling and puzzling, since you mathematicians dreamed up these concepts out of nowhere.' He immediately protested, 'No, no. These concepts were not dreamed up. They were natural and real.' In 1977, Trautman used these results to demonstrate an equivalence between a quantization condition for magnetic monopoles used by Paul Dirac back in 1931 and the Hopf fibration, a fibration of the 3-sphere proposed in the same year by mathematician Heinz Hopf.[4] Mathematician Jim Simons, discussing this equivalence with Yang, expressed that "Dirac had discovered trivial and nontrivial bundles before mathematicians."[4] In the original paper, Wu and Yang added sources (like the electric current) to the dictionary next to a blank spot, indicating a lack of any equivalent concept on the mathematical side. During interviews, Yang recalled that Singer and Atiyah found great interest in this concept of sources, which was unknown to mathematicians but which physicists had known since the 19th century. Mathematicians started working on that, which led to the development of Donaldson theory by Simon Donaldson, a student of Atiyah.[7][8] The Wu–Yang dictionary relates terms in particle physics with terms in mathematics, specifically fiber bundle theory. Many versions and generalizations of the dictionary exist. Here is an example of a dictionary, which puts each physics term next to its mathematical analogue:[9] Wu and Yang considered the description of an electron traveling around a cylinder in the presence of a magnetic field inside the cylinder (outside the cylinder the field vanishes, i.e. f_μν = 0).
According to the Aharonov–Bohm effect, the interference patterns shift by a factor exp(−iΩ/Ω₀), where Ω is the magnetic flux and Ω₀ is the magnetic flux quantum. For two different fluxes a and b, the results are identical if Ω_a − Ω_b = NΩ₀, where N is an integer. We define the operator S_ab as the gauge transformation that brings the electron wave function from one configuration to the other, ψ_b = S_ba ψ_a. For an electron that takes a path from point P to point Q, we define the phase factor as the exponential of the line integral of the potential along the path,

Φ_QP = exp( (ie/ħc) ∫_P^Q A_μ dx^μ ),

where A_μ is the electromagnetic four-potential. For the case of an SU(2) gauge field, we can make the substitution A_μ = b_μ^k X_k, where X_k = −iσ_k/2 are the generators of SU(2) and σ_k are the Pauli matrices. Under these concepts, Wu and Yang showed that the relation between the language of gauge theory and fiber bundles could be codified in the following dictionary:[2][10][11]
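The flux-quantization statement above can be checked numerically. The sketch below writes the interference phase as exp(−2πiΩ/Ω₀), a common convention in which a shift of exactly one flux quantum Ω₀ leaves the phase unchanged (the 2π placement and the unit choice Ω₀ = 1 are assumptions of this sketch, not taken from the original paper):

```python
import cmath
import math

OMEGA0 = 1.0  # flux quantum, arbitrary units (assumed for the sketch)

def interference_phase(omega):
    # phase factor acquired by the electron for enclosed flux omega;
    # periodic in omega with period OMEGA0 by construction
    return cmath.exp(-2j * math.pi * omega / OMEGA0)

omega_a = 0.37 * OMEGA0
omega_b = omega_a + 5 * OMEGA0    # differs by N = 5 flux quanta
omega_c = omega_a + 0.5 * OMEGA0  # differs by half a quantum

# identical patterns iff the fluxes differ by an integer multiple of OMEGA0
assert cmath.isclose(interference_phase(omega_a), interference_phase(omega_b))
assert not cmath.isclose(interference_phase(omega_a), interference_phase(omega_c))
```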
https://en.wikipedia.org/wiki/Wu%E2%80%93Yang_dictionary
In graph theory, a friendly-index set is a finite set of integers associated with a given undirected graph and generated by a type of graph labeling called a friendly labeling. A friendly labeling of an n-vertex undirected graph G = (V, E) is defined to be an assignment of the values 0 and 1 to the vertices of G with the property that the number of vertices labeled 0 is as close as possible to the number of vertices labeled 1: they should either be equal (for graphs with an even number of vertices) or differ by one (for graphs with an odd number of vertices). Given a friendly labeling of the vertices of G, one may also label the edges: a given edge uv is labeled with a 0 if its endpoints u and v have equal labels, and it is labeled with a 1 if its endpoints have different labels. The friendly index of the labeling is the absolute value of the difference between the number of edges labeled 0 and the number of edges labeled 1. The friendly index set of G, denoted FI(G), is the set of numbers that can arise as friendly indexes of friendly labelings of G.[1] The Dynamic Survey of Graph Labeling contains a list of papers that examine the friendly indices of various graphs.[2]
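For small graphs, FI(G) can be computed directly from the definition by enumerating all friendly labelings. A brute-force sketch (the vertex numbering 0..n−1 and the helper name are illustrative):

```python
from itertools import combinations

def friendly_index_set(n, edges):
    """Brute-force FI(G) for a graph on vertices 0..n-1."""
    fi = set()
    # friendly labelings: exactly floor(n/2) or ceil(n/2) vertices get label 1
    for k in {n // 2, (n + 1) // 2}:
        for ones in combinations(range(n), k):
            s = set(ones)
            # an edge is labeled 1 iff its endpoints get different labels
            e1 = sum(1 for u, v in edges if (u in s) != (v in s))
            e0 = len(edges) - e1
            fi.add(abs(e0 - e1))
    return fi

# the 4-cycle C4: every friendly labeling gives |e0 - e1| equal to 0 or 4
print(friendly_index_set(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # → {0, 4}
```

Labeling two adjacent vertices 1 gives two edges of each kind (index 0); labeling two opposite vertices 1 makes all four edges cross (index 4).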
https://en.wikipedia.org/wiki/Friendly-index_set
In mathematics, and more specifically in homological algebra, the splitting lemma states that in any abelian category, the following statements are equivalent for a short exact sequence

0 → A —q→ B —r→ C → 0:

1. Left split: there exists a morphism t : B → A such that tq is the identity on A.
2. Right split: there exists a morphism u : C → B such that ru is the identity on C.
3. Direct sum: B is isomorphic to the direct sum of A and C, with q corresponding to the natural injection of A and r corresponding to the natural projection onto C.

If any of these statements holds, the sequence is called a split exact sequence, and the sequence is said to split. In the above short exact sequence, where the sequence splits, it allows one to refine the first isomorphism theorem, which states that C ≅ B/ker r = B/q(A), to B ≅ q(A) ⊕ ker t ≅ A ⊕ C, where the first isomorphism theorem is then just the projection onto C. It is a categorical generalization of the rank–nullity theorem (in the form V ≅ ker T ⊕ im T) in linear algebra. First, to show that 3. implies both 1. and 2., we assume 3. and take as t the natural projection of the direct sum onto A, and take as u the natural injection of C into the direct sum. To prove that 1. implies 3., first note that any member of B is in the set (ker t + im q). This follows since for all b in B, b = (b − qt(b)) + qt(b); qt(b) is in im q, and b − qt(b) is in ker t, since t(b − qt(b)) = t(b) − tq(t(b)) = t(b) − t(b) = 0. Next, the intersection of im q and ker t is 0, since if there exists a in A such that q(a) = b, and t(b) = 0, then 0 = tq(a) = a; and therefore, b = 0. This proves that B is the direct sum of im q and ker t. So, for all b in B, b can be uniquely identified by some a in A and k in ker t such that b = q(a) + k. By exactness, ker r = im q. The subsequence B → C → 0 implies that r is onto; therefore for any c in C there exists some b = q(a) + k such that c = r(b) = r(q(a) + k) = r(k). Therefore, for any c in C, there exists k in ker t such that c = r(k), and r(ker t) = C. If r(k) = 0, then k is in im q; since the intersection of im q and ker t is 0, then k = 0. Therefore, the restriction r : ker t → C is an isomorphism; and ker t is isomorphic to C. Finally, im q is isomorphic to A due to the exactness of 0 → A → B; so B is isomorphic to the direct sum of A and C, which proves (3). To show that 2. implies 3., we follow a similar argument. Any member of B is in the set ker r + im u; since for all b in B, b = (b − ur(b)) + ur(b), which is in ker r + im u. The intersection of ker r and im u is 0, since if r(b) = 0 and u(c) = b, then 0 = ru(c) = c.
By exactness, im q = ker r, and since q is an injection, im q is isomorphic to A, so A is isomorphic to ker r. Since ru is a bijection, u is an injection, and thus im u is isomorphic to C. So B is again the direct sum of A and C. An alternative "abstract nonsense" proof of the splitting lemma may be formulated entirely in category theoretic terms. In the form stated here, the splitting lemma does not hold in the full category of groups, which is not an abelian category. It is partially true: if a short exact sequence of groups is left split or a direct sum (1. or 3.), then all of the conditions hold. For a direct sum this is clear, as one can inject from or project to the summands. For a left split sequence, the map t × r : B → A × C gives an isomorphism, so B is a direct sum (3.), and thus inverting the isomorphism and composing with the natural injection C → A × C gives an injection C → B splitting r (2.). However, if a short exact sequence of groups is right split (2.), then it need not be left split or a direct sum (neither 1. nor 3. follows): the problem is that the image of the right splitting need not be normal. What is true in this case is that B is a semidirect product, though not in general a direct product. To form a counterexample, take the smallest non-abelian group B ≅ S₃, the symmetric group on three letters. Let A denote the alternating subgroup, and let C = B/A ≅ {±1}. Let q and r denote the inclusion map and the sign map respectively, so that

0 → A —q→ B —r→ C → 0

is a short exact sequence. 3. fails, because S₃ is not abelian, but 2. holds: we may define u : C → B by mapping the generator to any two-cycle. Note for completeness that 1. fails: any map t : B → A must map every two-cycle to the identity, because a group homomorphism sends a two-cycle, an element of order 2, to an element whose order divides 2, and the only such element of A is the identity; A, the alternating subgroup of S₃, is the cyclic group of order 3, so its non-identity elements have order 3.
But every permutation is a product of two-cycles, so t is the trivial map, whence tq : A → A is the trivial map, not the identity.
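The S₃ counterexample is small enough to verify by exhaustive search: the sign map is right split by sending −1 to any transposition, while the only homomorphism S₃ → A₃ is trivial, so no left splitting can exist. A brute-force sketch (permutations are represented as tuples; the helper names are illustrative):

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))

def compose(p, q):
    # (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def sign(p):
    inversions = sum(1 for i, j in ((0, 1), (0, 2), (1, 2)) if p[i] > p[j])
    return 1 - 2 * (inversions % 2)

A3 = [p for p in S3 if sign(p) == 1]
identity = (0, 1, 2)

# right splitting u: C -> B, sending the generator -1 to the two-cycle (0 1)
u = {1: identity, -1: (1, 0, 2)}
assert all(sign(u[c]) == c for c in (1, -1))  # r o u = id on C

# enumerate every map t: S3 -> A3 and keep the group homomorphisms
homs = []
for images in product(A3, repeat=len(S3)):
    t = dict(zip(S3, images))
    if all(t[compose(p, q)] == compose(t[p], t[q]) for p in S3 for q in S3):
        homs.append(t)

# every homomorphism S3 -> A3 is trivial, so t o q can never be id on A
assert all(t[p] == identity for t in homs for p in S3)
```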
https://en.wikipedia.org/wiki/Splitting_lemma
In mathematics, the inverse function of a function f (also called the inverse of f) is a function that undoes the operation of f. The inverse of f exists if and only if f is bijective, and if it exists, is denoted by f⁻¹. For a function f : X → Y, its inverse f⁻¹ : Y → X admits an explicit description: it sends each element y ∈ Y to the unique element x ∈ X such that f(x) = y. As an example, consider the real-valued function of a real variable given by f(x) = 5x − 7. One can think of f as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore, the inverse of f is the function f⁻¹ : R → R defined by f⁻¹(y) = (y + 7)/5. Let f be a function whose domain is the set X, and whose codomain is the set Y. Then f is invertible if there exists a function g from Y to X such that g(f(x)) = x for all x ∈ X and f(g(y)) = y for all y ∈ Y.[1] If f is invertible, then there is exactly one function g satisfying this property. The function g is called the inverse of f, and is usually denoted as f⁻¹, a notation introduced by John Frederick William Herschel in 1813.[2][3][4][5][6][nb 1] The function f is invertible if and only if it is bijective. This is because the condition g(f(x)) = x for all x ∈ X implies that f is injective, and the condition f(g(y)) = y for all y ∈ Y implies that f is surjective.
The inverse function f⁻¹ to f can be explicitly described as the function sending each y to the unique x with f(x) = y. Recall that if f is an invertible function with domain X and codomain Y, then f⁻¹(f(x)) = x for every x in X and f(f⁻¹(y)) = y for every y in Y. Using the composition of functions, this statement can be rewritten to the following equations between functions: f⁻¹ ∘ f = id_X and f ∘ f⁻¹ = id_Y, where id_X is the identity function on the set X; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation f⁻¹. Repeatedly composing a function f : X → X with itself is called iteration. If f is applied n times, starting with the value x, then this is written as fⁿ(x); so f²(x) = f(f(x)), etc. Since f⁻¹(f(x)) = x, composing f⁻¹ and fⁿ yields fⁿ⁻¹, "undoing" the effect of one application of f. While the notation f⁻¹(x) might be misunderstood,[1] (f(x))⁻¹ certainly denotes the multiplicative inverse of f(x) and has nothing to do with the inverse function of f.[6] The notation f⟨−1⟩ might be used for the inverse function to avoid ambiguity with the multiplicative inverse.[7] In keeping with the general notation, some English authors use expressions like sin⁻¹(x) to denote the inverse of the sine function applied to x (actually a partial inverse; see below).[8][6] Other authors feel that this may be confused with the notation for the multiplicative inverse of sin(x), which can be denoted as (sin(x))⁻¹.[6] To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin arcus).[9][10] For instance, the inverse of the sine function is typically called the arcsine function, written as arcsin(x).[9][10] Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ārea).[10] For instance, the inverse of the hyperbolic sine function is typically written as arsinh(x).[10] The expressions like sin⁻¹(x) can still be useful to distinguish the multivalued inverse from the partial inverse: sin⁻¹(x) = {(−1)ⁿ arcsin(x) + πn : n ∈ Z}. Other inverse special functions are sometimes prefixed with the prefix "inv", if the ambiguity of the f⁻¹ notation should be avoided.[11][10] The function f : R → [0, ∞) given by f(x) = x² is not injective, because (−x)² = x² for all x ∈ R. Therefore, f is not invertible. If the domain of the function is restricted to the nonnegative reals, that is, we take the function f : [0, ∞) → [0, ∞); x ↦ x² with the same rule as before, then the function is bijective and so, invertible.[12] The inverse function here is called the (positive) square root function and is denoted by x ↦ √x. The following table shows several standard functions and their inverses. Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse f⁻¹ of an invertible function f : R → R has an explicit description, sending each y to the unique x with f(x) = y. This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if f is the function f(x) = (2x + 8)³, then to determine f⁻¹(y) for a real number y, one must find the unique real number x such that (2x + 8)³ = y. This equation can be solved: taking cube roots gives 2x + 8 = ∛y, so x = (∛y − 8)/2. Thus the inverse function f⁻¹ is given by the formula f⁻¹(y) = (∛y − 8)/2. Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if f is the function f(x) = x − sin x, then f is a bijection, and therefore possesses an inverse function f⁻¹. The formula for this inverse has an expression as an infinite sum. Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations.
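The worked example f(x) = (2x + 8)³ can be checked numerically: the derived formula f⁻¹(y) = (∛y − 8)/2 undoes f. A short sketch (the real cube root is computed with copysign, since Python's `**` on negative bases with fractional exponents returns complex values):

```python
import math

def f(x):
    return (2 * x + 8) ** 3

def f_inv(y):
    # real cube root of y, then the algebraic steps from the text
    cbrt = math.copysign(abs(y) ** (1 / 3), y)
    return (cbrt - 8) / 2

# f_inv(f(x)) recovers x up to floating-point rounding
for x in (-7.5, -4.0, 0.0, 2.25, 10.0):
    assert math.isclose(f_inv(f(x)), x, abs_tol=1e-9)
```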
If an inverse function exists for a given functionf, then it is unique.[13]This follows since the inverse function must be the converse relation, which is completely determined byf. There is a symmetry between a function and its inverse. Specifically, iffis an invertible function with domainXand codomainY, then its inversef−1has domainYand imageX, and the inverse off−1is the original functionf. In symbols, for functionsf:X→Yandf−1:Y→X,[13] This statement is a consequence of the implication that forfto be invertible it must be bijective. Theinvolutorynature of the inverse can be concisely expressed by[14] The inverse of a composition of functions is given by[15] Notice that the order ofgandfhave been reversed; to undoffollowed byg, we must first undog, and then undof. For example, letf(x) = 3xand letg(x) =x+ 5. Then the compositiong∘fis the function that first multiplies by three and then adds five, To reverse this process, we must first subtract five, and then divide by three, This is the composition(f−1∘g−1)(x). IfXis a set, then theidentity functiononXis its own inverse: More generally, a functionf:X→Xis equal to its own inverse, if and only if the compositionf∘fis equal toidX. Such a function is called aninvolution. Iffis invertible, then the graph of the function is the same as the graph of the equation This is identical to the equationy=f(x)that defines the graph off, except that the roles ofxandyhave been reversed. Thus the graph off−1can be obtained from the graph offby switching the positions of thexandyaxes. This is equivalent toreflectingthe graph across the liney=x.[16][1] By theinverse function theorem, acontinuous functionof a single variablef:A→R{\displaystyle f\colon A\to \mathbb {R} }(whereA⊆R{\displaystyle A\subseteq \mathbb {R} }) is invertible on its range (image) if and only if it is either strictlyincreasing or decreasing(with no localmaxima or minima). 
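The reversed order of composition can be checked on the f(x) = 3x, g(x) = x + 5 example from the text:

```python
def f(x): return 3 * x            # multiply by three
def g(x): return x + 5            # add five

def gf(x): return g(f(x))         # g ∘ f: triple, then add five
def gf_inv(y): return (y - 5) / 3 # f⁻¹ ∘ g⁻¹: subtract five, then divide by three

assert gf(4) == 17
assert gf_inv(17) == 4
# The wrong order (divide by three, then subtract five) fails to undo g ∘ f:
assert (gf(4) / 3) - 5 != 4
```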
For example, the function is invertible, since thederivativef′(x) = 3x2+ 1is always positive. If the functionfisdifferentiableon an intervalIandf′(x) ≠ 0for eachx∈I, then the inversef−1is differentiable onf(I).[17]Ify=f(x), the derivative of the inverse is given by the inverse function theorem, UsingLeibniz's notationthe formula above can be written as This result follows from thechain rule(see the article oninverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. Specifically, a continuously differentiablemultivariable functionf:Rn→Rnis invertible in a neighborhood of a pointpas long as theJacobian matrixoffatpisinvertible. In this case, the Jacobian off−1atf(p)is thematrix inverseof the Jacobian offatp. Even if a functionfis not one-to-one, it may be possible to define apartial inverseoffbyrestrictingthe domain. For example, the function is not one-to-one, sincex2= (−x)2. However, the function becomes one-to-one if we restrict to the domainx≥ 0, in which case (If we instead restrict to the domainx≤ 0, then the inverse is the negative of the square root ofy.) Alternatively, there is no need to restrict the domain if we are content with the inverse being amultivalued function: Sometimes, this multivalued inverse is called thefull inverseoff, and the portions (such as√xand −√x) are calledbranches. The most important branch of a multivalued function (e.g. the positive square root) is called theprincipal branch, and its value atyis called theprincipal valueoff−1(y). For a continuous function on the real line, one branch is required between each pair oflocal extrema. For example, the inverse of acubic functionwith a local maximum and a local minimum has three branches (see the adjacent picture). The above considerations are particularly important for defining the inverses oftrigonometric functions. 
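The derivative formula for the inverse, stated earlier, can be verified numerically on f(x) = x³ + x. The sketch below recovers f⁻¹ by bisection (valid since f is strictly increasing) and compares a central difference of f⁻¹ with 1/f′(x):

```python
def f(x): return x ** 3 + x
def f_prime(x): return 3 * x ** 2 + 1

def f_inv(y, lo=-10.0, hi=10.0, tol=1e-12):
    # f is strictly increasing, so bisection recovers x from y.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = 1.5
y = f(x)
h = 1e-6
numeric = (f_inv(y + h) - f_inv(y - h)) / (2 * h)  # central difference of f⁻¹ at y
assert abs(numeric - 1 / f_prime(x)) < 1e-4        # matches 1 / f'(x)
```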
For example, thesine functionis not one-to-one, since for every realx(and more generallysin(x+ 2πn) = sin(x)for everyintegern). However, the sine is one-to-one on the interval[−⁠π/2⁠,⁠π/2⁠], and the corresponding partial inverse is called thearcsine. This is considered the principal branch of the inverse sine, so the principal value of the inverse sine is always between −⁠π/2⁠and⁠π/2⁠. The following table describes the principal branch of each inverse trigonometric function:[19] Function compositionon the left and on the right need not coincide. In general, the conditions imply different properties off. For example, letf:R→[0, ∞)denote the squaring map, such thatf(x) =x2for allxinR, and letg:[0, ∞)→Rdenote the square root map, such thatg(x) =√xfor allx≥ 0. Thenf(g(x)) =xfor allxin[0, ∞); that is,gis a right inverse tof. However,gis not a left inverse tof, since, e.g.,g(f(−1)) = 1 ≠ −1. Iff:X→Y, aleft inverseforf(orretractionoff) is a functiong:Y→Xsuch that composingfwithgfrom the left gives the identity function[20]g∘f=idX⁡.{\displaystyle g\circ f=\operatorname {id} _{X}{\text{.}}}That is, the functiongsatisfies the rule The functiongmust equal the inverse offon the image off, but may take any values for elements ofYnot in the image. A functionfwith nonempty domain is injective if and only if it has a left inverse.[21]An elementary proof runs as follows: If nonemptyf:X→Yis injective, construct a left inverseg:Y→Xas follows: for ally∈Y, ifyis in the image off, then there existsx∈Xsuch thatf(x) =y. Letg(y) =x; this definition is unique becausefis injective. Otherwise, letg(y)be an arbitrary element ofX. For allx∈X,f(x)is in the image off. By construction,g(f(x)) =x, the condition for a left inverse. In classical mathematics, every injective functionfwith a nonempty domain necessarily has a left inverse; however, this may fail inconstructive mathematics. 
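The squaring/square-root example of a right inverse that is not a left inverse can be checked directly:

```python
import math

def f(x): return x * x            # f: R → [0, ∞)
def g(y): return math.sqrt(y)     # g: [0, ∞) → R

# g is a right inverse of f: f(g(y)) = y for all y ≥ 0 ...
for y in [0.0, 2.0, 9.0]:
    assert math.isclose(f(g(y)), y)

# ... but g is not a left inverse of f: g(f(-1)) = 1, not -1.
assert g(f(-1)) == 1.0
```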
For instance, a left inverse of theinclusion{0,1} →Rof the two-element set in the reals violatesindecomposabilityby giving aretractionof the real line to the set{0,1}.[22] Aright inverseforf(orsectionoff) is a functionh:Y→Xsuch that That is, the functionhsatisfies the rule Thus,h(y)may be any of the elements ofXthat map toyunderf. A functionfhas a right inverse if and only if it issurjective(though constructing such an inverse in general requires theaxiom of choice). An inverse that is both a left and right inverse (atwo-sided inverse), if it exists, must be unique. In fact, if a function has a left inverse and a right inverse, they are both the same two-sided inverse, so it can be calledthe inverse. A function has a two-sided inverse if and only if it is bijective. Iff:X→Yis any function (not necessarily invertible), thepreimage(orinverse image) of an elementy∈Yis defined to be the set of all elements ofXthat map toy: The preimage ofycan be thought of as theimageofyunder the (multivalued) full inverse of the functionf. The notion can be generalized to subsets of the range. Specifically, ifSis anysubsetofY, the preimage ofS, denoted byf−1(S){\displaystyle f^{-1}(S)}, is the set of all elements ofXthat map toS: For example, take the functionf:R→R;x↦x2. This function is not invertible as it is not bijective, but preimages may be defined for subsets of the codomain, e.g. The original notion and its generalization are related by the identityf−1(y)=f−1({y}),{\displaystyle f^{-1}(y)=f^{-1}(\{y\}),}The preimage of a single elementy∈Y– asingleton set{y}– is sometimes called thefiberofy. WhenYis the set of real numbers, it is common to refer tof−1({y})as alevel set.
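Over a finite domain, a preimage is easy to compute by brute force; the `preimage` helper below is illustrative, not a standard library function:

```python
def preimage(f, domain, S):
    """All elements of the domain that f maps into the set S."""
    return {x for x in domain if f(x) in S}

domain = range(-3, 4)             # {-3, ..., 3}
f = lambda x: x * x

assert preimage(f, domain, {1, 4}) == {-2, -1, 1, 2}
assert preimage(f, domain, {3}) == set()   # nothing maps to 3
```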
https://en.wikipedia.org/wiki/Inverse_function#Left_and_right_inverses
In mathematics, particularly in combinatorics, given a family of sets, here called a collection C, a transversal (also called a cross-section[1][2][3]) is a set containing exactly one element from each member of the collection. When the sets of the collection are mutually disjoint, each element of the transversal corresponds to exactly one member of C (the set it is a member of). If the original sets are not disjoint, there are two possibilities for the definition of a transversal: In computer science, computing transversals is useful in several application domains, with the input family of sets often being described as a hypergraph. In set theory, the axiom of choice is equivalent to the statement that every partition has a transversal.[7] A fundamental question in the study of systems of distinct representatives (SDRs) is whether or not an SDR exists. Hall's marriage theorem gives necessary and sufficient conditions for a finite collection of sets, some possibly overlapping, to have a transversal. The condition is that, for every integer k, the union of every k of the sets must contain at least k different elements.[4]: 29 The following refinement by H. J. Ryser gives lower bounds on the number of such SDRs.[8]: 48 Theorem. Let S1, S2, ..., Sm be a collection of sets such that Si1 ∪ Si2 ∪ ⋯ ∪ Sik contains at least k elements for k = 1, 2, ..., m and for all k-combinations {i1, i2, …, ik} of the integers 1, 2, ..., m, and suppose that each of these sets contains at least t elements. If t ≤ m then the collection has at least t! SDRs, and if t > m then the collection has at least t!/(t − m)! SDRs. One can construct a bipartite graph in which the vertices on one side are the sets, the vertices on the other side are the elements, and the edges connect a set to the elements it contains. Then, a transversal (defined as a system of distinct representatives) is equivalent to a perfect matching in this graph.
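The matching view suggests an algorithm: the standard augmenting-path method for bipartite matching finds an SDR whenever one exists. A small illustrative sketch (the function name `find_sdr` is ad hoc):

```python
def find_sdr(sets):
    """System of distinct representatives via augmenting paths
    (a transversal as a perfect matching in the set-element bipartite graph)."""
    match = {}  # element -> index of the set it currently represents

    def try_assign(i, seen):
        for x in sets[i]:
            if x in seen:
                continue
            seen.add(x)
            # x is free, or the set currently using x can be re-matched elsewhere
            if x not in match or try_assign(match[x], seen):
                match[x] = i
                return True
        return False

    for i in range(len(sets)):
        if not try_assign(i, set()):
            return None  # Hall's condition fails: no SDR exists
    return {i: x for x, i in match.items()}

sdr = find_sdr([{1, 2}, {2, 3}, {1, 3}])
assert sdr is not None and len(set(sdr.values())) == 3
assert find_sdr([{1}, {2}, {1, 2}]) is None  # three sets, only two elements
```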
One can construct ahypergraphin which the vertices are the elements, and the hyperedges are the sets. Then, a transversal (defined as a system ofnot-necessarily-distinctrepresentatives) is avertex coverin a hypergraph. Ingroup theory, given asubgroupHof a groupG, a right (respectively left) transversal is asetcontaining exactly one element from each right (respectively left)cosetofH. In this case, the "sets" (cosets) are mutually disjoint, i.e. the cosets form apartitionof the group. As a particular case of the previous example, given adirect product of groupsG=H×K{\displaystyle G=H\times K}, thenHis a transversal for the cosets ofK. In general, since anyequivalence relationon an arbitrary set gives rise to a partition, picking any representative from eachequivalence classresults in a transversal. Another instance of a partition-based transversal occurs when one considers the equivalence relation known as the(set-theoretic) kernel of a function, defined for a functionf{\displaystyle f}withdomainXas the partition of the domainker⁡f:={{y∈X∣f(x)=f(y)}∣x∈X}{\displaystyle \operatorname {ker} f:=\left\{\,\left\{\,y\in X\mid f(x)=f(y)\,\right\}\mid x\in X\,\right\}}. which partitions the domain offinto equivalence classes such that all elements in a class map viafto the same value. Iffis injective, there is only one transversal ofker⁡f{\displaystyle \operatorname {ker} f}. For a not-necessarily-injectivef, fixing a transversalTofker⁡f{\displaystyle \operatorname {ker} f}induces a one-to-one correspondence betweenTand theimageoff, henceforth denoted byIm⁡f{\displaystyle \operatorname {Im} f}. 
Consequently, a functiong:(Im⁡f)→T{\displaystyle g:(\operatorname {Im} f)\to T}is well defined by the property that for allzinIm⁡f,g(z)=x{\displaystyle \operatorname {Im} f,g(z)=x}wherexis the unique element inTsuch thatf(x)=z{\displaystyle f(x)=z}; furthermore,gcan be extended (not necessarily in a unique manner) so that it is defined on the wholecodomainoffby picking arbitrary values forg(z)whenzis outside the image off. It is a simple calculation to verify thatgthus defined has the property thatf∘g∘f=f{\displaystyle f\circ g\circ f=f}, which is the proof (when the domain and codomain offare the same set) that thefull transformation semigroupis aregular semigroup.g{\displaystyle g}acts as a (not necessarily unique)quasi-inverseforf; within semigroup theory this is simply called an inverse. Note however that for an arbitrarygwith the aforementioned property the "dual" equationg∘f∘g=g{\displaystyle g\circ f\circ g=g}may not hold. However if we denote byh=g∘f∘g{\displaystyle h=g\circ f\circ g}, thenfis a quasi-inverse ofh, i.e.h∘f∘h=h{\displaystyle h\circ f\circ h=h}. Acommon transversalof the collectionsAandB(where|A|=|B|=n{\displaystyle |A|=|B|=n}) is a set that is a transversal of bothAandB. The collectionsAandBhave a common transversal if and only if, for allI,J⊂{1,...,n}{\displaystyle I,J\subset \{1,...,n\}}, Apartial transversalis a set containing at most one element from each member of the collection, or (in the stricter form of the concept) a set with an injection from the set toC. The transversals of a finite collectionCof finite sets form the basis sets of amatroid, thetransversal matroidofC. The independent sets of the transversal matroid are the partial transversals ofC.[10] Anindependent transversal(also called arainbow-independent setorindependent system of representatives) is a transversal which is also anindependent setof a given graph. 
To explain the difference in figurative terms, consider a faculty withmdepartments, where the faculty dean wants to construct a committee ofmmembers, one member per department. Such a committee is a transversal. But now, suppose that some faculty members dislike each other and do not agree to sit in the committee together. In this case, the committee must be an independent transversal, where the underlying graph describes the "dislike" relations.[11] Another generalization of the concept of a transversal would be a set that just has a non-empty intersection with each member ofC. An example of the latter would be aBernstein set, which is defined as a set that has a non-empty intersection with each set ofC, but contains no set ofC, whereCis the collection of allperfect setsof a topologicalPolish space. As another example, letCconsist of all the lines of aprojective plane, then ablocking setin this plane is a set of points which intersects each line but contains no line. In the language ofcategory theory, atransversalof a collection of mutually disjoint sets is asectionof thequotient mapinduced by the collection. Thecomputational complexityof computing all transversals of an inputfamily of setshas been studied, in particular in the framework ofenumeration algorithms.
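The quasi-inverse construction described earlier, picking a transversal of ker f, can be sketched for a finite function: the first element seen for each image value serves as the transversal representative, and g is extended arbitrarily off the image. The helper name `quasi_inverse` is ad hoc:

```python
def quasi_inverse(f, domain, codomain):
    """Build g with f∘g∘f = f by choosing one representative (a transversal
    of ker f) per image value, and arbitrary values outside the image."""
    g = {}
    for x in domain:               # first x seen for each value is the representative
        g.setdefault(f(x), x)
    default = next(iter(domain))   # arbitrary value for z outside the image
    return {z: g.get(z, default) for z in codomain}

domain = [-2, -1, 0, 1, 2]
codomain = [0, 1, 2, 3, 4]
f = lambda x: x * x

g = quasi_inverse(f, domain, codomain)
# f(g(f(x))) = f(x) for every x, even though g(f(x)) may differ from x
assert all(f(g[f(x)]) == f(x) for x in domain)
```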
https://en.wikipedia.org/wiki/Transversal_(combinatorics)
In mathematics, some functions or groups of functions are important enough to deserve their own names. This is a listing of articles which explain some of these functions in more detail. There is a large theory of special functions which developed out of statistics and mathematical physics. A modern, abstract point of view contrasts large function spaces, which are infinite-dimensional and within which most functions are 'anonymous', with special functions picked out by properties such as symmetry, or relationship to harmonic analysis and group representations. See also List of types of functions. Elementary functions are functions built from basic operations (e.g. addition, exponentials, logarithms, ...). Algebraic functions are functions that can be expressed as the solution of a polynomial equation with integer coefficients. Transcendental functions are functions that are not algebraic.
https://en.wikipedia.org/wiki/List_of_mathematical_functions
Sets can be classified according to the properties they have.
https://en.wikipedia.org/wiki/List_of_types_of_sets
In applied mathematics, test functions, known as artificial landscapes, are useful to evaluate characteristics of optimization algorithms, such as convergence rate, precision, robustness and general performance. Here some test functions are presented with the aim of giving an idea about the different situations that optimization algorithms have to face when coping with these kinds of problems. In the first part, some objective functions for single-objective optimization cases are presented. In the second part, test functions with their respective Pareto fronts for multi-objective optimization problems (MOPs) are given. The artificial landscapes presented herein for single-objective optimization problems are taken from Bäck,[1] Haupt et al.[2] and from Rody Oldenhuis software.[3] Given the number of problems (55 in total), just a few are presented here. The test functions used to evaluate the algorithms for MOPs were taken from Deb,[4] Binh et al.[5] and Binh.[6] The software developed by Deb can be downloaded,[7] which implements the NSGA-II procedure with GAs, or the program posted on the Internet,[8] which implements the NSGA-II procedure with ES. Just a general form of the equation, a plot of the objective function, boundaries of the object variables and the coordinates of global minima are given herein.
[A table of benchmark functions originally appeared here; only scattered formula fragments survived extraction. Recognizable pieces include the Rastrigin parameter A = 10; the Ackley terms −exp[0.5(cos 2πx + cos 2πy)] + e + 20; single terms of the Beale, Goldstein–Price and Lévi N.13 functions; the general Shekel form f(x₁, …, xₙ) = Σᵢ (cᵢ + Σⱼ (xⱼ − aᵢⱼ)²)⁻¹; and the constraints of several constrained problems, among them x² + y² ≤ 2, (x + 5)² + (y + 5)² < 25, a heart-shaped constraint with t = Atan2(x, y), and 0.75 − ∏ᵢ xᵢ < 0 together with Σᵢ xᵢ − 7.5m < 0.]
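Two of the standard single-objective landscapes referred to above, Rastrigin (with A = 10) and Ackley, can be written in a few lines; both have their global minimum of 0 at the origin:

```python
import math

def rastrigin(xs, A=10):
    # f(x) = A·n + Σ (x_i² − A·cos(2π x_i)); highly multimodal
    return A * len(xs) + sum(x * x - A * math.cos(2 * math.pi * x) for x in xs)

def ackley(x, y):
    return (-20 * math.exp(-0.2 * math.sqrt(0.5 * (x * x + y * y)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)))
            + math.e + 20)

assert abs(rastrigin([0.0, 0.0])) < 1e-12   # global minimum at the origin
assert abs(ackley(0.0, 0.0)) < 1e-12
assert rastrigin([1.0, 1.0]) > 0            # positive away from the origin
```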
https://en.wikipedia.org/wiki/Test_functions_for_optimization
The following list features abbreviated names of mathematical functions, function-like operators and other mathematical terminology.
https://en.wikipedia.org/wiki/List_of_mathematical_abbreviations
This is a list of special function eponyms in mathematics, covering the theory of special functions, the differential equations they satisfy, and named differential operators of the theory (but not intended to include every mathematical eponym). Named symmetric functions and other special polynomials are also included.
https://en.wikipedia.org/wiki/List_of_special_functions_and_eponyms
In analytical chemistry, a calibration curve, also known as a standard curve, is a general method for determining the concentration of a substance in an unknown sample by comparing the unknown to a set of standard samples of known concentration.[1] A calibration curve is one approach to the problem of instrument calibration; other standard approaches may mix the standard into the unknown, giving an internal standard. The calibration curve is a plot of how the instrumental response, the so-called analytical signal, changes with the concentration of the analyte (the substance to be measured). In more general use, a calibration curve is a curve or table for a measuring instrument which measures some parameter indirectly, giving values for the desired quantity as a function of values of sensor output. For example, a calibration curve can be made for a particular pressure transducer to determine applied pressure from transducer output (a voltage).[2] Such a curve is typically used when an instrument uses a sensor whose calibration varies from one sample to another, or changes with time or use; if sensor output is consistent the instrument would be marked directly in terms of the measured unit. The operator prepares a series of standards across a range of concentrations near the expected concentration of analyte in the unknown. The concentrations of the standards must lie within the working range of the technique (instrumentation) they are using.[3] Analyzing each of these standards using the chosen technique will produce a series of measurements. For most analyses a plot of instrument response vs. concentration will show a linear relationship. The operator can measure the response of the unknown and, using the calibration curve, can interpolate to find the concentration of analyte. The data – the concentrations of the analyte and the instrument response for each standard – can be fit to a straight line, using linear regression analysis.
This yields a model described by the equationy = mx + y0, whereyis the instrument response,mrepresents the sensitivity, andy0is a constant that describes the background. The analyte concentration (x) of unknown samples may be calculated from this equation. Many different variables can be used as the analytical signal. For instance,chromium(III) might be measured using achemiluminescencemethod, in an instrument that contains aphotomultiplier tube(PMT) as the detector. The detector converts the light produced by the sample into a voltage, which increases with intensity of light. The amount of light measured is the analytical signal. TheBradford assayis a colorimetric assay that measures protein concentration. ThereagentCoomassie brilliant blueturns blue when it binds toarginineandaromaticamino acidspresent in proteins, thus increasing theabsorbanceof the sample. The absorbance is measured using aspectrophotometer, at the maximum absorbance frequency (Amax) of the blue dye (which is 595 nm). In this case, the greater the absorbance, the higher the protein concentration. Data for known concentrations of protein are used to make the standard curve, plotting concentration on the X axis, and the assay measurement on the Y axis. The same assay is then performed with samples of unknown concentration. To analyze the data, one locates the measurement on the Y-axis that corresponds to the assay measurement of the unknown substance and follows a line to intersect the standard curve. The corresponding value on the X-axis is the concentration of substance in the unknown sample.[4][5] As expected, the concentration of the unknown will have some error which can be calculated from the formula below.[6][7][8]This formula assumes that a linear relationship is observed for all the standards. 
It is important to note that the error in the concentration will be minimal if the signal from the unknown lies in the middle of the signals of all the standards (the termyunk−y¯{\displaystyle y_{\text{unk}}-{\bar {y}}}goes to zero ifyunk=y¯{\displaystyle y_{\text{unk}}={\bar {y}}}) sx=sy|m|1n+1k+(yunk−y¯)2m2∑i(xi−x¯)2{\displaystyle s_{x}={\frac {s_{y}}{|m|}}{\sqrt {{\frac {1}{n}}+{\frac {1}{k}}+{\frac {(y_{\text{unk}}-{\bar {y}})^{2}}{m^{2}\sum _{i}{(x_{i}-{\bar {x}})^{2}}}}}}} Most analytical techniques use a calibration curve. There are a number of advantages to this approach. First, the calibration curve provides a reliable way to calculate the uncertainty of the concentration calculated from the calibration curve (using the statistics of theleast squaresline fit to the data).[9][10] Second, the calibration curve provides data on an empirical relationship. The mechanism for the instrument's response to the analyte may be predicted or understood according to some theoretical model, but most such models have limited value for real samples. (Instrumental response is usually highly dependent on the condition of the analyte,solventsused and impurities it may contain; it could also be affected by external factors such as pressure and temperature.) Many theoretical relationships, such asfluorescence, require the determination of an instrumental constant anyway, by analysis of one or more reference standards; a calibration curve is a convenient extension of this approach. The calibration curve for a particular analyte in a particular (type of) sample provides the empirical relationship needed for those particular measurements. The chief disadvantages are (1) that the standards require a supply of the analyte material, preferably of high purity and in known concentration, and (2) that the standards and the unknown are in the same matrix. Some analytes – e.g., particular proteins – are extremely difficult to obtain pure in sufficient quantity. 
Other analytes are often in complex matrices, e.g., heavy metals in pond water. In this case, the matrix may interfere with or attenuate the signal of the analyte. Therefore, a comparison between the standards (which contain no interfering compounds) and the unknown is not possible. The method ofstandard additionis a way to handle such a situation.
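The basic workflow described above — fit y = m·x + y0 to the standards by least squares, then invert the line to read off an unknown concentration — can be sketched with made-up standard data:

```python
def fit_line(xs, ys):
    # Least-squares slope m and intercept y0 for y = m·x + y0
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return m, ybar - m * xbar

# Hypothetical standards: concentration (x) vs. instrument response (y)
conc = [0.0, 1.0, 2.0, 3.0, 4.0]
signal = [0.10, 2.05, 4.12, 6.08, 8.15]

m, y0 = fit_line(conc, signal)
unknown_signal = 5.0
unknown_conc = (unknown_signal - y0) / m   # invert y = m·x + y0
assert 2.0 < unknown_conc < 3.0            # falls between the 2.0 and 3.0 standards
```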
https://en.wikipedia.org/wiki/Calibration_curve
Curve-fitting compaction is data compaction accomplished by replacing data to be stored or transmitted with an analytical expression. Examples of curve-fitting compaction include discretization followed by interpolation. This article incorporates public domain material from Federal Standard 1037C, General Services Administration.
https://en.wikipedia.org/wiki/Curve-fitting_compaction
In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification). Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of variable or category granularity, as when multiple discrete variables are aggregated or multiple discrete categories fused. Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand. The terms discretization and quantization often have the same denotation but not always identical connotations. (Specifically, the two terms share a semantic field.) The same is true of discretization error and quantization error. Mathematical methods relating to discretization include the Euler–Maruyama method and the zero-order hold. Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing.
The following continuous-timestate space model x˙(t)=Ax(t)+Bu(t)+w(t)y(t)=Cx(t)+Du(t)+v(t){\displaystyle {\begin{aligned}{\dot {\mathbf {x} }}(t)&=\mathbf {Ax} (t)+\mathbf {Bu} (t)+\mathbf {w} (t)\\[2pt]\mathbf {y} (t)&=\mathbf {Cx} (t)+\mathbf {Du} (t)+\mathbf {v} (t)\end{aligned}}} wherevandware continuous zero-meanwhite noisesources withpower spectral densities w(t)∼N(0,Q)v(t)∼N(0,R){\displaystyle {\begin{aligned}\mathbf {w} (t)&\sim N(0,\mathbf {Q} )\\[2pt]\mathbf {v} (t)&\sim N(0,\mathbf {R} )\end{aligned}}} can be discretized, assumingzero-order holdfor the inputuand continuous integration for the noisev, to x[k+1]=Adx[k]+Bdu[k]+w[k]y[k]=Cdx[k]+Ddu[k]+v[k]{\displaystyle {\begin{aligned}\mathbf {x} [k+1]&=\mathbf {A_{d}x} [k]+\mathbf {B_{d}u} [k]+\mathbf {w} [k]\\[2pt]\mathbf {y} [k]&=\mathbf {C_{d}x} [k]+\mathbf {D_{d}u} [k]+\mathbf {v} [k]\end{aligned}}} with covariances w[k]∼N(0,Qd)v[k]∼N(0,Rd){\displaystyle {\begin{aligned}\mathbf {w} [k]&\sim N(0,\mathbf {Q_{d}} )\\[2pt]\mathbf {v} [k]&\sim N(0,\mathbf {R_{d}} )\end{aligned}}} where Ad=eAT=L−1{(sI−A)−1}t=TBd=(∫τ=0TeAτdτ)BCd=CDd=DQd=∫τ=0TeAτQeA⊤τdτRd=R1T{\displaystyle {\begin{aligned}\mathbf {A_{d}} &=e^{\mathbf {A} T}={\mathcal {L}}^{-1}{\Bigl \{}(s\mathbf {I} -\mathbf {A} )^{-1}{\Bigr \}}_{t=T}\\[4pt]\mathbf {B_{d}} &=\left(\int _{\tau =0}^{T}e^{\mathbf {A} \tau }d\tau \right)\mathbf {B} \\[4pt]\mathbf {C_{d}} &=\mathbf {C} \\[8pt]\mathbf {D_{d}} &=\mathbf {D} \\[2pt]\mathbf {Q_{d}} &=\int _{\tau =0}^{T}e^{\mathbf {A} \tau }\mathbf {Q} e^{\mathbf {A} ^{\top }\tau }d\tau \\[2pt]\mathbf {R_{d}} &=\mathbf {R} {\frac {1}{T}}\end{aligned}}} andTis thesample time. 
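For a scalar system the discretization formulas above reduce to closed forms that are easy to check. A minimal sketch (the Riemann-sum cross-check is purely illustrative; in practice the matrix case is handled with a library routine such as scipy.linalg.expm):

```python
import math

# Scalar system xdot = a·x + b·u with sample time T
a, b, T = -0.5, 2.0, 0.1

Ad = math.exp(a * T)
# Bd = (∫₀ᵀ e^{aτ} dτ)·b, evaluated in closed form for the scalar case
Bd = (math.exp(a * T) - 1) / a * b

# Cross-check Bd against a midpoint Riemann sum of the integral
n = 100_000
riemann = sum(math.exp(a * (k + 0.5) * T / n) * (T / n) for k in range(n)) * b
assert abs(Bd - riemann) < 1e-8
```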
IfAisnonsingular,Bd=A−1(Ad−I)B.{\displaystyle \mathbf {B_{d}} =\mathbf {A} ^{-1}(\mathbf {A_{d}} -\mathbf {I} )\mathbf {B} .} The equation for the discretized measurement noise is a consequence of the continuous measurement noise being defined with a power spectral density.[1] A clever trick to computeAdandBdin one step is by utilizing the following property:[2]: p. 215 e[AB00]T=[AdBd0I]{\displaystyle e^{{\begin{bmatrix}\mathbf {A} &\mathbf {B} \\\mathbf {0} &\mathbf {0} \end{bmatrix}}T}={\begin{bmatrix}\mathbf {A_{d}} &\mathbf {B_{d}} \\\mathbf {0} &\mathbf {I} \end{bmatrix}}} WhereAdandBdare the discretized state-space matrices. Numerical evaluation ofQdis a bit trickier due to the matrix exponential integral. It can, however, be computed by first constructing a matrix, and computing the exponential of it[3]F=[−AQ0A⊤]TG=eF=[…Ad−1Qd0Ad⊤]{\displaystyle {\begin{aligned}\mathbf {F} &={\begin{bmatrix}-\mathbf {A} &\mathbf {Q} \\\mathbf {0} &\mathbf {A} ^{\top }\end{bmatrix}}T\\[2pt]\mathbf {G} &=e^{\mathbf {F} }={\begin{bmatrix}\dots &\mathbf {A_{d}} ^{-1}\mathbf {Q_{d}} \\\mathbf {0} &\mathbf {A_{d}} ^{\top }\end{bmatrix}}\end{aligned}}}The discretized process noise is then evaluated by multiplying the transpose of the lower-right partition ofGwith the upper-right partition ofG:Qd=(Ad⊤)⊤(Ad−1Qd)=Ad(Ad−1Qd).{\displaystyle \mathbf {Q_{d}} =(\mathbf {A_{d}} ^{\top })^{\top }(\mathbf {A_{d}} ^{-1}\mathbf {Q_{d}} )=\mathbf {A_{d}} (\mathbf {A_{d}} ^{-1}\mathbf {Q_{d}} ).} Starting with the continuous modelx˙(t)=Ax(t)+Bu(t){\displaystyle \mathbf {\dot {x}} (t)=\mathbf {Ax} (t)+\mathbf {Bu} (t)}we know that thematrix exponentialisddteAt=AeAt=eAtA{\displaystyle {\frac {d}{dt}}e^{\mathbf {A} t}=\mathbf {A} e^{\mathbf {A} t}=e^{\mathbf {A} t}\mathbf {A} }and by premultiplying the model we gete−Atx˙(t)=e−AtAx(t)+e−AtBu(t){\displaystyle e^{-\mathbf {A} t}\mathbf {\dot {x}} (t)=e^{-\mathbf {A} t}\mathbf {Ax} (t)+e^{-\mathbf {A} t}\mathbf {Bu} (t)}which we recognize 
asddt[e−Atx(t)]=e−AtBu(t){\displaystyle {\frac {d}{dt}}{\Bigl [}e^{-\mathbf {A} t}\mathbf {x} (t){\Bigr ]}=e^{-\mathbf {A} t}\mathbf {Bu} (t)}and by integrating,e−Atx(t)−e0x(0)=∫0te−AτBu(τ)dτx(t)=eAtx(0)+∫0teA(t−τ)Bu(τ)dτ{\displaystyle {\begin{aligned}e^{-\mathbf {A} t}\mathbf {x} (t)-e^{0}\mathbf {x} (0)&=\int _{0}^{t}e^{-\mathbf {A} \tau }\mathbf {Bu} (\tau )d\tau \\[2pt]\mathbf {x} (t)&=e^{\mathbf {A} t}\mathbf {x} (0)+\int _{0}^{t}e^{\mathbf {A} (t-\tau )}\mathbf {Bu} (\tau )d\tau \end{aligned}}}which is an analytical solution to the continuous model. Now we want to discretise the above expression. We assume thatuisconstantduring each timestep.x[k]=defx(kT)x[k]=eAkTx(0)+∫0kTeA(kT−τ)Bu(τ)dτx[k+1]=eA(k+1)Tx(0)+∫0(k+1)TeA[(k+1)T−τ]Bu(τ)dτx[k+1]=eAT[eAkTx(0)+∫0kTeA(kT−τ)Bu(τ)dτ]+∫kT(k+1)TeA(kT+T−τ)Bu(τ)dτ{\displaystyle {\begin{aligned}\mathbf {x} [k]&\,{\stackrel {\mathrm {def} }{=}}\ \mathbf {x} (kT)\\[6pt]\mathbf {x} [k]&=e^{\mathbf {A} kT}\mathbf {x} (0)+\int _{0}^{kT}e^{\mathbf {A} (kT-\tau )}\mathbf {Bu} (\tau )d\tau \\[4pt]\mathbf {x} [k+1]&=e^{\mathbf {A} (k+1)T}\mathbf {x} (0)+\int _{0}^{(k+1)T}e^{\mathbf {A} [(k+1)T-\tau ]}\mathbf {Bu} (\tau )d\tau \\[2pt]\mathbf {x} [k+1]&=e^{\mathbf {A} T}\left[e^{\mathbf {A} kT}\mathbf {x} (0)+\int _{0}^{kT}e^{\mathbf {A} (kT-\tau )}\mathbf {Bu} (\tau )d\tau \right]+\int _{kT}^{(k+1)T}e^{\mathbf {A} (kT+T-\tau )}\mathbf {B} \mathbf {u} (\tau )d\tau \end{aligned}}}We recognize the bracketed expression asx[k]{\displaystyle \mathbf {x} [k]}, and the second term can be simplified by substituting with the functionv(τ)=kT+T−τ{\displaystyle v(\tau )=kT+T-\tau }. Note thatdτ=−dv{\displaystyle d\tau =-dv}. 
We also assume thatuis constant during theintegral, which in turn yields x[k+1]=eATx[k]−(∫v(kT)v((k+1)T)eAvdv)Bu[k]=eATx[k]−(∫T0eAvdv)Bu[k]=eATx[k]+(∫0TeAvdv)Bu[k]=eATx[k]+A−1(eAT−I)Bu[k]{\displaystyle {\begin{aligned}\mathbf {x} [k+1]&=e^{\mathbf {A} T}\mathbf {x} [k]-\left(\int _{v(kT)}^{v((k+1)T)}e^{\mathbf {A} v}dv\right)\mathbf {Bu} [k]\\[2pt]&=e^{\mathbf {A} T}\mathbf {x} [k]-\left(\int _{T}^{0}e^{\mathbf {A} v}dv\right)\mathbf {Bu} [k]\\[2pt]&=e^{\mathbf {A} T}\mathbf {x} [k]+\left(\int _{0}^{T}e^{\mathbf {A} v}dv\right)\mathbf {Bu} [k]\\[4pt]&=e^{\mathbf {A} T}\mathbf {x} [k]+\mathbf {A} ^{-1}\left(e^{\mathbf {A} T}-\mathbf {I} \right)\mathbf {Bu} [k]\end{aligned}}} which is an exact solution to the discretization problem. WhenAis singular, the latter expression can still be used by replacingeAT{\displaystyle e^{\mathbf {A} T}}by itsTaylor expansion,eAT=∑k=0∞1k!(AT)k.{\displaystyle e^{\mathbf {A} T}=\sum _{k=0}^{\infty }{\frac {1}{k!}}(\mathbf {A} T)^{k}.}This yieldsx[k+1]=eATx[k]+(∫0TeAvdv)Bu[k]=(∑k=0∞1k!(AT)k)x[k]+(∑k=1∞1k!Ak−1Tk)Bu[k],{\displaystyle {\begin{aligned}\mathbf {x} [k+1]&=e^{\mathbf {A} T}\mathbf {x} [k]+\left(\int _{0}^{T}e^{\mathbf {A} v}dv\right)\mathbf {Bu} [k]\\[2pt]&=\left(\sum _{k=0}^{\infty }{\frac {1}{k!}}(\mathbf {A} T)^{k}\right)\mathbf {x} [k]+\left(\sum _{k=1}^{\infty }{\frac {1}{k!}}\mathbf {A} ^{k-1}T^{k}\right)\mathbf {Bu} [k],\end{aligned}}}which is the form used in practice. Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on the fact that for small timestepseAT≈I+AT{\displaystyle e^{\mathbf {A} T}\approx \mathbf {I} +\mathbf {A} T}. The approximate solution then becomes:x[k+1]≈(I+AT)x[k]+TBu[k]{\displaystyle \mathbf {x} [k+1]\approx (\mathbf {I} +\mathbf {A} T)\mathbf {x} [k]+T\mathbf {Bu} [k]} This is known as theEuler method, also called the forward Euler method.
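As a concrete sketch of the exact zero-order-hold result above, the following computes Ad and Bd in one step with the augmented-matrix property and cross-checks them against the closed forms Ad = e^{AT} and Bd = A⁻¹(e^{AT} − I)B. The system matrices are illustrative assumptions (a damped oscillator with nonsingular A), not taken from the text; NumPy and SciPy are assumed available.

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, T):
    """Exact ZOH discretization via expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm(M * T)
    return Md[:n, :n], Md[:n, n:]

# Illustrative system: damped oscillator x'' = -x - 0.5 x' + u (A nonsingular).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
T = 0.1
Ad, Bd = discretize_zoh(A, B, T)

# Cross-check against the closed-form expressions from the derivation.
assert np.allclose(Ad, expm(A * T))
assert np.allclose(Bd, np.linalg.inv(A) @ (Ad - np.eye(2)) @ B)
```

The augmented-matrix route is preferred in practice because it also works when A is singular, where the A⁻¹ closed form is unavailable.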
Other possible approximations areeAT≈(I−AT)−1{\displaystyle e^{\mathbf {A} T}\approx (\mathbf {I} -\mathbf {A} T)^{-1}}, otherwise known as the backward Euler method, andeAT≈(I+12AT)(I−12AT)−1{\displaystyle e^{\mathbf {A} T}\approx (\mathbf {I} +{\tfrac {1}{2}}\mathbf {A} T)(\mathbf {I} -{\tfrac {1}{2}}\mathbf {A} T)^{-1}}, which is known as thebilinear transform, or Tustin transform. Each of these approximations has different stability properties. The bilinear transform preserves the instability of the continuous-time system. Instatisticsand machine learning,discretizationrefers to the process of converting continuous features or variables to discretized or nominal features. This can be useful when creating probability mass functions. Ingeneralized functionstheory,discretizationarises as a particular case of theConvolution Theoremontempered distributions whereIII{\displaystyle \operatorname {III} }is theDirac comb,⋅III{\displaystyle \cdot \operatorname {III} }is discretization,∗III{\displaystyle *\operatorname {III} }isperiodization,f{\displaystyle f}is a rapidly decreasing tempered distribution (e.g. aDirac delta functionδ{\displaystyle \delta }or any othercompactly supportedfunction),α{\displaystyle \alpha }is asmooth,slowly growingordinary function(e.g. the function that is constantly1{\displaystyle 1}or any otherband-limitedfunction) andF{\displaystyle {\mathcal {F}}}is the (unitary, ordinary frequency)Fourier transform. Functionsα{\displaystyle \alpha }which are not smooth can be made smooth using amollifierprior to discretization. As an example, discretization of the function that is constantly1{\displaystyle 1}yields thesequence[..,1,1,1,..]{\displaystyle [..,1,1,1,..]}which, interpreted as the coefficients of alinear combinationofDirac delta functions, forms aDirac comb. If additionallytruncationis applied, one obtains finite sequences, e.g.[1,1,1,1]{\displaystyle [1,1,1,1]}. They are discrete in both time and frequency.
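Returning to the approximations of e^{AT} above, their differing stability properties can be illustrated on a scalar system (the pole value and step sizes are illustrative assumptions): forward Euler loses stability once T > 2/|a|, while backward Euler and the bilinear (Tustin) transform keep the stable pole inside the unit circle.

```python
import numpy as np

a = -2.0  # illustrative stable continuous-time pole, xdot = a * x

def step_multipliers(T):
    """One-step growth factors x[k+1]/x[k] for each approximation of e^{aT}."""
    return {
        "exact":    np.exp(a * T),
        "forward":  1 + a * T,                          # forward Euler
        "backward": 1 / (1 - a * T),                    # backward Euler
        "tustin":   (1 + a * T / 2) / (1 - a * T / 2),  # bilinear transform
    }

small = step_multipliers(0.1)  # all three approximations close to exact, all stable
large = step_multipliers(1.5)  # T > 2/|a|: forward Euler is now unstable
print(small)
print(large)
```

For the large step, the forward Euler multiplier is 1 + (−2)(1.5) = −2, with magnitude greater than one, whereas the backward Euler and Tustin multipliers remain inside the unit circle.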
https://en.wikipedia.org/wiki/Discretization
In general, afunction approximationproblem asks us to select afunctionamong awell-defined classthat closely matches ("approximates") atarget functionin a task-specific way.[1]The need for function approximations arises in many branches ofapplied mathematics, andcomputer sciencein particular, such as predicting the growth of microbes inmicrobiology.[2]Function approximations are used where theoretical models are unavailable or hard to compute.[2] One can distinguish two major classes of function approximation problems: First, for known target functionsapproximation theoryis the branch ofnumerical analysisthat investigates how certain known functions (for example,special functions) can be approximated by a specific class of functions (for example,polynomialsorrational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.).[3] Second, the target function, call itg, may be unknown; instead of an explicit formula, only a set of points of the form (x,g(x)) is provided.Depending on the structure of thedomainandcodomainofg, several techniques for approximatinggmay be applicable. For example, ifgis an operation on thereal numbers, techniques ofinterpolation,extrapolation,regression analysis, andcurve fittingcan be used. If thecodomain(range or target set) ofgis a finite set, one is dealing with aclassificationproblem instead.[4] To some extent, the different problems (regression, classification,fitness approximation) have received a unified treatment instatistical learning theory, where they are viewed assupervised learningproblems.
https://en.wikipedia.org/wiki/Function_approximation
Genetic programming(GP) is anevolutionary algorithm, an artificial intelligence technique mimicking natural evolution, which operates on a population of programs. It applies thegenetic operatorsofselection(according to a predefinedfitness measure),mutation, andcrossover. The crossover operation involves swapping specified parts of selected pairs (parents) to produce new and different offspring that become part of the new generation of programs. Some programs not selected for reproduction are copied from the current generation to the new generation. Mutation involves substitution of some random part of a program with some other random part of a program. Then the selection and other operations are recursively applied to the new generation of programs. Typically, members of each new generation are on average more fit than the members of the previous generation, and the best-of-generation program is often better than the best-of-generation programs from previous generations. Termination of the evolution usually occurs when some individual program reaches a predefined proficiency or fitness level. It may and often does happen that a particular run of the algorithm results in premature convergence to some local maximum which is not a globally optimal or even good solution. Multiple runs (dozens to hundreds) are usually necessary to produce a very good result. It may also be necessary to have a large starting population size and variability of the individuals to avoid pathologies. The first record of the proposal to evolve programs is probably that ofAlan Turingin 1950.[1]There was a gap of 25 years before the publication of John Holland's 'Adaptation in Natural and Artificial Systems', which laid out the theoretical and empirical foundations of the science.
In 1981, Richard Forsyth demonstrated the successful evolution of small programs, represented as trees, to perform classification of crime scene evidence for the UK Home Office.[2] Although the idea of evolving programs, initially in the computer languageLisp, was current amongst John Holland's students,[3]it was not until they organised the firstGenetic Algorithms(GA) conference in Pittsburgh that Nichael Cramer[4]published evolved programs in two specially designed languages, which included the first statement of modern "tree-based" Genetic Programming (that is, procedural languages organized in tree-based structures and operated on by suitably defined GA-operators). In 1988,John Koza(also a PhD student of John Holland) patented his invention of a GA for program evolution.[5]This was followed by publication at the International Joint Conference on Artificial Intelligence (IJCAI-89).[6] Koza followed this with 205 publications on “Genetic Programming” (GP), a name coined by David Goldberg, also a PhD student of John Holland.[7]However, it is the series of 4 books by Koza, starting in 1992[8]with accompanying videos,[9]that really established GP. Subsequently, there was an enormous expansion of the number of publications with the Genetic Programming Bibliography, surpassing 10,000 entries.[10]In 2010, Koza[11]listed 77 results where Genetic Programming was human competitive. In 1996, Koza started the annual Genetic Programming conference[12]which was followed in 1998 by the annual EuroGP conference,[13]and the first book[14]in a GP series edited by Koza. 1998 also saw the first GP textbook.[15]GP continued to flourish, leading to the first specialist GP journal[16]and three years later (2003) the annual Genetic Programming Theory and Practice (GPTP) workshop was established by Rick Riolo.[17][18]Genetic Programming papers continue to be published at a diversity of conferences and associated journals.
Today there are nineteen GP books including several for students.[15] Early work that set the stage for current genetic programming research topics and applications is diverse, and includessoftware synthesisand repair, predictive modeling, data mining,[30]financial modeling,[31]soft sensors,[32]design,[33]and image processing.[34]Applications in some areas, such as design, often make use of intermediate representations,[35]such as Fred Gruau's cellular encoding.[36]Industrial uptake has been significant in several areas including finance, the chemical industry, bioinformatics[37][38]and the steel industry.[39] GP evolves computer programs, traditionally represented in memory astree structures.[40]Trees can be easily evaluated in a recursive manner. Every internal node has an operator function and every terminal node has an operand, making mathematical expressions easy to evolve and evaluate. Thus traditionally GP favors the use ofprogramming languagesthat naturally embody tree structures (for example,Lisp; otherfunctional programming languagesare also suitable). Non-tree representations have been suggested and successfully implemented, such aslinear genetic programmingwhich perhaps suits the more traditionalimperative languages.[41][42]The commercial GP softwareDiscipulususes automatic induction of binary machine code ("AIM")[43]to achieve better performance.μGP[44]usesdirected multigraphsto generate programs that fully exploit the syntax of a givenassembly language.Multi expression programmingusesThree-address codefor encoding solutions. Other program representations on which significant research and development have been conducted include programs for stack-based virtual machines,[45][46][47]and sequences of integers that are mapped to arbitrary programming languages via grammars.[48][49]Cartesian genetic programmingis another form of GP, which uses a graph representation instead of the usual tree based representation to encode computer programs. 
Most representations have structurally noneffective code (introns). Such non-coding genes may seem to be useless because they have no effect on the performance of any one individual. However, they alter the probabilities of generating different offspring under the variation operators, and thus alter the individual'svariational properties. Experiments seem to show faster convergence when using program representations that allow such non-coding genes, compared to program representations that do not have any non-coding genes.[50][51]Instantiations may have both trees with introns and those without; the latter are called canonical trees. Special canonical crossover operators are introduced that maintain the canonical structure of parents in their children. Several methods exist for the creation of the initial population.[52] Selection is a process whereby certain individuals are selected from the current generation that would serve as parents for the next generation. The individuals are selected probabilistically such that the better performing individuals have a higher chance of getting selected.[18]The most commonly used selection method in GP istournament selection, although other methods such asfitness proportionate selection, lexicase selection,[53]and others have been demonstrated to perform better for many GP problems. Elitism, which involves seeding the next generation with the best individual (or bestnindividuals) from the current generation, is a technique sometimes employed to avoid regression. In Genetic Programming two fit individuals are chosen from the population to be parents for one or two children. In tree genetic programming, these parents are represented as inverted lisp-like trees, with their root nodes at the top. In subtree crossover in each parent a subtree is randomly chosen. (Highlighted with yellow in the animation.)
In the root donating parent (in the animation on the left) the chosen subtree is removed and replaced with a copy of the randomly chosen subtree from the other parent, to give a new child tree. Sometimes two child crossover is used, in which case the removed subtree (in the animation on the left) is not simply deleted but is copied to a copy of the second parent (here on the right) replacing (in the copy) its randomly chosen subtree. Thus this type of subtree crossover takes two fit trees and generates two child trees. Some individuals selected according to fitness criteria do not participate in crossover, but are copied into the next generation, akin to asexual reproduction in the natural world. They may be further subject to mutation. There are many types of mutation in genetic programming. They start from a fit syntactically correct parent and aim to randomly create a syntactically correct child. In the animation a subtree is randomly chosen (highlighted by yellow). It is removed and replaced by a randomly generated subtree. Other mutation operators select a leaf (external node) of the tree and replace it with a randomly chosen leaf. Another mutation is to select at random a function (internal node) and replace it with another function with the same arity (number of inputs). Hoist mutation randomly chooses a subtree and replaces it with a subtree within itself. Thus hoist mutation is guaranteed to make the child smaller. Leaf and same-arity function replacement ensure the child is the same size as the parent, whereas subtree mutation (in the animation) may, depending upon the function and terminal sets, have a bias to either increase or decrease the tree size. Other subtree-based mutations try to carefully control the size of the replacement subtree and thus the size of the child tree. Similarly there are many types of linear genetic programming mutation, each of which tries to ensure the mutated child is still syntactically correct.
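The subtree crossover and subtree mutation operators described above can be sketched on a toy tree representation. Everything here (the nested-list encoding, function and terminal sets, and helper names) is a hypothetical illustration, not any particular GP system:

```python
import random

FUNCS = {"add": 2, "mul": 2}   # internal nodes: operator name -> arity
TERMS = ["x", "1"]             # terminal (leaf) nodes

def random_tree(depth):
    """Grow a random syntactically correct tree of at most the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(FUNCS))
    return [op] + [random_tree(depth - 1) for _ in range(FUNCS[op])]

def node_paths(tree, path=()):
    """Yield the index path of every node (root = empty path)."""
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from node_paths(child, path + (i,))

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace_subtree(tree, path, sub):
    """Return a copy of tree with the subtree at path replaced by sub."""
    if not path:
        return sub
    out = list(tree)
    out[path[0]] = replace_subtree(out[path[0]], path[1:], sub)
    return out

def subtree_crossover(root_parent, donor_parent):
    """Swap a random subtree of root_parent for a random subtree of donor_parent."""
    pa = random.choice(list(node_paths(root_parent)))
    pb = random.choice(list(node_paths(donor_parent)))
    return replace_subtree(root_parent, pa, get_subtree(donor_parent, pb))

def subtree_mutation(parent, depth=2):
    """Replace a random subtree with a freshly generated random subtree."""
    p = random.choice(list(node_paths(parent)))
    return replace_subtree(parent, p, random_tree(depth))

random.seed(1)
mother, father = random_tree(3), random_tree(3)
child = subtree_crossover(mother, father)
mutant = subtree_mutation(mother)
```

Because both operators only ever splice in subtrees that are themselves well-formed, every child is syntactically correct by construction, which is the property the text emphasises.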
GP has been successfully used as anautomatic programmingtool, a machine learning tool and an automatic problem-solving engine.[18]GP is especially useful in the domains where the exact form of the solution is not known in advance or an approximate solution is acceptable (possibly because finding the exact solution is very difficult). Some of the applications of GP are curve fitting, data modeling,symbolic regression,feature selection, classification, etc. John R. Koza mentions 76 instances where Genetic Programming has been able to produce results that are competitive with human-produced results (called Human-competitive results).[54]Since 2004, the annual Genetic and Evolutionary Computation Conference (GECCO) holds Human Competitive Awards (called Humies) competition,[55]where cash awards are presented to human-competitive results produced by any form of genetic and evolutionary computation. GP has won many awards in this competition over the years. Meta-genetic programming is the proposedmeta-learningtechnique of evolving a genetic programming system using genetic programming itself. It suggests that chromosomes, crossover, and mutation were themselves evolved; therefore, like their real-life counterparts, they should be allowed to change on their own rather than being determined by a human programmer. Meta-GP was formally proposed byJürgen Schmidhuberin 1987.[56]Doug Lenat'sEuriskois an earlier effort that may be the same technique. It is a recursive but terminating algorithm, allowing it to avoid infinite recursion. In the "autoconstructive evolution" approach to meta-genetic programming, the methods for the production and variation of offspring are encoded within the evolving programs themselves, and programs are executed to produce new programs to be added to the population.[46][57] Critics of this idea often say this approach is overly broad in scope.
However, it might be possible to constrain the fitness criterion onto a general class of results, and so obtain an evolved GP that would more efficiently produce results for sub-classes. This might take the form of a meta evolved GP for producing human walking algorithms which is then used to evolve human running, jumping, etc. The fitness criterion applied to the meta GP would simply be one of efficiency.
https://en.wikipedia.org/wiki/Genetic_programming
Inmathematicsand computing, theLevenberg–Marquardt algorithm(LMAor justLM), also known as thedamped least-squares(DLS) method, is used to solvenon-linear least squaresproblems. These minimization problems arise especially inleast squarescurve fitting. The LMA interpolates between theGauss–Newton algorithm(GNA) and the method ofgradient descent. The LMA is morerobustthan the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slower than the GNA. LMA can also be viewed asGauss–Newtonusing atrust regionapproach. The algorithm was first published in 1944 byKenneth Levenberg,[1]while working at theFrankford Army Arsenal. It was rediscovered in 1963 byDonald Marquardt,[2]who worked as astatisticianatDuPont, and independently by Girard,[3]Wynne[4]and Morrison.[5] The LMA is used in many software applications for solving generic curve-fitting problems. By using the Gauss–Newton algorithm it often converges faster than first-order methods.[6]However, like other iterative optimization algorithms, the LMA finds only alocal minimum, which is not necessarily theglobal minimum. The primary application of the Levenberg–Marquardt algorithm is in the least-squares curve fitting problem: given a set ofm{\displaystyle m}empirical pairs(xi,yi){\displaystyle \left(x_{i},y_{i}\right)}of independent and dependent variables, find the parameters⁠β{\displaystyle {\boldsymbol {\beta }}}⁠of the model curvef(x,β){\displaystyle f\left(x,{\boldsymbol {\beta }}\right)}so that the sum of the squares of the deviationsS(β){\displaystyle S\left({\boldsymbol {\beta }}\right)}is minimized: Like other numeric minimization algorithms, the Levenberg–Marquardt algorithm is aniterativeprocedure. To start a minimization, the user has to provide an initial guess for the parameter vector⁠β{\displaystyle {\boldsymbol {\beta }}}⁠. 
In cases with only one minimum, an uninformed standard guess likeβT=(1,1,…,1){\displaystyle {\boldsymbol {\beta }}^{\text{T}}={\begin{pmatrix}1,\ 1,\ \dots ,\ 1\end{pmatrix}}}will work fine; in cases withmultiple minima, the algorithm converges to the global minimum only if the initial guess is already somewhat close to the final solution. In each iteration step, the parameter vector⁠β{\displaystyle {\boldsymbol {\beta }}}⁠is replaced by a new estimate⁠β+δ{\displaystyle {\boldsymbol {\beta }}+{\boldsymbol {\delta }}}⁠. To determine⁠δ{\displaystyle {\boldsymbol {\delta }}}⁠, the functionf(xi,β+δ){\displaystyle f\left(x_{i},{\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)}is approximated by itslinearization: where is thegradient(row-vector in this case) of⁠f{\displaystyle f}⁠with respect to⁠β{\displaystyle {\boldsymbol {\beta }}}⁠. The sumS(β){\displaystyle S\left({\boldsymbol {\beta }}\right)}of square deviations has its minimum at a zerogradientwith respect to⁠β{\displaystyle {\boldsymbol {\beta }}}⁠. The above first-order approximation off(xi,β+δ){\displaystyle f\left(x_{i},{\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)}gives or in vector notation, Taking the derivative of this approximation ofS(β+δ){\displaystyle S\left({\boldsymbol {\beta }}+{\boldsymbol {\delta }}\right)}with respect to⁠δ{\displaystyle {\boldsymbol {\delta }}}⁠and setting the result to zero gives whereJ{\displaystyle \mathbf {J} }is theJacobian matrix, whose⁠i{\displaystyle i}⁠-th row equalsJi{\displaystyle \mathbf {J} _{i}}, and wheref(β){\displaystyle \mathbf {f} \left({\boldsymbol {\beta }}\right)}andy{\displaystyle \mathbf {y} }are vectors with⁠i{\displaystyle i}⁠-th componentf(xi,β){\displaystyle f\left(x_{i},{\boldsymbol {\beta }}\right)}andyi{\displaystyle y_{i}}respectively. The above expression obtained for⁠β{\displaystyle {\boldsymbol {\beta }}}⁠comes under the Gauss–Newton method. 
The Jacobian matrix as defined above is not (in general) a square matrix, but a rectangular matrix of sizem×n{\displaystyle m\times n}, wheren{\displaystyle n}is the number of parameters (size of the vectorβ{\displaystyle {\boldsymbol {\beta }}}). The matrix multiplication(JTJ){\displaystyle \left(\mathbf {J} ^{\mathrm {T} }\mathbf {J} \right)}yields the requiredn×n{\displaystyle n\times n}square matrix and the matrix-vector product on the right hand side yields a vector of sizen{\displaystyle n}. The result is a set ofn{\displaystyle n}linear equations, which can be solved for⁠δ{\displaystyle {\boldsymbol {\delta }}}⁠. Levenberg's contribution is to replace this equation by a "damped version": where⁠I{\displaystyle \mathbf {I} }⁠is the identity matrix, giving as the increment⁠δ{\displaystyle {\boldsymbol {\delta }}}⁠to the estimated parameter vector⁠β{\displaystyle {\boldsymbol {\beta }}}⁠. The (non-negative) damping factor⁠λ{\displaystyle \lambda }⁠is adjusted at each iteration. If reduction of⁠S{\displaystyle S}⁠is rapid, a smaller value can be used, bringing the algorithm closer to theGauss–Newton algorithm, whereas if an iteration gives insufficient reduction in the residual,⁠λ{\displaystyle \lambda }⁠can be increased, giving a step closer to the gradient-descent direction. Note that thegradientof⁠S{\displaystyle S}⁠with respect to⁠β{\displaystyle {\boldsymbol {\beta }}}⁠equals−2(JT[y−f(β)])T{\displaystyle -2\left(\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]\right)^{\mathrm {T} }}. Therefore, for large values of⁠λ{\displaystyle \lambda }⁠, the step will be taken approximately in the direction opposite to the gradient. 
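A compact sketch of the damped iteration just described, solving (JᵀJ + λI)δ = Jᵀ[y − f(β)] at each step and adjusting λ up or down depending on whether the step reduced the residual. The model, synthetic data, and the simple λ schedule are illustrative assumptions; a production implementation would also include stopping tests and the diagonal scaling discussed below.

```python
import numpy as np

def levenberg_marquardt(f, jac, x, y, beta0, lam=1e-3, nu=2.0, iters=100):
    """Minimize S(beta) = sum (y - f(x, beta))^2 with Levenberg's damped steps."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(iters):
        r = y - f(x, beta)                 # residual vector, length m
        J = jac(x, beta)                   # m x n Jacobian of f w.r.t. beta
        # Damped normal equations: (J^T J + lam * I) delta = J^T (y - f(beta))
        delta = np.linalg.solve(J.T @ J + lam * np.eye(len(beta)), J.T @ r)
        if np.sum((y - f(x, beta + delta)) ** 2) < np.sum(r ** 2):
            beta, lam = beta + delta, lam / nu   # accept step, damp less (toward GN)
        else:
            lam *= nu                            # reject step, damp more (toward gradient)
    return beta

# Illustrative problem: fit y = b0 * exp(-b1 * x) to noiseless synthetic data.
model = lambda x, b: b[0] * np.exp(-b[1] * x)
jacobian = lambda x, b: np.column_stack(
    [np.exp(-b[1] * x), -b[0] * x * np.exp(-b[1] * x)]
)
x = np.linspace(0.0, 4.0, 30)
y = model(x, np.array([2.0, 0.7]))
beta = levenberg_marquardt(model, jacobian, x, y, beta0=[1.0, 1.0])
```

With small λ the accepted steps are essentially Gauss–Newton steps; with large λ they approach scaled gradient-descent steps, matching the interpolation described in the text.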
If either the length of the calculated step⁠δ{\displaystyle {\boldsymbol {\delta }}}⁠or the reduction of sum of squares from the latest parameter vector⁠β+δ{\displaystyle {\boldsymbol {\beta }}+{\boldsymbol {\delta }}}⁠falls below predefined limits, iteration stops, and the last parameter vector⁠β{\displaystyle {\boldsymbol {\beta }}}⁠is considered to be the solution. When the damping factor⁠λ{\displaystyle \lambda }⁠is large relative to‖JTJ‖{\displaystyle \|\mathbf {J} ^{\mathrm {T} }\mathbf {J} \|}, invertingJTJ+λI{\displaystyle \mathbf {J} ^{\mathrm {T} }\mathbf {J} +\lambda \mathbf {I} }is not necessary, as the update is well-approximated by the small gradient stepλ−1JT[y−f(β)]{\displaystyle \lambda ^{-1}\mathbf {J} ^{\mathrm {T} }\left[\mathbf {y} -\mathbf {f} \left({\boldsymbol {\beta }}\right)\right]}. To make the solution scale invariant, Marquardt's algorithm solved a modified problem with each component of the gradient scaled according to the curvature. This provides larger movement along the directions where the gradient is smaller, which avoids slow convergence in the direction of small gradient. Fletcher in his 1971 paperA modified Marquardt subroutine for non-linear least squaressimplified the form, replacing the identity matrix⁠I{\displaystyle \mathbf {I} }⁠with the diagonal matrix consisting of the diagonal elements of⁠JTJ{\displaystyle \mathbf {J} ^{\text{T}}\mathbf {J} }⁠: A similar damping factor appears inTikhonov regularization, which is used to solve linearill-posed problems, as well as inridge regression, anestimationtechnique instatistics. Various more or less heuristic arguments have been put forward for the best choice for the damping parameter⁠λ{\displaystyle \lambda }⁠.
Theoretical arguments exist showing why some of these choices guarantee local convergence of the algorithm; however, these choices can make the global convergence of the algorithm suffer from the undesirable properties ofsteepest descent, in particular, very slow convergence close to the optimum. The absolute values of any choice depend on how well-scaled the initial problem is. Marquardt recommended starting with a value⁠λ0{\displaystyle \lambda _{0}}⁠and a factor⁠ν>1{\displaystyle \nu >1}⁠. Initially settingλ=λ0{\displaystyle \lambda =\lambda _{0}}and computing the residual sum of squaresS(β){\displaystyle S\left({\boldsymbol {\beta }}\right)}after one step from the starting point with the damping factor ofλ=λ0{\displaystyle \lambda =\lambda _{0}}and secondly with⁠λ0/ν{\displaystyle \lambda _{0}/\nu }⁠. If both of these are worse than the initial point, then the damping is increased by successive multiplication by⁠ν{\displaystyle \nu }⁠until a better point is found with a new damping factor of⁠λ0νk{\displaystyle \lambda _{0}\nu ^{k}}⁠for some⁠k{\displaystyle k}⁠. If use of the damping factor⁠λ/ν{\displaystyle \lambda /\nu }⁠results in a reduction in squared residual, then this is taken as the new value of⁠λ{\displaystyle \lambda }⁠(and the new optimum location is taken as that obtained with this damping factor) and the process continues; if using⁠λ/ν{\displaystyle \lambda /\nu }⁠resulted in a worse residual, but using⁠λ{\displaystyle \lambda }⁠resulted in a better residual, then⁠λ{\displaystyle \lambda }⁠is left unchanged and the new optimum is taken as the value obtained with⁠λ{\displaystyle \lambda }⁠as damping factor. An effective strategy for the control of the damping parameter, calleddelayed gratification, consists of increasing the parameter by a small amount for each uphill step, and decreasing by a large amount for each downhill step. 
The idea behind this strategy is to avoid moving downhill too fast in the beginning of optimization, therefore restricting the steps available in future iterations and therefore slowing down convergence.[7]An increase by a factor of 2 and a decrease by a factor of 3 has been shown to be effective in most cases, while for large problems more extreme values can work better, with an increase by a factor of 1.5 and a decrease by a factor of 5.[8] When interpreting the Levenberg–Marquardt step as the velocityvk{\displaystyle {\boldsymbol {v}}_{k}}along ageodesicpath in the parameter space, it is possible to improve the method by adding a second order term that accounts for the accelerationak{\displaystyle {\boldsymbol {a}}_{k}}along the geodesic whereak{\displaystyle {\boldsymbol {a}}_{k}}is the solution of Since this geodesic acceleration term depends only on thedirectional derivativefvv=∑μνvμvν∂μ∂νf(x){\displaystyle f_{vv}=\sum _{\mu \nu }v_{\mu }v_{\nu }\partial _{\mu }\partial _{\nu }f({\boldsymbol {x}})}along the direction of the velocityv{\displaystyle {\boldsymbol {v}}}, it does not require computing the full second order derivative matrix, requiring only a small overhead in terms of computing cost.[9]Since the second order derivative can be a fairly complex expression, it can be convenient to replace it with afinite differenceapproximation wheref(x){\displaystyle f({\boldsymbol {x}})}andJ{\displaystyle {\boldsymbol {J}}}have already been computed by the algorithm, therefore requiring only one additional function evaluation to computef(x+hδ){\displaystyle f({\boldsymbol {x}}+h{\boldsymbol {\delta }})}. 
The choice of the finite difference steph{\displaystyle h}can affect the stability of the algorithm, and a value of around 0.1 is usually reasonable in general.[8] Since the acceleration may point in the opposite direction to the velocity, to prevent it from stalling the method in case the damping is too small, an additional criterion on the acceleration is added in order to accept a step, requiring that whereα{\displaystyle \alpha }is usually fixed to a value less than 1, with smaller values for harder problems.[8] The addition of a geodesic acceleration term can allow a significant increase in convergence speed and it is especially useful when the algorithm is moving through narrow canyons in the landscape of the objective function, where the allowed steps are smaller and the higher accuracy due to the second order term gives significant improvements.[8] In this example we try to fit the functiony=acos⁡(bX)+bsin⁡(aX){\displaystyle y=a\cos \left(bX\right)+b\sin \left(aX\right)}using the Levenberg–Marquardt algorithm implemented inGNU Octaveas theleasqrfunction. The graphs show progressively better fitting for the parametersa=100{\displaystyle a=100},b=102{\displaystyle b=102}used in the initial curve. Only when the parameters in the last graph are chosen closest to the original, are the curves fitting exactly. This equation is an example of very sensitive initial conditions for the Levenberg–Marquardt algorithm. One reason for this sensitivity is the existence of multiple minima — the functioncos⁡(βx){\displaystyle \cos \left(\beta x\right)}has minima at parameter valueβ^{\displaystyle {\hat {\beta }}}andβ^+2nπ{\displaystyle {\hat {\beta }}+2n\pi }.
https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm
In mathematics,linear interpolationis a method ofcurve fittingusinglinear polynomialsto construct new data points within the range of a discrete set of known data points. If the two known points are given by the coordinates(x0,y0){\displaystyle (x_{0},y_{0})}and(x1,y1){\displaystyle (x_{1},y_{1})},thelinear interpolantis the straight line between these points. For a valuex{\displaystyle x}in the interval(x0,x1){\displaystyle (x_{0},x_{1})},the valuey{\displaystyle y}along the straight line is given from the equation of slopesy−y0x−x0=y1−y0x1−x0,{\displaystyle {\frac {y-y_{0}}{x-x_{0}}}={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}},}which can be derived geometrically from the figure on the right. It is a special case ofpolynomial interpolationwithn=1{\displaystyle n=1}. Solving this equation fory{\displaystyle y}, which is the unknown value atx{\displaystyle x}, givesy=y0+(x−x0)y1−y0x1−x0=y0(x1−x0)x1−x0+y1(x−x0)−y0(x−x0)x1−x0=y1x−y1x0−y0x+y0x0+y0x1−y0x0x1−x0=y0(x1−x)+y1(x−x0)x1−x0,{\displaystyle {\begin{aligned}y&=y_{0}+(x-x_{0}){\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}\\&={\frac {y_{0}(x_{1}-x_{0})}{x_{1}-x_{0}}}+{\frac {y_{1}(x-x_{0})-y_{0}(x-x_{0})}{x_{1}-x_{0}}}\\&={\frac {y_{1}x-y_{1}x_{0}-y_{0}x+y_{0}x_{0}+y_{0}x_{1}-y_{0}x_{0}}{x_{1}-x_{0}}}\\&={\frac {y_{0}(x_{1}-x)+y_{1}(x-x_{0})}{x_{1}-x_{0}}},\end{aligned}}}which is the formula for linear interpolation in the interval(x0,x1){\displaystyle (x_{0},x_{1})}.Outside this interval, the formula is identical tolinear extrapolation. This formula can also be understood as a weighted average. The weights are inversely related to the distance from the end points to the unknown point; the closer point has more influence than the farther point. Thus, the weights are1−(x−x0)/(x1−x0){\textstyle 1-(x-x_{0})/(x_{1}-x_{0})}and1−(x1−x)/(x1−x0){\textstyle 1-(x_{1}-x)/(x_{1}-x_{0})}, which are normalized distances between the unknown point and each of the end points. 
Because these sum to 1,

\begin{aligned}y&=y_{0}\left(1-{\frac {x-x_{0}}{x_{1}-x_{0}}}\right)+y_{1}\left(1-{\frac {x_{1}-x}{x_{1}-x_{0}}}\right)\\&=y_{0}\left(1-{\frac {x-x_{0}}{x_{1}-x_{0}}}\right)+y_{1}\left({\frac {x-x_{0}}{x_{1}-x_{0}}}\right)\\&=y_{0}\left({\frac {x_{1}-x}{x_{1}-x_{0}}}\right)+y_{1}\left({\frac {x-x_{0}}{x_{1}-x_{0}}}\right)\end{aligned}

yielding the formula for linear interpolation given above. Linear interpolation on a set of data points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n) is defined as piecewise linear, resulting from the concatenation of linear segment interpolants between each pair of data points. This results in a continuous curve, with a discontinuous derivative (in general), thus of differentiability class C^0.

Linear interpolation is often used to approximate a value of some function f using two known values of that function at other points. The error of this approximation is defined as

R_T = f(x) - p(x),

where p denotes the linear interpolation polynomial defined above:

p(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}(x - x_0).

It can be proven using Rolle's theorem that if f has a continuous second derivative, then the error is bounded by

|R_T| \leq \frac{(x_1 - x_0)^2}{8}\max_{x_0 \leq x \leq x_1}\left|f''(x)\right|.

That is, the approximation between two points on a given function gets worse with the second derivative of the function that is approximated. This is intuitively correct as well: the "curvier" the function is, the worse the approximations made with simple linear interpolation become. Linear interpolation has been used since antiquity for filling the gaps in tables.
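The two-point formula and the error bound above can be sketched directly in Python; the function and interval used in the check (sin on [0, 0.5], where |f''| ≤ 1) are illustrative choices, not from the article:

```python
import math

def lerp(x0, y0, x1, y1, x):
    """Weighted-average form of linear interpolation from the text."""
    t = (x - x0) / (x1 - x0)
    return y0 * (1 - t) + y1 * t

# Check the Rolle-theorem error bound |R_T| <= (x1-x0)^2/8 * max|f''|
# for f = sin on [0, 0.5], where |f''| = |sin| <= 1.
x0, x1 = 0.0, 0.5
bound = (x1 - x0) ** 2 / 8 * 1.0
worst = max(
    abs(math.sin(x) - lerp(x0, math.sin(x0), x1, math.sin(x1), x))
    for x in (x0 + i * (x1 - x0) / 1000 for i in range(1001))
)
assert worst <= bound
```

The observed worst-case error over the interval sits comfortably inside the theoretical bound, as the inequality guarantees.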
Suppose that one has a table listing the population of some country in 1970, 1980, 1990 and 2000, and that one wanted to estimate the population in 1994. Linear interpolation is an easy way to do this. It is believed that it was used in the Seleucid Empire (last three centuries BC) and by the Greek astronomer and mathematician Hipparchus (second century BC). A description of linear interpolation can be found in the ancient Chinese mathematical text called The Nine Chapters on the Mathematical Art (九章算術),[1] dated from 200 BC to AD 100, and the Almagest (2nd century AD) by Ptolemy.

The basic operation of linear interpolation between two values is commonly used in computer graphics. In that field's jargon it is sometimes called a lerp (from linear interpolation). The term can be used as a verb or noun for the operation, e.g. "Bresenham's algorithm lerps incrementally between the two endpoints of the line." Lerp operations are built into the hardware of all modern computer graphics processors. They are often used as building blocks for more complex operations: for example, a bilinear interpolation can be accomplished in three lerps. Because this operation is cheap, it is also a good way to implement accurate lookup tables with quick lookup for smooth functions without having too many table entries. If a C^0 function is insufficient, for example if the process that has produced the data points is known to be smoother than C^0, it is common to replace linear interpolation with spline interpolation or, in some cases, polynomial interpolation.

Linear interpolation as described here is for data points in one spatial dimension. For two spatial dimensions, the extension of linear interpolation is called bilinear interpolation, and in three dimensions, trilinear interpolation. Notice, though, that these interpolants are no longer linear functions of the spatial coordinates, but rather products of linear functions; this is illustrated by the clearly non-linear example of bilinear interpolation in the figure below.
Other extensions of linear interpolation can be applied to other kinds of mesh, such as triangular and tetrahedral meshes, including Bézier surfaces. These may indeed be defined as higher-dimensional piecewise linear functions (see second figure below). Many libraries and shading languages have a "lerp" helper function (in GLSL known instead as mix), returning an interpolation between two inputs (v0, v1) for a parameter t in the closed unit interval [0, 1]. Signatures of lerp functions are variously implemented in both the forms (v0, v1, t) and (t, v0, v1). This lerp function is commonly used for alpha blending (the parameter t is the "alpha value"), and the formula may be extended to blend multiple components of a vector (such as spatial x, y, z axes or r, g, b colour components) in parallel.
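A lerp helper in the (v0, v1, t) form, with alpha blending and three-lerp bilinear interpolation built on top of it as described above, might look like this in Python (the colour values are illustrative):

```python
def lerp(v0, v1, t):
    # (v0, v1, t) signature; t in the closed unit interval [0, 1]
    return v0 + t * (v1 - v0)

def blend(c0, c1, alpha):
    """Alpha-blend two RGB colours component-wise (t is the alpha value)."""
    return tuple(lerp(a, b, alpha) for a, b in zip(c0, c1))

def bilerp(c00, c10, c01, c11, tx, ty):
    """Bilinear interpolation accomplished in three lerps:
    two along the x axis, then one along y."""
    return lerp(lerp(c00, c10, tx), lerp(c01, c11, tx), ty)
```

blend((0, 0, 0), (255, 255, 255), 0.5) gives mid-grey, and bilerp reduces to an ordinary lerp when the two rows of corner values coincide.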
https://en.wikipedia.org/wiki/Linear_interpolation
Amathematical modelis anabstractdescription of a concretesystemusingmathematicalconcepts andlanguage. The process of developing a mathematicalmodelis termedmathematical modeling. Mathematical models are used inapplied mathematicsand in thenatural sciences(such asphysics,biology,earth science,chemistry) andengineeringdisciplines (such ascomputer science,electrical engineering), as well as in non-physical systems such as thesocial sciences[1](such aseconomics,psychology,sociology,political science). It can also be taught as a subject in its own right.[2] The use of mathematical models to solve problems in business or military operations is a large part of the field ofoperations research. Mathematical models are also used inmusic,[3]linguistics,[4]andphilosophy(for example, intensively inanalytic philosophy). A model may help to explain a system and to study the effects of different components, and to make predictions about behavior. Mathematical models can take many forms, includingdynamical systems,statistical models,differential equations, orgame theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may includelogical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In thephysical sciences, a traditional mathematical model contains most of the following elements: Mathematical models are of different types: Inbusinessandengineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. 
The system relating inputs to outputs depends on other variables too:decision variables,state variables,exogenousvariables, andrandom variables. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known asparametersorconstants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables). Objectivesandconstraintsof the system and its users can be represented asfunctionsof the output variables or state variables. Theobjective functionswill depend on the perspective of the model's user. Depending on the context, an objective function is also known as anindex of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example,economistsoften applylinear algebrawhen usinginput–output models. Complicated mathematical models that have many variables may be consolidated by use ofvectorswhere one symbol represents several variables. Mathematical modeling problems are often classified intoblack boxorwhite boxmodels, according to how mucha prioriinformation on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take. Usually, it is preferable to use as much a priori information as possible to make the model more accurate. 
Therefore, white-box models are usually considered easier: if the information has been used correctly, then the model will behave correctly. Often the a priori information comes in forms of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function, but we are still left with several unknown parameters: how rapidly does the medicine amount decay, and what is the initial amount of medicine in blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.

In black-box models, one tries to estimate both the functional form of relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information we would try to use functions as general as possible to cover all different models. A frequently used approach for black-box models is neural networks, which usually do not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification,[8] can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque. Sometimes it is useful to incorporate subjective information into a mathematical model.
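The medicine example above is a grey-box model: the exponential form C(t) = C0·exp(−k t) is known, but C0 and k must be estimated from data. One simple estimation sketch takes logarithms so that ordinary least squares applies; the measurement values below are fabricated purely for illustration:

```python
import math

# Hypothetical grey-box model: drug amount decays exponentially,
# C(t) = C0 * exp(-k * t), with C0 and k the unknown parameters.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
cs = [10.0, 6.07, 3.68, 2.23, 1.35]   # illustrative measurements (roughly k = 0.5)

# Taking logs makes the model linear: ln C = ln C0 - k*t, so ordinary
# least squares on (t, ln C) recovers both parameters.
n = len(ts)
logs = [math.log(c) for c in cs]
tbar = sum(ts) / n
lbar = sum(logs) / n
slope = (sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs))
         / sum((t - tbar) ** 2 for t in ts))
k_hat = -slope                       # estimated decay rate
c0_hat = math.exp(lbar - slope * tbar)   # estimated initial amount
```

Once k_hat and c0_hat are estimated, the otherwise white-box structure can be used for prediction, which is exactly the parameter-estimation step the text describes.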
This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data. An example of when such an approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown, so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.

In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.[9]

For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model.
Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting, which means that a model fitted too closely to the data loses its ability to generalize to new events that were not observed before.

Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation.[10] In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting.[citation needed]

A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation. Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters.
An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to ascross-validationin statistics. Defining ametricto measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and someeconomic models, aloss functionplays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit ofstatistical modelsthan models involvingdifferential equations. Tools fromnonparametric statisticscan sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form. Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is calledinterpolation, and the same question for events or data points outside the observed data is calledextrapolation. As an example of the typical limitations of the scope of a model, in evaluating Newtonianclassical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics. 
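The training/verification split described above can be sketched in a few lines; the synthetic line-fitting task and noise level are illustrative assumptions:

```python
import math
import random

random.seed(1)
# Synthetic data y = 3x + noise; purely illustrative of a train/verification split.
data = [(x / 10, 3.0 * (x / 10) + random.gauss(0.0, 0.1)) for x in range(40)]
random.shuffle(data)
train, verify = data[:30], data[30:]     # two disjoint subsets

# Fit a slope-only model y = m*x on the training data (least squares).
m = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# An accurate model should also match the held-out verification data,
# which played no role in estimating m.
rmse = math.sqrt(sum((y - m * x) ** 2 for x, y in verify) / len(verify))
```

A verification RMSE close to the known noise level indicates the fitted parameter generalizes beyond the training subset, which is the point of the split.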
Many types of modeling implicitly involve claims aboutcausality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied. An example of such criticism is the argument that the mathematical models ofoptimal foraging theorydo not offer insight that goes beyond the common-sense conclusions ofevolutionand other basic principles of ecology.[11]It should also be noted that while mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to anymathematical logic, but is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation.[2] Mathematical models are of great importance in the natural sciences, particularly inphysics. Physicaltheoriesare almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed.Newton's lawsaccurately describe many everyday phenomena, but at certain limitstheory of relativityandquantum mechanicsmust be used. It is common to use idealized models in physics to simplify things. Massless ropes, point particles,ideal gasesand theparticle in a boxare among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws,Maxwell's equationsand theSchrödinger equation. 
These laws are a basis for making mathematical models of real situations. Many real situations are very complex and thus are modeled approximately on a computer: a model that is computationally feasible is built from the basic laws, or from approximate models derived from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.

Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean.

Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.

A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types: real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables.
https://en.wikipedia.org/wiki/Mathematical_model
Multi Expression Programming (MEP) is an evolutionary algorithm for generating mathematical functions describing a given set of data. MEP is a Genetic Programming variant encoding multiple solutions in the same chromosome. MEP representation is not specific (multiple representations have been tested). In the simplest variant, MEP chromosomes are linear strings of instructions. This representation was inspired by three-address code. MEP's strength lies in its ability to encode multiple solutions of a problem in the same chromosome. In this way, one can explore larger zones of the search space. For most problems this advantage comes with no running-time penalty compared with genetic programming variants encoding a single solution in a chromosome.[1][2][3]

MEP chromosomes are arrays of instructions represented in three-address-code format. Each instruction contains a variable, a constant, or a function. If the instruction is a function, then the arguments (given as instruction addresses) are also present. Here is a simple MEP chromosome (labels on the left side are not a part of the chromosome): When the chromosome is evaluated it is unclear which instruction will provide the output of the program. In many cases, a set of programs is obtained, some of them being completely unrelated (they do not have common instructions). For the above chromosome, here is the list of possible programs obtained during decoding: Each instruction is evaluated as a possible output of the program. The fitness (or error) is computed in a standard manner. For instance, in the case of symbolic regression, the fitness is the sum of differences (in absolute value) between the expected output (called target) and the actual output. Which expression will represent the chromosome? Which one will give the fitness of the chromosome? In MEP, the best of them (the one with the lowest error) will represent the chromosome.
This is different from other GP techniques: in linear genetic programming the last instruction gives the output; in Cartesian genetic programming the gene providing the output is evolved like all other genes. Note that, for many problems, this evaluation has the same complexity as in the case of encoding a single solution in each chromosome. Thus, there is no penalty in running time compared to other techniques.

MEPX is cross-platform (Windows, macOS, and Linux Ubuntu) free software for the automatic generation of computer programs. It can be used for data analysis, particularly for solving symbolic regression, statistical classification and time-series problems. Libmep is a free and open-source library implementing the Multi Expression Programming technique; it is written in C++. hmep is a newer open-source library implementing the Multi Expression Programming technique in the Haskell programming language.
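The decoding scheme described above (every gene is a candidate output, and the gene with the lowest error represents the chromosome) can be sketched in Python. The chromosome, operators, and target function below are hypothetical illustrations, not taken from the article:

```python
import operator

# A hypothetical MEP chromosome in three-address form; labels are list indices.
# Each gene is either the terminal 'x' or (op, addr1, addr2), with addresses
# referring only to earlier genes, as in MEP.
chromosome = [
    "x",            # 0: x
    ("add", 0, 0),  # 1: x + x
    ("mul", 1, 0),  # 2: 2x * x
    ("sub", 2, 0),  # 3: 2x^2 - x
]
OPS = {"add": operator.add, "mul": operator.mul, "sub": operator.sub}

def evaluate(chrom, x):
    """Evaluate every gene; each value is a candidate program output."""
    vals = []
    for gene in chrom:
        if gene == "x":
            vals.append(x)
        else:
            op, i, j = gene
            vals.append(OPS[op](vals[i], vals[j]))  # addresses point backwards
    return vals

def fitness(chrom, samples):
    """Per-gene sum of absolute errors; the best gene represents the chromosome."""
    errs = [0.0] * len(chrom)
    for x, target in samples:
        for g, v in enumerate(evaluate(chrom, x)):
            errs[g] += abs(v - target)
    best = min(range(len(chrom)), key=lambda g: errs[g])
    return best, errs[best]

# Symbolic-regression target f(x) = 2x^2 - x: gene 3 encodes it exactly.
samples = [(x, 2 * x * x - x) for x in range(-3, 4)]
best_gene, best_err = fitness(chromosome, samples)
```

Here gene 3 reproduces the target exactly, so it is selected to represent the chromosome with zero error, while the other genes remain available as alternative candidate programs.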
https://en.wikipedia.org/wiki/Multi_expression_programming
In finance, an interest rate swap (IRS) is an interest rate derivative (IRD). It involves the exchange of interest rate payments between two parties. In particular it is a "linear" IRD and one of the most liquid, benchmark products. It has associations with forward rate agreements (FRAs) and with zero coupon swaps (ZCSs). In its December 2014 statistics release, the Bank for International Settlements reported that interest rate swaps were the largest component of the global OTC derivative market, representing 60%, with the notional amount outstanding in OTC interest rate swaps of $381 trillion, and the gross market value of $14 trillion.[1] Interest rate swaps can be traded as an index through the FTSE MTIRS Index.

In effect, an interest rate swap (IRS) is a derivative contract, agreed between two counterparties, which specifies the nature of an exchange of payments benchmarked against an interest rate index. The most common IRS is a fixed-for-floating swap, whereby one party will make payments to the other based on an initially agreed fixed rate of interest, to receive back payments based on a floating interest rate index. Each of these series of payments is termed a "leg", so a typical IRS has both a fixed and a floating leg. The floating index is commonly an interbank offered rate (IBOR) of specific tenor in the appropriate currency of the IRS, for example LIBOR in GBP, EURIBOR in EUR, or STIBOR in SEK. To completely determine any IRS a number of parameters must be specified for each leg:[2] Each currency has its own standard market conventions regarding the frequency of payments, the day count conventions and the end-of-month rule.[3]

As OTC instruments, interest rate swaps (IRSs) can be customised in a number of ways and can be structured to meet the specific needs of the counterparties.
For example: payment dates could be irregular, the notional of the swap could beamortizedover time, reset dates (or fixing dates) of the floating rate could be irregular, mandatory break clauses may be inserted into the contract, etc. A common form of customisation is often present innew issue swapswhere the fixed leg cashflows are designed to replicate those cashflows received as the coupons on a purchased bond. Theinterbank market, however, only has a few standardised types. There is no consensus on the scope of naming convention for different types of IRS. Even a wide description of IRS contracts only includes those whose legs are denominated in the same currency. It is generally accepted that swaps of similar nature whose legs are denominated in different currencies are calledcross currency basis swaps. Swaps which are determined on a floating rate index in one currency but whose payments are denominated in another currency are calledQuantos. In traditional interest rate derivative terminology an IRS is afixed leg versus floating legderivative contract referencing anIBORas the floating leg. If the floating leg is redefined to be anovernight index, such as EONIA, SONIA, FFOIS, etc. then this type of swap is generally referred to as anovernight indexed swap (OIS). Some financial literature may classify OISs as a subset of IRSs and other literature may recognise a distinct separation. Fixed leg versus fixed legswaps are rare, and generally constitute a form of specialised loan agreement. Float leg versus float legswaps are much more common. These are typically termed (single currency)basis swaps(SBSs). The legs on SBSs will necessarily be different interest indexes, such as 1M LIBOR, 3M LIBOR, 6M LIBOR, SONIA, etc. The pricing of these swaps requires aspreadoften quoted in basis points to be added to one of the floating legs in order to satisfy value equivalence. Interest rate swaps are used to hedge against or speculate on changes in interest rates. 
They are also used to manage cashflows by converting floating to fixed interest payments, or vice versa. Interest rate swaps are also used speculatively by hedge funds or other investors who expect a change in interest rates or the relationships between them. Traditionally, fixed income investors who expected rates to fall would purchase cash bonds, whose value increased as rates fell. Today, investors with a similar view could enter a floating-for-fixed interest rate swap; as rates fall, investors would pay a lower floating rate in exchange for the same fixed rate. Interest rate swaps are also popular for thearbitrageopportunities they provide. Varying levels ofcreditworthinessmeans that there is often a positivequality spread differentialthat allows both parties to benefit from an interest rate swap. The interest rate swap market in USD is closely linked to theEurodollarfutures market which trades among others at theChicago Mercantile Exchange. IRSs are bespoke financial products whose customisation can include changes to payment dates, notional changes (such as those in amortised IRSs), accrual period adjustment and calculation convention changes (such as aday count conventionof 30/360E to ACT/360 or ACT/365). A vanilla IRS is the term used for standardised IRSs. Typically these will have none of the above customisations, and instead exhibit constant notional throughout, implied payment and accrual dates and benchmark calculation conventions by currency.[2]A vanilla IRS is also characterised by one leg being "fixed" and the second leg "floating" referencing an-IBORindex. The netpresent value(PV) of a vanilla IRS can be computed by determining the PV of each fixed leg and floating leg separately and summing. For pricing a mid-market IRS the underlying principle is that the two legs must have the same value initially; see furtherunder Rational pricing. 
Calculating the fixed leg requires discounting all of the known cashflows by an appropriate discount factor:

P_{\text{fixed}} = NR\sum_{i=1}^{n_1} d_i v_i,

where N is the notional, R is the fixed rate, n_1 is the number of payments, d_i is the decimalised day count fraction of the accrual in the i'th period, and v_i is the discount factor associated with the payment date of the i'th period.

Calculating the floating leg is a similar process, replacing the fixed rate with forecast index rates:

P_{\text{float}} = N\sum_{j=1}^{n_2} r_j d_j v_j,

where n_2 is the number of payments of the floating leg and r_j are the forecast -IBOR index rates of the appropriate currency.

The PV of the IRS from the perspective of receiving the fixed leg is then:

P_{\text{IRS}} = P_{\text{fixed}} - P_{\text{float}}.

Historically, IRSs were valued using discount factors derived from the same curve used to forecast the -IBOR rates (i.e. the erstwhile reference rates). This has been called "self-discounted". Some early literature described some incoherence introduced by that approach, and multiple banks were using different techniques to reduce it. It became more apparent with the 2008 financial crisis that the approach was not appropriate, and alignment towards discount factors associated with physical collateral of the IRSs was needed. Post crisis, to accommodate credit risk, the now-standard pricing approach is the multi-curve framework, applied where forecast discount factors and -IBOR (see below re MRRs) exhibit disparity. Note that the economic pricing principle is unchanged: leg values are still identical at initiation. See Financial economics § Derivative pricing for further context. Here, overnight index swap (OIS) rates are typically used to derive discount factors, since that index is the standard inclusion on Credit Support Annexes (CSAs) to determine the rate of interest payable on collateral for IRS contracts.
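The two-leg valuation just described can be sketched numerically; the notional, fixed rate, accrual fractions, forecast rates, and discount factors below are entirely illustrative assumptions:

```python
# Receive-fixed vanilla IRS valuation sketch: discount each leg's cashflows
# and net them. All inputs are made-up illustrative numbers.
N = 1_000_000.0                      # notional
R = 0.03                             # fixed rate
d = [0.5] * 4                        # semi-annual accrual fractions, two years
v = [0.99, 0.975, 0.96, 0.945]       # discount factors at the payment dates
r = [0.025, 0.028, 0.030, 0.032]     # forecast floating index rates

P_fixed = N * R * sum(di * vi for di, vi in zip(d, v))
P_float = N * sum(rj * dj * vj for rj, dj, vj in zip(r, d, v))
P_irs = P_fixed - P_float            # PV from the fixed receiver's perspective
```

With these numbers the fixed leg is worth slightly more than the floating leg, so the receive-fixed party holds a small positive PV.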
As regards the rates forecast, since the basis spread between LIBOR rates of different maturities widened during the crisis, forecast curves are generally constructed for each LIBOR tenor used in floating rate derivative legs.[4]

Regarding the curve build, see:[5][6][2] Under the old framework a single self-discounted curve was "bootstrapped" for each tenor; i.e. solved such that it exactly returned the observed prices of selected instruments (IRSs, with FRAs in the short end), with the build proceeding sequentially, date-wise, through these instruments. Under the new framework, the various curves are best fitted to observed market prices as a "curve set": one curve for discounting, and one for each IBOR-tenor "forecast curve"; the build is then based on quotes for IRSs and OISs, with FRAs included as before. Here, since the observed average overnight rate plus a spread is swapped for[7] the -IBOR rate over the same period (the most liquid tenor in that market), and the -IBOR IRSs are in turn discounted on the OIS curve, the problem entails a nonlinear system, where all curve points are solved at once, and specialized iterative methods are usually employed, very often a modification of Newton's method. The forecast curves for other tenors can be solved in a "second stage", bootstrap-style, with discounting on the now-solved OIS curve.

Various approaches to solving curves are possible. Modern methods[8][9][10] tend to employ global optimizers with complete flexibility in the parameters that are solved relative to the calibrating instruments used to tune them. (Maturities corresponding to input instruments are referred to as "pillar points".) These optimizers will seek to minimize some objective function, here matching the observed instrument values, and this assumes that some interpolation mode has been configured for the curves.
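The sequential ("bootstrapped") build under the old framework can be illustrated with a toy single-curve example. The setup below is a simplifying assumption (annual payments, unit accrual fractions, made-up par swap rates): each discount factor is solved, date by date, so that the corresponding input swap reprices exactly to par.

```python
# Toy single-curve bootstrap from par swap rates (the "old framework" idea).
# Par condition with unit annual accruals: S * sum_{i<=n} df_i = 1 - df_n,
# so df_n = (1 - S * sum_{i<n} df_i) / (1 + S).
par = {1: 0.020, 2: 0.024, 3: 0.027}   # maturity (years) -> illustrative par rate

dfs = []                               # discount factors for years 1..n
for n, S in sorted(par.items()):
    known = sum(dfs)                   # factors already solved for earlier dates
    dfs.append((1.0 - S * known) / (1.0 + S))
```

By construction each input swap reprices exactly, which is the "exactly returned the observed prices" property of the bootstrap; fitting a whole curve set jointly, as in the modern approach, replaces this date-wise recursion with a global solve.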
A CSA could allow for collateral, and hence interest payments on that collateral, in any currency.[11] To accommodate this, banks include in their curve set a USD discount curve to be used for discounting local-IBOR trades which have USD collateral; this curve is sometimes called the (Dollar) "basis curve". It is built by solving for observed (mark-to-market) cross-currency swap rates, where the local -IBOR is swapped for USD LIBOR with USD collateral as underpin. The latest, pre-solved USD LIBOR curve is therefore an (external) element of the curve set, and the basis curve is then solved in the "third stage". Each currency's curve set will thus include a local-currency discount curve and its USD discounting basis curve. As required, a third-currency discount curve, i.e. for local trades collateralized in a currency other than local or USD (or any other combination), can then be constructed from the local-currency basis curve and third-currency basis curve, combined via an arbitrage relationship known here as "FX Forward Invariance".[12]

Starting in 2021, LIBOR is being phased out, with replacements including other "market reference rates" (MRRs) such as SOFR and TONAR. (These MRRs are based on secured overnight funding transactions.) With the coexistence of "old" and "new" rates in the market, multi-curve and OIS curve "management" is necessary, with changes required to incorporate new discounting and compounding conventions, while the underlying logic is unaffected; see.[13][14][15] The complexities of modern curve sets mean that there may not be discount factors available for a specific -IBOR index curve. These curves are known as "forecast only" curves and contain only the information of a forecast -IBOR index rate for any future date.
Some designs constructed with a discount based methodology mean forecast -IBOR index rates are implied by the discount factors inherent to that curve:

r_j = \frac{1}{d_j} \left( \frac{x_{j-1}}{x_j} - 1 \right),

where x_{j-1} and x_j are the (forecast-curve) discount factors at the start and end of the j-th accrual period.

To price the mid-market or par rate, S, of an IRS (defined by the value of fixed rate R that gives a net PV of zero), the above formula is re-arranged to:

S = \frac{\sum_{j=1}^{n_2} r_j d_j v_j}{\sum_{i=1}^{n_1} d_i v_i}.

In the event old methodologies are applied, the discount factors v_k can be replaced with the self-discounted values x_k and the above reduces to:

S = \frac{1 - x_{n_2}}{\sum_{i=1}^{n_1} d_i x_i}.

In both cases, the PV of a general swap can be expressed exactly with the following intuitive formula: P_IRS = N (R − S) A, where A is the so-called annuity factor A = \sum_{i=1}^{n_1} d_i v_i (or A = \sum_{i=1}^{n_1} d_i x_i for self-discounting). This shows that the PV of an IRS is roughly linear in the swap par rate (though small non-linearities arise from the co-dependency of the swap rate with the discount factors in the annuity sum).

Interest rate swaps expose traders and institutions to various categories of financial risk:[2] predominantly market risk (specifically interest rate risk) and credit risk. Reputation risks also exist. The mis-selling of swaps, over-exposure of municipalities to derivative contracts, and IBOR manipulation are examples of high-profile cases where trading interest rate swaps has led to a loss of reputation and fines by regulators. As regards market risk, during the swap's life both the discount factors and the forward rates change, and thus, per the above valuation techniques, the PV of a swap will deviate from its initial value. The swap will therefore at times be an asset to one party and a liability to the other. (The way these changes in value are reported is the subject of IAS 39 for jurisdictions following IFRS, and FAS 133 for U.S. GAAP.)
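The par-rate and annuity identities can be verified numerically. In this sketch the flat discount curve, the payment schedule, and the forecast rates r_j are all assumed illustrative inputs; the point is that pricing the two legs directly and pricing via P = N (R − S) A give the same PV.

```python
from math import exp

# Illustrative flat 3% discount curve and assumed forecast index rates r_j.
def df(t):
    return exp(-0.03 * t)

times = [0.5, 1.0, 1.5, 2.0]          # payment dates, d_i = 0.5 throughout
d = 0.5
fwd = [0.030, 0.031, 0.032, 0.033]    # assumed forecast index rates r_j

A = sum(d * df(t) for t in times)                       # annuity factor
float_leg = sum(r * d * df(t) for r, t in zip(fwd, times))  # per unit notional
S = float_leg / A                                       # par rate (N cancels)

# PV of a swap struck at fixed rate R, from the receive-fixed perspective:
N, R = 1_000_000, 0.035
pv_direct = N * R * A - N * float_leg    # fixed leg minus floating leg
pv_annuity = N * (R - S) * A             # the annuity-factor formula
print(round(pv_direct - pv_annuity, 8))  # identical by construction
```

Since S is defined as the fixed rate that zeroes the PV, the two expressions agree by construction; the example simply makes that algebra concrete.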
In market terminology, the first-order link of swap value to interest rates is referred to as delta risk; gamma risk reflects how delta risk changes as market interest rates fluctuate (see Greeks (finance)). Other specific types of market risk that interest rate swaps have exposure to are basis risks, where various IBOR tenor indexes can deviate from one another, and reset risks, where the publication of specific tenor IBOR indexes is subject to daily fluctuation.

Uncollateralised interest rate swaps, those executed bilaterally without a CSA in place, expose the trading counterparties to funding risks and counterparty credit risks.[16] Funding risks arise because the value of the swap might deviate to become so negative that it is unaffordable and cannot be funded; credit risks arise because the respective counterparty, for whom the value of the swap is positive, will be concerned about the opposing counterparty defaulting on its obligations. Collateralised interest rate swaps, on the other hand, expose the users to collateral risks: here, depending upon the terms of the CSA, the type of posted collateral that is permitted might become more or less expensive due to other extraneous market movements. Credit and funding risks still exist for collateralised trades, but to a much lesser extent. Regardless, due to regulations set out in the Basel III regulatory frameworks, trading interest rate derivatives commands a capital usage. The consequence is that, dependent upon their specific nature, interest rate swaps may be capital intensive, with that capital usage itself sensitive to market movements.
Capital risks are thus another concern for users, and banks typically calculate a credit valuation adjustment (CVA), as well as XVA for other risks, which then incorporate these risks into the instrument value.[17]

Debt security traders daily mark to market their swap positions so as to "visualize their inventory" (see valuation control). As required, they will attempt to hedge, both to protect value and to reduce volatility. Since the cash flows of component swaps offset each other, traders will implement this hedging on a net basis for entire books.[18] Here, the trader would typically hedge her interest rate risk through offsetting Treasuries (either spot or futures). For credit risks, which will not typically offset, traders estimate:[16] for each counterparty, the probability of default using models such as Jarrow–Turnbull and KMV, or by stripping these from CDS prices; and then, for each trade, the potential future exposure and expected exposure to the counterparty. Credit derivatives will then be purchased[16] as appropriate. Often, a specialized XVA desk centrally monitors and manages overall CVA and XVA exposure and capital, and will then implement this hedge.[19] The other risks must be managed systematically, sometimes involving group treasury. These processes all rely on well-designed numerical risk models: both to measure and forecast the (overall) change in value, and to suggest reliable offsetting benchmark trades which may be used to mitigate risks. Note, however (and re P&L attribution), that the multi-curve framework adds complexity[7] in that (individual) positions are (potentially) affected by numerous instruments not obviously related.

The ICE Swap Rate[20] replaced the rate formerly known as ISDAFIX in 2015. Swap rate benchmark rates are calculated using eligible prices and volumes for specified interest rate derivative products. The prices are provided by trading venues in accordance with a "Waterfall" methodology.
The first level of the Waterfall ("Level 1") uses eligible, executable prices and volumes provided by regulated, electronic trading venues. Multiple randomised snapshots of market data are taken during a short window before calculation. This enhances the benchmark's robustness and reliability by protecting against attempted manipulation and temporary aberrations in the underlying market.[citation needed]

The market-making of IRSs is an involved process involving multiple tasks: curve construction with reference to interbank markets, individual derivative contract pricing, and risk management of credit, cash and capital. The cross-disciplines required include quantitative analysis and mathematical expertise, a disciplined and organized approach towards profits and losses, and coherent psychological and subjective assessment of financial market information and price-taker analysis. The time-sensitive nature of markets also creates a pressurized environment. Many tools and techniques have been designed to improve the efficiency and consistency of market-making.[2]
https://en.wikipedia.org/wiki/Multi-curve_framework
In finance, bootstrapping is a method for constructing a (zero-coupon) fixed-income yield curve from the prices of a set of coupon-bearing products, e.g. bonds and swaps.[1] A bootstrapped curve, correspondingly, is one where the prices of the instruments used as an input to the curve will be an exact output when these same instruments are valued using this curve. Here, the term structure of spot returns is recovered from the bond yields by solving for them recursively, by forward substitution: this iterative process is called the bootstrap method. The usefulness of bootstrapping is that, using only a few carefully selected zero-coupon products, it becomes possible to derive par swap rates (forward and spot) for all maturities given the solved curve.

Given the 0.5-year spot rate, Z1 = 4%, and the 1-year spot rate, Z2 = 4.3% (we can get these rates from T-bills, which are zero-coupon), and the par rate on a 1.5-year semi-annual coupon bond, R3 = 4.5%, we then use these rates to calculate the 1.5-year spot rate, Z3, by requiring that the bond price at par:

100 = \frac{2.25}{(1 + Z_1/2)} + \frac{2.25}{(1 + Z_2/2)^2} + \frac{102.25}{(1 + Z_3/2)^3}.

Solving gives Z3 ≈ 4.51%.

As stated above, the selection of the input securities is important, given that there is a general lack of data points in a yield curve (there are only a fixed number of products in the market). More importantly, because the input securities have varying coupon frequencies, the selection of the input securities is critical. It makes sense to construct a curve of zero-coupon instruments from which one can price any yield, whether forward or spot, without the need of more external information.[2] Note that certain assumptions (e.g. the interpolation method) will always be required.
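The worked example above can be reproduced in a few lines: discount the two known coupons with Z1 and Z2, then solve the par-bond equation for the final discount factor and hence Z3. The numbers are exactly those of the example; only the semiannual-compounding convention is spelled out as code.

```python
# Inputs from the example: spot rates and the 1.5y par rate, semiannual basis.
Z1, Z2, R3 = 0.04, 0.043, 0.045
coupon = 100 * R3 / 2          # 2.25 per half-year on a par bond

# Par condition: 100 = c/(1+Z1/2) + c/(1+Z2/2)^2 + (100+c)/(1+Z3/2)^3.
# The first two terms are known; the last pins down Z3.
pv_known = coupon / (1 + Z1 / 2) + coupon / (1 + Z2 / 2) ** 2
Z3 = 2 * (((100 + coupon) / (100 - pv_known)) ** (1 / 3) - 1)
print(round(100 * Z3, 2))  # → 4.51
```

Rearranging the par equation for the unknown (1 + Z3/2)³ is what makes this a single-step solve; in a longer bootstrap the same rearrangement is applied instrument by instrument, in maturity order.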
The general methodology is as follows: (1) define the set of yielding products; these will generally be coupon-bearing bonds; (2) derive discount factors for the corresponding terms; these are the internal rates of return of the bonds; (3) "bootstrap" the zero-coupon curve, successively calibrating this curve such that it returns the prices of the inputs. A generically stated algorithm for the third step is as follows; for more detail see Yield curve § Construction of the full yield curve from market data. For each input instrument, proceeding through these in terms of increasing maturity: solve for the discount factor at the instrument's maturity such that the instrument reprices exactly, given the discount factors already solved at all earlier dates.

When solved as described here, the curve will be arbitrage-free in the sense that it is exactly consistent with the selected prices; see Rational pricing § Fixed income securities and Bond valuation § Arbitrage-free pricing approach. Note that some analysts will instead construct the curve such that it results in a best fit "through" the input prices, as opposed to an exact match, using a method such as Nelson–Siegel. Regardless of approach, however, there is a requirement that the curve be arbitrage-free in a second sense: that all forward rates are positive. More sophisticated methods for the curve construction, whether targeting an exact or a best fit, will additionally target curve "smoothness" as an output,[3][4] and the choice of interpolation method here, for rates not directly specified, will then be important.

A more detailed description of the forward substitution is as follows. For each stage of the iterative process, we are interested in deriving the n-year zero-coupon bond yield, also known as the internal rate of return of the zero-coupon bond. As there are no intermediate payments on this bond (all the interest and principal is realized at the end of n years), it is sometimes called the n-year spot rate. To derive this rate we observe that the theoretical price of a bond can be calculated as the present value of the cash flows to be received in the future.
In the case of swap rates, we want the par bond rate (swaps are priced at par when created) and therefore we require that the present value of the future cash flows and principal be equal to 100%:

1 = S_n \sum_{i=1}^{n} d_i x_i + x_n, therefore x_n = \frac{1 - S_n \sum_{i=1}^{n-1} d_i x_i}{1 + S_n d_n}

(this formula is precisely forward substitution), where S_n is the par swap rate of maturity n, d_i are the day count fractions, and x_i are the discount factors.

After the financial crisis of 2007–2008, swap valuation is typically under a "multi-curve and collateral" framework; the above, by contrast, describes the "self discounting" approach. Under the new framework, when valuing a Libor-based swap: (i) the forecasted cashflows are derived from the Libor curve; (ii) however, these cashflows are discounted at the OIS-based curve's overnight rate, as opposed to at Libor. The result is that, in practice, curves are built as a "set" and not individually, where, correspondingly: (i) "forecast curves" are constructed for each floating-leg Libor tenor; and (ii) discounting is on a single, common OIS curve which must simultaneously be constructed. The reason for the change is that, post-crisis, the overnight rate is the rate paid on the collateral (variation margin) posted by counterparties on most CSAs. The forward values of the overnight rate can be read from the overnight index swap curve. "OIS-discounting" is now standard, and is sometimes referred to as "CSA-discounting". See Financial economics § Derivative pricing for context; Interest rate swap § Valuation and pricing for the math.
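The forward-substitution formula can be run directly on a few par swap quotes. The quotes below are illustrative assumptions with annual payments (each d_i = 1); the sanity check at the end confirms the defining property of a bootstrapped curve, namely that every input instrument reprices exactly.

```python
# Forward substitution for discount factors from annual par swap rates:
# x_n = (1 - S_n * sum_{i<n} x_i) / (1 + S_n), with d_i = 1 throughout.
par = {1: 0.020, 2: 0.025, 3: 0.028}   # illustrative par swap quotes by maturity

dfs = []
for n in sorted(par):
    S = par[n]
    dfs.append((1.0 - S * sum(dfs)) / (1.0 + S))

# Sanity check: each swap reprices to par on the bootstrapped curve.
for n, S in par.items():
    pv_fixed = S * sum(dfs[:n])       # fixed leg per unit notional
    pv_float = 1.0 - dfs[n - 1]       # floating leg = 1 - x_n
    assert abs(pv_fixed - pv_float) < 1e-12

print([round(v, 6) for v in dfs])
```

Note that each step uses only discount factors already solved, which is why the build must proceed in order of increasing maturity.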
https://en.wikipedia.org/wiki/Bootstrapping_(finance)
In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations). In nonlinear regression, a statistical model of the form

y ~ f(x, β)

relates a vector of independent variables, x, and its associated observed dependent variables, y. The function f is nonlinear in the components of the vector of parameters β, but otherwise arbitrary. For example, the Michaelis–Menten model for enzyme kinetics has two parameters and one independent variable, related by f:[a]

f(x, β) = \frac{β_1 x}{β_2 + x}

This function, which is a rectangular hyperbola, is nonlinear because it cannot be expressed as a linear combination of the two βs. Systematic error may be present in the independent variables but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an errors-in-variables model, also outside this scope. Other examples of nonlinear functions include exponential functions, logarithmic functions, trigonometric functions, power functions, the Gaussian function, and Lorentz distributions. Some functions, such as the exponential or logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution. See § Linearization § Transformation, below, for more details. In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression. Usually numerical optimization algorithms are applied to determine the best-fitting parameters.
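As a sketch of such an iterative fit, the following hand-rolled Gauss–Newton loop fits the Michaelis–Menten model to noise-free synthetic data (the data, starting values, and iteration count are all assumptions for illustration; real fits use noisy data and library optimizers).

```python
# Gauss-Newton fit of f(x) = b1*x/(b2+x) on noise-free synthetic data.
xs = [0.5, 1.0, 2.0, 4.0, 8.0]
b1_true, b2_true = 3.0, 2.0
ys = [b1_true * x / (b2_true + x) for x in xs]

b1, b2 = 2.0, 2.0                      # starting values (an assumption)
for _ in range(50):
    # residuals and Jacobian rows: df/db1 = x/(b2+x), df/db2 = -b1*x/(b2+x)^2
    r  = [y - b1 * x / (b2 + x) for x, y in zip(xs, ys)]
    J1 = [x / (b2 + x) for x in xs]
    J2 = [-b1 * x / (b2 + x) ** 2 for x in xs]
    # Solve the 2x2 normal equations (J^T J) delta = J^T r by hand.
    a11 = sum(j * j for j in J1)
    a12 = sum(p * q for p, q in zip(J1, J2))
    a22 = sum(j * j for j in J2)
    g1 = sum(j * e for j, e in zip(J1, r))
    g2 = sum(j * e for j, e in zip(J2, r))
    det = a11 * a22 - a12 * a12
    b1 += (a22 * g1 - a12 * g2) / det
    b2 += (a11 * g2 - a12 * g1) / det

print(round(b1, 6), round(b2, 6))  # recovers (3.0, 2.0)
```

Because the data are noise-free, the residuals shrink to zero and the iteration recovers the generating parameters; with noisy data it would instead converge to the least-squares estimate, and safeguards such as step damping become important.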
Again in contrast to linear regression, there may be many local minima of the function to be optimized, and even the global minimum may produce a biased estimate. In practice, estimated values of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares. For details concerning nonlinear data modeling see least squares and non-linear least squares. The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-order Taylor series:

f(x_i, β) ≈ f(x_i, 0) + \sum_j J_{ij} β_j

where J_{ij} = ∂f(x_i, β)/∂β_j are Jacobian matrix elements. It follows from this that the least squares estimators are given by

\hat{β} ≈ (J^T J)^{-1} J^T y;

compare generalized least squares with covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but using J in place of X in the formulas. When the function f(x_i, β) itself is not known analytically, but needs to be linearly approximated from n + 1, or more, known values (where n is the number of estimators), the best estimator is obtained directly from the Linear Template Fit as[1]

\hat{β} = ((Y \tilde{M})^T Ω^{-1} Y \tilde{M})^{-1} (Y \tilde{M})^T Ω^{-1} (d − Y \bar{m})

(see also linear least squares). The linear approximation introduces bias into the statistics. Therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model.
The best-fit curve is often assumed to be that which minimizes the sum of squared residuals. This is the ordinary least squares (OLS) approach. However, in cases where the dependent variable does not have constant variance, or there are some outliers, a sum of weighted squared residuals may be minimized; see weighted least squares. Each weight should ideally be equal to the reciprocal of the variance of the observation, or the reciprocal of the dependent variable to some power in the outlier case,[2] but weights may be recomputed on each iteration, in an iteratively weighted least squares algorithm.

Some nonlinear regression problems can be moved to a linear domain by a suitable transformation of the model formulation. For example, consider the nonlinear regression problem

y = a e^{bx} U

with parameters a and b and with multiplicative error term U. If we take the logarithm of both sides, this becomes

ln(y) = ln(a) + bx + u,

where u = ln(U), suggesting estimation of the unknown parameters by a linear regression of ln(y) on x, a computation that does not require iterative optimization. However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results. These may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in a Gaussian fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations. For Michaelis–Menten kinetics, the linear Lineweaver–Burk plot

\frac{1}{v} = \frac{1}{V_max} + \frac{K_m}{V_max [S]}

of 1/v against 1/[S] has been much used.
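The log-transformation trick above can be shown end-to-end. The sketch uses noise-free data (U = 1, an assumption that makes the recovery exact) and closed-form simple linear regression of ln(y) on x, so no iterative optimizer is needed.

```python
from math import log, exp

# Linearization of y = a*exp(b*x): take logs, then ordinary least squares.
a_true, b_true = 2.0, 0.5
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [a_true * exp(b_true * x) for x in xs]   # noise-free data (U = 1)

lys = [log(y) for y in ys]                    # ln(y) = ln(a) + b*x
n = len(xs)
xbar = sum(xs) / n
ybar = sum(lys) / n
b_hat = (sum((x - xbar) * (ly - ybar) for x, ly in zip(xs, lys))
         / sum((x - xbar) ** 2 for x in xs))  # OLS slope
a_hat = exp(ybar - b_hat * xbar)              # intercept, transformed back

print(round(a_hat, 6), round(b_hat, 6))  # recovers (2.0, 0.5)
```

With a real multiplicative error term the same code gives an estimate rather than exact recovery, and, as the text cautions, the transformation reweights the observations, so the fit is not equivalent to least squares on the original scale.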
However, since it is very sensitive to data error and is strongly biased toward fitting the data in a particular range of the independent variable, [S], its use is strongly discouraged. For error distributions that belong to the exponential family, a link function may be used to transform the parameters under the generalized linear model framework.

The independent or explanatory variable (say X) can be split up into classes or segments, and linear regression can be performed per segment. Segmented regression with confidence analysis may yield the result that the dependent or response variable (say Y) behaves differently in the various segments.[3] The figure shows that the soil salinity (X) initially exerts no influence on the crop yield (Y) of mustard, until a critical or threshold value (breakpoint), after which the yield is affected negatively.[4]
https://en.wikipedia.org/wiki/Nonlinear_regression
In mathematics, a plane curve is a curve in a plane that may be a Euclidean plane, an affine plane or a projective plane. The most frequently studied cases are smooth plane curves (including piecewise smooth plane curves), and algebraic plane curves. Plane curves also include the Jordan curves (curves that enclose a region of the plane but need not be smooth) and the graphs of continuous functions.

A plane curve can often be represented in Cartesian coordinates by an implicit equation of the form f(x, y) = 0 for some specific function f. If this equation can be solved explicitly for y or x, that is, rewritten as y = g(x) or x = h(y) for specific function g or h, then this provides an alternative, explicit, form of the representation. A plane curve can also often be represented in Cartesian coordinates by a parametric equation of the form (x, y) = (x(t), y(t)) for specific functions x(t) and y(t). Plane curves can sometimes also be represented in alternative coordinate systems, such as polar coordinates, which express the location of each point in terms of an angle and a distance from the origin.

A smooth plane curve is a curve in a real Euclidean plane R² and is a one-dimensional smooth manifold. This means that a smooth plane curve is a plane curve which "locally looks like a line", in the sense that near every point, it may be mapped to a line by a smooth function. Equivalently, a smooth plane curve can be given locally by an equation f(x, y) = 0, where f: R² → R is a smooth function, and the partial derivatives ∂f/∂x and ∂f/∂y are never both 0 at a point of the curve.
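The relationship between the implicit and parametric representations can be checked numerically for the standard example of the unit circle: points of the parametric form (cos t, sin t) satisfy the implicit equation x² + y² − 1 = 0, and the smoothness condition on the partial derivatives holds everywhere on the curve. (The sample of eight parameter values is an arbitrary choice for illustration.)

```python
from math import cos, sin, pi

# The unit circle as an implicit curve f(x, y) = x^2 + y^2 - 1 = 0
# and as a parametric curve (x(t), y(t)) = (cos t, sin t).
def f(x, y):
    return x * x + y * y - 1.0

for k in range(8):
    t = 2 * pi * k / 8
    x, y = cos(t), sin(t)
    # every parametric point lies on the implicit curve
    assert abs(f(x, y)) < 1e-12
    # smoothness: the partials (2x, 2y) never both vanish on the curve
    assert (2 * x) ** 2 + (2 * y) ** 2 > 0.5

print("parametric points satisfy the implicit equation")
```

The gradient check illustrates why the circle is a smooth plane curve: (∂f/∂x, ∂f/∂y) = (2x, 2y) vanishes only at the origin, which is not on the curve.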
An algebraic plane curve is a curve in an affine or projective plane given by one polynomial equation f(x, y) = 0 (or F(x, y, z) = 0, where F is a homogeneous polynomial, in the projective case). Algebraic curves have been studied extensively since the 18th century. Every algebraic plane curve has a degree, the degree of the defining equation, which is equal, in the case of an algebraically closed field, to the number of intersections of the curve with a line in general position. For example, the circle given by the equation x² + y² = 1 has degree 2. The non-singular plane algebraic curves of degree 2 are called conic sections, and their projective completions are all isomorphic to the projective completion of the circle x² + y² = 1 (that is, the projective curve of equation x² + y² − z² = 0). The plane curves of degree 3 are called cubic plane curves and, if they are non-singular, elliptic curves. Those of degree 4 are called quartic plane curves. Numerous examples of plane curves are shown in Gallery of curves and listed at List of curves. The algebraic curves of degree 1 or 2 are shown here (an algebraic curve of degree less than 3 is always contained in a plane):
https://en.wikipedia.org/wiki/Plane_curve
Probability distribution fitting, or simply distribution fitting, is the fitting of a probability distribution to a series of data concerning the repeated measurement of a variable phenomenon. The aim of distribution fitting is to predict the probability, or to forecast the frequency, of occurrence of the magnitude of the phenomenon in a certain interval. There are many probability distributions (see list of probability distributions) of which some can be fitted more closely to the observed frequency of the data than others, depending on the characteristics of the phenomenon and of the distribution. The distribution giving a close fit is supposed to lead to good predictions. In distribution fitting, therefore, one needs to select a distribution that suits the data well. The selection of the appropriate distribution depends on the presence or absence of symmetry of the data set with respect to the central tendency.

Symmetrical distributions: When the data are symmetrically distributed around the mean, while the frequency of occurrence of data farther away from the mean diminishes, one may for example select the normal distribution, the logistic distribution, or the Student's t-distribution. The first two are very similar, while the last, with one degree of freedom, has "heavier tails", meaning that values farther away from the mean occur relatively more often (i.e. the kurtosis is higher). The Cauchy distribution is also symmetric.

Skew distributions to the right: When the larger values tend to be farther away from the mean than the smaller values, one has a skew distribution to the right (i.e. there is positive skewness); one may for example select the log-normal distribution (i.e. the log values of the data are normally distributed), the log-logistic distribution (i.e. the log values of the data follow a logistic distribution), the Gumbel distribution, the exponential distribution, the Pareto distribution, the Weibull distribution, the Burr distribution, or the Fréchet distribution.
The last four distributions are bounded to the left.

Skew distributions to the left: When the smaller values tend to be farther away from the mean than the larger values, one has a skew distribution to the left (i.e. there is negative skewness); one may for example select the square-normal distribution (i.e. the normal distribution applied to the square of the data values),[1] the inverted (mirrored) Gumbel distribution,[1] the Dagum distribution (mirrored Burr distribution), or the Gompertz distribution, which is bounded to the left.

The following techniques of distribution fitting exist:[2] It is customary to transform data logarithmically to fit symmetrical distributions (like the normal and logistic) to data obeying a distribution that is positively skewed (i.e. skew to the right, with mean > mode, and with a right-hand tail that is longer than the left-hand tail); see the lognormal distribution and the loglogistic distribution. A similar effect can be achieved by taking the square root of the data. To fit a symmetrical distribution to data obeying a negatively skewed distribution (i.e. skewed to the left, with mean < mode, and with a right-hand tail that is shorter than the left-hand tail) one could use the squared values of the data to accomplish the fit. More generally, one can raise the data to a power p in order to fit symmetrical distributions to data obeying a distribution of any skewness, whereby p < 1 when the skewness is positive and p > 1 when the skewness is negative. The optimal value of p is to be found by a numerical method. The numerical method may consist of assuming a range of p values, then applying the distribution fitting procedure repeatedly for all the assumed p values, and finally selecting the value of p for which the sum of squares of deviations of calculated probabilities from measured frequencies (chi squared) is minimum, as is done in CumFreq.
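The effect of the logarithmic transformation on skewness can be demonstrated directly. In this sketch the data are constructed (an assumption for illustration) by exponentiating a symmetric set of values, so they are skewed to the right; taking logs restores symmetry exactly, which is the mechanism behind fitting a normal distribution to lognormal data.

```python
from math import exp, log

def skewness(d):
    """Moment coefficient of skewness (population form)."""
    n = len(d)
    m = sum(d) / n
    s2 = sum((v - m) ** 2 for v in d) / n
    s3 = sum((v - m) ** 3 for v in d) / n
    return s3 / s2 ** 1.5

# A symmetric set z gives a right-skewed sample exp(z);
# logging the sample recovers the symmetric set.
zs = [-2, -1, -0.5, 0, 0.5, 1, 2]
xs = [exp(z) for z in zs]

print(round(skewness(xs), 3), round(skewness([log(x) for x in xs]), 3))
```

The first number is clearly positive (right skew) and the second is zero to rounding, matching the text's rule of thumb that a log (or more generally a power p < 1) transform reduces positive skewness.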
The generalization enhances the flexibility of probability distributions and increases their applicability in distribution fitting.[6] The versatility of generalization makes it possible, for example, to fit approximately normally distributed data sets to a large number of different probability distributions,[7] while negatively skewed distributions can be fitted to square-normal and mirrored Gumbel distributions.[8]

Skewed distributions can be inverted (or mirrored) by replacing, in the mathematical expression, the cumulative distribution function (F) by its complement, F' = 1 − F, obtaining the complementary distribution function (also called the survival function), which gives a mirror image. In this manner, a distribution that is skewed to the right is transformed into a distribution that is skewed to the left, and vice versa. The technique of skewness inversion increases the number of probability distributions available for distribution fitting and enlarges the distribution fitting opportunities.

Some probability distributions, like the exponential, do not support negative data values (X). Yet, when negative data are present, such distributions can still be used by replacing X by Y = X − Xm, where Xm is the minimum value of X. This replacement represents a shift of the probability distribution in the positive direction, i.e. to the right, because Xm is negative. After completing the distribution fitting of Y, the corresponding X-values are found from X = Y + Xm, which represents a back-shift of the distribution in the negative direction, i.e. to the left. The technique of distribution shifting augments the chance to find a properly fitting probability distribution.

The option exists to use two different probability distributions, one for the lower data range and one for the higher, as for example with the Laplace distribution. The ranges are separated by a break-point.
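Both tricks above, mirroring and shifting, can be sketched in a few lines. The mirroring here is written as reflecting about an assumed reference point c (so the mirrored CDF is 1 − F(c − y)); the exponential rate, the value of c, and the small data set are all illustrative assumptions.

```python
from math import exp

# Mirroring: if F is the (right-skewed) exponential CDF of X, then
# G(y) = 1 - F(c - y) is the CDF of the mirrored, left-skewed Y = c - X.
lam, c = 1.0, 10.0
F = lambda x: 1 - exp(-lam * x) if x >= 0 else 0.0   # exponential CDF
G = lambda y: 1 - F(c - y)                           # mirrored about c

# Shifting: replace X by Y = X - Xm so a left-bounded distribution applies.
data = [-3.0, -1.0, 0.5, 2.0]
xm = min(data)
shifted = [x - xm for x in data]       # all >= 0, exponential now usable
restored = [y + xm for y in shifted]   # back-shift recovers the data

assert all(s >= 0 for s in shifted)
assert restored == data
print(round(G(c), 6), round(G(c - 1.0), 6))
```

The printed values show G reaching 1 at its right endpoint c, with its mass concentrated toward c, i.e. the mirror image of the exponential's concentration near 0.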
The use of such composite (discontinuous) probability distributions can be opportune when the data of the phenomenon studied were obtained under two different sets of conditions.[6]

Predictions of occurrence based on fitted probability distributions are subject to uncertainty, which arises from the following conditions: An estimate of the uncertainty in the first and second case can be obtained with the binomial probability distribution, using for example the probability of exceedance Pe (i.e. the chance that the event X is larger than a reference value Xr of X) and the probability of non-exceedance Pn (i.e. the chance that the event X is smaller than or equal to the reference value Xr; this is also called cumulative probability). In this case there are only two possibilities: either there is exceedance or there is non-exceedance. This duality is the reason that the binomial distribution is applicable. With the binomial distribution one can obtain a prediction interval. Such an interval also estimates the risk of failure, i.e. the chance that the predicted event still remains outside the confidence interval. The confidence or risk analysis may include the return period T = 1/Pe, as is done in hydrology.

A Bayesian approach can be used for fitting a model P(x|θ) having a prior distribution P(θ) for the parameter θ. When one has samples X that are independently drawn from the underlying distribution, one can derive the so-called posterior distribution P(θ|X). This posterior can be used to update the probability mass function for a new sample x given the observations X; one obtains

P_θ(x|X) := ∫ dθ P(x|θ) P(θ|X).

The variance of the newly obtained probability mass function can also be determined.
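The binomial reasoning above can be made concrete with the classic return-period question: how likely is an event with exceedance probability Pe to occur at least once over n independent years? The numbers (a 100-year event over a 50-year horizon) are illustrative assumptions.

```python
from math import comb

# Exceedance probability Pe corresponds to return period T = 1/Pe.
Pe = 0.01     # a "100-year" event
n = 50        # planning horizon in years

# Binomial: P(zero exceedances in n years) = C(n,0) * Pe^0 * (1-Pe)^n
p_none = comb(n, 0) * Pe ** 0 * (1 - Pe) ** n
p_at_least_one = 1 - p_none
print(round(p_at_least_one, 4))
```

The result, roughly a 40% chance, illustrates the duality described in the text: each year is a single exceedance/non-exceedance trial, so the count of exceedances over the horizon is binomial.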
The variance of a Bayesian probability mass function can be defined as

σ²_{P_θ(x|X)} := ∫ dθ [P(x|θ) − P_θ(x|X)]² P(θ|X).

This expression for the variance can be substantially simplified (assuming independently drawn samples). Defining the "self probability mass function" as

P_θ(x|{X, x}) = ∫ dθ P(x|θ) P(θ|{X, x}),

one obtains for the variance[12]

σ²_{P_θ(x|X)} = P_θ(x|X) [P_θ(x|{X, x}) − P_θ(x|X)].

The expression for the variance involves an additional fit that includes the sample x of interest. By ranking the goodness of fit of various distributions one can get an impression of which distribution is acceptable and which is not. From the cumulative distribution function (CDF) one can derive a histogram and the probability density function (PDF).
https://en.wikipedia.org/wiki/Probability_distribution_fitting
In mathematics, the progressive-iterative approximation method is an iterative method of data fitting with geometric meanings.[1] Given a set of data points to be fitted, the method obtains a series of fitting curves (or surfaces) by iteratively updating the control points, and the limit curve (surface) can interpolate or approximate the given data points.[2] It avoids solving a linear system of equations directly and allows flexibility in adding constraints during the iterative process.[3] Therefore, it has been widely used in geometric design and related fields.[2] The study of iterative methods with geometric meaning can be traced back to the work of scholars such as Dongxu Qi and Carl de Boor in the 1970s.[4][5] In 1975, Qi et al. developed and proved the "profit and loss" algorithm for uniform cubic B-spline curves,[4] and in 1979, de Boor independently proposed this algorithm.[5] In 2004, Hongwei Lin and coauthors proved that non-uniform cubic B-spline curves and surfaces have the "profit and loss" property.[3] Later, in 2005, Lin et al. proved that curves and surfaces with a normalized and totally positive basis all have this property and named it progressive iterative approximation (PIA).[1] In 2007, Maekawa et al. changed the algebraic distance in PIA to geometric distance and named it geometric interpolation (GI).[6] In 2008, Cheng et al.
extended it to subdivision surfaces and named the method progressive interpolation (PI).[7] Since the iteration steps of the PIA, GI, and PI algorithms are similar and all have geometric meanings, they are collectively referred to as geometric iterative methods (GIM).[2] PIA has now been extended to several common curves and surfaces in the geometric design field,[8] including NURBS curves and surfaces,[9] T-spline surfaces,[10] and implicit curves and surfaces.[11] Generally, progressive-iterative approximation (PIA) can be divided into interpolation and approximation schemes.[2] In interpolation algorithms, the number of control points is equal to that of the data points; in approximation algorithms, the number of control points can be less than that of the data points. Specifically, there are some representative iteration methods, such as local-PIA,[12] implicit-PIA,[11] fairing-PIA,[13] and isogeometric least-squares progressive-iterative approximation (IG-LSPIA),[14] that are specialized for solving the isogeometric analysis problem.[15] In interpolation algorithms of PIA,[1][3][9][16] every data point is used as a control point. To facilitate the description of the PIA iteration format for different forms of curves and surfaces, the following formula is uniformly used: P(t)=∑i=1nPiBi(t).{\displaystyle \mathbf {P} (\mathbf {t} )=\sum _{i=1}^{n}\mathbf {P} _{i}B_{i}(\mathbf {t} ).} For example, the basis functions may be taken as the Bézier or B-spline basis; additionally, this can be applied to NURBS curves and surfaces, T-spline surfaces, and triangular Bernstein–Bézier surfaces.[18] Given an ordered data set Qi{\displaystyle \mathbf {Q} _{i}} with parameters ti{\displaystyle t_{i}} satisfying t1<t2<⋯{\displaystyle t_{1}<t_{2}<\cdots } for i=1,2,⋯,n{\displaystyle i=1,2,\cdots ,n}, the initial fitting curve is:[1] P(0)(t)=∑i=1nPi(0)Bi(t){\displaystyle \mathbf {P} ^{(0)}(t)=\sum _{i=1}^{n}\mathbf {P} _{i}^{(0)}B_{i}(t)} where the initial control points of the initial fitting curve Pi(0){\displaystyle \mathbf {P} _{i}^{(0)}} can be randomly selected.
Suppose that the k{\displaystyle k}th fitting curve P(k)(t){\displaystyle \mathbf {P} ^{(k)}(t)} has been generated after the k{\displaystyle k}th iteration. To construct the (k+1){\displaystyle (k+1)}st curve, we first calculate the difference vectors, Δi(k)=Qi−P(k)(ti),i=1,2,⋯,n{\displaystyle \mathbf {\Delta } _{i}^{(k)}=\mathbf {Q} _{i}-\mathbf {P} ^{(k)}(t_{i}),\quad i=1,2,\cdots ,n} and use them to update the control points by Pi(k+1)=Pi(k)+Δi(k){\displaystyle \mathbf {P} _{i}^{(k+1)}=\mathbf {P} _{i}^{(k)}+\mathbf {\Delta } _{i}^{(k)}} which leads to the (k+1){\displaystyle (k+1)}st fitting curve: P(k+1)(t)=∑i=1nPi(k+1)Bi(t).{\displaystyle \mathbf {P} ^{(k+1)}(t)=\sum _{i=1}^{n}\mathbf {P} _{i}^{(k+1)}B_{i}(t).} In this way, we obtain a sequence of curves P(α)(t),α=0,1,2,⋯{\textstyle \mathbf {P} ^{(\alpha )}(t),\alpha =0,1,2,\cdots }, which converges to a limit curve that interpolates the given data points,[1][9] i.e., limα→∞P(α)(ti)=Qi,i=1,2,⋯,n.{\displaystyle \lim \limits _{\alpha \rightarrow \infty }\mathbf {P} ^{(\alpha )}(t_{i})=\mathbf {Q} _{i},\quad i=1,2,\cdots ,n.} For the B-spline curve and surface fitting problem, Deng and Lin proposed a least-squares progressive-iterative approximation (LSPIA),[10][19] which allows the number of control points to be less than the number of the data points and is more suitable for large-scale data fitting problems.[10] Assume there are m{\displaystyle m} data points and n{\displaystyle n} control points, where n≤m{\displaystyle n\leq m}.
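The interpolation update described above can be sketched with the curve abstracted to its collocation matrix B[i][j] = B_j(t_i). This is an illustrative sketch, not code from the source; the matrix used in the usage example is an invented normalized, diagonally dominant one standing in for a totally positive basis.

```python
def pia_step(B, Q, P):
    """One PIA iteration: P_i <- P_i + (Q_i - P(t_i)) for every control point."""
    n, dim = len(Q), len(Q[0])
    # Evaluate the current curve at every parameter t_i.
    curve = [[sum(B[i][j] * P[j][d] for j in range(n)) for d in range(dim)]
             for i in range(n)]
    # Add each difference vector Q_i - P(t_i) to the matching control point.
    return [[P[i][d] + Q[i][d] - curve[i][d] for d in range(dim)]
            for i in range(n)]

def pia(B, Q, iters=100):
    P = [row[:] for row in Q]   # P^(0): take the data points as initial control points
    for _ in range(iters):
        P = pia_step(B, Q, P)
    return P
```

When the spectral radius of I − B is below 1 (as for the matrix in the test below, where it is 0.4), the iterates converge and the limit curve interpolates the data points.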
Start with equation (1), which gives the k{\displaystyle k}th fitting curve as P(k)(t)=∑j=1nPj(k)Bj(t).{\displaystyle \mathbf {P} ^{(k)}(t)=\sum _{j=1}^{n}\mathbf {P} _{j}^{(k)}B_{j}(t).} To generate the (k+1){\displaystyle (k+1)}th fitting curve, first compute the difference vectors for the data points[10][19] δi(k)=Qi−P(k)(ti),i=1,2,⋯,m{\displaystyle {\boldsymbol {\delta }}_{i}^{(k)}=\mathbf {Q} _{i}-\mathbf {P} ^{(k)}(t_{i}),\quad i=1,2,\cdots ,m} and then the difference vectors for the control points Δj(k)=∑i∈IjciBj(ti)δi(k)∑i∈IjciBj(ti),j=1,2,⋯,n{\displaystyle \mathbf {\Delta } _{j}^{(k)}={\frac {\sum _{i\in I_{j}}{c_{i}B_{j}(t_{i}){\boldsymbol {\delta }}_{i}^{(k)}}}{\sum _{i\in I_{j}}c_{i}B_{j}(t_{i})}},\quad j=1,2,\cdots ,n} where Ij{\displaystyle I_{j}} is the index set of the data points in the j{\displaystyle j}th group, whose parameters fall in the local support of the j{\displaystyle j}th basis function, i.e., Bj(ti)≠0{\displaystyle B_{j}(t_{i})\neq 0}. The ci{\displaystyle c_{i}} are weights that guarantee the convergence of the algorithm, usually taken as ci=1,i∈Ij{\displaystyle c_{i}=1,i\in I_{j}}. Finally, the control points of the (k+1){\displaystyle (k+1)}th curve are updated by Pj(k+1)=Pj(k)+Δj(k),{\displaystyle \mathbf {P} _{j}^{(k+1)}=\mathbf {P} _{j}^{(k)}+\mathbf {\Delta } _{j}^{(k)},} leading to the (k+1){\displaystyle (k+1)}th fitting curve P(k+1)(t){\displaystyle \mathbf {P} ^{(k+1)}(t)}. In this way, we obtain a sequence of curves, and the limit curve converges to the least-squares fitting result to the given data points.[10][19] In the local-PIA method,[12] the control points are divided into active and fixed control points, whose subscripts are denoted as I={i1,i2,⋯,iI}{\textstyle I=\left\{i_{1},i_{2},\cdots ,i_{I}\right\}} and J={j1,j2,⋯,jJ}{\textstyle J=\left\{j_{1},j_{2},\cdots ,j_{J}\right\}}, respectively.
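The LSPIA update above, with unit weights c_i = 1, can be sketched in the same abstract style (B[i][j] = B_j(t_i), m data points, n < m control points); this is an illustrative sketch rather than code from the source.

```python
def lspia_step(B, Q, P):
    """One LSPIA iteration with unit weights c_i = 1."""
    m, n, dim = len(Q), len(P), len(Q[0])
    # Difference vectors delta_i at the data points.
    delta = [[Q[i][d] - sum(B[i][j] * P[j][d] for j in range(n))
              for d in range(dim)] for i in range(m)]
    newP = []
    for j in range(n):
        idx = [i for i in range(m) if B[i][j] != 0.0]   # index set I_j
        w = sum(B[i][j] for i in idx)
        # A weighted average of the delta_i drives control point j.
        newP.append([P[j][d] + sum(B[i][j] * delta[i][d] for i in idx) / w
                     for d in range(dim)])
    return newP
```

At a fixed point the weighted residual sums vanish, i.e. Bᵀ(Q − BP) = 0, which is exactly the normal equation of the least-squares problem.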
Assume that the k{\textstyle k}th fitting curve is P(k)(t)=∑j=1nPj(k)Bj(t){\textstyle \mathbf {P} ^{(k)}(t)=\sum _{j=1}^{n}\mathbf {P} _{j}^{(k)}B_{j}(t)}, where the fixed control points satisfy Pj(k)=Pj(0),j∈J,k=0,1,2,⋯.{\displaystyle \mathbf {P} _{j}^{(k)}=\mathbf {P} _{j}^{(0)},\quad j\in J,\quad k=0,1,2,\cdots .} Then, on the one hand, the iterative formula of the difference vector Δh(k+1){\textstyle \mathbf {\Delta } _{h}^{(k+1)}} corresponding to the fixed control points is Δh(k+1)=Qh−∑j=1nPj(k+1)Bj(th)=Qh−∑j∈JPj(k+1)Bj(th)−∑i∈I(Pi(k)+Δi(k))Bi(th)=Qh−∑j=1nPj(k)Bj(th)−∑i∈IΔi(k)Bi(th)=Δh(k)−∑i∈IΔi(k)Bi(th),h∈J.{\displaystyle {\begin{aligned}\mathbf {\Delta } _{h}^{(k+1)}&=\mathbf {Q} _{h}-\sum _{j=1}^{n}\mathbf {P} _{j}^{(k+1)}B_{j}(t_{h})\\&=\mathbf {Q} _{h}-\sum _{j\in J}\mathbf {P} _{j}^{(k+1)}B_{j}(t_{h})-\sum _{i\in I}\left(\mathbf {P} _{i}^{(k)}+\mathbf {\Delta } _{i}^{(k)}\right)B_{i}(t_{h})\\&=\mathbf {Q} _{h}-\sum _{j=1}^{n}\mathbf {P} _{j}^{(k)}B_{j}(t_{h})-\sum _{i\in I}\mathbf {\Delta } _{i}^{(k)}B_{i}(t_{h})\\&=\mathbf {\Delta } _{h}^{(k)}-\sum _{i\in I}\mathbf {\Delta } _{i}^{(k)}B_{i}(t_{h}),\quad h\in J.\end{aligned}}} On the other hand, the iterative formula of the difference vector Δl(k+1){\textstyle \mathbf {\Delta } _{l}^{(k+1)}} corresponding to the active control points is Δl(k+1)=Ql−∑j=1nPj(k+1)Bj(tl)=Ql−∑j=1nPj(k)Bj(tl)−∑i∈IΔi(k)Bi(tl)=Δl(k)−∑i∈IΔi(k)Bi(tl)=−Δi1(k)Bi1(tl)−Δi2(k)Bi2(tl)−⋯+(1−Bl(tl))Δl(k)−⋯−ΔiI(k)BiI(tl),l∈I.{\displaystyle {\begin{aligned}\mathbf {\Delta } _{l}^{(k+1)}&=\mathbf {Q} _{l}-\sum _{j=1}^{n}\mathbf {P} _{j}^{(k+1)}B_{j}(t_{l})\\&=\mathbf {Q} _{l}-\sum _{j=1}^{n}\mathbf {P} _{j}^{(k)}B_{j}(t_{l})-\sum _{i\in I}\mathbf {\Delta } _{i}^{(k)}B_{i}(t_{l})\\&=\mathbf {\Delta } _{l}^{(k)}-\sum _{i\in I}\mathbf {\Delta } _{i}^{(k)}B_{i}(t_{l})\\&=-\mathbf {\Delta } _{i_{1}}^{(k)}B_{i_{1}}(t_{l})-\mathbf {\Delta } _{i_{2}}^{(k)}B_{i_{2}}(t_{l})-\cdots +\left(1-B_{l}(t_{l})\right)\mathbf {\Delta } _{l}^{(k)}-\cdots -\mathbf {\Delta } 
_{i_{I}}^{(k)}B_{i_{I}}(t_{l}),\quad l\in I.\end{aligned}}}Arranging the above difference vectors into a one-dimensional sequence,D(k+1)=[Δj1(k+1),Δj2(k+1),⋯,ΔjJ(k+1),Δi1(k+1),Δi2(k+1),⋯,ΔiI(k+1)]T,k=0,1,2,⋯,{\displaystyle \mathbf {D} ^{(k+1)}=\left[\mathbf {\Delta } _{j_{1}}^{(k+1)},\mathbf {\Delta } _{j_{2}}^{(k+1)},\cdots ,\mathbf {\Delta } _{j_{J}}^{(k+1)},\mathbf {\Delta } _{i_{1}}^{(k+1)},\mathbf {\Delta } _{i_{2}}^{(k+1)},\cdots ,\mathbf {\Delta } _{i_{I}}^{(k+1)}\right]^{T},\quad k=0,1,2,\cdots ,}the local iteration format in matrix form is,D(k+1)=TD(k),k=0,1,2,⋯,{\displaystyle \mathbf {D} ^{(k+1)}=\mathbf {T} \mathbf {D} ^{(k)},\quad k=0,1,2,\cdots ,}whereT{\textstyle \mathbf {T} }is the iteration matrix:T=[EJ−B10EI−B2],{\displaystyle \mathbf {T} ={\begin{bmatrix}\mathbf {E} _{J}&-\mathbf {B} _{1}\\0&\mathbf {E} _{I}-\mathbf {B} _{2}\end{bmatrix}},}whereEJ{\textstyle \mathbf {E} _{J}}andEI{\textstyle \mathbf {E} _{I}}are the identity matrices andB1=[Bi1(tj1)Bi2(tj1)⋯BiI(tj1)Bi1(tj2)Bi2(tj2)⋯BiI(tj2)⋮⋮⋮⋮Bi1(tjJ)Bi2(tjJ)⋯BiI(tjJ)],B2=[Bi1(ti1)Bi2(ti1)⋯BiI(ti1)Bi1(ti2)Bi2(ti2)⋯BiI(ti2)⋮⋮⋮⋮Bi1(tiI)Bi2(tiI)⋯BiI(tiI)].{\displaystyle \mathbf {B} _{1}={\begin{bmatrix}B_{i_{1}}\left(t_{j_{1}}\right)&B_{i_{2}}\left(t_{j_{1}}\right)&\cdots &B_{i_{I}}\left(t_{j_{1}}\right)\\B_{i_{1}}\left(t_{j_{2}}\right)&B_{i_{2}}\left(t_{j_{2}}\right)&\cdots &B_{i_{I}}\left(t_{j_{2}}\right)\\\vdots &\vdots &\vdots &\vdots \\B_{i_{1}}\left(t_{j_{J}}\right)&B_{i_{2}}\left(t_{j_{J}}\right)&\cdots &B_{i_{I}}\left(t_{j_{J}}\right)\\\end{bmatrix}},\mathbf {B} _{2}={\begin{bmatrix}B_{i_{1}}\left(t_{i_{1}}\right)&B_{i_{2}}\left(t_{i_{1}}\right)&\cdots &B_{i_{I}}\left(t_{i_{1}}\right)\\B_{i_{1}}\left(t_{i_{2}}\right)&B_{i_{2}}\left(t_{i_{2}}\right)&\cdots &B_{i_{I}}\left(t_{i_{2}}\right)\\\vdots &\vdots &\vdots &\vdots \\B_{i_{1}}\left(t_{i_{I}}\right)&B_{i_{2}}\left(t_{i_{I}}\right)&\cdots &B_{i_{I}}\left(t_{i_{I}}\right)\\\end{bmatrix}}.}The above local iteration format converges and can 
be extended to blending surfaces[12]and subdivision surfaces.[20] The PIA format for implicit curve and surface reconstruction is presented in the following.[11]Given an ordered point cloud{Qi}i=1n{\textstyle \left\{\mathbf {Q} _{i}\right\}_{i=1}^{n}}and a unit normal vector{ni}i=1n{\textstyle \left\{\mathbf {n} _{i}\right\}_{i=1}^{n}}on the data points, we want to reconstruct an implicit curve from the given point cloud. To avoid a trivial solution, some offset points{Ql}l=n+12n{\textstyle \left\{\mathbf {Q} _{l}\right\}_{l=n+1}^{2n}}are added to the point cloud.[11]They are offset by a distanceσ{\textstyle \sigma }along the unit normal vector of each pointQl=Qi+σni,l=n+i,i=1,2,⋯,n.{\displaystyle \mathbf {Q} _{l}=\mathbf {Q} _{i}+\sigma \mathbf {n} _{i},\quad l=n+i,\quad i=1,2,\cdots ,n.}Assume thatϵ{\textstyle \epsilon }is the value of the implicit function at the offset pointf(Ql)=ϵ,l=n+1,n+2,⋯,2n.{\displaystyle f\left(\mathbf {Q} _{l}\right)=\epsilon ,\quad l=n+1,n+2,\cdots ,2n.}Let the implicit curve after theα{\textstyle \alpha }th iteration bef(α)(x,y)=∑i=1Nu∑j=1NvCij(α)Bi(x)Bj(y),{\displaystyle f^{(\alpha )}(x,y)=\sum _{i=1}^{N_{u}}\sum _{j=1}^{N_{v}}C_{ij}^{(\alpha )}B_{i}(x)B_{j}(y),}whereCij(α){\textstyle C_{ij}^{(\alpha )}}is the control point. Define the difference vector of data points as[11]δk(α)=0−f(α)(xk,yk),k=1,2,⋯,n,δl(α)=ϵ−f(α)(xl,yl),l=n+1,n+2,⋯,2n.{\displaystyle {\begin{aligned}{\boldsymbol {\delta }}_{k}^{(\alpha )}&=0-f^{(\alpha )}(x_{k},y_{k}),\quad k=1,2,\cdots ,n,\\{\boldsymbol {\delta }}_{l}^{(\alpha )}&=\epsilon -f^{(\alpha )}(x_{l},y_{l}),\quad l=n+1,n+2,\cdots ,2n.\end{aligned}}}Next, calculate the difference vector of control coefficientsΔij(α)=μ∑k=12nBi(xk)Bj(yk)δk(α),i=1,2,⋯,Nu,j=1,2,⋯,Nv,{\displaystyle {\boldsymbol {\Delta }}_{ij}^{(\alpha )}=\mu \sum _{k=1}^{2n}B_{i}(x_{k})B_{j}(y_{k}){\boldsymbol {\delta }}_{k}^{(\alpha )},\quad i=1,2,\cdots ,N_{u},\quad j=1,2,\cdots ,N_{v},}whereμ{\textstyle \mu }is the convergence coefficient. 
As a result, the new control coefficients areCij(α+1)=Cij(α)+Δij(α),{\displaystyle C_{ij}^{(\alpha +1)}=C_{ij}^{(\alpha )}+{\boldsymbol {\Delta }}_{ij}^{(\alpha )},}leading to the new algebraic B-spline curvef(α+1)(x,y)=∑i=1Nu∑j=1NvCij(α+1)Bi(x)Bj(y).{\displaystyle f^{(\alpha +1)}(x,y)=\sum _{i=1}^{N_{u}}\sum _{j=1}^{N_{v}}C_{ij}^{(\alpha +1)}B_{i}(x)B_{j}(y).}The above procedure is carried out iteratively to generate a sequence of algebraic B-spline functions{f(α)(x,y),α=0,1,2,⋯}{\textstyle \left\{f^{(\alpha )}(x,y),\quad \alpha =0,1,2,\cdots \right\}}. The sequence converges to a minimization problem with constraints when the initial control coefficientsCij(0)=0{\textstyle C_{ij}^{(0)}=0}.[11] Assume that the implicit surface generated after theα{\textstyle \alpha }th iteration isf(α)(x,y,z)=∑i=1Nu∑j=1Nv∑k=1NwCijk(α)Bi(x)Bj(y)Bk(z),{\displaystyle f^{(\alpha )}(x,y,z)=\sum _{i=1}^{N_{u}}\sum _{j=1}^{N_{v}}\sum _{k=1}^{N_{w}}C_{ijk}^{(\alpha )}B_{i}(x)B_{j}(y)B_{k}(z),}the iteration format is similar to that of the curve case.[11][21] To develop fairing-PIA, we first define the functionals as follows:[13]Fr,j(f)=∫t1tmBr,j(t)fdt,j=1,2,⋯,n,r=1,2,3,{\displaystyle {\mathcal {F}}_{r,j}(f)=\int _{t_{1}}^{t_{m}}B_{r,j}(t)fdt,\quad j=1,2,\cdots ,n,\quad r=1,2,3,}whereBr,j(t){\textstyle B_{r,j}(t)}represents ther{\textstyle r}th derivative of the basis functionBj(t){\textstyle B_{j}(t)},[8](e.g.B-spline basis function). 
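In matrix terms, the implicit-PIA update above reads C ← C + μ Bᵀ(targets − BC), with targets of 0 on the point cloud and ε on the offset points. The sketch below is illustrative, not code from the source: the tensor-product basis is abstracted into a precomputed value matrix Bv, and the matrix in the test is invented for the demonstration.

```python
def implicit_pia(Bv, targets, mu, iters=500):
    """Iterate C <- C + mu * Bv^T (targets - Bv C), starting from C^(0) = 0.

    Bv[k][q] plays the role of B_i(x_k) B_j(y_k), with the (i, j) grid of
    control coefficients flattened into a single index q (illustrative).
    """
    m, nq = len(Bv), len(Bv[0])
    C = [0.0] * nq   # C^(0) = 0, as the stated convergence result assumes
    for _ in range(iters):
        # Current implicit-function values at all (plain and offset) points.
        f = [sum(Bv[k][q] * C[q] for q in range(nq)) for k in range(m)]
        delta = [targets[k] - f[k] for k in range(m)]
        # Difference vectors of the control coefficients, scaled by mu.
        for q in range(nq):
            C[q] += mu * sum(Bv[k][q] * delta[k] for k in range(m))
    return C
```

For a sufficiently small convergence coefficient μ, this iteration settles at the least-squares solution of Bv C = targets, i.e. the normal equations hold at the limit.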
Let the curve after thek{\textstyle k}th iteration beP[k](t)=∑j=1nBj(t)Pj[k],t∈[t1,tm].{\displaystyle \mathbf {P} ^{[k]}(t)=\sum _{j=1}^{n}B_{j}(t)\mathbf {P} _{j}^{[k]},\quad t\in [t_{1},t_{m}].}To construct the new curveP[k+1](t){\textstyle \mathbf {P} ^{[k+1]}(t)}, we first calculate the(k+1){\textstyle (k+1)}st difference vectors for data points,[13]di[k]=Qi−P[k](ti),i=1,2,⋯,m.{\displaystyle \mathbf {d} _{i}^{[k]}=\mathbf {Q} _{i}-\mathbf {P} ^{[k]}(t_{i}),\quad i=1,2,\cdots ,m.}Then, the fitting difference vectors and the fairing vectors for control points are calculated by[13]δj[k]=∑h∈IjBj(th)dh[k],j=1,2,⋯,nηj[k]=∑l=1nFr,l(Br,j(t))Pl[k],j=1,2,⋯,n{\displaystyle {\begin{aligned}{\boldsymbol {\delta }}_{j}^{[k]}&=\sum _{h\in I_{j}}B_{j}(t_{h})\mathbf {d} _{h}^{[k]},\quad j=1,2,\cdots ,n\\{\boldsymbol {\eta }}_{j}^{[k]}&=\sum _{l=1}^{n}{\mathcal {F}}_{r,l}\left(B_{r,j}(t)\right)\mathbf {P} _{l}^{[k]},\quad j=1,2,\cdots ,n\\\end{aligned}}}Finally, the control points of the(k+1){\displaystyle (k+1)}st curve are produced by[13]Pj[k+1]=Pj[k]+μj[(1−ωj)δj[k]−ωjηj[k]],j=1,2,⋯,n,{\displaystyle \mathbf {P} _{j}^{[k+1]}=\mathbf {P} _{j}^{[k]}+\mu _{j}\left[\left(1-\omega _{j}\right){\boldsymbol {\delta }}_{j}^{[k]}-\omega _{j}{\boldsymbol {\eta }}_{j}^{[k]}\right],\quad j=1,2,\cdots ,n,}whereμj{\displaystyle \mu _{j}}is a normalization weight, andωj{\displaystyle \omega _{j}}is a smoothing weight corresponding to thej{\displaystyle j}th control point. The smoothing weights can be employed to adjust the smoothness individually, thus bringing great flexibility for smoothness.[13]The larger the smoothing weight is, the smoother the generated curve is. The new curve is obtained as followsP[k+1](t)=∑j=1nBj(t)Pj[k+1],t∈[t1,tm].{\displaystyle \mathbf {P} ^{[k+1]}(t)=\sum _{j=1}^{n}B_{j}(t)\mathbf {P} _{j}^{[k+1]},\quad t\in [t_{1},t_{m}].}In this way, we obtain a sequence of curves{P[k](t),k=1,2,3,⋯}{\textstyle \left\{\mathbf {P} ^{[k]}(t),\;k=1,2,3,\cdots \right\}}. 
The sequence converges to the solution of the conventional fairing method based on energy minimization when all smoothing weights are equal (ωj=ω{\textstyle \omega _{j}=\omega }).[13] Similarly, the fairing-PIA can be extended to the surface case. Isogeometric least-squares progressive-iterative approximation (IG-LSPIA)[14] applies the idea to the numerical solution of a boundary value problem[15] {Lu=f,inΩ,Gu=g,on∂Ω,{\displaystyle \left\{{\begin{aligned}{\mathcal {L}}u=f,&\quad {\text{in}}\;\Omega ,\\{\mathcal {G}}u=g,&\quad {\text{on}}\;\partial \Omega ,\end{aligned}}\right.} where u:Ω→R{\textstyle u:\Omega \to \mathbb {R} } is the unknown solution, L{\textstyle {\mathcal {L}}} is the differential operator, G{\textstyle {\mathcal {G}}} is the boundary operator, and f{\textstyle f} and g{\textstyle g} are given continuous functions. In the isogeometric analysis method, NURBS basis functions[8] are used as shape functions to compute the numerical solution of this boundary value problem.[15] The same basis functions are applied to represent the numerical solution uh{\textstyle u_{h}} and the geometric mapping G{\textstyle G}: uh(τ^)=∑j=1nRj(τ^)uj,G(τ^)=∑j=1nRj(τ^)Pj,{\displaystyle {\begin{aligned}u_{h}\left({\hat {\tau }}\right)&=\sum _{j=1}^{n}R_{j}({\hat {\tau }})u_{j},\\G({\hat {\tau }})&=\sum _{j=1}^{n}R_{j}({\hat {\tau }})P_{j},\end{aligned}}} where Rj(τ^){\textstyle R_{j}({\hat {\tau }})} denotes the NURBS basis function and uj{\textstyle u_{j}} is the control coefficient.
After substituting the collocation points[22] τ^i,i=1,2,...,m{\textstyle {\hat {\tau }}_{i},i=1,2,...,{m}} into the strong form of the PDE, we obtain a discretized problem[22] {Luh(τ^i)=f(G(τ^i)),i∈IL,Guh(τ^j)=g(G(τ^j)),j∈IG,{\displaystyle \left\{{\begin{aligned}{\mathcal {L}}u_{h}({\hat {\tau }}_{i})=f(G({\hat {\tau }}_{i})),&\quad i\in {\mathcal {I_{L}}},\\{\mathcal {G}}u_{h}({\hat {\tau }}_{j})=g(G({\hat {\tau }}_{j})),&\quad j\in {\mathcal {I_{G}}},\end{aligned}}\right.} where IL{\textstyle {\mathcal {I_{L}}}} and IG{\textstyle {\mathcal {I_{G}}}} denote the subscripts of the internal and boundary collocation points, respectively. Arranging the control coefficients uj{\textstyle u_{j}} of the numerical solution uh(τ^){\textstyle u_{h}({\hat {\tau }})} into a column vector U=[u1,u2,...,un]T{\textstyle \mathbf {U} =[u_{1},u_{2},...,u_{n}]^{T}}, the discretized problem can be reformulated in matrix form as AU=b{\displaystyle \mathbf {AU} =\mathbf {b} } where A{\textstyle \mathbf {A} } is the collocation matrix and b{\textstyle \mathbf {b} } is the load vector. Assume that the discretized load values are data points {bi}i=1m{\textstyle \left\{b_{i}\right\}_{i=1}^{m}} to be fitted.
Given the initial guess of the control coefficients{uj(0)}j=1n,n<m{\textstyle \left\{u_{j}^{(0)}\right\}_{j=1}^{n},n<m}, we obtain an initial blending function[14]U(0)(τ^)=∑j=1nAj(τ^)uj(0),τ^∈[τ^1,τ^m],{\displaystyle U^{(0)}({\hat {\tau }})=\sum _{j=1}^{n}A_{j}({\hat {\tau }})u_{j}^{(0)},\quad {\hat {\tau }}\in [{\hat {\tau }}_{1},{\hat {\tau }}_{m}],}whereAj(τ^){\textstyle A_{j}({\hat {\tau }})},j=1,2,⋯,n{\textstyle j=1,2,\cdots ,n}, represents the combination of different order derivatives of the NURBS basis functions determined using the operatorsL{\textstyle {\mathcal {L}}}andG{\textstyle {\mathcal {G}}}Aj(τ^)={LRj(τ^),τ^inΩpin,GRj(τ^),τ^inΩpbd,j=1,2,⋯,n,{\displaystyle A_{j}({\hat {\tau }})=\left\{{\begin{aligned}{\mathcal {L}}R_{j}({\hat {\tau }}),&\quad {\hat {\tau }}\ {\text{in}}\ \Omega _{p}^{in},\\{\mathcal {G}}R_{j}({\hat {\tau }}),&\quad {\hat {\tau }}\ {\text{in}}\ \Omega _{p}^{bd},\quad j=1,2,\cdots ,n,\end{aligned}}\right.}whereΩpin{\textstyle \Omega _{p}^{in}}andΩpbd{\textstyle \Omega _{p}^{bd}}indicate the interior and boundary of the parameter domain, respectively. EachAj(τ^){\textstyle A_{j}({\hat {\tau }})}corresponds to thej{\textstyle j}th control coefficient. Assume thatJin{\textstyle J_{in}}andJbd{\textstyle J_{bd}}are the index sets of the internal and boundary control coefficients, respectively. 
Without loss of generality, we further assume that the boundary control coefficients have been obtained using strong or weak imposition and are fixed, i.e.,uj(k)=uj∗,j∈Jbd,k=0,1,2,⋯.{\displaystyle u_{j}^{(k)}=u_{j}^{*},\quad j\in J_{bd},\quad k=0,1,2,\cdots .}Thek{\textstyle k}th blending function, generated after thek{\textstyle k}th iteration of IG-LSPIA,[14]is assumed to be as follows:U(k)(τ^)=∑j=1nAj(τ^)uj(k),τ^∈[τ^1,τ^m].{\displaystyle U^{(k)}({\hat {\tau }})=\sum _{j=1}^{n}A_{j}({\hat {\tau }})u_{j}^{(k)},\quad {\hat {\tau }}\in [{\hat {\tau }}_{1},{\hat {\tau }}_{m}].}Then, the difference vectors for collocation points (DCP) in the(k+1){\textstyle (k+1)}st iteration are obtained usingδi(k)=bi−∑j=1nAj(τ^i)uj(k)=bi−∑j∈JbdAj(τ^i)uj(k)−∑j∈JinAj(τ^i)uj(k),i=1,2,...,m.{\displaystyle {\begin{aligned}{\boldsymbol {\delta }}_{i}^{(k)}&=b_{i}-\sum _{j=1}^{n}A_{j}({\hat {\tau }}_{i})u_{j}^{(k)}\\&=b_{i}-\sum _{j\in J_{bd}}A_{j}({\hat {\tau }}_{i})u_{j}^{(k)}-\sum _{j\in J_{in}}A_{j}({\hat {\tau }}_{i})u_{j}^{(k)},\quad i=1,2,...,m.\end{aligned}}}Moreover, group all load values whose parameters fall in the local support of thej{\textstyle j}th derivatives function, i.e.,Aj(τ^i)≠0{\textstyle A_{j}({\hat {\tau }}_{i})\neq 0}, into thej{\textstyle j}th group corresponding to thej{\textstyle j}th control coefficient, and denote the index set of thej{\textstyle j}th group of load values asIj{\textstyle I_{j}}. Lastly, the differences for control coefficients (DCC) can be constructed as follows:[14]dj(k)=μ∑h∈IjAj(τ^h)δh(k),j=1,2,...,n,{\displaystyle d_{j}^{(k)}=\mu \sum _{h\in I_{j}}A_{j}({\hat {\tau }}_{h}){\boldsymbol {\delta }}_{h}^{(k)},\quad j=1,2,...,n,}whereμ{\textstyle \mu }is a normalization weight to guarantee the convergence of the algorithm. 
Thus, the new control coefficients are updated via the following formula,uj(k+1)=uj(k)+dj(k),j=1,2,...,n,{\displaystyle u_{j}^{(k+1)}=u_{j}^{(k)}+d_{j}^{(k)},\quad j=1,2,...,n,}Consequently, the(k+1){\textstyle (k+1)}st blending function is generated as follows:U(k+1)(τ^)=∑j=1nAj(τ^)uj(k+1).{\displaystyle U^{(k+1)}({\hat {\tau }})=\sum _{j=1}^{n}A_{j}({\hat {\tau }})u_{j}^{(k+1)}.}The above iteration process is performed until the desired fitting precision is reached and a sequence of blending functions is obtained{U(k)(τ^),k=0,1,…}.{\displaystyle \left\{U^{(k)}({\hat {\tau }}),k=0,1,\dots \right\}.}The IG-LSPIA converges to the solution of a constrained least-squares collocation problem.[14] Letnbe the number of control points andmbe the number of data points. Ifn=m{\textstyle n=m}, the PIA iterative format in matrix form is[1]P(α+1)=P(α)+Δ(α)=P(α)+Q−BP(α)=(I−B)P(α)+Q{\displaystyle {\begin{aligned}\mathbf {P^{(\alpha +1)}} &=\mathbf {P^{(\alpha )}} +\mathbf {\Delta } ^{(\alpha )}\\&=\mathbf {P} ^{(\alpha )}+\mathbf {Q} -\mathbf {B} \mathbf {P} ^{(\alpha )}\\&=\left(\mathbf {I} -\mathbf {B} \right)\mathbf {P} ^{(\alpha )}+\mathbf {Q} \end{aligned}}}whereQ=[Q1,Q2,⋯,Qm]TP(α)=[P1(α),P2(α),⋯,Pn(α)]TΔ(α)=[Δ1(α),Δ2(α),⋯,Δn(α)]TB=[B1(t1)B2(t1)⋯Bn(t1)B1(t2)B2(t2)⋯Bn(t2)⋮⋮⋱⋮B1(tm)B2(tm)⋯Bn(tm)].{\displaystyle {\begin{aligned}\mathbf {Q} &=\left[\mathbf {Q} _{1},\mathbf {Q} _{2},\cdots ,\mathbf {Q} _{m}\right]^{T}\\\mathbf {P^{(\alpha )}} &=\left[\mathbf {P} _{1}^{(\alpha )},\mathbf {P} _{2}^{(\alpha )},\cdots ,\mathbf {P} _{n}^{(\alpha )}\right]^{T}\\\mathbf {\Delta } ^{(\alpha )}&=\left[\mathbf {\Delta } _{1}^{(\alpha )},\mathbf {\Delta } _{2}^{(\alpha )},\cdots ,\mathbf {\Delta } _{n}^{(\alpha )}\right]^{T}\\\mathbf {B} &={\begin{bmatrix}B_{1}(t_{1})&B_{2}(t_{1})&\cdots &B_{n}(t_{1})\\B_{1}(t_{2})&B_{2}(t_{2})&\cdots &B_{n}(t_{2})\\\vdots &\vdots &\ddots &\vdots \\B_{1}(t_{m})&B_{2}(t_{m})&\cdots &B_{n}(t_{m})\\\end{bmatrix}}.\end{aligned}}}The convergence of the PIA is 
related to the properties of the collocation matrix. If thespectral radiusof the iteration matrixI−B{\displaystyle \mathbf {I} -\mathbf {B} }is less than1{\displaystyle 1}, then the PIA is convergent. It has been shown that the PIA methods are convergent for Bézier curves and surfaces, B-spline curves and surfaces, NURBS curves and surfaces, triangular Bernstein–Bézier surfaces, and subdivision surfaces (Loop, Catmull-Clark, Doo-Sabin).[2] Ifn<m{\textstyle n<m}, the LSPIA in matrix form is[10][19]P(α+1)=P(α)+μBTΔ(α)=P(α)+μBT(Q−BP(α))=(I−μBTB)P(α)+μBTQ.{\displaystyle {\begin{aligned}\mathbf {P^{(\alpha +1)}} &=\mathbf {P^{(\alpha )}} +\mu \mathbf {B} ^{T}\mathbf {\Delta } ^{(\alpha )}\\&=\mathbf {P} ^{(\alpha )}+\mu \mathbf {B} ^{T}\left(\mathbf {Q} -\mathbf {B} \mathbf {P} ^{(\alpha )}\right)\\&=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)\mathbf {P} ^{(\alpha )}+\mu \mathbf {B} ^{T}\mathbf {Q} .\end{aligned}}}When the matrixBTB{\textstyle \mathbf {B} ^{T}\mathbf {B} }isnonsingular, the following results can be obtained:[23] Lemma—If0<μ<2λ0{\textstyle 0<\mu <{\frac {2}{\lambda _{0}}}}, whereλ0{\textstyle \lambda _{0}}is the largesteigenvalueof the matrixBTB{\textstyle \mathbf {B} ^{T}\mathbf {B} }, then the eigenvalues ofμBTB{\textstyle \mu \mathbf {B} ^{T}\mathbf {B} }are real numbers and satisfy0<λ(μBTB)<2{\textstyle 0<\lambda (\mu \mathbf {B} ^{T}\mathbf {B} )<2}. ProofSinceBTB{\textstyle \mathbf {B} ^{T}\mathbf {B} }is nonsingular, andμ>0{\textstyle \mu >0}, thenλ(μBTB)>0{\textstyle \lambda (\mu \mathbf {B} ^{T}\mathbf {B} )>0}. Moreover,λ(μBTB)=μλ(BTB)<2λ(BTB)λ0<2.{\displaystyle \lambda (\mu \mathbf {B} ^{T}\mathbf {B} )=\mu \lambda (\mathbf {B} ^{T}\mathbf {B} )<2{\frac {\lambda (\mathbf {B} ^{T}\mathbf {B} )}{\lambda _{0}}}<2.}In summary,0<λ(μBTB)<2{\textstyle 0<\lambda (\mu \mathbf {B} ^{T}\mathbf {B} )<2}. 
Theorem—If0<μ<2λ0{\textstyle 0<\mu <{\frac {2}{\lambda _{0}}}}, LSPIA is convergent, and converges to the least-squares fitting result to the given data points.[10][19] ProofFrom the matrix form of iterative format, we obtain the following:P(α+1)=(I−μBTB)P(α)+μBTQ,=(I−μBTB)[(I−μBTB)P(α−1)+μBTQ]+μBTQ,=(I−μBTB)2P(α−1)+∑i=01(I−μBTB)μBTQ,=⋯=(I−μBTB)α+1P(0)+∑i=0α(I−μBTB)αμBTQ.{\displaystyle {\begin{aligned}\mathbf {P^{(\alpha +1)}} &=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)\mathbf {P} ^{(\alpha )}+\mu \mathbf {B} ^{T}\mathbf {Q} ,\\&=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)\left[\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)\mathbf {P} ^{(\alpha -1)}+\mu \mathbf {B} ^{T}\mathbf {Q} \right]+\mu \mathbf {B} ^{T}\mathbf {Q} ,\\&=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{2}\mathbf {P} ^{(\alpha -1)}+\sum _{i=0}^{1}\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)\mu \mathbf {B} ^{T}\mathbf {Q} ,\\&=\cdots \\&=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{\alpha +1}\mathbf {P} ^{(0)}+\sum _{i=0}^{\alpha }\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{\alpha }\mu \mathbf {B} ^{T}\mathbf {Q} .\\\end{aligned}}}According to above Lemma, the spectral radius of the matrixμBTB{\textstyle \mu \mathbf {B} ^{T}\mathbf {B} }satisfies0<ρ(μBTB)<2{\displaystyle 0<\rho \left({\mu \mathbf {B} ^{T}\mathbf {B} }\right)<2}and thus the spectral radius of the iteration matrix satisfies0<ρ(I−μBTB)<1.{\displaystyle 0<\rho \left({\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} }\right)<1.}Whenα→∞{\textstyle \alpha \rightarrow \infty }(I−μBTB)∞=0,∑i=0∞(I−μBTB)α=1μ(BTB)−1.{\displaystyle \left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{\infty }=0,\ \sum _{i=0}^{\infty }\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{\alpha }={\frac {1}{\mu }}\left(\mathbf {B} ^{T}\mathbf {B} \right)^{-1}.}As a result,P(∞)=(BTB)−1BTQ,{\displaystyle \mathbf {P} ^{(\infty )}=\left(\mathbf {B} 
^{T}\mathbf {B} \right)^{-1}\mathbf {B} ^{T}\mathbf {Q} ,}i.e.,BTBP(∞)=BTQ{\textstyle \mathbf {B} ^{T}\mathbf {B} \mathbf {P} ^{(\infty )}=\mathbf {B} ^{T}\mathbf {Q} }, which is equivalent to the normal equation of the fitting problem. Hence, the LSPIA algorithm converges to the least-squares result for a given sequence of points. Lin et al. showed that LSPIA converges even when the iteration matrix is singular.[18] Since PIA has an obvious geometric meaning, constraints can easily be integrated into the iterations. PIA has been widely applied in many fields, such as data fitting, reverse engineering, geometric design, mesh generation, data compression, fairing curve and surface generation, and isogeometric analysis. For implicit curve and surface reconstruction, PIA avoids the additional zero level set and regularization term, which greatly improves the speed of the reconstruction algorithm.[11] For offset curve approximation, the data points are first sampled on the original curve. Then, the initial polynomial or rational approximation curve of the offset curve is generated from these sampled points. Finally, the offset curve is approximated iteratively using the PIA method.[33] For hexahedral mesh generation, given a triangular mesh model as input, the algorithm first constructs the initial hexahedral mesh, then extracts the quadrilateral mesh of the surface as the initial boundary mesh. During the iterations, the movement of each mesh vertex is constrained to ensure the validity of the mesh. Finally, the hexahedral model is fitted to the given input model. The algorithm can guarantee the validity of the generated hexahedral mesh, i.e., the Jacobian value at each mesh vertex is greater than zero.[34] For image compression, the image data are first converted into a one-dimensional sequence by a Hilbert scan. Then, these data points are fitted by LSPIA to generate a Hilbert curve. Finally, the Hilbert curve is sampled, and the compressed image can be reconstructed.
This method preserves the neighborhood information of pixels well.[35] For fairing curve and surface generation, given a data point set, we first define the fairing functional and calculate the fitting difference vector and the fairing vector of the control points; then, the control points are adjusted with fairing weights. Following these steps, the fairing curve and surface can be generated iteratively. Owing to the ample supply of fairing parameters, the method can achieve global or local fairing. Knot vectors, fairing weights, or the data parameterization can also be flexibly adjusted after each round of iteration. The traditional energy-minimization method is a special case of this method, namely when the smoothing weights are all the same.[13] In IG-LSPIA, the discretized load values are regarded as the set of data points to be fitted, and the combination of the basis functions and their derivative functions is used as the blending function for fitting. The method automatically adjusts the degrees of freedom of the numerical solution of the partial differential equation according to the fit of the blending function to the load values. In addition, the average iteration time per step depends only on the number of data points (i.e., collocation points) and is unrelated to the number of control coefficients.[14]
https://en.wikipedia.org/wiki/Progressive-iterative_approximation_method
In statistics and image processing, to smooth a data set is to create an approximating function that attempts to capture important patterns in the data, while leaving out noise or other fine-scale structures and rapid phenomena. In smoothing, the data points of a signal are modified so that individual points higher than the adjacent points (presumably because of noise) are reduced, and points that are lower than the adjacent points are increased, leading to a smoother signal. Smoothing may be used in two important ways that can aid in data analysis: (1) by extracting more information from the data, as long as the assumption of smoothing is reasonable, and (2) by providing analyses that are both flexible and robust.[1] Many different algorithms are used in smoothing. Smoothing may be distinguished from the related and partially overlapping concept of curve fitting in several ways. In the case that the smoothed values can be written as a linear transformation of the observed values, the smoothing operation is known as a linear smoother; the matrix representing the transformation is known as a smoother matrix or hat matrix.[citation needed] The operation of applying such a matrix transformation is called convolution. Thus the matrix is also called a convolution matrix or a convolution kernel. In the case of a simple series of data points (rather than a multi-dimensional image), the convolution kernel is a one-dimensional vector. One of the most common algorithms is the "moving average", often used to try to capture important trends in repeated statistical surveys. In image processing and computer vision, smoothing ideas are used in scale space representations. The simplest smoothing algorithm is the "rectangular" or "unweighted sliding-average smooth". This method replaces each point in the signal with the average of "m" adjacent points, where "m" is a positive integer called the "smooth width". Usually m is an odd number.
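The unweighted sliding-average smooth just described can be sketched as follows; shrinking the window near the boundaries is a choice made for this illustration, not something the text prescribes.

```python
def moving_average(y, m):
    """Rectangular smooth: replace each point with the mean of up to m
    adjacent points centred on it (m should be an odd positive integer)."""
    h = m // 2
    out = []
    for i in range(len(y)):
        lo, hi = max(0, i - h), min(len(y), i + h + 1)   # clip window at the edges
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out
```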
The triangular smooth is like the rectangular smooth except that it implements a weighted smoothing function.[2] Some specific smoothing and filter types, with their respective uses, pros and cons, are:
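A triangular smooth of this kind can be sketched the same way, with weights that rise linearly to a peak at the window centre and fall off again, e.g. (1, 2, 3, 2, 1)/9 for a smooth width of 5 (an illustrative sketch; names and endpoint handling are assumptions):

```python
def triangular_smooth(signal, m=5):
    """Triangular smooth: a weighted sliding average whose weights rise
    linearly to the window centre, e.g. (1, 2, 3, 2, 1)/9 for m = 5.
    m must be a positive odd integer; endpoints where the window does
    not fit are left unchanged."""
    if m < 1 or m % 2 == 0:
        raise ValueError("smooth width m must be a positive odd integer")
    half = m // 2
    weights = [half + 1 - abs(i - half) for i in range(m)]  # e.g. [1,2,3,2,1]
    total = sum(weights)                                    # 9 for m = 5
    out = list(signal)
    for i in range(half, len(signal) - half):
        out[i] = sum(w * signal[i - half + j] for j, w in enumerate(weights)) / total
    return out
```

Compared with the rectangular smooth, the triangular weights damp a spike less abruptly because nearby points count for more than distant ones.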
https://en.wikipedia.org/wiki/Smoothing
In mathematics, a spline is a function defined piecewise by polynomials. In interpolating problems, spline interpolation is often preferred to polynomial interpolation because it yields similar results, even when using low-degree polynomials, while avoiding Runge's phenomenon for higher degrees. In the computer science subfields of computer-aided design and computer graphics, the term spline more frequently refers to a piecewise polynomial (parametric) curve. Splines are popular curves in these subfields because of the simplicity of their construction, their ease and accuracy of evaluation, and their capacity to approximate complex shapes through curve fitting and interactive curve design. The term spline comes from the flexible spline devices used by shipbuilders and draftsmen to draw smooth shapes. The term "spline" is used to refer to a wide class of functions that are used in applications requiring data interpolation and/or smoothing. The data may be either one-dimensional or multi-dimensional. Spline functions for interpolation are normally determined as the minimizers of suitable measures of roughness (for example, integral squared curvature) subject to the interpolation constraints. Smoothing splines may be viewed as generalizations of interpolation splines where the functions are determined to minimize a weighted combination of the average squared approximation error over observed data and the roughness measure. For a number of meaningful definitions of the roughness measure, the spline functions are found to be finite-dimensional in nature, which is the primary reason for their utility in computations and representation. For the rest of this section, we focus entirely on one-dimensional polynomial splines and use the term "spline" in this restricted sense. According to Gerald Farin, B-splines were explored as early as the nineteenth century by Nikolai Lobachevsky at Kazan University in Russia.[1] Before computers were used, numerical calculations were done by hand.
Although piecewise-defined functions like the sign function or step function were used, polynomials were generally preferred because they were easier to work with. With the advent of computers, splines have gained importance. They were first used as a replacement for polynomials in interpolation, then as a tool to construct smooth and flexible shapes in computer graphics. It is commonly accepted that the first mathematical reference to splines is the 1946 paper by Schoenberg, which is probably the first place the word "spline" is used in connection with smooth, piecewise polynomial approximation. However, the ideas have their roots in the aircraft and shipbuilding industries. In the foreword to (Bartels et al., 1987), Robin Forrest describes "lofting", a technique used in the British aircraft industry during World War II to construct templates for airplanes by passing thin wooden strips (called "splines") through points laid out on the floor of a large design loft, a technique borrowed from ship-hull design. For years the practice of ship design had employed models to design in the small. The successful design was then plotted on graph paper, and the key points of the plot were re-plotted on larger graph paper to full size. The thin wooden strips provided an interpolation of the key points into smooth curves. The strips would be held in place at discrete points (called "ducks" by Forrest; Schoenberg used "dogs" or "rats"), and between these points they would assume shapes of minimum strain energy. According to Forrest, one possible impetus for a mathematical model for this process was the potential loss of the critical design components for an entire aircraft should the loft be hit by an enemy bomb. This gave rise to "conic lofting", which used conic sections to model the position of the curve between the ducks. Conic lofting was replaced by what we would call splines in the early 1960s based on work by J. C. Ferguson at Boeing and (somewhat later) by M. A. Sabin at British Aircraft Corporation. The word "spline" was originally an East Anglian dialect word. The use of splines for modeling automobile bodies seems to have several independent beginnings. Credit is claimed on behalf of de Casteljau at Citroën, Pierre Bézier at Renault, and Birkhoff, Garabedian, and de Boor at General Motors (see Birkhoff and de Boor, 1965), all for work occurring in the very early 1960s or late 1950s. At least one of de Casteljau's papers was published, but not widely, in 1959. De Boor's work at General Motors resulted in a number of papers being published in the early 1960s, including some of the fundamental work on B-splines. Work was also being done at Pratt & Whitney Aircraft, where two of the authors of (Ahlberg et al., 1967) — the first book-length treatment of splines — were employed, and at the David Taylor Model Basin by Feodor Theilheimer. The work at General Motors is detailed nicely in (Birkhoff, 1990) and (Young, 1997). Davis (1997) summarizes some of this material. We begin by limiting our discussion to polynomials in one variable. In this case, a spline is a piecewise polynomial function. This function, call it S, takes values from an interval [a, b] and maps them to {\displaystyle \mathbb {R} ,} the set of real numbers: {\displaystyle S:[a,b]\to \mathbb {R} .} We want S to be piecewise defined.
To accomplish this, let the interval[a,b]be covered bykordered,disjointsubintervals, [ti,ti+1],i=0,…,k−1[a,b]=[t0,t1)∪[t1,t2)∪⋯∪[tk−2,tk−1)∪[tk−1,tk)∪[tk]a=t0≤t1≤⋯≤tk−1≤tk=b{\displaystyle {\begin{aligned}&[t_{i},t_{i+1}],\quad i=0,\ldots ,k-1\\[4pt]&[a,b]=[t_{0},t_{1})\cup [t_{1},t_{2})\cup \cdots \cup [t_{k-2},t_{k-1})\cup [t_{k-1},t_{k})\cup [t_{k}]\\[4pt]&a=t_{0}\leq t_{1}\leq \cdots \leq t_{k-1}\leq t_{k}=b\end{aligned}}} On each of thesek"pieces" of[a,b], we want to define a polynomial, call itPi.Pi:[ti,ti+1]→R.{\displaystyle P_{i}:[t_{i},t_{i+1}]\to \mathbb {R} .}On theith subinterval of[a,b],Sis defined byPi, S(t)=P0(t),t0≤t<t1,S(t)=P1(t),t1≤t<t2,⋮S(t)=Pk−1(t),tk−1≤t≤tk.{\displaystyle {\begin{aligned}S(t)&=P_{0}(t),&&t_{0}\leq t<t_{1},\\[2pt]S(t)&=P_{1}(t),&&t_{1}\leq t<t_{2},\\&\vdots \\S(t)&=P_{k-1}(t),&&t_{k-1}\leq t\leq t_{k}.\end{aligned}}} The givenk+ 1pointstiare calledknots. The vectort= (t0, …,tk)is called aknot vectorfor the spline. If the knots are equidistantly distributed in the interval[a,b]we say the spline isuniform, otherwise we say it isnon-uniform. If the polynomial piecesPieach have degree at mostn, then the spline is said to be ofdegree≤n(or ofordern+ 1). IfS∈Cri{\displaystyle S\in C^{r_{i}}}in a neighborhood ofti, then the spline is said to be ofsmoothness(at least)Cri{\displaystyle C^{r_{i}}}atti. 
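The piecewise definition above translates directly into an evaluation routine: locate the subinterval [t_i, t_{i+1}) containing t, then evaluate the corresponding piece P_i. A minimal Python sketch (the coefficient-list representation and the function names are illustrative assumptions, not from the source):

```python
def eval_poly(coeffs, t):
    """Evaluate a polynomial given by coefficients [c0, c1, ...] in
    increasing degree, using Horner's scheme."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * t + c
    return result

def eval_spline(knots, pieces, t):
    """Evaluate a piecewise polynomial S at t.

    knots  : [t0, ..., tk], non-decreasing, covering [a, b]
    pieces : k coefficient lists; pieces[i] defines S on [t_i, t_{i+1})
             (the last piece is also used at t = tk).
    """
    if not knots[0] <= t <= knots[-1]:
        raise ValueError("t outside [a, b]")
    for i in range(len(pieces)):
        if t < knots[i + 1]:
            return eval_poly(pieces[i], t)
    return eval_poly(pieces[-1], t)   # t == tk falls into the last piece
```

For instance, the linear "tent" spline with knots (0, 1, 2) and pieces t and 2 − t evaluates to 0.5 at both t = 0.5 and t = 1.5.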
That is, attithe two polynomial piecesPi–1andPishare common derivative values from the derivative of order 0 (the function value) up through the derivative of orderri(in other words, the two adjacent polynomial pieces connect withloss of smoothnessof at mostn–ri) Pi−1(0)(ti)=Pi(0)(ti),Pi−1(1)(ti)=Pi(1)(ti),⋮Pi−1(ri)(ti)=Pi(ri)(ti).{\displaystyle {\begin{aligned}P_{i-1}^{(0)}(t_{i})&=P_{i}^{(0)}(t_{i}),\\[2pt]P_{i-1}^{(1)}(t_{i})&=P_{i}^{(1)}(t_{i}),\\\vdots &\\P_{i-1}^{(r_{i})}(t_{i})&=P_{i}^{(r_{i})}(t_{i}).\end{aligned}}} A vectorr= (r1, …,rk–1)such that the spline has smoothnessCri{\displaystyle C^{r_{i}}}attifori= 1, …,k– 1is called asmoothness vectorfor the spline. Given a knot vectort, a degreen, and a smoothness vectorrfort, one can consider the set of all splines of degree≤nhaving knot vectortand smoothness vectorr. Equipped with the operation of adding two functions (pointwise addition) and taking real multiples of functions, this set becomes a real vector space. Thisspline spaceis commonly denoted bySnr(t).{\displaystyle S_{n}^{\mathbf {r} }(\mathbf {t} ).} In the mathematical study of polynomial splines the question of what happens when two knots, saytiandti+1, are taken to approach one another and become coincident has an easy answer. The polynomial piecePi(t)disappears, and the piecesPi−1(t)andPi+1(t)join with the sum of the smoothness losses fortiandti+1. That is,S(t)∈Cn−ji−ji+1[ti=ti+1],{\displaystyle S(t)\in C^{n-j_{i}-j_{i+1}}[t_{i}=t_{i+1}],}whereji=n–ri. This leads to a more general understanding of a knot vector. The continuity loss at any point can be considered to be the result ofmultiple knotslocated at that point, and a spline type can be completely characterized by its degreenand itsextendedknot vector (t0,t1,⋯,t1,t2,⋯,t2,t3,⋯,tk−2,tk−1,⋯,tk−1,tk){\displaystyle (t_{0},t_{1},\cdots ,t_{1},t_{2},\cdots ,t_{2},t_{3},\cdots ,t_{k-2},t_{k-1},\cdots ,t_{k-1},t_{k})} wheretiis repeatedjitimes fori= 1, …,k– 1. 
Aparametric curveon the interval[a,b]G(t)=(X(t),Y(t)),t∈[a,b]{\displaystyle G(t)={\bigl (}X(t),Y(t){\bigr )},\quad t\in [a,b]}is aspline curveif bothXandYare spline functions of the same degree with the same extended knot vectors on that interval. Suppose the interval[a,b]is[0, 3]and the subintervals are[0, 1], [1, 2], [2, 3]. Suppose the polynomial pieces are to be of degree 2, and the pieces on[0, 1]and[1, 2]must join in value and first derivative (att= 1) while the pieces on[1, 2]and[2, 3]join simply in value (att= 2). This would define a type of splineS(t)for which S(t)=P0(t)=−1+4t−t2,0≤t<1S(t)=P1(t)=2t,1≤t<2S(t)=P2(t)=2−t+t2,2≤t≤3{\displaystyle {\begin{aligned}S(t)&=P_{0}(t)=-1+4t-t^{2},&&0\leq t<1\\[2pt]S(t)&=P_{1}(t)=2t,&&1\leq t<2\\[2pt]S(t)&=P_{2}(t)=2-t+t^{2},&&2\leq t\leq 3\end{aligned}}} would be a member of that type, and also S(t)=P0(t)=−2−2t2,0≤t<1S(t)=P1(t)=1−6t+t2,1≤t<2S(t)=P2(t)=−1+t−2t2,2≤t≤3{\displaystyle {\begin{aligned}S(t)&=P_{0}(t)=-2-2t^{2},&&0\leq t<1\\[2pt]S(t)&=P_{1}(t)=1-6t+t^{2},&&1\leq t<2\\[2pt]S(t)&=P_{2}(t)=-1+t-2t^{2},&&2\leq t\leq 3\end{aligned}}} would be a member of that type. (Note: while the polynomial piece2tis not quadratic, the result is still called a quadratic spline. This demonstrates that the degree of a spline is the maximum degree of its polynomial parts.) The extended knot vector for this type of spline would be(0, 1, 2, 2, 3). The simplest spline has degree 0. It is also called astep function. The next most simple spline has degree 1. It is also called alinear spline. A closed linear spline (i.e, the first knot and the last are the same) in the plane is just apolygon. A common spline is thenatural cubic spline. A cubic spline has degree 3 with continuityC2, i.e. the values and first and second derivatives are continuous. Natural means that the second derivatives of the spline polynomials are zero at the endpoints of the interval of interpolation. 
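The join conditions of the first worked example can be checked numerically. The following Python sketch confirms that the pieces meet with matching value and first derivative at t = 1 (a C1 join), but with matching value only at t = 2 (a C0 join):

```python
# The three quadratic pieces of the example spline and their derivatives.
def P0(t): return -1 + 4*t - t**2
def P1(t): return 2*t
def P2(t): return 2 - t + t**2

def dP0(t): return 4 - 2*t
def dP1(t): return 2
def dP2(t): return -1 + 2*t

# At t = 1: value and first derivative agree (C^1 join).
assert P0(1) == P1(1) == 2 and dP0(1) == dP1(1) == 2
# At t = 2: values agree but the first derivative jumps from 2 to 3 (C^0 join).
assert P1(2) == P2(2) == 4 and dP1(2) != dP2(2)
```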
{\displaystyle S''(a)=S''(b)=0.} Thus, the graph of the spline is a straight line outside of the interval, but still smooth. It might be asked what meaning more than n multiple knots in a knot vector have, since this would lead to continuities like {\displaystyle S(t)\in C^{-m},\quad m>0} at the location of this high multiplicity. By convention, any such situation indicates a simple discontinuity between the two adjacent polynomial pieces. This means that if a knot t_i appears more than n + 1 times in an extended knot vector, all instances of it in excess of the (n + 1)th can be removed without changing the character of the spline, since all multiplicities n + 1, n + 2, n + 3, etc. have the same meaning. It is commonly assumed that any knot vector defining any type of spline has been culled in this fashion. The classical spline type of degree n used in numerical analysis has continuity {\displaystyle S(t)\in \mathrm {C} ^{n-1}[a,b],} which means that every two adjacent polynomial pieces meet in their value and first n − 1 derivatives at each knot. The mathematical spline that most closely models the flat spline is a cubic (n = 3), twice continuously differentiable (C2), natural spline, which is a spline of this classical type with additional conditions imposed at the endpoints a and b. Another type of spline that is much used in graphics, for example in drawing programs such as Adobe Illustrator from Adobe Systems, has pieces that are cubic but has continuity only at most {\displaystyle S(t)\in C^{1}[a,b].} This spline type is also used in PostScript as well as in the definition of some computer typographic fonts. Many computer-aided design systems that are designed for high-end graphics and animation use extended knot vectors, for example Autodesk Maya. Computer-aided design systems often use an extended concept of a spline known as a nonuniform rational B-spline (NURBS).
If sampled data from a function or a physical object is available, spline interpolation is an approach to creating a spline that approximates that data. The general expression for the ith C2 interpolating cubic spline at a point x with the natural condition can be found using the formula {\displaystyle S_{i}(x)={\frac {z_{i}(x-t_{i-1})^{3}}{6h_{i}}}+{\frac {z_{i-1}(t_{i}-x)^{3}}{6h_{i}}}+\left[{\frac {f(t_{i})}{h_{i}}}-{\frac {z_{i}h_{i}}{6}}\right](x-t_{i-1})+\left[{\frac {f(t_{i-1})}{h_{i}}}-{\frac {z_{i-1}h_{i}}{6}}\right](t_{i}-x)} where h_i = t_i − t_{i−1} is the width of the ith knot interval and z_i = S″(t_i) is the second derivative of the spline at the ith knot (the natural condition sets z_0 = z_n = 0). For a given interval [a, b] and a given extended knot vector on that interval, the splines of degree n form a vector space. Briefly, this means that adding any two splines of a given type produces a spline of that given type, and multiplying a spline of a given type by any constant produces a spline of that given type. The dimension of the space containing all splines of a certain type can be counted from the extended knot vector: {\displaystyle {\begin{aligned}a&=t_{0}<\underbrace {t_{1}=\cdots =t_{1}} _{j_{1}}<\cdots <\underbrace {t_{k-2}=\cdots =t_{k-2}} _{j_{k-2}}<t_{k-1}=b\\[4pt]j_{i}&\leq n+1,\qquad i=1,\ldots ,k-2.\end{aligned}}} The dimension is the degree plus one, plus the sum of the multiplicities of the interior knots: n + 1 + (j_1 + ⋯ + j_{k−2}). If a type of spline has additional linear conditions imposed upon it, then the resulting spline will lie in a subspace. The space of all natural cubic splines, for instance, is a subspace of the space of all cubic C2 splines. The literature of splines is replete with names for special types of splines. These names have been associated with: Often a special name was chosen for a type of spline satisfying two or more of the main items above. For example, the Hermite spline is a spline that is expressed using Hermite polynomials to represent each of the individual polynomial pieces.
These are most often used with n = 3; that is, as cubic Hermite splines. In this degree they may additionally be chosen to be only tangent-continuous (C1), which implies that all interior knots are double. Several methods have been invented to fit such splines to given data points; that is, to make them into interpolating splines, and to do so by estimating plausible tangent values where each two polynomial pieces meet (giving us cardinal splines, Catmull–Rom splines, and Kochanek–Bartels splines, depending on the method used). For each of the representations, some means of evaluation must be found so that values of the spline can be produced on demand. For those representations that express each individual polynomial piece P_i(t) in terms of some basis for the degree-n polynomials, this is conceptually straightforward: {\displaystyle \sum _{j=0}^{k-2}c_{j}P_{j}(t).} However, the evaluation and summation steps are often combined in clever ways. For example, Bernstein polynomials are a basis for polynomials that can be evaluated in linear combinations efficiently using special recurrence relations. This is the essence of De Casteljau's algorithm, which features in Bézier curves and Bézier splines. For a representation that defines a spline as a linear combination of basis splines, however, something more sophisticated is needed. The de Boor algorithm is an efficient method for evaluating B-splines.
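The recurrence behind De Casteljau's algorithm is short enough to sketch: a Bézier curve is evaluated by repeatedly forming convex combinations of adjacent control points until one point remains. A minimal Python sketch with scalar control points (the name and signature are illustrative):

```python
def de_casteljau(control, t):
    """Evaluate a Bezier curve with the given (here scalar) control
    points at parameter t in [0, 1] by repeated linear interpolation."""
    points = list(control)
    while len(points) > 1:
        points = [(1 - t) * p + t * q for p, q in zip(points, points[1:])]
    return points[0]
```

For the quadratic Bézier with control values (0, 1, 0), this returns the Bernstein value 2t(1 − t) at each t; the same pairwise interpolation works unchanged on 2D control points.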
https://en.wikipedia.org/wiki/Spline_(mathematics)
In the mathematical field of numerical analysis, spline interpolation is a form of interpolation where the interpolant is a special type of piecewise polynomial called a spline. That is, instead of fitting a single, high-degree polynomial to all of the values at once, spline interpolation fits low-degree polynomials to small subsets of the values; for example, fitting nine cubic polynomials between each of the pairs of ten points, instead of fitting a single degree-nine polynomial to all of them. Spline interpolation is often preferred over polynomial interpolation because the interpolation error can be made small even when using low-degree polynomials for the spline.[1] Spline interpolation also avoids the problem of Runge's phenomenon, in which oscillation can occur between points when interpolating using high-degree polynomials. Originally, spline was a term for elastic rulers that were bent to pass through a number of predefined points, or knots. These were used to make technical drawings for shipbuilding and construction by hand, as illustrated in the figure. We wish to model similar kinds of curves using a set of mathematical equations. Assume we have a sequence of {\displaystyle n+1} knots, {\displaystyle (x_{0},y_{0})} through {\displaystyle (x_{n},y_{n})}. There will be a cubic polynomial {\displaystyle q_{i}(x)=y} between each successive pair of knots {\displaystyle (x_{i-1},y_{i-1})} and {\displaystyle (x_{i},y_{i})}, connecting to both of them, where {\displaystyle i=1,2,\dots ,n}. So there will be {\displaystyle n} polynomials, with the first polynomial starting at {\displaystyle (x_{0},y_{0})} and the last polynomial ending at {\displaystyle (x_{n},y_{n})}. The curvature of any curve {\displaystyle y=y(x)} is defined as {\displaystyle \kappa ={\frac {y''}{\left(1+(y')^{2}\right)^{3/2}}},} where {\displaystyle y'} and {\displaystyle y''} are the first and second derivatives of {\displaystyle y(x)} with respect to {\displaystyle x}.
To make the spline take a shape that minimizes the bending (under the constraint of passing through all knots), we will define bothy′{\displaystyle y'}andy″{\displaystyle y''}to be continuous everywhere, including at the knots. Each successive polynomial must have equal values (which are equal to the y-value of the corresponding datapoint), derivatives, and second derivatives at their joining knots, which is to say that This can only be achieved if polynomials of degree 3 (cubic polynomials) or higher are used. The classical approach is to use polynomials of exactly degree 3 —cubic splines. In addition to the three conditions above, anatural cubic splinehas the condition thatq1″(x0)=qn″(xn)=0{\displaystyle q''_{1}(x_{0})=q''_{n}(x_{n})=0}. In addition to the three main conditions above, aclamped cubic splinehas the conditions thatq1′(x0)=f′(x0){\displaystyle q'_{1}(x_{0})=f'(x_{0})}andqn′(xn)=f′(xn){\displaystyle q'_{n}(x_{n})=f'(x_{n})}wheref′(x){\displaystyle f'(x)}is the derivative of the interpolated function. In addition to the three main conditions above, anot-a-knot splinehas the conditions thatq1‴(x1)=q2‴(x1){\displaystyle q'''_{1}(x_{1})=q'''_{2}(x_{1})}andqn−1‴(xn−1)=qn‴(xn−1){\displaystyle q'''_{n-1}(x_{n-1})=q'''_{n}(x_{n-1})}.[2] We wish to find each polynomialqi(x){\displaystyle q_{i}(x)}given the points(x0,y0){\displaystyle (x_{0},y_{0})}through(xn,yn){\displaystyle (x_{n},y_{n})}. To do this, we will consider just a single piece of the curve,q(x){\displaystyle q(x)}, which will interpolate from(x1,y1){\displaystyle (x_{1},y_{1})}to(x2,y2){\displaystyle (x_{2},y_{2})}. This piece will have slopesk1{\displaystyle k_{1}}andk2{\displaystyle k_{2}}at its endpoints. Or, more precisely, The full equationq(x){\displaystyle q(x)}can be written in the symmetrical form where But what arek1{\displaystyle k_{1}}andk2{\displaystyle k_{2}}? 
To derive these critical values, we must consider that It then follows that Setting t = 0 and t = 1 respectively in equations (5) and (6), one gets from (2) that indeed the first derivatives are q′(x1) = k1 and q′(x2) = k2, and also the second derivatives If now (xi, yi), i = 0, 1, ..., n are n + 1 points, and where i = 1, 2, ..., n, and {\displaystyle t={\tfrac {x-x_{i-1}}{x_{i}-x_{i-1}}}} are n third-degree polynomials interpolating y in the interval xi−1 ≤ x ≤ xi for i = 1, ..., n such that q′i(xi) = q′i+1(xi) for i = 1, ..., n − 1, then the n polynomials together define a differentiable function in the interval x0 ≤ x ≤ xn, and for i = 1, ..., n, where If the sequence k0, k1, ..., kn is such that, in addition, q′′i(xi) = q′′i+1(xi) holds for i = 1, ..., n − 1, then the resulting function will even have a continuous second derivative. From (7), (8), (10) and (11) it follows that this is the case if and only if for i = 1, ..., n − 1. The relations (15) are n − 1 linear equations for the n + 1 values k0, k1, ..., kn. For the elastic rulers being the model for the spline interpolation, one has that to the left of the left-most "knot" and to the right of the right-most "knot" the ruler can move freely and will therefore take the form of a straight line with q′′ = 0. As q′′ should be a continuous function of x, "natural splines" in addition to the n − 1 linear equations (15) should have i.e. that Eventually, (15) together with (16) and (17) constitute n + 1 linear equations that uniquely define the n + 1 parameters k0, k1, ..., kn. There exist other end conditions: the "clamped spline", which specifies the slope at the ends of the spline, and the popular "not-a-knot spline", which requires that the third derivative is also continuous at the points x1 and xn−1. For the "not-a-knot" spline, the additional equations will read: where {\displaystyle \Delta x_{i}=x_{i}-x_{i-1},\ \Delta y_{i}=y_{i}-y_{i-1}}.
In the case of three points, the values for {\displaystyle k_{0},k_{1},k_{2}} are found by solving the tridiagonal linear equation system with For the three points one gets that and from (10) and (11) that In the figure, the spline function consisting of the two cubic polynomials {\displaystyle q_{1}(x)} and {\displaystyle q_{2}(x)} given by (9) is displayed.
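For any number of points, the natural-spline conditions give a tridiagonal system in the slopes k_0, …, k_n that can be solved in linear time. A Python sketch (names are illustrative; the rows follow the end conditions q″ = 0 at both ends and the interior continuity equations, solved with the standard Thomas algorithm):

```python
def natural_cubic_slopes(xs, ys):
    """Solve the tridiagonal system for the knot slopes k_0..k_n of the
    natural cubic interpolating spline through (xs[i], ys[i])."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]           # interval widths
    d = [(ys[i + 1] - ys[i]) / h[i] for i in range(n)]  # secant slopes
    # Sub-, main- and super-diagonals a, b, c and right-hand side r.
    a = [0.0] * (n + 1); b = [0.0] * (n + 1)
    c = [0.0] * (n + 1); r = [0.0] * (n + 1)
    b[0], c[0], r[0] = 2.0, 1.0, 3.0 * d[0]             # natural: q''(x0) = 0
    for i in range(1, n):
        a[i] = 1.0 / h[i - 1]
        b[i] = 2.0 / h[i - 1] + 2.0 / h[i]
        c[i] = 1.0 / h[i]
        r[i] = 3.0 * (d[i - 1] / h[i - 1] + d[i] / h[i])
    a[n], b[n], r[n] = 1.0, 2.0, 3.0 * d[n - 1]         # natural: q''(xn) = 0
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n + 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        r[i] -= m * r[i - 1]
    k = [0.0] * (n + 1)
    k[n] = r[n] / b[n]
    for i in range(n - 1, -1, -1):
        k[i] = (r[i] - c[i] * k[i + 1]) / b[i]
    return k
```

As a sanity check, collinear data give every slope equal to the common secant slope, and the symmetric data (0, 0), (1, 1), (2, 0) give slopes (1.5, 0, −1.5).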
https://en.wikipedia.org/wiki/Spline_interpolation
In applied statistics, total least squares is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generalization of Deming regression and also of orthogonal regression, and can be applied to both linear and non-linear models. The total least squares approximation of the data is generically equivalent to the best, in the Frobenius norm, low-rank approximation of the data matrix.[1] In the least squares method of data modeling, the objective function to be minimized, S, is a quadratic form: {\displaystyle S=\mathbf {r} ^{\mathsf {T}}\mathbf {W} \mathbf {r} ,} where r is the vector of residuals and W is a weighting matrix. In linear least squares the model contains equations which are linear in the parameters appearing in the parameter vector {\displaystyle {\boldsymbol {\beta }}}, so the residuals are given by {\displaystyle \mathbf {r} =\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}.} There are m observations in y and n parameters in β with m > n. X is an m×n matrix whose elements are either constants or functions of the independent variables, x. The weight matrix W is, ideally, the inverse of the variance-covariance matrix {\displaystyle \mathbf {M} _{y}} of the observations y. The independent variables are assumed to be error-free. The parameter estimates are found by setting the gradient equations to zero, which results in the normal equations[note 1] {\displaystyle \mathbf {X} ^{\mathsf {T}}\mathbf {W} \mathbf {X} {\boldsymbol {\beta }}=\mathbf {X} ^{\mathsf {T}}\mathbf {W} \mathbf {y} .} Now, suppose that both x and y are observed subject to error, with variance-covariance matrices {\displaystyle \mathbf {M} _{x}} and {\displaystyle \mathbf {M} _{y}} respectively. In this case the objective function can be written as {\displaystyle S=\mathbf {r} _{x}^{\mathsf {T}}\mathbf {M} _{x}^{-1}\mathbf {r} _{x}+\mathbf {r} _{y}^{\mathsf {T}}\mathbf {M} _{y}^{-1}\mathbf {r} _{y},} where {\displaystyle \mathbf {r} _{x}} and {\displaystyle \mathbf {r} _{y}} are the residuals in x and y respectively. Clearly[further explanation needed] these residuals cannot be independent of each other; they must be constrained by some kind of relationship.
Writing the model function as {\displaystyle \mathbf {f(r_{x},r_{y},{\boldsymbol {\beta }})} }, the constraints are expressed by m condition equations.[2] Thus, the problem is to minimize the objective function subject to the m constraints. It is solved by the use of Lagrange multipliers. After some algebraic manipulations,[3] the result is obtained, or alternatively {\displaystyle \mathbf {X^{T}M^{-1}X{\boldsymbol {\beta }}=X^{T}M^{-1}y} ,} where M is the variance-covariance matrix relative to both independent and dependent variables. When the data errors are uncorrelated, all matrices M and W are diagonal. Then, take the example of straight line fitting: in this case {\displaystyle M_{ii}=\sigma _{y,i}^{2}+\beta ^{2}\sigma _{x,i}^{2},} showing how the variance at the ith point is determined by the variances of both independent and dependent variables and by the model being used to fit the data. The expression may be generalized by noting that the parameter {\displaystyle \beta } is the slope of the line. An expression of this type is used in fitting pH titration data, where a small error on x translates to a large error on y when the slope is large. As was shown in 1980 by Golub and Van Loan, the TLS problem does not have a solution in general.[4] The following considers the simple case where a unique solution exists without making any particular assumptions. The computation of the TLS using singular value decomposition (SVD) is described in standard texts.[5] We can solve the equation {\displaystyle XB\approx Y} for B, where X is m-by-n and Y is m-by-k.[note 2] That is, we seek the B that minimizes error matrices E and F for X and Y respectively: {\displaystyle B} minimizing {\displaystyle \|[E\;F]\|_{F}} subject to {\displaystyle (X+E)B=Y+F}, where {\displaystyle [E\;F]} is the augmented matrix with E and F side by side and {\displaystyle \|\cdot \|_{F}} is the Frobenius norm, the square root of the sum of the squares of all entries in a matrix, and so equivalently the square root of the sum of squares of the lengths of the rows or columns of the matrix. This can be rewritten as {\displaystyle {\begin{bmatrix}X+E&Y+F\end{bmatrix}}{\begin{bmatrix}B\\-I_{k}\end{bmatrix}}=0,} where {\displaystyle I_{k}} is the {\displaystyle k\times k} identity matrix.
The goal is then to find[EF]{\displaystyle [E\;F]}that reduces the rank of[XY]{\displaystyle [X\;Y]}byk. Define[U][Σ][V]∗{\displaystyle [U][\Sigma ][V]^{*}}to be the singular value decomposition of the augmented matrix[XY]{\displaystyle [X\;Y]}. whereVis partitioned into blocks corresponding to the shape ofXandY. Using theEckart–Young theorem, the approximation minimising the norm of the error is such that matricesU{\displaystyle U}andV{\displaystyle V}are unchanged, while the smallestk{\displaystyle k}singular values are replaced with zeroes. That is, we want so by linearity, We can then remove blocks from theUand Σ matrices, simplifying to This providesEandFso that Now ifVYY{\displaystyle V_{YY}}is nonsingular, which is not always the case (note that the behavior of TLS whenVYY{\displaystyle V_{YY}}is singular is not well understood yet), we can then right multiply both sides by−VYY−1{\displaystyle -V_{YY}^{-1}}to bring the bottom block of the right matrix to the negative identity, giving[6] and so A naiveGNU Octaveimplementation of this is: The way described above of solving the problem, which requires that the matrixVYY{\displaystyle V_{YY}}is nonsingular, can be slightly extended by the so-calledclassical TLS algorithm.[7] The standard implementation of classical TLS algorithm is available throughNetlib, see also.[8][9]All modern implementations based, for example, on solving a sequence of ordinary least squares problems, approximate the matrixB{\displaystyle B}(denotedX{\displaystyle X}in the literature), as introduced byVan Huffeland Vandewalle. It is worth noting, that thisB{\displaystyle B}is, however,not the TLS solutionin many cases.[10][11] Fornon-linear systemssimilar reasoning shows that the normal equations for an iteration cycle can be written as whereJ{\displaystyle \mathbf {J} }is theJacobian matrix. 
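The GNU Octave listing referred to above is not reproduced here. As an illustration of the same SVD construction in its simplest setting — one regressor column, one response column, no intercept — the right singular vector of [X Y] belonging to the smallest singular value is the eigenvector of the 2×2 matrix [X Y]ᵀ[X Y] for its smallest eigenvalue, so B = −V_XY/V_YY has a closed form. A hedged pure-Python sketch (function name and conventions are assumptions):

```python
import math

def tls_slope(x, y):
    """Total-least-squares estimate of beta in y ~ x*beta (single
    regressor, no intercept); assumes sum(x_i*y_i) != 0.

    This is the 2x2 special case of B = -V_XY / V_YY: the right singular
    vector of [X Y] for the smallest singular value is the eigenvector of
    [[Sxx, Sxy], [Sxy, Syy]] = [X Y]^T [X Y] for its smallest eigenvalue.
    """
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    syy = sum(yi * yi for yi in y)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0   # smallest eigenvalue
    # The eigenvector (v_x, v_y) satisfies (sxx - lam)*v_x + sxy*v_y = 0,
    # so beta = -v_x / v_y = sxy / (sxx - lam).
    return sxy / (sxx - lam)
```

On exact data the smallest singular value is zero and the ordinary slope is recovered; on noisy data the estimate lies between the two ordinary-least-squares slopes obtained by regressing y on x and x on y.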
When the independent variable is error-free, a residual represents the "vertical" distance between the observed data point and the fitted curve (or surface). In total least squares a residual represents the distance between a data point and the fitted curve measured along some direction. In fact, if both variables are measured in the same units and the errors on both variables are the same, then the residual represents the shortest distance between the data point and the fitted curve; that is, the residual vector is perpendicular to the tangent of the curve. For this reason, this type of regression is sometimes called two-dimensional Euclidean regression (Stein, 1983)[12] or orthogonal regression. A serious difficulty arises if the variables are not measured in the same units. First consider measuring distance between a data point and the line: what are the measurement units for this distance? If we consider measuring distance based on Pythagoras' theorem, then it is clear that we shall be adding quantities measured in different units, which is meaningless. Secondly, if we rescale one of the variables, e.g., measure in grams rather than kilograms, then we shall end up with different results (a different line). To avoid these problems it is sometimes suggested that we convert to dimensionless variables — this may be called normalization or standardization. However, there are various ways of doing this, and these lead to fitted models which are not equivalent to each other. One approach is to normalize by known (or estimated) measurement precision, thereby minimizing the Mahalanobis distance from the points to the line, providing a maximum-likelihood solution;[citation needed] the unknown precisions could be found via analysis of variance. In short, total least squares does not have the property of units-invariance — i.e. it is not scale invariant. For a meaningful model we require this property to hold.
A way forward is to realise that residuals (distances) measured in different units can be combined if multiplication is used instead of addition. Consider fitting a line: for each data point the product of the vertical and horizontal residuals equals twice the area of the triangle formed by the residual lines and the fitted line. We choose the line which minimizes the sum of these areas. Nobel laureate Paul Samuelson proved in 1942 that, in two dimensions, it is the only line expressible solely in terms of the ratios of standard deviations and the correlation coefficient which (1) fits the correct equation when the observations fall on a straight line, (2) exhibits scale invariance, and (3) exhibits invariance under interchange of variables.[13] This solution has been rediscovered in different disciplines and is variously known as the standardised major axis (Ricker 1975, Warton et al., 2006),[14][15] the reduced major axis, the geometric mean functional relationship (Draper and Smith, 1998),[16] least products regression, diagonal regression, line of organic correlation, and the least areas line (Tofallis, 2002).[17] Tofallis (2015, 2023)[18][19] has extended this approach to deal with multiple variables. The calculations are simpler than for total least squares, as they only require knowledge of covariances and can be computed using standard spreadsheet functions.
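The standardised (reduced) major axis fit described above has a simple closed form in two dimensions: the slope magnitude is the ratio of the standard deviations, signed by the correlation, with the line passing through the centroid. A pure-Python sketch (names are illustrative):

```python
import math

def reduced_major_axis(x, y):
    """Standardised (reduced) major axis fit of y = a + b*x: the slope
    magnitude is std(y)/std(x), signed like the covariance, and the line
    passes through the centroid.  Assumes x and y are not constant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = math.copysign(math.sqrt(syy / sxx), sxy)
    return my - b * mx, b   # intercept a, slope b
```

Rescaling x by a constant c divides the slope by c and leaves the fitted line unchanged, which is exactly the scale-invariance property discussed above.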
https://en.wikipedia.org/wiki/Total_least_squares
In mathematics, an implicit curve is a plane curve defined by an implicit equation relating two coordinate variables, commonly x and y. For example, the unit circle is defined by the implicit equation {\displaystyle x^{2}+y^{2}=1}. In general, every implicit curve is defined by an equation of the form {\displaystyle F(x,y)=0} for some function F of two variables. Hence an implicit curve can be considered as the set of zeros of a function of two variables. Implicit means that the equation is not expressed as a solution for either x in terms of y or vice versa. If {\displaystyle F(x,y)} is a polynomial in two variables, the corresponding curve is called an algebraic curve, and specific methods are available for studying it. Plane curves can be represented in Cartesian coordinates (x, y coordinates) by any of three methods, one of which is the implicit equation given above. The graph of a function is usually described by an equation {\displaystyle y=f(x)} in which the functional form is explicitly stated; this is called an explicit representation. The third essential description of a curve is the parametric one, where the x- and y-coordinates of curve points are represented by two functions x(t), y(t), both of whose functional forms are explicitly stated and which are dependent on a common parameter {\displaystyle t.} Examples of implicit curves include: The first four examples are algebraic curves, but the last one is not algebraic. The first three examples possess simple parametric representations, which is not true for the fourth and fifth examples. The fifth example shows the possibly complicated geometric structure of an implicit curve. The implicit function theorem describes conditions under which an equation {\displaystyle F(x,y)=0} can be solved implicitly for x and/or y – that is, under which one can validly write {\displaystyle x=g(y)} or {\displaystyle y=f(x)}. This theorem is the key for the computation of essential geometric features of the curve: tangents, normals, and curvature.
In practice implicit curves have an essential drawback: their visualization is difficult. But there are computer programs enabling one to display an implicit curve. Special properties of implicit curves make them essential tools in geometry and computer graphics. An implicit curve with an equationF(x,y)=0{\displaystyle F(x,y)=0}can be considered as thelevel curveof level 0 of the surfacez=F(x,y){\displaystyle z=F(x,y)}(see third diagram). In general, implicit curves fail thevertical line test(meaning that some values ofxare associated with more than one value ofy) and so are not necessarily graphs of functions. However, theimplicit function theoremgives conditions under which an implicit curvelocallyis given by the graph of a function (so in particular it has no self-intersections). If the defining relations are sufficiently smooth then, in such regions, implicit curves have well defined slopes, tangent lines, normal vectors, and curvature. There are several possible ways to compute these quantities for a given implicit curve. One method is to useimplicit differentiationto compute the derivatives ofywith respect tox. Alternatively, for a curve defined by the implicit equationF(x,y)=0{\displaystyle F(x,y)=0}, one can express these formulas directly in terms of thepartial derivativesofF{\displaystyle F}. In what follows, the partial derivatives are denotedFx{\displaystyle F_{x}}(for the derivative with respect tox),Fy{\displaystyle F_{y}},Fxx{\displaystyle F_{xx}}(for the second partial with respect tox),Fxy{\displaystyle F_{xy}}(for the mixed second partial),Fyy.{\displaystyle F_{yy}.} A curve point(x0,y0){\displaystyle (x_{0},y_{0})}isregularif the first partial derivativesFx(x0,y0){\displaystyle F_{x}(x_{0},y_{0})}andFy(x0,y0){\displaystyle F_{y}(x_{0},y_{0})}are not both equal to 0. 
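The slope obtained by implicit differentiation,dy/dx=−Fx/Fy, can be checked numerically. The sketch below (the helper `partials` is a hypothetical name, approximating the partial derivatives by central finite differences) evaluates the slope of the unit circle at a regular point:

```python
def F(x, y):
    return x**2 + y**2 - 1.0

def partials(F, x, y, h=1e-6):
    """Central finite-difference approximations of F_x and F_y."""
    Fx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    Fy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return Fx, Fy

x0, y0 = 0.6, 0.8               # a regular point on the unit circle
Fx, Fy = partials(F, x0, y0)
slope = -Fx / Fy                 # dy/dx from implicit differentiation
print(round(slope, 6))           # -0.75, i.e. -x0/y0
```

For the circle the result agrees with the explicit computation −x0/y0 at that point.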
The equation of thetangentline at a regular point(x0,y0){\displaystyle (x_{0},y_{0})}isFx(x0,y0)(x−x0)+Fy(x0,y0)(y−y0)=0,{\displaystyle F_{x}(x_{0},y_{0})(x-x_{0})+F_{y}(x_{0},y_{0})(y-y_{0})=0,}so the slope of the tangent line, and hence the slope of the curve at that point, isdydx=−Fx(x0,y0)Fy(x0,y0).{\displaystyle {\frac {dy}{dx}}=-{\frac {F_{x}(x_{0},y_{0})}{F_{y}(x_{0},y_{0})}}.}IfFy(x,y)=0≠Fx(x,y){\displaystyle F_{y}(x,y)=0\neq F_{x}(x,y)}at(x0,y0),{\displaystyle (x_{0},y_{0}),}the curve is vertical at that point, while if bothFy(x,y)=0{\displaystyle F_{y}(x,y)=0}andFx(x,y)=0{\displaystyle F_{x}(x,y)=0}at that point then the curve is not differentiable there, but instead is asingular point– either acuspor a point where the curve intersects itself. A normal vector to the curve at the point is given byn(x0,y0)=(Fx(x0,y0),Fy(x0,y0)){\displaystyle \mathbf {n} (x_{0},y_{0})=(F_{x}(x_{0},y_{0}),F_{y}(x_{0},y_{0}))}(here written as a row vector). For readability of the formulas, the arguments(x0,y0){\displaystyle (x_{0},y_{0})}are omitted. Thecurvatureκ{\displaystyle \kappa }at a regular point is given by the formulaκ=−Fy2Fxx+2FxFyFxy−Fx2Fyy(Fx2+Fy2)3/2.{\displaystyle \kappa ={\frac {-F_{y}^{2}F_{xx}+2F_{x}F_{y}F_{xy}-F_{x}^{2}F_{yy}}{(F_{x}^{2}+F_{y}^{2})^{3/2}}}.}The implicit function theorem guarantees within a neighborhood of a point(x0,y0){\displaystyle (x_{0},y_{0})}the existence of a functionf{\displaystyle f}such thatF(x,f(x))=0{\displaystyle F(x,f(x))=0}. By thechain rule, the derivatives of functionf{\displaystyle f}aref′(x)=−Fx(x,f(x))Fy(x,f(x)){\displaystyle f'(x)=-{\frac {F_{x}(x,f(x))}{F_{y}(x,f(x))}}}andf″(x)=−Fy2Fxx+2FxFyFxy−Fx2FyyFy3{\displaystyle f''(x)={\frac {-F_{y}^{2}F_{xx}+2F_{x}F_{y}F_{xy}-F_{x}^{2}F_{yy}}{F_{y}^{3}}}}(where the arguments(x,f(x)){\displaystyle (x,f(x))}on the right side of the second formula are omitted for ease of reading). Inserting the derivatives of functionf{\displaystyle f}into the formulas for a tangent and curvature of the graph of the explicit equationy=f(x){\displaystyle y=f(x)}yields the tangent and curvature formulas stated above. The essential disadvantage of an implicit curve is the difficulty of computing individual curve points, which is necessary for the visualization of an implicit curve (see next section). Within mathematics implicit curves play a prominent role asalgebraic curves. In addition, implicit curves are used for designing curves of desired geometrical shapes. Here are two examples. 
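The standard curvature formula for implicit curves,κ=(−Fy²Fxx+2FxFyFxy−Fx²Fyy)/(Fx²+Fy²)^{3/2}, can be checked on the unit circle, whose curvature has magnitude 1 everywhere. A Python sketch (the sign of the result depends on the orientation convention):

```python
def curvature(Fx, Fy, Fxx, Fxy, Fyy):
    """Curvature of an implicit curve F(x, y) = 0 at a regular point."""
    num = -Fy**2 * Fxx + 2 * Fx * Fy * Fxy - Fx**2 * Fyy
    return num / (Fx**2 + Fy**2) ** 1.5

# Unit circle F = x^2 + y^2 - 1: Fx = 2x, Fy = 2y, Fxx = Fyy = 2, Fxy = 0.
x0, y0 = 0.6, 0.8
kappa = curvature(2 * x0, 2 * y0, 2.0, 0.0, 2.0)
print(round(kappa, 9))  # -1.0 (magnitude 1, as expected for the unit circle)
```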
A smooth approximation of aconvex polygoncan be achieved in the following way: Letgi(x,y)=aix+biy+ci=0,i=1,…,n{\displaystyle g_{i}(x,y)=a_{i}x+b_{i}y+c_{i}=0,\ i=1,\dotsc ,n}be the equations of the lines containing the edges of the polygon such that for an inner point of the polygongi{\displaystyle g_{i}}is positive. Then a subset of the implicit curveg1(x,y)g2(x,y)⋯gn(x,y)−c=0{\displaystyle g_{1}(x,y)g_{2}(x,y)\dotsm g_{n}(x,y)-c=0}with suitable small parameterc{\displaystyle c}is a smooth (differentiable) approximation of the polygon. For example, the curves contain smooth approximations of a polygon with 5 edges (see diagram). In case of two lines one gets the curveg1(x,y)g2(x,y)−c=0{\displaystyle g_{1}(x,y)g_{2}(x,y)-c=0}. For example, the product of the coordinate axes variables yields the pencil of hyperbolasxy−c=0,c≠0{\displaystyle xy-c=0,\ c\neq 0}, which have the coordinate axes as asymptotes. If one starts with simple implicit curves other than lines (circles, parabolas,...) one gets a wide range of interesting new curves. For example, (product of a circle and the x-axis) yields smooth approximations of one half of a circle (see picture), and (product of two circles) yields smooth approximations of the intersection of two circles (see diagram). InCADone uses implicit curves for the generation ofblending curves,[2][3]which are special curves establishing a smooth transition between two given curves. For example, generates blending curves between the two circles The method guarantees the continuity of the tangents and curvatures at the points of contact (see diagram). The two lines determine the points of contact at the circles. Parameterμ{\displaystyle \mu }is a design parameter. In the diagram,μ=0.05,…,0.2{\displaystyle \mu =0.05,\dotsc ,0.2}. Equipotential curvesof two equalpoint chargesat the pointsP1=(1,0),P2=(−1,0){\displaystyle P_{1}=(1,0),\;P_{2}=(-1,0)}can be represented by the equation The curves are similar toCassini ovals, but they are not such curves. To visualize an implicit curve one usually determines a polygon on the curve and displays the polygon. 
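The product-of-lines construction can be sketched for a concrete polygon. Below (a Python illustration with assumed edge lines, not from the original text) the triangle with vertices (0,0), (1,0), (0,1) is described by the three edge lines g1 = x, g2 = y, g3 = 1 − x − y, each positive inside; the zero set of their product minus a small c is a smoothed triangle:

```python
def F(x, y, c=0.001):
    """Smoothed triangle (0,0), (1,0), (0,1): product of the three edge
    lines minus a small parameter c. Positive inside, negative outside."""
    g1, g2, g3 = x, y, 1.0 - x - y   # each positive for interior points
    return g1 * g2 * g3 - c

inside = F(1 / 3, 1 / 3) > 0    # the centroid lies inside the smoothed triangle
outside = F(2.0, 2.0) < 0       # a far-away point lies outside
print(inside, outside)          # True True
```

Shrinking c tightens the smooth curve toward the polygon's edges.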
For a parametric curve this is an easy task: One just computes the points of a sequence of parametric values. For an implicit curve one has to solve two subproblems: In both cases it is reasonable to assumegrad⁡F≠(0,0){\displaystyle \operatorname {grad} F\neq (0,0)}. In practice this assumption is violated only at isolated points. For the solution of both tasks mentioned above it is essential to have a computer program (which we will callCPoint{\displaystyle {\mathsf {CPoint}}}), which, when given a pointQ0=(x0,y0){\displaystyle Q_{0}=(x_{0},y_{0})}near an implicit curve, finds a pointP{\displaystyle P}that is exactly on the curve,up tothe accuracy of computation. In order to generate a nearly equally spaced polygon on the implicit curve one chooses a step lengths{\displaystyle s}and repeatedly advances by approximatelys{\displaystyle s}along the current tangent direction, applyingCPoint{\displaystyle {\mathsf {CPoint}}}after each step to return to the curve. Because the algorithm traces the implicit curve it is called atracing algorithm. The algorithm traces only connected parts of the curve. If the implicit curve consists of several parts it has to be started several times with suitable starting points. If the implicit curve consists of several or even unknown parts, it may be better to use arasterisationalgorithm. Instead of exactly following the curve, a raster algorithm covers the entire curve in so many points that they blend together and look like the curve. If the net is dense enough, the result approximates the connected parts of the implicit curve. If for further applications polygons on the curves are needed one can trace parts of interest by the tracing algorithm. Anyspace curvewhich is defined by two equationsF(x,y,z)=0,G(x,y,z)=0{\displaystyle F(x,y,z)=0,\ G(x,y,z)=0}is called animplicit space curve. A curve point(x0,y0,z0){\displaystyle (x_{0},y_{0},z_{0})}is calledregularif thecross productof the gradients ofF{\displaystyle F}andG{\displaystyle G}is not(0,0,0){\displaystyle (0,0,0)}at this point: otherwise it is calledsingular. 
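One possible realization of the program the text callsCPointis a Newton-style correction along the gradient: from a nearby point, step by −F·∇F/|∇F|² until F vanishes to working precision. The sketch below (an assumption about the implementation, not taken from the source) projects a point onto the unit circle:

```python
def project_to_curve(F, gradF, q, tol=1e-12, max_iter=50):
    """CPoint-style sketch: pull a point q near the curve F = 0 onto the
    curve by repeated Newton steps along the gradient direction."""
    x, y = q
    for _ in range(max_iter):
        f = F(x, y)
        if abs(f) < tol:
            break
        gx, gy = gradF(x, y)
        n2 = gx * gx + gy * gy   # assumed nonzero (regular point)
        x -= f * gx / n2
        y -= f * gy / n2
    return x, y

F = lambda x, y: x**2 + y**2 - 1.0
gradF = lambda x, y: (2 * x, 2 * y)
px, py = project_to_curve(F, gradF, (1.3, 0.4))
print(abs(F(px, py)) < 1e-10)   # True: the point now lies on the circle
```

A tracing algorithm would alternate such corrections with tangent steps of length s.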
Vectort(x0,y0,z0)=grad⁡F(x0,y0,z0)×grad⁡G(x0,y0,z0){\displaystyle \mathbf {t} (x_{0},y_{0},z_{0})=\operatorname {grad} F(x_{0},y_{0},z_{0})\times \operatorname {grad} G(x_{0},y_{0},z_{0})}is atangent vectorof the curve at point(x0,y0,z0).{\displaystyle (x_{0},y_{0},z_{0}).} Examples: (1)x+y+z−1=0,x−y+z−2=0{\displaystyle (1)\quad x+y+z-1=0\ ,\ x-y+z-2=0} (2)x2+y2+z2−4=0,x+y+z−1=0{\displaystyle (2)\quad x^{2}+y^{2}+z^{2}-4=0\ ,\ x+y+z-1=0} (3)x2+y2−1=0,x+y+z−1=0{\displaystyle (3)\quad x^{2}+y^{2}-1=0\ ,\ x+y+z-1=0} (4)x2+y2+z2−16=0,(y−y0)2+z2−9=0{\displaystyle (4)\quad x^{2}+y^{2}+z^{2}-16=0\ ,\ (y-y_{0})^{2}+z^{2}-9=0} For the computation of curve points and the visualization of an implicit space curve seeIntersection.
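The cross product of the two gradients gives the tangent direction directly. For example (3) above (cylinder x²+y²−1=0 and plane x+y+z−1=0), the point (1, 0, 0) lies on both surfaces, and a short Python check shows it is regular:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Example (3): F = x^2 + y^2 - 1 (cylinder), G = x + y + z - 1 (plane).
# At the curve point (1, 0, 0): grad F = (2x, 2y, 0), grad G = (1, 1, 1).
gradF = (2.0, 0.0, 0.0)
gradG = (1.0, 1.0, 1.0)
t = cross(gradF, gradG)
print(t)  # (0.0, -2.0, 2.0) -> nonzero, so the point is regular
```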
https://en.wikipedia.org/wiki/Implicit_curve
Inmathematics, alevel setof areal-valued functionfofnreal variablesis asetwhere the function takes on a givenconstantvaluec, that is:Lc(f)={(x1,…,xn):f(x1,…,xn)=c}.{\displaystyle L_{c}(f)=\{(x_{1},\dotsc ,x_{n}):f(x_{1},\dotsc ,x_{n})=c\}.}When the number of independent variables is two, a level set is called alevel curve, also known ascontour lineorisoline; so a levelcurveis the set of all real-valued solutions of an equation in two variablesx1andx2. Whenn= 3, a level set is called alevel surface(orisosurface); so a levelsurfaceis the set of all real-valued roots of an equation in three variablesx1,x2andx3. For higher values ofn, the level set is alevel hypersurface, the set of all real-valued roots of an equation inn> 3variables (ahigher-dimensionalhypersurface). A level set is a special case of afiber. Level sets show up in many applications, often under different names. For example, animplicit curveis a level curve, which is considered independently of its neighbor curves, emphasizing that such a curve is defined by animplicit equation. Analogously, a level surface is sometimes called an implicit surface or anisosurface. The name isocontour is also used, which means a contour of equal height. In various application areas, isocontours have received specific names, which often indicate the nature of the values of the considered function, such asisobar,isotherm,isogon,isochrone,isoquantandindifference curve. Consider the 2-dimensional Euclidean distance:d(x,y)=x2+y2{\displaystyle d(x,y)={\sqrt {x^{2}+y^{2}}}}A level setLr(d){\displaystyle L_{r}(d)}of this function consists of those points that lie at a distance ofr{\displaystyle r}from the origin, which form acircle. For example,(3,4)∈L5(d){\displaystyle (3,4)\in L_{5}(d)}, becaused(3,4)=5{\displaystyle d(3,4)=5}. Geometrically, this means that the point(3,4){\displaystyle (3,4)}lies on the circle of radius 5 centered at the origin. 
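The distance example translates directly into code. The sketch below (a Python illustration with a hypothetical helper `in_level_set`) tests membership of a point in the level set L5(d):

```python
import math

def d(x, y):
    """Euclidean distance from the origin."""
    return math.hypot(x, y)

def in_level_set(f, c, p, tol=1e-12):
    """Membership test for the level set L_c(f) = { p : f(p) = c }."""
    return abs(f(*p) - c) < tol

print(in_level_set(d, 5, (3, 4)))   # True: (3, 4) lies on the circle r = 5
print(in_level_set(d, 5, (3, 3)))   # False: d(3, 3) is not 5
```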
More generally, aspherein ametric space(M,m){\displaystyle (M,m)}with radiusr{\displaystyle r}centered atx∈M{\displaystyle x\in M}can be defined as the level setLr(y↦m(x,y)){\displaystyle L_{r}(y\mapsto m(x,y))}. A second example is the plot ofHimmelblau's functionshown in the figure to the right. Each curve shown is a level curve of the function, and they are spaced logarithmically: if a curve representsLx{\displaystyle L_{x}}, the curve directly "within" representsLx/10{\displaystyle L_{x/10}}, and the curve directly "outside" representsL10x{\displaystyle L_{10x}}. Iffis differentiable, thegradientoffat a point is either zero or perpendicular to the level set offat that point. To understand what this means, imagine that two hikers are at the same location on a mountain. One of them is bold, and decides to go in the direction where the slope is steepest. The other one is more cautious and does not want to either climb or descend, choosing a path which stays at the same height. In our analogy, the above theorem says that the two hikers will depart in directions perpendicular to each other. A consequence of this theorem (and its proof) is that iffis differentiable, a level set is ahypersurfaceand amanifoldoutside thecritical pointsoff. At a critical point, a level set may be reduced to a point (for example at alocal extremumoff) or may have asingularitysuch as aself-intersection pointor acusp. A set of the formLc−(f)={(x1,…,xn):f(x1,…,xn)≤c}{\displaystyle L_{c}^{-}(f)=\{(x_{1},\dotsc ,x_{n}):f(x_{1},\dotsc ,x_{n})\leq c\}}is called asublevel setoff(or, alternatively, alower level setortrenchoff). Astrict sublevelset offis{(x1,…,xn):f(x1,…,xn)<c}.{\displaystyle \{(x_{1},\dotsc ,x_{n}):f(x_{1},\dotsc ,x_{n})<c\}.}Similarly{(x1,…,xn):f(x1,…,xn)≥c}{\displaystyle \{(x_{1},\dotsc ,x_{n}):f(x_{1},\dotsc ,x_{n})\geq c\}}is called asuperlevel setoff(or, alternatively, anupper level setoff). And astrict superlevel setoffis{(x1,…,xn):f(x1,…,xn)>c}.{\displaystyle \{(x_{1},\dotsc ,x_{n}):f(x_{1},\dotsc ,x_{n})>c\}.}Sublevel sets are important inminimization theory. ByWeierstrass's theorem, theboundednessof somenon-emptysublevel set and the lower-semicontinuity of the function imply that the function attains its minimum. Theconvexityof all the sublevel sets characterizesquasiconvex functions.[2]
https://en.wikipedia.org/wiki/Level_set
Acontour line(alsoisoline,isopleth,isoquantorisarithm) of afunction of two variablesis acurvealong which the function has a constant value, so that the curve joins points of equal value.[1][2]It is aplane sectionof thethree-dimensional graphof the functionf(x,y){\displaystyle f(x,y)}parallel to the(x,y){\displaystyle (x,y)}-plane. More generally, a contour line for a function of two variables is a curve connecting points where the function has the same particular value.[2] Incartography, a contour line (often just called a "contour") joins points of equalelevation(height) above a given level, such asmean sea level.[3]Acontour mapis amapillustrated with contour lines, for example atopographic map, which thus shows valleys and hills, and the steepness or gentleness of slopes.[4]Thecontour intervalof a contour map is the difference in elevation between successive contour lines.[5] Thegradientof the function is always perpendicular to the contour lines. When the lines are close together the magnitude of the gradient is large: the variation is steep. Alevel setis a generalization of a contour line for functions of any number of variables. Contour lines are curved, straight, or a mixture of both; they are drawn on amapto describe the intersection of a real or hypothetical surface with one or more horizontal planes. The configuration of these contours allows map readers to infer the relative gradient of a parameter and estimate that parameter at specific places. Contour lines may be either traced on a visible three-dimensional model of thesurface, as when a photogrammetrist viewing a stereo-model plots elevation contours, or interpolated from the estimated surfaceelevations, as when a computer program threads contours through a network of observation points of area centroids. 
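The perpendicularity of the gradient to the contour lines can be verified numerically. A small Python sketch (an illustration, not from the original text) using f(x, y) = x² + y², whose contours are circles: at any point, the tangent direction (−y, x) of the circle is orthogonal to the gradient (2x, 2y):

```python
# The gradient is perpendicular to the contour line through a point.
# Sketch with f(x, y) = x^2 + y^2, whose contour lines are circles.
x, y = 0.6, 0.8
grad = (2 * x, 2 * y)       # gradient of f at (x, y)
tangent = (-y, x)           # tangent direction of the circular contour
dot = grad[0] * tangent[0] + grad[1] * tangent[1]
print(abs(dot) < 1e-9)      # True: the two directions are orthogonal
```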
In the latter case, the method ofinterpolationaffects the reliability of individual isolines and their portrayal ofslope, pits and peaks.[6] The idea of lines that join points of equal value was rediscovered several times. The oldest knownisobath(contour line of constant depth) is found on a map dated 1584 of the riverSpaarne, nearHaarlem, byDutchmanPieter Bruinsz.[7]In 1701,Edmond Halleyused such lines (isogons) on a chart of magnetic variation.[8]The Dutch engineerNicholas Cruquiusdrew the bed of the riverMerwedewith lines of equal depth (isobaths) at intervals of 1fathomin 1727, andPhilippe Buacheused them at 10-fathom intervals on a chart of theEnglish Channelthat was prepared in 1737 and published in 1752. Such lines were used to describe a land surface (contour lines) in a map of theDuchy of Modena and Reggioby Domenico Vandelli in 1746, and they were studied theoretically by Ducarla in 1771, andCharles Huttonused them in theSchiehallion experiment. In 1791, a map of France by J. L. Dupain-Triel used contour lines at 20-metre intervals, hachures, spot-heights and a vertical section. In 1801, the chief of the French Corps of Engineers,Haxo, used contour lines at the larger scale of 1:500 on a plan of his projects forRocca d'Anfo, now in northern Italy, underNapoleon.[9][10][11] By around 1843, when theOrdnance Surveystarted to regularly record contour lines inGreat BritainandIreland, they were already in general use in European countries. Isobaths were not routinely used onnautical chartsuntil those ofRussiafrom 1834, and those of Britain from 1838.[9][12][13] As different uses of the technique were invented independently, cartographers began to recognize a common theme, and debated what to call these "lines of equal value" generally. 
The wordisogram(fromAncient Greekἴσος(isos)'equal'andγράμμα(gramma)'writing, drawing') was proposed byFrancis Galtonin 1889 for lines indicating equality of some physical condition or quantity,[14]thoughisogramcan also refer to aword without a repeated letter. As late as 1944,John K. Wrightstill preferredisogram, but it never attained wide usage. During the early 20th century,isopleth(πλῆθος,plethos, 'amount') was being used by 1911 in the United States, whileisarithm(ἀριθμός,arithmos, 'number') had become common in Europe. Additional alternatives, including the Greek-English hybridisolineandisometric line(μέτρον,metron, 'measure'), also emerged. Despite attempts to select a single standard, all of these alternatives have survived to the present.[15][16] When maps with contour lines became common, the idea spread to other applications. Perhaps the latest to develop areair qualityandnoise pollutioncontour maps, which first appeared in the United States in approximately 1970, largely as a result of national legislation requiring spatial delineation of these parameters. Contour lines are often given specific names beginning with "iso-" according to the nature of the variable being mapped, although in many usages the phrase "contour line" is most commonly used. Specific names are most common in meteorology, where multiple maps with different variables may be viewed simultaneously. The prefix "'iso-" can be replaced with "isallo-" to specify a contour line connecting points where a variable changes at the samerateduring a given time period. Anisogon(fromAncient Greekγωνία(gonia)'angle') is a contour line for a variable which measures direction. In meteorology and in geomagnetics, the termisogonhas specific meanings which are described below. Anisocline(κλίνειν,klinein, 'to lean or slope') is a line joining points with equal slope. In population dynamics and in geomagnetics, the termsisoclineandisoclinic linehave specific meanings which are described below. 
A curve of equidistant points is a set of points all at the same distance from a givenpoint,line, orpolyline. In this case the function whose value is being held constant along a contour line is adistance function. In 1944, John K. Wright proposed that the termisoplethbe used for contour lines that depict a variable which cannot be measured at a point, but which instead must be calculated from data collected over an area, as opposed toisometric linesfor variables that could be measured at a point; this distinction has since been followed generally.[16][17]An example of an isopleth ispopulation density, which can be calculated by dividing the population of acensus districtby the surface area of that district. Each calculated value is presumed to be the value of the variable at the centre of the area, and isopleths can then be drawn by a process ofinterpolation. The idea of an isopleth map can be compared with that of achoropleth map.[18][19] In meteorology, the wordisoplethis used for any type of contour line.[20] Meteorological contour lines are based oninterpolationof the point data received fromweather stationsandweather satellites. Weather stations are seldom exactly positioned at a contour line (when they are, this indicates a measurement precisely equal to the value of the contour). Instead, lines are drawn to best approximate the locations of exact values, based on the scattered information points available. Meteorological contour mapsmay present collected data such as actual air pressure at a given time, or generalized data such as average pressure over a period of time, or forecast data such as predicted air pressure at some point in the future. Thermodynamic diagramsuse multiple overlapping contour sets (including isobars and isotherms) to present a picture of the major thermodynamic factors in a weather system. 
Anisobar(fromAncient Greekβάρος(baros)'weight') is a line of equal or constantpressureon a graph, plot, or map; an isopleth or contour line of pressure. More accurately, isobars are lines drawn on a map joining places of equal average atmospheric pressure reduced to sea level for a specified period of time. Inmeteorology, thebarometric pressuresshown are reduced tosea level, not the surface pressures at the map locations.[21]The distribution of isobars is closely related to the magnitude and direction of thewindfield, and can be used to predict future weather patterns. Isobars are commonly used in television weather reporting. Isallobarsare lines joining points of equal pressure change during a specific time interval.[22]These can be divided intoanallobars, lines joining points of equal pressure increase during a specific time interval,[23]andkatallobars, lines joining points of equal pressure decrease.[24]In general, weather systems move along an axis joining high and low isallobaric centers.[25]Isallobaric gradients are important components of the wind as they increase or decrease thegeostrophic wind. Anisopycnalis a line of constant density. Anisoheightorisohypseis a line of constantgeopotentialheight on a constant pressure surface chart. Isohypse and isoheight are simply known as lines showing equal pressure on a map. Anisotherm(fromAncient Greekθέρμη(thermē)'heat') is a line that connects points on a map that have the sametemperature. Therefore, all points through which an isotherm passes have the same or equal temperatures at the time indicated.[26][2]An isotherm at 0 °C is called thefreezing level. 
The termlignes isothermes(orlignes d'égale chaleur, 'lines of equal heat')was coined by thePrussiangeographer and naturalistAlexander von Humboldt, who as part of his research into the geographical distribution of plants published the first map of isotherms in Paris, in 1817.[27][28]According to Thomas Hankins, the Scottish engineerWilliam Playfair's graphical developments greatly influenced Alexander von Humboldt's invention of the isotherm.[29]Humboldt later used his visualizations and analyses to contradict theories by Kant and other Enlightenment thinkers that non-Europeans were inferior due to their climate.[30] Anisocheimis a line of equal mean winter temperature, and anisothereis a line of equal mean summer temperature. Anisohel(ἥλιος,helios, 'Sun') is a line of equal or constantsolar radiation. Anisogeothermis a line of equal temperature beneath the Earth's surface. Anisohyetorisohyetal line(fromAncient Greekὑετός(huetos)'rain') is a line on amapjoining points of equal rainfall in a given period. A map with isohyets is called anisohyetal map. Anisohumeis a line of constant relativehumidity, while anisodrosotherm(fromAncient Greekδρόσος(drosos)'dew'andθέρμη(therme)'heat') is a line of equal or constantdew point. Anisonephis a line indicating equalcloudcover. Anisochalazis a line of constant frequency ofhailstorms, and anisobrontis a line drawn through geographical points at which a given phase of thunderstorm activity occurred simultaneously. Snowcover is frequently shown as a contour-line map. Anisotach(fromAncient Greekταχύς(tachus)'fast') is a line joining points with constantwindspeed. In meteorology, the termisogonrefers to a line of constant wind direction. Anisopecticline denotes equal dates oficeformation each winter, and anisotacdenotes equal dates of thawing. Contours are one of severalcommon methodsused to denoteelevationoraltitudeand depth onmaps. From these contours, a sense of the generalterraincan be determined. 
They are used at a variety of scales, from large-scale engineering drawings and architectural plans, throughtopographic mapsandbathymetric charts, up to continental-scale maps. "Contour line" is the most common usage incartography, butisobathfor underwater depths onbathymetricmaps andisohypsefor elevations are also used. In cartography, thecontour intervalis the elevation difference between adjacent contour lines. The contour interval should be the same over a single map. When calculated as a ratio against the map scale, a sense of the hilliness of the terrain can be derived. There are several rules to note when interpreting terrain contour lines: Of course, to determine differences in elevation between two points, the contour interval, or distance in altitude between two adjacent contour lines, must be known, and this is normally stated in the map key. Usually contour intervals are consistent throughout a map, but there are exceptions. Sometimes intermediate contours are present in flatter areas; these can be dashed or dotted lines at half the noted contour interval. When contours are used withhypsometric tintson a small-scale map that includes mountains and flatter low-lying areas, it is common to have smaller intervals at lower elevations so that detail is shown in all areas. Conversely, for an island which consists of a plateau surrounded by steep cliffs, it is possible to use smaller intervals as the height increases.[32] Anisopotential mapis a measure of electrostatic potential in space, often depicted in two dimensions with the electrostatic charges inducing thatelectric potential. The termequipotentiallineorisopotential linerefers to a curve of constantelectric potential. Whether crossing an equipotential line represents ascending or descending the potential is inferred from the labels on the charges. 
In three dimensions,equipotentialsurfacesmay be depicted with a two dimensional cross-section, showingequipotentiallines at the intersection of the surfaces and the cross-section. The general mathematical termlevel setis often used to describe the full collection of points having a particular potential, especially in higher dimensional space. In the study of theEarth's magnetic field, the termisogonorisogonic linerefers to a line of constantmagnetic declination, the variation of magnetic north from geographic north. Anagonic lineis drawn through points of zero magnetic declination. Anisoporic linerefers to a line of constant annual variation of magnetic declination.[33] Anisoclinic lineconnects points of equalmagnetic dip, and anaclinic lineis the isoclinic line of magnetic dip zero. Anisodynamic line(fromδύναμιςordynamismeaning 'power') connects points with the same intensity of magnetic force. Besides ocean depth,oceanographersuse contours to describe diffuse variable phenomena much as meteorologists do with atmospheric phenomena. In particular,isobathythermsare lines showing depths of water with equal temperature,isohalinesshow lines of equal ocean salinity, andisopycnalsare surfaces of equal water density. Variousgeologicaldata are rendered as contour maps instructural geology,sedimentology,stratigraphyandeconomic geology. Contour maps are used to show the below ground surface of geologicstrata,faultsurfaces (especially low anglethrust faults) andunconformities.Isopach mapsuseisopachs(lines of equal thickness) to illustrate variations in thickness of geologic units. In discussing pollution, density maps can be very useful in indicating sources and areas of greatest contamination. Contour maps are especially useful for diffuse forms or scales of pollution. Acid precipitation is indicated on maps withisoplats. 
Some of the most widespread applications ofenvironmental sciencecontour maps involve mapping ofenvironmental noise(where lines of equal sound pressure level are denotedisobels[34]),air pollution,soil contamination,thermal pollutionandgroundwatercontamination. Bycontour plantingandcontour ploughing, the rate ofwater runoffand thussoil erosioncan be substantially reduced; this is especially important inriparianzones. Anisofloris an isopleth contour connecting areas of comparable biological diversity. Usually, the variable is the number of species of a given genus or family that occurs in a region. Isoflor maps are thus used to show distribution patterns and trends such as centres of diversity.[35] Ineconomics, contour lines can be used to describe features which vary quantitatively over space. Anisochroneshows lines of equivalent drive time or travel time to a given location and is used in the generation ofisochrone maps. Anisotimshows equivalent transport costs from the source of a raw material, and anisodapaneshows equivalent cost of travel time. Contour lines are also used to display non-geographic information in economics.Indifference curves(as shown at left) are used to show bundles of goods to which a person would assign equal utility. Anisoquant(in the image at right) is a curve of equal production quantity for alternative combinations ofinput usages, and anisocost curve(also in the image at right) shows alternative usages having equal production costs. Inpolitical sciencean analogous method is used in understanding coalitions (for example the diagram in Laver and Shepsle's work[36]). Inpopulation dynamics, anisoclineshows the set of population sizes at which the rate of change, or partial derivative, for one population in a pair of interacting populations is zero. In statistics, isodensity lines[37]or isodensanes are lines that join points with the same value of aprobability density. Isodensanes are used to displaybivariate distributions. 
For example, for a bivariateelliptical distributionthe isodensity lines areellipses. Various types of graphs inthermodynamics, engineering, and other sciences use isobars (constant pressure), isotherms (constant temperature), isochors (constant specific volume), or other types of isolines, even though these graphs are usually not related to maps. Such isolines are useful for representing more than two dimensions (or quantities) on two-dimensional graphs. Common examples in thermodynamics are some types ofphase diagrams. Isoclinesare used to solveordinary differential equations. In interpretingradarimages, anisodopis a line of equalDopplervelocity, and anisoechois a line of equal radar reflectivity. In the case of hybrid contours, energies of hybrid orbitals and the energies of pure atomic orbitals are plotted. The graph obtained is called hybrid contour. To maximize readability of contour maps, there are several design choices available to the map creator, principally line weight, linecolor, line type and method of numerical marking. Line weightis simply the darkness or thickness of the line used. This choice is made based upon the least intrusive form of contours that enable the reader to decipher the background information in the map itself. If there is little or no content on the base map, the contour lines may be drawn with relatively heavy thickness. Also, for many forms of contours such as topographic maps, it is common to vary the line weight and/or color, so that a different line characteristic occurs for certain numerical values. For example, in thetopographicmap above, the even hundred foot elevations are shown in a different weight from the twenty foot intervals. Line coloris the choice of any number ofpigmentsthat suit the display. Sometimes asheen or glossis used as well as color to set the contour lines apart from thebase map. Line colour can be varied to show other information. 
Line typerefers to whether the basic contour line is solid, dashed, dotted or broken in some other pattern to create the desired effect. Dotted or dashed lines are often used when the underlying base map conveys very important (or difficult to read) information. Broken line types are used when the location of the contour line is inferred. Numerical markingis the manner of denoting thearithmeticalvalues of contour lines. This can be done by placing numbers along some of the contour lines, typically usinginterpolationfor intervening lines. Alternatively a map key can be produced associating the contours with their values. If the contour lines are not numerically labeled and adjacent lines have the same style (with the same weight, color and type), then the direction of the gradient cannot be determined from the contour lines alone. However, if the contour lines cycle through three or more styles, then the direction of the gradient can be determined from the lines. The orientation of the numerical text labels is often used to indicate the direction of the slope. Most commonly contour lines are drawn in plan view, or as an observer in space would view the Earth's surface: ordinary map form. However, some parameters can often be displayed in profile view showing a vertical profile of the parameter mapped. Some of the most common parameters mapped in profile areair pollutant concentrationsandsound levels. In each of those cases it may be important to analyze (air pollutant concentrations or sound levels) at varying heights so as to determine the air quality ornoise health effectson people at different elevations, for example, living on different floor levels of an urban apartment. In actuality, both plan and profile view contour maps are used inair pollutionandnoise pollutionstudies. Labelsare a critical component of elevation maps. A properly labeled contour map helps the reader to quickly interpret the shape of the terrain. 
If numbers are placed close to each other, it means that the terrain is steep. Labels should be placed along a slightly curved line "pointing" to the summit or nadir, from several directions if possible, making the visual identification of the summit or nadir easy.[38][39] Contour labels can be oriented so that a reader is facing uphill when reading the label. Manual labeling of contour maps is a time-consuming process; however, there are a few software systems that can do the job automatically and in accordance with cartographic conventions, using automatic label placement.
https://en.wikipedia.org/wiki/Contour_line
An isosurface is a three-dimensional analog of an isoline. It is a surface that represents points of a constant value (e.g. pressure, temperature, velocity, density) within a volume of space; in other words, it is a level set of a continuous function whose domain is 3-space. The term isoline is also sometimes used for domains of more than 3 dimensions.[1] Isosurfaces are normally displayed using computer graphics, and are used as data visualization methods in computational fluid dynamics (CFD), allowing engineers to study features of a fluid flow (gas or liquid) around objects, such as aircraft wings. An isosurface may represent an individual shock wave in supersonic flight, or several isosurfaces may be generated showing a sequence of pressure values in the air flowing around a wing. Isosurfaces tend to be a popular form of visualization for volume datasets since they can be rendered by a simple polygonal model, which can be drawn on the screen very quickly. In medical imaging, isosurfaces may be used to represent regions of a particular density in a three-dimensional CT scan, allowing the visualization of internal organs, bones, or other structures. Numerous other disciplines that are interested in three-dimensional data often use isosurfaces to obtain information about pharmacology, chemistry, geophysics and meteorology. The marching cubes algorithm was first published in the 1987 SIGGRAPH proceedings by Lorensen and Cline,[2] and it creates a surface by intersecting the edges of a data volume grid with the volume contour. Where the surface intersects an edge, the algorithm creates a vertex. By using a table of different triangles depending on different patterns of edge intersections, the algorithm can create a surface. This algorithm has solutions for implementation both on the CPU and on the GPU. The asymptotic decider algorithm was developed as an extension to marching cubes in order to resolve the possibility of ambiguity in it. 
The marching tetrahedra algorithm was developed as an extension to marching cubes in order to solve an ambiguity in that algorithm and to create higher-quality output surfaces. The surface nets algorithm places an intersecting vertex in the middle of a volume voxel instead of at the edges, leading to a smoother output surface. The dual contouring algorithm was first published in the 2002 SIGGRAPH proceedings by Ju and Losasso,[3] developed as an extension to both surface nets and marching cubes. It retains a dual vertex within the voxel, but no longer at the center. Dual contouring leverages the position and normal of where the surface crosses the edges of a voxel to interpolate the position of the dual vertex within the voxel. This has the benefit of retaining sharp or smooth surfaces where surface nets often look blocky or incorrectly beveled.[4] Dual contouring often uses surface generation that leverages octrees as an optimization to adapt the number of triangles in the output to the complexity of the surface. Manifold dual contouring includes an analysis of the octree neighborhood to maintain continuity of the manifold surface.[5][6][7] Examples of isosurfaces are 'metaballs' or 'blobby objects' used in 3D visualisation. A more general way to construct an isosurface is to use the function representation.
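The core idea behind these cell-based extraction algorithms can be illustrated in two dimensions by marching squares, the 2D analog of marching cubes. The sketch below is a hypothetical, simplified implementation (not the published algorithm): it finds where the scalar field crosses the target level along each cell edge, places a vertex there by linear interpolation, and emits a line segment per cell. Ambiguous cells (four edge crossings), which the asymptotic decider exists to resolve, are simply skipped here.

```python
def marching_squares(f, xs, ys, level=0.0):
    """Extract line segments approximating the isoline f(x, y) = level.

    Simplified sketch: only cells with exactly two edge crossings are
    handled; ambiguous four-crossing cells are skipped (the asymptotic
    decider resolves these in the full algorithm).
    """
    def lerp(p, q, fp, fq):
        # Linear interpolation along the edge to the estimated crossing.
        t = (level - fp) / (fq - fp)
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    segments = []
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            corners = [(xs[i], ys[j]), (xs[i + 1], ys[j]),
                       (xs[i + 1], ys[j + 1]), (xs[i], ys[j + 1])]
            vals = [f(x, y) for x, y in corners]
            crossings = []
            for a in range(4):
                b = (a + 1) % 4
                if (vals[a] - level) * (vals[b] - level) < 0:
                    crossings.append(lerp(corners[a], corners[b], vals[a], vals[b]))
            if len(crossings) == 2:
                segments.append(tuple(crossings))
    return segments

# Isoline of f(x, y) = x^2 + y^2 at level 1: an approximation of the unit circle.
n = 40
grid = [-1.5 + 3.0 * k / n for k in range(n + 1)]
segs = marching_squares(lambda x, y: x * x + y * y, grid, grid, level=1.0)
```

Every extracted vertex should lie very close to radius 1; the same per-edge interpolation, applied to the twelve edges of a cube with a triangle table, is the essence of marching cubes.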
https://en.wikipedia.org/wiki/Isosurface
In economics, the marginal rate of substitution (MRS) is the rate at which a consumer can give up some amount of one good in exchange for another good while maintaining the same level of utility. At equilibrium consumption levels (assuming no externalities), marginal rates of substitution are identical. The marginal rate of substitution is one of the three factors from marginal productivity, the others being marginal rates of transformation and marginal productivity of a factor.[1] Under the standard assumption of neoclassical economics that goods and services are continuously divisible, the marginal rates of substitution will be the same regardless of the direction of exchange, and will correspond to the slope of an indifference curve (more precisely, to the slope multiplied by −1) passing through the consumption bundle in question, at that point: mathematically, it is the implicit derivative. The MRS of X for Y is the amount of Y which a consumer can exchange for one unit of X locally. The MRS is different at each point along the indifference curve, thus it is important to keep the locus in the definition. Further on this assumption, or otherwise on the assumption that utility is quantified, the marginal rate of substitution of good or service X for good or service Y (MRSxy) is also equivalent to the marginal utility of X over the marginal utility of Y. It is important to note that when comparing bundles of goods X and Y that give a constant utility (points along an indifference curve), the marginal utility of X is measured in terms of units of Y that are being given up. For example, if MRSxy = 2, the consumer will give up 2 units of Y to obtain 1 additional unit of X. As one moves down a (standardly convex) indifference curve, the marginal rate of substitution decreases (as measured by the absolute value of the slope of the indifference curve, which decreases). This is known as the law of diminishing marginal rate of substitution. 
Since the indifference curve is convex with respect to the origin and we have defined the MRS as the negative slope of the indifference curve: Assume the consumer utility function is defined by U(x,y){\displaystyle U(x,y)}, where U is consumer utility, and x and y are goods. Then the marginal rate of substitution can be computed via partial differentiation, as follows. Also, note that: where MUx{\displaystyle \ MU_{x}} is the marginal utility with respect to good x and MUy{\displaystyle \ MU_{y}} is the marginal utility with respect to good y. By taking the total differential of the utility function equation, we obtain the following results: Through any point on the indifference curve, dU/dx = 0, because U = c, where c is a constant. It follows from the above equation that: The marginal rate of substitution is defined as the absolute value of the slope of the indifference curve at whichever commodity bundle quantities are of interest. That turns out to equal the ratio of the marginal utilities: When consumers maximize utility with respect to a budget constraint, the indifference curve is tangent to the budget line; therefore, with m representing slope: Therefore, when the consumer is choosing his utility-maximized market basket on his budget line, This important result tells us that utility is maximized when the consumer's budget is allocated so that the marginal utility per unit of money spent is equal for each good. If this equality did not hold, the consumer could increase his/her utility by cutting spending on the good with lower marginal utility per unit of money and increasing spending on the other good. To decrease the marginal rate of substitution, the consumer must buy more of the good for which he/she wishes the marginal utility to fall (due to the law of diminishing marginal utility). An important principle of economic theory is that the marginal rate of substitution of X for Y diminishes as more and more of good X is substituted for good Y. 
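The ratio-of-marginal-utilities characterization can be checked numerically. The sketch below uses a hypothetical Cobb–Douglas utility U(x, y) = x^a y^b with illustrative exponents (not values from the text); for that family the MRS works out analytically to (a/b)(y/x).

```python
def mrs(U, x, y, h=1e-6):
    """Approximate MRS_xy = MU_x / MU_y with central finite differences."""
    mu_x = (U(x + h, y) - U(x - h, y)) / (2 * h)  # marginal utility of x
    mu_y = (U(x, y + h) - U(x, y - h)) / (2 * h)  # marginal utility of y
    return mu_x / mu_y

a, b = 0.3, 0.7                    # illustrative Cobb-Douglas exponents
U = lambda x, y: x ** a * y ** b

print(mrs(U, 2.0, 5.0))            # close to (a/b) * (y/x) = (0.3/0.7) * 2.5 ≈ 1.0714
```

At the bundle (2, 5) the consumer would locally trade about 1.07 units of Y for one more unit of X while staying on the same indifference curve.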
In other words, as the consumer has more and more of good X, he is prepared to forego less and less of good Y. It means that as the consumer's stock of X increases and his stock of Y decreases, he is willing to forego less and less of Y for a given increment in X. In other words, the marginal rate of substitution of X for Y falls as the consumer has more of X and less of Y. That the marginal rate of substitution of X for Y diminishes can also be seen by drawing tangents at different points on an indifference curve. When analyzing the utility function of consumers, one may wish to determine whether it is convex. For the case of two goods, we can apply a quick derivative test (take the derivative of the MRS) to determine whether the consumer's preferences are convex. If the derivative of the MRS is negative, the utility curve is concave down, meaning that it has a maximum and then decreases on either side of the maximum; this utility curve may have an appearance similar to that of a lowercase n. If the derivative of the MRS is equal to 0, the utility curve is linear; the slope stays constant throughout the utility curve. If the derivative of the MRS is positive, the utility curve is convex up, meaning that it has a minimum and then increases on either side of the minimum; this utility curve may have an appearance similar to that of a u. These statements are shown mathematically below. For more than two variables, the use of the Hessian matrix is required. Adam Hayes. (2021, March 31). Inside the marginal rate of substitution. Investopedia. Jerelin, R. (2017, May 30). Diminishing marginal rate of substitution | Indifference curve | Economics. Economics Discussion.
https://en.wikipedia.org/wiki/Marginal_rate_of_substitution
In multivariable calculus, the implicit function theorem[a] is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function. More precisely, given a system of m equations fi(x1, ..., xn, y1, ..., ym) = 0, i = 1, ..., m (often abbreviated into F(x, y) = 0), the theorem states that, under a mild condition on the partial derivatives (with respect to each yi) at a point, the m variables yi are differentiable functions of the xj in some neighborhood of the point. As these functions generally cannot be expressed in closed form, they are implicitly defined by the equations, and this motivated the name of the theorem.[1] In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function. Augustin-Louis Cauchy (1789–1857) is credited with the first rigorous form of the implicit function theorem. Ulisse Dini (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables.[2] Let f:R2→R{\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} } be a continuously differentiable function defining the implicit equation of a curve f(x,y)=0{\displaystyle f(x,y)=0}. Let (x0,y0){\displaystyle (x_{0},y_{0})} be a point on the curve, that is, a point such that f(x0,y0)=0{\displaystyle f(x_{0},y_{0})=0}. 
In this simple case, the implicit function theorem can be stated as follows: Theorem — If f(x,y){\displaystyle f(x,y)} is a function that is continuously differentiable in a neighbourhood of the point (x0,y0){\displaystyle (x_{0},y_{0})}, and ∂f∂y(x0,y0)≠0,{\textstyle {\frac {\partial f}{\partial y}}(x_{0},y_{0})\neq 0,} then there exists a unique differentiable function φ{\displaystyle \varphi } such that y0=φ(x0){\displaystyle y_{0}=\varphi (x_{0})} and f(x,φ(x))=0{\displaystyle f(x,\varphi (x))=0} in a neighbourhood of x0{\displaystyle x_{0}}. Proof. By differentiating the equation f(x,φ(x))=0{\displaystyle f(x,\varphi (x))=0}, one gets ∂f∂x(x,φ(x))+φ′(x)∂f∂y(x,φ(x))=0.{\displaystyle {\frac {\partial f}{\partial x}}(x,\varphi (x))+\varphi '(x)\,{\frac {\partial f}{\partial y}}(x,\varphi (x))=0.} and thus φ′(x)=−∂f∂x(x,φ(x))∂f∂y(x,φ(x)).{\displaystyle \varphi '(x)=-{\frac {{\frac {\partial f}{\partial x}}(x,\varphi (x))}{{\frac {\partial f}{\partial y}}(x,\varphi (x))}}.} This gives an ordinary differential equation for φ{\displaystyle \varphi }, with the initial condition φ(x0)=y0{\displaystyle \varphi (x_{0})=y_{0}}. Since ∂f∂y(x0,y0)≠0,{\textstyle {\frac {\partial f}{\partial y}}(x_{0},y_{0})\neq 0,} the right-hand side of the differential equation is continuous, upper bounded and lower bounded on some closed interval around x0{\displaystyle x_{0}}. It is therefore Lipschitz continuous, and the Cauchy–Lipschitz theorem applies for proving the existence of a unique solution. If we define the function f(x,y) = x2 + y2, then the equation f(x,y) = 1 cuts out the unit circle as the level set {(x,y) | f(x,y) = 1}. There is no way to represent the unit circle as the graph of a function of one variable y = g(x) because for each choice of x ∈ (−1, 1), there are two choices of y, namely ±1−x2{\displaystyle \pm {\sqrt {1-x^{2}}}}. However, it is possible to represent part of the circle as the graph of a function of one variable. 
If we let g1(x)=1−x2{\displaystyle g_{1}(x)={\sqrt {1-x^{2}}}} for −1 ≤ x ≤ 1, then the graph of y = g1(x) provides the upper half of the circle. Similarly, if g2(x)=−1−x2{\displaystyle g_{2}(x)=-{\sqrt {1-x^{2}}}}, then the graph of y = g2(x) gives the lower half of the circle. The purpose of the implicit function theorem is to tell us that functions like g1(x) and g2(x) almost always exist, even in situations where we cannot write down explicit formulas. It guarantees that g1(x) and g2(x) are differentiable, and it even works in situations where we do not have a formula for f(x,y). Let f:Rn+m→Rm{\displaystyle f:\mathbb {R} ^{n+m}\to \mathbb {R} ^{m}} be a continuously differentiable function. We think of Rn+m{\displaystyle \mathbb {R} ^{n+m}} as the Cartesian product Rn×Rm,{\displaystyle \mathbb {R} ^{n}\times \mathbb {R} ^{m},} and we write a point of this product as (x,y)=(x1,…,xn,y1,…ym).{\displaystyle (\mathbf {x} ,\mathbf {y} )=(x_{1},\ldots ,x_{n},y_{1},\ldots y_{m}).} Starting from the given function f{\displaystyle f}, our goal is to construct a function g:Rn→Rm{\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} whose graph (x,g(x)){\displaystyle ({\textbf {x}},g({\textbf {x}}))} is precisely the set of all (x,y){\displaystyle ({\textbf {x}},{\textbf {y}})} such that f(x,y)=0{\displaystyle f({\textbf {x}},{\textbf {y}})={\textbf {0}}}. As noted above, this may not always be possible. We will therefore fix a point (a,b)=(a1,…,an,b1,…,bm){\displaystyle ({\textbf {a}},{\textbf {b}})=(a_{1},\dots ,a_{n},b_{1},\dots ,b_{m})} which satisfies f(a,b)=0{\displaystyle f({\textbf {a}},{\textbf {b}})={\textbf {0}}}, and we will ask for a g{\displaystyle g} that works near the point (a,b){\displaystyle ({\textbf {a}},{\textbf {b}})}. 
In other words, we want anopen setU⊂Rn{\displaystyle U\subset \mathbb {R} ^{n}}containinga{\displaystyle {\textbf {a}}}, an open setV⊂Rm{\displaystyle V\subset \mathbb {R} ^{m}}containingb{\displaystyle {\textbf {b}}}, and a functiong:U→V{\displaystyle g:U\to V}such that the graph ofg{\displaystyle g}satisfies the relationf=0{\displaystyle f={\textbf {0}}}onU×V{\displaystyle U\times V}, and that no other points withinU×V{\displaystyle U\times V}do so. In symbols, {(x,g(x))∣x∈U}={(x,y)∈U×V∣f(x,y)=0}.{\displaystyle \{(\mathbf {x} ,g(\mathbf {x} ))\mid \mathbf {x} \in U\}=\{(\mathbf {x} ,\mathbf {y} )\in U\times V\mid f(\mathbf {x} ,\mathbf {y} )=\mathbf {0} \}.} To state the implicit function theorem, we need theJacobian matrixoff{\displaystyle f}, which is the matrix of thepartial derivativesoff{\displaystyle f}. Abbreviating(a1,…,an,b1,…,bm){\displaystyle (a_{1},\dots ,a_{n},b_{1},\dots ,b_{m})}to(a,b){\displaystyle ({\textbf {a}},{\textbf {b}})}, the Jacobian matrix is (Df)(a,b)=[∂f1∂x1(a,b)⋯∂f1∂xn(a,b)∂f1∂y1(a,b)⋯∂f1∂ym(a,b)⋮⋱⋮⋮⋱⋮∂fm∂x1(a,b)⋯∂fm∂xn(a,b)∂fm∂y1(a,b)⋯∂fm∂ym(a,b)]=[XY]{\displaystyle (Df)(\mathbf {a} ,\mathbf {b} )=\left[{\begin{array}{ccc|ccc}{\frac {\partial f_{1}}{\partial x_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{1}}{\partial x_{n}}}(\mathbf {a} ,\mathbf {b} )&{\frac {\partial f_{1}}{\partial y_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{1}}{\partial y_{m}}}(\mathbf {a} ,\mathbf {b} )\\\vdots &\ddots &\vdots &\vdots &\ddots &\vdots \\{\frac {\partial f_{m}}{\partial x_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{m}}{\partial x_{n}}}(\mathbf {a} ,\mathbf {b} )&{\frac {\partial f_{m}}{\partial y_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{m}}{\partial y_{m}}}(\mathbf {a} ,\mathbf {b} )\end{array}}\right]=\left[{\begin{array}{c|c}X&Y\end{array}}\right]} whereX{\displaystyle X}is the matrix of partial derivatives in the variablesxi{\displaystyle x_{i}}andY{\displaystyle Y}is the 
matrix of partial derivatives in the variablesyj{\displaystyle y_{j}}. The implicit function theorem says that ifY{\displaystyle Y}is an invertible matrix, then there areU{\displaystyle U},V{\displaystyle V}, andg{\displaystyle g}as desired. Writing all the hypotheses together gives the following statement. Letf:Rn+m→Rm{\displaystyle f:\mathbb {R} ^{n+m}\to \mathbb {R} ^{m}}be acontinuously differentiable function, and letRn+m{\displaystyle \mathbb {R} ^{n+m}}have coordinates(x,y){\displaystyle ({\textbf {x}},{\textbf {y}})}. Fix a point(a,b)=(a1,…,an,b1,…,bm){\displaystyle ({\textbf {a}},{\textbf {b}})=(a_{1},\dots ,a_{n},b_{1},\dots ,b_{m})}withf(a,b)=0{\displaystyle f({\textbf {a}},{\textbf {b}})=\mathbf {0} }, where0∈Rm{\displaystyle \mathbf {0} \in \mathbb {R} ^{m}}is the zero vector. If theJacobian matrix(this is the right-hand panel of the Jacobian matrix shown in the previous section):Jf,y(a,b)=[∂fi∂yj(a,b)]{\displaystyle J_{f,\mathbf {y} }(\mathbf {a} ,\mathbf {b} )=\left[{\frac {\partial f_{i}}{\partial y_{j}}}(\mathbf {a} ,\mathbf {b} )\right]}isinvertible, then there exists an open setU⊂Rn{\displaystyle U\subset \mathbb {R} ^{n}}containinga{\displaystyle {\textbf {a}}}such that there exists a unique functiong:U→Rm{\displaystyle g:U\to \mathbb {R} ^{m}}such thatg(a)=b{\displaystyle g(\mathbf {a} )=\mathbf {b} },andf(x,g(x))=0for allx∈U{\displaystyle f(\mathbf {x} ,g(\mathbf {x} ))=\mathbf {0} ~{\text{for all}}~\mathbf {x} \in U}.Moreover,g{\displaystyle g}is continuously differentiable and, denoting the left-hand panel of the Jacobian matrix shown in the previous section as:Jf,x(a,b)=[∂fi∂xj(a,b)],{\displaystyle J_{f,\mathbf {x} }(\mathbf {a} ,\mathbf {b} )=\left[{\frac {\partial f_{i}}{\partial x_{j}}}(\mathbf {a} ,\mathbf {b} )\right],}the Jacobian matrix of partial derivatives ofg{\displaystyle g}inU{\displaystyle U}is given by thematrix product:[3][∂gi∂xj(x)]m×n=−[Jf,y(x,g(x))]m×m−1[Jf,x(x,g(x))]m×n{\displaystyle \left[{\frac {\partial 
g_{i}}{\partial x_{j}}}(\mathbf {x} )\right]_{m\times n}=-\left[J_{f,\mathbf {y} }(\mathbf {x} ,g(\mathbf {x} ))\right]_{m\times m}^{-1}\,\left[J_{f,\mathbf {x} }(\mathbf {x} ,g(\mathbf {x} ))\right]_{m\times n}} For a proof, seeInverse function theorem#Implicit_function_theorem. Here, the two-dimensional case is detailed. If, moreover,f{\displaystyle f}isanalyticor continuously differentiablek{\displaystyle k}times in a neighborhood of(a,b){\displaystyle ({\textbf {a}},{\textbf {b}})}, then one may chooseU{\displaystyle U}in order that the same holds true forg{\displaystyle g}insideU{\displaystyle U}.[4]In the analytic case, this is called theanalytic implicit function theorem. Let us go back to the example of theunit circle. In this casen=m= 1 andf(x,y)=x2+y2−1{\displaystyle f(x,y)=x^{2}+y^{2}-1}. The matrix of partial derivatives is just a 1 × 2 matrix, given by(Df)(a,b)=[∂f∂x(a,b)∂f∂y(a,b)]=[2a2b]{\displaystyle (Df)(a,b)={\begin{bmatrix}{\dfrac {\partial f}{\partial x}}(a,b)&{\dfrac {\partial f}{\partial y}}(a,b)\end{bmatrix}}={\begin{bmatrix}2a&2b\end{bmatrix}}} Thus, here, theYin the statement of the theorem is just the number2b; the linear map defined by it is invertibleif and only ifb≠ 0. By the implicit function theorem we see that we can locally write the circle in the formy=g(x)for all points wherey≠ 0. For(±1, 0)we run into trouble, as noted before. The implicit function theorem may still be applied to these two points, by writingxas a function ofy, that is,x=h(y){\displaystyle x=h(y)}; now the graph of the function will be(h(y),y){\displaystyle \left(h(y),y\right)}, since whereb= 0we havea= 1, and the conditions to locally express the function in this form are satisfied. 
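For the unit circle, implicit differentiation of x² + y² = 1 gives dy/dx = −x/y, and this can be checked numerically against the explicit branch g₁(x) = √(1 − x²). The snippet below is an illustrative sketch, with the evaluation point x = 0.6 chosen arbitrarily:

```python
import math

def g1(x):
    return math.sqrt(1 - x * x)       # upper half of the unit circle

def implicit_dydx(x, y):
    return -x / y                     # from differentiating x^2 + y^2 = 1

x = 0.6
y = g1(x)                             # y = 0.8, so dy/dx should be -0.75
h = 1e-6
numeric = (g1(x + h) - g1(x - h)) / (2 * h)   # central finite difference
print(implicit_dydx(x, y), numeric)
```

The implicit formula and the finite-difference slope of the explicit branch agree to several decimal places, as the theorem guarantees wherever y ≠ 0.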
The implicit derivative of y with respect to x, and that of x with respect to y, can be found by totally differentiating the implicit function x2+y2−1{\displaystyle x^{2}+y^{2}-1} and equating to 0: 2xdx+2ydy=0,{\displaystyle 2x\,dx+2y\,dy=0,} giving dydx=−xy{\displaystyle {\frac {dy}{dx}}=-{\frac {x}{y}}} and dxdy=−yx.{\displaystyle {\frac {dx}{dy}}=-{\frac {y}{x}}.} Suppose we have an m-dimensional space, parametrised by a set of coordinates (x1,…,xm){\displaystyle (x_{1},\ldots ,x_{m})}. We can introduce a new coordinate system (x1′,…,xm′){\displaystyle (x'_{1},\ldots ,x'_{m})} by supplying m functions h1…hm{\displaystyle h_{1}\ldots h_{m}}, each being continuously differentiable. These functions allow us to calculate the new coordinates (x1′,…,xm′){\displaystyle (x'_{1},\ldots ,x'_{m})} of a point, given the point's old coordinates (x1,…,xm){\displaystyle (x_{1},\ldots ,x_{m})}, using x1′=h1(x1,…,xm),…,xm′=hm(x1,…,xm){\displaystyle x'_{1}=h_{1}(x_{1},\ldots ,x_{m}),\ldots ,x'_{m}=h_{m}(x_{1},\ldots ,x_{m})}. One might want to verify if the opposite is possible: given coordinates (x1′,…,xm′){\displaystyle (x'_{1},\ldots ,x'_{m})}, can we 'go back' and calculate the same point's original coordinates (x1,…,xm){\displaystyle (x_{1},\ldots ,x_{m})}? The implicit function theorem will provide an answer to this question. 
The (new and old) coordinates(x1′,…,xm′,x1,…,xm){\displaystyle (x'_{1},\ldots ,x'_{m},x_{1},\ldots ,x_{m})}are related byf= 0, withf(x1′,…,xm′,x1,…,xm)=(h1(x1,…,xm)−x1′,…,hm(x1,…,xm)−xm′).{\displaystyle f(x'_{1},\ldots ,x'_{m},x_{1},\ldots ,x_{m})=(h_{1}(x_{1},\ldots ,x_{m})-x'_{1},\ldots ,h_{m}(x_{1},\ldots ,x_{m})-x'_{m}).}Now the Jacobian matrix offat a certain point (a,b) [ wherea=(x1′,…,xm′),b=(x1,…,xm){\displaystyle a=(x'_{1},\ldots ,x'_{m}),b=(x_{1},\ldots ,x_{m})}] is given by(Df)(a,b)=[−1⋯0⋮⋱⋮0⋯−1|∂h1∂x1(b)⋯∂h1∂xm(b)⋮⋱⋮∂hm∂x1(b)⋯∂hm∂xm(b)]=[−Im|J].{\displaystyle (Df)(a,b)=\left[{\begin{matrix}-1&\cdots &0\\\vdots &\ddots &\vdots \\0&\cdots &-1\end{matrix}}\left|{\begin{matrix}{\frac {\partial h_{1}}{\partial x_{1}}}(b)&\cdots &{\frac {\partial h_{1}}{\partial x_{m}}}(b)\\\vdots &\ddots &\vdots \\{\frac {\partial h_{m}}{\partial x_{1}}}(b)&\cdots &{\frac {\partial h_{m}}{\partial x_{m}}}(b)\\\end{matrix}}\right.\right]=[-I_{m}|J].}where Imdenotes them×midentity matrix, andJis them×mmatrix of partial derivatives, evaluated at (a,b). (In the above, these blocks were denoted by X and Y. As it happens, in this particular application of the theorem, neither matrix depends ona.) The implicit function theorem now states that we can locally express(x1,…,xm){\displaystyle (x_{1},\ldots ,x_{m})}as a function of(x1′,…,xm′){\displaystyle (x'_{1},\ldots ,x'_{m})}ifJis invertible. DemandingJis invertible is equivalent to detJ≠ 0, thus we see that we can go back from the primed to the unprimed coordinates if the determinant of the JacobianJis non-zero. This statement is also known as theinverse function theorem. As a simple application of the above, consider the plane, parametrised bypolar coordinates(R,θ). We can go to a new coordinate system (cartesian coordinates) by defining functionsx(R,θ) =Rcos(θ)andy(R,θ) =Rsin(θ). This makes it possible given any point(R,θ)to find corresponding Cartesian coordinates(x,y). 
When can we go back and convert Cartesian into polar coordinates? By the previous example, it is sufficient to havedetJ≠ 0, withJ=[∂x(R,θ)∂R∂x(R,θ)∂θ∂y(R,θ)∂R∂y(R,θ)∂θ]=[cos⁡θ−Rsin⁡θsin⁡θRcos⁡θ].{\displaystyle J={\begin{bmatrix}{\frac {\partial x(R,\theta )}{\partial R}}&{\frac {\partial x(R,\theta )}{\partial \theta }}\\{\frac {\partial y(R,\theta )}{\partial R}}&{\frac {\partial y(R,\theta )}{\partial \theta }}\\\end{bmatrix}}={\begin{bmatrix}\cos \theta &-R\sin \theta \\\sin \theta &R\cos \theta \end{bmatrix}}.}SincedetJ=R, conversion back to polar coordinates is possible ifR≠ 0. So it remains to check the caseR= 0. It is easy to see that in caseR= 0, our coordinate transformation is not invertible: at the origin, the value of θ is not well-defined. Based on theinverse function theoreminBanach spaces, it is possible to extend the implicit function theorem to Banach space valued mappings.[5][6] LetX,Y,ZbeBanach spaces. Let the mappingf:X×Y→Zbe continuouslyFréchet differentiable. If(x0,y0)∈X×Y{\displaystyle (x_{0},y_{0})\in X\times Y},f(x0,y0)=0{\displaystyle f(x_{0},y_{0})=0}, andy↦Df(x0,y0)(0,y){\displaystyle y\mapsto Df(x_{0},y_{0})(0,y)}is a Banach space isomorphism fromYontoZ, then there exist neighbourhoodsUofx0andVofy0and a Fréchet differentiable functiong:U→Vsuch thatf(x,g(x)) = 0 andf(x,y) = 0 if and only ify=g(x), for all(x,y)∈U×V{\displaystyle (x,y)\in U\times V}. Various forms of the implicit function theorem exist for the case when the functionfis not differentiable. It is standard that local strict monotonicity suffices in one dimension.[7]The following more general form was proven by Kumagai based on an observation by Jittorntrum.[8][9] Consider a continuous functionf:Rn×Rm→Rn{\displaystyle f:\mathbb {R} ^{n}\times \mathbb {R} ^{m}\to \mathbb {R} ^{n}}such thatf(x0,y0)=0{\displaystyle f(x_{0},y_{0})=0}. 
If there exist open neighbourhoodsA⊂Rn{\displaystyle A\subset \mathbb {R} ^{n}}andB⊂Rm{\displaystyle B\subset \mathbb {R} ^{m}}ofx0andy0, respectively, such that, for allyinB,f(⋅,y):A→Rn{\displaystyle f(\cdot ,y):A\to \mathbb {R} ^{n}}is locally one-to-one, then there exist open neighbourhoodsA0⊂Rn{\displaystyle A_{0}\subset \mathbb {R} ^{n}}andB0⊂Rm{\displaystyle B_{0}\subset \mathbb {R} ^{m}}ofx0andy0, such that, for ally∈B0{\displaystyle y\in B_{0}}, the equationf(x,y) = 0 has a unique solutionx=g(y)∈A0,{\displaystyle x=g(y)\in A_{0},}wheregis a continuous function fromB0intoA0. Perelman’s collapsing theorem for3-manifolds, the capstone of his proof of Thurston'sgeometrization conjecture, can be understood as an extension of the implicit function theorem.[10]
https://en.wikipedia.org/wiki/Implicit_function_theorem
In calculus, logarithmic differentiation or differentiation by taking logarithms is a method used to differentiate functions by employing the logarithmic derivative of a function f,[1] (ln⁡f)′=f′f⟹f′=f⋅(ln⁡f)′.{\displaystyle (\ln f)'={\frac {f'}{f}}\quad \implies \quad f'=f\cdot (\ln f)'.} The technique is often performed in cases where it is easier to differentiate the logarithm of a function rather than the function itself. This usually occurs in cases where the function of interest is composed of a product of a number of parts, so that a logarithmic transformation will turn it into a sum of separate parts (which is much easier to differentiate). It can also be useful when applied to functions raised to the power of variables or functions. Logarithmic differentiation relies on the chain rule as well as properties of logarithms (in particular, the natural logarithm, or the logarithm to the base e) to transform products into sums and divisions into subtractions.[2][3] The principle can be implemented, at least in part, in the differentiation of almost all differentiable functions, provided that these functions are non-zero. The method is used because the properties of logarithms provide avenues to quickly simplify complicated functions to be differentiated.[4] These properties can be manipulated after the taking of natural logarithms on both sides and before the preliminary differentiation. 
The most commonly used logarithm laws are[3]ln⁡(ab)=ln⁡(a)+ln⁡(b),ln⁡(ab)=ln⁡(a)−ln⁡(b),ln⁡(an)=nln⁡(a).{\displaystyle \ln(ab)=\ln(a)+\ln(b),\qquad \ln \left({\frac {a}{b}}\right)=\ln(a)-\ln(b),\qquad \ln(a^{n})=n\ln(a).} UsingFaà di Bruno's formula, the n-th order logarithmic derivative is,dndxnln⁡f(x)=∑m1+2m2+⋯+nmn=nn!m1!m2!⋯mn!⋅(−1)m1+⋯+mn−1(m1+⋯+mn−1)!f(x)m1+⋯+mn⋅∏j=1n(f(j)(x)j!)mj.{\displaystyle {\frac {d^{n}}{dx^{n}}}\ln f(x)=\sum _{m_{1}+2m_{2}+\cdots +nm_{n}=n}{\frac {n!}{m_{1}!\,m_{2}!\,\cdots \,m_{n}!}}\cdot {\frac {(-1)^{m_{1}+\cdots +m_{n}-1}(m_{1}+\cdots +m_{n}-1)!}{f(x)^{m_{1}+\cdots +m_{n}}}}\cdot \prod _{j=1}^{n}\left({\frac {f^{(j)}(x)}{j!}}\right)^{m_{j}}.}Using this, the first four derivatives are,d2dx2ln⁡f(x)=f″(x)f(x)−(f′(x)f(x))2d3dx3ln⁡f(x)=f(3)(x)f(x)−3f′(x)f″(x)f(x)2+2(f′(x)f(x))3d4dx4ln⁡f(x)=f(4)(x)f(x)−4f′(x)f(3)(x)f(x)2−3(f″(x)f(x))2+12f′(x)2f″(x)f(x)3−6(f′(x)f(x))4{\displaystyle {\begin{aligned}{\frac {d^{2}}{dx^{2}}}\ln f(x)&={\frac {f''(x)}{f(x)}}-\left({\frac {f'(x)}{f(x)}}\right)^{2}\\[1ex]{\frac {d^{3}}{dx^{3}}}\ln f(x)&={\frac {f^{(3)}(x)}{f(x)}}-3{\frac {f'(x)f''(x)}{f(x)^{2}}}+2\left({\frac {f'(x)}{f(x)}}\right)^{3}\\[1ex]{\frac {d^{4}}{dx^{4}}}\ln f(x)&={\frac {f^{(4)}(x)}{f(x)}}-4{\frac {f'(x)f^{(3)}(x)}{f(x)^{2}}}-3\left({\frac {f''(x)}{f(x)}}\right)^{2}+12{\frac {f'(x)^{2}f''(x)}{f(x)^{3}}}-6\left({\frac {f'(x)}{f(x)}}\right)^{4}\end{aligned}}} Anatural logarithmis applied to a product of two functionsf(x)=g(x)h(x){\displaystyle f(x)=g(x)h(x)}to transform the product into a sumln⁡(f(x))=ln⁡(g(x)h(x))=ln⁡(g(x))+ln⁡(h(x)).{\displaystyle \ln(f(x))=\ln(g(x)h(x))=\ln(g(x))+\ln(h(x)).}Differentiating by applying thechainand thesumrules yieldsf′(x)f(x)=g′(x)g(x)+h′(x)h(x),{\displaystyle {\frac {f'(x)}{f(x)}}={\frac {g'(x)}{g(x)}}+{\frac {h'(x)}{h(x)}},}and, after rearranging, yields[5]f′(x)=f(x)×{g′(x)g(x)+h′(x)h(x)}=g(x)h(x)×{g′(x)g(x)+h′(x)h(x)}=g′(x)h(x)+g(x)h′(x),{\displaystyle f'(x)=f(x)\times \left\{{\frac 
{g'(x)}{g(x)}}+{\frac {h'(x)}{h(x)}}\right\}=g(x)h(x)\times \left\{{\frac {g'(x)}{g(x)}}+{\frac {h'(x)}{h(x)}}\right\}=g'(x)h(x)+g(x)h'(x),}which is theproduct rulefor derivatives. Anatural logarithmis applied to a quotient of two functionsf(x)=g(x)h(x){\displaystyle f(x)={\frac {g(x)}{h(x)}}}to transform the division into a subtractionln⁡(f(x))=ln⁡(g(x)h(x))=ln⁡(g(x))−ln⁡(h(x)){\displaystyle \ln(f(x))=\ln \left({\frac {g(x)}{h(x)}}\right)=\ln(g(x))-\ln(h(x))}Differentiating by applying thechainand thesumrules yieldsf′(x)f(x)=g′(x)g(x)−h′(x)h(x),{\displaystyle {\frac {f'(x)}{f(x)}}={\frac {g'(x)}{g(x)}}-{\frac {h'(x)}{h(x)}},}and, after rearranging, yieldsf′(x)=f(x)×{g′(x)g(x)−h′(x)h(x)}=g(x)h(x)×{g′(x)g(x)−h′(x)h(x)}=g′(x)h(x)−g(x)h′(x)h(x)2,{\displaystyle f'(x)=f(x)\times \left\{{\frac {g'(x)}{g(x)}}-{\frac {h'(x)}{h(x)}}\right\}={\frac {g(x)}{h(x)}}\times \left\{{\frac {g'(x)}{g(x)}}-{\frac {h'(x)}{h(x)}}\right\}={\frac {g'(x)h(x)-g(x)h'(x)}{h(x)^{2}}},} which is thequotient rulefor derivatives. For a function of the formf(x)=g(x)h(x){\displaystyle f(x)=g(x)^{h(x)}}thenatural logarithmtransforms the exponentiation into a productln⁡(f(x))=ln⁡(g(x)h(x))=h(x)ln⁡(g(x)){\displaystyle \ln(f(x))=\ln \left(g(x)^{h(x)}\right)=h(x)\ln(g(x))}Differentiating by applying thechainand theproductrules yieldsf′(x)f(x)=h′(x)ln⁡(g(x))+h(x)g′(x)g(x),{\displaystyle {\frac {f'(x)}{f(x)}}=h'(x)\ln(g(x))+h(x){\frac {g'(x)}{g(x)}},}and, after rearranging, yieldsf′(x)=f(x)×{h′(x)ln⁡(g(x))+h(x)g′(x)g(x)}=g(x)h(x)×{h′(x)ln⁡(g(x))+h(x)g′(x)g(x)}.{\displaystyle f'(x)=f(x)\times \left\{h'(x)\ln(g(x))+h(x){\frac {g'(x)}{g(x)}}\right\}=g(x)^{h(x)}\times \left\{h'(x)\ln(g(x))+h(x){\frac {g'(x)}{g(x)}}\right\}.}The same result can be obtained by rewritingfin terms ofexpand applying the chain rule. Usingcapital pi notation, letf(x)=∏i(fi(x))αi(x){\displaystyle f(x)=\prod _{i}(f_{i}(x))^{\alpha _{i}(x)}}be a finite product of functions with functional exponents. 
The application of natural logarithms results in (withcapital sigma notation)ln⁡(f(x))=∑iαi(x)⋅ln⁡(fi(x)),{\displaystyle \ln(f(x))=\sum _{i}\alpha _{i}(x)\cdot \ln(f_{i}(x)),}and after differentiation,f′(x)f(x)=∑i[αi′(x)⋅ln⁡(fi(x))+αi(x)⋅fi′(x)fi(x)].{\displaystyle {\frac {f'(x)}{f(x)}}=\sum _{i}\left[\alpha _{i}'(x)\cdot \ln(f_{i}(x))+\alpha _{i}(x)\cdot {\frac {f_{i}'(x)}{f_{i}(x)}}\right].}Rearrange to get the derivative of the original function,f′(x)=∏i(fi(x))αi(x)⏞f(x)×∑i{αi′(x)⋅ln⁡(fi(x))+αi(x)⋅fi′(x)fi(x)}⏞[ln⁡(f(x))]′.{\displaystyle f'(x)=\overbrace {\prod _{i}(f_{i}(x))^{\alpha _{i}(x)}} ^{f(x)}\times \overbrace {\sum _{i}\left\{\alpha _{i}'(x)\cdot \ln(f_{i}(x))+\alpha _{i}(x)\cdot {\frac {f_{i}'(x)}{f_{i}(x)}}\right\}} ^{[\ln(f(x))]'}.}
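A standard worked instance of the power case is f(x) = x^x: taking logs gives ln f = x ln x, so f′/f = ln x + 1 and f′(x) = x^x (ln x + 1). The following sketch checks this closed form against a central finite difference at an arbitrarily chosen point x = 2:

```python
import math

def f(x):
    return x ** x

def f_prime(x):
    # From logarithmic differentiation: (ln f)' = ln x + 1, so f' = f * (ln x + 1).
    return x ** x * (math.log(x) + 1)

x = 2.0
h = 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)     # central finite difference
print(f_prime(x), numeric)                    # both ≈ 6.7726
```

The same answer follows from rewriting x^x as exp(x ln x) and applying the chain rule, as noted above for the general case g(x)^h(x).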
https://en.wikipedia.org/wiki/Logarithmic_differentiation
In computer graphics, a polygonizer is a software component for converting a geometric model represented as an implicit surface to a polygon mesh.[1]
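The core idea can be illustrated with a toy two-dimensional analogue of the cell-classification step used by marching-squares/marching-cubes style polygonizers; this is a sketch of the general technique only, not any particular library's algorithm, and the function and grid parameters are illustrative assumptions:

```python
def polygonize_2d(f, lo, hi, n):
    """Crude 2D analogue of a polygonizer: sample an implicit function f(x, y)
    on an n x n grid and flag every cell whose corner values change sign --
    those are the cells through which the curve f = 0 passes. A real
    marching-squares/cubes implementation would emit segments/triangles per
    cell; here we simply return the centers of the crossed cells."""
    step = (hi - lo) / n
    points = []
    for i in range(n):
        for j in range(n):
            x, y = lo + i * step, lo + j * step
            corners = [f(x, y), f(x + step, y), f(x, y + step), f(x + step, y + step)]
            if min(corners) < 0 < max(corners):  # the zero set crosses this cell
                points.append((x + step / 2, y + step / 2))
    return points

# unit circle as an implicit curve: f(x, y) = x^2 + y^2 - 1
pts = polygonize_2d(lambda x, y: x * x + y * y - 1, -2.0, 2.0, 64)
```

Every returned point lies within roughly one cell diagonal of the true curve; refining the grid tightens the approximation, which is exactly the trade-off a 3D polygonizer makes between mesh resolution and cost.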
https://en.wikipedia.org/wiki/Polygonizer
In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Differentiation with respect to time or one of the other variables requires application of the chain rule,[1] since most problems involve several variables.

Fundamentally, if a function F is defined such that F = f(x), then the derivative of the function F can be taken with respect to another variable. We assume x is a function of t, i.e. x = g(t). Then F = f(g(t)), so {\displaystyle F'(t)=f'(g(t))\cdot g'(t).} Written in Leibniz notation, this is: {\displaystyle {\frac {dF}{dt}}={\frac {dF}{dx}}\cdot {\frac {dx}{dt}}.} Thus, if it is known how x changes with respect to t, then we can determine how F changes with respect to t and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus, etc. For example, if F(x) = G(y) + H(z), then {\displaystyle {\frac {dF}{dx}}={\frac {dG}{dy}}\cdot {\frac {dy}{dx}}+{\frac {dH}{dz}}\cdot {\frac {dz}{dx}}.}

The most common way to approach related rates problems is the following:[2]

1. Identify the known variables, including rates of change, and the rate of change that is to be found.
2. Construct an equation relating the quantities whose rates of change are known to the quantity whose rate of change is to be found.
3. Differentiate both sides of the equation with respect to time (or the other rate of change).
4. Substitute the known rates of change and the known quantities into the equation.
5. Solve for the wanted rate of change.

Errors in this procedure are often caused by plugging in the known values for the variables before (rather than after) finding the derivative with respect to time. Doing so will yield an incorrect result, since if those values are substituted for the variables before differentiation, those variables will become constants; and when the equation is differentiated, zeroes appear in places of all variables for which the values were plugged in.

A 10-meter ladder is leaning against the wall of a building, and the base of the ladder is sliding away from the building at a rate of 3 meters per second.
How fast is the top of the ladder sliding down the wall when the base of the ladder is 6 meters from the wall?

The distance between the base of the ladder and the wall, x, and the height of the ladder on the wall, y, represent the sides of a right triangle with the ladder as the hypotenuse, h. The objective is to find dy/dt, the rate of change of y with respect to time, t, when h, x and dx/dt, the rate of change of x, are known.

Step 1: x = 6, h = 10, dx/dt = 3; the wanted rate of change is dy/dt.

Step 2: From the Pythagorean theorem, the equation {\displaystyle x^{2}+y^{2}=h^{2}} describes the relationship between x, y and h, for a right triangle. Differentiating both sides of this equation with respect to time, t, yields {\displaystyle 2x{\frac {dx}{dt}}+2y{\frac {dy}{dt}}=0.}

Step 3: When solved for the wanted rate of change, dy/dt, this gives us {\displaystyle {\frac {dy}{dt}}=-{\frac {x}{y}}\cdot {\frac {dx}{dt}}.}

Step 4 & 5: Using the variables from step 1 gives us: {\displaystyle {\frac {dy}{dt}}=-{\frac {6}{y}}\times 3.} Solving for y using the Pythagorean theorem gives: {\displaystyle y={\sqrt {10^{2}-6^{2}}}=8.} Plugging in 8 for the equation: {\displaystyle {\frac {dy}{dt}}=-{\frac {6}{8}}\times 3=-{\frac {9}{4}}.} It is generally assumed that negative values represent the downward direction. Thus, the top of the ladder is sliding down the wall at a rate of ⁠9/4⁠ meters per second.

Because one physical quantity often depends on another, which in turn depends on others, such as time, related-rates methods have broad applications in physics. This section presents examples of related rates in kinematics and electromagnetic induction.

For example, one can consider the kinematics problem where one vehicle is heading West toward an intersection at 80 miles per hour while another is heading North away from the intersection at 60 miles per hour. One can ask whether the vehicles are getting closer or further apart, and at what rate, at the moment when the North-bound vehicle is 3 miles North of the intersection and the West-bound vehicle is 4 miles East of the intersection.

Big idea: use the chain rule to compute the rate of change of the distance between the two vehicles.

Plan: Choose coordinate system: Let the y-axis point North and the x-axis point East.
Identify variables: Define y(t) to be the distance of the vehicle heading North from the origin and x(t) to be the distance of the vehicle heading West from the origin.

Express c in terms of x and y via the Pythagorean theorem: {\displaystyle c={\sqrt {x^{2}+y^{2}}}.}

Express dc/dt using the chain rule in terms of dx/dt and dy/dt: {\displaystyle {\frac {dc}{dt}}={\frac {x{\frac {dx}{dt}}+y{\frac {dy}{dt}}}{\sqrt {x^{2}+y^{2}}}}.}

Substitute in x = 4 mi, y = 3 mi, dx/dt = −80 mi/hr, dy/dt = 60 mi/hr and simplify: {\displaystyle {\frac {dc}{dt}}={\frac {4\cdot (-80)+3\cdot 60}{5}}={\frac {-140}{5}}=-28{\text{ mi/hr}}.}

Consequently, the two vehicles are getting closer together at a rate of 28 mi/hr.

The magnetic flux through a loop of area A whose normal is at an angle θ to a magnetic field of strength B is {\displaystyle \Phi _{B}=BA\cos(\theta ).} Faraday's law of electromagnetic induction states that the induced electromotive force {\displaystyle {\mathcal {E}}} is the negative rate of change of magnetic flux {\displaystyle \Phi _{B}} through a conducting loop: {\displaystyle {\mathcal {E}}=-{\frac {d\Phi _{B}}{dt}}.} If the loop area A and magnetic field B are held constant, but the loop is rotated so that the angle θ is a known function of time, the rate of change of θ can be related to the rate of change of {\displaystyle \Phi _{B}} (and therefore the electromotive force) by taking the time derivative of the flux relation: {\displaystyle {\mathcal {E}}=-{\frac {d\Phi _{B}}{dt}}=BA\sin(\theta ){\frac {d\theta }{dt}}.} If, for example, the loop is rotating at a constant angular velocity ω, so that θ = ωt, then {\displaystyle {\mathcal {E}}=BA\omega \sin(\omega t).}
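The two kinematics examples above reduce to a few lines of arithmetic once the differentiated relation is in hand; a minimal Python sketch checking both results:

```python
import math

# Ladder: x^2 + y^2 = h^2 with h fixed, so 2x dx/dt + 2y dy/dt = 0,
# giving dy/dt = -(x / y) * dx/dt.
h, x, dxdt = 10.0, 6.0, 3.0
y = math.sqrt(h * h - x * x)       # 8.0, from the Pythagorean theorem
dydt = -(x / y) * dxdt
print(dydt)                         # negative: top slides DOWN at 9/4 m/s

# Vehicles: c = sqrt(x^2 + y^2), so dc/dt = (x dx/dt + y dy/dt) / c.
vx, vy = 4.0, 3.0                   # positions in miles
dvx, dvy = -80.0, 60.0              # mi/hr; westbound closes in, hence negative
c = math.hypot(vx, vy)              # 5.0
dcdt = (vx * dvx + vy * dvy) / c
print(dcdt)                         # negative: the vehicles are getting closer
```

Note that the known values are substituted only after differentiating, matching the warning in the procedure above.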
https://en.wikipedia.org/wiki/Related_rates
In geometry, the folium of Descartes (from Latin folium 'leaf'; named for René Descartes) is an algebraic curve defined by the implicit equation {\displaystyle x^{3}+y^{3}-3axy=0.}

The curve was first proposed and studied by René Descartes in 1638.[1] Its claim to fame lies in an incident in the development of calculus. Descartes challenged Pierre de Fermat to find the tangent line to the curve at an arbitrary point, since Fermat had recently discovered a method for finding tangent lines. Fermat solved the problem easily, something Descartes was unable to do.[2] Since the invention of calculus, the slope of the tangent line can be found easily using implicit differentiation.[3]

The folium of Descartes can be expressed in polar coordinates as {\displaystyle r={\frac {3a\sin \theta \cos \theta }{\sin ^{3}\theta +\cos ^{3}\theta }},} which is plotted on the left. This is equivalent to[4] {\displaystyle r={\frac {3a\sec \theta \tan \theta }{1+\tan ^{3}\theta }}.}

Another technique is to write y = px and solve for x and y in terms of p. Substituting into the implicit equation gives x(1 + p³) = 3ap, which yields the rational parametric equations:[5] {\displaystyle x={{3ap} \over {1+p^{3}}},\quad y={{3ap^{2}} \over {1+p^{3}}}.} We can see that the parameter is related to the position on the curve as follows: p < −1 gives the wing with x > 0, y < 0; −1 < p < 0 gives the wing with x < 0, y > 0; and p > 0 traces the loop in the first quadrant, with p = 0 and p → ∞ both giving the origin.

Another way of plotting the function can be derived from its symmetry over y = x. The symmetry can be seen directly from its equation (x and y can be interchanged). By applying a rotation of 45° clockwise, for example, one can plot the function symmetric over the rotated x-axis. This operation is equivalent to the substitution {\displaystyle x={{u+v} \over {\sqrt {2}}},\,y={{u-v} \over {\sqrt {2}}}} and yields {\displaystyle v=\pm u{\sqrt {\frac {3a{\sqrt {2}}-2u}{6u+3a{\sqrt {2}}}}}.} Plotting in the Cartesian system of (u, v) gives the folium rotated by 45° and therefore symmetric about the u-axis.
It forms a loop in the first quadrant with a double point at the origin and asymptote {\displaystyle x+y+a=0\,.} It is symmetrical about the line y = x. As such, the two intersect at the origin and at the point {\displaystyle (3a/2,3a/2)}.

Implicit differentiation gives the formula for the slope of the tangent line to this curve to be[3] {\displaystyle {\frac {dy}{dx}}={\frac {ay-x^{2}}{y^{2}-ax}}.} Using either one of the polar representations above, the area of the interior of the loop is found to be {\displaystyle 3a^{2}/2}. Moreover, the area between the "wings" of the curve and its slanted asymptote is also {\displaystyle 3a^{2}/2}.[1]

The folium of Descartes is related to the trisectrix of Maclaurin by an affine transformation. To see this, start with the equation {\displaystyle x^{3}+y^{3}=3axy\,,} and change variables to find the equation in a coordinate system rotated 45 degrees. This amounts to setting {\displaystyle x={{X+Y} \over {\sqrt {2}}},\ y={{X-Y} \over {\sqrt {2}}}.} In the X, Y plane the equation is {\displaystyle 2X(X^{2}+3Y^{2})=3{\sqrt {2}}a(X^{2}-Y^{2}).} If we stretch the curve in the Y direction by a factor of {\displaystyle {\sqrt {3}}} this becomes {\displaystyle 2X(X^{2}+Y^{2})=a{\sqrt {2}}(3X^{2}-Y^{2}),} which is the equation of the trisectrix of Maclaurin.
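The rational parametrization obtained from the substitution y = px can be verified numerically against the implicit equation; a short sketch (the value a = 2 is an arbitrary illustrative choice):

```python
# Parametrization from y = p x:  x = 3ap/(1+p^3),  y = 3ap^2/(1+p^3),  p != -1
a = 2.0  # arbitrary positive parameter

def folium_point(p):
    d = 1 + p ** 3
    return 3 * a * p / d, 3 * a * p * p / d

# every parameter value satisfies the implicit equation x^3 + y^3 - 3axy = 0
for p in [-3.0, -0.5, 0.25, 1.0, 7.0]:
    x, y = folium_point(p)
    assert abs(x ** 3 + y ** 3 - 3 * a * x * y) < 1e-9

# at p = 1 the curve crosses the symmetry line y = x at (3a/2, 3a/2)
assert folium_point(1.0) == (3 * a / 2, 3 * a / 2)
```

The loop check at p = 1 matches the intersection point with y = x stated above.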
https://en.wikipedia.org/wiki/Folium_of_Descartes
"Argument Clinic" is a sketch from Monty Python's Flying Circus, written by John Cleese and Graham Chapman. The sketch was originally broadcast as part of the television series and has subsequently been performed live by the group. It relies heavily on wordplay and dialogue, and has been used as an example of how language works.

After the episode's end credits have scrolled, the BBC 1 mirror globe appears on screen, while a continuity announcer (Eric Idle) introduces "five more minutes of Monty Python's Flying Circus".[2] In the ensuing sketch, an unnamed man (Michael Palin) approaches a receptionist (Rita Davies) and says that he would like to have an argument. She directs him to a Mr. Barnard, who occupies an office along the corridor. The customer enters an office in which Barnard (Graham Chapman) hurls angry insults at him. The customer says that he came into the room for an argument, causing Barnard to apologize and clarify that his office is dedicated to "abuse"; "argument" is next door. He politely sends the customer on his way before calling him a "stupid git" out of earshot.[2]

The customer enters the next office, where Mr. Vibrating (John Cleese) is seated.[2] The customer asks if he is in the right office for an argument, to which Vibrating responds that he has already told him he is. The customer disputes this, and the men begin an argumentative back-and-forth exchange. Their exchange is a very shallow one, consisting mostly of petty and contradictory "is/isn't" responses, to the point that the customer feels that he is not getting what he paid for. They then argue over the very definition of an argument until Vibrating rings a bell and announces that the customer's paid time has concluded. The customer is dissatisfied and tries to argue with Vibrating over whether he really got as much time as he paid for, but Vibrating insists that the customer is not allowed to argue unless he pays for another session.
The man finally relents and pays more money for additional arguing time, but Vibrating continues to insist that he has not paid, and another argument breaks out over that issue. The customer believes that he has caught Vibrating in a contradiction—arguing without being paid—but Vibrating counters that he could be arguing in his spare time. Frustrated, the customer storms out of the room. He proceeds to explore other rooms in the clinic; he enters a room marked "Complaints" hoping to lodge a complaint, only to find that it is a complaint clinic in which the man in charge (Idle) is complaining about his shoes. The next office contains another man, Spreaders (Terry Jones), offering "being-hit-on-the-head lessons", which the customer finds a stupid concept. At that point a Scotland Yard detective, Inspector Fox "of the Light Entertainment Police, Comedy Division, Special Flying Squad" (Chapman), intervenes and declares the two men under arrest for participating in a confusing sketch. However, a second officer, Inspector Thompson's Gazelle "of the Programme Planning Police, Light Entertainment Division, Special Flying Squad" (Idle), comes in and charges the three with "self-conscious behaviour"—saying "It's so and so of the Yard" every time the police appear—and with ending a sketch by having a police officer intervene. As he realizes that he is a part of the skit's absurdity, another policeman (Cleese) enters the room to stop Thompson's Gazelle, followed by a hairy hand stopping him, and the sketch ends.
Afterwards, the globe ident appears on screen while the announcer introduces "one more minute of Monty Python's Flying Circus".[2]

The sketch parodies modern consumer culture, implying that anything can be purchased, even absurd things such as arguing, abuse, or being hit over the head.[3] The sketch was typical of Cleese and Chapman's writing at the time, as it relied on verbal comedy.[4] Python author Darl Larsen believes the sketch was influenced by music hall and radio comedy, particularly that of the Goons, and notes that there is little camera movement during the original television recording.[3]

One line in the middle of the sketch, "An argument is a connected series of statements intended to establish a definite proposition", was taken almost verbatim from the Oxford English Dictionary.[3]

The sketch originally appeared in the 29th episode of the original television series, entitled "The Money Programme",[5] and was released (in audio only) on the LP Monty Python's Previous Record, on Charisma Records in 1972.[6]

The sketch was subsequently performed live at the Hollywood Bowl in September 1980, which was filmed and released as Monty Python Live at the Hollywood Bowl.[7] The sketch features the discussion with the receptionist (played here by Carol Cleveland), the abuse from Chapman, and most of the argument between Cleese and Palin. It is then ended abruptly by the entrance of Terry Gilliam, on wires, singing "I've Got Two Legs".[8] A further live performance occurred in 1989 at the Secret Policeman's Ball, where Cleveland's and Chapman's roles were taken by Dawn French and Chris Langham.
This performance was subsequently released on DVD.[9] The sketch was performed again in July 2014 during Monty Python Live (Mostly), with Terry Jones filling in for Chapman's role and Gilliam reprising "I've Got Two Legs".[10]

The sketch has been frequently used as an example of how not to argue, because, as Palin's character notes, it contains little more than ad hominem attacks and contradiction,[11] and does not contribute to critical thinking.[12] It has also been described as a "classical case in point" of dialogue where two parties are unwilling to co-operate,[13] and as an example of flawed logic, since Palin is attempting to argue that Cleese is not arguing with him.[14]

The text of the argument has been presented as a good example of the workings of English grammar, where sentences can be reduced to simple subject/verb pairs.[15] It has been included as an example of analysing English in school textbooks.[1] The sketch has become popular with philosophy students, who note that arguing is "all we are good at", and wonder about the intellectual exercise one could get from paying for a professional-quality debate.[16]

The Python programming language, which contains many Monty Python references as feature names, has an internal-only tool called "Argument Clinic", used to pre-process source files in the CPython implementation.[17][18]

The sketch is referenced in a line of dialogue in the TV show House, season 6, episode 10, "Wilson". The character Dr. Wilson says to Dr. House, "I didn't come here for an argument," to which House replies, "No, right, that's room 12A," echoing the lines from the "abuse room" in the Argument Clinic sketch.
https://en.wikipedia.org/wiki/Argument_Clinic
A contronym or contranym is a word with two opposite meanings. For example, the word original can mean "authentic, traditional", or "novel, never done before". This feature is also called enantiosemy,[1][2] enantionymy (enantio- means "opposite"), antilogy or autoantonymy. An enantiosemic term is by definition polysemic. A contronym is alternatively called an autantonym, auto-antonym, antagonym,[3][4] enantiodrome, enantionym, Janus word (after the Roman god Janus, who is usually depicted with two faces),[4] self-antonym, antilogy, or addad (Arabic, singular didd).[5][6]

Some pairs of contronyms are true homographs, i.e., distinct words with different etymologies which happen to have the same form.[7] For instance, cleave "separate" is from Old English clēofan, while cleave "adhere" is from Old English clifian, which was pronounced differently. Other contronyms are a form of polysemy, where a single word acquires different and ultimately opposite definitions. For example, sanction—"permit" or "penalize"; bolt (originally from crossbows)—"leave quickly" or "fix/immobilize"; fast—"moving rapidly" or "fixed in place". Some English examples result from nouns being verbed in the patterns of "add <noun> to" and "remove <noun> from"; e.g. dust, seed, stone. Denotations and connotations can drift or branch over centuries. An apocryphal story relates how Charles II (or sometimes Queen Anne) described St Paul's Cathedral (using contemporaneous English) as "awful, pompous, and artificial", with the meaning (rendered in modern English) of "awe-inspiring, majestic, and ingeniously designed".[8]

Negative words such as bad[9] and sick sometimes acquire ironic senses by antiphrasis,[10] referring to traits that are impressive and admired, if not necessarily positive (that outfit is bad as hell; lyrics full of sick burns). Some contronyms result from differences in varieties of English.
For example, to table a bill means "to put it up for debate" in British English, while it means "to remove it from debate" in American English (where British English would have "shelve", which in this sense has an identical meaning in American English). To barrack, in Australian English, is to loudly demonstrate support, while in British English it is to express disapproval and contempt.

In Latin, sacer has the double meaning "sacred, holy" and "accursed, infamous". Greek δημιουργός gave Latin its demiurgus, from which English got its demiurge, which can refer either to God as the creator or to the devil, depending on philosophical context.

In some languages, a word stem associated with a single event may treat the action of that event as unitary, so in translation it may appear contronymic. For example, Latin hospes can be translated as both "guest" and "host". In some varieties of English, borrow may mean both "borrow" and "lend".

Seeming contronyms can arise from translation. In Hawaiian, for example, aloha is translated both as "hello" and as "goodbye", but the essential meaning of the word is "love", whether used as a greeting or farewell. Similarly, 안녕 (annyeong) in Korean can mean both "hello" and "goodbye", but the central meaning is "peace". The Italian greeting ciao is translated as "hello" or "goodbye" depending on the context; the original meaning was "at your service" (literally "(I'm your) slave").[34]
https://en.wikipedia.org/wiki/Auto-antonym
In term logic (a branch of philosophical logic), the square of opposition is a diagram representing the relations between the four basic categorical propositions. The origin of the square can be traced back to Aristotle's tractate On Interpretation and its distinction between two oppositions: contradiction and contrariety. However, Aristotle did not draw any diagram; this was done several centuries later by Boethius.

In traditional logic, a proposition (Latin: propositio) is a spoken assertion (oratio enunciativa), not the meaning of an assertion, as in modern philosophy of language and logic. A categorical proposition is a simple proposition containing two terms, subject (S) and predicate (P), in which the predicate is either asserted or denied of the subject. Every categorical proposition can be reduced to one of four logical forms, named A, E, I, and O based on the Latin affirmo (I affirm), for the affirmative propositions A and I, and nego (I deny), for the negative propositions E and O. These are: A, the universal affirmative ("Every S is P."); E, the universal negative ("No S is P."); I, the particular affirmative ("Some S is P."); and O, the particular negative ("Some S is not P.").

*Proposition A may be stated as "All S is P." However, proposition E, when stated correspondingly as "All S is not P.", is ambiguous[2] because it can be either an E or an O proposition, thus requiring a context to determine the form; the standard form "No S is P" is unambiguous, so it is preferred. Proposition O also takes the forms "Sometimes S is not P." and "A certain S is not P." (literally the Latin 'Quoddam S nōn est P.')

**{\displaystyle Sx} in the modern forms means that a statement {\displaystyle S} applies to an object {\displaystyle x}. It may be simply interpreted as "{\displaystyle x} is {\displaystyle S}" in many cases. {\displaystyle Sx} can also be written as {\displaystyle S(x)}.

Aristotle states (in chapters six and seven of the Peri hermēneias (Περὶ Ἑρμηνείας, Latin De Interpretatione, English 'On Interpretation')) that there are certain logical relationships between these four kinds of proposition.
He says that to every affirmation there corresponds exactly one negation, and that every affirmation and its negation are 'opposed' such that always one of them must be true and the other false. A pair of an affirmative statement and its negation is what he calls a 'contradiction' (in medieval Latin, contradictio). Examples of contradictories are 'every man is white' and 'not every man is white' (also read as 'some men are not white'), and 'no man is white' and 'some man is white'.

The relations below (contrary, subcontrary, subalternation, and superalternation) hold on the traditional-logic assumption that things stated as S (or, in modern logic, things satisfying a statement S) exist. If this assumption is dropped, these relations do not hold.

'Contrary' (medieval: contrariae) statements are such that both cannot be true at the same time. Examples of these are the universal affirmative 'every man is white' and the universal negative 'no man is white'. These cannot be true at the same time. However, they are not contradictories, because both of them may be false. For example, it is false that every man is white, since some men are not white. Yet it is also false that no man is white, since there are some white men. Since every statement has a contradictory opposite (its negation), and since a contradicting statement is true when its opposite is false, it follows that the opposites of contraries (which the medievals called subcontraries, subcontrariae) can both be true, but they cannot both be false. Since subcontraries are negations of universal statements, they were called 'particular' statements by the medieval logicians.

Another logical relation implied by this, though not mentioned explicitly by Aristotle, is 'alternation' (alternatio), consisting of 'subalternation' and 'superalternation'.
Subalternation is a relation between the particular statement and the universal statement of the same quality (affirmative or negative) such that the particular is implied by the universal, while superalternation is a relation between them such that the falsity of the universal (equivalently, the negation of the universal) is implied by the falsity of the particular (equivalently, the negation of the particular).[3] (Superalternation is the contrapositive of subalternation.) In these relations, the particular is the subaltern of the universal, which is the particular's superaltern. For example, if 'every man is white' is true, its contrary 'no man is white' is false. Therefore, the contradictory 'some man is white' is true. Similarly, the universal 'no man is white' implies the particular 'not every man is white'.[4][5]

In summary: these relationships became the basis of a diagram originating with Boethius and used by medieval logicians to classify the logical relationships. The propositions are placed in the four corners of a square, and the relations represented as lines drawn between them, whence the name 'The Square of Opposition'. Therefore, the following cases can be made:[6]

To memorize them, the medievals invented the following Latin rhyme:[7] It affirms that A and E are neither both true nor both false in each of the above cases. The same applies to I and O. While the first two are universal statements, the couple I/O refers to particular ones.

The Square of Opposition was used for the categorical inferences described by the Greek philosopher Aristotle: conversion, obversion and contraposition. Each of those three types of categorical inference was applied to the four Boethian logical forms: A, E, I, and O.
Subcontraries (I and O), which medieval logicians represented in the form 'quoddam A est B' (some particular A is B) and 'quoddam A non est B' (some particular A is not B), cannot both be false, since their universal contradictory statements (no A is B / every A is B) cannot both be true. This leads to a difficulty first identified by Peter Abelard (1079 – 21 April 1142). 'Some A is B' seems to imply 'something is A'; in other words, that there exists something that is A. For example, 'some man is white' seems to imply that at least one thing that exists is a man, namely the man who has to be white, if 'some man is white' is true. But 'some man is not white' also implies that something exists that is a man, namely the man who is not white, if the statement 'some man is not white' is true. But Aristotelian logic requires that, necessarily, one of these statements (more generally, 'some particular A is B' and 'some particular A is not B') is true; i.e., they cannot both be false. Therefore, since both statements imply the presence of at least one thing that is a man, the existence of a man or men follows. But, as Abelard points out in the Dialectica, surely men might not exist?[8]

Abelard also points out that subcontraries containing subject terms denoting nothing, such as 'a man who is a stone', are both false.

Terence Parsons (born 1939) argues that ancient philosophers did not experience the problem of existential import, as only the A (universal affirmative) and I (particular affirmative) forms had existential import. (If a statement includes a term such that the statement is false if the term has no instances, i.e., no thing associated with the term exists, then the statement is said to have existential import with respect to that term.) He goes on to cite the medieval philosopher William of Ockham, and points to Boethius' translation of Aristotle's work as giving rise to the mistaken notion that the O form has existential import.
In the 19th century, George Boole (November 1815 – 8 December 1864) argued for requiring existential import on both terms in particular claims (I and O), but allowing all terms of universal claims (A and E) to lack existential import. This decision made Venn diagrams particularly easy to use for term logic. The square of opposition, under this Boolean set of assumptions, is often called the modern square of opposition. In the modern square of opposition, A and O claims are contradictories, as are E and I, but all other forms of opposition cease to hold; there are no contraries, subcontraries, subalternations, or superalternations. Thus, from a modern point of view, it often makes sense to talk about 'the' opposition of a claim, rather than insisting, as older logicians did, that a claim has several different opposites, which are in different kinds of opposition with the claim.

Gottlob Frege (8 November 1848 – 26 July 1925)'s Begriffsschrift also presents a square of oppositions, organised in an almost identical manner to the classical square, showing the contradictories, subalternates and contraries between four formulae constructed from universal quantification, negation and implication.

Algirdas Julien Greimas (9 March 1917 – 27 February 1992)'s semiotic square was derived from Aristotle's work.

The traditional square of opposition is now often compared with squares based on inner- and outer-negation.[14]

The square of opposition has been extended to a logical hexagon which includes the relationships of six statements. It was discovered independently by both Augustin Sesmat (April 7, 1885 – December 12, 1957) and Robert Blanché (1898–1975).[15] It has been proven that both the square and the hexagon, followed by a "logical cube", belong to a regular series of n-dimensional objects called "logical bi-simplexes of dimension n."
The pattern also extends beyond this.[16]

The logical square, also called the square of opposition or square of Apuleius, has its origin in the four marked sentences to be employed in syllogistic reasoning: "Every man is bad," the universal affirmative; the negation of the universal affirmative, "Not every man is bad" (or "Some men are not bad"); "Some men are bad," the particular affirmative; and finally, the negation of the particular affirmative, "No man is bad". Robert Blanché published his Structures intellectuelles with Vrin in 1966, and since then many scholars think that the logical square or square of opposition, representing four values, should be replaced by the logical hexagon, which by representing six values is a more potent figure because it has the power to explain more things about logic and natural language.

In modern mathematical logic, statements containing the words "all", "some" and "no" can be stated in terms of set theory if we assume a set-like domain of discourse. If the set of all A's is labeled as {\displaystyle s(A)} and the set of all B's as {\displaystyle s(B)}, then: "All A is B" corresponds to {\displaystyle s(A)\subseteq s(B)}; "No A is B" to {\displaystyle s(A)\cap s(B)=\emptyset }; "Some A is B" to {\displaystyle s(A)\cap s(B)\neq \emptyset }; and "Some A is not B" to {\displaystyle s(A)\setminus s(B)\neq \emptyset }.

By definition, the empty set {\displaystyle \emptyset } is a subset of all sets. From this fact it follows that, according to this mathematical convention, if there are no A's, then the statements "All A is B" and "No A is B" are always true, whereas the statements "Some A is B" and "Some A is not B" are always false. This also implies that AaB does not entail AiB, and some of the syllogisms mentioned above are not valid when there are no A's ({\displaystyle s(A)=\emptyset }).
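The set-theoretic reading of the four forms is easy to make executable. A minimal Python sketch (the example sets are illustrative), showing both the surviving contradictory relations and the failure of subalternation when s(A) is empty:

```python
# Modern (Boolean) reading of the four categorical forms over finite sets.
def all_a_is_b(A, B):   return A <= B        # A-form: s(A) is a subset of s(B)
def no_a_is_b(A, B):    return not (A & B)   # E-form: the intersection is empty
def some_a_is_b(A, B):  return bool(A & B)   # I-form: the intersection is nonempty
def some_a_not_b(A, B): return bool(A - B)   # O-form: s(A) \ s(B) is nonempty

men, mortals = {"socrates", "plato"}, {"socrates", "plato", "fido"}

# A and O are contradictories, as are E and I:
assert all_a_is_b(men, mortals) != some_a_not_b(men, mortals)
assert no_a_is_b(men, mortals) != some_a_is_b(men, mortals)

# With s(A) empty, both universal forms are vacuously true and both
# particular forms false -- so the A-form no longer entails the I-form.
empty = set()
assert all_a_is_b(empty, mortals) and no_a_is_b(empty, mortals)
assert not some_a_is_b(empty, mortals) and not some_a_not_b(empty, mortals)
```

The last four assertions are exactly the Boolean convention described above: under it, contraries and subalternation drop out, leaving only the two contradictory diagonals of the square.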
https://en.wikipedia.org/wiki/Contrary_(logic)
Dialetheism (/daɪəˈlɛθiɪzəm/; from Greek δι- di- 'twice' and ἀλήθεια alḗtheia 'truth') is the view that there are statements that are both true and false. More precisely, it is the belief that there can be a true statement whose negation is also true. Such statements are called "true contradictions", dialetheia, or nondualisms.

Dialetheism is not a system of formal logic; instead, it is a thesis about truth that influences the construction of a formal logic, often based on pre-existing systems. Introducing dialetheism has various consequences, depending on the theory into which it is introduced. A common mistake resulting from this is to reject dialetheism on the basis that, in traditional systems of logic (e.g., classical logic and intuitionistic logic), every statement becomes a theorem if a contradiction is true, trivialising such systems when dialetheism is included as an axiom.[1] Other logical systems, however, do not explode in this manner when contradictions are introduced; such contradiction-tolerant systems are known as paraconsistent logics. Dialetheists who do not want to allow that every statement is true are free to favour these over traditional, explosive logics.

Graham Priest defines dialetheism as the view that there are true contradictions.[2] Jc Beall is another advocate; his position differs from Priest's in advocating constructive (methodological) deflationism regarding the truth predicate.[3] The term was coined by Graham Priest and Richard Sylvan (then Routley).[citation needed]

The liar paradox and Russell's paradox deal with self-contradictory statements in classical logic and naïve set theory, respectively. Contradictions are problematic in these theories because they cause the theories to explode—if a contradiction is true, then every proposition is true. The classical way to solve this problem is to ban contradictory statements: to revise the axioms of the logic so that self-contradictory statements do not appear (just as with Russell's paradox).
Dialetheists, on the other hand, respond to this problem by accepting the contradictions as true. Dialetheism allows for the unrestricted axiom of comprehension in set theory, claiming that any resulting contradiction is a theorem.[4]

However, self-referential paradoxes, such as the Strengthened Liar, can be avoided without revising the axioms by abandoning classical logic and accepting more than two truth values with the help of many-valued logic, such as fuzzy logic or Łukasiewicz logic.

Ambiguous situations may cause humans to affirm both a proposition and its negation. For example, if John stands in the doorway to a room, it may seem reasonable both to affirm that John is in the room and to affirm that John is not in the room. Critics argue that this merely reflects an ambiguity in our language rather than a dialetheic quality in our thoughts; if we replace the given statement with one that is less ambiguous (such as "John is halfway in the room" or "John is in the doorway"), the contradiction disappears. The statements appeared contradictory only because of a syntactic play; here, the actual meaning of "being in the room" is not the same in both instances, and thus each sentence is not the exact logical negation of the other: therefore, they are not necessarily contradictory. Moreover, John appears to be standing in a conjunction of two concepts: he is in x and in not-x at the same time, but not in x and not in x at the same time (which would be a contradiction). The apparent conflict turns on the logical connective, a truth-functional operator, which shows the recurrent ambiguity of human language that often fails to capture the nature of some logical statements.

The Jain philosophical doctrine of anekantavada (non-one-sidedness) states that all statements are true in some sense and false in another.[5] Some interpret this as saying that dialetheia not only exist but are ubiquitous.
Technically, however, a logical contradiction is a proposition that is true and false in the same sense; a proposition which is true in one sense and false in another does not constitute a logical contradiction. (For example, although in one sense a man cannot both be a "father" and "celibate"—leaving aside such cases as a celibate man adopting a child or a man fathering a child and only later adopting celibacy—there is no contradiction for a man to be a spiritual father and also celibate; the sense of the word "father" is different here. In another example, although George W. Bush cannot at the same time both be president and not be president, he was president from 2001 to 2009, but was not president before 2001 or after 2009, so at different times he was both president and not president.) The Buddhist logic system named "Catuṣkoṭi" similarly implies that a statement and its negation may possibly co-exist.[6][7] Graham Priest argues in Beyond the Limits of Thought that dialetheia arise at the borders of expressibility, in a number of philosophical contexts other than formal semantics. In classical logic, taking a contradiction p ∧ ¬p (see List of logic symbols) as a premise (that is, taking as a premise the truth of both p and ¬p) allows us to prove any statement q. Indeed, since p is true, the statement p ∨ q is true (by generalization). Taking p ∨ q together with ¬p, a disjunctive syllogism lets us conclude q. (This is often called the principle of explosion, since the truth of a contradiction is imagined to make the number of theorems in a system "explode".)[1] Proponents of dialetheism mainly advocate its ability to avoid problems faced by other, more orthodox resolutions as a consequence of their appeals to hierarchies.
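The explosion derivation described above, and the way a paraconsistent logic blocks it, can be checked mechanically. Below is a minimal Python sketch using Priest's three-valued Logic of Paradox (LP) as the paraconsistent system; the numeric encoding, function names, and the choice of LP are illustrative assumptions, since the text does not single out one particular paraconsistent logic. A sentence is assertible ("designated") when its value is T or B, and an argument is valid when every valuation that designates all premises also designates the conclusion.

```python
# Priest's three-valued Logic of Paradox (LP), encoded numerically:
# T = true only, B = both true and false (a dialetheia), F = false only.
T, B, F = 1.0, 0.5, 0.0

# A sentence is assertible ("designated") when it is at least partly true.
DESIGNATED = {T, B}

def neg(p):      return 1.0 - p        # negation swaps T and F, fixes B
def conj(p, q):  return min(p, q)      # conjunction takes the "worse" value
def disj(p, q):  return max(p, q)      # disjunction takes the "better" value

def entails(premises, conclusion, valuations):
    """An argument is valid iff every valuation that designates all
    premises also designates the conclusion."""
    return all(
        conclusion(v) in DESIGNATED
        for v in valuations
        if all(prem(v) in DESIGNATED for prem in premises)
    )

# Every assignment of T/B/F to the atoms p (v[0]) and q (v[1]).
valuations = [(x, y) for x in (T, B, F) for y in (T, B, F)]

p = lambda v: v[0]
q = lambda v: v[1]
contradiction = lambda v: conj(p(v), neg(p(v)))          # p ∧ ¬p

# Explosion fails in LP: the valuation p = B, q = F designates p ∧ ¬p
# without designating q.
print(entails([contradiction], q, valuations))           # False

# The culprit is disjunctive syllogism, which is invalid in LP:
print(entails([lambda v: disj(p(v), q(v)), lambda v: neg(p(v))],
              q, valuations))                            # False
```

If the value B is removed from the valuations, the same `entails` check reproduces classical two-valued logic, and both arguments come out valid — exactly the explosion behaviour the derivation above describes.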
According to Graham Priest, "the whole point of the dialetheic solution to the semantic paradoxes is to get rid of the distinction between object language and meta-language".[2] Another possibility is to utilize dialetheism along with a paraconsistent logic to resurrect the program of logicism advocated by Frege and Russell.[8] This even allows one to prove the truth of otherwise unprovable theorems such as the well-ordering theorem and the falsity of others such as the continuum hypothesis.[citation needed] There are also dialetheic solutions to the sorites paradox.[citation needed] One criticism of dialetheism is that it fails to capture a crucial feature about negation, known as the absoluteness of disagreement.[9] Imagine John's utterance of P. Sally's typical way of disagreeing with John is a consequent utterance of ¬P. Yet, if we accept dialetheism, Sally's so uttering does not prevent her from also accepting P; after all, P may be a dialetheia and therefore it and its negation are both true. The absoluteness of disagreement is lost. A response is that disagreement can be displayed by uttering "¬P and, furthermore, P is not a dialetheia". However, the most obvious codification of "P is not a dialetheia" is ¬(P ∧ ¬P). But this itself could be a dialetheia as well. One dialetheist response is to offer a distinction between assertion and rejection. This distinction might be hashed out in terms of the traditional distinction between logical qualities, or as a distinction between two illocutionary speech acts: assertion and rejection. Another criticism is that dialetheism cannot describe logical consequences, once we believe in the relevance of logical consequences, because of its inability to describe hierarchies.[2][clarification needed]
https://en.wikipedia.org/wiki/Dialetheism
Adouble standardis the application of different sets ofprinciplesfor situations that are, in principle, the same.[1]It is often used to describe treatment whereby one group is given more latitude than another.[2]A double standard arises when two or more people, groups, organizations, circumstances, or events are treated differently even though they should be treated the same way.[3]A double standard "implies that two things which are the same are measured by different standards".[4] Applying different principles to similar situations may or may not indicate a double standard. To distinguish between the application of a double standard and a valid application of different standards toward circumstances that onlyappearto be the same, several factors must be examined. One is thesamenessof those circumstances – what are the parallels between those circumstances, and in what ways do they differ? Another is thephilosophyorbelief systeminforming which principles should be applied to those circumstances. Different standards can be applied to situations that appear similar based on a qualifyingtruthorfactthat, upon closer examination, renders those situations distinct (aphysicalreality ormoralobligation, for example). However, if similar-looking situations have been treated according to different principles and there is no truth, fact orprinciplethat distinguishes those situations, then a double standard has been applied. If correctly identified, a double standard usually indicates the presence ofhypocrisy,biasorunjustbehaviors. Double standards are believed to develop in people's minds for a multitude of possible reasons, including: finding an excuse for oneself, emotions clouding judgement, twisting facts to support beliefs (such asconfirmation biases,cognitive biases, attraction biases,prejudicesor the desire to be right). Human beings have a tendency to evaluate people's actions based on who did them. In a study conducted in 2000, Dr. 
Martha Foschi observed the application of double standards in group competency tests. She concluded thatstatuscharacteristics, such asgender,ethnicityandsocioeconomic class, can provide a basis for the formation of double standards in which stricter standards are applied to people who are perceived to be of lower status. Dr. Foschi also noted the ways in which double standards can form based on other socially valued attributes such asbeauty,morality, andmental health.[5] Dr. Tristan Botelho and Dr. Mabel Abraham, Assistant Professors at theYale School of ManagementandColumbia Business School, studied the effect that gender has on the way people rank others in financial markets. Their research showed that average-quality men were given the benefit of the doubt more than average-quality women, who were more often "penalized" in people's judgments. Botelho and Abraham also showed that women and men are similarly risk-loving, contrary to popular belief. Altogether, their research showed that double standards (at least in financial markets) do exist around gender. They encourage the adoption of controls to eliminategender biasin application, hiring, and evaluation processes within organizations. Examples of such controls include using only initials on applications so that applicants' genders are not apparent, or auditioning musicians from behind a screen so that their skills, and not their gender, influence their acceptance or rejection into orchestras.[6]Practices like these are, according to Botelho and Abraham, already being implemented in a number of organizations. It has long been debated how someone'sgender roleaffects others'moral,social,politicalandlegalresponses. Some believe that differences in the way men and women are perceived and treated is a function of social norms, thus indicating a double standard. For example, one claim is that a double standard exists in society's judgment of women's and men's sexual conduct. 
Research has found that casual sexual activity is regarded as more acceptable for men than for women.[7] According to William G. Axinn,[8] double standards between men and women can potentially exist with regard to: dating, cohabitation, virginity, marriage/remarriage, sexual abuse/assault/harassment, domestic violence and singleness. Kennair et al. (2023) found no signs of a sexual double standard in long- or short-term mating contexts, nor in choosing a friend. They did find, however, that women's self-stimulation was judged positively and men's self-stimulation was judged negatively.[9] A 2017 study of American college students also found no evidence of a gendered double standard around promiscuity.[10] A double standard may arise if two or more groups who have equal legal rights are given different degrees of legal protection or representation. Such double standards are seen as unjustified because they violate a common maxim of modern legal jurisprudence: that all parties should stand equal before the law.
Wherejudgesare expected to be impartial, they must apply the same standards to all people, regardless of their own subjectivebiasesorfavoritism, based on:social class,rank,ethnicity,gender,sexual orientation,religion, age or other distinctions.[citation needed] A double standard arises inpoliticswhen the treatment of the same political matters between two or more parties (such as the response to a public crisis or the allocation of funding) is handled differently.[11] Double standard policies can include situations when a country's or commentator's assessment of the same phenomenon, process or event ininternational relationsdepends on their relationship with or attitude to the parties involved.[12]InHarry's Game(1975),Gerald Seymourwrote: "One man'sterroristis another man'sfreedom fighter".[13] Double standards exist when people are preferred or rejected on the basis of their ethnicity in situations in which ethnicity is not a relevant or justifiable factor for discrimination (as might be the case for a cultural performance or ethnic ceremony). The intentional efforts of some people to counteractracismand ethnic double standards can sometimes be interpreted by others as actually perpetuating racism and double standards among ethnic groups. Writing forThe American Conservative,Rod Dreherquotes the account published inQuillettebyColeman Hughes, a black student atColumbia University, who said he was given an opportunity to play in a backup band for Grammy Award-winning pop artistRihannaat the 2016MTV Video Music AwardsShow. According to Hughes, several of his friends were also invited; however, one of them was fired and replaced because, according to Hughes, his white Hispanic background did not suit the all-black aesthetic that Rihanna's team had chosen for her show. 
The team had decided that all performers on stage were to be black, aside from Rihanna's regular guitar player.[14] Hughes was uncertain about whether he believed this action was unethical, given that the show was racially themed to begin with. He observed what he believed to be a double standard in the entertainment industry, saying, "if a black musician had been fired in order to achieve an all-white aesthetic — it would have made front page headlines. It would have been seen as an unambiguous moral infraction."[14] Dreher argues that Hughes's observations highlight the difficulty in distinguishing between the exclusion of one ethnic group in order to celebrate another, and the exclusion of an ethnic group as the exercise of racism or a double standard. Dreher also discussed another incident, in which New York Times columnist Bari Weiss, who is Jewish, was heavily criticized for tweeting, "Immigrants: They get the job done", in a positive reference to Mirai Nagasu, a Japanese-American Olympic ice skater, whom Weiss was trying to honor.[14] The public debate about ethnicity and double standards remains controversial and, by all appearances, will continue.
https://en.wikipedia.org/wiki/Double_standard
Doublethinkis a process ofindoctrinationin which subjects are expected to simultaneously accept two conflicting beliefs as truth, often at odds with their own memory or sense of reality.[1]Doublethink is related to, but differs from,hypocrisy. George Orwellcoined the termdoublethinkas part of the fictional language ofNewspeakin his 1949dystopiannovelNineteen Eighty-Four.[2]In the novel, its origins within the citizenry are unclear; while it could be partly a product ofBig Brother's formalbrainwashingprogrammes,[i]the novel explicitly shows people learning doublethink and Newspeak due topeer pressureand a desire to "fit in", or gain status within the Party—to be seen as a loyal Party Member. In the novel, for someone to even recognize—let alone mention—any contradiction within the context of the Party line is akin toblasphemy, and could subject that person to disciplinary action and the instant social disapproval of fellow Party Members.[2] According toNineteen Eighty-FourbyGeorge Orwell, doublethink is: "To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it, to believe that democracy was impossible and that the Party was the guardian of democracy, to forget whatever it was necessary to forget, then to draw it back into memory again at the moment when it was needed, and then promptly to forget it again, and above all, to apply the same process to the process itself—that was the ultimate subtlety: consciously to induce unconsciousness, and then, once again, to become unconscious of the act of hypnosis you had just performed. 
Even to understand the word—doublethink—involved the use of doublethink."[2][3] Orwell's doublethink is also credited with having inspired the commonly used term doublespeak, which itself does not appear in the book. Comparisons have been made between doublespeak and Orwell's descriptions of political speech in his essay "Politics and the English Language", in which "unscrupulous politicians, advertisers, religionists, and other 'doublespeakers' of whatever stripe, continue to abuse language for manipulative purposes."[4]
https://en.wikipedia.org/wiki/Doublethink
Paul Graham (/ɡræm/; born November 13, 1964)[3] is an English-American computer scientist, writer, essayist, entrepreneur and investor. His work includes the programming language Arc, the startup Viaweb (later renamed Yahoo! Store), co-founding the startup accelerator and seed capital firm Y Combinator, a number of essays and books, and the media website Hacker News. He is the author of the computer programming books On Lisp,[4] ANSI Common Lisp,[5] and Hackers & Painters.[6] Technology journalist Steven Levy has described Graham as a "hacker philosopher".[7] Graham was born in England, where he and his family have maintained a permanent residence since 2016. He is also a citizen of the United States, where he attended all of his schooling and lived for 48 years before returning to England. Graham and his family moved to Pittsburgh, Pennsylvania, in 1968, where he attended Gateway High School. Graham gained an interest in science and mathematics from his father, who was a nuclear physicist.[8] Graham received a Bachelor of Arts with a major in philosophy from Cornell University in 1986.[9][10][11] He then received a Master of Science in 1988 and a Doctor of Philosophy in 1990, both in computer science from Harvard University.[9][12] Graham has also studied fine arts and painting at the Rhode Island School of Design and at the Accademia di Belle Arti in Florence.[9][12] In 1996, Graham and Robert Morris founded Viaweb and recruited Trevor Blackwell shortly after. They believed that Viaweb was the first application service provider.[13] Graham received a patent for web apps based on his work at Viaweb.[14] Viaweb's software, written mostly in Common Lisp, allowed users to make their own Internet stores. In the summer of 1998, after Jerry Yang received a strong recommendation from Ali Partovi,[15] Viaweb was sold to Yahoo! for 455,000 shares of Yahoo! stock, valued at $49.6 million.[16] After the acquisition, the product became Yahoo! Store. Graham later gained notice for his essays, which he posts on his personal website.
Essay subjects range from "Beating the Averages",[17]which compares Lisp to otherprogramming languagesand introduced the hypothetical programming languageBlub, to "Why Nerds are Unpopular",[18]a discussion ofnerdlife in high school. A collection of his essays has been published asHackers & Painters[6]byO'Reilly Media, which includes a discussion of the growth of Viaweb and the advantages of Lisp to program it. In 2001, Graham announced that he was working on a newdialectof Lisp namedArc. It was released on 29 January 2008.[19]Over the years since, he has written several essays describing features or goals of the language, and some internal projects at Y Combinator have been written in Arc, including the Hacker News web forum and news aggregator program. In 2005, after giving a talk at the Harvard Computer Society later published as "How to Start a Startup", Graham along withTrevor Blackwell,Jessica Livingston, andRobert MorrisstartedY Combinatorto provideseed fundingtostartups, particularly those started by younger, more technically oriented founders. Y Combinator has invested in more than 1300 startups, includingReddit,Twitch(formerlyJustin.tv),Xobni,Dropbox,Airbnb, andStripe.[20] BusinessWeekincluded Paul Graham in the 2008 edition of its annual feature,The 25 Most Influential People on the Web.[21] In response to the proposedStop Online Piracy Act(SOPA), Graham announced in late 2011 that no representatives of any company supporting it would be invited to Y Combinator's Demo Day events.[22] In February 2014, Graham stepped down from his day-to-day role at Y Combinator.[23] In October 2019, Graham announced aspecificationfor another new dialect of Lisp, written in itself, named Bel.[24] Graham proposed a disagreement hierarchy in a 2008 essay "How to Disagree",[25]putting types ofargumentinto a seven-point hierarchy and observing that "If moving up the disagreement hierarchy makes people less mean, that will make most of them happier." 
Graham also suggested that the hierarchy can be thought of as a pyramid, as the highest forms of disagreement are rarer. Following this hierarchy, Graham notes that articulate forms of name-calling (e.g., "The author is a self-important dilettante") are no different from crude insults. When in disagreement, people often become more animated and engaged, and this leads to them becoming angry.[26] At the lower levels, the attacks are directed against the person, which can be hateful. Higher levels of argument are directed against the idea, which is easier to recognize and accept.[27] When people argue at the higher levels, the exchange of viewpoints is more informative and helpful.[28] Graham considers the hierarchy of programming languages with the example of Blub, a hypothetically average language "right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language."[29] It was used by Graham to illustrate a comparison, beyond Turing completeness, of programming language power, and more specifically to illustrate the difficulty of comparing a programming language one knows to one that one does not. ...These studies would like to formally prove that a certain language is more or less expressive than another language. Determining such a relation between languages objectively rather than subjectively seems to be somewhat problematic, a phenomenon that Paul Graham has discussed in "The Blub Paradox".[30][31] Graham considers a hypothetical Blub programmer. When the programmer looks down the "power continuum", they consider the lower languages to be less powerful because they miss some feature that a Blub programmer is used to. But when they look up, they fail to realize that they are looking up: they merely see "weird languages" with unnecessary features and assume they are equivalent in power, but with "other hairy stuff thrown in as well".
When Graham considers the point of view of a programmer using a language higher than Blub, he describes that programmer as looking down on Blub and noting its "missing" features from the point of view of the higher language.[30] Graham describes this asthe Blub paradoxand concludes that "By induction, the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one."[30] The concept has been cited by programmers such asJoel Spolsky.[32] In 2008, Graham marriedJessica Livingston.[33][34][35]They have two children, and have been living in England since 2016.[36][37]
https://en.wikipedia.org/wiki/Paul_Graham_(programmer)#Graham's_hierarchy_of_disagreement
Irony, in its broadest sense, is thejuxtapositionof what appears to be the case on the surface and what is actually the case or to be expected. It typically figures as arhetorical deviceandliterary technique. In some philosophical contexts, however, it takes on a larger significance as an entire way of life. Irony has been defined in many different ways, and there is no general agreement about the best way to organize its various types. The nebulous and difficult nature of its definition, however, some English speakers contend, has subjectedironyto abuse. 'Irony' comes from the Greekeironeia(εἰρωνεία) and dates back to the 5th century BCE. This term itself was coined in reference to a stock-character fromOld Comedy(such as that ofAristophanes) known as theeiron, who dissimulates and affects less intelligence than he has—and so ultimately triumphs over his opposite, thealazon, a vain-glorious braggart.[1][2][3] Although initially synonymous with lying, inPlato's dialogueseironeiacame to acquire a new sense of "an intended simulation which the audience or hearer was meant to recognise".[4]More simply put, it came to acquire the general definition, "the expression of one's meaning by using language that normally signifies the opposite, typically for humorous or emphatic effect".[5] Until theRenaissance, the Latinironiawas considered a part of rhetoric, usually a species ofallegory, along the lines established byCiceroandQuintiliannear the beginning of the 1st century CE.[6]"Irony" entered the English language as afigure of speechin the 16th century with a meaning similar to the Frenchironie, itself derived from the Latin.[7] Around the end of the 18th century, "irony" takes on another sense, primarily credited toFriedrich Schlegeland other participants in what came to be known asearly German Romanticism. 
They advance a concept of irony that is not a mere "artistic playfulness", but a "conscious form of literary creation", typically involving the "consistent alternation of affirmation and negation".[8]No longer just a rhetorical device, on their conception, it refers to an entire metaphysical stance on the world.[9] It is commonplace to begin a study of irony with the acknowledgement that the term quite simply eludes any single definition.[10][11][12]PhilosopherRichard J. Bernsteinopens hisIronic Lifewith the observation that a survey of the literature on irony leaves the reader with the "dominantimpression" that the authors are simply "talking about different subjects".[13]Indeed,Geoffrey Nunberg, alexical semantician, observes a trend ofsarcasmreplacing the linguistic role of verbal irony as a result of all this confusion.[14] In the 1906The King's English,Henry Watson Fowlerwrites, "any definition of irony—though hundreds might be given, and very few of them would be accepted—must include this, that the surface meaning and the underlying meaning of what is said are not the same." A consequence of this, he observes, is that an analysis of irony requires the concept of adouble audience"consisting of one party that hearing shall hear & shall not understand, & another party that, when more is meant than meets the ear, is aware both of that more & of the outsiders' incomprehension".[15] From this basic feature, literary theorist Douglas C. Muecke identifies three basic characteristics of all irony: According toWayne Booth, this uneven double-character of irony makes it a rhetorically complex phenomenon. Admired by some and feared by others, it has the power to tighten social bonds, but also to exacerbate divisions.[18] How best to organize irony into distinct types is almost as controversial as how best to define it. 
There have been many proposals, generally relying on the same cluster of types; still, there is little agreement as to how to organize the types and what if any hierarchical arrangements might exist. Nevertheless, academic reference volumes standardly include at least all four ofverbal irony,dramatic irony,cosmic irony, andRomantic ironyas major types.[19][20][21][22]The latter three types are sometimes contrasted with verbal irony as forms ofsituational irony, that is, irony in which there is no ironist; so, instead of "he is being ironical" one would instead say "it is ironical that".[23][9] Verbal ironyis "a statement in which the meaning that a speaker employs is sharply different from the meaning that is ostensibly expressed".[1]Moreover, it is producedintentionallyby the speaker, rather than being a literary construct, for instance, or the result of forces outside of their control.[19]Samuel Johnsongives as an example the sentence, "Bolingbrokewas a holy man" (he was anything but).[24][25]Verbal irony is sometimes also considered to encompass various other literary devices such ashyperboleand its opposite,litotes, conscious naïveté, and others.[26][27] Dramatic ironyprovides the audience with information of which characters are unaware, thereby placing the audience in a position of advantage to recognize their words and actions as counter-productive or opposed to what their situation actually requires.[28]Three stages may be distinguished — installation, exploitation, and resolution (often also called preparation, suspension, and resolution) — producing dramatic conflict in what one character relies or appears to rely upon, thecontraryof which is known by observers (especially the audience, sometimes to other characters within the drama) to be true.[29]Tragic ironyis a specific type of dramatic irony.[30] Cosmic irony, sometimes also called "the irony of fate", presents agents as always ultimately thwarted by forces beyond human control. 
It is strongly associated with the works ofThomas Hardy.[28][30]This form of irony is also given metaphysical significance in the work ofSøren Kierkegaard, among other philosophers.[8] Romantic ironyis closely related to cosmic irony, and sometimes the two terms are treated interchangeably.[9]Romantic irony is distinct, however, in that it is the author who assumes the role of the cosmic force. The narrator inTristram Shandyis one early example.[31]The term is closely associated withFriedrich Schlegeland theearly German Romantics, and in their hands it assumed a metaphysical significance similar to cosmic irony in the hands of Kierkegaard.[9]It was also of central importance to the literary theory advanced byNew Criticismin mid-20th century.[31][27] Building upon the double-level structure of irony, self-described "ironologist" D. C. Muecke proposes another, complementary way in which we may typify, and so better understand, ironic phenomena. What he proposes is a dual distinction between and among threegradesand fourmodesof ironic utterance. Grades of irony are distinguished "according to the degree to which the real meaning is concealed". Muecke names themovert,covert, andprivate:[32] Muecke's typology of modes are distinguished "according to the kind of relationship between the ironist and the irony". He calls theseimpersonal irony,self-disparaging irony,ingénue irony, anddramatized irony:[32] To consider irony from a rhetorical perspective means to consider it as an act of communication.[40]InA Rhetoric of Irony,Wayne C. 
Boothseeks to answer the question of "how we manage to share ironies and why we so often do not".[18] Because irony involves expressing something in a way contrary to literal meaning, it always involves a kind of "translation" on the part of the audience.[41]Booth identifies three principal kinds of agreement upon which the successful translation of irony depends: common mastery of language, shared cultural values, and (for artistic ironies) a common experience of genre.[42] A consequence of this element of in-group membership is that there is more at stake in whether one grasps an ironic utterance than there is in whether one grasps an utterance presented straight. As he puts it, the use of irony is An aggressively intellectual exercise that fuses fact and value, requiring us to construct alternative hierarchies and choose among them; [it] demands that we look down on other men's follies or sins; floods us with emotion-charged value judgments which claim to be backed by the mind; accuses other men not only of wrong beliefs but of being wrong at their very foundations and blind to what these foundations imply[.][43] This is why, when we misunderstand an intended ironic utterance, we often feel more embarrassed about our failure to recognize the incongruity than we typically do when we simply misunderstand a statement of fact.[44]When one's deepest beliefs are at issue, so too, often, is one's pride.[43]Nevertheless, even as it excludes its victims, irony also has the power to build and strengthen the community of those who do understand and appreciate.[45] Typically "irony" is used, as described above, with respect to some specific act or situation. In more philosophical contexts, however, the term is sometimes assigned a more general significance, in which it is used to describe an entire way of life or a universal truth about the human situation. 
Even Booth, whose interest is expressly rhetorical, notes that the word "irony" tends to attach to "a type of character — Aristophanes' foxyeirons, Plato's disconcerting Socrates — rather than to any one device".[46]In these contexts, what is expressed rhetorically by cosmic irony is ascribed existential or metaphysical significance. As Muecke puts it, such irony is that of "life itself or any general aspect of life seen as fundamentally and inescapably an ironic state of affairs. No longer is it a case of isolated victims .... we are all victims of impossible situations".[47][48] This usage has its origins primarily in the work ofFriedrich Schlegeland otherearly 19th-century German Romanticsand inSøren Kierkegaard's analysis ofSocratesinThe Concept of Irony.[49][48] Friedrich Schlegel was at the forefront of the intellectual movement that has come to be known asFrühromantik, or early German Romanticism, situated narrowly between 1797 and 1801.[50]For Schlegel, the "romantic imperative" (a rejoinder toImmanuel Kant's "categorical imperative") is to break down the distinction between art and life with the creation of a "new mythology" for the modern age.[51]In particular, Schlegel was responding to what he took to be the failure of thefoundationalistenterprise, exemplified for him by the philosophy ofJohann Gottlieb Fichte.[52] Irony is a response to the apparent epistemic uncertainties of anti-foundationalism. In the words of scholarFrederick C. Beiser, Schlegel presents irony as consisting in "the recognition that, even though we cannot attain truth, we still must forever strive toward it, because only then do we approach it." 
His model is Socrates, who "knew that he knew nothing", yet never ceased in his pursuit of truth and virtue.[53][54]According to Schlegel, instead of resting upon a single foundation, "the individual parts of a successful synthesis formation support and negate each other reciprocally".[55] Although Schlegel frequently does describe the Romantic project with a literary vocabulary, his use of the term "poetry" (Poesie) is non-standard. Instead, he goes back to the broader sense of the original Greekpoiētikós, which refers to any kind of making.[56]As Beiser puts it, "Schlegel intentionally explodes the narrow literary meaning ofPoesieby explicitly identifying the poetic with thecreative powerin human beings, and indeed with theproductive principlein nature itself." Poetry in the restricted literary sense is its highest form, but in no way its only form.[57] According to Schlegel, irony captures the human situation of always striving towards, but never completely possessing, what is infinite or true.[58] This presentation of Schlegel's account of irony is at odds with many 20th-century interpretations, which, neglecting the larger historical context, have been predominatelypostmodern.[59][60]These readings overstate the irrational dimension of early Romantic thought at the expense of its rational commitments—precisely the dilemma irony is introduced to resolve.[61] Already in Schlegel's own day,G. W. F. Hegelwas unfavorably contrasting Romantic irony with that of Socrates. On Hegel's reading, Socratic irony partially anticipates his owndialecticalapproach to philosophy. 
Romantic irony, by contrast, Hegel alleges to be fundamentally trivializing and opposed to all seriousness about what is of substantial interest.[62] According to Rüdiger Bubner, however, Hegel's "misunderstanding" of Schlegel's concept of irony is "total" in its denunciation of a figure actually intended to preserve "our openness to a systematic philosophy".[63] Yet, it is Hegel's interpretation that would be taken up and amplified by Kierkegaard, who further extends the critique to Socrates himself.[64] Thesis VIII of the Danish philosopher Søren Kierkegaard's dissertation, The Concept of Irony with Continual Reference to Socrates, states that "irony as infinite and absolute negativity is the lightest and the weakest form of subjectivity".[65] Although this terminology is Hegelian in origin, Kierkegaard employs it with a somewhat different meaning. Richard J. Bernstein elaborates: It is infinite because it is directed not against this or that particular existing entity, but against the entire given actuality at a certain time. It is thoroughly negative because it is incapable of offering any positive alternative. Nothing positive emerges out of this negativity. And it is absolute because Socrates refuses to cheat.[66] In this way, contrary to traditional accounts, Kierkegaard portrays Socrates as genuinely ignorant. According to Kierkegaard, Socrates is the embodiment of an ironic negativity that dismantles others' illusory knowledge without offering any positive replacement.[67] Almost all of Kierkegaard's post-dissertation publications were written under a variety of pseudonyms. Scholar K. Brian Söderquist argues that these fictive authors should be viewed as explorations of the existential challenges posed by such an ironic, poetic self-consciousness.
Their awareness of their own unlimited powers of self-interpretation prevents them from fully committing to any single self-narrative, and this leaves them trapped in an entirely negative mode of uncertainty.[68] Nevertheless, seemingly against this, Thesis XV of the dissertation states that "Just as philosophy begins with doubt, so also a life that may be called human begins with irony".[65] Bernstein writes that the emphasis here must be on begins.[66] Irony is not itself an authentic mode of life, but it is a precondition for attaining such a life. Although pure irony is self-destructive, it generates a space in which it becomes possible to reengage with the world in a genuine mode of ethical passion.[69] For Kierkegaard himself, this took the form of religious inwardness. What is crucial, however, is simply to move in some way beyond the purely (or merely) ironic. Irony is what creates the space in which we can learn and meaningfully choose how to live a life worthy (vita digna[70]) of being called human.[71][72] Referring to earlier self-conscious works such as Don Quixote and Tristram Shandy, D. C. Muecke points particularly to Peter Weiss's 1964 play, Marat/Sade. This work is a play within a play set in a lunatic asylum, in which it is difficult to tell whether the players are speaking only to other players or also directly to the audience. When The Herald says, "The regrettable incident you've just seen was unavoidable indeed foreseen by our playwright", there is confusion as to who is being addressed, the "audience" on the stage or the audience in the theatre. Also, since the play within the play is performed by the inmates of a lunatic asylum, the theatre audience cannot tell whether the paranoia displayed before them is that of the players, or of the people they are portraying.
Muecke notes that, "in America, Romantic irony has had a bad press", while "in England ... [it] is almost unknown."[73] In a book entitled English Romantic Irony, Anne Mellor writes, referring to Byron, Keats, Carlyle, Coleridge, and Lewis Carroll:[74] Romantic irony is both a philosophical conception of the universe and an artistic program. Ontologically, it sees the world as fundamentally chaotic. No order, no far goal of time, ordained by God or right reason, determines the progression of human or natural events […] Of course, romantic irony itself has more than one mode. The style of romantic irony varies from writer to writer […] But however distinctive the voice, a writer is a romantic ironist if and when his or her work commits itself enthusiastically both in content and form to a hovering or unresolved debate between a world of merely man-made being and a world of ontological becoming. Similarly, metafiction is: "Fiction in which the author self-consciously alludes to the artificiality or literariness of a work by parodying or departing from novelistic conventions (esp. naturalism) and narrative techniques."[75] It is a type of fiction that self-consciously addresses the devices of fiction, thereby exposing the fictional illusion. Gesa Giesing writes that "the most common form of metafiction is particularly frequent in Romantic literature. The phenomenon is then referred to as Romantic Irony." Giesing notes that "There has obviously been an increased interest in metafiction again after World War II."[76] For example, Patricia Waugh quotes from several works at the top of her chapter headed "What is metafiction?". These include: The thing is this.
That of all the several ways of beginning a book […] I am confident my own way of doing it is best Since I've started this story, I've gotten boils […] Additionally, The Cambridge Introduction to Postmodern Fiction says of John Fowles's The French Lieutenant's Woman, "For the first twelve chapters ... the reader has been able to immerse him or herself in the story, enjoying the kind of 'suspension of disbelief' required of realist novels ... what follows is a remarkable act of metafictional 'frame-breaking'". As evidence, chapter 13 "notoriously" begins: "I do not know. This story I am telling is all imagination. These characters I create never existed outside my own mind. […] if this is a novel, it cannot be a novel in the modern sense".[78] A fair amount of confusion has surrounded the issue of the relationship between verbal irony and sarcasm. For instance, various reference sources assert the following: The psychologist Rod A. Martin, in The Psychology of Humour (2007), is quite clear that irony is where "the literal meaning is opposite to the intended" and sarcasm is "aggressive humor that pokes fun".[84] He gives the following examples: for irony, the statement "What a nice day" when it is raining; for sarcasm, he cites Winston Churchill, who is supposed to have said, when told by Bessie Braddock that he was drunk, "But I shall be sober in the morning, and you will still be ugly", which he counts as sarcastic even though it does not say the opposite of what is intended. Psychology researchers Lee and Katz have addressed the issue directly. They found that ridicule is an important aspect of sarcasm, but not of verbal irony in general. By this account, sarcasm is a particular kind of personal criticism levelled against a person or group of persons that incorporates verbal irony. For example, a woman reports to her friend that rather than going to a medical doctor to treat her cancer, she has decided to see a spiritual healer instead.
In response her friend says sarcastically, "Oh, brilliant, what an ingenious idea, that's really going to cure you." The friend could also have replied with any number of ironic expressions that should not be labeled as sarcasm exactly, but that still share many elements with sarcasm.[85] Most instances of verbal irony are labeled by research subjects as sarcastic, suggesting that the term sarcasm is more widely used than its technical definition suggests it should be.[86] Some psycholinguistic theorists[87] suggest that sarcasm, hyperbole, understatement, rhetorical questions, double entendre, and jocularity should all be considered forms of verbal irony. The differences between these rhetorical devices (tropes) can be quite subtle and relate to the typical emotional reactions of listeners and the goals of the speakers. Regardless of the various ways theorists categorize figurative language types, people in conversation who are attempting to interpret speaker intentions and discourse goals do not generally identify the kinds of tropes used.[88] The more general casual usage of irony, meaning "a contradiction between circumstance and expectation", originated in the 1640s.[89] It has always been applicable in situations where there is no double audience, something required of only certain types of irony. Some speakers of English complain that the words irony and ironic are often misused;[90] the term is sometimes used as a synonym for incongruous and applied to "every trivial oddity".[91] The 1996 song "Ironic" by Alanis Morissette is an often-cited example. Meanwhile, Tim Conley cites the following:[92] Philip Howard assembled a list of seven implied meanings for the word "ironically", as it opens a sentence:
https://en.wikipedia.org/wiki/Irony
In logic, the law of noncontradiction (LNC; also known as the law of contradiction, principle of non-contradiction (PNC), or the principle of contradiction) states that propositions cannot be both true and false at the same time, e.g. the two propositions "the house is white" and "the house is not white" are mutually exclusive. Formally, this is expressed as the tautology ¬(p ∧ ¬p). For example, it is tautologous to say "the house is not both white and not white", since this results from substituting "the house is white" into that formula, yielding "not (the house is white and not (the house is white))", and then rewriting this in natural English. The law is not to be confused with the law of excluded middle, which states that at least one of two propositions like "the house is white" and "the house is not white" holds. One reason to have this law is the principle of explosion, which states that anything follows from a contradiction. The law is employed in a reductio ad absurdum proof. To express the fact that the law is tenseless and to avoid equivocation, the law is sometimes amended to say "contradictory propositions cannot both be true 'at the same time and in the same sense'". It is one of the so-called three laws of thought, along with its complement, the law of excluded middle, and the law of identity. However, no system of logic is built on just these laws, and none of these laws provides inference rules, such as modus ponens or De Morgan's laws. The law of non-contradiction and the law of excluded middle create a dichotomy in a so-called logical space, the points of which are all the consistent combinations of propositions. Each combination would contain exactly one member of each pair of contradictory propositions, so the space would have two parts which are mutually exclusive and jointly exhaustive. The law of non-contradiction is merely an expression of the mutually exclusive aspect of that dichotomy, and the law of excluded middle is an expression of its jointly exhaustive aspect.
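Because ¬(p ∧ ¬p) is a tautology, it can be checked mechanically by evaluating it under every truth assignment. A minimal sketch in Python (the function and variable names here are illustrative, not from any source):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Return True if the formula (a function of booleans) is true
    under every possible truth assignment to its variables."""
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

# Law of non-contradiction: not (p and not p)
lnc = lambda p: not (p and not p)

# Law of excluded middle: p or not p
lem = lambda p: p or not p

print(is_tautology(lnc, 1))  # True: holds under every assignment
print(is_tautology(lem, 1))  # True: holds under every assignment
print(is_tautology(lambda p: p, 1))  # False: "p" alone is contingent
```

The same brute-force enumeration also illustrates the dichotomy described above: of the 2^n truth assignments, each assigns exactly one truth value to every proposition, satisfying one member of each contradictory pair.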
One difficulty in applying the law of non-contradiction is ambiguity in the propositions.[1] For instance, if it is not explicitly specified as part of the propositions A and B, then A may be B at one time and not at another. A and B may in some cases be made to sound mutually exclusive linguistically even though A may be partly B and partly not B at the same time. However, it is impossible to predicate of the same thing, at the same time, and in the same sense, the absence and the presence of the same fixed quality. The Buddhist Tripitaka attributes to Nigaṇṭha Nātaputta, who lived in the 6th century BCE, an implicit formulation of the law of noncontradiction: "'See how upright, honest and sincere Citta, the householder, is'; and, a little later, he also says: 'See how Citta, the householder, is not upright, honest or sincere.' To this, Citta replies: 'if your former statement is true, your latter statement is false and if your latter statement is true, your former statement is false.'" Early explicit formulations of the law of noncontradiction were ontic, with the later 2nd-century Buddhist philosopher Nagarjuna stating "when something is a single thing, it cannot be both existent and non-existent", similar to Aristotle's own ontic formulation that "a thing cannot at the same time be and not be".[2] According to both Plato and Aristotle,[3] Heraclitus was said to have denied the law of non-contradiction. This is quite likely[4] if, as Plato pointed out, the law of non-contradiction does not hold for changing things in the world. If a philosophy of Becoming is not possible without change, then (the potential of) what is to become must already exist in the present object.
In "We step and do not step into the same rivers; we are and we are not", both Heraclitus's and Plato's object must simultaneously, in some sense, be both what it now is and have the potential (dynamic) of what it might become.[5] So little remains of Heraclitus's aphorisms that not much about his philosophy can be said with certainty. He seems to have held that the strife of opposites is universal both within and without; therefore both opposite existents or qualities must simultaneously exist, although in some instances in different respects. "The road up and down are one and the same" implies either that the road leads both ways, or that there can be no road at all. This is the logical complement of the law of non-contradiction. According to Heraclitus, change and the constant conflict of opposites are the universal logos of nature. Personal subjective perceptions or judgments can only be said to be true at the same time in the same respect, in which case the law of non-contradiction must be applicable to personal judgments. The most famous saying of Protagoras is: "Man is the measure of all things: of things which are, that they are, and of things which are not, that they are not".[6] However, Protagoras was referring to things that are used by or in some way related to humans. This makes a great difference in the meaning of his aphorism. Properties, social entities, ideas, feelings, judgments, etc. originate in the human mind. However, Protagoras never suggested that man must be the measure of the stars or of their motion. Parmenides employed an ontological version of the law of non-contradiction to prove that being is and to deny the void, change, and motion. He also similarly disproved contrary propositions.
In his poem On Nature, he said, the only routes of inquiry there are for thinking:

the one that [it] is and that [it] cannot not be
is the path of Persuasion (for it attends upon truth)
the other, that [it] is not and that it is right that [it] not be,
this I point out to you is a path wholly inscrutable
for you could not know what is not (for it is not to be accomplished)
nor could you point it out ... For the same thing is for thinking and for being

The nature of the 'is' or what-is in Parmenides is a highly contentious subject. Some have taken it to be whatever exists, some to be whatever is or can be the object of scientific inquiry.[7] In Plato's early dialogues, Socrates uses the elenctic method to investigate the nature or definition of ethical concepts such as justice or virtue. Elenctic refutation depends on a dichotomous thesis, one that may be divided into exactly two mutually exclusive parts, only one of which may be true. Then Socrates goes on to demonstrate the contrary of the commonly accepted part using the law of non-contradiction. According to Gregory Vlastos,[8] the method has the following steps: Plato's version of the law of non-contradiction states that "The same thing clearly cannot act or be acted upon in the same part or in relation to the same thing at the same time, in contrary ways" (The Republic, 436b). In this, Plato carefully phrases three axiomatic restrictions on action or reaction: in the same part, in the same relation, at the same time. The effect is to momentarily create a frozen, timeless state, somewhat like figures frozen in action on the frieze of the Parthenon.[9] This way, he accomplishes two essential goals for his philosophy. First, he logically separates the Platonic world of constant change[10] from the formally knowable world of momentarily fixed physical objects.[11][12] Second, he provides the conditions for the dialectic method to be used in finding definitions, as for example in the Sophist.
So Plato's law of non-contradiction is the empirically derived necessary starting point for all else he has to say.[13] In contrast, Aristotle reverses Plato's order of derivation. Rather than starting with experience, Aristotle begins a priori with the law of non-contradiction as the fundamental axiom of an analytic philosophical system.[14] This axiom then necessitates the fixed, realist model. Now, he starts with much stronger logical foundations than Plato's non-contrariety of action in reaction to conflicting demands from the three parts of the soul. The traditional source of the law of non-contradiction is Aristotle's Metaphysics, where he gives three different versions.[15] Aristotle attempts several proofs of this law. He first argues that every expression has a single meaning (otherwise we could not communicate with one another). This rules out the possibility that by "to be a man", "not to be a man" is meant. But "man" means "two-footed animal" (for example), and so if anything is a man, it is necessary (by virtue of the meaning of "man") that it must be a two-footed animal, and so it is impossible at the same time for it not to be a two-footed animal. Thus "it is not possible to say truly at the same time that the same thing is and is not a man" (Metaphysics 1006b 35). Another argument is that anyone who believes something cannot believe its contradiction (1008b): Avicenna's commentary on the Metaphysics illustrates the common view that the law of non-contradiction "and their like are among the things that do not require our elaboration." Avicenna's words for "the obdurate" are quite facetious: "he must be subjected to the conflagration of fire, since 'fire' and 'not fire' are one. Pain must be inflicted on him through beating, since 'pain' and 'no pain' are one.
And he must be denied food and drink, since eating and drinking and the abstention from both are one [and the same]."[19] Thomas Aquinas argued that the principle of non-contradiction is essential to the reasoning of human beings ("One cannot reasonably hold two mutually exclusive beliefs at the same time"). He argued that human reasoning without the principle of non-contradiction is utterly impossible, because reason itself cannot function with two contradictory ideas. Aquinas argued that this is the same for moral and theological arguments, and even for machinery ("the parts must work together; the machine can't work if two parts are incompatible").[20][21] Leibniz and Kant both used the law of non-contradiction to define the difference between analytic and synthetic propositions.[22] For Leibniz, analytic statements follow from the law of non-contradiction, and synthetic ones from the principle of sufficient reason. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica, as the tautology ¬(p ∧ ¬p). Graham Priest advocates the view that, under some conditions, some statements can be both true and false simultaneously, or may be true and false at different times. Dialetheism arises from formal logical paradoxes, such as the Liar's paradox and Russell's paradox, even though it is not the only solution to them.[24][25][26] The law of non-contradiction is alleged to be neither verifiable nor falsifiable, on the ground that any proof or disproof must use the law itself prior to reaching the conclusion. In other words, in order to verify or falsify the laws of logic one must resort to logic as a weapon, an act that is argued to be self-defeating.[27] Since the early 20th century, certain logicians have proposed logics that deny the validity of the law. Logics known as "paraconsistent" are inconsistency-tolerant in that, in them, an arbitrary proposition does not follow from P together with ¬P.
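Classically, by contrast, explosion is valid: a contradiction semantically entails every proposition, because no truth assignment satisfies the contradictory premise. This can be seen with a brute-force entailment check over truth assignments; a sketch (the helper name is illustrative, not from any source):

```python
from itertools import product

def entails(premise, conclusion, num_vars):
    """Classical semantic entailment: every truth assignment that makes
    the premise true must also make the conclusion true."""
    return all(conclusion(*v)
               for v in product([True, False], repeat=num_vars)
               if premise(*v))

# Ex contradictione quodlibet: "P and not P" entails an arbitrary Q.
# No assignment satisfies the premise, so the check holds vacuously.
print(entails(lambda p, q: p and not p, lambda p, q: q, 2))  # True

# A sanity check: P alone does not entail Q.
print(entails(lambda p, q: p, lambda p, q: q, 2))  # False
```

It is precisely this vacuous-entailment behavior that paraconsistent consequence relations are designed to block.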
Nevertheless, not all paraconsistent logics deny the law of non-contradiction, and some such logics even prove it.[28][29] Some, such as David Lewis, have objected to paraconsistent logic on the ground that it is simply impossible for a statement and its negation to be jointly true.[30] A related objection is that "negation" in paraconsistent logic is not really negation; it is merely a subcontrary-forming operator.[31][32] Those who (like the dialetheists) claim that the Law of Non-Contradiction can be violated are, on this view, using a different definition of negation, and are therefore talking about something other than the Law of Non-Contradiction, which is based on a particular definition of negation and therefore cannot be violated.[33] The Fargo episode "The Law of Non-Contradiction", which takes its name from the law, was noted for its several elements relating to the law of non-contradiction, as the episode's main character faces several paradoxes. For example, she is still the acting chief of police while having been demoted from the position, and tries to investigate a man who both was and was not named Ennis Stussy, and who both was and was not her stepfather. It also features the story of a robot who, after having spent millions of years unable to help humanity, is told that he greatly helped mankind all along by observing history.[34]
https://en.wikipedia.org/wiki/Law_of_noncontradiction
On Contradiction (simplified Chinese: 矛盾论; traditional Chinese: 矛盾論; pinyin: Máodùn Lùn; lit. 'To Discuss Contradiction') is a 1937 essay by the Chinese Communist revolutionary Mao Zedong. Along with On Practice, it forms the philosophical underpinnings of the political ideology that would later become Maoism. It was written in August 1937, as an interpretation of the philosophy of dialectical materialism, while Mao was at his guerrilla base in Yan'an. Mao suggests that all movement and life is a result of contradiction. Mao separates his paper into different sections: the two world outlooks, the universality of contradiction, the particularity of contradiction, the principal contradiction and the principal aspect of contradiction, the identity and struggle of the aspects of contradiction, the place of antagonism in contradiction, and finally the conclusion. Mao further develops the theme laid out in On Contradiction in his 1957 speech On the Correct Handling of Contradictions Among the People. Mao describes existence as being made up of constant transformation and contradiction. Nothing is constant, as in metaphysics, and things can only exist based on opposing contradictions. He uses the concept of contradiction to explain different Chinese historical time periods and social events. Mao's way of talking about contradiction creates a modified concept that brought forth the ideal of Chinese Marxism. This text continues to influence and educate Chinese Marxists.[1] Mao initially held views similar to those of a reformist or nationalist. He later said that he became a Marxist in 1919, when he took a second trip to Peking, although he had not declared his new belief at that time. In 1920, he met Chen Duxiu in Shanghai and discussed Marxist philosophy. Mao finally officially moved toward his new ideology when the Movement of Self-Government of Hunan failed. He found in Marxism a more reasonable approach to fixing society's problems. He once said, "Class struggle, some classes triumph, others are eliminated."
He understood the need for Marxist ideas and struggles in order to more effectively take on the developing world.[2] Like On Practice, On Contradiction was written by Mao during the Yan'an Period.[3]: 31 On Contradiction was written while the Communists and the Nationalists were in the Second United Front against the invading Japanese forces.[4]: 232 Some of the points made in "On Contradiction" were drawn and expanded from lectures Mao presented in 1937[5]: 8 at the Counter-Japanese University in Yan'an. These lectures drew from the work of Karl Marx, Friedrich Engels, and Vladimir Lenin.[5]: 8 Mao elaborated on their principles based on the practice of the Chinese Communist Party at the time.[5]: 8 Mao's research was concentrated on pieces by Chinese Marxist philosophers. The most influential philosopher that Mao studied was Ai Siqi. Mao not only read Ai's works but also knew him personally. Mao studied Marxism diligently in the year before he wrote his "Lecture Notes on Dialectical Materialism." He reviewed and annotated the Soviet Union's New Philosophy in order to actively understand the concept of dialectical materialism.[6] In addition to elaborating on his ideological and philosophic views, Mao wrote On Contradiction to help legitimize his political thinking within a Marxist framework and thus further solidify his leadership.[5]: 8–9 In dialectical materialism, contradiction, as derived by Karl Marx, usually refers to an opposition of social forces. This concept is one of the three main points of Marxism.[7] Mao held that capitalism is internally contradictory because different social classes have conflicting collective goals. These contradictions stem from the social structure of society and inherently lead to class conflict, economic crisis, and eventually revolution, the existing order's overthrow and the formerly oppressed classes' ascension to political power.
"The dialectic asserts that nothing is permanent and all things perish in time."[7] Dialectics is the "logic of change" and can explain the concepts of evolution and transformation. Materialism refers to the existence of only one world. It also holds that things can exist without the mind: things existed well before humans had knowledge of them. For materialists, consciousness is the mind, and it exists within the body rather than apart from it. All things are made of matter. Dialectical materialism combines the two concepts into an important Marxist ideal.[6] Mao saw dialectics as the study of contradiction, based on a statement made by Lenin.[8] The published text of On Contradiction begins by addressing Lenin's distinction between a metaphysical worldview and a dialectical worldview.[9]: 44 Mao frames the metaphysical worldview as one which treats things as unitary, static, and isolated.[9]: 44 In contrast, Mao frames the dialectical worldview as one which views things in dynamic interaction with each other while also being characterized by their own internal contradiction.[9]: 44 In the dialectical worldview, progress results from reconciling internal and external contradictions, producing new things with their own internal and external contradictions.[9]: 44–45 For a long time the metaphysical view was held by both Chinese and Europeans. Eventually, in Europe, the proletariat developed the dialectical materialist outlook, and the bourgeoisie opposed the view. Mao refers to the metaphysicians as "vulgar evolutionists." They believe in a static and unchanging world where things repeat themselves rather than changing with history; such a view cannot explain change and development over time.[8] In dialectics, things are understood by their internal change and their relationship with other objects. Contradiction within an object fuels its development and evolution.
Hegel developed a dialectical idealism before Marx and Engels combined dialectics with materialism, and Lenin and Stalin further developed it. With dialectical materialism we can look at the concrete differences between objects and further understand their growth.[8] The absoluteness of contradiction has a twofold meaning.[10] One is that contradiction exists in the process of development of all things, and the other is that in the process of development of each thing a movement of opposites exists from beginning to end. Contradiction is the basis of life and drives it forward. No phenomenon can exist without its contradictory opposite, such as victory and defeat.[8] The "unity of opposites" allows for a balance of contradiction. A most basic example of the cycle of contradiction is life and death. Contradictions can be found in mechanics, mathematics, science, social life, etc.[11] Deborin claims that there is only difference to be found in the world. Mao counters this, saying that difference is made up of contradiction and is contradiction.[8] "No society—past, present, or future—could escape contradictions, for this was a characteristic of all matter in the universe."[12] Mao finds the best way to talk about the relativity of contradiction is to look at it in several different parts.[10] "The contradiction in each form of motion of matter has its particularity." This contradiction is the essence of a thing. When one can identify the particular essence, one can understand the object. These particular contradictions also differentiate one object from another. Knowledge is developed from cognition that can move from the general to the particular or from the particular to the general. When old processes change, new processes and contradictions emerge. Each contradiction has its own way of being solved, and the resolution must be found according to the particular contradiction. Particular contradictions also have particular aspects that have specific ways of being handled.
Mao believes that one must look at things objectively when reviewing a conflict. When one is biased and subjective, one cannot fully understand the contradictions and aspects of an object. This is the way people should go about "studying the particularity of any kind of contradiction – the contradiction in each form of motion of matter, the contradiction in each of its processes of development, the two aspects of that contradiction in each process, the contradiction at each stage of a process, and the two aspects of the contradiction at each stage." Universality and particularity of a contradiction can be viewed as the general and individual character of a contradiction. These two concepts depend on each other for existence. Mao says the idea of these two characters is necessary for understanding dialectics.[8] Mao observes that subjective thought can motivate humans to change their objective situation.[13]: 6 On Contradiction introduces Mao's concepts of the "principal contradiction" and the "principal aspect of the contradiction".[13]: 6 The principal contradiction is the contradiction whose resolution is decisive for resolving the secondary contradictions.[13]: 6 The principal aspect of the contradiction is the side of the contradiction whose positive development will decisively resolve the contradiction.[13]: 6 This subject focuses on the concept of one contradiction allowing other contradictions to exist.[10] For example, in a capitalist society, the contradiction between the proletariat and the bourgeoisie allows other contradictions, such as the one between imperialists and their colonies. According to Mao, complex phenomena have multiple contradictions, but one can always be identified as "play[ing] the lead role".[4]: 234 When looking at numerous contradictions, one must understand which contradiction is superior. One must also remember that the principal and non-principal contradictions are not static and will, over time, transform into one another.
This also causes a transformation of the nature of the thing, for the principal contradiction is what primarily defines the thing. These two kinds of contradiction show that nothing is created equal: there is a lack of balance that allows one contradiction to be superior to another. Mao uses examples from Chinese history and society to illustrate the concept of a principal contradiction and its continual changing.[8] "Neither imperialist oppression of the colonies nor the fate of the colonies to suffer under that oppression can last forever." Based on the idea of contradiction, one day the oppression will end and the colonies will gain power and freedom.[11] Mao defines identity by two different thoughts: the two aspects of a contradiction coexist, and the aspects can transform into one another.[10] Any one aspect is dependent on the existence of at least one other aspect. Without death, there could be no life; without unhappiness, there could be no joy. Mao finds the more important point also to be a factor of identity: contradictions can transform into one another. In certain situations and under certain conditions, the contradictions coexist and change into one another. Identity both separates the contradictions and allows for the struggle between the contradictions; the identity is the contradiction. The two contradictions in an object inspire two forms of movement, relative rest and conspicuous change. Initially, an object changes quantitatively and seems to be at rest. Eventually, the accumulation of the changes from the initial movement causes the object to seem to be conspicuously changing. Objects are constantly going through this process of motion; however, the struggle between opposites happens in both states and is resolved only in the second.[8] Transformation is motivated by the unity between contradictions. Both the particular condition of movement and the general condition of movement are conditions under which contradictions can move.
This movement is absolute and is considered a struggle.[11] Antagonistic contradiction (Chinese: 矛盾; pinyin: máodùn) is the impossibility of compromise between different social classes.[10] The term is usually attributed to Vladimir Lenin, although he may never have actually used it in any of his written works. The term is most often applied in Maoist theory, which holds that the differences between the two primary classes, the working class/proletariat and the bourgeoisie, are so great that there is no way to reconcile their views. Because the groups involved have diametrically opposed concerns, their objectives are so dissimilar and contradictory that no mutually acceptable resolution can be found. Non-antagonistic contradictions may be resolved through mere debate, but antagonistic contradictions can only be resolved through struggle. In Maoism, the antagonistic contradiction was usually that between the peasantry and the landowning class. Mao Zedong expressed his views on the policy in his famous February 1957 speech On the Correct Handling of Contradictions Among the People. Mao describes antagonistic contradiction as the "struggle of opposites", an absolute and universal concept. When one tries to resolve an antagonistic contradiction, one must base the solution on each particular situation. As with any other concept, there are two sides: there can be antagonistic contradictions and non-antagonistic contradictions. Contradiction and antagonism are not equivalent, and one can exist without the other.[8] Also, contradictions do not have to develop into antagonistic ones. An example of antagonism and non-antagonism can be found in two opposing states. They may continually struggle and disagree because of their opposite ideologies, but they will not always be at war with one another.[11] Avoiding antagonism requires an open space in which contradictions can emerge and be resolved objectively.
The non-antagonistic contradictions "exist among 'the people'," and the antagonistic contradictions are "between the enemy and the people."[12] In the conclusion, Mao sums up the points made in his essay. The law of contradiction is a fundamental basis of dialectical materialist thought. Contradiction is present in all things and allows all objects to exist. A contradiction depends on other contradictions for its existence and can transform itself into another contradiction. Contradictions are distinguished by superiority and can sometimes have antagonistic relationships with one another. Each contradiction is particular to certain objects and gives objects their identity. Understanding all of Mao's points gives one a grasp of this dense topic in Marxist thought.[14] On Contradiction, along with Mao's text On Practice, elevated Mao's reputation as a Marxist theoretician.[15]: 37 It became a foundational text of Mao Zedong Thought.[5]: 9 After Mao was celebrated in the Eastern Bloc following China's intervention in the Korean War, both texts became widely read in the USSR.[15]: 38 In April 1960, Petroleum Minister Yu Qiuli stated that On Contradiction (along with On Practice) would be the ideological core of the campaign to develop the Daqing Oil Field in northeast China.[16]: 150 Yu's efforts to mobilize workers in Daqing focused on ideological motivation rather than material incentives.[17]: 52–53 The Ministry of the Petroleum Industry shipped thousands of copies of the texts by plane so that every Daqing oil worker would have copies and so that work units could each set up their own study groups.[16]: 150 The successful completion of Daqing despite harsh weather and supply limitations became a model held up by the Communist Party during subsequent industrialization campaigns.[17]: 52–54
https://en.wikipedia.org/wiki/On_Contradiction
An oxymoron (plurals: oxymorons and oxymora) is a figure of speech that juxtaposes concepts with opposite meanings within a word or in a phrase that is a self-contradiction. As a rhetorical device, an oxymoron illustrates a point to communicate and reveal a paradox.[1][2] A general meaning of "contradiction in terms" is recorded by the 1902 edition of the Oxford English Dictionary.[3] The term oxymoron is first recorded as Latinized Greek oxymōrum, in Maurus Servius Honoratus (c. AD 400);[4] it is derived from the Greek word ὀξύς oksús "sharp, keen, pointed"[5] and μωρός mōros "dull, stupid, foolish";[6] as it were, "sharp-dull", "keenly stupid", or "pointedly foolish".[7] The word oxymoron is autological, i.e., it is itself an example of an oxymoron. The Greek compound word ὀξύμωρον oksýmōron, which would correspond to the Latin formation, does not appear in any Ancient Greek works prior to the formation of the Latin term.[8] Oxymorons in the narrow sense are a rhetorical device used deliberately by the speaker and intended to be understood as such by the listener. In a more extended sense, the term "oxymoron" has also been applied to inadvertent or incidental contradictions, as in the case of "dead metaphors" ("barely clothed" or "terribly good"). Lederer (1990), in the spirit of "recreational linguistics", goes as far as to construct "logological oxymorons" such as reading the word nook as composed of "no" and "ok" or the surname Noyes as composed of "no" plus "yes", or refers to some oxymoronic candidates as puns through the conversion of nouns into verbs, as in "divorce court" or "press release". He refers to potential oxymora such as "war games", "peacekeeping missile", "United Nations", and "airline food" as opinion-based, because some may disagree that they contain an internal contradiction.[9] There are a number of single-word oxymorons built from "dependent morphemes"[9] (i.e. no longer a productive compound in English, but loaned as a compound from a different language), as with pre-posterous (lit.
"with the hinder part before", compare hysteron proteron, "upside-down", "head over heels", "ass-backwards" etc.)[10] or sopho-more (an artificial Greek compound, lit. "wise-foolish"). The most common form of oxymoron involves an adjective–noun combination of two words, but they can also be devised in the meaning of sentences or phrases. One classic example of the use of oxymorons in English literature can be found in this example from Shakespeare's Romeo and Juliet, where Romeo strings together thirteen in a row:[11]

O brawling love! O loving hate!
O anything of nothing first create!
O heavy lightness, serious vanity!
Misshapen chaos of well-seeming forms!
Feather of lead, bright smoke, cold fire, sick health!
Still-waking sleep, that is not what it is!
This love feel I, that feel no love in this.

Other examples from English-language literature include: "hateful good" (Chaucer, translating odibile bonum),[12] "proud humility" (Spenser),[13] "darkness visible" (Milton), "beggarly riches" (John Donne),[14] "damn with faint praise" (Pope),[15] "expressive silence" (Thomson, echoing Cicero's Latin: cum tacent clamant, lit. 'when they are silent, they cry out'), "melancholy merriment" (Byron), "faith unfaithful", "falsely true" (Tennyson),[16] "conventionally unconventional", "tortuous spontaneity" (Henry James),[17] "delighted sorrow", "loyal treachery", and "scalding coolness" (Hemingway).[18] In literary contexts, the author does not usually signal the use of an oxymoron, but in rhetorical usage it has become common practice to advertise the use of an oxymoron explicitly to clarify the argument, as in the phrase "Epicurean pessimist". In this example, "Epicurean pessimist" would be recognized as an oxymoron in any case, as the core tenet of Epicureanism is equanimity (which would preclude any sort of pessimist outlook). However, the explicit advertisement of the use of oxymorons opened up a sliding scale of less-than-obvious constructions, ending in the "opinion oxymorons" such as "business ethics". J. R. R. Tolkien interpreted his own surname as derived from the Low German equivalent of dull-keen (High German toll-kühn), which would be a literal equivalent of Greek oxy-moron.[19] "Comical oxymoron" is a humorous claim that something is an oxymoron. This is called an "opinion oxymoron" by Lederer (1990).[9] The humor derives from implying that an assumption (which might otherwise be expected to be controversial or at least non-evident) is so obvious as to be part of the lexicon. An example of such a "comical oxymoron" is "educational television": the humor derives entirely from the claim that it is an oxymoron by the implication that "television" is so trivial as to be inherently incompatible with "education".[20] In a 2009 article called "Daredevil", Garry Wills accused William F. Buckley of popularizing this trend, based on the success of the latter's claim that "an intelligent liberal is an oxymoron".[21] Examples popularized by comedian George Carlin in 1975 include "military intelligence" (a play on the lexical meanings of the term "intelligence", implying that "military" inherently excludes the presence of "intelligence") and "business ethics" (similarly implying that the mutual exclusion of the two terms is evident or commonly understood, rather than the partisan anti-corporate position).[22] Similarly, the term "civil war" is sometimes jokingly referred to as an "oxymoron" (punning on the lexical meanings of the word "civil").[23] Other examples include "honest politician", "affordable caviar" (1993),[24] "happily married" and "Microsoft Works" (2000).[25] Listing of antonyms, such as "good and evil", "great and small", etc., does not create oxymorons, as it is not implied that any given object has the two opposing properties simultaneously. In some languages, it is not necessary to place a conjunction like and between the two antonyms; such compounds (not necessarily of antonyms) are known as dvandvas (a term taken from Sanskrit grammar).
For example, in Chinese, compounds like 男女 (man and woman, male and female, gender), 陰陽 (yin and yang), and 善惡 (good and evil, morality) are used to indicate couples, ranges, or the trait of which these are the extremes. The Italian pianoforte or fortepiano is an example from a Western language; the term is short for gravicembalo col piano e forte, as it were "harpsichord with a range of different volumes", implying that it is possible to play both soft and loud (as well as intermediate) notes, not that the sound produced is somehow simultaneously "soft and loud".
https://en.wikipedia.org/wiki/Oxymoron
Paraconsistent logic is a type of non-classical logic that allows for the coexistence of contradictory statements without leading to a logical explosion in which anything can be proven true. Specifically, paraconsistent logic is the subfield of logic that is concerned with studying and developing "inconsistency-tolerant" systems of logic, purposefully excluding the principle of explosion. Inconsistency-tolerant logics have been discussed since at least 1910 (and arguably much earlier, for example in the writings of Aristotle);[1] however, the term paraconsistent ("beside the consistent") was first coined in 1976, by the Peruvian philosopher Francisco Miró Quesada Cantuarias.[2] The study of paraconsistent logic has been dubbed paraconsistency,[3] which encompasses the school of dialetheism. In classical logic (as well as intuitionistic logic and most other logics), contradictions entail everything. This feature, known as the principle of explosion or ex contradictione sequitur quodlibet (Latin, "from a contradiction, anything follows"),[4] can be expressed formally as

P, ¬P ⊢ A

which means: if P and its negation ¬P are both assumed to be true, then of the two claims P and (some arbitrary) A, at least one is true. Therefore, P or A is true. However, if we know that either P or A is true, and also that P is false (that ¬P is true), we can conclude that A, which could be anything, is true. Thus if a theory contains a single inconsistency, the theory is trivial – that is, it has every sentence as a theorem. The characteristic or defining feature of a paraconsistent logic is that it rejects the principle of explosion. As a result, paraconsistent logics, unlike classical and other logics, can be used to formalize inconsistent but non-trivial theories. The entailment relations of paraconsistent logics are propositionally weaker than classical logic; that is, they deem fewer propositional inferences valid.
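The explosion entailment above can be checked semantically by brute force. The sketch below (illustrative Python, not from the source) enumerates all two-valued rows and confirms that classical entailment from {P, ¬P} to an unrelated atom A holds vacuously, since no row makes both premises true:

```python
from itertools import product

def classical_entails(premises, conclusion, atoms):
    """Brute-force semantic entailment over all two-valued rows.

    premises and conclusion are functions from a valuation dict to bool;
    entailment holds iff every row satisfying all premises also
    satisfies the conclusion.
    """
    for row in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, row))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

P = lambda v: v["P"]
notP = lambda v: not v["P"]
A = lambda v: v["A"]

# Explosion: {P, ¬P} entails the unrelated atom A -- vacuously so,
# because no two-valued row makes both premises true.
print(classical_entails([P, notP], A, ["P", "A"]))  # True
```

Because the premise set is classically unsatisfiable, the entailment check never finds a countermodel; this vacuous success is exactly the explosion that paraconsistent logics reject.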
The point is that a paraconsistent logic can never be a propositional extension of classical logic; that is, it cannot propositionally validate every entailment that classical logic does. In some sense, then, paraconsistent logic is more conservative or cautious than classical logic. It is due to such conservativeness that paraconsistent languages can be more expressive than their classical counterparts, including the hierarchy of metalanguages due to Alfred Tarski and others. According to Solomon Feferman: "natural language abounds with directly or indirectly self-referential yet apparently harmless expressions—all of which are excluded from the Tarskian framework."[5] This expressive limitation can be overcome in paraconsistent logic. A primary motivation for paraconsistent logic is the conviction that it ought to be possible to reason with inconsistent information in a controlled and discriminating way. The principle of explosion precludes this, and so must be abandoned. In non-paraconsistent logics, there is only one inconsistent theory: the trivial theory that has every sentence as a theorem. Paraconsistent logic makes it possible to distinguish between inconsistent theories and to reason with them. Research into paraconsistent logic has also led to the establishment of the philosophical school of dialetheism (most notably advocated by Graham Priest), which asserts that true contradictions exist in reality, for example groups of people holding opposing views on various moral issues.[6] Being a dialetheist rationally commits one to some form of paraconsistent logic, on pain of otherwise embracing trivialism, i.e. accepting that all contradictions (and equivalently all statements) are true.[7] However, the study of paraconsistent logics does not necessarily entail a dialetheist viewpoint.
For example, one need not commit to either the existence of true theories or true contradictions, but would rather prefer a weaker standard like empirical adequacy, as proposed by Bas van Fraassen.[8] In classical logic, Aristotle's three laws, namely the excluded middle (p ∨ ¬p), non-contradiction ¬(p ∧ ¬p), and identity (p iff p), are regarded as the same, due to the inter-definition of the connectives. Moreover, traditionally contradictoriness (the presence of contradictions in a theory or in a body of knowledge) and triviality (the fact that such a theory entails all possible consequences) are assumed inseparable, granted that negation is available. These views may be philosophically challenged, precisely on the grounds that they fail to distinguish between contradictoriness and other forms of inconsistency. On the other hand, it is possible to derive triviality from the 'conflict' between consistency and contradictions, once these notions have been properly distinguished. The very notions of consistency and inconsistency may furthermore be internalized at the object language level. Paraconsistency involves tradeoffs. In particular, abandoning the principle of explosion requires one to abandon at least one of the following two principles:[9] disjunction introduction (A ⊢ A ∨ B) and disjunctive syllogism (A ∨ B, ¬A ⊢ B). Both of these principles have been challenged. One approach is to reject disjunction introduction but keep disjunctive syllogism and transitivity. In this approach, rules of natural deduction hold, except for disjunction introduction and excluded middle; moreover, inference A ⊢ B does not necessarily mean entailment A ⇒ B. Also, the following usual Boolean properties hold: double negation as well as associativity, commutativity, distributivity, De Morgan, and idempotence inferences (for conjunction and disjunction). Furthermore, inconsistency-robust proof of negation holds for entailment: (A ⇒ (B ∧ ¬B)) ⊢ ¬A. Another approach is to reject disjunctive syllogism.
From the perspective of dialetheism, it makes perfect sense that disjunctive syllogism should fail. The idea behind this syllogism is that, if ¬A, then A is excluded and B can be inferred from A ∨ B. However, if A may hold as well as ¬A, then the argument for the inference is weakened. Yet another approach is to do both simultaneously. In many systems of relevant logic, as well as linear logic, there are two separate disjunctive connectives. One allows disjunction introduction, and one allows disjunctive syllogism. Of course, this has the disadvantages entailed by separate disjunctive connectives, including confusion between them and complexity in relating them. Furthermore, the rule of proof of negation

(A ⇒ (B ∧ ¬B)) ⊢ ¬A

just by itself is inconsistency non-robust in the sense that the negation of every proposition can be proved from a contradiction. Strictly speaking, having just the rule above is paraconsistent because it is not the case that every proposition can be proved from a contradiction. However, if the rule of double negation elimination (¬¬A ⊢ A) is added as well, then every proposition can be proved from a contradiction. Double negation elimination does not hold for intuitionistic logic. One example of paraconsistent logic is the system known as LP ("Logic of Paradox"), first proposed by the Argentine logician Florencio González Asenjo in 1966 and later popularized by Priest and others.[10] One way of presenting the semantics for LP is to replace the usual functional valuation with a relational one.[11] The binary relation V relates a formula to a truth value: V(A, 1) means that A is true, and V(A, 0) means that A is false. A formula must be assigned at least one truth value, but there is no requirement that it be assigned at most one truth value.
The semantic clauses for negation and disjunction are given as follows:

V(¬A, 1) if and only if V(A, 0)
V(¬A, 0) if and only if V(A, 1)
V(A ∨ B, 1) if and only if V(A, 1) or V(B, 1)
V(A ∨ B, 0) if and only if V(A, 0) and V(B, 0)

(The other logical connectives are defined in terms of negation and disjunction as usual.) Or to put the same point less symbolically: a negation is true exactly when what it negates is false, and false exactly when what it negates is true; a disjunction is true exactly when at least one disjunct is true, and false exactly when both disjuncts are false. (Semantic) logical consequence is then defined as truth-preservation: Γ ⊨ A if and only if A is true whenever every element of Γ is true. Now consider a valuation V such that V(A, 1) and V(A, 0) but it is not the case that V(B, 1). It is easy to check that this valuation constitutes a counterexample to both explosion and disjunctive syllogism. However, it is also a counterexample to modus ponens for the material conditional of LP. For this reason, proponents of LP usually advocate expanding the system to include a stronger conditional connective that is not definable in terms of negation and disjunction.[12] As one can verify, LP preserves most other inference patterns that one would expect to be valid, such as De Morgan's laws and the usual introduction and elimination rules for negation, conjunction, and disjunction. Surprisingly, the logical truths (or tautologies) of LP are precisely those of classical propositional logic.[13] (LP and classical logic differ only in the inferences they deem valid.) Relaxing the requirement that every formula be either true or false yields the weaker paraconsistent logic commonly known as first-degree entailment (FDE). Unlike LP, FDE contains no logical truths. LP is only one of many paraconsistent logics that have been proposed.[14] It is presented here merely as an illustration of how a paraconsistent logic can work. One important type of paraconsistent logic is relevance logic. A logic is relevant if it satisfies the following condition: whenever A → B is a theorem, A and B share a non-logical constant (in the propositional case, a propositional variable). It follows that a relevance logic cannot have (p ∧ ¬p) → q as a theorem, and thus (on reasonable assumptions) cannot validate the inference from {p, ¬p} to q.
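The relational valuation can be modeled by assigning each formula a nonempty set of classical values, with {0, 1} playing the role of a truth-value "glut". The following sketch (illustrative Python, not from the source) reproduces the counterexample valuation described above:

```python
# LP truth values as nonempty subsets of {0, 1}: {1} = true only,
# {0} = false only, {0, 1} = both true and false (a "glut").
T, F, B = frozenset({1}), frozenset({0}), frozenset({0, 1})

def neg(a):
    # A negation is true iff the formula is false, false iff it is true.
    return frozenset(1 - x for x in a)

def disj(a, b):
    # True iff some disjunct is true; false iff both disjuncts are false.
    out = set()
    if 1 in a or 1 in b:
        out.add(1)
    if 0 in a and 0 in b:
        out.add(0)
    return frozenset(out)

def designated(a):
    """A formula is accepted when truth is among its values."""
    return 1 in a

# The counterexample valuation from the text: V(A,1) and V(A,0) hold,
# but V(B,1) does not.
vA, vB = B, F

# Explosion fails: A and ¬A are both designated, yet B is not.
print(designated(vA), designated(neg(vA)), designated(vB))  # True True False

# Disjunctive syllogism fails: A ∨ B and ¬A are designated, B is not.
print(designated(disj(vA, vB)), designated(neg(vA)))  # True True

# Modus ponens for the material conditional ¬A ∨ B fails here too.
print(designated(disj(neg(vA), vB)))  # True, although B is not designated
```

The same glut valuation thus witnesses all three failures at once: explosion, disjunctive syllogism, and modus ponens for the material conditional, which is why LP proponents look for a stronger primitive conditional.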
Paraconsistent logic has significant overlap with many-valued logic; however, not all paraconsistent logics are many-valued (and, of course, not all many-valued logics are paraconsistent). Dialetheic logics, which are also many-valued, are paraconsistent, but the converse does not hold. The ideal 3-valued paraconsistent logic given below becomes the logic RM3 when the contrapositive is added. Intuitionistic logic allows A ∨ ¬A not to be equivalent to true, while paraconsistent logic allows A ∧ ¬A not to be equivalent to false. Thus it seems natural to regard paraconsistent logic as the "dual" of intuitionistic logic. However, intuitionistic logic is a specific logical system, whereas paraconsistent logic encompasses a large class of systems. Accordingly, the dual notion to paraconsistency is called paracompleteness, and the "dual" of intuitionistic logic (a specific paracomplete logic) is a specific paraconsistent system called anti-intuitionistic or dual-intuitionistic logic (sometimes referred to as Brazilian logic, for historical reasons).[15] The duality between the two systems is best seen within a sequent calculus framework. While in intuitionistic logic the sequent ⊢ A ∨ ¬A is not derivable, in dual-intuitionistic logic A ∧ ¬A ⊢ is not derivable[citation needed]. Similarly, in intuitionistic logic the sequent ¬¬A ⊢ A is not derivable, while in dual-intuitionistic logic A ⊢ ¬¬A is not derivable. Dual-intuitionistic logic contains a connective # known as pseudo-difference, which is the dual of intuitionistic implication. Very loosely, A # B can be read as "A but not B". However, # is not truth-functional as one might expect a 'but not' operator to be; similarly, the intuitionistic implication operator cannot be treated like "¬(A ∧ ¬B)".
Dual-intuitionistic logic also features a basic connective ⊤ which is the dual of intuitionistic ⊥: negation may be defined as ¬A = (⊤ # A). A full account of the duality between paraconsistent and intuitionistic logic, including an explanation of why dual-intuitionistic and paraconsistent logics do not coincide, can be found in Brunner and Carnielli (2005). These other logics avoid explosion: implicational propositional calculus, positive propositional calculus, equivalential calculus and minimal logic. The latter, minimal logic, is both paraconsistent and paracomplete (a subsystem of intuitionistic logic). The other three simply do not allow one to express a contradiction to begin with, since they lack the ability to form negations. Here is an example of a three-valued logic which is paraconsistent and ideal as defined in "Ideal Paraconsistent Logics" by O. Arieli, A. Avron, and A. Zamansky, especially pages 22–23.[16] The three truth-values are: t (true only), b (both true and false), and f (false only). A formula is true if its truth-value is either t or b for the valuation being used. A formula is a tautology of paraconsistent logic if it is true in every valuation which maps atomic propositions to {t, b, f}. Every tautology of paraconsistent logic is also a tautology of classical logic. For a valuation, the set of true formulas is closed under modus ponens and the deduction theorem. Any tautology of classical logic which contains no negations is also a tautology of paraconsistent logic (by merging b into t). This logic is sometimes referred to as "Pac" or "LFI1". Some tautologies of classical logic are not tautologies of paraconsistent logic. Suppose we are faced with a contradictory set of premises Γ and wish to avoid being reduced to triviality. In classical logic, the only method one can use is to reject one or more of the premises in Γ. In paraconsistent logic, we may try to compartmentalize the contradiction.
That is, we weaken the logic so that Γ→X is no longer a tautology provided the propositional variable X does not appear in Γ. However, we do not want to weaken the logic any more than is necessary for that purpose. So we wish to retain modus ponens and the deduction theorem, as well as the axioms which are the introduction and elimination rules for the logical connectives (where possible). To this end, we add a third truth-value b which will be employed within the compartment containing the contradiction. We make b a fixed point of all the logical connectives. We must make b a kind of truth (in addition to t) because otherwise there would be no tautologies at all. To ensure that modus ponens works, we must have (t→f) = f and (b→f) = f; that is, to ensure that a true hypothesis and a true implication lead to a true conclusion, we must have that a not-true (f) conclusion and a true (t or b) hypothesis yield a not-true implication. If all the propositional variables in Γ are assigned the value b, then Γ itself will have the value b. If we give X the value f, then Γ→X will have the value (b→f) = f. So Γ→X will not be a tautology. Limitations: (1) There must not be constants for the truth values because that would defeat the purpose of paraconsistent logic. Having b would change the language from that of classical logic. Having t or f would allow the explosion again because formulas such as f→X would be tautologies. Note that b is not a fixed point of those constants since b ≠ t and b ≠ f. (2) This logic's ability to contain contradictions applies only to contradictions among particularized premises, not to contradictions among axiom schemas. (3) The loss of disjunctive syllogism may result in insufficient commitment to developing the 'correct' alternative, possibly crippling mathematics. (4) To establish that a formula Γ is equivalent to Δ, in the sense that either can be substituted for the other wherever they appear as a subformula, one must show that Γ→Δ, Δ→Γ, ¬Γ→¬Δ, and ¬Δ→¬Γ are all tautologies. This is more difficult than in classical logic because the contrapositives do not necessarily follow.
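A minimal model of the three-valued logic just described can make the compartmentalization concrete. In this sketch (illustrative Python, not from the source; conjunction and disjunction are assumed to be min and max under the ordering f < b < t, which keeps b a fixed point, and the conditional follows the modus ponens constraint above):

```python
from itertools import product

t, b, f = "t", "b", "f"
VALS = (t, b, f)
DESIGNATED = {t, b}          # b is "a kind of truth" alongside t
ORDER = {f: 0, b: 1, t: 2}   # assumed ordering f < b < t

def neg(x):
    return {t: f, b: b, f: t}[x]     # b is a fixed point of negation

def conj(x, y):
    return min(x, y, key=ORDER.get)  # assumption: conjunction as min

def disj(x, y):
    return max(x, y, key=ORDER.get)  # assumption: disjunction as max

def impl(x, y):
    # A designated antecedent passes the consequent through, so a true
    # (t or b) hypothesis with an f conclusion yields an f implication,
    # the modus ponens constraint from the text; otherwise t.
    return y if x in DESIGNATED else t

def is_tautology(formula, atoms):
    """formula maps a valuation dict (atom -> value) to a truth value."""
    return all(formula(dict(zip(atoms, row))) in DESIGNATED
               for row in product(VALS, repeat=len(atoms)))

# Excluded middle survives: A ∨ ¬A is designated in every valuation.
print(is_tautology(lambda v: disj(v["A"], neg(v["A"])), ["A"]))  # True

# Explosion does not: (A ∧ ¬A) → X fails when A = b and X = f.
print(is_tautology(lambda v: impl(conj(v["A"], neg(v["A"])), v["X"]),
                   ["A", "X"]))  # False
```

Setting every variable of a contradictory Γ to b makes Γ evaluate to b (designated), while Γ→X drops to (b→f) = f when X is f, so explosion is blocked without giving up modus ponens.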
Paraconsistent logic has been applied as a means of managing inconsistency in numerous domains.[17] Logic, as it is classically understood, rests on three main rules (laws of thought): the Law of Identity (LOI), the Law of Non-Contradiction (LNC), and the Law of the Excluded Middle (LEM). Paraconsistent logic deviates from classical logic by refusing to accept the LNC. However, the LNC can be seen as closely interconnected with the LOI as well as the LEM: the LOI states that A is A (A ≡ A). This means that A is distinct from its opposite or negation (not A, or ¬A). In classical logic this distinction is supported by the fact that when A is true, its opposite is not. However, without the LNC, both A and not A can be true (A ∧ ¬A), which blurs their distinction. And without distinction, it becomes challenging to define identity. Dropping the LNC thus risks also eliminating the LOI. The LEM states that either A or not A is true (A ∨ ¬A). However, without the LNC, both A and not A can be true (A ∧ ¬A). Dropping the LNC thus risks also eliminating the LEM. Hence, dropping the LNC in a careless manner risks losing the LOI and the LEM as well. And dropping all three classical laws does not just change the kind of logic; it leaves us without any functional system of logic altogether. Loss of all logic eliminates the possibility of structured reasoning, so a careless paraconsistent logic risks permitting no means of thinking other than chaos. Paraconsistent logic aims to evade this danger through careful and precise technical definitions. As a consequence, most criticism of paraconsistent logic also tends to be highly technical in nature (e.g. surrounding questions such as whether a paradox can be true). However, even on a highly technical level, paraconsistent logic can be challenging to argue against. It is obvious that paraconsistent logic leads to contradictions.
However, the paraconsistent logician embraces contradictions, including any contradictions that are part of, or the result of, paraconsistent logic. As a consequence, much of the critique has focused on the applicability and comparative effectiveness of paraconsistent logic. This is an important debate, since embracing paraconsistent logic comes at the risk of losing a large number of theorems that form the basis of mathematics and physics. Logician Stewart Shapiro aimed to make a case for paraconsistent logic as part of his argument for a pluralistic view of logic (the view that different logics are equally appropriate, or equally correct). He found that a case could be made that either intuitionistic logic as the "One True Logic", or a pluralism of intuitionistic and classical logic, is interesting and fruitful. However, when it comes to paraconsistent logic, he found "no examples that are ... compelling (at least to me)".[30] In "Saving Truth from Paradox", Hartry Field examines the value of paraconsistent logic as a solution to paradoxes.[31] Field argues for a view that avoids both truth gluts (where a statement can be both true and false) and truth gaps (where a statement is neither true nor false). One of Field's concerns is the problem of a paraconsistent metatheory: if the logic itself allows contradictions to be true, then the metatheory that describes or governs the logic might also have to be paraconsistent. If the metatheory is paraconsistent, then the justification of the logic (why we should accept it) might be suspect, because any argument made within a paraconsistent framework could potentially be both valid and invalid.
This creates a challenge for proponents of paraconsistent logic to explain how their logic can be justified without falling into paradox or losing explanatory power. Stewart Shapiro expressed similar concerns: "there are certain notions and concepts that the dialetheist invokes (informally), but which she cannot adequately express, unless the meta-theory is (completely) consistent. The insistence on a consistent meta-theory would undermine the key aspect of dialetheism".[32] In his book "In Contradiction", which argues in favor of paraconsistent dialetheism, Graham Priest admits to metatheoretic difficulties: "Is there a metatheory for paraconsistent logics that is acceptable in paraconsistent terms? The answer to this question is not at all obvious."[33] Littmann and Keith Simmons argued that dialetheist theory is unintelligible: "Once we realize that the theory includes not only the statement '(L) is both true and false' but also the statement '(L) isn't both true and false' we may feel at a loss."[34] Some philosophers have argued against dialetheism on the grounds that the counterintuitiveness of giving up any of the three principles above outweighs any counterintuitiveness that the principle of explosion might have. Others, such as David Lewis, have objected to paraconsistent logic on the ground that it is simply impossible for a statement and its negation to be jointly true.[35] A related objection is that "negation" in paraconsistent logic is not really negation; it is merely a subcontrary-forming operator.[36] Approaches exist that allow for resolution of inconsistent beliefs without violating any of the intuitive logical principles.
Most such systems use multi-valued logic with Bayesian inference and the Dempster–Shafer theory, allowing that no non-tautological belief is completely (100%) irrefutable, because it must be based upon incomplete, abstracted, interpreted, likely unconfirmed, potentially uninformed, and possibly incorrect knowledge (of course, this very assumption, if non-tautological, entails its own refutability, if by "refutable" we mean "not completely [100%] irrefutable"). Notable figures in the history and/or modern development of paraconsistent logic include:
https://en.wikipedia.org/wiki/Paraconsistent_logic
A paradox is a logically self-contradictory statement or a statement that runs contrary to one's expectation.[1][2] It is a statement that, despite apparently valid reasoning from true or apparently true premises, leads to a seemingly self-contradictory or a logically unacceptable conclusion.[3][4] A paradox usually involves contradictory-yet-interrelated elements that exist simultaneously and persist over time.[5][6][7] They result in "persistent contradiction between interdependent elements" leading to a lasting "unity of opposites".[8] In logic, many paradoxes exist that are known to be invalid arguments, yet are nevertheless valuable in promoting critical thinking,[9] while other paradoxes have revealed errors in definitions that were assumed to be rigorous, and have caused axioms of mathematics and logic to be re-examined. One example is Russell's paradox, which questions whether a "list of all lists that do not contain themselves" would include itself, and showed that attempts to found set theory on the identification of sets with properties or predicates were flawed.[10][11] Others, such as Curry's paradox, cannot be easily resolved by making foundational changes in a logical system.[12] Examples outside logic include the ship of Theseus from philosophy, a paradox that questions whether a ship repaired over time by replacing each and all of its wooden parts one at a time would remain the same ship.[13] Paradoxes can also take the form of images or other media. For example, M. C. Escher featured perspective-based paradoxes in many of his drawings, with walls that are regarded as floors from other points of view, and staircases that appear to climb endlessly.[14] Informally, the term paradox is often used to describe a counterintuitive result. Self-reference, contradiction and infinite regress are core elements of many paradoxes.[15] Other common elements include circular definitions, and confusion or equivocation between different levels of abstraction.
Self-reference occurs when a sentence, idea or formula refers to itself. Although statements can be self-referential without being paradoxical ("This statement is written in English" is a true and non-paradoxical self-referential statement), self-reference is a common element of paradoxes. One example occurs in the liar paradox, which is commonly formulated as the self-referential statement "This statement is false".[16] Another example occurs in the barber paradox, which poses the question of whether a barber who shaves all and only those who do not shave themselves will shave himself. In this paradox, the barber is a self-referential concept.

Contradiction, along with self-reference, is a core feature of many paradoxes.[15] The liar paradox, "This statement is false," exhibits contradiction because the statement cannot be false and true at the same time.[17] The barber paradox is contradictory because it implies that the barber shaves himself if and only if the barber does not shave himself. As with self-reference, a statement can contain a contradiction without being a paradox. "This statement is written in French" is an example of a contradictory self-referential statement that is not a paradox and is instead false.[15]

Another core aspect of paradoxes is non-terminating recursion, in the form of circular reasoning or infinite regress.[15] When this recursion creates a metaphysical impossibility through contradiction, the regress or circularity is vicious. Again, the liar paradox is an instructive example: "This statement is false"—if the statement is true, then the statement is false, thereby making the statement true, thereby making the statement false, and so on.[15][18] The barber paradox also exemplifies vicious circularity: the barber shaves those who do not shave themselves, so if the barber does not shave himself, then he shaves himself, then he does not shave himself, and so on.
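The truth-value oscillation described above can be sketched mechanically. The following toy Python evaluation (a hypothetical illustration, not a formal semantics) shows that assigning either truth value to the liar sentence forces the opposite one, so no assignment is stable:

```python
def liar(truth_value, steps):
    """Toy evaluation of "This statement is false".

    Assigning the sentence a truth value forces the opposite value,
    so the evaluation oscillates forever; `steps` bounds the regress
    so this sketch terminates.
    """
    history = [truth_value]
    for _ in range(steps):
        truth_value = not truth_value  # "true" forces "false" and vice versa
        history.append(truth_value)
    return history

# No fixed point exists: liar(True, 4) yields [True, False, True, False, True]
```

The absence of a fixed point in this toy loop is exactly the vicious circularity: any finite prefix of the evaluation ends in a value that immediately contradicts itself on the next step.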
Other paradoxes involve false statements and half-truths, or rely on hasty assumptions (a father and his son are in a car crash; the father is killed and the boy is rushed to the hospital. The doctor says, "I can't operate on this boy. He's my son." There is no contradiction: the doctor is the boy's mother.). Paradoxes that are not based on a hidden error generally occur at the fringes of context or language, and require extending the context or language in order to lose their paradoxical quality.

Paradoxes that arise from apparently intelligible uses of language are often of interest to logicians and philosophers. "This sentence is false" is an example of the well-known liar paradox: it is a sentence that cannot be consistently interpreted as either true or false, because if it is known to be false, then it can be inferred that it must be true, and if it is known to be true, then it can be inferred that it must be false. Russell's paradox, which shows that the notion of the set of all those sets that do not contain themselves leads to a contradiction, was instrumental in the development of modern logic and set theory.[10]

Thought experiments can also yield interesting paradoxes. The grandfather paradox, for example, would arise if a time traveler were to kill his own grandfather before his mother or father had been conceived, thereby preventing his own birth. This is a specific instance of the butterfly effect, in that any interaction a time traveler has with the past would alter conditions such that divergent events "propagate" through the world over time, ultimately altering the circumstances in which the time travel initially takes place. Often a seemingly paradoxical conclusion arises from an inconsistent or inherently contradictory definition of the initial premise.
In the case of that apparent paradox of a time traveler killing his own grandfather, the inconsistency lies in defining the past to which he returns as being somehow different from the one that leads up to the future from which he begins his trip, while also insisting that he must have come to that past from the same future as the one that it leads up to.

W. V. O. Quine (1962) distinguished between three classes of paradoxes.[19][20]

A veridical paradox produces a result that appears counter to intuition, but is demonstrated to be true nonetheless.

A falsidical paradox establishes a result that appears false and actually is false, due to a fallacy in the demonstration. Therefore, falsidical paradoxes can be classified as fallacious arguments.

An antinomy is a paradox which reaches a self-contradictory result by properly applying accepted ways of reasoning. For example, the Grelling–Nelson paradox points out genuine problems in our understanding of the ideas of truth and description.

Sometimes described since Quine's work, a dialetheia is a paradox that is both true and false at the same time. It may be regarded as a fourth kind, or alternatively as a special case of antinomy. In logic, it is often assumed, following Aristotle, that no dialetheia exist, but they are allowed in some paraconsistent logics.

Frank Ramsey drew a distinction between logical paradoxes and semantic paradoxes, with Russell's paradox belonging to the former category, and the liar paradox and Grelling's paradoxes to the latter.[21] Ramsey introduced the by-now standard distinction between logical and semantical contradictions. Logical contradictions involve mathematical or logical terms like class and number, and hence show that our logic or mathematics is problematic. Semantical contradictions involve, besides purely logical terms, notions like thought, language, and symbolism, which, according to Ramsey, are empirical (not formal) terms.
Hence these contradictions are due to faulty ideas about thought or language, and they properly belong to epistemology.[22]

A paradoxical reaction to a drug is the opposite of what one would expect, such as becoming agitated by a sedative or sedated by a stimulant. Some are common and are used regularly in medicine, such as the use of stimulants such as Adderall and Ritalin in the treatment of attention deficit hyperactivity disorder (also known as ADHD), while others are rare and can be dangerous as they are not expected, such as severe agitation from a benzodiazepine.[23]

The actions of antibodies on antigens can rarely take paradoxical turns in certain ways. One example is antibody-dependent enhancement (immune enhancement) of a disease's virulence; another is the hook effect (prozone effect), of which there are several types. However, neither of these problems is common, and overall, antibodies are crucial to health, as most of the time they do their protective job quite well.

In the smoker's paradox, cigarette smoking, despite its proven harms, has a surprising inverse correlation with the epidemiological incidence of certain diseases.
https://en.wikipedia.org/wiki/Paradox
TRIZ (/ˈtriːz/; Russian: теория решения изобретательских задач, romanized: teoriya resheniya izobretatelskikh zadach, lit. 'theory of inventive problem solving') is a methodology that combines an organized, systematic method of problem-solving with analysis and forecasting techniques derived from the study of patterns of invention in global patent literature. The development and improvement of products and technologies in accordance with TRIZ are guided by the laws of technical systems evolution.[1][2] Its development, by Soviet inventor and science-fiction author Genrich Altshuller and his colleagues, began in 1946. In English, TRIZ is typically rendered as the theory of inventive problem solving.[3][4]

TRIZ developed from a foundation of research into hundreds of thousands of inventions in many fields to produce an approach which defines patterns in inventive solutions and the characteristics of the problems these inventions have overcome.[5] The research produced three findings, and TRIZ applies these findings to create and improve products, services, and systems.[6]

TRIZ was developed by the Soviet inventor and science-fiction writer Genrich Altshuller and his associates. Altshuller began developing TRIZ in 1946 while working in the inventions-inspection department of the Caspian Sea flotilla of the Soviet Navy. His job was to help initiate invention proposals, to rectify and document them, and to prepare applications to the patent office. Altshuller realized that a problem requires an inventive solution if there are technical contradictions (improving one parameter negatively affects another). His work on what later became TRIZ was interrupted in 1950 by his arrest and 25-year sentence to the Vorkuta Gulag.
The arrest was partially triggered by letters he and Raphael Shapiro sent to Stalin, ministers, and newspapers about Soviet government decisions they considered erroneous.[7] Altshuller and Shapiro were freed during the Khrushchev Thaw which followed Stalin's death in 1953,[8] and they returned to Baku. The first paper on TRIZ, "On the psychology of inventive creation", was published in 1956 in the Issues in Psychology (Voprosi Psichologii) journal.[9]

Altshuller observed clever and creative people at work, discovering patterns in their thinking with which he developed thinking tools and techniques. The tools included Smart Little People[10] and Thinking in Time and Scale (or the Screens of Talented Thought).[11]

In 1986, Altshuller's attention shifted from technical TRIZ to the development of individual creativity. He developed a version of TRIZ for children which was tried in several schools.[12] After the Cold War, emigrants from the former Soviet Union brought TRIZ to other countries.[13]

One tool which evolved as an extension of TRIZ was a contradiction matrix.[14] The ideal final result (IFR) is the ultimate solution of a problem, in which the desired result is achieved by itself.[15] Altshuller screened patents to discover which contradictions were resolved or eliminated by the invention and how this had been achieved.
He developed a set of 40 inventive principles and, later, a matrix of contradictions.[14] Although TRIZ was developed from analyzing technical systems, it has been used to understand and solve management problems.[16] The German-based nonprofit European TRIZ Association,[17] founded in 2000,[18] hosts conferences with publications.[19]

Samsung has invested in embedding TRIZ throughout the company.[20] BAE Systems and GE also use TRIZ,[21][self-published source] and Mars has documented how TRIZ led to a new patent for chocolate packaging.[22][self-published source] It has been used by Leafield Engineering, Smart Stabilizer Systems, and Buro Happold to solve problems and generate new patents.[23] The automakers Rolls-Royce,[24] Ford, and Daimler-Chrysler, Johnson & Johnson, the aeronautics organizations Boeing and NASA, and the technology companies Hewlett-Packard, Motorola, General Electric, Xerox, IBM, LG, Samsung, Intel, Procter & Gamble, Expedia, and Kodak have used TRIZ methods in projects.[8][25][26][27]

TOP-TRIZ is a modern version of developed and integrated TRIZ methods. "TOP-TRIZ includes further development of problem formulation and problem modeling, development of Standard Solutions into Standard Techniques, further development of ARIZ and Technology Forecasting. TOP-TRIZ has integrated its methods into a universal and user-friendly system for innovation."[28] In 1992, several TRIZ practitioners fleeing the collapsing Soviet Union relocated and formed Ideation International.[29] They developed I-TRIZ, their version of TRIZ.

In Liberating Structures, the facilitation method, also called TRIZ, was "inspired by one small element of" the original TRIZ methodology but is used in a distinct context.[30] The method helps groups to identify and eliminate counterproductive practices by imagining the worst possible outcomes, recognizing current actions contributing to these scenarios, and designing practical steps to prevent them.
This participatory approach emphasizes collaborative problem-solving within teams, setting it apart from the engineering-focused origins of the original TRIZ framework.
https://en.wikipedia.org/wiki/TRIZ
"All models are wrong" is a common aphorism and anapodoton in statistics. It is often expanded as "All models are wrong, but some are useful". The aphorism acknowledges that statistical models always fall short of the complexities of reality but can still be useful nonetheless. The aphorism is generally attributed to George E. P. Box, a British statistician, although the underlying concept predates Box's writings.

The phrase "all models are wrong" was attributed[1] to George Box, who used the phrase in a 1976 paper to refer to the limitations of models, arguing that while no model is ever completely accurate, simpler models can still provide valuable insights if applied judiciously.[2]: 792 In their 1983 book on generalized linear models, Peter McCullagh and John Nelder stated that while modeling in science is a creative process, some models are better than others, even though none can claim eternal truth.[3][4] In 1996, an Applied Statistician's Creed was proposed by M. R. Nester, which incorporated the aphorism as a central tenet.[1]

Although the aphorism is most commonly associated with George Box, the underlying idea has been historically expressed by various thinkers in the past. Alfred Korzybski noted in 1933, "A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness."[5] In 1939, Walter Shewhart discussed the impossibility of constructing a model that fully characterizes a state of statistical control, noting that no model can exactly represent any specific characteristic of such a state.[6] John von Neumann, in 1947, remarked that "truth is much too complicated to allow anything but approximations."[2]

Box used the aphorism again in 1979, where he expanded on the idea by discussing how models serve as useful approximations, despite failing to perfectly describe empirical phenomena.[7] He reiterated this sentiment in his later works, where he discussed how models should be judged based on their
utility rather than their absolute correctness.[8][6]

David Cox, in a 1995 commentary, argued that stating all models are wrong is unhelpful, as models by their nature simplify reality. He emphasized that statistical models, like other scientific models, aim to capture important aspects of systems through idealized representations.[9] In their 2002 book on statistical model selection, Burnham and Anderson reiterated Box's statement, noting that while models are simplifications of reality, they vary in usefulness, from highly useful to essentially useless.[10]

J. Michael Steele used the analogy of city maps to explain that models, like maps, serve practical purposes despite their limitations, emphasizing that certain models, though simplified, are not necessarily wrong.[11] In response, Andrew Gelman acknowledged Steele's point but defended the usefulness of the aphorism, particularly in drawing attention to the inherent imperfections of models.[12]

Philosopher Peter Truran, in a 2013 essay, discussed how seemingly incompatible models can make accurate predictions by representing different aspects of the same phenomenon, illustrating the point with an example of two observers viewing a cylindrical object from different angles.[13] In 2014, David Hand reiterated that models are meant to aid in understanding or decision-making about the real world, a point emphasized by Box's famous remark.[14]
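The aphorism is easy to demonstrate concretely. In the following sketch (hypothetical data; standard closed-form least-squares formulas), a straight line is fitted to data generated by a quadratic law: the linear model is strictly wrong, because the true relationship is curved, yet its in-range predictions are close enough to be useful:

```python
# Fit a straight line by ordinary least squares to data that actually
# follow y = x^2 on [0, 1]. The line cannot be "true", but inside the
# observed range its worst error is small.
xs = [i / 10 for i in range(11)]   # hypothetical inputs 0.0 .. 1.0
ys = [x * x for x in xs]           # true (quadratic) law

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Worst-case prediction error of the (wrong) line on the observed grid
max_err = max(abs((slope * x + intercept) - y) for x, y in zip(xs, ys))
print(f"y ~ {slope:.2f}x {intercept:+.2f}, max in-range error {max_err:.3f}")
# slope ~ 1.00, intercept ~ -0.15, max in-range error ~ 0.15
```

The fitted line y = x - 0.15 misstates the mechanism entirely, yet never errs by more than 0.15 on the observed range, which is Box's point: usefulness, not truth, is the relevant criterion. Extrapolating the same line far outside [0, 1] would fail badly.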
https://en.wikipedia.org/wiki/All_models_are_wrong
An elephant in Cairo is a term used in computer programming to describe a piece of data that matches the search criteria, purposefully inserted at the end of a search space in order to make sure the search algorithm terminates; it is a humorous example of a sentinel value. The term derives from a humorous essay circulated on the Internet that was published in Byte magazine in September 1989, describing how various professions would go about hunting elephants.[1]

When hunting elephants, the article describes programmers as following an algorithm that amounts to an exhaustive search of Africa, comparing each animal encountered to a known elephant.[1] This algorithm has a bug, namely a bounds-checking error: if no elephants are found, the programmer will continue northwards and end up in the Mediterranean Sea, causing abnormal termination by drowning. Thus experienced programmers modify the above algorithm by placing a known elephant in Cairo to ensure that the algorithm will terminate:[2] the search is then guaranteed to stop, at the latest, at the planted elephant.
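The sentinel idea generalizes to any linear search: planting a guaranteed match at the end of the data means the scan can never run off the end, so the loop needs no bounds check on each iteration. A minimal sketch in Python (illustrative only; not from the Byte essay):

```python
def sentinel_search(values, target):
    """Linear search with a sentinel ("the elephant in Cairo").

    Appending the target guarantees the scan terminates, so the loop
    can omit the per-iteration bounds check. Returns the index of the
    first match, or -1 if the target was not in the original data.
    """
    values.append(target)      # plant the sentinel at the end
    i = 0
    while values[i] != target:  # no "i < len(values)" test needed
        i += 1
    values.pop()               # remove the sentinel
    # Stopping at the sentinel's old position means "not found".
    return i if i < len(values) else -1
```

For example, `sentinel_search([3, 1, 4], 9)` stops only at the planted 9 and reports -1, just as the modified hunter stops, at worst, at the elephant in Cairo. The classic payoff is in low-level loops, where removing the bounds comparison from the hot path shortens the inner loop; in Python the value is mainly pedagogical.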
https://en.wikipedia.org/wiki/Elephant_in_Cairo
Statutory interpretation is the process by which courts interpret and apply legislation. Some amount of interpretation is often necessary when a case involves a statute. Sometimes the words of a statute have a plain and straightforward meaning, but in many cases there is some ambiguity in the words of the statute that must be resolved by the judge. To find the meanings of statutes, judges use various tools and methods of statutory interpretation, including traditional canons of statutory interpretation, legislative history, and purpose. In common law jurisdictions, the judiciary may apply rules of statutory interpretation both to legislation enacted by the legislature and to delegated legislation such as administrative agency regulations.

Statutory interpretation first became significant in common law systems, of which historically England is the exemplar. In Roman and civil law, a statute (or code) guides the magistrate, but there is no judicial precedent. In England, Parliament historically failed to enact a comprehensive code of legislation, which is why it was left to the courts to develop the common law; and having decided a case and given reasons for the decision, the decision would become binding on later courts. Accordingly, a particular interpretation of a statute would also become binding, and it became necessary to introduce a consistent framework for statutory interpretation. In the construction (interpretation) of statutes, the principal aim of the court must be to carry out the "intention of Parliament", and the English courts developed three main rules (plus some minor ones) to assist them in the task: the mischief rule, the literal rule, and the golden rule.

Statutes may be presumed to incorporate certain components, as Parliament is "presumed" to have intended their inclusion.[1] For example, where legislation and case law are in conflict, there is a presumption that legislation takes precedence insofar as there is any inconsistency.
In the United Kingdom this principle is known as parliamentary sovereignty; but while Parliament has exclusive competence to legislate, the courts (mindful of their historic role of having developed the entire system of common law) retain sole competence to interpret statutes.

The age-old process of applying the enacted law has led to the formulation of certain rules of interpretation. According to Cross, "Interpretation is the process by which the courts determine the meaning of a statutory provision for the purpose of applying it to the situation before them",[6] while Salmond calls it "the process by which the courts seek to ascertain the meaning of the legislature through the medium of authoritative forms in which it is expressed".[7] Interpretation of a particular statute depends upon the degree of creativity applied by the judges or the court in the reading of it, employed to achieve some stated end. It is often mentioned that common law statutes can be interpreted by using the Golden Rule, the Mischief Rule or the Literal Rule. However, according to Francis Bennion, author of texts on statutory interpretation,[8] there are no such simple devices to elucidate complex statutes; "[i]nstead there are a thousand and one interpretative criteria".[9]

A statute is an edict of a legislature,[10] and the conventional way of interpreting a statute is to seek the "intention" of its maker or framer. It is the judicature's duty to act upon the true intention of the legislature, the mens or sententia legis.
The courts have to objectively determine the interpretation with guidance furnished by the accepted principles.[11] If a statutory provision is open to more than one interpretation, the court has to choose that interpretation which represents the true intention of the legislature.[12][13] The function of the courts is only to expound and not to legislate.[14]

Federal jurisdictions may presume that either federal or local government authority prevails in the absence of a defined rule. In Canada, there are areas of law where provincial governments and the federal government have concurrent jurisdiction. In these cases the federal law is held to be paramount. However, in areas where the Canadian constitution is silent, the federal government does not necessarily have superior jurisdiction. Rather, an area of law that is not expressly mentioned in Canada's Constitution will have to be interpreted to fall under either the federal residual jurisdiction found in the preamble of s. 91, known as the Peace, Order and Good Government clause, or the provinces' residual jurisdiction of "Property and Civil Rights" under s. 92(13) of the 1867 Constitution Act. This contrasts with other federal jurisdictions, notably the United States and Australia, where it is presumed that if legislation is not enacted pursuant to a specific provision of the federal Constitution, the states will have authority over the relevant matter in their respective jurisdictions, unless the states' definitions of their statutes conflict with federally established or recognized rights.

The judiciary interprets how legislation should apply in a particular case, as no legislation unambiguously and specifically addresses all matters. Legislation may contain uncertainties for a variety of reasons. Therefore, the court must try to determine how a statute should be enforced. This requires statutory construction.
It is a tenet of statutory construction that the legislature is supreme (assuming constitutionality) when creating law and that the court is merely an interpreter of the law. Nevertheless, in practice, by performing the construction the court can make sweeping changes in the operation of the law. Moreover, courts must also often view a case's statutory context. While cases occasionally focus on a few key words or phrases, judges may occasionally turn to viewing a case as a whole in order to gain deeper understanding. The totality of the language of a particular case allows the justices presiding to better consider their rulings when it comes to these key words and phrases.[18]

Statutory interpretation is the process by which a court looks at a statute and determines what it means. A statute, which is a bill or law passed by the legislature, imposes obligations and rules on the people. Although the legislature makes the statute, it may be open to interpretation and have ambiguities. Statutory interpretation is the process of resolving those ambiguities and deciding how a particular bill or law will apply in a particular case.

Assume, for example, that a statute mandates that all motor vehicles travelling on a public roadway must be registered with the Department of Motor Vehicles (DMV). If the statute does not define the term "motor vehicles", then that term will have to be interpreted if questions arise in a court of law. A person driving a motorcycle might be pulled over, and the police may try to fine him if his motorcycle is not registered with the DMV. If that individual argued to the court that a motorcycle is not a "motor vehicle", then the court would have to interpret the statute to determine what the legislature meant by "motor vehicle", and whether or not the motorcycle fell within that definition and was covered by the statute. There are numerous rules of statutory interpretation.
The first and most important rule is the rule dealing with the statute's plain language. This rule essentially states that the statute means what it says. If, for example, the statute says "motor vehicles", then the court is most likely to construe that the legislation is referring to the broad range of motorised vehicles normally required to travel along roadways and not "aeroplanes" or "bicycles", even though aeroplanes are vehicles propelled by a motor and bicycles may be used on a roadway. In Australia and in the United States, the courts have consistently stated that the text of the statute is used first, and it is read as it is written, using the ordinary meaning of the words of the statute.

It is presumed that a statute will be interpreted so as to be internally consistent. A particular section of the statute shall not be divorced from the rest of the act. The ejusdem generis (or eiusdem generis, Latin for "of the same kind") rule applies to resolve the problem of giving meaning to groups of words where one of the words is ambiguous or inherently unclear. The rule states that where "general words follow enumerations of particular classes or persons or things, the general words shall be construed as applicable only to persons or things of the same general nature or kind as those enumerated".[19]

A statute shall not be interpreted so as to be inconsistent with other statutes. Where there is an apparent inconsistency, the judiciary will attempt to provide a harmonious interpretation. Legislative bodies themselves may try to influence or assist the courts in interpreting their laws by placing into the legislation itself statements to that effect.
These provisions go by many different names. In most legislatures internationally, these provisions of the bill simply give the legislature's goals and desired effects of the law, and are considered non-substantive and non-enforceable in and of themselves.[21][22] However, in the case of the European Union, a supranational body, the recitals in Union legislation must specify the reasons the operative provisions were adopted, and if they do not, the legislation is void.[23] This has been interpreted by the courts as giving them a role in statutory interpretation, with Klimas and Vaiciukaite explaining: "recitals in EC law are not considered to have independent legal value, but they can expand an ambiguous provision's scope. They cannot, however, restrict an unambiguous provision's scope, but they can be used to determine the nature of a provision, and this can have a restrictive effect."[23]

Also known as canons of construction, canons give common-sense guidance to courts in interpreting the meaning of statutes. Most canons emerge from the common law process through the choices of judges. Critics of the use of canons argue that the canons constrain judges and limit the ability of the courts to legislate from the bench. Proponents argue that a judge always has a choice between competing canons that lead to different results, so judicial discretion is only hidden through the use of canons, not reduced.

These canons can be divided into two major groups. Textual canons are rules of thumb for understanding the words of the text; some of them are still known by their traditional Latin names. Substantive canons instruct the court to favor interpretations that promote certain values or policy results. Deference canons instruct the court to defer to the interpretation of another institution, such as an administrative agency or Congress.
These canons reflect an understanding that the judiciary is not the only branch of government entrusted with constitutional responsibility. The avoidance canon was discussed in Bond v. United States, in which the defendant placed toxic chemicals on frequently touched surfaces of a friend.[54] The statute in question made using a chemical weapon a crime; however, the separation of power between states and the federal government would be infringed upon if the Supreme Court interpreted the statute to extend to local crimes.[55] Therefore, the Court utilized the canon of constitutional avoidance and decided to "read the statute more narrowly, to exclude the defendant's conduct".[56]

The application of this rule in the United Kingdom is not entirely clear. The literal meaning rule – that if "Parliament's meaning is clear, that meaning is binding no matter how absurd the result may seem"[59] – is in tension with the "golden rule", which permits courts to avoid absurd results in cases of ambiguity. At times, courts are not "concerned with what parliament intended, but simply with what it has said in the statute".[60] Different judges have different views. In Nothman v. London Borough of Barnet, Lord Denning of the Court of Appeal attacked "those who adopt the strict literal and grammatical construction of the words", saying that "[t]he literal method is now completely out-of-date [and] replaced by the ... 'purposive' approach".[61] On appeal against Denning's decision, however, Lord Russell in the House of Lords "disclaim[ed] the sweeping comments of Lord Denning".[62]

For jurisprudence in the United States, "an absurdity is not mere oddity. The absurdity bar is high, as it should be. The result must be preposterous, one that 'no reasonable person could intend'".[63][64] Moreover, the avoidance applies only when "it is quite impossible that Congress could have intended the result ...
and where the alleged absurdity is so clear as to be obvious to most anyone".[65] "To justify a departure from the letter of the law upon that ground, the absurdity must be so gross as to shock the general moral or common sense",[66] with an outcome "so contrary to perceived social values that Congress could not have 'intended' it".[67]

Critics of the use of canons argue that canons impute some sort of "omniscience" to the legislature, suggesting that it is aware of the canons when constructing the laws. In addition, it is argued that the canons give credence to judges who want to construct the law a certain way, imparting a false sense of justification to their otherwise arbitrary process. In a classic article, Karl Llewellyn argued that every canon had a "counter-canon" that would lead to the opposite interpretation of the statute.[68][69]

Some scholars argue that interpretive canons should be understood as an open set, despite conventional assumptions that traditional canons capture all relevant language generalizations. Empirical evidence, for example, suggests that ordinary people readily incorporate a "nonbinary gender canon" and a "quantifier domain restriction canon" in the interpretation of legal rules.[70] Other scholars argue that the canons should be reformulated as "canonical" or archetypical queries helping to direct genuine inquiry, rather than purporting to somehow help provide answers in themselves.[71]

The French philosopher Montesquieu (1689–1755) believed that courts should act as "the mouth of the law", but it was soon found that some interpretation is inevitable. Following the German scholar Friedrich Carl von Savigny (1779–1861), four main interpretation methods are distinguished. It is controversial whether there is a hierarchy between interpretation methods.
Germans prefer a "grammatical" (literal) interpretation, because the statutory text has a democratic legitimation, and "sensible" interpretations are risky, in particular in view of German history. "Sensible" means different things to different people. The modern, common-law perception that courts actually make law is very different. In the German perception, courts can only further develop law (Rechtsfortbildung). All of the above methods may seem reasonable.

The freedom of interpretation varies by area of law. Criminal law and tax law must be interpreted very strictly, and never to the disadvantage of citizens, but liability law requires more elaborate interpretation, because here (usually) both parties are citizens. Here the statute may even be interpreted contra legem in exceptional cases, if otherwise a patently unreasonable result would follow.

The interpretation of international treaties is governed by another treaty, the Vienna Convention on the Law of Treaties, notably Articles 31–33. Some states (such as the United States) are not parties to the treaty, but recognize that the Convention is, at least in part, merely a codification of customary international law. The rule set out in the Convention is essentially that the text of a treaty is decisive unless it either leaves the meaning ambiguous or obscure, or leads to a result that is manifestly absurd or unreasonable. Recourse to "supplementary means of interpretation" is allowed only in that case, such as the preparatory works, also known by the French designation of travaux préparatoires.

Within the United States, purposivism and textualism are the two most prevalent methods of statutory interpretation.[72] Also recognized is the intentionalist theory, which prioritizes and considers sources beyond the text.
"Purposivists often focus on the legislative process, taking into account the problem that Congress was trying to solve by enacting the disputed law and asking how the statute accomplished that goal."[73] Purposivists believe in reviewing the processes surrounding the power of the legislative body as stated in the constitution, as well as the rationale that a "reasonable person conversant with the circumstances underlying enactment would suppress the mischief and advance the remedy".[74] Purposivists would understand statutes by examining "how Congress makes its purposes known, through text and reliable accompanying materials constituting legislative history."[75][76] "In contrast to purposivists, textualists focus on the words of a statute, emphasizing text over any unstated purpose."[77] Textualists believe that everything the courts need in deciding cases is enumerated in the text of legislative statutes. In other words, if any other purpose had been intended by the legislature, it would have been written into the statutes; since it is not written, no other purpose or meaning was intended. By looking at the statutory structure and hearing the words as they would sound in the mind of a skilled, objectively reasonable user of words,[78] textualists believe that they respect the constitutional separation of powers and best respect legislative supremacy.[74] Critiques of modern textualism on the United States Supreme Court abound.[79][80] Intentionalists refer to the specific intent of the enacting legislature on a specific issue, though they can also focus on general intent. Private motives of individual legislators do not eliminate the common goal that the legislature carries. This theory differs from others mainly in the types of sources that will be considered: intentionalist theory seeks to consult as many different sources as possible to determine the meaning or interpretation of a given statute.
This theory is adjacent to a contextualist theory, which prioritizes the use of context to determine why a legislature enacted any given statute.
Falsifiability (or refutability) is a deductive standard of evaluation of scientific theories and hypotheses, introduced by the philosopher of science Karl Popper in his book The Logic of Scientific Discovery (1934).[B] A theory or hypothesis is falsifiable if it can be logically contradicted by an empirical test. Popper emphasized the asymmetry created by the relation of a universal law with basic observation statements[C] and contrasted falsifiability to the intuitively similar concept of verifiability that was then current in logical positivism. He argued that the only way to verify a claim such as "All swans are white" would be if one could theoretically observe all swans,[D] which is not possible. On the other hand, an anomalous instance, such as the observation of a single black swan, is theoretically reasonable and sufficient to logically falsify the claim. Popper proposed falsifiability as the cornerstone solution to both the problem of induction and the problem of demarcation. He insisted that, as a logical criterion, his falsifiability is distinct from the related concept "capacity to be proven wrong" discussed in Lakatos's falsificationism.[E][F][G] Even as a purely logical criterion, its purpose is to make the theory predictive and testable, and thus useful in practice. By contrast, the Duhem–Quine thesis says that definitive experimental falsifications are impossible[1] and that no scientific hypothesis is by itself capable of making predictions, because an empirical test of the hypothesis requires one or more background assumptions.[2] Popper's response is that falsifiability does not have the Duhem problem[H] because it is a logical criterion. Experimental research has the Duhem problem and other problems, such as the problem of induction,[I] but, according to Popper, statistical tests, which are only possible when a theory is falsifiable, can still be useful within a critical discussion.
As a key notion in the separation of science from non-science and pseudoscience, falsifiability has featured prominently in many scientific controversies and applications, even being used as legal precedent. However, falsifiability is not a sufficient condition for demarcating science, as theories have to actually be tested in order to eliminate those that are wrong. In scientific practice, this can cause theories to change from being falsified back to unfalsified, such as when the once-falsified geocentric world view was restored as a viable reference frame within special relativity. There is ambiguity surrounding the status of theories that cannot currently be tested.[3] One of the questions in the scientific method is: how does one move from observations to scientific laws? This is the problem of induction. Suppose we want to put the hypothesis that all swans are white to the test. We come across a white swan. We cannot validly argue (or induce) from "here is a white swan" to "all swans are white"; doing so would require a logical fallacy such as, for example, affirming the consequent.[4] Popper's idea to solve this problem is that while it is impossible to verify that every swan is white, finding a single black swan shows that not every swan is white. Such falsification uses the valid inference modus tollens: if from a law L we logically deduce Q, but what is observed is ¬Q, we infer that the law L is false. For example, given the statement L = "all swans are white", we can deduce Q = "the specific swan here is white", but if what is observed is ¬Q = "the specific swan here is not white" (say black), then "all swans are white" is false.
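The modus tollens schema above (deduce Q from L, observe ¬Q, reject L) can be sketched in a few lines of code; the predicate and the observation data below are invented for the example:

```python
# Modus tollens in falsification: from the law L we deduce, for each
# observed swan, the prediction Q = "this swan is white". Observing
# not-Q (a non-white swan) logically refutes L.

def law_all_swans_white(observed_colors):
    """The universal law L, restricted to the swans actually observed."""
    return all(color == "white" for color in observed_colors)

# A single black swan falsifies the law (invented observation data):
assert not law_all_swans_white(["white", "white", "black"])

# No number of white swans verifies the law; the run below is merely
# consistent with L, illustrating the verification/falsification asymmetry.
assert law_all_swans_white(["white"] * 1000)
```

The asymmetry is visible in the code: a single counterexample settles the universal claim negatively, while no finite list of confirming instances settles it positively.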
More accurately, the statement Q that can be deduced is broken into an initial condition and a prediction as in C ⇒ P, in which C = "the thing here is a swan" and P = "the thing here is a white swan". If what is observed is C being true while P is false (formally, C ∧ ¬P), we can infer that the law is false. For Popper, induction is actually never needed in science.[J][K] Instead, in Popper's view, laws are conjectured in a non-logical manner on the basis of expectations and predispositions.[5] This has led David Miller, a student and collaborator of Popper, to write "the mission is to classify truths, not to certify them".[6] In contrast, the logical empiricism movement, which included such philosophers as Moritz Schlick, Rudolf Carnap, Otto Neurath, and A. J. Ayer, wanted to formalize the idea that, for a law to be scientific, it must be possible to argue on the basis of observations either in favor of its truth or its falsity. There was no consensus among these philosophers about how to achieve that, but the thought expressed by Mach's dictum that "where neither confirmation nor refutation is possible, science is not concerned" was accepted as a basic precept of critical reflection about science.[7][8][9] Popper said that a demarcation criterion was possible, but we have to use the logical possibility of falsifications, which is falsifiability. He cited his encounter with psychoanalysis in the 1910s. No matter what observation was presented, psychoanalysis could explain it. Unfortunately, the reason it could explain everything is that it did not exclude anything either.[L] For Popper, this was a failure, because it meant that it could not make any prediction. From a logical standpoint, if one finds an observation that does not contradict a law, it does not mean that the law is true. A verification has no value in itself.
But, if the law makes risky predictions and these are corroborated, Popper says, there is a reason to prefer this law over another law that makes less risky predictions or no predictions at all.[M][N] In the definition of falsifiability, contradictions with observations are not used to support eventual falsifications, but for logical "falsifications" that show that the law makes risky predictions, which is completely different. On the basic philosophical side of this issue, Popper said that some philosophers of the Vienna Circle had mixed two different problems, that of meaning and that of demarcation, and had proposed in verificationism a single solution to both: a statement that could not be verified was considered meaningless. In opposition to this view, Popper said that there are meaningful theories that are not scientific, and that, accordingly, a criterion of meaningfulness does not coincide with a criterion of demarcation.[O] The problem of induction is often called Hume's problem. David Hume studied how human beings obtain new knowledge that goes beyond known laws and observations, including how we can discover new laws. He understood that deductive logic could not explain this learning process and argued in favour of a mental or psychological process of learning that would not require deductive logic. He even argued that this learning process cannot be justified by any general rules, deductive or not.[10] Popper accepted Hume's argument and therefore viewed progress in science as the result of quasi-induction, which does the same as induction, but has no inference rules to justify it.[11][12] Philip N. Johnson-Laird, professor of psychology, also accepted Hume's conclusion that induction has no justification.
For him, induction does not require justification and therefore can exist in the same manner as Popper's quasi-induction does.[13] When Johnson-Laird says that no justification is needed, he does not refer to a general inductive method of justification that, to avoid circular reasoning, would not itself require any justification. On the contrary, in agreement with Hume, he means that there is no general method of justification for induction, and that this is acceptable, because the induction steps do not require justification.[13] Instead, these steps use patterns of induction, which are not expected to have a general justification: they may or may not be applicable depending on the background knowledge. Johnson-Laird wrote: "[P]hilosophers have worried about which properties of objects warrant inductive inferences. The answer rests on knowledge: we don't infer that all the passengers on a plane are male because the first ten off the plane are men. We know that this observation doesn't rule out the possibility of a woman passenger."[13] The reasoning pattern that was not applied here is enumerative induction.
Popper was interested in the overall learning process in science, that is, in quasi-induction, which he also called the "path of science".[11] However, Popper did not show much interest in these reasoning patterns, which he globally referred to as psychologism.[14] He did not deny the possibility of some kind of psychological explanation for the learning process, especially when psychology is seen as an extension of biology, but he felt that these biological explanations were not within the scope of epistemology.[P][Q] Popper proposed an evolutionary mechanism to explain the success of science,[15] which is much in line with Johnson-Laird's view that "induction is just something that animals, including human beings, do to make life possible",[13] but Popper did not consider it a part of his epistemology.[16] He wrote that his interest was mainly in the logic of science and that epistemology should be concerned with logical aspects only.[R] Instead of asking why science succeeds, he considered the pragmatic problem of induction.[17] This problem is not how to justify a theory or what the global mechanism for the success of science is, but only what methodology we use to pick one theory among those that have already been conjectured. His methodological answer to the latter question is that we pick the theory that is the most tested with the available technology: "the one, which in the light of our critical discussion, appears to be the best so far".[17] By his own account, because only a negative approach was supported by logic, Popper adopted a negative methodology.[S] The purpose of his methodology is to prevent "the policy of immunizing our theories against refutation". It also supports some "dogmatic attitude" in defending theories against criticism, because this allows the process to be more complete.[18] This negative view of science was much criticized, and not only by Johnson-Laird.
In practice, some steps based on observations can be justified under assumptions, which can be very natural. For example, Bayesian inductive logic[19] is justified by theorems that make explicit assumptions. These theorems are obtained with deductive logic, not inductive logic. They are sometimes presented as steps of induction, because they refer to laws of probability, even though they do not go beyond deductive logic. This is yet a third notion of induction, which overlaps with deductive logic in the sense that it is supported by it. These deductive steps are not really inductive, but the overall process that includes the creation of assumptions is inductive in the usual sense. In a fallibilist perspective, a perspective that is widely accepted by philosophers, including Popper,[20] every logical step of learning only creates an assumption or reinstates one that was doubted—that is all that science logically does. Popper distinguished between the logic of science and its applied methodology.[E] For example, the falsifiability of Newton's law of gravitation, as defined by Popper, depends purely on the logical relation it has with a statement such as "The brick fell upwards when released".[21][T] A brick that falls upwards would not alone falsify Newton's law of gravitation. The capacity to verify the absence of conditions such as a hidden string[U] attached to the brick is also needed for this state of affairs[A] to eventually falsify Newton's law of gravitation. However, these applied methodological considerations are irrelevant to falsifiability, because it is a logical criterion. The empirical requirement on the potential falsifier, also called the material requirement,[V] is only that it is observable inter-subjectively with existing technologies. There is no requirement that the potential falsifier can actually show the law to be false. The purely logical contradiction, together with the material requirement, is sufficient.
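The point made above, that Bayesian "inductive" steps are really deductive consequences of explicitly assumed probabilities, can be shown with a single application of Bayes' theorem; all numbers below are invented for illustration:

```python
# Bayes' theorem is a deductive consequence of the probability axioms:
# given an explicitly assumed prior and likelihoods, the posterior
# follows by calculation, with no extra inductive inference rule.

prior_h = 0.5            # assumed prior P(H) for some hypothesis H
p_e_given_h = 0.9        # assumed likelihood P(E | H)
p_e_given_not_h = 0.5    # assumed likelihood P(E | not H)

# Total probability of the evidence E under the assumptions:
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Posterior P(H | E), deduced from the assumptions above:
posterior_h = p_e_given_h * prior_h / p_e

assert abs(posterior_h - 9 / 14) < 1e-12   # 0.45 / 0.70 = 9/14
```

Every line is pure arithmetic licensed by the axioms; what is inductive, in the usual sense, is only the step of adopting the prior and the likelihoods in the first place.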
The logical part consists of theories, statements, and their purely logical relationship together with this material requirement, which is needed for a connection with the methodological part. The methodological part consists, in Popper's view, of informal rules, which are used to guess theories, accept observation statements as factual, etc. These include statistical tests: Popper is aware that observation statements are accepted with the help of statistical methods and that these involve methodological decisions.[22] When this distinction is applied to the term "falsifiability", it corresponds to a distinction between two completely different meanings of the term. The same is true for the term "falsifiable". Popper said that he only uses "falsifiability" or "falsifiable" in reference to the logical side and that, when he refers to the methodological side, he speaks instead of "falsification" and its problems.[F] Popper said that methodological problems require proposing methodological rules. For example, one such rule is that, if one refuses to go along with falsifications, then one has retired oneself from the game of science.[23] The logical side does not have such methodological problems, in particular with regard to the falsifiability of a theory, because basic statements are not required to be possible. Methodological rules are only needed in the context of actual falsifications. So observations have two purposes in Popper's view. On the methodological side, observations can be used to show that a law is false, which Popper calls falsification. On the logical side, observations, which are purely logical constructions, do not show a law to be false, but contradict a law to show its falsifiability. Unlike falsifications, and free from the problems of falsification, these contradictions establish the value of the law, which may eventually be corroborated.
Popper wrote that an entire literature exists because this distinction between the logical aspect and the methodological aspect was not observed.[G] This is still seen in more recent literature. For example, in their 2019 article Evidence based medicine as science, Vere and Gibson wrote "[falsifiability has] been considered problematic because theories are not simply tested through falsification but in conjunction with auxiliary assumptions and background knowledge."[24] In Popper's view of science, statements of observation can be analyzed within a logical structure independently of any factual observations.[W][X] The set of all purely logical observations that are considered constitutes the empirical basis. Popper calls them the basic statements or test statements. They are the statements that can be used to show the falsifiability of a theory. Popper says that basic statements do not have to be possible in practice. It is sufficient that they are accepted by convention as belonging to the empirical language, a language that allows intersubjective verifiability: "they must be testable by intersubjective observation (the material requirement)".[25][Y] See the examples in section § Examples of demarcation and applications. In more than twelve pages of The Logic of Scientific Discovery,[26] Popper discusses informally which statements among those that are considered in the logical structure are basic statements. A logical structure uses universal classes to define laws. For example, in the law "all swans are white" the concept of swans is a universal class. It corresponds to a set of properties that every swan must have. It is not restricted to the swans that exist, existed or will exist. Informally, a basic statement is simply a statement that concerns only a finite number of specific instances in universal classes. In particular, an existential statement such as "there exists a black swan" is not a basic statement, because it is not specific about the instance.
On the other hand, "this swan here is black" is a basic statement. Popper says that it is a singular existential statement or simply a singular statement. So, basic statements are singular (existential) statements. Thornton says that basic statements are statements that correspond to particular "observation-reports". He then gives Popper's definition of falsifiability: "A theory is scientific if and only if it divides the class of basic statements into the following two non-empty sub-classes: (a) the class of all those basic statements with which it is inconsistent, or which it prohibits—this is the class of its potential falsifiers (i.e., those statements which, if true, falsify the whole theory), and (b) the class of those basic statements with which it is consistent, or which it permits (i.e., those statements which, if true, corroborate it, or bear it out)." As in the case of actual falsifiers, decisions must be taken by scientists to accept a logical structure and its associated empirical basis, but these are usually part of a background knowledge that scientists have in common and, often, no discussion is even necessary.[Z] The first decision described by Lakatos[27] is implicit in this agreement, but the other decisions are not needed. This agreement, if one can speak of agreement when there is not even a discussion, exists only in principle. This is where the distinction between the logical and methodological sides of science becomes important. When an actual falsifier is proposed, the technology used is considered in detail and, as described in section § Dogmatic falsificationism, an actual agreement is needed. This may require using a deeper empirical basis,[AA] hidden within the current empirical basis, to make sure that the properties or values used in the falsifier were obtained correctly (Andersson 2016 gives some examples).
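Popper's definition quoted above, which divides the basic statements into a non-empty class of potential falsifiers and a non-empty class of permitted statements, can be sketched as follows; the toy theories and basic statements are invented for the example:

```python
# Falsifiability as a partition of basic statements (toy model).
# A theory is modeled as a predicate telling whether it is consistent
# with a given basic statement.

def classify(theory, basic_statements):
    """Split basic statements into potential falsifiers and permitted ones."""
    falsifiers = [s for s in basic_statements if not theory(s)]
    permitted = [s for s in basic_statements if theory(s)]
    return falsifiers, permitted

def is_falsifiable(theory, basic_statements):
    """Scientific in Popper's sense: both sub-classes are non-empty."""
    falsifiers, permitted = classify(theory, basic_statements)
    return bool(falsifiers) and bool(permitted)

# Basic statements about single observed swans: (identifier, colour).
basics = [("swan-1", "white"), ("swan-2", "black"), ("swan-3", "white")]

all_swans_white = lambda s: s[1] == "white"  # prohibits black swans
tautology = lambda s: True                   # prohibits nothing at all

assert is_falsifiable(all_swans_white, basics)   # has potential falsifiers
assert not is_falsifiable(tautology, basics)     # unfalsifiable: no falsifiers
```

The tautological "theory" fails the criterion for exactly the reason Popper gives for psychoanalysis: because it excludes nothing, its class of potential falsifiers is empty.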
Popper says that despite the fact that the empirical basis can be shaky, more comparable to a swamp than to solid ground,[AA] the definition that is given above is simply the formalization of a natural requirement on scientific theories, without which the whole logical process of science[W] would not be possible. In his analysis of the scientific nature of universal laws, Popper arrived at the conclusion that laws must "allow us to deduce, roughly speaking, more empirical singular statements than we can deduce from the initial conditions alone."[28] A singular statement that has one part only cannot contradict a universal law. A falsifier of a law always has two parts: the initial condition and the singular statement that contradicts the prediction. However, there is no need to require that falsifiers have two parts in the definition itself. This removes the requirement that a falsifiable statement must make predictions. In this way, the definition is more general and allows the basic statements themselves to be falsifiable.[28] Criteria that require that a law must be predictive, just as is required by falsifiability (when applied to laws), Popper wrote, "have been put forward as criteria of the meaningfulness of sentences (rather than as criteria of demarcation applicable to theoretical systems) again and again after the publication of my book, even by critics who pooh-poohed my criterion of falsifiability."[29] Scientists such as the Nobel laureate Herbert A.
Simon have studied the semantic aspects of the logical side of falsifiability.[30][31] Here it is proposed that there are two formal requirements for a formally defined and stringent falsifiability that a scientific theory must satisfy to qualify as scientific: that they be finitely and irrevocably testable.[32] These studies were done in the perspective that a logic is a relation between formal sentences in languages and a collection of mathematical structures, each of which is considered a model within model theory.[32] The relation, usually denoted A ⊨ φ, says that the formal sentence φ is true when interpreted in the structure A—it provides the semantics of the language.[AB] According to Rynasiewicz, in this semantic perspective, falsifiability as defined by Popper means that in some observation structure (in the collection) there exists a set of observations which refutes the theory.[33] An even stronger notion of falsifiability was considered, which requires, not only that there exists one structure with a contradicting set of observations, but also that all structures in the collection that cannot be expanded to a structure that satisfies φ contain such a contradicting set of observations.[34] In response to Lakatos, who suggested that Newton's theory was as hard to show falsifiable as Freud's psychoanalytic theory, Popper gave the example of an apple that moves from the ground up to a branch and then starts to dance from one branch to another.[T] Popper thought that this was a basic statement that was a potential falsifier for Newton's theory, because the position of the apple at different times can be measured. Popper's claims on this point are controversial, since Newtonian physics does not deny that there could be forces acting on the apple that are stronger than Earth's gravity.
Another example of a basic statement is "The inertial mass of this object is ten times larger than its gravitational mass." This is a basic statement because the inertial mass and the gravitational mass can both be measured separately, even though it never happens that they are different. It is, as described by Popper, a valid falsifier for Einstein's equivalence principle.[AC] In a discussion of the theory of evolution, Popper mentioned industrial melanism[35] as an example of a falsifiable law. A corresponding basic statement that acts as a potential falsifier is "In this industrial area, the relative fitness of the white-bodied peppered moth is high." Here "fitness" means "reproductive success over the next generation".[AD][AE] It is a basic statement, because it is possible to separately determine the kind of environment, industrial vs natural, and the relative fitness of the white-bodied form (relative to the black-bodied form) in an area, even though it never happens that the white-bodied form has a high relative fitness in an industrial area. A famous example of a basic statement from J. B. S. Haldane is "[These are] fossil rabbits in the Precambrian era." This is a basic statement because it is possible to find a fossil rabbit and to determine that the date of a fossil is in the Precambrian era, even though it never happens that the date of a rabbit fossil is in the Precambrian era. Despite opinions to the contrary,[36] sometimes wrongly attributed to Popper,[AF] this shows the scientific character of paleontology or the history of the evolution of life on Earth, because it contradicts the hypothesis in paleontology that all mammals existed in a much more recent era. Richard Dawkins adds that any other modern animal, such as a hippo, would suffice.[37][38][39] A simple example of a non-basic statement is "This angel does not have large wings."
It is not a basic statement, because though the absence of large wings can be observed, no technology (independent of the presence of wings[AG]) exists to identify angels. Even if it is accepted that angels exist, the sentence "All angels have large wings" is not falsifiable. Another example from Popper of a non-basic statement is "This human action is altruistic." It is not a basic statement, because no accepted technology allows us to determine whether or not an action is motivated by self-interest. Because no basic statement falsifies it, the statement "All human actions are egotistic, motivated by self-interest" is not falsifiable.[AH] Some adherents of young-Earth creationism make an argument (called the Omphalos hypothesis after the Greek word for navel) that the world was created with the appearance of age; e.g., the sudden appearance of a mature chicken capable of laying eggs. This ad hoc hypothesis introduced into young-Earth creationism is unfalsifiable because it says that the time of creation (of a species) measured by the accepted technology is illusory, and no accepted technology is proposed to measure the claimed "actual" time of creation. Moreover, if the ad hoc hypothesis says that the world was created as we observe it today without stating further laws, by definition it cannot be contradicted by observations and thus is not falsifiable. This is discussed by Dienes in the case of a variation on the Omphalos hypothesis, which, in addition, specifies that God made the creation in this way to test our faith.[40] Grover Maxwell discussed statements such as "All men are mortal."[41] This is not falsifiable, because it does not matter how old a man is, maybe he will die next year.[42] Maxwell said that this statement is nevertheless useful, because it is often corroborated. He coined the term "corroboration without demarcation".
Popper's view is that it is indeed useful, because Popper considers that metaphysical statements can be useful, but also because it is indirectly corroborated by the corroboration of the falsifiable law "All men die before the age of 150." For Popper, if no such falsifiable law exists, then the metaphysical law is less useful, because it is not indirectly corroborated.[AI] This kind of non-falsifiable statement in science was noticed by Carnap as early as 1937.[43] Maxwell also used the example "All solids have a melting point." This is not falsifiable, because maybe the melting point will be reached at a higher temperature.[41][42] The law is falsifiable and more useful if we specify an upper bound on melting points or a way to calculate this upper bound.[AJ] Another example from Maxwell is "All beta decays are accompanied with a neutrino emission from the same nucleus."[44] This is also not falsifiable, because maybe the neutrino can be detected in a different manner. The law is falsifiable and much more useful from a scientific point of view if the method to detect the neutrino is specified.[45] Maxwell said that most scientific laws are metaphysical statements of this kind,[46] which, Popper said, need to be made more precise before they can be indirectly corroborated.[AI] In other words, specific technologies must be provided to make the statements inter-subjectively verifiable, i.e., so that scientists know what the falsification or its failure actually means. In his critique of the falsifiability criterion, Maxwell considered the requirement for decisions in the falsification of both the emission of neutrinos (see § Dogmatic falsificationism) and the existence of the melting point.[44] For example, he pointed out that, had no neutrino been detected, it could have been because some conservation law is false. Popper did not argue against the problems of falsification per se. He always acknowledged these problems. Popper's response was at the logical level.
For example, he pointed out that, if a specific way is given to trap the neutrino, then, at the level of the language, the statement is falsifiable, because "no neutrino was detected after using this specific way" formally contradicts it (and it is inter-subjectively verifiable—people can repeat the experiment). In the 5th and 6th editions of On the Origin of Species, following a suggestion of Alfred Russel Wallace, Darwin used "Survival of the fittest", an expression first coined by Herbert Spencer, as a synonym for "Natural Selection".[AK] Popper and others said that, if one uses the most widely accepted definition of "fitness" in modern biology (see subsection § Evolution), namely reproductive success itself, the expression "survival of the fittest" is a tautology.[AL][AM][AN] Darwinist Ronald Fisher worked out mathematical theorems to help answer questions regarding natural selection. But, for Popper and others, there is no (falsifiable) law of Natural Selection in this, because these tools only apply to some rare traits.[AO][AP] Instead, for Popper, the work of Fisher and others on Natural Selection is part of an important and successful metaphysical research program.[47] Popper said that not all unfalsifiable statements are useless in science. Mathematical statements are good examples. Like all formal sciences, mathematics is not concerned with the validity of theories based on observations in the empirical world, but rather, mathematics is occupied with the theoretical, abstract study of such topics as quantity, structure, space and change.
Methods of the mathematical sciences are, however, applied in constructing and testing scientific models dealing with observable reality. Albert Einstein wrote, "One reason why mathematics enjoys special esteem, above all other sciences, is that its laws are absolutely certain and indisputable, while those of other sciences are to some extent debatable and in constant danger of being overthrown by newly discovered facts."[48] Popper made a clear distinction between the original theory of Marx and what came to be known as Marxism later on.[49] For Popper, the original theory of Marx contained genuine scientific laws. Though they could not make preordained predictions, these laws constrained how changes can occur in society. One of them was that changes in society cannot "be achieved by the use of legal or political means".[AQ] In Popper's view, this was both testable and subsequently falsified. "Yet instead of accepting the refutations", Popper wrote, "the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. ... They thus gave a 'conventionalist twist' to the theory; and by this stratagem, they destroyed its much advertised claim to scientific status."[AR][AS] Popper's attacks were not directed toward Marxism, or Marx's theories, which were falsifiable, but toward Marxists who he considered to have ignored the falsifications which had happened.[50] Popper more fundamentally criticized 'historicism' in the sense of any preordained prediction of history, given what he saw as our right, ability and responsibility to control our own destiny.[50] Falsifiability has been used in the McLean v. Arkansas case (in 1982),[51] the Daubert case (in 1993)[52] and other cases. A survey of 303 federal judges conducted in 1998[AT] found that "[P]roblems with the nonfalsifiable nature of an expert's underlying theory and difficulties with an unknown or too-large error rate were cited in less than 2% of cases."[53] In the ruling of the McLean v.
Arkansascase, JudgeWilliam Overtonused falsifiability as one of the criteria to determine that "creation science" was not scientific and should not be taught inArkansaspublic schoolsas such (it can be taught as religion). In his testimony, philosopherMichael Rusedefined the characteristics which constitute science as (seePennock 2000, p. 5, andRuse 2010): In his conclusion related to this criterion Judge Overton stated that: While anybody is free to approach a scientific inquiry in any fashion they choose, they cannot properly describe the methodology as scientific, if they start with the conclusion and refuse to change it regardless of the evidence developed during the course of the investigation. In several cases of theUnited States Supreme Court, the court described scientific methodology using thefive Daubert factors, which include falsifiability.[AU]The Daubert result cited Popper and other philosophers of science: Ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge that will assist the trier of fact will be whether it can be (and has been) tested.Scientific methodology today is based on generating hypotheses and testing them to see if they can be falsified; indeed, this methodology is what distinguishes science from other fields of human inquiry.Green 645. See also C. Hempel, Philosophy of Natural Science 49 (1966) ([T]he statements constituting a scientific explanation must be capable of empirical test); K. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge 37 (5th ed. 1989) ([T]he criterion of the scientific status of a theory is its falsifiability, or refutability, or testability) (emphasis deleted). David H. 
Kaye[AV] said that references to the Daubert majority opinion confused falsifiability and falsification and that "inquiring into the existence of meaningful attempts at falsification is an appropriate and crucial consideration in admissibility determinations."[AW] Considering the specific detection procedure that was used in the neutrino experiment, without mentioning its probabilistic aspect, Popper wrote "it provided a test of the much more significant falsifiable theory that such emitted neutrinos could be trapped in a certain way". In this manner, in his discussion of the neutrino experiment, Popper did not raise the probabilistic aspect of the experiment at all.[45] Together with Maxwell, who raised the problems of falsification in the experiment,[44] he was aware that some convention must be adopted to fix what it means to detect or not detect a neutrino in this probabilistic context. This is the third kind of decision mentioned by Lakatos.[54] For Popper and most philosophers, observations are theory impregnated. In this example, the theory that impregnates observations (and justifies that we conventionally accept the potential falsifier "no neutrino was detected") is statistical. In statistical language, the potential falsifier that can be statistically accepted (not rejected, to say it more correctly) is typically the null hypothesis, as understood even in popular accounts of falsifiability.[55][56][57] Different ways are used by statisticians to draw conclusions about hypotheses on the basis of available evidence. Fisher, Neyman and Pearson proposed approaches that require no prior probabilities on the hypotheses that are being studied.
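The null-hypothesis convention described above can be sketched in code. The following is a minimal illustration, with entirely hypothetical numbers (not the actual Cowan–Reines data), of how a methodological decision rule turns a probabilistic experiment into a yes/no falsification step: the potential falsifier "no signal" is modelled as a null hypothesis, and it is conventionally rejected when a one-sided p-value falls below an agreed threshold.

```python
from math import comb

def binomial_p_value(n, k, p0):
    """One-sided p-value: probability of observing k or more detections
    in n trials if the null hypothesis (background rate p0) is true."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 100 trials, a 5% background detection rate,
# and 12 detections actually observed.
p = binomial_p_value(100, 12, 0.05)

# The methodological decision rule, fixed by convention in advance:
# reject the null hypothesis ("no signal") when p < 0.05.
null_rejected = p < 0.05
```

The threshold 0.05 is itself a convention, which is exactly the point of the passage: the falsification is not a purely logical consequence of the observations but depends on a decision adopted by the scientific community.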
In contrast, Bayesian inference emphasizes the importance of prior probabilities.[58] But, as far as falsification as a yes/no procedure in Popper's methodology is concerned, any approach that provides a way to accept or not a potential falsifier can be used, including approaches that use Bayes' theorem and estimations of prior probabilities that are made using critical discussions and reasonable assumptions taken from the background knowledge.[AX] There is no general rule that considers as falsified a hypothesis with small Bayesian revised probability, because, as pointed out by Mayo and argued before by Popper, individual outcomes described in detail will easily have very small probabilities under available evidence without being genuine anomalies.[59] Nevertheless, Mayo adds, "they can indirectly falsify hypotheses by adding a methodological falsification rule".[59] In general, Bayesian statistics can play a role in critical rationalism in the context of inductive logic,[60] which is said to be inductive because implications are generalized to conditional probabilities.[61] According to Popper and other philosophers such as Colin Howson, Hume's argument precludes inductive logic, but only when the logic makes no use "of additional assumptions: in particular, about what is to be assigned positive prior probability".[62] Inductive logic itself is not precluded, especially not when it is a deductively valid application of Bayes' theorem that is used to evaluate the probabilities of the hypotheses using the observed data and what is assumed about the priors. Gelman and Shalizi mentioned that Bayesian statisticians do not have to disagree with the non-inductivists.[63] Because statisticians often associate statistical inference with induction, Popper's philosophy is often said to have a hidden form of induction. For example, Mayo wrote "The falsifying hypotheses ... necessitate an evidence-transcending (inductive) statistical inference.
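The deductively valid application of Bayes' theorem mentioned above can be illustrated with a toy calculation. All numbers here are hypothetical; the point is only that a small revised probability follows mechanically from assumed priors and likelihoods and, as the text notes, does not by itself amount to a falsification without an additional methodological rule.

```python
def posterior(prior_h, likelihood_h, likelihood_alt):
    """Bayes' theorem for two exhaustive hypotheses H and not-H:
    P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|not-H)P(not-H))."""
    prior_alt = 1 - prior_h
    evidence = likelihood_h * prior_h + likelihood_alt * prior_alt
    return likelihood_h * prior_h / evidence

# Hypothetical values: prior P(H) = 0.5; the observed outcome has
# probability 0.02 under H and 0.4 under the alternative.
revised = posterior(0.5, 0.02, 0.4)  # a small revised probability for H
```

The deduction from priors and likelihoods to the posterior is valid; what remains conventional is the assignment of the priors and any rule that translates a small posterior into a rejection.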
This is hugely problematic for Popper".[64] Yet, also according to Mayo, Popper [as a non-inductivist] acknowledged the useful role of statistical inference in the falsification problems: she mentioned that Popper wrote her (in the context of falsification based on evidence) "I regret not studying statistics" and that her thought was then "not as much as I do".[65] Imre Lakatos divided the problems of falsification into two categories. The first category corresponds to decisions that must be agreed upon by scientists before they can falsify a theory. The other category emerges when one tries to use falsifications and corroborations to explain progress in science. Lakatos described four kinds of falsificationism in view of how they address these problems. Dogmatic falsificationism ignores both types of problems. Methodological falsificationism addresses the first type of problem by accepting that decisions must be taken by scientists. Naive methodological falsificationism or naive falsificationism does nothing to address the second type of problem.[66][67] Lakatos used dogmatic and naive falsificationism to explain how Popper's philosophy changed over time and viewed sophisticated falsificationism as his own improvement on Popper's philosophy, but also said that Popper sometimes appears as a sophisticated falsificationist.[68] Popper responded that Lakatos misrepresented his intellectual history with these terminological distinctions.[69] A dogmatic falsificationist ignores that every observation is theory-impregnated. Being theory-impregnated means that it goes beyond direct experience. For example, the statement "Here is a glass of water" goes beyond experience, because the concepts of glass and water "denote physical bodies which exhibit a certain law-like behaviour" (Popper).[70] This leads to the critique that it is unclear which theory is falsified.
Is it the one that is being studied or the one behind the observation?[AY]This is sometimes called the 'Duhem–Quine problem'.An example is Galileo's refutationof the theory that celestial bodies are faultless crystal balls. Many considered that it was the optical theory of the telescope that was false, not the theory of celestial bodies. Another example is the theory that neutrinos are emitted inbeta decays. Had they not been observed in theCowan–Reines neutrino experiment, many would have considered that the strength of thebeta-inverse reactionused to detect the neutrinos was not sufficiently high. At the time,Grover Maxwell[es]wrote, the possibility that this strength was sufficiently high was a "pious hope".[44] A dogmatic falsificationist ignores the role of auxiliary hypotheses. The assumptions or auxiliary hypotheses of a particular test are all the hypotheses that are assumed to be accurate in order for the test to work as planned.[71]The predicted observation that is contradicted depends on the theory and these auxiliary hypotheses. Again, this leads to the critique that it cannot be told if it is the theory or one of the required auxiliary hypotheses that is false. Lakatos gives the example of the path of a planet. If the path contradicts Newton's law, we will not know if it is Newton's law that is false or the assumption that no other body influenced the path. Lakatos says that Popper's solution to these criticisms requires that one relaxes the assumption that an observation can show a theory to be false:[F] If a theory is falsified [in the usual sense], it is proven false; if it is 'falsified' [in the technical sense], it may still be true. 
Methodological falsificationismreplaces the contradicting observation in a falsification with a "contradicting observation" accepted by convention among scientists, a convention that implies four kinds of decisions that have these respective goals: the selection of allbasic statements(statements that correspond to logically possible observations), selection of theaccepted basic statementsamong the basic statements, making statistical laws falsifiable and applying the refutation to the specific theory (instead of an auxiliary hypothesis).[AZ]The experimental falsifiers and falsifications thus depend on decisions made by scientists in view of the currently accepted technology and its associated theory. According to Lakatos, naive falsificationism is the claim that methodological falsifications can by themselves explain how scientific knowledge progresses. Very often a theory is still useful and used even after it is found in contradiction with some observations. Also, when scientists deal with two or more competing theories which are both corroborated, considering only falsifications, it is not clear why one theory is chosen above the other, even when one is corroborated more often than the other. In fact, a stronger version of the Quine-Duhem thesis says that it is not always possible to rationally pick one theory over the other using falsifications.[72]Considering only falsifications, it is not clear why often a corroborating experiment is seen as a sign of progress. 
Popper's critical rationalism uses both falsifications and corroborations to explain progress in science.[BA]How corroborations and falsifications can explain progress in science was a subject of disagreement between many philosophers, especially between Lakatos and Popper.[BB] Popper distinguished between the creative and informal process from which theories and accepted basic statements emerge and the logical and formal process where theories are falsified or corroborated.[E][BC][BD]The main issue is whether the decision to select a theory among competing theories in the light of falsifications and corroborations could be justified using some kind of formal logic.[BE]It is a delicate question, because this logic would be inductive: it justifies a universal law in view of instances. Also, falsifications, because they are based on methodological decisions, are useless in a strict justification perspective. The answer of Lakatos and many others to that question is that it should.[BF][BG]In contradistinction, for Popper, the creative and informal part is guided by methodological rules, which naturally say to favour theories that are corroborated over those that are falsified,[BH]but this methodology can hardly be made rigorous.[BI] Popper's way to analyze progress in science was through the concept ofverisimilitude, a way to define how close a theory is to the truth, which he did not consider very significant, except (as an attempt) to describe a concept already clear in practice. 
Later, it was shown that the specific definition proposed by Popper cannot distinguish between two theories that are false, which is the case for all theories in the history of science.[BJ] Today, there is still ongoing research on the general concept of verisimilitude.[73] Hume explained induction with a theory of the mind[74] that was in part inspired by Newton's theory of gravitation.[BK] Popper rejected Hume's explanation of induction and proposed his own mechanism: science progresses by trial and error within an evolutionary epistemology. Hume believed that his psychological induction process follows laws of nature, but, for him, this does not imply the existence of a method of justification based on logical rules. In fact, he argued that any induction mechanism, including the mechanism described by his theory, could not be justified logically.[75] Similarly, Popper adopted an evolutionary epistemology, which implies that some laws explain progress in science, but he insisted that the process of trial and error is hardly rigorous and that there is always an element of irrationality in the creative process of science. The absence of a method of justification is a built-in aspect of Popper's trial and error explanation. As rational as they can be, these explanations that refer to laws but cannot be turned into methods of justification (and thus do not contradict Hume's argument or its premises) were not sufficient for some philosophers. In particular, Russell once expressed the view that, if Hume's problem cannot be solved, "there is no intellectual difference between sanity and insanity"[75] and actually proposed a method of justification.[76][77] He rejected Hume's premise that there is a need to justify any principle that is itself used to justify induction.[BL] It might seem that this premise is hard to reject, but to avoid circular reasoning we do reject it in the case of deductive logic.
It makes sense to also reject this premise in the case of principles to justify induction. Lakatos's proposal of sophisticated falsificationism was very natural in that context. Therefore, Lakatos urged Popper to find an inductive principle behind the trial and error learning process,[BM] and sophisticated falsificationism was his own approach to address this challenge.[BN][BO] Kuhn, Feyerabend, Musgrave and others mentioned, and Lakatos himself acknowledged, that, as a method of justification, this attempt failed, because there was no normative methodology to justify: Lakatos's methodology was anarchy in disguise.[BP][BQ][BR][BS][BT] Popper's philosophy is sometimes said to fail to recognize the Quine–Duhem thesis, which would make it a form of dogmatic falsificationism. For example, Watkins wrote "apparently forgetting that he had once said 'Duhem is right [...]', Popper set out to devise potential falsifiers just for Newton's fundamental assumptions".[78] However, Popper's philosophy is not always characterized as falsificationism in the pejorative sense associated with dogmatic or naive falsificationism.[79] The problems of falsification are acknowledged by the falsificationists. For example, Chalmers points out that falsificationists freely admit that observation is theory impregnated.[80] Thornton, referring to Popper's methodology, says that the predictions inferred from conjectures are not directly compared with the facts simply because all observation-statements are theory-laden.[81] For the critical rationalists, the problems of falsification are not an issue, because they do not try to make experimental falsifications logical or to logically justify them, nor to use them to logically explain progress in science.
Instead, their faith rests on critical discussions around these experimental falsifications.[5] Lakatos made a distinction between a "falsification" (with quotation marks) in Popper's philosophy and a falsification (without quotation marks) that can be used in a systematic methodology where rejections are justified.[82] He knew that Popper's philosophy is not and has never been about this kind of justification, but he felt that it should have been.[BM] Sometimes, Popper and other falsificationists say that when a theory is falsified it is rejected,[83][84] which appears as dogmatic falsificationism, but the general context is always critical rationalism, in which all decisions are open to critical discussions and can be revised.[85] As described in section § Naive falsificationism, Lakatos and Popper agreed that universal laws cannot be logically deduced (except from laws that say even more). But unlike Popper, Lakatos felt that, if the explanation for new laws cannot be deductive, it must be inductive. He urged Popper explicitly to adopt some inductive principle[BM] and set himself the task of finding an inductive methodology.[BU] However, the methodology that he found did not offer any exact inductive rules. In a response to Kuhn, Feyerabend and Musgrave, Lakatos acknowledged that the methodology depends on the good judgment of the scientists.[BP] Feyerabend wrote in "Against Method" that Lakatos's methodology of scientific research programmes is epistemological anarchism in disguise[BQ] and Musgrave made a similar comment.[BR] In more recent work, Feyerabend says that Lakatos uses rules, but whether or not to follow any of these rules is left to the judgment of the scientists.[BS] This is also discussed elsewhere.[BT] Popper also offered a methodology with rules, but these rules are non-inductive rules, because they are not by themselves used to accept laws or establish their validity. They do that through the creativity or "good judgment" of the scientists only.
For Popper, the required non-deductive component of science never had to be an inductive methodology. He always viewed this component as a creative process beyond the explanatory reach of any rational methodology, but one nonetheless used to decide which theories should be studied and applied, find good problems and guess useful conjectures.[BV] Quoting Einstein to support his view, Popper said that this renders obsolete the need for an inductive methodology or logical path to the laws.[BW][BX][BY] For Popper, no inductive methodology was ever proposed to satisfactorily explain science. Section § Methodless creativity versus inductive methodology says that both Lakatos's and Popper's methodologies are non-inductive. Yet Lakatos's methodology importantly extended Popper's methodology: it added a historiographical component to it. This allowed Lakatos to find corroborations for his methodology in the history of science. The basic units in his methodology, which can be abandoned or pursued, are research programmes. Research programmes can be degenerative or progressive, and only degenerative research programmes must be abandoned at some point. For Lakatos, this is mostly corroborated by facts in history. In contradistinction, Popper did not propose his methodology as a tool to reconstruct the history of science. Yet, sometimes, he did refer to history to corroborate his methodology. For example, he remarked that theories that were considered great successes were also the most likely to be falsified. Zahar's view was that, with regard to corroborations found in the history of science, there was only a difference of emphasis between Popper and Lakatos.
As an anecdotal example, in one of his articles Lakatos challenged Popper to show that his theory was falsifiable: he asked "Under what conditions would you give up your demarcation criterion?".[86]Popper replied "I shall give up my theory if Professor Lakatos succeeds in showing that Newton's theory is no more falsifiable by 'observable states of affairs' than is Freud's."[87]According to David Stove, Lakatos succeeded, since Lakatos showed there is no such thing as a "non-Newtonian" behaviour of an observable object. Stove argued that Popper's counterexamples to Lakatos were either instances ofbegging the question, such as Popper's example of missiles moving in a "non-Newtonian track", or consistent with Newtonian physics, such as objects not falling to the ground without "obvious" countervailing forces against Earth's gravity.[88] Thomas Kuhnanalyzed what he calls periods of normal science as well as revolutions from one period of normal science to another,[89]whereas Popper's view is that only revolutions are relevant.[BZ][CA]For Popper, the role of science, mathematics and metaphysics, actually the role of any knowledge, is to solve puzzles.[CB]In the same line of thought, Kuhn observes that in periods of normal science the scientific theories, which represent some paradigm, are used to routinely solve puzzles and the validity of the paradigm is hardly in question. It is only when important new puzzles emerge that cannot be solved by accepted theories that a revolution might occur. This can be seen as a viewpoint on the distinction made by Popper between the informal and formal process in science (see section§ Naive falsificationism). In the big picture presented by Kuhn, the routinely solved puzzles are corroborations. Falsifications or otherwise unexplained observations are unsolved puzzles. All of these are used in the informal process that generates a new kind of theory. 
Kuhn says that Popper emphasizes formal or logical falsifications and fails to explain how the social and informal process works. Popper often uses astrology as an example of a pseudoscience. He says that it is not falsifiable because both the theory itself and its predictions are too imprecise.[CC]Kuhn, as a historian of science, remarked that many predictions made by astrologers in the past were quite precise and they were very often falsified. He also said that astrologers themselves acknowledged these falsifications.[CD] Paul Feyerabendrejected any prescriptive methodology at all. He rejected Lakatos's argument forad hochypothesis, arguing that science would not have progressed without making use of any and all available methods to support new theories. He rejected any reliance on a scientific method, along with any special authority for science that might derive from such a method.[90]He said that if one is keen to have a universally valid methodological rule,epistemological anarchismoranything goeswould be the only candidate.[91]For Feyerabend, any special status that science might have, derives from the social and physical value of the results of science rather than its method.[92] In their bookFashionable Nonsense(from 1997, published in the UK asIntellectual Impostures) the physicistsAlan SokalandJean Bricmontcriticised falsifiability.[93]They include this critique in the "Intermezzo" chapter, where they expose their own views on truth in contrast to the extreme epistemological relativism of postmodernism. Even though Popper is clearly not a relativist, Sokal and Bricmont discuss falsifiability because they see postmodernist epistemological relativism as a reaction to Popper's description of falsifiability, and more generally, to his theory of science.[94]
https://en.wikipedia.org/wiki/Falsifiability
Moving the goalposts (or shifting the goalposts) is a metaphor, derived from goal-based sports such as football and hockey, that means to change the rule or criterion ("goal") of a process or competition while it is still in progress, in such a way that the new goal offers one side an advantage or disadvantage.[1] This phrase is British in origin and derives from sports that use goalposts.[2][3] The figurative use alludes to the perceived unfairness in changing the goal one is trying to achieve after the process one is engaged in (such as a game of football) has already started.[1] Moving the goalposts is an informal fallacy in which evidence presented in response to a specific claim is dismissed and some other (often greater) evidence is demanded. That is, after an attempt has been made to score a goal, the goalposts are moved to exclude the attempt.[4] The problem with changing the rules of the game is that the meaning of the result is changed, too.[5] Some include this metaphor as a description of the tactics of harassment. In such cases, a redefining of another's goals may in reality be intentionally devised so as to ensure that an athlete, for example, will never be able to achieve the ever-shifting goals.[6] In workplace bullying, shifting the goalposts is a conventional tactic in the process of humiliation.[7] Karl Popper coined the concept conventionalist twist or conventionalist stratagem in Conjectures and Refutations with a similar use as this fallacy, but in the context of the falsifiability of certain scientific theories.[8] Deliberately moving the goalposts constitutes a professional foul in rugby football and an unfair act in gridiron football. The officials are granted carte blanche to assess whatever penalty they see fit, including awarding the score for any attempt at a goal missed or invalidating any goal scored as a result of the moved goalposts.
In both rugby and gridiron, goalposts are anchored into the ground; the distance they can be moved (most easily in gridiron by pulling down on one end of the crossbar to tilt both posts either to the left or the right) is far more restricted.[citation needed]Inadvertently moving the goalposts in atouchdown celebrationis anunsportsmanlike conductpenalty of 15 yards against the offending team. Moving goalposts is common inice hockey, where physical contact with the posts is common. If the goalposts are knocked off their moorings in the course of play, play is stopped until the goal is put back in place. If the goalposts are deliberately moved to stop an opponent from scoring, the opponent may be granted apenalty shot; if the goaltender does so, the goaltender may be ejected from the game, a rule imposed at most levels of the game in 2014 afterDavid Leggiodeliberately moved the goalposts during a two-person breakaway, believing he would have a better chance of stopping a penalty shot.[9][10]Leggio later used the tactic in theDeutsche Eishockey Liga, where he had played since 2015; that league had not yet outlawed the maneuver,[11]but promptly did so after Leggio's first attempt at using the tactic.[12]The DEL instead automaticallyawards the goalto the opposing team.[12]TheNational Hockey Leagueapproved this rule in 2019.[13] In 2009, Danish goalkeeperKim Christensenwas recorded on camera moving the goalposts in order to gain advantage over the opposing team. Christensen's moving the goalposts was discovered by a referee about 20 minutes into the game, but Christensen did not suffer a suspension or any fines for his actions.[14][15] In May 2022, a scandal erupted when video published byAftenpostenshowedViking FK's goalkeeperPatrik Gunnarssonreducing his goal size by moving the goalposts by 15–20 centimetres (6–8 in), after the referee inspection but prior to kickoff, in home games during the 2022 season.[16][17][18]
https://en.wikipedia.org/wiki/Moving_the_goalposts
No true Scotsmanorappeal to purityis aninformal fallacyin which one modifies a prior claim in response to acounterexampleby asserting the counterexample is excluded by definition.[1][2][3]Rather than admitting error or providing evidence to disprove the counterexample, the original claim is changed by using a non-substantive modifier such as "true", "pure", "genuine", "authentic", "real", or other similar terms.[4][2] PhilosopherBradley Dowdenexplains the fallacy as an "ad hoc rescue" of a refuted generalization attempt.[1]The following is a simplified rendition of the fallacy:[5] Person A: "NoScotsmanputs sugar on hisporridge."Person B: "But my uncle Angus is a Scotsman and he puts sugar on his porridge."Person A: "But notrueScotsman puts sugar on his porridge." The "no true Scotsman" fallacy is committed when the arguer satisfies the following conditions:[3][4][6] An appeal to purity is commonly associated with protecting a preferred group. Scottish national pride may be at stake if someone regularly considered to be Scottish commits a heinous crime. To protect people of Scottish heritage from a possible accusation ofguilt by association, one may use this fallacy to deny that the group is associated with this undesirable member or action. "NotrueScotsman would do something so undesirable"; i.e., the people who would do such a thing aretautologically(definitionally) excluded from being part of our group such that they cannot serve as a counterexample to the group's good nature.[4] The description of the fallacy in this form is attributed to the British philosopherAntony Flew, who wrote, in his 1966 bookGod & Philosophy, In this ungracious move a brash generalization, such asNo Scotsmen put sugar on their porridge, when faced with falsifying facts, is transformed while you wait into an impotent tautology: if ostensible Scotsmen put sugar on their porridge, then this is by itself sufficient to prove them nottrueScotsmen. 
In his 1975 bookThinking About Thinking, Flew wrote:[4] Imagine some Scottish chauvinist settled down one Sunday morning with his customary copy ofThe News of the World. He reads the story under the headline, "SidcupSex Maniac Strikes Again". Our reader is, as he confidently expected, agreeably shocked: "No Scot would do such a thing!" Yet the very next Sunday he finds in that same favourite source a report of the even more scandalous on-goings of Mr Angus McSporran inAberdeen. This clearly constitutes a counter example, which definitively falsifies the universal proposition originally put forward. ('Falsifies' here is, of course, simply the opposite of 'verifies'; and it therefore means 'shows to be false'.) Allowing that this is indeed such a counter example, he ought to withdraw; retreating perhaps to a rather weaker claim about most or some. But even an imaginary Scot is, like the rest of us, human; and none of us always does what we ought to do. So what he is in fact saying is: "No true Scotsman would do such a thing!" David P. Goldman, writing under his pseudonym "Spengler", compared distinguishing between "mature" democracies, whichnever start wars, and "emerging democracies", which may start them, with the "no true Scotsman" fallacy. Spengler alleges that political scientists have attempted to save the "US academic dogma" that democracies never start wars against other democracies from counterexamples by declaring any democracy which does indeed start a war against another democracy to be flawed, thus maintaining that notrue and maturedemocracy starts a war against a fellow democracy.[5] Cognitive psychologistSteven Pinkerhas suggested that phrases like "no true Christian ever kills, no true communist state is repressive and no trueTrumpsupporter endorses violence" exemplify the fallacy.[7]
https://en.wikipedia.org/wiki/No_true_Scotsman
"Out of left field" (also "out in left field", and simply "left field" or "leftfield") is American slang meaning "unexpected", "odd" or "strange". InSafire's Political Dictionary, columnistWilliam Safirewrites that the phrase "out of left field" means "out of the ordinary, out of touch, far out."[1]The variation "out in left field" means alternately "removed from the ordinary, unconventional" or "out of contact with reality, out of touch."[1]He opines that the term has only a tangential connection to the political left or theLeft Coast, political slang for the coastal states of the American west.[1] Popular music historianArnold Shawwrote in 1949 for theMusic Library Associationthat the term "out of left field" was first used in the idiomatic sense of "from out of nowhere" by themusic industryto refer to a song that unexpectedly performed well in the market.[2]Based on baseball lingo, a sentence such as "That was a hit out of left field" was used bysong pluggerswho promoted recordings and sheet music, to describe a song requiring no effort to sell.[2]A "rocking chair hit" was the kind of song which came "out of left field" and sold itself, allowing the song plugger to relax.[2]A 1943 article inBillboardexpands the use to describe people unexpectedly drawn to radio broadcasting: Latest twist in radio linked with the war is the exceptional number of quasi-clerical groups and individuals who have come out of left field in recent months and are trying to buy, not promote, radio time.[3] Further instances of the phrase were published in the 1940s, including inBillboardand once in a humor book titledHow to Be Poor.[4][5][6] In May 1981, Safire asked readers ofThe New York Timesto send him any ideas they had regarding the origin of the phrase "out of left field"—he did not know where it came from, and did not refer to Shaw's work.[7]On June 28, 1981, he devoted most of his Sunday column to the phrase, offering up various responses he received.[8][9]The earliest scholarly 
citation Safire could find was a 1961 article in the journal American Speech, which defined the variation "out in left field" as meaning "disoriented, out of contact with reality."[9][10] Linguist John Algeo told Safire that the phrase most likely came from baseball observers rather than from baseball fans or players.[11]

In 1998, American English professor Robert L. Chapman, in his book American Slang, wrote that the phrase "out of left field" was in use by 1953.[12] He did not cite Shaw's work, and he did not point to printed instances of the phrase in the 1940s. Marcus Callies, an associate professor of English and philology at the University of Mainz in Germany, wrote that "the precise origin is unclear and disputed", referring to Christine Ammer's conclusion in The American Heritage Dictionary of Idioms.[13] Callies suggested that the left fielder in baseball might throw the ball to home plate in an effort to get the runner out before he scores, and that the ball, coming from behind the runner out of left field, would surprise the runner.[13]

According to the 2007 Concise New Partridge Dictionary of Slang and Unconventional English, the phrase came from baseball terminology, referring to a play in which the ball is thrown from the area covered by the left fielder to either home plate or first base, surprising the runner. Variations include "out in left field" and simply "left field".[14]

At the site of the University of Illinois Medical Center in Chicago, Illinois, a 2008 plaque marks the site of the former West Side Park, where the Chicago Cubs played from 1893 to 1915. The plaque states that the location of the county hospital and its psychiatric patients just beyond left field is the origin of the phrase "way out in left field."[15]
https://en.wikipedia.org/wiki/Out_of_left_field
In linguistics and philosophy, a presupposition is an implicit assumption about the world or background belief relating to an utterance whose truth is taken for granted in discourse. Examples of presuppositions include:

A presupposition is information that is linguistically presented as being mutually known or assumed by the speaker and addressee. This may be required for the utterance to be considered appropriate in context, but it is not uncommon for new information to be encoded in presuppositions without disrupting the flow of conversation (see accommodation below).[1] A presupposition remains mutually known by the speaker and addressee whether the utterance is placed in the form of an assertion, denial, or question, and can be associated with a specific lexical item or grammatical feature (presupposition trigger) in the utterance.

Crucially, negation of an expression does not change its presuppositions: I want to do it again and I don't want to do it again both presuppose that the subject has done it already one or more times; My wife is pregnant and My wife is not pregnant both presuppose that the subject has a wife. In this respect, presupposition is distinguished from entailment and implicature. For example, The president was assassinated entails that The president is dead, but if the expression is negated, the entailment is not necessarily true.

If the presuppositions of a sentence are not consistent with the actual state of affairs, then one of two approaches can be taken. Given the sentences My wife is pregnant and My wife is not pregnant when one has no wife, then either:

Bertrand Russell tries to solve this dilemma with two interpretations of the negated sentence: for the first phrase, Russell would claim that it is false, whereas the second would be true according to him.

A presupposition of a part of an utterance is sometimes also a presupposition of the whole utterance, and sometimes not. For instance, the phrase my wife triggers the presupposition that I have a wife.
The first sentence below carries that presupposition, even though the phrase occurs inside an embedded clause. In the second sentence, however, it does not. John might be mistaken about his belief that I have a wife, or he might be deliberately trying to misinform his audience, and this has an effect on the meaning of the second sentence, but, perhaps surprisingly, not on the first one. Thus, this seems to be a property of the main verbs of the sentences, think and say, respectively. After work by Lauri Karttunen,[2][3] verbs that allow presuppositions to "pass up" to the whole sentence ("project") are called holes, and verbs that block such passing up, or projection, of presuppositions are called plugs.

Some linguistic environments are intermediate between plugs and holes: they block some presuppositions and allow others to project. These are called filters. One example of such an environment is the indicative conditional ("if-then" clause). A conditional sentence contains an antecedent and a consequent. The antecedent is the part preceded by the word "if", and the consequent is the part that is (or could be) preceded by "then". If the consequent contains a presupposition trigger, and the triggered presupposition is explicitly stated in the antecedent of the conditional, then the presupposition is blocked. Otherwise, it is allowed to project up to the entire conditional. Here is an example:

Here, the presupposition (that I have a wife) triggered by the expression my wife is blocked, because it is stated in the antecedent of the conditional: that sentence does not imply that I have a wife. In the following example, it is not stated in the antecedent, so it is allowed to project, i.e. the sentence does imply that I have a wife. Hence, conditional sentences act as filters for presuppositions that are triggered by expressions in their consequent.

A significant amount of current work in semantics and pragmatics is devoted to a proper understanding of when and how presuppositions project.
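The filtering rule for conditionals described above is essentially procedural: a presupposition triggered in the consequent projects unless the antecedent already states it. The following is a minimal sketch of that rule; the function name and the proposition strings are invented for illustration, and presuppositions are crudely modeled as plain strings rather than analyzed linguistically.

```python
# Toy model of Karttunen-style filtering in conditionals: a presupposition
# triggered in the consequent projects to the whole sentence unless it is
# explicitly stated in the antecedent. All names here are illustrative.

def projected_presuppositions(antecedent_propositions, consequent_presuppositions):
    """Return the presuppositions that project up to the entire conditional."""
    return [p for p in consequent_presuppositions
            if p not in antecedent_propositions]

# "If I have a wife, then my wife is pregnant."
blocked = projected_presuppositions(
    antecedent_propositions={"the speaker has a wife"},
    consequent_presuppositions=["the speaker has a wife"],
)
print(blocked)  # [] -- the presupposition is filtered out

# "If it is raining, then my wife is pregnant."
projects = projected_presuppositions(
    antecedent_propositions={"it is raining"},
    consequent_presuppositions=["the speaker has a wife"],
)
print(projects)  # ['the speaker has a wife'] -- the presupposition projects
```

This captures only the filter behavior for conditionals; holes and plugs would correspond to passing the consequent's presuppositions through unchanged or discarding them entirely.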
A presupposition trigger is a lexical item or linguistic construction which is responsible for the presupposition, and thus "triggers" it.[4] The following is a selection of presuppositional triggers following Stephen C. Levinson's classic textbook on Pragmatics, which in turn draws on a list produced by Lauri Karttunen. As is customary, the presuppositional triggers themselves are italicized, and the symbol » stands for 'presupposes'.[5]

Definite descriptions are phrases of the form "the X", where X represents a noun phrase. The description is said to be proper when the phrase applies to exactly one object, and conversely, it is said to be improper when there is either more than one potential referent, as in "the senator from Ohio", or none at all, as in "the king of France". In conventional speech, definite descriptions are implicitly assumed to be proper; hence such phrases trigger the presupposition that the referent is unique and existent.

In Western epistemology, there is a tradition originating with Plato of defining knowledge as justified true belief. On this definition, for someone to know X, it is required that X be true. A linguistic question thus arises regarding the usage of such phrases: does a person who states "John knows X" implicitly claim the truth of X? Steven Pinker explored this question in a popular science format in a 2007 book on language and cognition, using a widely publicized example from a speech by a U.S. president.[6] A 2003 speech by George W. Bush included the line, "British Intelligence has learned that Saddam Hussein recently sought significant quantities of uranium from Africa."[7] Over the next few years, it became apparent that this intelligence lead was incorrect. But the way the speech was phrased, using a factive verb, implicitly framed the lead as truth rather than hypothesis.
There is, however, a strong alternative view that the factivity thesis, the proposition that relational predicates having to do with knowledge (such as knows, learns, remembers, and realizes) presuppose the factual truth of their object, is incorrect.[8]

Some further factive predicates: know; be sorry that; be proud that; be indifferent that; be glad that; be sad that.

Some further implicative predicates: X happened to V » X didn't plan or intend to V; X avoided Ving » X was expected to, or usually did, or ought to V; etc. With these presupposition triggers, the current unfolding situation is considered presupposed information.[9]

Some further change-of-state verbs: start; finish; carry on; cease; take (as in X took Y from Z » Y was at/in/with Z); leave; enter; come; go; arrive; etc. These types of triggers presuppose the existence of a previous state of affairs.[9]

Further iteratives: another time; to come back; restore; repeat; for the nth time.

The situation described in a clause that begins with a temporal clause constructor is typically considered backgrounded information.[9] Further temporal clause constructors: after; during; whenever; as (as in As John was getting up, he slipped).

Cleft sentence structures highlight particular aspects of a sentence and treat the surrounding information as backgrounded knowledge. These sentences are typically not spoken to strangers, but rather to addressees who are aware of the ongoing situation.[9]

Comparisons and contrasts may be marked by stress (or by other prosodic means), by particles like "too", or by comparative constructions.

Questions often presuppose what the assertive part of the question presupposes, but interrogative parts might introduce further presuppositions. There are three different types of questions: yes/no questions, alternative questions, and WH-questions.
A presupposition of a sentence must normally be part of the common ground of the utterance context (the shared knowledge of the interlocutors) in order for the sentence to be felicitous. Sometimes, however, sentences may carry presuppositions that are not part of the common ground and nevertheless be felicitous. For example, upon being introduced to someone, I can out of the blue explain that my wife is a dentist, without my addressee ever having heard, or having any reason to believe, that I have a wife. In order to interpret my utterance, the addressee must assume that I have a wife. This process of an addressee assuming that a presupposition is true, even in the absence of explicit information that it is, is usually called presupposition accommodation. We have just seen that presupposition triggers like my wife (definite descriptions) allow for such accommodation.

In "Presupposition and Anaphora: Remarks on the Formulation of the Projection Problem",[10] the philosopher Saul Kripke noted that some presupposition triggers do not seem to permit such accommodation. An example is the presupposition trigger too. This word triggers the presupposition that, roughly, something parallel to what is stated has happened. For example, if pronounced with emphasis on John, the following sentence triggers the presupposition that somebody other than John had dinner in New York last night.

But that presupposition, as stated, is completely trivial, given what we know about New York. Several million people had dinner in New York last night, and that in itself doesn't satisfy the presupposition of the sentence. What is needed for the sentence to be felicitous is really that somebody relevant to the interlocutors had dinner in New York last night, and that this has been mentioned in the previous discourse, or that this information can be recovered from it. Presupposition triggers that disallow accommodation are called anaphoric presupposition triggers.
Critical discourse analysis (CDA) is a broad field of study that does not belong to any single research category. It focuses on identifying presuppositions of an abstract nature from varying perspectives. CDA is considered critical not only in the sense of being analytical, but also in the ideological sense.[11] Through the analysis of written texts and verbal speech, Teun A. van Dijk (2003) says, CDA studies power imbalances existing in both the conversational and political spectrum.[11] With the purpose of first identifying and then tackling inequality in society, van Dijk describes CDA as a nonconformist piece of work.[11] One notable feature of ideological presuppositions researched in CDA is a concept termed synthetic personalisation.[12]

To describe a presupposition in the context of propositional calculus and truth-bearers, Belnap defines: "A sentence is a presupposition of a question if the truth of the sentence is a necessary condition of the question's having some true answer." Then, referring to the semantic theory of truth, interpretations are used to formulate a presupposition: "Every interpretation which makes the question truly answerable is an interpretation which makes the presupposed sentence true as well." A sentence that expresses a presupposition in a question may be characterized as follows: the question has some true answer if and only if the sentence is true.[13]
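Belnap's definition can be checked mechanically in a small propositional setting: enumerate all interpretations (truth assignments to the atoms) and verify that the candidate sentence holds in every interpretation under which some answer to the question is true. The sketch below is illustrative only; the atom names, the answer set for the "king of France" question, and the function name are all invented for this example.

```python
from itertools import product

# Toy propositional check of Belnap's definition: a sentence is a
# presupposition of a question iff every interpretation that makes the
# question truly answerable also makes the sentence true.

def is_presupposition_of(answers, sentence, atoms):
    """answers: list of functions interpretation -> bool (possible answers).
    sentence: function interpretation -> bool (candidate presupposition).
    atoms: list of atom names; interpretations are dicts atom -> bool."""
    for values in product([False, True], repeat=len(atoms)):
        interp = dict(zip(atoms, values))
        # If some answer is true here but the sentence is false,
        # the sentence is not a necessary condition of answerability.
        if any(ans(interp) for ans in answers) and not sentence(interp):
            return False
    return True

# Question: "Is the king of France bald?" Both possible answers require
# that a king of France exists.
atoms = ["king_exists", "king_bald"]
answers = [
    lambda i: i["king_exists"] and i["king_bald"],      # "Yes, he is bald."
    lambda i: i["king_exists"] and not i["king_bald"],  # "No, he is not."
]

print(is_presupposition_of(answers, lambda i: i["king_exists"], atoms))  # True
print(is_presupposition_of(answers, lambda i: i["king_bald"], atoms))    # False
```

The question presupposes that a king exists (every true answer requires it) but does not presuppose his baldness, since one of the answers is true in an interpretation where he is not bald.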
https://en.wikipedia.org/wiki/Presupposition
Don Quixote,[a][b] the full title being The Ingenious Gentleman Don Quixote of La Mancha,[c] is a Spanish novel by Miguel de Cervantes. The novel, originally published in two parts in 1605 and 1615, is considered a founding work of Western literature. It is often said to be the first modern novel.[2][3] The novel has been labelled by many well-known authors as the "best novel of all time"[d] and the "best and most central work in world literature".[5][4] Don Quixote is also one of the most-translated books in the world[6] and one of the best-selling novels of all time.

The plot revolves around the adventures of a member of the lowest nobility, an hidalgo[e] from La Mancha named Alonso Quijano, who reads so many chivalric romances that he loses his mind and decides to become a knight-errant (caballero andante) to revive chivalry and serve his nation, under the name Don Quixote de la Mancha.[b] He recruits as his squire a simple farm labourer, Sancho Panza, who brings an earthy wit to Don Quixote's lofty rhetoric. In the first part of the book, Don Quixote does not see the world for what it is and prefers to imagine that he is living out a knightly story meant for the annals of all time.
However, as Salvador de Madariaga pointed out in his Guía del lector del Quijote (1972 [1926]),[7] referring to "the Sanchification of Don Quixote and the Quixotization of Sancho", "Sancho's spirit ascends from reality to illusion, Don Quixote's declines from illusion to reality".[8]

The book had a major influence on the literary community, as evidenced by direct references in Alexandre Dumas's The Three Musketeers (1844)[9] and Edmond Rostand's Cyrano de Bergerac (1897),[10] as well as the word quixotic. Mark Twain referred to the book as having "swept the world's admiration for the mediaeval chivalry-silliness out of existence".[11][f] It has been described by some as the greatest work ever written.[12][13]

For Cervantes and the readers of his day, Don Quixote was a one-volume book published in 1605, divided internally into four parts, not the first part of a two-part set. The mention in the 1605 book of further adventures yet to be told was totally conventional, did not indicate any authorial plans for a continuation, and was not taken seriously by the book's first readers.[14]

Cervantes, in a metafictional narrative, writes that the first few chapters were taken from "the archives of La Mancha", and the rest were translated from an Arabic text by the Moorish historian Cide Hamete Benengeli.

Alonso Quixano is a hidalgo nearing 50 years of age who lives in a deliberately unspecified region of La Mancha with his niece and housekeeper. While he lives a frugal life, he is full of fantasies about chivalry stemming from his obsession with chivalric romance books. Eventually, his obsession becomes madness when he decides to become a knight errant, donning an old suit of armor. He renames himself "Don Quixote", names his old workhorse "Rocinante", and designates Aldonza Lorenzo (a slaughterhouse worker with a famed hand for salting pork) his lady love, renaming her Dulcinea del Toboso.
As he travels in search of adventure, he arrives at an inn that he believes to be a castle, calls the prostitutes he meets there "ladies", and demands that the innkeeper, whom he takes to be the lord of the castle, dub him a knight. The innkeeper agrees. Quixote starts the night holding vigil at the inn's horse trough, which Quixote imagines to be a chapel. He then becomes involved in a fight with muleteers who try to remove his armor from the horse trough to water their mules. In a pretend ceremony, the innkeeper dubs him a knight to be rid of him and sends him on his way.

Quixote next encounters a servant named Andres who is tied to a tree and being beaten by his master over disputed wages. Quixote orders the master to stop the beating, untie Andres, and swear to treat his servant fairly. However, the beating is resumed, and redoubled, as soon as Quixote leaves.

Quixote then chances upon traders from Toledo. He demands that they agree that Dulcinea del Toboso is the most beautiful woman in the world. One of them demands to see her picture so that he can decide for himself. Enraged, Quixote charges at them, but his horse stumbles, causing him to fall. One of the traders beats up Quixote, who is left at the side of the road until a neighboring peasant brings him back home.

While Quixote lies unconscious in his bed, his niece, the housekeeper, the parish curate, and the local barber burn most of his chivalric and other books, seeing them as the root of his madness. They seal up the library room, later telling Quixote that it was done by a wizard. Don Quixote asks his neighbour, the poor farm labourer Sancho Panza, to be his squire, promising him a petty governorship. Sancho agrees and they sneak away at dawn. Their adventures together begin with Quixote's attack on some windmills which he believes to be ferocious giants. They next encounter two Benedictine friars and, nearby, an unrelated lady in a carriage.
Quixote takes the friars to be enchanters who are holding the lady captive, knocks one of them from his horse, and is challenged by an armed Basque travelling with the company. The combat ends with the lady leaving her carriage and begging him not to harm the Basque. After a friendly encounter with some goatherds and a less friendly one with some Yanguesan porters driving Galician ponies, Quixote and Sancho enter an inn owned by Juan Palomeque, where a mix-up involving a servant girl's romantic rendezvous with another guest results in a brawl. Quixote explains to Sancho that the inn is enchanted. They decide to leave, but Quixote, following the example of the fictional knights, leaves without paying. Sancho ends up wrapped in a blanket and tossed in the air by several mischievous guests at the inn before he manages to follow.

After further adventures involving a dead body, a barber's basin that Quixote imagines as the legendary helmet of Mambrino, and a group of galley slaves, they wander into the Sierra Morena. There they encounter the dejected and mostly mad Cardenio, who relates his story. Inspired by Cardenio, Quixote decides to imitate what he has read in his chivalric romances and live like a hermit in a display of devotion to Dulcinea. He sends Sancho to deliver a letter to Dulcinea, but instead Sancho finds the barber and priest from his village. They make a plan to trick Quixote into coming home, recruiting Dorotea, a woman they discover in the forest, to pose as the Princess Micomicona, a damsel in distress. The plan works, and Quixote and the group return to the inn, though Quixote is now convinced, thanks to a lie told by Sancho when asked about the letter, that Dulcinea wants to see him.

At the inn, several other plots intersect and are resolved. Meanwhile, a sleepwalking Quixote does battle with some wineskins which he takes to be the giant who stole the princess Micomicona's kingdom.
An officer of the Santa Hermandad arrives with a warrant for Quixote's arrest for freeing the galley slaves, but the priest begs the officer to have mercy on account of Quixote's insanity. The officer agrees, and Quixote is locked in a cage which he is made to think is an enchantment. He has a learned conversation with a Toledo canon he encounters by chance on the road, in which the canon expresses his scorn for untruthful chivalric books, but Don Quixote defends them. The group stops to eat and lets Quixote out of the cage; he gets into a fight with a goatherd and with a group of pilgrims, who beat him into submission, before he is finally brought home. The narrator ends the story by saying that he has found manuscripts of Quixote's further adventures.

Although the two parts are now published as a single work, Don Quixote, Part Two was a sequel published ten years after the original novel. In an early example of metafiction, Part Two indicates that several of its characters have read the first part of the novel and are thus familiar with the history and peculiarities of the two protagonists.

Don Quixote and Sancho are on their way to El Toboso to meet Dulcinea, with Sancho aware that his story about Dulcinea was a complete fabrication. They reach the city at daybreak and decide to enter at nightfall. However, a bad omen frightens Quixote into retreat, and they quickly leave. Sancho is instead sent out alone by Quixote to meet Dulcinea and act as a go-between. Sancho's luck brings three peasant girls along the road, and he quickly tells Quixote that they are Dulcinea and her ladies-in-waiting, and as beautiful as ever. Since Quixote only sees the peasant girls, Sancho goes on to pretend that an enchantment of some sort is at work.

A duke and duchess encounter the duo. These nobles have read Part One of the story and are themselves very fond of books of chivalry.
They decide to play along for their own amusement, beginning a string of imagined adventures and practical jokes. As part of one prank, Quixote and Sancho are led to believe that the only way to release Dulcinea from her spell is for Sancho to give himself three thousand three hundred lashes. Sancho naturally resists this course of action, leading to friction with his master. Under the duke's patronage, Sancho eventually gets his promised governorship, though it is false, and he proves to be a wise and practical ruler before all ends in humiliation. Near the end, Don Quixote reluctantly sways towards sanity.

Quixote battles the Knight of the White Moon (a young man from Quixote's hometown who had earlier posed as the Knight of Mirrors) on the beach in Barcelona. Defeated, Quixote submits to prearranged chivalric terms: the vanquished must obey the will of the conqueror. He is ordered to lay down his arms and cease his acts of chivalry for a period of one year, by which time his friends and relatives hope he will be cured. On the way back home, Quixote and Sancho "resolve" the disenchantment of Dulcinea.

Upon returning to his village, Quixote announces his plan to retire to the countryside as a shepherd, but his housekeeper urges him to stay at home. Soon after, he retires to his bed with a deathly illness, and later awakes from a dream, having fully become Alonso Quixano once more. Sancho tries to restore his faith and his interest in Dulcinea, but Quixano only renounces his previous ambition and apologizes for the harm he has caused. He dictates his will, which includes a provision that his niece will be disinherited if she marries a man who reads books of chivalry. After Quixano dies, the author emphasizes that there are no more adventures to relate and that any further books about Don Quixote would be spurious.
Don Quixote, Part One contains a number of stories which do not directly involve the two main characters, but which are narrated by some of the picaresque figures encountered by the Don and Sancho during their travels. The longest and best known of these is "El Curioso Impertinente" (The Ill-Advised Curiosity), found in Part One, Book Four. This story, read to a group of travelers at an inn, tells of a Florentine nobleman, Anselmo, who becomes obsessed with testing his wife's fidelity and talks his close friend Lothario into attempting to seduce her, with disastrous results for all.

In Part Two, the author acknowledges the criticism of his digressions in Part One and promises to concentrate the narrative on the central characters (although at one point he laments that his narrative muse has been constrained in this manner). Nevertheless, Part Two contains several back narratives related by peripheral characters. Several abridged editions have been published which delete some or all of the extra tales in order to concentrate on the central narrative.[15]

The story within a story relates that, for no particular reason, Anselmo decides to test the fidelity of his wife, Camilla, and asks his friend, Lothario, to seduce her. Thinking that to be madness, Lothario reluctantly agrees, and soon reports to Anselmo that Camilla is a faithful wife. Anselmo learns that Lothario has lied and attempted no seduction. He makes Lothario promise to try in earnest and leaves town to make this easier. Lothario tries, and Camilla writes letters to her husband telling him of the attempts by Lothario and asking him to return. Anselmo makes no reply and does not return. Lothario then falls in love with Camilla, who eventually reciprocates; an affair between them ensues, but it is not disclosed to Anselmo, and their affair continues after Anselmo returns. One day, Lothario sees a man leaving Camilla's house and jealously presumes she has taken another lover.
He tells Anselmo that, at last, he has been successful and arranges a time and place for Anselmo to see the seduction. Before this rendezvous, however, Lothario learns that the man was the lover of Camilla's maid. He and Camilla then contrive to deceive Anselmo further: when Anselmo watches them, she refuses Lothario, protests her love for her husband, and stabs herself lightly in the breast. Anselmo is reassured of her fidelity. The affair restarts with Anselmo none the wiser.

Later, the maid's lover is discovered by Anselmo. Fearing that Anselmo will kill her, the maid says she will tell Anselmo a secret the next day. Anselmo tells Camilla that this is to happen, and Camilla expects that her affair is to be revealed. Lothario and Camilla flee that night. The maid flees the next day. Anselmo searches for them in vain before learning from a stranger of his wife's affair. He starts to write the story, but dies of grief before he can finish. Lothario is killed in battle soon afterward, and Camilla dies of grief.

The novel's farcical elements make use of punning and similar verbal playfulness. Character-naming in Don Quixote makes ample figural use of contradiction, inversion, and irony, such as the names Rocinante[16] (a reversal) and Dulcinea (an allusion to illusion), and the word quixote itself, possibly a pun on quijada (jaw) but certainly[17][18] cuixot (Catalan: thighs), a reference to a horse's rump.[19]

As a military term, the word quijote refers to cuisses, part of a full suit of plate armour protecting the thighs. The Spanish suffix -ote denotes the augmentative; for example, grande means large, but grandote means extra large, with grotesque connotations. Following this example, Quixote would suggest "The Great Quijano", an oxymoronic play on words that makes much sense in light of the character's delusions of grandeur.[20]

Cervantes wrote his work in Early Modern Spanish, heavily borrowing from Old Spanish, the medieval form of the language.
The language of Don Quixote, although still containing archaisms, is far more understandable to modern Spanish readers than is, for instance, the completely medieval Spanish of the Poema de mio Cid, a kind of Spanish that is as different from Cervantes' language as Middle English is from Modern English. The Old Castilian language was also used to show the higher class that came with being a knight errant.

In Don Quixote, there are basically two different types of Castilian: Old Castilian is spoken only by Don Quixote, while the rest of the roles speak a contemporary (late 16th century) version of Spanish. The Old Castilian of Don Quixote is a humoristic resource: he copies the language spoken in the chivalric books that drove him to madness, and many times when he talks nobody is able to understand him because his language is too old. This humorous effect is more difficult to see nowadays because the reader must be able to distinguish the two old versions of the language, but when the book was published it was much celebrated. (English translations can get some sense of the effect by having Don Quixote use King James Bible or Shakespearean English, or even Middle English.)[21][22]

In Old Castilian, the letter x represented the sound written sh in modern English, so the name was originally pronounced [kiˈʃote]. However, as Old Castilian evolved towards modern Spanish, a sound change caused it to be pronounced with a voiceless velar fricative [x] sound (like the Scots or German ch), and today the Spanish pronunciation of "Quixote" is [kiˈxote]. The original pronunciation is reflected in languages such as Asturian, Leonese, Galician, Catalan, Italian, Portuguese, Turkish and French, where it is pronounced with a "sh" or "ch" sound; the French opera Don Quichotte is one of the best-known modern examples of this pronunciation.
Today, English speakers generally attempt something close to the modern Spanish pronunciation of Quixote (Quijote), as /kiːˈhoʊti/,[1] although the traditional English spelling-based pronunciation with the value of the letter x in modern English is still sometimes used, resulting in /ˈkwɪksət/ or /ˈkwɪksoʊt/. In Australian English, the preferred pronunciation amongst members of the educated classes was /ˈkwɪksət/ until well into the 1970s, as part of a tendency for the upper class to "anglicise its borrowing ruthlessly".[23] The traditional English rendering is preserved in the pronunciation of the adjectival form quixotic, i.e., /kwɪkˈsɒtɪk/,[24][25] defined by Merriam-Webster as the foolishly impractical pursuit of ideals, typically marked by rash and lofty romanticism.[26]

Harold Bloom says Don Quixote is the first modern novel, and that the protagonist is at war with Freud's reality principle, which accepts the necessity of dying. Bloom says that the novel has an endless range of meanings, but that a recurring theme is the human need to withstand suffering.[27]

Edith Grossman, who wrote and published a highly acclaimed[28] English translation of the novel in 2003, says that the book is mostly meant to move people into emotion using a systematic change of course, on the verge of both tragedy and comedy at the same time. Grossman has stated:

The question is that Quixote has multiple interpretations [...] and how do I deal with that in my translation. I'm going to answer your question by avoiding it [...] so when I first started reading the Quixote I thought it was the most tragic book in the world, and I would read it and weep [...] As I grew older [...] my skin grew thicker [...] and so when I was working on the translation I was actually sitting at my computer and laughing out loud. This is done [...] as Cervantes did it [...] by never letting the reader rest. You are never certain that you truly got it.
Because as soon as you think you understand something, Cervantes introduces something that contradicts your premise.[29]

The novel's structure is episodic in form. The full title is indicative of the tale's object, as ingenioso (Spanish) means "quick with inventiveness",[30] marking the transition of modern literature from dramatic to thematic unity. The novel takes place over a long period of time, including many adventures united by common themes of the nature of reality, reading, and dialogue in general.

Although burlesque on the surface, the novel, especially in its second half, has served as an important thematic source not only in literature but also in much of art and music, inspiring works by Pablo Picasso and Richard Strauss. The contrast between the tall, thin, fancy-struck and idealistic Quixote and the fat, squat, world-weary Panza is a motif echoed ever since the book's publication, and Don Quixote's imaginings are the butt of outrageous and cruel practical jokes in the novel. Even faithful and simple Sancho is forced to deceive him at certain points.

The novel is considered a satire of orthodoxy, veracity and even nationalism.[citation needed] In exploring the individualism of his characters, Cervantes helped lead literary practice beyond the narrow convention of the chivalric romance. He spoofs the chivalric romance[31] through a straightforward retelling of a series of acts that redound to the knightly virtues of the hero. The character of Don Quixote became so well known in its time that the word quixotic was quickly adopted by many languages. Characters such as Sancho Panza and Don Quixote's steed, Rocinante, are emblems of Western literary culture. The phrase "tilting at windmills", used to describe an act of attacking imaginary enemies (or an act of extreme idealism), derives from an iconic scene in the book. It stands in a unique position between medieval romance and the modern novel.
The former consists of disconnected stories featuring the same characters and settings with little exploration of the inner life of even the main character. The latter are usually focused on the psychological evolution of their characters. In Part I, Quixote imposes himself on his environment. By Part II, people know about him through "having read his adventures", and so he needs to do less to maintain his image. On his deathbed, he has regained his sanity and is once more "Alonso Quixano the Good". The cave of Medrano[32] (also known as the casa de Medrano) in Argamasilla de Alba has been known since the beginning of the 17th century and, according to the tradition of Argamasilla de Alba, was the prison of Miguel de Cervantes and the place where he conceived and began to write his famous work, Don Quixote de la Mancha.[33][34][35][36][37][38][39] Sources for Don Quixote include the Castilian novel Amadis de Gaula, which had enjoyed great popularity throughout the 16th century. Another prominent source, which Cervantes evidently admires more, is Tirant lo Blanch, which the priest describes in Chapter VI of Quixote as "the best book in the world." (However, the sense in which it was "best" is much debated among scholars. Since the 19th century, the passage has been called "the most difficult passage of Don Quixote".) The scene of the book burning provides a list of Cervantes's likes and dislikes about literature. Cervantes makes a number of references to the Italian poem Orlando furioso.
In chapter 10 of the first part of the novel, Don Quixote says he must take the magical helmet ofMambrino, an episode from Canto I ofOrlando, and itself a reference toMatteo Maria Boiardo'sOrlando innamorato.[40]The interpolated story in chapter 33 of Part four of the First Part is a retelling of a tale from Canto 43 ofOrlando, regarding a man who tests the fidelity of his wife.[41] Another important source appears to have been Apuleius'sThe Golden Ass, one of the earliest known novels, a picaresque from late classical antiquity. The wineskins episode near the end of the interpolated tale "The Curious Impertinent" in chapter 35 of the first part ofDon Quixoteis a clear reference to Apuleius, and recent scholarship suggests that the moral philosophy and the basic trajectory of Apuleius's novel are fundamental to Cervantes' program.[42]Similarly, many of both Sancho's adventures in Part II and proverbs throughout are taken from popular Spanish and Italian folklore. Cervantes' experiences as agalley slavein Algiers also influencedQuixote.[43] Medical theories may have also influenced Cervantes' literary process. Cervantes had familial ties to the distinguished medical community. His father, Rodrigo de Cervantes, and his great-grandfather, Juan Díaz de Torreblanca, were surgeons. Additionally, his sister, Andrea de Cervantes, was a nurse.[44]He also befriended many individuals involved in the medical field, in that he knew medical author Francisco Díaz, an expert in urology, and royal doctorAntonio Ponce de Santa Cruzwho served as a personal doctor to both Philip III and Philip IV of Spain.[45] Apart from the personal relations Cervantes maintained within the medical field, Cervantes' personal life was defined by an interest in medicine. He frequently visited patients from the Hospital de Inocentes in Sevilla.[44]Furthermore, Cervantes explored medicine in his personal library. 
His library contained more than 200 volumes and included books like Examen de Ingenios by Juan Huarte and Practica y teórica de cirugía by Dionisio Daza Chacón, works that defined the medical literature and medical theories of his time.[45] Researchers Isabel Sanchez Duque and Francisco Javier Escudero have found that Cervantes was a friend of the Villaseñor family, which was involved in a fight with Francisco de Acuña. Both sides fought disguised as medieval knights on the road from El Toboso to Miguel Esteban in 1581. They also found a person called Rodrigo Quijada, who bought the title of nobility of "hidalgo" and created diverse conflicts with the help of a squire.[46][47] It is not certain when Cervantes began writing Part Two of Don Quixote, but he had probably not proceeded much further than Chapter LIX by late July 1614. In about September, however, a spurious Part Two, entitled Second Volume of the Ingenious Gentleman Don Quixote of La Mancha: by the Licenciado (doctorate) Alonso Fernández de Avellaneda, of Tordesillas, was published in Tarragona by an unidentified Aragonese who was an admirer of Lope de Vega, rival of Cervantes.[48] It was translated into English by William Augustus Yardley, Esquire, in two volumes in 1784. Some modern scholars suggest that Don Quixote's fictional encounter with Avellaneda's book in Chapter 59 of Part II should not be taken as the date that Cervantes encountered it, which may have been much earlier. Avellaneda's identity has been the subject of many theories, but there is no consensus as to who he was. In its prologue, the author gratuitously insulted Cervantes, who took offense and responded; the last half of Chapter LIX and most of the following chapters of Cervantes's Segunda Parte lend some insight into the effects upon him. Cervantes manages to work in some subtle digs at Avellaneda's own work, and in his preface to Part II comes very near to criticizing Avellaneda directly.
In his introduction toThe Portable Cervantes,Samuel Putnam, a noted translator of Cervantes' novel, calls Avellaneda's version "one of the most disgraceful performances in history".[49] The second part of Cervantes'Don Quixote, finished as a direct result of the Avellaneda book, has come to be regarded by some literary critics[50]as superior to the first part, because of its greater depth of characterization, its discussions, mostly between Quixote and Sancho, on diverse subjects, and its philosophical insights. In Cervantes'sSegunda Parte, Don Quixote visits a printing-house in Barcelona and finds Avellaneda'sSecond Partbeing printed there, in an early example ofmetafiction.[51]Don Quixote and Sancho Panza also meet one of the characters from Avellaneda's book, Don Alvaro Tarfe, and make him swear that the "other" Quixote and Sancho are impostors.[52] Cervantes' story takes place on the plains ofLa Mancha, specifically thecomarcaofCampo de Montiel. En un lugar de La Mancha, de cuyo nombre no quiero acordarme, no ha mucho tiempo que vivía un hidalgo de los de lanza en astillero, adarga antigua, rocín flaco y galgo corredor.(Somewhere in La Mancha, in a place whose name I do not care to remember, a gentleman lived not long ago, one of those who has a lance and ancient shield on a shelf and keeps a skinny nag and a greyhound for racing.) The location of the village to which Cervantes alludes in the opening sentence ofDon Quixotehas been the subject of debate since its publication over four centuries ago. Indeed, Cervantes deliberately omits the name of the village, giving an explanation in the final chapter: Such was the end of the Ingenious Gentleman of La Mancha, whose village Cide Hamete would not indicate precisely, in order to leave all the towns and villages of La Mancha to contend among themselves for the right to adopt him and claim him as a son, as the seven cities of Greece contended for Homer. 
In 2004, a team of academics fromComplutense University, led by Francisco Parra Luna, Manuel Fernández Nieto, and Santiago Petschen Verdaguer, deduced that the village was that ofVillanueva de los Infantes.[53]Their findings were published in a paper titled "'El Quijote' como un sistema de distancias/tiempos: hacia la localización del lugar de la Mancha", which was later published as a book:El enigma resuelto del Quijote. The result was replicated in two subsequent investigations: "La determinación del lugar de la Mancha como problema estadístico" and "The Kinematics of the Quixote and the Identity of the 'Place in La Mancha'".[54][55] Translators ofDon Quixote, such asJohn Ormsby,[56]have commented that the region ofLa Manchais one of the most desertlike, unremarkable regions of Spain, the least romantic and fanciful place that one would imagine as the home of a courageous knight. On the other hand, as Borges points out: I suspect that inDon Quixote, it does not rain a single time. The landscapes described by Cervantes have nothing in common with the landscapes of Castile: they are conventional landscapes, full of meadows, streams, and copses that belong in an Italian novel. The story also takes place inEl Tobosowhere Don Quixote goes to seekDulcinea's blessings. 
Don Quixote is said to reflect the Spanish society in which Cervantes lived and wrote.[58] Spain's status as a world power was declining, and the Spanish national treasury was bankrupt due to expensive foreign wars.[58] Spanish cultural dominance was also waning, as the Protestant Reformation had put the Spanish Roman Catholic Church on the defensive, which had led to the establishment of the Spanish Inquisition.[58] Meanwhile, the hidalgo class was losing relevance because of changes in Spanish society that made the high ideals of chivalry obsolete.[58] In 2002, the Norwegian Nobel Institute conducted a study among writers from 55 countries in which the majority voted Don Quixote "the greatest work of fiction ever written".[59] The opening sentence of the book created a classic Spanish cliché with the phrase de cuyo nombre no quiero acordarme ("whose name I do not wish to recall"):[60] En un lugar de la Mancha, de cuyo nombre no quiero acordarme, no ha mucho tiempo que vivía un hidalgo de los de lanza en astillero, adarga antigua, rocín flaco y galgo corredor.[61] ("In a village of La Mancha, whose name I do not wish to recall, there lived, not very long ago, one of those gentlemen with a lance in the lance-rack, an ancient shield, a skinny old horse, and a fast greyhound.")[62] Don Quixote, alongside its many translations, has also provided a number of idioms and expressions to the English language. Examples with their own articles include the phrase "the pot calling the kettle black" and the adjective "quixotic".[63][64] Tilting at windmills is an English idiom that means "attacking imaginary enemies". The expression is derived from Don Quixote, and the word "tilt" in this context refers to jousting.
This phrase is sometimes also expressed as "charging at windmills" or "fighting the windmills".[65] The phrase is sometimes used to describe either confrontations where adversaries are incorrectly perceived, or courses of action that are based on misinterpreted or misapplied heroic, romantic, or idealistic justifications.[66]It may also connote an inopportune, unfounded, and vain effort against adversaries real or imagined.[67] Dulcibella, a deep-sea amphipod species, was named after the character Dulcinea in the novel, following the tradition of naming amphipods after literary figures. In July 1604, Cervantes sold the rights ofEl ingenioso hidalgo don Quixote de la Mancha(known asDon Quixote, Part I) to the publisher-booksellerFrancisco de Roblesfor an unknown sum.[68]License to publish was granted in September, the printing was finished in December, and the book came out on 16 January 1605.[69][70] The novel was an immediate success. Most of the 400 copies of the firsteditionwere sent to theNew World, with the publisher hoping to get a better price in the Americas.[71]Although most of them disappeared in a shipwreck nearLa Havana, approximately 70 copies reachedLima, from where they were sent toCuzco, in the heart of the defunctInca Empire.[71] No sooner was it in the hands of the public than preparations were made to issue derivative (pirated) editions. In 1614 a fake second part was published by a mysterious author under the pen name Avellaneda. This author was never satisfactorily identified. This rushed Cervantes into writing and publishing a genuine second part in 1615, which was a year before his own death.[51]Don Quixotehad been growing in favour, and its author's name was now known beyond thePyrenees. By August 1605, there were two Madrid editions, two published in Lisbon, and one inValencia. 
Publisher Francisco de Robles secured additional copyrights for Aragon and Portugal for a second edition.[72] Sale of these publishing rights deprived Cervantes of further financial profit on Part One. In 1607, an edition was printed in Brussels. Robles, the Madrid publisher, found it necessary to meet demand with a third edition, a seventh publication in all, in 1608. Popularity of the book in Italy was such that a Milan bookseller issued an Italian edition in 1610. Yet another Brussels edition was called for in 1611.[70] Since then, numerous editions have been released; in total, the novel is believed to have sold more than 500 million copies worldwide.[73] The work has been produced in numerous editions and languages; the Cervantes Collection at the State Library of New South Wales includes over 1,100 editions, collected by Ben Haneman over a period of thirty years.[74] In 1613, Cervantes published the Novelas ejemplares, dedicated to the Maecenas of the day, the Conde de Lemos. Eight and a half years after Part One had appeared came the first hint of a forthcoming Segunda Parte (Part Two). "You shall see shortly", Cervantes says, "the further exploits of Don Quixote and humours of Sancho Panza."[75] Don Quixote, Part Two, published by the same press as its predecessor, appeared late in 1615 and was quickly reprinted in Brussels and Valencia (1616) and Lisbon (1617). Parts One and Two were published as one edition in Barcelona in 1617. Historically, Cervantes' work has been said to have "smiled Spain's chivalry away", suggesting that Don Quixote as a chivalric satire contributed to the demise of Spanish chivalry.[76] There are many translations of the book, and it has been adapted many times in shortened versions. Many derivative editions were also written at the time, as was the custom of envious or unscrupulous writers.
Seven years after theParte Primeraappeared,Don Quixotehad been translated into French, German, Italian, and English, with the first French translation of 'Part II' appearing in 1618, and the first English translation in 1620. One abridged adaptation, authored by Agustín Sánchez, runs slightly over 150 pages, cutting away about 750 pages.[77] Thomas Shelton's English translation of theFirst Partappeared in 1612 while Cervantes was still alive, although there is no evidence that Shelton had met the author. Although Shelton's version is cherished by some, according toJohn OrmsbyandSamuel Putnam, it was far from satisfactory as a carrying over of Cervantes' text.[72]Shelton's translation of the novel'sSecond Partappeared in 1620. Near the end of the 17th century,John Phillips, a nephew of poetJohn Milton, published what Putnam considered the worst English translation. The translation, as literary critics claim, was not based on Cervantes' text but mostly on a French work by Filleau de Saint-Martin and on notes which Thomas Shelton had written. Around 1700, a version byPierre Antoine Motteuxappeared. Motteux's translation enjoyed lasting popularity; it was reprinted as theModern LibrarySeries edition of the novel until recent times.[78]Nonetheless, future translators would find much to fault in Motteux's version: Samuel Putnam criticized "the prevailing slapstick quality of this work, especially whereSancho Panzais involved, the obtrusion of the obscene where it is found in the original, and the slurring of difficulties through omissions or expanding upon the text". John Ormsby considered Motteux's version "worse than worthless", and denounced its "infusion of Cockney flippancy and facetiousness" into the original.[79] The proverb "The proof of the pudding is in the eating" is widely attributed to Cervantes. 
The Spanish word for pudding (budín), however, does not appear in the original text but first appears in the Motteux translation.[80] In Smollett's translation of 1755, he notes that the original text reads literally "you will see when the eggs are fried", meaning "time will tell".[81] A translation by Captain John Stevens, which revised Thomas Shelton's version, also appeared in 1700, but its publication was overshadowed by the simultaneous release of Motteux's translation.[78] In 1742, the Charles Jervas translation appeared, posthumously. Through a printer's error, it came to be known, and is still known, as "the Jarvis translation". It was the most scholarly and accurate English translation of the novel up to that time, but future translator John Ormsby points out in his own introduction to the novel that the Jarvis translation has been criticized as being too stiff. Nevertheless, it became the most frequently reprinted translation of the novel until about 1885. Another 18th-century translation into English was that of Tobias Smollett, himself a novelist, first published in 1755. Like the Jarvis translation, it continues to be reprinted today. A translation by Alexander James Duffield appeared in 1881 and another by Henry Edward Watts in 1888. Most modern translators take as their model the 1885 translation by John Ormsby.[82] An expurgated children's version, under the title The Story of Don Quixote, was published in 1922 (available on Project Gutenberg). It leaves out the risqué sections as well as chapters that young readers might consider dull, and embellishes a great deal on Cervantes' original text. The title page actually gives credit to the two editors as if they were the authors, and omits any mention of Cervantes.[83] The most widely read English-language translations of the mid-20th century are by Samuel Putnam (1949), J. M. Cohen (1950; Penguin Classics), and Walter Starkie (1957).
The last English translation of the novel in the 20th century was by Burton Raffel, published in 1996. The 21st century has already seen six new translations of the novel into English. The first is by John D. Rutherford and the second by Edith Grossman. Reviewing the novel in The New York Times, Carlos Fuentes called Grossman's translation a "major literary achievement",[84] and another reviewer called it the "most transparent and least impeded among more than a dozen English translations going back to the 17th century."[85] In 2005, the year of the novel's 400th anniversary, Tom Lathrop published a new English translation of the novel, based on a lifetime of specialized study of the novel and its history.[86] The fourth translation of the 21st century was released in 2006 by former university librarian James H. Montgomery, 26 years after he had begun it, in an attempt to "recreate the sense of the original as closely as possible, though not at the expense of Cervantes' literary style."[87] In 2011, another translation by Gerald J. Davis appeared, self-published via Lulu.com.[88] The latest and sixth translation of the 21st century is Diana de Armas Wilson's 2020 revision of Burton Raffel's translation. Reviewing 26 of the current 28 English translations as a whole in 2008, Daniel Eisenberg stated that there is no one translation ideal for every purpose, but expressed a preference for those of Putnam and the revision of Ormsby's translation by Douglas and Jones.[89]
https://en.wikipedia.org/wiki/The_proof_of_the_pudding
In logic, reductio ad absurdum (Latin for "reduction to absurdity"), also known as argumentum ad absurdum (Latin for "argument to absurdity") or apagogical argument, is the form of argument that attempts to establish a claim by showing that the opposite scenario would lead to absurdity or contradiction.[1][2][3][4] This argument form traces back to Ancient Greek philosophy and has been used throughout history in both formal mathematical and philosophical reasoning, as well as in debate. In mathematics, the technique is called proof by contradiction. In formal logic, this technique is captured by an axiom for "Reductio ad Absurdum", normally given the abbreviation RAA, which is expressible in propositional logic. This axiom is the introduction rule for negation (see negation introduction). The "absurd" conclusion of a reductio ad absurdum argument can take a range of forms, as these examples show:

- The Earth cannot be flat; otherwise, we would find people falling off the edge.
- There is no smallest positive rational number because, if there were, then it could be divided by two to get a smaller one.

The first example argues that denial of the premise would result in a ridiculous conclusion, against the evidence of our senses (empirical evidence).[5] The second example is a mathematical proof by contradiction (also known as an indirect proof[6]), which argues that the denial of the premise would result in a logical contradiction (there is a "smallest" number and yet there is a number smaller than it).[7] Reductio ad absurdum was used throughout Greek philosophy. The earliest example of a reductio argument can be found in a satirical poem attributed to Xenophanes of Colophon (c. 570 – c. 475 BCE).[8] Criticizing Homer's attribution of human faults to the gods, Xenophanes states that humans also believe that the gods' bodies have human form. But if horses and oxen could draw, they would draw the gods with horse and ox bodies.[9] The gods cannot have both forms, so this is a contradiction. Therefore, the attribution of other human characteristics to the gods, such as human faults, is also false.
Greek mathematicians proved fundamental propositions usingreductio ad absurdum.Euclid of Alexandria(mid-4th – mid-3rd centuries BCE) andArchimedes of Syracuse(c. 287 – c. 212 BCE) are two very early examples.[10] The earlier dialogues ofPlato(424–348 BCE), relating the discourses ofSocrates, raised the use ofreductioarguments to a formal dialectical method (elenchus), also called theSocratic method.[11]Typically, Socrates' opponent would make what would seem to be an innocuous assertion. In response, Socrates, via a step-by-step train of reasoning, bringing in other background assumptions, would make the person admit that the assertion resulted in an absurd or contradictory conclusion, forcing him to abandon his assertion and adopt a position ofaporia.[6] The technique was also a focus of the work ofAristotle(384–322 BCE), particularly in hisPrior Analyticswhere he referred to it as demonstration to the impossible (Ancient Greek:ἡ εἰς τὸ ἀδύνατον ἀπόδειξις,lit.'demonstration to the impossible', 62b).[4] Another example of this technique is found in thesorites paradox, where it was argued that if 1,000,000 grains of sand formed a heap, and removing one grain from a heap left it a heap, then a single grain of sand (or even no grains) forms a heap.[12] Much ofMadhyamakaBuddhist philosophycenters on showing how variousessentialistideas have absurd conclusions throughreductio ad absurdumarguments (known asprasaṅga, "consequence" in Sanskrit). In theMūlamadhyamakakārikā,Nāgārjuna'sreductio ad absurdumarguments are used to show that any theory of substance or essence was unsustainable and therefore, phenomena (dharmas) such as change, causality, and sense perception were empty (sunya) of any essential existence. 
Nāgārjuna's main goal is often seen by scholars as refuting the essentialism of certain BuddhistAbhidharmaschools (mainlyVaibhasika) which posited theories ofsvabhava(essential nature) and also the HinduNyāyaandVaiśeṣikaschools which posited a theory of ontological substances (dravyatas).[13] In 13:5, Nagarjuna wishes to demonstrate consequences of the presumption that things essentially, or inherently, exist, pointing out that if a "young man" exists in himself then it follows he cannot grow old (because he would no longer be a "young man"). As we attempt to separate the man from his properties (youth), we find that everything is subject to momentary change, and are left with nothing beyond the merely arbitrary convention that such entities as "young man" depend upon. Aristotle clarified the connection between contradiction and falsity in hisprinciple of non-contradiction, which states that a proposition cannot be both true and false.[15][16]That is, a propositionQ{\displaystyle Q}and its negation¬Q{\displaystyle \lnot Q}(not-Q) cannot both be true. Therefore, if a proposition and its negation can both be derived logically from a premise, it can be concluded that the premise is false. This technique, known as indirect proof orproof by contradiction,[6]has formed the basis ofreductio ad absurdumarguments in formal fields such aslogicand mathematics.
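The negation-introduction reading of reductio described above (from P → Q and P → ¬Q, infer ¬P) can be checked mechanically with a truth table. The following is a minimal illustrative sketch, not taken from any of the article's sources; the helper name valid is hypothetical:

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """An inference is valid iff every truth assignment that satisfies
    all the premises also satisfies the conclusion."""
    return all(conclusion(*vals)
               for vals in product([False, True], repeat=n_vars)
               if all(p(*vals) for p in premises))

# Reductio ad absurdum as negation introduction:
# from (P -> Q) and (P -> not Q), conclude not P.
premises = [lambda p, q: (not p) or q,        # P -> Q
            lambda p, q: (not p) or (not q)]  # P -> not Q
conclusion = lambda p, q: not p               # therefore not P

print(valid(premises, conclusion, 2))   # True: the rule is truth-preserving
```

By contrast, an invalid pattern such as affirming the consequent (from P → Q and Q, infer P) fails the same exhaustive check, which is what distinguishes a genuine rule like RAA from a fallacy.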
https://en.wikipedia.org/wiki/Reductio_ad_absurdum
A skunked term is a word or phrase that becomes difficult to use because it is evolving from one meaning to another, perhaps inconsistent or even opposite, usage,[1] or that becomes difficult to use due to other controversy surrounding the term.[2] Purists may insist on the old usage, while descriptivists may be more open to newer usages. Readers may not know which sense is meant, especially when prescriptivists insist on a meaning that accords with interests that often conflict.[citation needed] The term was coined by the lexicographer Bryan A. Garner in Garner's Modern American Usage and has since been adopted by some other style guides.[2] Garner recommends avoiding such terms if their use may distract readers from the intended meaning of a text.[3] Some terms, such as "fulsome", may become skunked and then eventually revert to their original meaning over time.[4]
https://en.wikipedia.org/wiki/Skunked_term
Ingeometry, anincidencerelationis aheterogeneous relationthat captures the idea being expressed when phrases such as "a pointlies ona line" or "a line iscontained ina plane" are used. The most basic incidence relation is that between a point,P, and a line,l, sometimes denotedPIl. IfPandlare incident,PIl, the pair(P,l)is called aflag. There are many expressions used in common language to describe incidence (for example, a linepasses througha point, a pointlies ina plane, etc.) but the term "incidence" is preferred because it does not have the additional connotations that these other terms have, and it can be used in a symmetric manner. Statements such as "linel1intersects linel2" are also statements about incidence relations, but in this case, it is because this is a shorthand way of saying that "there exists a pointPthat is incident with both linel1and linel2". When one type of object can be thought of as a set of the other type of object (viz., a plane is a set of points) then an incidence relation may be viewed ascontainment. Statements such as "any two lines in a plane meet" are calledincidence propositions. This particular statement is true in aprojective plane, though not true in theEuclidean planewhere lines may beparallel. Historically,projective geometrywas developed in order to make the propositions of incidence true without exceptions, such as those caused by the existence of parallels. From the point of view ofsynthetic geometry, projective geometryshould bedeveloped using such propositions asaxioms. This is most significant for projective planes due to the universal validity ofDesargues' theoremin higher dimensions. In contrast, theanalytic approachis to defineprojective spacebased onlinear algebraand utilizinghomogeneous co-ordinates. The propositions of incidence are derived from the following basic result onvector spaces: given subspacesUandWof a (finite-dimensional) vector spaceV, the dimension of their intersection isdimU+ dimW− dim (U+W). 
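The subspace dimension formula above can be verified numerically. The following is a small sketch assuming NumPy is available; the two example subspaces are chosen arbitrarily for illustration:

```python
import numpy as np

# Two distinct 2-dimensional subspaces of V = R^3 (i.e. two lines of the
# projective plane P(V)), each given by a spanning set of row vectors.
U = np.array([[1, 0, 0], [0, 1, 0]])   # the plane z = 0
W = np.array([[1, 1, 0], [0, 0, 1]])   # another plane

dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.vstack([U, W]))   # dim(U + W)

# dim(U intersect W) = dim U + dim W - dim(U + W)
dim_meet = dim_U + dim_W - dim_sum
print(dim_meet)  # 1: the subspaces share a 1-dimensional subspace, so in
                 # projective terms the two lines of P(V) meet in a point
```

Because any two distinct planes through the origin in R^3 intersect in a line through the origin, this computation always yields 1, which is the algebraic form of the incidence proposition that any two lines in a projective plane meet.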
Bearing in mind that the geometric dimension of the projective spaceP(V)associated toVisdimV− 1and that the geometric dimension of any subspace is positive, the basic proposition of incidence in this setting can take the form:linear subspacesLandMof projective spacePmeet provideddimL+ dimM≥ dimP.[1] The following sections are limited toprojective planesdefined overfields, often denoted byPG(2,F), whereFis a field, orP2F. However these computations can be naturally extended to higher-dimensional projective spaces, and the field may be replaced by adivision ring(or skewfield) provided that one pays attention to the fact that multiplication is notcommutativein that case. LetVbe the three-dimensional vector space defined over the fieldF. The projective planeP(V) = PG(2,F)consists of the one-dimensional vector subspaces ofV, calledpoints, and the two-dimensional vector subspaces ofV, calledlines. Incidence of a point and a line is given by containment of the one-dimensional subspace in the two-dimensional subspace. Fix a basis forVso that we may describe its vectors as coordinate triples (with respect to that basis). A one-dimensional vector subspace consists of a non-zero vector and all of its scalar multiples. The non-zero scalar multiples, written as coordinate triples, are the homogeneous coordinates of the given point, calledpoint coordinates. With respect to this basis, the solution space of a single linear equation{(x,y,z) |ax+by+cz= 0} is a two-dimensional subspace ofV, and hence a line ofP(V). This line may be denoted byline coordinates[a,b,c], which are also homogeneous coordinates since non-zero scalar multiples would give the same line. Other notations are also widely used. Point coordinates may be written as column vectors,(x,y,z)T, with colons,(x:y:z), or with a subscript,(x,y,z)P. Correspondingly, line coordinates may be written as row vectors,(a,b,c), with colons,[a:b:c]or with a subscript,(a,b,c)L. Other variations are also possible. 
Given a point P = (x, y, z) and a line l = [a, b, c], written in terms of point and line coordinates, the point is incident with the line (often written as P I l) if and only if ax + by + cz = 0. No matter what notation is employed, when the homogeneous coordinates of the point and line are just considered as ordered triples, their incidence is expressed as having their dot product equal 0. Let P1 and P2 be a pair of distinct points with homogeneous coordinates (x1, y1, z1) and (x2, y2, z2) respectively. These points determine a unique line l with an equation of the form ax + by + cz = 0, and its coefficients must satisfy the equations ax1 + by1 + cz1 = 0 and ax2 + by2 + cz2 = 0. Together with ax + by + cz = 0 for a generic point (x, y, z) of the line, this system of simultaneous linear equations in the unknowns a, b and c has a nontrivial solution if and only if the determinant of the 3 × 3 matrix with rows (x, y, z), (x1, y1, z1) and (x2, y2, z2) is zero. Expansion of this determinantal equation produces a homogeneous linear equation, which must be the equation of line l. Therefore, up to a common non-zero constant factor, we have l = [a, b, c] where a = y1z2 − y2z1, b = x2z1 − x1z2 and c = x1y2 − x2y1. In terms of the scalar triple product notation for vectors, the equation of this line may be written as P ⋅ (P1 × P2) = 0, where P = (x, y, z) is a generic point. Points that are incident with the same line are said to be collinear. The set of all points incident with the same line is called a range. If P1 = (x1, y1, z1), P2 = (x2, y2, z2), and P3 = (x3, y3, z3), then these points are collinear if and only if the determinant of the 3 × 3 matrix with rows (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) vanishes, i.e., if and only if the determinant of the homogeneous coordinates of the points is equal to zero. Let l1 = [a1, b1, c1] and l2 = [a2, b2, c2] be a pair of distinct lines.
Then the intersection of lines l1 and l2 is the point P = (x0, y0, z0) that is the simultaneous solution (up to a scalar factor) of the system of linear equations a1x + b1y + c1z = 0 and a2x + b2y + c2z = 0. The solution of this system gives x0 = b1c2 − b2c1, y0 = a2c1 − a1c2 and z0 = a1b2 − a2b1. Alternatively, consider another line l = [a, b, c] passing through the point P; that is, the homogeneous coordinates of P satisfy the equation ax0 + by0 + cz0 = 0. Combining this equation with the two that define P, we can seek a non-trivial solution of the matrix equation formed by the three line equations. Such a solution exists provided the determinant of the 3 × 3 matrix with rows (a, b, c), (a1, b1, c1) and (a2, b2, c2) is zero. The coefficients of a, b and c in the expansion of this determinant give the homogeneous coordinates of P. The equation of the generic line passing through the point P in scalar triple product notation is l ⋅ (l1 × l2) = 0. Lines that meet at the same point are said to be concurrent. The set of all lines in a plane incident with the same point is called a pencil of lines centered at that point. The computation of the intersection of two lines shows that the entire pencil of lines centered at a point is determined by any two of the lines that intersect at that point. It immediately follows that the algebraic condition for three lines, [a1, b1, c1], [a2, b2, c2], [a3, b3, c3], to be concurrent is that the determinant of the 3 × 3 matrix of their coordinates is zero.
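The coordinate formulas of this section (incidence as a vanishing dot product, the join of two points, and the meet of two lines, both given by cross products) can be sketched concretely over the reals. This is an illustrative sketch with arbitrarily chosen coordinates, not notation from the article:

```python
# Points and lines of PG(2, R) as homogeneous coordinate triples.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # The join of two points (the line through them) and the meet of two
    # lines (their intersection point) are both given by this formula.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def incident(point, line):
    # P I l  <=>  ax + by + cz = 0, i.e. the dot product vanishes.
    return dot(point, line) == 0

P1, P2 = (1, 0, 1), (0, 1, 1)
l = cross(P1, P2)              # line joining P1 and P2
assert incident(P1, l) and incident(P2, l)

m = (1, 1, 0)                  # a second line
Q = cross(l, m)                # point of intersection of l and m
assert incident(Q, l) and incident(Q, m)
print(l, Q)
```

Since homogeneous coordinates are only defined up to a non-zero scalar factor, any scalar multiple of l or Q names the same line or point, which is why the text speaks of solutions "up to a scalar factor".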
https://en.wikipedia.org/wiki/Incidence_(geometry)
Inmathematics,incidence geometryis the study ofincidence structures. A geometric structure such as theEuclidean planeis a complicated object that involves concepts such as length, angles, continuity, betweenness, andincidence. Anincidence structureis what is obtained when all other concepts are removed and all that remains is the data about which points lie on which lines. Even with this severe limitation, theorems can be proved and interesting facts emerge concerning this structure. Such fundamental results remain valid when additional concepts are added to form a richer geometry. It sometimes happens that authors blur the distinction between a study and the objects of that study, so it is not surprising to find that some authors refer to incidence structures as incidence geometries.[1] Incidence structures arise naturally and have been studied in various areas of mathematics. Consequently, there are different terminologies to describe these objects. Ingraph theorythey are calledhypergraphs, and incombinatorial design theorythey are calledblock designs. Besides the difference in terminology, each area approaches the subject differently and is interested in questions about these objects relevant to that discipline. Using geometric language, as is done in incidence geometry, shapes the topics and examples that are normally presented. It is, however, possible to translate the results from one discipline into the terminology of another, but this often leads to awkward and convoluted statements that do not appear to be natural outgrowths of the topics. In the examples selected for this article we use only those with a natural geometric flavor. A special case that has generated much interest deals with finite sets of points in theEuclidean planeand what can be said about the number and types of (straight) lines they determine. Some results of this situation can extend to more general settings since only incidence properties are considered. 
An incidence structure (P, L, I) consists of a set P whose elements are called points, a disjoint set L whose elements are called lines, and an incidence relation I between them, that is, a subset of P × L whose elements are called flags.[2] If (A, l) is a flag, we say that A is incident with l or that l is incident with A (the terminology is symmetric), and write A I l. Intuitively, a point and line are in this relation if and only if the point is on the line. Given a point B and a line m which do not form a flag, that is, the point is not on the line, the pair (B, m) is called an anti-flag. There is no natural concept of distance (a metric) in an incidence structure. However, a combinatorial metric does exist in the corresponding incidence graph (Levi graph), namely the length of the shortest path between two vertices in this bipartite graph. The distance between two objects of an incidence structure – two points, two lines or a point and a line – can be defined to be the distance between the corresponding vertices in the incidence graph of the incidence structure. Another way to define a distance again uses a graph-theoretic notion in a related structure, this time the collinearity graph of the incidence structure. The vertices of the collinearity graph are the points of the incidence structure, and two points are joined if there exists a line incident with both points. The distance between two points of the incidence structure can then be defined as their distance in the collinearity graph. When distance is considered in an incidence structure, it is necessary to mention how it is being defined. Incidence structures that are most studied are those that satisfy some additional properties (axioms), such as projective planes, affine planes, generalized polygons, partial geometries and near polygons.
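The definitions above can be encoded directly. The sketch below (the triangle is an illustrative example, not taken from the text) shows one way to compute flags, anti-flags, and the combinatorial distance in the Levi graph:

```python
from collections import deque

# A toy incidence structure: the triangle. Points are letters and each
# line is recorded as the set of points incident with it.
points = {"a", "b", "c"}
lines = [frozenset("ab"), frozenset("bc"), frozenset("ca")]

# Flags are incident (point, line) pairs; anti-flags are the rest.
flags = [(p, l) for p in points for l in lines if p in l]
anti_flags = [(p, l) for p in points for l in lines if p not in l]

# The Levi graph is bipartite: point vertices on one side, line
# vertices on the other, joined exactly when incident.
adj = {v: set() for v in list(points) + lines}
for l in lines:
    for p in l:
        adj[p].add(l)
        adj[l].add(p)

def levi_distance(u, v):
    """Shortest-path distance in the Levi graph (breadth-first search)."""
    seen, queue = {u: 0}, deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen[y] = seen[x] + 1
                queue.append(y)
    return seen[v]

print(len(flags), len(anti_flags))  # 6 3
print(levi_distance("a", "b"))      # collinear points are at distance 2
```

Two collinear points sit at distance 2 in the Levi graph (point, shared line, point), which is the combinatorial metric described above.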
Very general incidence structures can be obtained by imposing "mild" conditions, such as: A partial linear space is an incidence structure for which the following axioms are true:[3] every pair of distinct points determines at most one line, and every line is incident with at least two distinct points. In a partial linear space it is also true that every pair of distinct lines meet in at most one point. This statement does not have to be assumed, as it is readily proved from the first axiom above. Further constraints are provided by the regularity conditions: RLk: Each line is incident with the same number of points. If finite, this number is often denoted by k. RPr: Each point is incident with the same number of lines. If finite, this number is often denoted by r. The second axiom of a partial linear space implies that k > 1. Neither regularity condition implies the other, so it has to be assumed that r > 1. A finite partial linear space satisfying both regularity conditions with k, r > 1 is called a tactical configuration.[4] Some authors refer to these simply as configurations,[5] or projective configurations.[6] If a tactical configuration has n points and m lines, then, by double counting the flags, the relationship nr = mk is established. A common notation refers to (n_r, m_k)-configurations. In the special case where n = m (and hence, r = k) the notation (n_k, n_k) is often simply written as (n_k). A linear space is a partial linear space such that:[7] every pair of distinct points determines exactly one line. Some authors add a "non-degeneracy" (or "non-triviality") axiom to the definition of a (partial) linear space, such as: there exist at least two distinct lines. This is used to rule out some very small examples (mainly when the sets P or L have fewer than two elements) that would normally be exceptions to general statements made about the incidence structures. An alternative to adding the axiom is to refer to incidence structures that do not satisfy the axiom as being trivial and those that do as non-trivial. Each non-trivial linear space contains at least three points and three lines, so the simplest non-trivial linear space that can exist is a triangle.
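The double-counting identity nr = mk can be checked mechanically on a small tactical configuration. The complete quadrangle used here (4 points, one line through each of the 6 point pairs) is an illustrative choice:

```python
from itertools import combinations

# The complete quadrangle: 4 points, and one 2-point line through each
# of the 6 point pairs. It is a tactical configuration with k = 2, r = 3.
points = list(range(4))
lines = [frozenset(pair) for pair in combinations(points, 2)]

# Regularity: every line has k points, every point lies on r lines.
assert {len(l) for l in lines} == {2}
assert {sum(p in l for l in lines) for p in points} == {3}

# Double counting the flags: n*r must equal m*k.
n, m, r, k = len(points), len(lines), 3, 2
num_flags = sum(len(l) for l in lines)
assert num_flags == n * r == m * k == 12
```

Counting flags once by points (n·r) and once by lines (m·k) must give the same total, which is exactly the relation stated above.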
A linear space having at least three points on every line is a Sylvester–Gallai design. Some of the basic concepts and terminology arise from geometric examples, particularly projective planes and affine planes. A projective plane is a linear space in which: every pair of distinct lines meet in exactly one point, and that satisfies the non-degeneracy condition: there exist four points, no three of which are collinear. There is a bijection between P and L in a projective plane. If P is a finite set, the projective plane is referred to as a finite projective plane. The order of a finite projective plane is n = k − 1, that is, one less than the number of points on a line. All known projective planes have orders that are prime powers. A projective plane of order n is an ((n² + n + 1)_{n+1}) configuration. The smallest projective plane has order two and is known as the Fano plane. This famous incidence geometry was developed by the Italian mathematician Gino Fano. In his work[9] on proving the independence of the set of axioms for projective n-space that he developed,[10] he produced a finite three-dimensional space with 15 points, 35 lines and 15 planes, in which each line had only three points on it.[11] The planes in this space consisted of seven points and seven lines and are now known as Fano planes. The Fano plane cannot be represented in the Euclidean plane using only points and straight line segments (i.e., it is not realizable). This is a consequence of the Sylvester–Gallai theorem, according to which every realizable incidence geometry must include an ordinary line, a line containing only two points. The Fano plane has no such line (that is, it is a Sylvester–Gallai configuration), so it is not realizable.[12] A complete quadrangle consists of four points, no three of which are collinear. In the Fano plane, the three points not on a complete quadrangle are the diagonal points of that quadrangle and are collinear. This contradicts the Fano axiom, often used as an axiom for the Euclidean plane, which states that the three diagonal points of a complete quadrangle are never collinear.
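The Fano plane is small enough that the projective-plane axioms can be verified exhaustively. The sketch below builds it from the cyclic line {0, 1, 3} mod 7 (the same presentation used later in the configuration article):

```python
from itertools import combinations

# The Fano plane as a cyclic structure: each line is {i, i+1, i+3} mod 7.
lines = [frozenset({i, (i + 1) % 7, (i + 3) % 7}) for i in range(7)]
points = set(range(7))

# Linear space axiom: every pair of distinct points lies on exactly
# one line ...
for p, q in combinations(points, 2):
    assert sum({p, q} <= l for l in lines) == 1

# ... and the projective axiom: every pair of distinct lines meets in
# exactly one point.
for l1, l2 in combinations(lines, 2):
    assert len(l1 & l2) == 1

# Order n = k - 1 = 2, with n^2 + n + 1 = 7 points and 7 lines.
assert len(points) == len(lines) == 7
```

It also has no ordinary line (every line carries three points), which is why it cannot be drawn with straight lines in the Euclidean plane.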
An affine plane is a linear space satisfying: for any point A and any line l not incident with A (an anti-flag), there is exactly one line m incident with A that does not meet l (Playfair's axiom), and satisfying the non-degeneracy condition: there exists a triangle, i.e. three non-collinear points. The lines l and m in the statement of Playfair's axiom are said to be parallel. Every affine plane can be uniquely extended to a projective plane. The order of a finite affine plane is k, the number of points on a line. An affine plane of order n is an ((n²)_{n+1}, (n² + n)_n) configuration. The affine plane of order three is a (9_4, 12_3) configuration. When embedded in some ambient space it is called the Hesse configuration. It is not realizable in the Euclidean plane but is realizable in the complex projective plane as the nine inflection points of an elliptic curve with the 12 lines incident with triples of these. The 12 lines can be partitioned into four classes of three lines apiece where, in each class, the lines are mutually disjoint. These classes are called parallel classes of lines. Adding four new points, each being added to all the lines of a single parallel class (so all of these lines now intersect), and one new line containing just these four new points produces the projective plane of order three, a (13_4) configuration. Conversely, starting with the projective plane of order three (it is unique) and removing any single line and all the points on that line produces this affine plane of order three (it is also unique). Removing one point and the four lines that pass through that point (but not the other points on them) produces the (8_3) Möbius–Kantor configuration. Given an integer α ≥ 1, a tactical configuration satisfying: for every anti-flag (B, m), the point B is collinear with exactly α points of the line m, is called a partial geometry. If there are s + 1 points on a line and t + 1 lines through a point, the notation for a partial geometry is pg(s, t, α). If α = 1 these partial geometries are generalized quadrangles. If α = s + 1 these are called Steiner systems.
For n > 2,[13] a generalized n-gon is a partial linear space whose incidence graph Γ has the property: the girth of Γ is twice its diameter, namely 2n. A generalized 2-gon is an incidence structure, which is not a partial linear space, consisting of at least two points and two lines with every point being incident with every line. The incidence graph of a generalized 2-gon is a complete bipartite graph. A generalized n-gon contains no ordinary m-gon for 2 ≤ m < n and for every pair of objects (two points, two lines or a point and a line) there is an ordinary n-gon that contains them both. Generalized 3-gons are projective planes. Generalized 4-gons are called generalized quadrangles. By the Feit–Higman theorem the only finite generalized n-gons with at least three points per line and three lines per point have n = 2, 3, 4, 6 or 8. For a non-negative integer d, a near 2d-gon is an incidence structure such that: the maximum distance between two points (measured in the collinearity graph) is d, and for every point X and every line l there exists a unique point on l nearest to X. A near 0-gon is a point, while a near 2-gon is a line. The collinearity graph of a near 2-gon is a complete graph. A near 4-gon is a generalized quadrangle (possibly degenerate). Every finite generalized polygon except the projective planes is a near polygon. Any connected bipartite graph is a near polygon and any near polygon with precisely two points per line is a connected bipartite graph. Also, all dual polar spaces are near polygons. Many near polygons are related to finite simple groups like the Mathieu groups and the Janko group J2. Moreover, the generalized 2d-gons, which are related to groups of Lie type, are special cases of near 2d-gons. An abstract Möbius plane (or inversive plane) is an incidence structure where, to avoid possible confusion with the terminology of the classical case, the lines are referred to as cycles or blocks. Specifically, a Möbius plane is an incidence structure of points and cycles such that: every triple of distinct points is incident with exactly one cycle; for every flag (P, z) and every point Q not incident with z there is a unique cycle z* with P I z*, Q I z* and z ∩ z* = {P}; and every cycle has at least three points and there is at least one cycle. The incidence structure obtained at any point P of a Möbius plane by taking as points all the points other than P and as lines only those cycles that contain P (with P removed), is an affine plane.
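Since generalized 3-gons are projective planes, the incidence graph of the Fano plane should have diameter 3 (its girth is then 6). The sketch below checks the diameter by breadth-first search on the Levi graph:

```python
from collections import deque

# Levi graph of the Fano plane: 7 point vertices 0..6 and 7 line
# vertices ("L", i), joined when incident.
lines = [frozenset({i, (i + 1) % 7, (i + 3) % 7}) for i in range(7)]
adj = {v: set() for v in list(range(7)) + [("L", i) for i in range(7)]}
for i, l in enumerate(lines):
    for p in l:
        adj[p].add(("L", i))
        adj[("L", i)].add(p)

def dist(u, v):
    """BFS distance in the (connected) Levi graph."""
    seen, q = {u: 0}, deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in seen:
                seen[y] = seen[x] + 1
                q.append(y)
    return seen[v]

diameter = max(dist(u, v) for u in adj for v in adj)
assert diameter == 3  # as expected for a generalized 3-gon
```

A point and a non-incident line are at distance 3 (the worst case), while any two points and any two lines are at distance 2, matching the diameter-n, girth-2n characterization.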
This structure is called the residual at P in design theory. A finite Möbius plane of order m is a tactical configuration with k = m + 1 points per cycle that is a 3-design, specifically a 3-(m² + 1, m + 1, 1) block design. A question raised by J. J. Sylvester in 1893 and finally settled by Tibor Gallai concerned incidences of a finite set of points in the Euclidean plane. Theorem (Sylvester–Gallai): A finite set of points in the Euclidean plane is either collinear or there exists a line incident with exactly two of the points. A line containing exactly two of the points is called an ordinary line in this context. Sylvester was probably led to the question while pondering about the embeddability of the Hesse configuration. A related result is the de Bruijn–Erdős theorem. Nicolaas Govert de Bruijn and Paul Erdős proved the result in the more general setting of projective planes, but it still holds in the Euclidean plane. The theorem is:[14] any set of n non-collinear points in the plane determines at least n distinct lines. As the authors pointed out, since their proof was combinatorial, the result holds in a larger setting, in fact in any incidence geometry in which there is a unique line through every pair of distinct points. They also mention that the Euclidean plane version can be proved from the Sylvester–Gallai theorem using induction. A bound on the number of flags determined by a finite set of points and the lines they determine is given by: Theorem (Szemerédi–Trotter): given n points and m lines in the plane, the number of flags (incident point-line pairs) is O(n^{2/3}m^{2/3} + n + m), and this bound cannot be improved, except in terms of the implicit constants. This result can be used to prove Beck's theorem. A similar bound for the number of incidences is conjectured for point-circle incidences, but only weaker upper bounds are known.[15] Beck's theorem says that finite collections of points in the plane fall into one of two extremes; one where a large fraction of points lie on a single line, and one where a large number of lines are needed to connect all the points.
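The quantity bounded by Szemerédi–Trotter is easy to compute by brute force for a concrete family. The sketch below counts flags for a 4×4 grid against its horizontal and vertical lines; the constant 2.5 in the comparison is purely illustrative (the theorem only asserts some constant exists):

```python
# Brute-force flag (incidence) count for points and lines ax + by = c.
def count_flags(points, lines):
    return sum(a * x + b * y == c
               for (x, y) in points for (a, b, c) in lines)

points = [(x, y) for x in range(4) for y in range(4)]
lines = ([(0, 1, y) for y in range(4)]      # horizontals y = const
         + [(1, 0, x) for x in range(4)])   # verticals x = const
n, m = len(points), len(lines)

incidences = count_flags(points, lines)
assert incidences == 32  # each of the 16 points lies on exactly 2 lines

# Illustrative check against C * (n^(2/3) m^(2/3) + n + m) with C = 2.5.
assert incidences <= 2.5 * (n ** (2 / 3) * m ** (2 / 3) + n + m)
```

Grid-like examples such as this one are also the constructions showing the n^{2/3}m^{2/3} term cannot be improved in general.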
The theorem asserts the existence of positive constants C, K such that given any n points in the plane, at least one of the following statements is true: there is a line containing at least n/C of the points, or there exist at least n²/K lines, each containing at least two of the points. In Beck's original argument, C is 100 and K is an unspecified constant; it is not known what the optimal values of C and K are.
https://en.wikipedia.org/wiki/Incidence_geometry
In mathematics, specifically projective geometry, a configuration in the plane consists of a finite set of points, and a finite arrangement of lines, such that each point is incident to the same number of lines and each line is incident to the same number of points.[1] Although certain specific configurations had been studied earlier (for instance by Thomas Kirkman in 1849), the formal study of configurations was first introduced by Theodor Reye in 1876, in the second edition of his book Geometrie der Lage, in the context of a discussion of Desargues' theorem. Ernst Steinitz wrote his dissertation on the subject in 1894, and they were popularized by Hilbert and Cohn-Vossen's 1932 book Anschauliche Geometrie, reprinted in English as Hilbert & Cohn-Vossen (1952). Configurations may be studied either as concrete sets of points and lines in a specific geometry, such as the Euclidean or projective planes (these are said to be realizable in that geometry), or as a type of abstract incidence geometry. In the latter case they are closely related to regular hypergraphs and biregular bipartite graphs, but with some additional restrictions: every two points of the incidence structure can be associated with at most one line, and every two lines can be associated with at most one point. That is, the girth of the corresponding bipartite graph (the Levi graph of the configuration) must be at least six. A configuration in the plane is denoted by (p_γ ℓ_π), where p is the number of points, ℓ the number of lines, γ the number of lines per point, and π the number of points per line. These numbers necessarily satisfy the equation pγ = ℓπ, as this product is the number of point-line incidences (flags). Configurations having the same symbol, say (p_γ ℓ_π), need not be isomorphic as incidence structures. For instance, there exist three different (9_3 9_3) configurations: the Pappus configuration and two less notable configurations. In some configurations, p = ℓ and consequently, γ = π.
These are called symmetric or balanced configurations[2] and the notation is often condensed to avoid repetition. For example, (9_3 9_3) abbreviates to (9_3). Notable projective configurations include the Fano plane (7_3), the Möbius–Kantor configuration (8_3), the Pappus configuration (9_3), and the Desargues configuration (10_3). The projective dual of a configuration (p_γ ℓ_π) is a (ℓ_π p_γ) configuration in which the roles of "point" and "line" are exchanged. Types of configurations therefore come in dual pairs, except when taking the dual results in an isomorphic configuration. These exceptions are called self-dual configurations and in such cases p = ℓ.[5] The number of nonisomorphic configurations of type (n_3), starting at n = 7, is given by the sequence 1, 1, 3, 10, 31, 229, 2036, ... These numbers count configurations as abstract incidence structures, regardless of realizability.[6] As Gropp (1997) discusses, nine of the ten (10_3) configurations, and all of the (11_3) and (12_3) configurations, are realizable in the Euclidean plane, but for each n ≥ 16 there is at least one nonrealizable (n_3) configuration. Gropp also points out a long-lasting error in this sequence: an 1895 paper attempted to list all (12_3) configurations, and found 228 of them, but the 229th configuration, the Gropp configuration, was not discovered until 1988. There are several techniques for constructing configurations, generally starting from known configurations. Some of the simplest of these techniques construct symmetric (p_γ) configurations. Some self-dual configurations (p_k) are cyclic configurations and can be constructed by one "generator line", like {0,1,3}, with vertices indexed from zero, and where indices in following lines are cycled forward modulo p. This is guaranteed to produce a symmetric configuration when valid. An invalid generator line produces disconnected configurations, or it may break the axiom requiring at most one line between any two points.[7] Every polygon as configuration (p_2) is trivially a cyclic configuration with generator line {0,1}. A triangle (3_2) has lines {{0,1},{1,2},{2,0}}.
The Fano plane, (7_3), the smallest self-dual order-3 symmetric configuration, can be defined by generator line {0,1,3} as lines {{0,1,3}, {1,2,4}, {2,3,5}, {3,4,6}, {4,5,0}, {5,6,1}, {6,0,2}}. They can also be represented in a configuration table. The smallest self-dual order-5 symmetric configuration, (21_5), is a cyclic configuration and can be generated by the line {0,3,4,9,11}.[8] Any finite projective plane of order n, PG(2, n), is an ((n² + n + 1)_{n+1}) configuration. Since projective planes are known to exist for all orders n which are powers of primes, these constructions provide infinite families of symmetric configurations. The automorphism group of PG(2, n), with n = q^m (q prime), has order m(n³ − 1)(n³ − n)(n³ − n²)/(n − 1).[9] Not all symmetric configurations are realizable; in particular, those arising as projective planes require n to be a prime power. For instance, PG(2, 6), a (43_7) configuration, does not exist.[10] However, Gropp (1990) has provided a construction which shows that for k ≥ 3, a (p_k) configuration exists for all p ≥ 2ℓ_k + 1, where ℓ_k is the length of an optimal Golomb ruler of order k. The concept of a configuration may be generalized to higher dimensions,[11] for instance to points and lines or planes in space. In such cases, the restriction that no two points belong to more than one line may be relaxed, because it is possible for two points to belong to more than one plane. Notable three-dimensional configurations are the Möbius configuration, consisting of two mutually inscribed tetrahedra, Reye's configuration, consisting of twelve points and twelve planes, with six points per plane and six planes per point, the Gray configuration, consisting of a 3×3×3 grid of 27 points and the 27 orthogonal lines through them, and the Schläfli double six, a configuration with 30 points, 12 lines, two lines per point, and five points per line.
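The cyclic construction is easy to automate, including the validity check that no two points end up on more than one common line. The helper below is a sketch (the function name is ours, not the source's):

```python
from itertools import combinations

def cyclic_configuration(generator, p):
    """Cycle a generator line forward modulo p, rejecting invalid
    generators that put some point pair on two different lines."""
    lines = [frozenset((g + i) % p for g in generator) for i in range(p)]
    pair_counts = {}
    for l in lines:
        for pair in combinations(sorted(l), 2):
            pair_counts[pair] = pair_counts.get(pair, 0) + 1
    if any(c > 1 for c in pair_counts.values()):
        raise ValueError("invalid generator line: repeated point pair")
    return lines

fano = cyclic_configuration({0, 1, 3}, 7)             # the (7_3) Fano plane
conf_21_5 = cyclic_configuration({0, 3, 4, 9, 11}, 21)  # the (21_5) configuration

assert len(fano) == 7 and all(len(l) == 3 for l in fano)
assert len(conf_21_5) == 21 and all(len(l) == 5 for l in conf_21_5)
```

A generator such as {0, 1, 2} modulo 5 would be rejected, since consecutive shifts share two points, violating the at-most-one-line axiom mentioned above.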
A configuration in the projective plane that is realized by points and pseudolines is called a topological configuration.[2] For instance, it is known that there exist no point-line (19_4) configurations; however, there exists a topological configuration with these parameters. Another generalization of the concept of a configuration concerns configurations of points and circles, a notable example being the (8_3 6_4) Miquel configuration.[2]
https://en.wikipedia.org/wiki/Projective_configuration
Inmathematics, anabstract polytopeis an algebraicpartially ordered setwhich captures the dyadic property of a traditionalpolytopewithout specifying purely geometric properties such as points and lines. A geometricpolytopeis said to be arealizationof an abstract polytope in some realN-dimensional space, typicallyEuclidean. This abstract definition allows more generalcombinatorialstructures than traditional definitions of a polytope, thus allowing new objects that have no counterpart in traditional theory. In Euclidean geometry, two shapes that are notsimilarcan nonetheless share a common structure. For example, asquareand atrapezoidboth comprise an alternating chain of fourverticesand four sides, which makes themquadrilaterals. They are said to beisomorphicor “structure preserving”. This common structure may be represented in an underlying abstract polytope, a purely algebraic partially ordered set which captures the pattern of connections (orincidences)between the various structural elements. The measurable properties of traditional polytopes such as angles, edge-lengths, skewness, straightness and convexity have no meaning for an abstract polytope. What is true for traditional polytopes (also called classical or geometric polytopes) may not be so for abstract ones, and vice versa. For example, a traditional polytope is regular if all its facets and vertex figures are regular, but this is not necessarily so for an abstract polytope.[1] A traditional polytope is said to be arealizationof the associated abstract polytope. A realization is a mapping or injection of the abstract object into a real space, typicallyEuclidean, to construct a traditional polytope as a real geometric figure. The six quadrilaterals shown are all distinct realizations of the abstract quadrilateral, each with different geometric properties. Some of them do not conform to traditional definitions of a quadrilateral and are said to beunfaithfulrealizations. 
A conventional polytope is a faithful realization. In an abstract polytope, each structural element (vertex, edge, cell, etc.) is associated with a corresponding member of the set. The termfaceis used to refer to any such element e.g. a vertex (0-face), edge (1-face) or a generalk-face, and not just a polygonal 2-face. The faces arerankedaccording to their associated real dimension: vertices have rank 0, edges rank 1 and so on. Incident faces of different ranks, for example, a vertex F of an edge G, are ordered by the relation F < G. F is said to be asubfaceof G. F, G are said to beincidentif either F = G or F < G or G < F. This usage of "incidence" also occurs infinite geometry, although it differs from traditional geometry and some other areas of mathematics. For example, in the squareABCD, edgesABandBCare not abstractly incident (although they are both incident with vertex B).[citation needed] A polytope is then defined as a set of facesPwith an order relation<. Formally,P(with<) will be a (strict)partially ordered set, orposet. Just as the number zero is necessary in mathematics, so also every set has theempty set∅ as a subset. In an abstract polytope ∅ is by convention identified as theleastornullface and is a subface of all the others.[why?]Since the least face is one level below the vertices or 0-faces, its rank is −1 and it may be denoted asF−1. Thus F−1≡ ∅ and the abstract polytope also contains the empty set as an element.[2]It is usually not realized, though the lack of its realization could be interpreted as it being realized as the set containing no points, the empty set. There is also a single face of which all the others are subfaces. This is called thegreatestface. In ann-dimensional polytope, the greatest face has rank =nand may be denoted asFn. It is sometimes realized as the interior of the geometric figure. 
These least and greatest faces are sometimes calledimproperfaces, with all others beingproperfaces.[3] The faces of the abstract quadrilateral or square are shown in the table below: The relation < comprises a set of pairs, which here include Order relations aretransitive, i.e. F < G and G < H implies that F < H. Therefore, to specify the hierarchy of faces, it is not necessary to give every case of F < H, only the pairs where one is thesuccessorof the other, i.e. where F < H and no G satisfies F < G < H. The edges W, X, Y and Z are sometimes written asab,ad,bc, andcdrespectively, but such notation is not always appropriate. All four edges are structurally similar and the same is true of the vertices. The figure therefore has the symmetries of a square and is usually referred to as the square. Smaller posets, and polytopes in particular, are often best visualized in aHasse diagram, as shown. By convention, faces of equal rank are placed on the same vertical level. Each "line" between faces, say F, G, indicates an ordering relation < such that F < G where F is below G in the diagram. The Hasse diagram defines the unique poset and therefore fully captures the structure of the polytope. Isomorphic polytopes give rise to isomorphic Hasse diagrams, and vice versa. The same is not generally true for thegraphrepresentation of polytopes. Therankof a face F is defined as (m− 2), wheremis the maximum number of faces in anychain(F', F", ... , F) satisfying F' < F" < ... < F. F' is always the least face, F−1. Therankof an abstract polytopePis the maximum ranknof any face. It is always the rank of the greatest face Fn. The rank of a face or polytope usually corresponds to thedimensionof its counterpart in traditional theory. For some ranks, their face-types are named in the following table. † Traditionally "face" has meant a rank 2 face or 2-face. In abstract theory the term "face" denotes a face ofanyrank. In geometry, aflagis a maximalchainof faces, i.e. 
a (totally) ordered set Ψ of faces, each a subface of the next (if any), and such that Ψ is not a subset of any larger chain. Given any two distinct faces F, G in a flag, either F < G or F > G. For example, {ø, a, ab, abc} is a flag in the triangle abc. For a given polytope, all flags contain the same number of faces. Other posets do not, in general, satisfy this requirement. Any subset P' of a poset P is a poset (with the same relation <, restricted to P'). In an abstract polytope, given any two faces F, H of P with F ≤ H, the set {G | F ≤ G ≤ H} is called a section of P, and denoted H/F. (In order theory, a section is called a closed interval of the poset and denoted [F, H].) For example, in the prism abcxyz (see diagram) the section xyz/ø (highlighted green) is the triangle xyz. A k-section is a section of rank k. P is thus a section of itself. This concept of section does not have the same meaning as in traditional geometry. The facet for a given j-face F is the (j−1)-section F/∅, where Fj is the greatest face. For example, in the triangle abc, the facet at ab is ab/∅ = {∅, a, b, ab}, which is a line segment. The distinction between F and F/∅ is not usually significant and the two are often treated as identical. The vertex figure at a given vertex V is the (n−1)-section Fn/V, where Fn is the greatest face. For example, in the triangle abc, the vertex figure at b is abc/b = {b, ab, bc, abc}, which is a line segment. The vertex figures of a cube are triangles. A poset P is connected if P has rank ≤ 1, or, given any two proper faces F and G, there is a sequence of proper faces such that F = H1, G = Hk, and each Hi, i < k, is incident with its successor. The above condition ensures that a pair of disjoint triangles abc and xyz is not a (single) polytope. A poset P is strongly connected if every section of P (including P itself) is connected. With this additional requirement, two pyramids that share just a vertex are also excluded. However, two square pyramids, for example, can be "glued" at their square faces, giving an octahedron.
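Ranks and sections can be computed directly from the subset order when a polytope is given in vertex notation. The abstract square below is a sketch (the diagonals ac and bd are deliberately absent, since they are not faces):

```python
# The abstract square in vertex notation: least face = empty set,
# greatest face = the full vertex set; only the four edges appear.
faces = [frozenset(s) for s in
         ["", "a", "b", "c", "d", "ab", "bc", "cd", "da", "abcd"]]

def rank(f):
    """rank(F) = m - 2, with m the number of faces in a maximal chain
    F' < F'' < ... < F; computed here as longest-chain length below F."""
    below = [g for g in faces if g < f]            # proper subfaces
    return -1 if not below else 1 + max(rank(g) for g in below)

assert rank(frozenset()) == -1       # the least (null) face
assert rank(frozenset("a")) == 0     # a vertex
assert rank(frozenset("ab")) == 1    # an edge
assert rank(frozenset("abcd")) == 2  # the greatest face

def section(h, f):
    """The section H/F = {G | F <= G <= H}."""
    return [g for g in faces if f <= g <= h]

# Vertex figure at a: abcd/a = {a, ab, da, abcd}, a line segment.
assert len(section(frozenset("abcd"), frozenset("a"))) == 4
```

The vertex figure at a comes out as a rank-1 section (a line segment), matching the triangle example in the text.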
The "common face" is not then a face of the octahedron. An abstract polytope is a partially ordered set, whose elements we call faces, satisfying the 4 axioms:[citation needed] it has a least face and a greatest face; all flags contain the same number of faces; it is strongly connected; and if the ranks of two faces F < H differ by 2, then exactly two faces lie strictly between F and H (the "diamond" property). An n-polytope is a polytope of rank n. The abstract polytope associated with a real convex polytope is also referred to as its face lattice.[4] There is just one poset for each rank −1 and 0. These are, respectively, the null face and the point. These are not always considered to be valid abstract polytopes. There is only one polytope of rank 1, which is the line segment. It has a least face, just two 0-faces and a greatest face, for example {ø, a, b, ab}. It follows that the vertices a and b have rank 0, and that the greatest face ab, and therefore the poset, both have rank 1. For each p, 3 ≤ p < ∞, we have (the abstract equivalent of) the traditional polygon with p vertices and p edges, or a p-gon. For p = 3, 4, 5, ... we have the triangle, square, pentagon, .... For p = 2, we have the digon, and for p = ∞ we get the apeirogon. A digon is a polygon with just 2 edges. Unlike any other polygon, both edges have the same two vertices. For this reason, it is degenerate in the Euclidean plane. Faces are sometimes described using "vertex notation" – e.g. {ø, a, b, c, ab, ac, bc, abc} for the triangle abc. This method has the advantage of implying the < relation. With the digon this vertex notation cannot be used. It is necessary to give the faces individual symbols and specify the subface pairs F < G. Thus, a digon is defined as a set {ø, a, b, E', E", G} with the relation < given by ø < a, b; a, b < E', E"; and E', E" < G, where E' and E" are the two edges, and G the greatest face. This need to identify each element of the polytope with a unique symbol applies to many other abstract polytopes and is therefore common practice. A polytope can only be fully described using vertex notation if every face is incident with a unique set of vertices. A polytope having this property is said to be atomistic.
The set ofj-faces (−1 ≤j≤n) of a traditionaln-polytope form an abstractn-polytope. The concept of an abstract polytope is more general and also includes: The digon is generalized by thehosohedronand higher dimensional hosotopes, which can all be realized asspherical polyhedra– they tessellate the sphere. Four examples of non-traditional abstract polyhedra are theHemicube(shown),Hemi-octahedron,Hemi-dodecahedron, and theHemi-icosahedron. These are the projective counterparts of thePlatonic solids, and can be realized as (globally)projective polyhedra– they tessellate thereal projective plane. The hemicube is another example of where vertex notation cannot be used to define a polytope - all the 2-faces and the 3-face have the same vertex set. Every geometric polytope has adualtwin. Abstractly, the dual is the same polytope but with the ranking reversed in order: the Hasse diagram differs only in its annotations. In ann-polytope, each of the originalk-faces maps to an (n−k− 1)-face in the dual. Thus, for example, then-face maps to the (−1)-face. The dual of a dual is (isomorphicto) the original. A polytope is self-dual if it is the same as, i.e. isomorphic to, its dual. Hence, the Hasse diagram of a self-dual polytope must be symmetrical about the horizontal axis half-way between the top and bottom. The square pyramid in the example above is self-dual. The vertex figure at a vertexVis the dual of the facet to whichVmaps in the dual polytope. Formally, an abstract polytope is defined to be "regular" if itsautomorphism groupactstransitively on the set of its flags. In particular, any twok-facesF,Gof ann-polytope are "the same", i.e. that there is an automorphism which mapsFtoG. When an abstract polytope is regular, its automorphism group is isomorphic to a quotient of aCoxeter group. All polytopes of rank ≤ 2 are regular. The most famous regular polyhedra are the five Platonic solids. The hemicube (shown) is also regular. 
Informally, for each rankk, this means that there is no way to distinguish anyk-face from any other - the faces must be identical, and must have identical neighbors, and so forth. For example, a cube is regular because all the faces are squares, each square's vertices are attached to three squares, and each of these squares is attached to identical arrangements of other faces, edges and vertices, and so on. This condition alone is sufficient to ensure that any regular abstract polytope has isomorphic regular (n−1)-faces and isomorphic regular vertex figures. This is a weaker condition than regularity for traditional polytopes, in that it refers to the (combinatorial) automorphism group, not the (geometric) symmetry group. For example, any abstract polygon is regular, since angles, edge-lengths, edge curvature, skewness etc. do not exist for abstract polytopes. There are several other weaker concepts, some not yet fully standardized, such assemi-regular,quasi-regular,uniform,chiral, andArchimedeanthat apply to polytopes that have some, but not all of their faces equivalent in each rank. A set of pointsVin a Euclidean space equipped with a surjection from the vertex set of an abstract polytopePsuch that automorphisms ofPinduceisometricpermutations ofVis called arealizationof an abstract polytope.[5][6]Two realizations are called congruent if the natural bijection between their sets of vertices is induced by an isometry of their ambient Euclidean spaces.[7][8] If an abstractn-polytope is realized inn-dimensional space, such that the geometrical arrangement does not break any rules for traditional polytopes (such as curved faces, or ridges of zero size), then the realization is said to befaithful. In general, only a restricted set of abstract polytopes of ranknmay be realized faithfully in any givenn-space. The characterization of this effect is an outstanding problem. 
For a regular abstract polytope, if the combinatorial automorphisms of the abstract polytope are realized by geometric symmetries then the geometric figure will be a regular polytope. The groupGof symmetries of a realizationVof an abstract polytopePis generated by two reflections, the product of which translates each vertex ofPto the next.[9][10]The product of the two reflections can be decomposed as a product of a non-zero translation, finitely many rotations, and possibly trivial reflection.[11][10] Generally, themoduli spaceof realizations of an abstract polytope is aconvex coneof infinite dimension.[12][13]The realization cone of the abstract polytope has uncountably infinitealgebraic dimensionand cannot beclosedin theEuclidean topology.[11][14] An important question in the theory of abstract polytopes is theamalgamation problem. This is a series of questions such as For example, ifKis the square, andLis the triangle, the answers to these questions are It is known that if the answer to the first question is 'Yes' for some regularKandL, then there is a unique polytope whose facets areKand whose vertex figures areL, called theuniversalpolytope with these facets and vertex figures, whichcoversall other such polytopes. That is, supposePis the universal polytope with facetsKand vertex figuresL. Then any other polytopeQwith these facets and vertex figures can be writtenQ=P/N, where Q=P/Nis called aquotientofP, and we sayPcoversQ. Given this fact, the search for polytopes with particular facets and vertex figures usually goes as follows: These two problems are, in general, very difficult. Returning to the example above, ifKis the square, andLis the triangle, the universal polytope {K,L} is the cube (also written {4,3}). The hemicube is the quotient {4,3}/N, whereNis a group of symmetries (automorphisms) of the cube with just two elements - the identity, and the symmetry that maps each corner (or edge or face) to its opposite. 
IfLis, instead, also a square, the universal polytope {K,L} (that is, {4,4}) is the tessellation of the Euclidean plane by squares. This tessellation has infinitely many quotients with square faces, four per vertex, some regular and some not. Except for the universal polytope itself, they all correspond to various ways to tessellate either atorusor an infinitely longcylinderwith squares. The11-cell, discovered independently byH. S. M. CoxeterandBranko Grünbaum, is an abstract 4-polytope. Its facets are hemi-icosahedra. Since its facets are, topologically, projective planes instead of spheres, the 11-cell is not a tessellation of any manifold in the usual sense. Instead, the 11-cell is alocallyprojective polytope. It is self-dual and universal: it is theonlypolytope with hemi-icosahedral facets and hemi-dodecahedral vertex figures. The57-cellis also self-dual, with hemi-dodecahedral facets. It was discovered by H. S. M. Coxeter shortly after the discovery of the 11-cell. Like the 11-cell, it is also universal, being the only polytope with hemi-dodecahedral facets and hemi-icosahedral vertex figures. On the other hand, there are many other polytopes with hemi-dodecahedral facets and Schläfli type {5,3,5}. The universal polytope with hemi-dodecahedral facets and icosahedral (not hemi-icosahedral) vertex figures is finite, but very large, with 10006920 facets and half as many vertices. The amalgamation problem has, historically, been pursued according tolocal topology. That is, rather than restrictingKandLto be particular polytopes, they are allowed to be any polytope with a giventopology, that is, any polytopetessellatinga givenmanifold. IfKandLarespherical(that is, tessellations of a topologicalsphere), thenPis calledlocally sphericaland corresponds itself to a tessellation of some manifold. For example, ifKandLare both squares (and so are topologically the same as circles),Pwill be a tessellation of the plane,torusorKlein bottleby squares. 
A tessellation of an n-dimensional manifold is actually a rank n + 1 polytope. This is in keeping with the common intuition that the Platonic solids are three dimensional, even though they can be regarded as tessellations of the two-dimensional surface of a ball. In general, an abstract polytope is called locally X if its facets and vertex figures are, topologically, either spheres or X, but not both spheres. The 11-cell and 57-cell are examples of rank 4 (that is, four-dimensional) locally projective polytopes, since their facets and vertex figures are tessellations of real projective planes. There is a weakness in this terminology, however: it does not allow an easy way to describe a polytope whose facets are tori and whose vertex figures are projective planes, for example, and it is worse still if different facets have different topologies, or no well-defined topology at all. However, much progress has been made on the complete classification of the locally toroidal regular polytopes.[15] Let Ψ be a flag of an abstract n-polytope, and let −1 < i < n. From the definition of an abstract polytope, it can be proven that there is a unique flag differing from Ψ by a rank i element, and the same otherwise. If we call this flag Ψ(i), then this defines a collection of maps on the polytope's flags, say φi. These maps are called exchange maps, since they swap pairs of flags: (Ψφi)φi = Ψ always. Some other properties of the exchange maps: The exchange maps and the flag action in particular can be used to prove that any abstract polytope is a quotient of some regular polytope. A polytope can also be represented by tabulating its incidences. The following incidence matrix is that of a triangle: The table shows a 1 wherever a face is a subface of another, or vice versa (so the table is symmetric about the diagonal); in fact, the table has redundant information, since it would suffice to show a 1 only when the row face ≤ the column face.
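The incidence matrix of the triangle can be built mechanically from its face lattice. The following sketch (the vertex labels A, B, C are arbitrary, not taken from the source table) represents each face as a set of vertices and marks a 1 wherever one face contains the other, confirming the symmetry noted above:

```python
from itertools import combinations

# Faces of the abstract triangle, by rank:
# rank -1: empty face; rank 0: vertices; rank 1: edges; rank 2: body.
faces = [frozenset()] \
    + [frozenset([v]) for v in "ABC"] \
    + [frozenset(e) for e in combinations("ABC", 2)] \
    + [frozenset("ABC")]

def incident(f, g):
    """Two faces are incident when one is a subface of the other."""
    return f <= g or g <= f

matrix = [[1 if incident(f, g) else 0 for g in faces] for f in faces]

# The matrix is symmetric about the diagonal, as the text observes.
assert all(matrix[i][j] == matrix[j][i]
           for i in range(len(faces)) for j in range(len(faces)))
```

The empty face and the body are incident with everything, so their rows are all ones; a vertex row has five ones (the empty face, itself, its two edges, and the body).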
Since both the body and the empty set are incident with all other elements, the first row and column as well as the last row and column are trivial and can conveniently be omitted. Further information is gained by counting each occurrence. This numerative usage enables a symmetry grouping, as in the Hasse diagram of the square pyramid: If vertices B, C, D, and E are considered symmetrically equivalent within the abstract polytope, then edges f, g, h, and j will be grouped together, as will edges k, l, m, and n, and finally also the triangles P, Q, R, and S. Thus the corresponding incidence matrix of this abstract polytope may be shown as: In this accumulated incidence matrix representation the diagonal entries represent the total counts of each element type. Elements of different type of the same rank are clearly never incident, so the value will always be 0; however, to help distinguish such relationships, an asterisk (*) is used instead of 0. The sub-diagonal entries of each row represent the incidence counts of the relevant sub-elements, while the super-diagonal entries represent the respective element counts of the vertex figure, edge figure, and so on. Already this simple square pyramid shows that the symmetry-accumulated incidence matrices are no longer symmetrical. But there is still a simple entity relation (besides the generalised Euler formulae for the diagonal, the sub-diagonal entities of each row, and the super-diagonal elements of each row, at least whenever no holes or stars etc. are considered), as for any such incidence matrixI=(Iij){\displaystyle I=(I_{ij})}the following holds: Iii⋅Iij=Iji⋅Ijj(i<j).{\displaystyle I_{ii}\cdot I_{ij}=I_{ji}\cdot I_{jj}\ \ (i<j).} In the 1960s Branko Grünbaum issued a call to the geometric community to consider generalizations of the concept of regular polytopes that he called polystromata. He developed a theory of polystromata, showing examples of new objects including the 11-cell.
The 11-cell is a self-dual 4-polytope whose facets are not icosahedra, but "hemi-icosahedra", that is, the shape one gets if one considers opposite faces of an icosahedron to be actually the same face (Grünbaum, 1977). A few years after Grünbaum's discovery of the 11-cell, H. S. M. Coxeter discovered a similar polytope, the 57-cell (Coxeter 1982, 1984), and then independently rediscovered the 11-cell. With the earlier work by Branko Grünbaum, H. S. M. Coxeter and Jacques Tits having laid the groundwork, the basic theory of the combinatorial structures now known as abstract polytopes was first described by Egon Schulte in his 1980 PhD dissertation. In it he defined "regular incidence complexes" and "regular incidence polytopes". Subsequently, he and Peter McMullen developed the basics of the theory in a series of research articles that were later collected into a book. Numerous other researchers have since made their own contributions, and the early pioneers (including Grünbaum) have also accepted Schulte's definition as the "correct" one. Since then, research in the theory of abstract polytopes has focused mostly on regular polytopes, that is, those whose automorphism groups act transitively on the set of flags of the polytope.
https://en.wikipedia.org/wiki/Abstract_polytope
The causal sets program is an approach to quantum gravity. Its founding principles are that spacetime is fundamentally discrete (a collection of discrete spacetime points, called the elements of the causal set) and that spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between spacetime events. For some decades after the formulation of general relativity, the attitude towards Lorentzian geometry was mostly dedicated to understanding its physical implications and not concerned with theoretical issues.[1] However, early attempts to use causality as a starting point were provided by Hermann Weyl and Hendrik Lorentz.[2] Alfred Robb, in two books in 1914 and 1936, suggested an axiomatic framework in which causal precedence played a critical role.[1] The first explicit proposal of quantising the causal structure of spacetime is attributed by Sumati Surya[1] to E. H. Kronheimer and Roger Penrose,[3] who invented causal spaces in order to "admit structures which can be very different from a manifold". Causal spaces are defined axiomatically, by considering not only causal precedence, but also chronological precedence. The program of causal sets is based on a theorem[4] by David Malament, extending former results by Christopher Zeeman[5] and by Stephen Hawking, A. R. King and P. J. McCarthy.[6][1] Malament's theorem states that if there is a bijective map between two past and future distinguishing spacetimes that preserves their causal structure, then the map is a conformal isomorphism. The conformal factor that is left undetermined is related to the volume of regions in the spacetime. This volume factor can be recovered by specifying a volume element for each spacetime point. The volume of a spacetime region could then be found by counting the number of points in that region. The causal sets program was initiated by Rafael Sorkin, who continues to be its main proponent.
He has coined the slogan "Order + Number = Geometry" to characterize the above argument. The program provides a theory in which spacetime is fundamentally discrete while retaining local Lorentz invariance. A causal set (or causet) is a setC{\displaystyle C}with a partial order relation⪯{\displaystyle \preceq }that is reflexive, antisymmetric, transitive, and locally finite (every order interval contains finitely many elements). We'll writex≺y{\displaystyle x\prec y}ifx⪯y{\displaystyle x\preceq y}andx≠y{\displaystyle x\neq y}. The setC{\displaystyle C}represents the set of spacetime events and the order relation⪯{\displaystyle \preceq }represents the causal relationship between events (see causal structure for the analogous idea in a Lorentzian manifold). Although this definition uses the reflexive convention, we could have chosen the irreflexive convention, in which the order relation is irreflexive and asymmetric. The causal relation of a Lorentzian manifold (without closed causal curves) satisfies the first three conditions. It is the local finiteness condition that introduces spacetime discreteness. Given a causal set we may ask whether it can be embedded into a Lorentzian manifold. An embedding would be a map taking elements of the causal set into points in the manifold such that the order relation of the causal set matches the causal ordering of the manifold. A further criterion is needed, however, before the embedding is suitable. If, on average, the number of causal set elements mapped into a region of the manifold is proportional to the volume of the region, then the embedding is said to be faithful. In this case we can consider the causal set to be 'manifold-like'. A central conjecture of the causal set program, called the Hauptvermutung ('fundamental conjecture'), is that the same causal set cannot be faithfully embedded into two spacetimes that are not similar on large scales. It is difficult to define this conjecture precisely because it is difficult to decide when two spacetimes are 'similar on large scales'.
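A toy example can make the definition concrete. The following sketch builds a four-element "diamond" causet (an assumption chosen for illustration) and checks the standard four conditions of a causal set directly, using the reflexive convention:

```python
# A toy causal set: the 4-element "diamond"  a ≺ b, a ≺ c, b ≺ d, c ≺ d.
# The relation is stored reflexively, as in the definition above.
C = {"a", "b", "c", "d"}
prec = {(x, x) for x in C} | {("a", "b"), ("a", "c"), ("a", "d"),
                              ("b", "d"), ("c", "d")}

# Reflexive: x ⪯ x for all x.
assert all((x, x) in prec for x in C)
# Antisymmetric: x ⪯ y and y ⪯ x imply x = y.
assert all(x == y for (x, y) in prec if (y, x) in prec)
# Transitive: x ⪯ y and y ⪯ z imply x ⪯ z.
assert all((x, z) in prec
           for (x, y) in prec for (y2, z) in prec if y == y2)

# Local finiteness is automatic for a finite set: every order interval
# [x, y] = {z : x ⪯ z ⪯ y} is finite.
def interval(x, y):
    return {z for z in C if (x, z) in prec and (z, y) in prec}

assert interval("a", "d") == {"a", "b", "c", "d"}
```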
Modelling spacetime as a causal set would require us to restrict attention to those causal sets that are 'manifold-like'. Given a causal set this is a difficult property to determine. The difficulty of determining whether a causal set can be embedded into a manifold can be approached from the other direction. We can create a causal set by sprinkling points into a Lorentzian manifold. By sprinkling points in proportion to the volume of the spacetime regions and using the causal order relations in the manifold to induce order relations between the sprinkled points, we can produce a causal set that (by construction) can be faithfully embedded into the manifold. To maintain Lorentz invariance this sprinkling of points must be done randomly using aPoisson process. Thus the probability of sprinklingn{\displaystyle n}points into a region of volumeV{\displaystyle V}is P(n)=(ρV)ne−ρVn!{\displaystyle P(n)={\frac {(\rho V)^{n}e^{-\rho V}}{n!}}} whereρ{\displaystyle \rho }is the density of the sprinkling. Sprinkling points as a regular lattice would not keep the number of points proportional to the region volume. Some geometrical constructions in manifolds carry over to causal sets. When defining these we must remember to rely only on the causal set itself, not on any background spacetime into which it might be embedded. For an overview of these constructions, see.[7] Alinkin a causal set is a pair of elementsx,y∈C{\displaystyle x,y\in C}such thatx≺y{\displaystyle x\prec y}but with noz∈C{\displaystyle z\in C}such thatx≺z≺y{\displaystyle x\prec z\prec y}. Achainis a sequence of elementsx0,x1,…,xn{\displaystyle x_{0},x_{1},\ldots ,x_{n}}such thatxi≺xi+1{\displaystyle x_{i}\prec x_{i+1}}fori=0,…,n−1{\displaystyle i=0,\ldots ,n-1}. The length of a chain isn{\displaystyle n}. If everyxi,xi+1{\displaystyle x_{i},x_{i+1}}in the chain form a link, then the chain is called apath. 
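The Poisson sprinkling construction described above can be sketched in a few lines. The box region, density, and random seed below are arbitrary choices for illustration; the causal order of 1+1-dimensional Minkowski spacetime is induced by the light-cone condition:

```python
import random

# Sketch of a Poisson sprinkling into a box [0, T] x [0, X] of
# 1+1-dimensional Minkowski spacetime.
rho, T, X = 20.0, 1.0, 1.0
random.seed(0)

# Draw n ~ Poisson(rho * V) by counting unit-rate exponential arrivals.
mean = rho * T * X
n, acc = 0, random.expovariate(1.0)
while acc < mean:
    n += 1
    acc += random.expovariate(1.0)

# Sprinkle the points uniformly in the box.
points = [(random.uniform(0, T), random.uniform(0, X)) for _ in range(n)]

# Induce the order: p precedes q when q lies in the future light cone of p.
def precedes(p, q):
    dt, dx = q[0] - p[0], q[1] - p[1]
    return dt > 0 and dt * dt - dx * dx > 0

relation = {(p, q) for p in points for q in points if precedes(p, q)}

# By construction the induced relation is irreflexive and transitive.
assert all(not precedes(p, p) for p in points)
```

A regular lattice would fail this construction twice over: the point count would not track volume under boosts, and the preferred axes would break Lorentz invariance, which is why the sprinkling must be random.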
We can use this to define the notion of ageodesicbetween two causal set elements, provided they are order comparable, that is, causally connected (physically, this means they are time-like). A geodesic between two elementsx⪯y∈C{\displaystyle x\preceq y\in C}is a chain consisting only of links such that In general there can be more than one geodesic between two comparable elements. Myrheim[8]first suggested that the length of such a geodesic should be directly proportional to the proper time along a timelike geodesic joining the two spacetime points. Tests of this conjecture have been made using causal sets generated from sprinklings into flat spacetimes. The proportionality has been shown to hold and is conjectured to hold for sprinklings in curved spacetimes too. Much work has been done in estimating the manifolddimensionof a causal set. This involves algorithms using the causal set aiming to give the dimension of the manifold into which it can be faithfully embedded. The algorithms developed so far are based on finding the dimension of aMinkowski spacetimeinto which the causal set can be faithfully embedded. This approach relies on estimating the number ofk{\displaystyle k}-length chains present in a sprinkling intod{\displaystyle d}-dimensional Minkowski spacetime. Counting the number ofk{\displaystyle k}-length chains in the causal set then allows an estimate ford{\displaystyle d}to be made. This approach relies on the relationship between the proper time between two points in Minkowski spacetime and the volume of thespacetime intervalbetween them. By computing the maximal chain length (to estimate the proper time) between two pointsx{\displaystyle x}andy{\displaystyle y}and counting the number of elementsz{\displaystyle z}such thatx≺z≺y{\displaystyle x\prec z\prec y}(to estimate the volume of the spacetime interval) the dimension of the spacetime can be calculated. 
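The link, chain, and geodesic definitions can be exercised on a small hand-built causet (the six-element example below is made up for illustration). The longest path of links between two related elements gives the geodesic length:

```python
from functools import lru_cache

# Cover (link) relation of a made-up 6-element causal set:
# a ≺ b, c;  b ≺ d;  c ≺ d, e;  d ≺ f;  e ≺ f.
links = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"],
         "d": ["f"], "e": ["f"], "f": []}

@lru_cache(maxsize=None)
def longest(x, y):
    """Length of the longest chain of links from x to y, or -1 if
    the two elements are not causally related."""
    if x == y:
        return 0
    best = max((longest(z, y) for z in links[x]), default=-1)
    return best + 1 if best >= 0 else -1

# Several geodesics (a-b-d-f, a-c-d-f, a-c-e-f) share the maximal length 3.
assert longest("a", "f") == 3
# b and e are unrelated, so there is no chain between them.
assert longest("b", "e") == -1
```

Under Myrheim's conjecture, this maximal link-path length is the discrete analogue of proper time along a timelike geodesic.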
These estimators should give the correct dimension for causal sets generated by high-density sprinklings intod{\displaystyle d}-dimensional Minkowski spacetime. Tests in conformally-flat spacetimes[9]have shown these two methods to be accurate. An ongoing task is to develop the correct dynamics for causal sets. These would provide a set of rules that determine which causal sets correspond to physically realistic spacetimes. The most popular approach to developing causal set dynamics is based on thesum-over-historiesversion ofquantum mechanics. This approach would perform a sum-over-causal sets by growing a causal set one element at a time. Elements would be added according to quantum mechanical rules and interference would ensure a large manifold-like spacetime would dominate the contributions. The best model for dynamics at the moment is a classical model in which elements are added according to probabilities. This model, due to David Rideout andRafael Sorkin, is known as classical sequential growth (CSG) dynamics.[10]The classical sequential growth model is a way to generate causal sets by adding new elements one after another. Rules for how new elements are added are specified and, depending on the parameters in the model, different causal sets result. In analogy to thepath integral formulationof quantum mechanics, one approach to developing a quantum dynamics for causal sets has been to apply anaction principlein the sum-over-causal sets approach. Sorkin has proposed a discrete analogue for thed'Alembertian, which can in turn be used to define theRicci curvature scalarand thereby the Benincasa–Dowker action on a causal set.[11][12]Monte-Carlo simulationshave provided evidence for a continuum phase in 2D using the Benincasa–Dowker action.[13]
https://en.wikipedia.org/wiki/Causal_sets
In mathematics, a cyclic order is a way to arrange a set of objects in a circle.[nb] Unlike most structures in order theory, a cyclic order is not modeled as a binary relation, such as "a < b". One does not say that east is "more clockwise" than west. Instead, a cyclic order is defined as a ternary relation [a, b, c], meaning "after a, one reaches b before c". For example, [June, October, February], but not [June, February, October]. A ternary relation is called a cyclic order if it is cyclic, asymmetric, transitive, and connected. Dropping the "connected" requirement results in a partial cyclic order. A set with a cyclic order is called a cyclically ordered set or simply a cycle.[nb] Some familiar cycles are discrete, having only a finite number of elements: there are seven days of the week, four cardinal directions, twelve notes in the chromatic scale, and three plays in rock-paper-scissors. In a finite cycle, each element has a "next element" and a "previous element". There are also cyclic orders with infinitely many elements, such as the oriented unit circle in the plane. Cyclic orders are closely related to the more familiar linear orders, which arrange objects in a line. Any linear order can be bent into a circle, and any cyclic order can be cut at a point, resulting in a line. These operations, along with the related constructions of intervals and covering maps, mean that questions about cyclic orders can often be transformed into questions about linear orders. Cycles have more symmetries than linear orders, and they often naturally occur as residues of linear structures, as in the finite cyclic groups or the real projective line. A cyclic order on a set X with n elements is like an arrangement of X on a clock face, for an n-hour clock. Each element x in X has a "next element" and a "previous element", and taking either successors or predecessors cycles exactly once through the elements as x(1), x(2), ..., x(n). There are a few equivalent ways to state this definition.
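The month example above can be made concrete. The sketch below encodes the ternary relation via positions in calendar order (an implementation choice, not part of the definition) and checks the four axioms exhaustively:

```python
from itertools import permutations

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
pos = {m: i for i, m in enumerate(months)}

def ternary(a, b, c):
    """[a, b, c]: after a, one reaches b before c."""
    i, j, k = pos[a], pos[b], pos[c]
    return (i < j < k) or (j < k < i) or (k < i < j)

# The example from the text.
assert ternary("Jun", "Oct", "Feb")
assert not ternary("Jun", "Feb", "Oct")

# Cyclicity, asymmetry, and connectedness, on all distinct triples.
for a, b, c in permutations(months, 3):
    if ternary(a, b, c):
        assert ternary(b, c, a)          # cyclic
        assert not ternary(c, b, a)      # asymmetric
    else:
        assert ternary(c, b, a)          # connected

# Transitivity: [a, b, c] and [a, c, d] imply [a, b, d].
for a, b, c, d in permutations(months, 4):
    if ternary(a, b, c) and ternary(a, c, d):
        assert ternary(a, b, d)
```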
A cyclic order on X is the same as a permutation that makes all of X into a single cycle, which is a special type of permutation: a circular permutation. Alternatively, a cycle with n elements is also a Zn-torsor: a set with a free transitive action by a finite cyclic group.[1] Another formulation is to make X into the standard directed cycle graph on n vertices, by some matching of elements to vertices. It can be instinctive to use cyclic orders for symmetric functions, for example as in xy + yz + zx, where writing the final monomial as xz would distract from the pattern. A substantial use of cyclic orders is in the determination of the conjugacy classes of free groups. Two elements g and h of the free group F on a set Y are conjugate if and only if, when they are written as products of elements y and y−1 with y in Y, and then those products are put in cyclic order, the cyclic orders are equivalent under the rewriting rules that allow one to remove or add adjacent y and y−1. A cyclic order on a set X can be determined by a linear order on X, but not in a unique way. Choosing a linear order is equivalent to choosing a first element, so there are exactly n linear orders that induce a given cyclic order. Since there are n! possible linear orders (as in permutations), there are (n − 1)! possible cyclic orders (as in circular permutations). An infinite set can also be ordered cyclically. Important examples of infinite cycles include the unit circle, S1, and the rational numbers, Q. The basic idea is the same: we arrange elements of the set around a circle. However, in the infinite case we cannot rely upon an immediate successor relation, because points may not have successors. For example, given a point on the unit circle, there is no "next point". Nor can we rely upon a binary relation to determine which of two points comes "first". Traveling clockwise on a circle, neither east nor west comes first, but each follows the other.
Instead, we use a ternary relation denoting that elementsa,b,coccur after each other (not necessarily immediately) as we go around the circle. For example, in clockwise order, [east, south, west]. Bycurryingthe arguments of the ternary relation[a,b,c], one can think of a cyclic order as a one-parameter family of binary order relations, calledcuts, or as a two-parameter family of subsets ofK, calledintervals. The general definition is as follows: a cyclic order on a setXis a relationC⊂X3, written[a,b,c], that satisfies the following axioms:[nb] The axioms are named by analogy with theasymmetry,transitivity, andconnectednessaxioms for a binary relation, which together define astrict linear order.Edward Huntington(1916,1924) considered other possible lists of axioms, including one list that was meant to emphasize the similarity between a cyclic order and abetweenness relation. A ternary relation that satisfies the first three axioms, but not necessarily the axiom of totality, is apartial cyclic order. Given a linear order<on a setX, the cyclic order onXinduced by<is defined as follows:[2] Two linear orders induce the same cyclic order if they can be transformed into each other by a cyclic rearrangement, as incutting a deck of cards.[3]One may define a cyclic order relation as a ternary relation that is induced by a strict linear order as above.[4] Cutting a single point out of a cyclic order leaves a linear order behind. More precisely, given a cyclically ordered set(K,[⋅,⋅,⋅]){\displaystyle (K,[\cdot ,\cdot ,\cdot ])}, each elementa∈K{\displaystyle a\in K}defines a natural linear order<a{\displaystyle <_{a}}on the remainder of the set,K∖{a}{\displaystyle K\setminus \{a\}}, by the following rule:[5] Moreover,<a{\displaystyle <_{a}}can be extended by adjoininga{\displaystyle a}as a least element; the resulting linear order onK{\displaystyle K}is called the principal cut with least elementa{\displaystyle a}. 
Likewise, adjoininga{\displaystyle a}as a greatest element results in a cut<a{\displaystyle <^{a}}.[6] Given two elementsa≠b∈K{\displaystyle a\neq b\in K}, theopen intervalfroma{\displaystyle a}tob{\displaystyle b}, written(a,b){\displaystyle (a,b)}, is the set of allx∈K{\displaystyle x\in K}such that[a,x,b]{\displaystyle [a,x,b]}. The system of open intervals completely defines the cyclic order and can be used as an alternate definition of a cyclic order relation.[7] An interval(a,b){\displaystyle (a,b)}has a natural linear order given by<a{\displaystyle <_{a}}. One can define half-closed and closed intervals[a,b){\displaystyle [a,b)},(a,b]{\displaystyle (a,b]}, and[a,b]{\displaystyle [a,b]}by adjoininga{\displaystyle a}as aleast elementand/orb{\displaystyle b}as agreatest element.[8]As a special case, the open interval(a,a){\displaystyle (a,a)}is defined as the cutK∖a{\displaystyle K\setminus a}. More generally, a proper subsetS{\displaystyle S}ofK{\displaystyle K}is calledconvexif it contains an interval between every pair of points: fora≠b∈S{\displaystyle a\neq b\in S}, either(a,b){\displaystyle (a,b)}or(b,a){\displaystyle (b,a)}must also be inS{\displaystyle S}.[9]A convex set is linearly ordered by the cut<x{\displaystyle <_{x}}for anyx{\displaystyle x}not in the set; this ordering is independent of the choice ofx{\displaystyle x}. As a circle has aclockwiseorder and a counterclockwise order, any set with a cyclic order has twosenses. Abijectionof the set that preserves the order is called anordered correspondence. If the sense is maintained as before, it is adirect correspondence, otherwise it is called anopposite correspondence.[10]Coxeter uses aseparation relationto describe cyclic order, and this relation is strong enough to distinguish the two senses of cyclic order. Theautomorphismsof a cyclically ordered set may be identified with C2, the two-element group, of direct and opposite correspondences. 
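The cut and interval constructions can be sketched on the seven-day cycle. The modular-arithmetic encoding below is an implementation convenience, not part of the definitions:

```python
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
pos = {d: i for i, d in enumerate(days)}

def cyclic(a, b, c):
    """[a, b, c]: going around the week, b lies strictly between a and c."""
    i, j, k = pos[a], pos[b], pos[c]
    return (i < j < k) or (j < k < i) or (k < i < j)

def cut(a):
    """The linear order <_a on the days other than a: x <_a y iff [a, x, y].
    Sorting by the number of steps after a realizes this order."""
    rest = [d for d in days if d != a]
    return sorted(rest, key=lambda d: (pos[d] - pos[a]) % 7)

def interval(a, b):
    """Open interval (a, b): all x with [a, x, b]."""
    return [x for x in days if x not in (a, b) and cyclic(a, x, b)]

# Cutting the week at Saturday linearly orders the rest Sun < Mon < ... < Fri.
assert cut("Sat") == ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri"]
# The interval from Friday to Monday is the weekend.
assert interval("Fri", "Mon") == ["Sat", "Sun"]
```

Note that the sorted cut agrees with the ternary relation: for any two entries x before y in cut(a), [a, x, y] holds.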
The "cyclic order = arranging in a circle" idea works because anysubsetof a cycle is itself a cycle. In order to use this idea to impose cyclic orders on sets that are not actually subsets of the unit circle in the plane, it is necessary to considerfunctionsbetween sets. A function between two cyclically ordered sets,f:X→Y, is called amonotonic functionor ahomomorphismif it pulls back the ordering onY: whenever[f(a),f(b),f(c)], one has[a,b,c]. Equivalently,fis monotone if whenever[a,b,c]andf(a),f(b), andf(c)are all distinct, then[f(a),f(b),f(c)]. A typical example of a monotone function is the following function on the cycle with 6 elements: A function is called anembeddingif it is both monotone andinjective.[nb]Equivalently, an embedding is a function that pushes forward the ordering onX: whenever[a,b,c], one has[f(a),f(b),f(c)]. As an important example, ifXis a subset of a cyclically ordered setY, andXis given its natural ordering, then theinclusion mapi:X→Yis an embedding. Generally, an injective functionffrom an unordered setXto a cycleYinduces a unique cyclic order onXthat makesfan embedding. A cyclic order on a finite setXcan be determined by an injection into the unit circle,X→S1. There are many possible functions that induce the same cyclic order—in fact, infinitely many. In order to quantify this redundancy, it takes a more complex combinatorial object than a simple number. Examining theconfiguration spaceof all such maps leads to the definition of an(n− 1)-dimensionalpolytopeknown as acyclohedron. Cyclohedra were first applied to the study ofknot invariants;[11]they have more recently been applied to the experimental detection ofperiodically expressedgenesin the study ofbiological clocks.[12] The category of homomorphisms of the standard finite cycles is called thecyclic category; it may be used to constructAlain Connes'cyclic homology. One may define a degree of a function between cycles, analogous to thedegree of a continuous mapping. 
For example, the natural map from thecircle of fifthsto thechromatic circleis a map of degree 7. One may also define arotation number. The set of all cuts is cyclically ordered by the following relation:[<1, <2, <3]if and only if there existx,y,zsuch that:[17] A certain subset of this cycle of cuts is theDedekind completionof the original cycle. Starting from a cyclically ordered setK, one may form a linear order by unrolling it along an infinite line. This captures the intuitive notion of keeping track of how many times one goes around the circle. Formally, one defines a linear order on theCartesian productZ×K, whereZis the set ofintegers, by fixing an elementaand requiring that for alli:[18] For example, the months January 2025, May 2025, September 2025, and January 2026 occur in that order. This ordering ofZ×Kis called theuniversal coverofK.[nb]Itsorder typeis independent of the choice ofa, but the notation is not, since the integer coordinate "rolls over" ata. For example, although the cyclic order ofpitch classesis compatible with the A-to-G alphabetical order, C is chosen to be the first note in each octave, so innote-octavenotation, B3is followed by C4. The inverse construction starts with a linearly ordered set and coils it up into a cyclically ordered set. Given a linearly ordered setLand an order-preservingbijectionT:L→Lwith unbounded orbits, theorbit spaceL/Tis cyclically ordered by the requirement:[7][nb] In particular, one can recoverKby definingT(xi) =xi+1onZ×K. There are alson-fold coverings for finiten; in this case, one cyclically ordered set covers another cyclically ordered set. For example, the24-hour clockis a double cover of the12-hour clock. 
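The unrolling construction can be sketched with the month example from the text. A (year, month) pair is a point of Z × K, and fixing January as the element where the integer coordinate rolls over gives the lexicographic comparison (the cut point is an arbitrary choice):

```python
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
pos = {m: i for i, m in enumerate(months)}

def before(p, q):
    """Linear order on Z x K: compare whole turns first, then the
    position within the cycle, cutting each cycle at January."""
    (y1, m1), (y2, m2) = p, q
    return (y1, pos[m1]) < (y2, pos[m2])

# The example from the text: Jan 2025 < May 2025 < Sep 2025 < Jan 2026.
seq = [(2025, "Jan"), (2025, "May"), (2025, "Sep"), (2026, "Jan")]
assert all(before(seq[i], seq[i + 1]) for i in range(3))
```

Choosing a different cut point (say, C within each octave, as in note-octave notation) changes which pairs share an integer coordinate but yields the same order type.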
In geometry, thepencilofraysemanating from a point in the oriented plane is a double cover of the pencil of unorientedlinespassing through the same point.[19]These covering maps can be characterized by lifting them to the universal cover.[7] Given a cyclically ordered set(K, [ ])and a linearly ordered set(L, <), the (total) lexicographic product is a cyclic order on theproduct setK×L, defined by[(a,x), (b,y), (c,z)]if one of the following holds:[20] The lexicographic productK×Lglobally looks likeKand locally looks likeL; it can be thought of asKcopies ofL. This construction is sometimes used to characterize cyclically ordered groups.[21] One can also glue together different linearly ordered sets to form a circularly ordered set. For example, given two linearly ordered setsL1andL2, one may form a circle by joining them together at positive and negative infinity. A circular order on the disjoint unionL1∪L2∪ {−∞, ∞} is defined by∞ <L1< −∞ <L2< ∞, where the induced ordering onL1is the opposite of its original ordering. For example, the set of alllongitudesis circularly ordered by joining all points west and all points east, along with theprime meridianand the180th meridian.Kuhlmann, Marshall & Osiak (2011)use this construction while characterizing the spaces of orderings andreal placesof doubleformal Laurent seriesover areal closed field.[22] The open intervals form abasefor a naturaltopology, the cyclicorder topology. Theopen setsin this topology are exactly those sets which are open ineverycompatible linear order.[23]To illustrate the difference, in the set [0, 1), the subset [0, 1/2) is a neighborhood of 0 in the linear order but not in the cyclic order. 
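The lexicographic product can be sketched for K = the 12-hour cycle and L = minutes, assuming the usual case analysis (the cyclic order on K decides when the three first coordinates are distinct; ties are broken by the linear order on L):

```python
def cyclic_K(a, b, c):
    """Standard cyclic order on Z/12 (the hour cycle)."""
    i, j, k = a % 12, b % 12, c % 12
    return (i < j < k) or (j < k < i) or (k < i < j)

def product_cyclic(p, q, r):
    """[(a,x), (b,y), (c,z)] in the lexicographic product K x L."""
    (a, x), (b, y), (c, z) = p, q, r
    if a != b and b != c and c != a:
        return cyclic_K(a, b, c)     # K decides distinct first coordinates
    if a == b != c:
        return x < y                 # ties broken inside L
    if b == c != a:
        return y < z
    if c == a != b:
        return z < x
    # All three hours equal: the cyclic order induced by < on L.
    return (x < y < z) or (y < z < x) or (z < x < y)

# Globally like K, locally like L: 1:30, 1:45, 3:00 occur in cyclic order.
assert product_cyclic((1, 30), (1, 45), (3, 0))
assert product_cyclic((11, 50), (0, 10), (5, 0))
```

Times on a clock face behave exactly this way: the hour cycle carries the global circular shape, and the minutes within one hour are linearly ordered.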
Interesting examples of cyclically ordered spaces include the conformal boundary of asimply connectedLorentz surface[24]and theleaf spaceof a liftedessential laminationof certain 3-manifolds.[25]Discrete dynamical systemson cyclically ordered spaces have also been studied.[26] The interval topology forgets the original orientation of the cyclic order. This orientation can be restored by enriching the intervals with their induced linear orders; then one has a set covered with an atlas of linear orders that are compatible where they overlap. In other words, a cyclically ordered set can be thought of as a locally linearly ordered space: an object like amanifold, but with order relations instead of coordinate charts. This viewpoint makes it easier to be precise about such concepts as covering maps. The generalization to a locally partially ordered space is studied inRoll (1993); see alsoDirected topology. Acyclically ordered groupis a set with both agroup structureand a cyclic order, such that left and right multiplication both preserve the cyclic order. Cyclically ordered groups were first studied in depth byLadislav Riegerin 1947.[27]They are a generalization ofcyclic groups: theinfinite cyclic groupZand thefinite cyclic groupsZ/n. Since a linear order induces a cyclic order, cyclically ordered groups are also a generalization oflinearly ordered groups: therational numbersQ, the real numbersR, and so on. Some of the most important cyclically ordered groups fall into neither previous category: thecircle groupTand its subgroups, such as thesubgroup of rational points. Every cyclically ordered group can be expressed as a quotientL/Z, whereLis a linearly ordered group andZis a cyclic cofinal subgroup ofL. Every cyclically ordered group can also be expressed as a subgroup of a productT×L, whereLis a linearly ordered group. 
If a cyclically ordered group is Archimedean or compact, it can be embedded inTitself.[28] Apartial cyclic orderis a ternary relation that generalizes a (total) cyclic order in the same way that apartial ordergeneralizes atotal order. It is cyclic, asymmetric, and transitive, but it need not be total. Anorder varietyis a partial cyclic order that satisfies an additionalspreadingaxiom.[29]Replacing the asymmetry axiom with a complementary version results in the definition of aco-cyclic order. Appropriately total co-cyclic orders are related to cyclic orders in the same way that≤is related to<. A cyclic order obeys a relatively strong 4-point transitivity axiom. One structure that weakens this axiom is aCC system: a ternary relation that is cyclic, asymmetric, and total, but generally not transitive. Instead, a CC system must obey a 5-point transitivity axiom and a newinteriorityaxiom, which constrains the 4-point configurations that violate cyclic transitivity.[30] A cyclic order is required to be symmetric under cyclic permutation,[a,b,c] ⇒ [b,c,a], and asymmetric under reversal:[a,b,c] ⇒ ¬[c,b,a]. A ternary relation that isasymmetricunder cyclic permutation andsymmetricunder reversal, together with appropriate versions of the transitivity and totality axioms, is called abetweenness relation. Aquaternary relationcalledpoint-pair separationdistinguishes the two intervals that a point-pair determines on a circle. The relationship between a circular order and a point-pair separation is analogous to the relationship between a linear order and a betweenness relation.[31] Evans, Macpherson & Ivanov (1997)provide a model-theoretic description of the covering maps of cycles. 
Tararin (2001,2002) studies groups of automorphisms of cycles with varioustransitivityproperties.Giraudet & Holland (2002)characterize cycles whose full automorphism groups actfreely and transitively.Campero-Arena & Truss (2009)characterizecountablecoloredcycles whose automorphism groups act transitively.Truss (2009)studies the automorphism group of the unique (up to isomorphism) countable dense cycle. Kulpeshov & Macpherson (2005)studyminimalityconditions on circularly orderedstructures, i.e. models of first-order languages that include a cyclic order relation. These conditions are analogues ofo-minimalityandweak o-minimalityfor the case of linearly ordered structures. Kulpeshov (2006,2009) continues with some characterizations ofω-categoricalstructures.[32] Hans Freudenthalhas emphasized the role of cyclic orders in cognitive development, as a contrast toJean Piagetwho addresses only linear orders. Some experiments have been performed to investigate the mental representations of cyclically ordered sets, such as the months of the year. ^cyclic orderThe relation may be called acyclic order(Huntington 1916, p. 630), acircular order(Huntington 1916, p. 630), acyclic ordering(Kok 1973, p. 6), or acircular ordering(Mosher 1996, p. 109). Some authors call such an ordering atotal cyclic order(Isli & Cohn 1998, p. 643), acomplete cyclic order(Novák 1982, p. 462), alinear cyclic order(Novák 1984, p. 323), or anl-cyclic orderor ℓ-cyclic order(Černák 2001, p. 32), to distinguish from the broader class ofpartial cyclic orders, which they call simplycyclic orders. Finally, some authors may takecyclic orderto mean an unoriented quaternaryseparation relation(Bowditch 1998, p. 155). ^cycleA set with a cyclic order may be called acycle(Novák 1982, p. 462) or acircle(Giraudet & Holland 2002, p. 1). The above variations also appear in adjective form:cyclically ordered set(cyklicky uspořádané množiny,Čech 1936, p. 
23),circularly ordered set,total cyclically ordered set,complete cyclically ordered set,linearly cyclically ordered set,l-cyclically ordered set, ℓ-cyclically ordered set. All authors agree that a cycle is totally ordered. ^ternary relationThere are a few different symbols in use for a cyclic relation.Huntington (1916, p. 630) uses concatenation:ABC.Čech (1936, p. 23) and (Novák 1982, p. 462) use ordered triples and the set membership symbol:(a,b,c) ∈C.Megiddo (1976, p. 274) uses concatenation and set membership:abc∈C, understandingabcas a cyclically ordered triple. The literature on groups, such asŚwierczkowski (1959a, p. 162) andČernák & Jakubík (1987, p. 157), tend to use square brackets:[a,b,c].Giraudet & Holland (2002, p. 1) use round parentheses:(a,b,c), reserving square brackets for a betweenness relation.Campero-Arena & Truss (2009, p. 1) use a function-style notation:R(a,b,c).Rieger (1947), cited afterPecinová 2008, p. 82) uses a "less-than" symbol as a delimiter:<x,y,z<. Some authors use infix notation:a<b<c, with the understanding that this does not carry the usual meaning ofa<bandb<cfor some binary relation < (Černy 1978, p. 262).Weinstein (1996, p. 81) emphasizes the cyclic nature by repeating an element:p↪r↪q↪p. ^embeddingNovák (1984, p. 332) calls an embedding an "isomorphic embedding". ^rollIn this case,Giraudet & Holland (2002, p. 2) write thatKisL"rolled up". ^orbit spaceThe mapTis calledarchimedeanbyBowditch (2004, p. 33),coterminalbyCampero-Arena & Truss (2009, p. 582), and atranslationbyMcMullen (2009, p. 10). ^universal coverMcMullen (2009, p. 10) callsZ×Kthe "universal cover" ofK.Giraudet & Holland (2002, p. 3) write thatKisZ×K"coiled".Freudenthal & Bauer (1974, p. 10) callZ×Kthe "∞-times covering" ofK. Often this construction is written as the anti-lexicographic order onK×Z.
https://en.wikipedia.org/wiki/Cyclic_order
Inorder theory, a field ofmathematics, anincidence algebrais anassociative algebra, defined for everylocally finite partially ordered setandcommutative ringwith unity.Subalgebrascalledreduced incidence algebrasgive a natural construction of various types ofgenerating functionsused incombinatoricsandnumber theory. Alocally finiteposetis one in which everyclosed interval isfinite. The members of the incidence algebra are thefunctionsfassigning to eachnonemptyinterval [a, b] a scalarf(a,b), which is taken from theringof scalars, a commutative ring with unity. On this underlying set one defines addition and scalar multiplication pointwise, and "multiplication" in the incidence algebra is aconvolutiondefined by(f∗g)(a,b)=∑a≤x≤bf(a,x)g(x,b).{\displaystyle (f*g)(a,b)=\sum _{a\leq x\leq b}f(a,x)\,g(x,b).}An incidence algebra is finite-dimensionalif and only ifthe underlying poset is finite. An incidence algebra is analogous to agroup algebra; indeed, both the group algebra and the incidence algebra are special cases of acategory algebra, defined analogously;groupsandposetsbeing special kinds ofcategories. Consider the case of a partial order ≤ over anyn-element setS. We enumerateSass1, …,sn, and in such a way that the enumeration is compatible with the order ≤ onS, that is,si≤sjimpliesi≤j, which is always possible. Then, functionsfas above, from intervals to scalars, can be thought of asmatricesAij, whereAij=f(si,sj)wheneveri≤j, andAij= 0otherwise.Since we arrangedSin a way consistent with the usual order on the indices of the matrices, they will appear asupper-triangular matriceswith a prescribed zero-pattern determined by the incomparable elements inSunder ≤. 
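The convolution product and the matrix picture can be illustrated concretely. The following sketch uses the divisors of 12 under divisibility (our own choice of example) and checks that convolution agrees with multiplication of the associated upper-triangular matrices:

```python
# Incidence algebra of the divisors of 12 under divisibility, listed
# in a linear extension so the matrix picture is upper-triangular.
elements = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    return b % a == 0          # a <= b in the divisibility order

def conv(f, g):
    """Convolution (f*g)(a,b) = sum over a <= c <= b of f(a,c) g(c,b)."""
    return lambda a, b: sum(f(a, c) * g(c, b)
                            for c in elements if leq(a, c) and leq(c, b))

def to_matrix(f):
    """A[i][j] = f(s_i, s_j) when s_i <= s_j, and 0 otherwise."""
    return [[f(a, b) if leq(a, b) else 0 for b in elements] for a in elements]

def zeta(a, b):
    return 1                   # the zeta function: 1 on every interval

n = len(elements)
Z = to_matrix(zeta)
Z2 = [[sum(Z[i][k] * Z[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
assert Z2 == to_matrix(conv(zeta, zeta))   # convolution = matrix product
assert conv(zeta, zeta)(1, 12) == 6        # zeta^2 counts interval sizes
```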
The incidence algebra of ≤ is thenisomorphicto the algebra of upper-triangular matrices with this prescribed zero-pattern and arbitrary (including possibly zero) scalar entries everywhere else, with the operations being ordinarymatrix addition, scaling andmultiplication.[1] The multiplicative identity element of the incidence algebra is thedelta function, defined by Thezeta functionof an incidence algebra is the constant functionζ(a,b) = 1 for every nonempty interval [a, b]. Multiplying byζis analogous tointegration. One can show that ζ isinvertiblein the incidence algebra (with respect to the convolution defined above). (Generally, a memberhof the incidence algebra is invertible if and only ifh(x,x) is invertible for everyx.) The multiplicative inverse of the zeta function is theMöbius functionμ(a, b); every value ofμ(a, b) is an integral multiple of 1 in the base ring. The Möbius function can also be defined inductively by the following relation: Multiplying byμis analogous todifferentiation, and is calledMöbius inversion. The square of the zeta function gives the number of elements in an interval:ζ2(x,y)=∑z∈[x,y]ζ(x,z)ζ(z,y)=∑z∈[x,y]1=#[x,y].{\displaystyle \zeta ^{2}(x,y)=\sum _{z\in [x,y]}\zeta (x,z)\,\zeta (z,y)=\sum _{z\in [x,y]}1=\#[x,y].} A poset isboundedif it has smallest and largest elements, which we call 0 and 1 respectively (not to be confused with the 0 and 1 of the ring of scalars). TheEuler characteristicof a bounded finite poset isμ(0,1). The reason for this terminology is the following: IfPhas a 0 and 1, thenμ(0,1) is the reducedEuler characteristicof thesimplicial complexwhose faces are chains inP\ {0, 1}. This can be shown using Philip Hall's theorem, relating the value ofμ(0,1) to the number of chains of lengthi. Thereduced incidence algebraconsists of functions which assign the same value to any two intervals which are equivalent in an appropriate sense, usually meaningisomorphicas posets. 
This is a subalgebra of the incidence algebra, and it clearly contains the incidence algebra's identity element and zeta function. Any element of the reduced incidence algebra that is invertible in the larger incidence algebra has its inverse in the reduced incidence algebra. Thus the Möbius function is also in the reduced incidence algebra. Reduced incidence algebras were introduced by Doubilet, Rota, and Stanley to give a natural construction of various rings ofgenerating functions.[2] For the poset(N,≤),{\displaystyle (\mathbb {N} ,\leq ),}the reduced incidence algebra consists of functionsf(a,b){\displaystyle f(a,b)}invariant under translation,f(a+k,b+k)=f(a,b){\displaystyle f(a+k,b+k)=f(a,b)}for allk≥0,{\displaystyle k\geq 0,}so as to have the same value on isomorphic intervals [a+k,b+k] and [a,b]. Lettdenote the function witht(a,a+1) = 1 andt(a,b) = 0 otherwise, a kind of invariant delta function on isomorphism classes of intervals. Its powers in the incidence algebra are the other invariant delta functionstn(a,a+n) = 1 andtn(x,y) = 0 otherwise. These form abasisfor the reduced incidence algebra, and we may write any invariant function asf=∑n≥0f(0,n)tn{\displaystyle \textstyle f=\sum _{n\geq 0}f(0,n)t^{n}}. This notation makes clear the isomorphism between the reduced incidence algebra and the ring of formal power seriesR[[t]]{\displaystyle R[[t]]}over the scalarsR,also known as the ring of ordinarygenerating functions. We may write the zeta function asζ=1+t+t2+⋯=11−t,{\displaystyle \zeta =1+t+t^{2}+\cdots ={\tfrac {1}{1-t}},}the reciprocal of the Möbius functionμ=1−t.{\displaystyle \mu =1-t.} 
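The isomorphism with R[[t]] can be sanity-checked by representing an invariant function by its coefficient list (f(0,0), f(0,1), …); convolution then becomes the Cauchy product of power series. A sketch truncated at degree 5:

```python
# Reduced incidence algebra of (N, <=), truncated at degree N-1: an
# invariant function is its coefficient list, convolution is the
# Cauchy product, and zeta = 1 + t + t^2 + ... inverts to mu = 1 - t.
N = 6
zeta = [1] * N                     # zeta(a, b) = 1 on every interval
mu = [1, -1] + [0] * (N - 2)       # mu = 1 - t

def product(f, g):
    """(fg)(0,n) = sum_{k=0}^{n} f(0,k) g(k,n): the Cauchy product."""
    return [sum(f[k] * g[n - k] for k in range(n + 1)) for n in range(N)]

delta = [1] + [0] * (N - 1)        # the identity element
assert product(zeta, mu) == delta  # zeta * mu = delta, i.e. mu = 1/zeta
```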
Again, lettdenote the invariant delta function witht(S,T) = 1 for |T\S| = 1 andt(S,T) = 0 otherwise. Its powers are:tn(S,T)=∑t(T0,T1)t(T1,T2)…t(Tn−1,Tn)={n!if|T∖S|=n0otherwise,{\displaystyle t^{n}(S,T)=\,\sum t(T_{0},T_{1})\,t(T_{1},T_{2})\dots t(T_{n-1},T_{n})=\left\{{\begin{array}{cl}n!&{\text{if}}\,\,|T\smallsetminus S|=n\\0&{\text{otherwise,}}\end{array}}\right.}where the sum is over all chainsS=T0⊂T1⊂⋯⊂Tn=T,{\displaystyle S=T_{0}\subset T_{1}\subset \cdots \subset T_{n}=T,}and the only non-zero terms occur for saturated chains with|Ti∖Ti−1|=1;{\displaystyle |T_{i}\smallsetminus T_{i-1}|=1;}since these correspond to permutations ofn, we get the unique non-zero valuen!. Thus, the invariant delta functions are the divided powerstnn!,{\displaystyle {\tfrac {t^{n}}{n!}},}and we may write any invariant function asf=∑n≥0f(∅,[n])tnn!,{\displaystyle \textstyle f=\sum _{n\geq 0}f(\emptyset ,[n]){\frac {t^{n}}{n!}},}where [n] = {1, . . . ,n}. This gives a natural isomorphism between the reduced incidence algebra and the ring ofexponential generating functions. The zeta function isζ=∑n≥0tnn!=exp⁡(t),{\displaystyle \textstyle \zeta =\sum _{n\geq 0}{\frac {t^{n}}{n!}}=\exp(t),}with Möbius function:μ=1ζ=exp⁡(−t)=∑n≥0(−1)ntnn!.{\displaystyle \mu ={\frac {1}{\zeta }}=\exp(-t)=\sum _{n\geq 0}(-1)^{n}{\frac {t^{n}}{n!}}.}Indeed, this computation with formal power series proves thatμ(S,T)=(−1)|T∖S|.{\displaystyle \mu (S,T)=(-1)^{|T\smallsetminus S|}.}Many combinatorial counting sequences involving subsets or labeled objects can be interpreted in terms of the reduced incidence algebra, andcomputedusing exponential generating functions. 
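The closed form μ(S,T) = (−1)^|T∖S| can be verified against the inductive definition of the Möbius function on a small Boolean poset; a sketch over the subsets of {1, 2, 3} (helper names are our own):

```python
from itertools import combinations

# Boolean poset of subsets of {1,2,3} ordered by inclusion; verify
# mu(S,T) = (-1)^{|T \ S|} against the inductive definition of mu.
def subsets(T):
    return [frozenset(c) for r in range(len(T) + 1)
            for c in combinations(T, r)]

def mobius(S, T):
    if S == T:
        return 1
    return -sum(mobius(S, W) for W in subsets(T) if S <= W and W < T)

U = frozenset({1, 2, 3})
for S in subsets(U):
    assert mobius(S, U) == (-1) ** len(U - S)
```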
Consider the posetDof positive integers ordered bydivisibility, denoted bya|b.{\displaystyle a\,|\,b.}The reduced incidence algebra consists of functionsf(a,b){\displaystyle f(a,b)}that are invariant under multiplication:f(ka,kb)=f(a,b){\displaystyle f(ka,kb)=f(a,b)}for allk≥1.{\displaystyle k\geq 1.}(This multiplicative equivalence of intervals is a much stronger relation than poset isomorphism; e.g., for primesp, the two-element intervals [1,p] are all inequivalent.) For an invariant function,f(a,b) depends only onb/a, so a natural basis consists of invariant delta functionsδn{\displaystyle \delta _{n}}defined byδn(a,b)=1{\displaystyle \delta _{n}(a,b)=1}ifb/a=nand 0 otherwise; then any invariant function can be writtenf=∑n≥1f(1,n)δn.{\displaystyle \textstyle f=\sum _{n\geq 1}f(1,n)\,\delta _{n}.} The product of two invariant delta functions is:δnδm(a,b)=∑a|c|bδn(a,c)δm(c,b)=δnm(a,b),{\displaystyle \delta _{n}\delta _{m}(a,b)=\sum _{a|c|b}\delta _{n}(a,c)\,\delta _{m}(c,b)=\delta _{nm}(a,b),}since the only non-zero term comes fromc=naandb=mc=nma. Thus, we get an isomorphism from the reduced incidence algebra to the ring of formalDirichlet seriesby sendingδn{\displaystyle \delta _{n}}ton−s,{\displaystyle n^{-s}\!,}so thatfcorresponds to∑n≥1f(1,n)ns.{\textstyle \sum _{n\geq 1}{\frac {f(1,n)}{n^{s}}}.} The incidence algebra zeta function ζD(a,b) = 1 corresponds to the classicalRiemann zeta functionζ(s)=∑n≥11ns,{\displaystyle \zeta (s)=\textstyle \sum _{n\geq 1}{\frac {1}{n^{s}}},}having reciprocal1ζ(s)=∑n≥1μ(n)ns,{\textstyle {\frac {1}{\zeta (s)}}=\sum _{n\geq 1}{\frac {\mu (n)}{n^{s}}},}whereμ(n)=μD(1,n){\displaystyle \mu (n)=\mu _{D}(1,n)}is the classicalMöbius functionof number theory. Many otherarithmetic functionsarise naturally within the reduced incidence algebra, and equivalently in terms of Dirichlet series. 
For example, thedivisor functionσ0(n){\displaystyle \sigma _{0}(n)}is the square of the zeta function,σ0(n)=ζ2(1,n),{\displaystyle \sigma _{0}(n)=\zeta ^{2}\!(1,n),}a special case of the above result thatζ2(x,y){\displaystyle \zeta ^{2}\!(x,y)}gives the number of elements in the interval [x,y]; equivalently,ζ(s)2=∑n≥1σ0(n)ns.{\textstyle \zeta (s)^{2}=\sum _{n\geq 1}{\frac {\sigma _{0}(n)}{n^{s}}}.} The product structure of the divisor poset facilitates the computation of its Möbius function.Unique factorization into primesimpliesDis isomorphic to an infinite Cartesian productN×N×…{\displaystyle \mathbb {N} \times \mathbb {N} \times \dots }, with the order given by coordinatewise comparison:n=p1e1p2e2…{\displaystyle n=p_{1}^{e_{1}}p_{2}^{e_{2}}\dots }, wherepk{\displaystyle p_{k}}is thekthprime, corresponds to its sequence of exponents(e1,e2,…).{\displaystyle (e_{1},e_{2},\dots ).}Now the Möbius function ofDis the product of the Möbius functions for the factor posets, computed above, giving the classical formula:μ(n) = (−1)kifnis a product ofkdistinct primes, andμ(n) = 0otherwise. The product structure also explains the classicalEuler productfor the zeta function. The zeta function ofDcorresponds to a Cartesian product of zeta functions of the factors, computed above as11−t,{\textstyle {\frac {1}{1-t}},}so thatζD≅∏k≥111−t,{\textstyle \zeta _{D}\cong \prod _{k\geq 1}\!{\frac {1}{1-t}},}where the right side is a Cartesian product. Applying the isomorphism which sendstin thekthfactor topk−s{\displaystyle p_{k}^{-s}}, we obtain the usual Euler product. Incidence algebras of locally finite posets were treated in a number of papers ofGian-Carlo Rotabeginning in 1964, and by many latercombinatorialists. Rota's 1964 paper was "On the Foundations of Combinatorial Theory I: Theory of Möbius Functions".
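Both identities, σ0(n) = ζ²(1,n) and δn·δm = δnm, are easy to check directly on the divisor poset; a small sketch (function names are our own):

```python
# Divisor poset: sigma_0(n) = zeta^2(1, n) counts divisors, and the
# invariant delta functions multiply as delta_n * delta_m = delta_{nm}.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def zeta2(a, b):
    """(zeta * zeta)(a, b) = #{c : a | c | b}."""
    return sum(1 for c in divisors(b) if c % a == 0)

def delta(n):
    return lambda a, b: 1 if b == n * a else 0

def conv(f, g, a, b):
    """Convolution over the divisibility interval [a, b]."""
    return sum(f(a, c) * g(c, b) for c in divisors(b) if c % a == 0)

assert zeta2(1, 12) == 6                       # 12 has six divisors
assert conv(delta(2), delta(3), 1, 6) == delta(6)(1, 6) == 1
```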
https://en.wikipedia.org/wiki/Incidence_algebra
In the mathematical field ofcategory theory, anallegoryis acategorythat has some of the structure of the categoryRelofsetsandbinary relationsbetween them. Allegories can be used as an abstraction of categories of relations, and in this sense the theory of allegories is a generalization ofrelation algebrato relations between different sorts. Allegories are also useful in defining and investigating certain constructions in category theory, such asexactcompletions. In this article we adopt the convention thatmorphismscompose from right to left, soRSmeans "first doS, then doR". An allegory is acategoryin which every morphismR:X→Y{\displaystyle R\colon X\to Y}is assigned an anti-involutionR∘:Y→X{\displaystyle R^{\circ }\colon Y\to X}satisfyingR∘∘=R{\displaystyle R^{\circ \circ }=R}and(RS)∘=S∘R∘,{\displaystyle (RS)^{\circ }=S^{\circ }R^{\circ },}and every pair of morphismsR,S:X→Y{\displaystyle R,S\colon X\to Y}with a common source and target is assigned an intersectionR∩S:X→Y,{\displaystyle R\cap S\colon X\to Y,}all such that intersections are idempotent, symmetric, and associative; anti-involution distributes over intersection,(R∩S)∘=R∘∩S∘{\displaystyle (R\cap S)^{\circ }=R^{\circ }\cap S^{\circ }}; composition is semi-distributive over intersection,R(S∩T)⊆RS∩RT{\displaystyle R(S\cap T)\subseteq RS\cap RT}; and the modular law holds:RS∩T⊆(R∩TS∘)S.{\displaystyle RS\cap T\subseteq (R\cap TS^{\circ })S.}Here, we are abbreviating using the order defined by the intersection:R⊆S{\displaystyle R\subseteq S}meansR=R∩S.{\displaystyle R=R\cap S.} A first example of an allegory is thecategory of sets and relations. Theobjectsof this allegory are sets, and a morphismX→Y{\displaystyle X\to Y}is a binary relation betweenXandY. Composition of morphisms iscomposition of relations, and the anti-involution ofR{\displaystyle R}is theconverse relationR∘{\displaystyle R^{\circ }}:yR∘x{\displaystyle yR^{\circ }x}if and only ifxRy{\displaystyle xRy}. Intersection of morphisms is (set-theoretic)intersectionof relations. In a categoryC, arelationbetween objectsXandYis aspanof morphismsX←R→Y{\displaystyle X\gets R\to Y}that is jointlymonic. Two such spansX←S→Y{\displaystyle X\gets S\to Y}andX←T→Y{\displaystyle X\gets T\to Y}are considered equivalent when there is an isomorphism betweenSandTthat makes everything commute; strictly speaking, relations are only defined up to equivalence (one may formalise this either by usingequivalence classesor by usingbicategories). If the categoryChas products, a relation betweenXandYis the same thing as amonomorphismintoX×Y(or an equivalence class of such). In the presence ofpullbacksand a properfactorization system, one can define the composition of relations. 
The compositionX←R→Y←S→Z{\displaystyle X\gets R\to Y\gets S\to Z}is found by first pulling back the cospanR→Y←S{\displaystyle R\to Y\gets S}and then taking the jointly-monic image of the resulting spanX←R←∙→S→Z.{\displaystyle X\gets R\gets \bullet \to S\to Z.} Composition of relations will be associative if the factorization system is appropriately stable. In this case, one can consider a categoryRel(C), with the same objects asC, but where morphisms are relations between the objects. The identity relations are the diagonalsX→X×X.{\displaystyle X\to X\times X.} Aregular category(a category with finite limits and images in which covers are stable under pullback) has a stable regular epi/mono factorization system. The category of relations for a regular category is always an allegory. Anti-involution is defined by turning the source/target of the relation around, and intersections are intersections ofsubobjects, computed by pullback. A morphismRin an allegoryAis called amapif it is entire(1⊆R∘R){\displaystyle (1\subseteq R^{\circ }R)}and deterministic(RR∘⊆1).{\displaystyle (RR^{\circ }\subseteq 1).}Another way of saying this is that a map is a morphism that has aright adjointinAwhenAis considered, using the local order structure, as a2-category. Maps in an allegory are closed under identity and composition. Thus, there is asubcategoryMap(A)ofAwith the same objects but only the maps as morphisms. For a regular categoryC, there is an isomorphism of categoriesC≅Map⁡(Rel⁡(C)).{\displaystyle C\cong \operatorname {Map} (\operatorname {Rel} (C)).}In particular, a morphism inMap(Rel(Set))is just an ordinaryset function. In an allegory, a morphismR:X→Y{\displaystyle R\colon X\to Y}istabulatedby a pair of mapsf:Z→X{\displaystyle f\colon Z\to X}andg:Z→Y{\displaystyle g\colon Z\to Y}ifgf∘=R{\displaystyle gf^{\circ }=R}andf∘f∩g∘g=1.{\displaystyle f^{\circ }f\cap g^{\circ }g=1.}An allegory is calledtabularif every morphism has a tabulation. 
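In Rel these two conditions can be tested directly, and they single out exactly the graphs of functions; a sketch with explicit finite relations (helper names are our own):

```python
# In Rel a morphism R ⊆ X×Y is a map iff it is entire (1 ⊆ R°R) and
# deterministic (RR° ⊆ 1); these are exactly the graphs of functions.
def compose(S, R):
    """S∘R per the article's convention: first R, then S."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def converse(R):
    return {(y, x) for (x, y) in R}

def is_map(R, X, Y):
    id_X = {(x, x) for x in X}
    id_Y = {(y, y) for y in Y}
    entire = id_X <= compose(converse(R), R)          # 1 ⊆ R°R
    deterministic = compose(R, converse(R)) <= id_Y   # RR° ⊆ 1
    return entire and deterministic

X, Y = {1, 2}, {'a', 'b'}
f = {(1, 'a'), (2, 'a')}   # graph of a function: a map
r = {(1, 'a'), (1, 'b')}   # not entire (2 unrelated), not deterministic
assert is_map(f, X, Y)
assert not is_map(r, X, Y)
```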
For a regular categoryC, the allegoryRel(C)is always tabular. On the other hand, for any tabular allegoryA, the categoryMap(A)of maps is a locally regular category: it has pullbacks,equalizers, and images that are stable under pullback. This is enough to study relations inMap(A), and in this setting,A≅Rel⁡(Map⁡(A)).{\displaystyle A\cong \operatorname {Rel} (\operatorname {Map} (A)).} Aunitin an allegory is an objectUfor which the identity is the largest morphismU→U,{\displaystyle U\to U,}and such that from every other object, there is an entire relation toU. An allegory with a unit is calledunital. Given a tabular allegoryA, the categoryMap(A)is a regular category (it has aterminal object) if and only ifAis unital. Additional properties of allegories can be axiomatized.Distributive allegorieshave aunion-like operation that is suitably well-behaved, anddivision allegorieshave a generalization of the division operation ofrelation algebra.Power allegoriesare distributive division allegories with additionalpowerset-like structure. The connection between allegories and regular categories can be developed into a connection between power allegories andtoposes.
https://en.wikipedia.org/wiki/Allegory_(category_theory)
Inmathematics, abinary relationassociates some elements of onesetcalled thedomainwith some elements of another set called thecodomain.[1]Precisely, a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}is a set ofordered pairs(x,y){\displaystyle (x,y)}, wherex{\displaystyle x}is an element ofX{\displaystyle X}andy{\displaystyle y}is an element ofY{\displaystyle Y}.[2]It encodes the common concept of relation: an elementx{\displaystyle x}isrelatedto an elementy{\displaystyle y},if and only ifthe pair(x,y){\displaystyle (x,y)}belongs to the set of ordered pairs that defines the binary relation. An example of a binary relation is the "divides" relation over the set ofprime numbersP{\displaystyle \mathbb {P} }and the set ofintegersZ{\displaystyle \mathbb {Z} }, in which each primep{\displaystyle p}is related to each integerz{\displaystyle z}that is amultipleofp{\displaystyle p}, but not to an integer that is not amultipleofp{\displaystyle p}. In this relation, for instance, the prime number2{\displaystyle 2}is related to numbers such as−4{\displaystyle -4},0{\displaystyle 0},6{\displaystyle 6},10{\displaystyle 10}, but not to1{\displaystyle 1}or9{\displaystyle 9}, just as the prime number3{\displaystyle 3}is related to0{\displaystyle 0},6{\displaystyle 6}, and9{\displaystyle 9}, but not to4{\displaystyle 4}or13{\displaystyle 13}. Binary relations, and especiallyhomogeneous relations, are used in many branches of mathematics to model a wide variety of concepts. 
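The "divides" example can be written out explicitly as a set of ordered pairs; a small sketch restricted to the primes {2, 3, 5} and a finite range of integers:

```python
# The "divides" relation between primes P = {2, 3, 5} and the
# integers -4..13, represented explicitly as a set of ordered pairs.
P = {2, 3, 5}
Z = range(-4, 14)
R = {(p, z) for p in P for z in Z if z % p == 0}

assert (2, -4) in R and (2, 6) in R    # multiples of 2 are related
assert (2, 9) not in R                 # 9 is not a multiple of 2
assert (3, 9) in R and (3, 13) not in R
```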
These include, among others: Afunctionmay be defined as a binary relation that meets additional constraints: it relates each element of its domain to exactly one element of its codomain (whereas a general binary relation may relate one element to several).[3]Binary relations are also heavily used incomputer science. A binary relation over setsX{\displaystyle X}andY{\displaystyle Y}is an element of thepower setofX×Y.{\displaystyle X\times Y.}Since the latter set is ordered byinclusion(⊆{\displaystyle \subseteq }), each relation has a place in thelatticeof subsets ofX×Y.{\displaystyle X\times Y.}A binary relation is called ahomogeneous relationwhenX=Y{\displaystyle X=Y}. A binary relation is also called aheterogeneous relationwhen it is not necessary thatX=Y{\displaystyle X=Y}. Since relations are sets, they can be manipulated using set operations, includingunion,intersection, andcomplementation, and they satisfy the laws of analgebra of sets. Beyond that, operations like theconverseof a relation and thecomposition of relationsare available, satisfying the laws of acalculus of relations, for which there are textbooks byErnst Schröder,[4]Clarence Lewis,[5]andGunther Schmidt.[6]A deeper analysis of relations involves decomposing them into subsets calledconcepts, and placing them in acomplete lattice. In some systems ofaxiomatic set theory, relations are extended toclasses, which are generalizations of sets. This extension is needed for, among other things, modeling the concepts of "is an element of" or "is a subset of" in set theory, without running into logical inconsistencies such asRussell's paradox. 
A binary relation is the most studied special casen=2{\displaystyle n=2}of ann{\displaystyle n}-ary relationover setsX1,…,Xn{\displaystyle X_{1},\dots ,X_{n}}, which is a subset of theCartesian productX1×⋯×Xn.{\displaystyle X_{1}\times \cdots \times X_{n}.}[2] Given setsX{\displaystyle X}andY{\displaystyle Y}, the Cartesian productX×Y{\displaystyle X\times Y}is defined as{(x,y)∣x∈Xandy∈Y},{\displaystyle \{(x,y)\mid x\in X{\text{ and }}y\in Y\},}and its elements are calledordered pairs. Abinary relationR{\displaystyle R}over setsX{\displaystyle X}andY{\displaystyle Y}is a subset ofX×Y.{\displaystyle X\times Y.}[2][7]The setX{\displaystyle X}is called thedomain[2]orset of departureofR{\displaystyle R}, and the setY{\displaystyle Y}thecodomainorset of destinationofR{\displaystyle R}. In order to specify the choices of the setsX{\displaystyle X}andY{\displaystyle Y}, some authors define abinary relationorcorrespondenceas an ordered triple(X,Y,G){\displaystyle (X,Y,G)}, whereG{\displaystyle G}is a subset ofX×Y{\displaystyle X\times Y}called thegraphof the binary relation. The statement(x,y)∈R{\displaystyle (x,y)\in R}reads "x{\displaystyle x}isR{\displaystyle R}-related toy{\displaystyle y}" and is denoted byxRy{\displaystyle xRy}.[4][5][6][note 1]Thedomain of definitionoractive domain[2]ofR{\displaystyle R}is the set of allx{\displaystyle x}such thatxRy{\displaystyle xRy}for at least oney{\displaystyle y}. Thecodomain of definition,active codomain,[2]imageorrangeofR{\displaystyle R}is the set of ally{\displaystyle y}such thatxRy{\displaystyle xRy}for at least onex{\displaystyle x}. ThefieldofR{\displaystyle R}is the union of its domain of definition and its codomain of definition.[9][10][11] WhenX=Y,{\displaystyle X=Y,}a binary relation is called ahomogeneous relation(orendorelation). 
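For a finite relation the domain of definition, range, and field are straightforward to compute; a sketch with a toy relation (the values are our own):

```python
# Domain of definition, range, and field of a relation R ⊆ X×Y.
X, Y = {1, 2, 3}, {4, 5, 6}
R = {(1, 4), (1, 5), (3, 4)}

domain_of_definition = {x for (x, y) in R}    # elements related to something
range_of_R = {y for (x, y) in R}              # elements something relates to
field = domain_of_definition | range_of_R     # union of the two

assert domain_of_definition == {1, 3}         # 2 is related to nothing
assert range_of_R == {4, 5}
assert field == {1, 3, 4, 5}
```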
To emphasize the fact thatX{\displaystyle X}andY{\displaystyle Y}are allowed to be different, a binary relation is also called aheterogeneous relation.[12][13][14]The prefixheterois from the Greek ἕτερος (heteros, "other, another, different"). A heterogeneous relation has been called arectangular relation,[14]suggesting that it does not have the square-like symmetry of ahomogeneous relation on a setwhereA=B.{\displaystyle A=B.}Commenting on the development of binary relations beyond homogeneous relations, researchers wrote, "... a variant of the theory has evolved that treats relations from the very beginning asheterogeneousorrectangular, i.e. as relations where the normal case is that they are relations between different sets."[15] The termscorrespondence,[16]dyadic relationandtwo-place relationare synonyms for binary relation, though some authors use the term "binary relation" for any subset of a Cartesian productX×Y{\displaystyle X\times Y}without reference toX{\displaystyle X}andY{\displaystyle Y}, and reserve the term "correspondence" for a binary relation with reference toX{\displaystyle X}andY{\displaystyle Y}.[citation needed] In a binary relation, the order of the elements is important; ifx≠y{\displaystyle x\neq y}thenyRx{\displaystyle yRx}can be true or false independently ofxRy{\displaystyle xRy}. For example,3{\displaystyle 3}divides9{\displaystyle 9}, but9{\displaystyle 9}does not divide3{\displaystyle 3}. IfR{\displaystyle R}andS{\displaystyle S}are binary relations over setsX{\displaystyle X}andY{\displaystyle Y}thenR∪S={(x,y)∣xRyorxSy}{\displaystyle R\cup S=\{(x,y)\mid xRy{\text{ or }}xSy\}}is theunion relationofR{\displaystyle R}andS{\displaystyle S}overX{\displaystyle X}andY{\displaystyle Y}. The identity element is the empty relation. For example,≤{\displaystyle \leq }is the union of < and =, and≥{\displaystyle \geq }is the union of > and =. 
IfR{\displaystyle R}andS{\displaystyle S}are binary relations over setsX{\displaystyle X}andY{\displaystyle Y}thenR∩S={(x,y)∣xRyandxSy}{\displaystyle R\cap S=\{(x,y)\mid xRy{\text{ and }}xSy\}}is theintersection relationofR{\displaystyle R}andS{\displaystyle S}overX{\displaystyle X}andY{\displaystyle Y}. The identity element is the universal relation. For example, the relation "is divisible by 6" is the intersection of the relations "is divisible by 3" and "is divisible by 2". IfR{\displaystyle R}is a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}, andS{\displaystyle S}is a binary relation over setsY{\displaystyle Y}andZ{\displaystyle Z}thenS∘R={(x,z)∣there existsy∈Ysuch thatxRyandySz}{\displaystyle S\circ R=\{(x,z)\mid {\text{ there exists }}y\in Y{\text{ such that }}xRy{\text{ and }}ySz\}}(also denoted byR;S{\displaystyle R;S}) is thecomposition relationofR{\displaystyle R}andS{\displaystyle S}overX{\displaystyle X}andZ{\displaystyle Z}. The identity element is the identity relation. The order ofR{\displaystyle R}andS{\displaystyle S}in the notationS∘R,{\displaystyle S\circ R,}used here agrees with the standard notational order forcomposition of functions. For example, the composition (is mother of)∘{\displaystyle \circ }(is parent of) yields (is maternal grandparent of), while the composition (is parent of)∘{\displaystyle \circ }(is mother of) yields (is grandmother of). For the former case, ifx{\displaystyle x}is the parent ofy{\displaystyle y}andy{\displaystyle y}is the mother ofz{\displaystyle z}, thenx{\displaystyle x}is the maternal grandparent ofz{\displaystyle z}. IfR{\displaystyle R}is a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}thenRT={(y,x)∣xRy}{\displaystyle R^{\textsf {T}}=\{(y,x)\mid xRy\}}is theconverse relation,[17]also calledinverse relation,[18]ofR{\displaystyle R}overY{\displaystyle Y}andX{\displaystyle X}. 
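The definition of S∘R can be traced on a toy example: taking R = (is parent of) and S = (is mother of), a pair (x, z) lands in S∘R exactly when x is a parent of some y who is the mother of z. A sketch with hypothetical family facts (the names are ours):

```python
# S∘R = {(x, z) : there is y with xRy and ySz}; R is applied first.
parent = {('tom', 'mary'), ('mary', 'jo')}   # x is a parent of y
mother = {('mary', 'jo')}                    # x is the mother of y

def compose(S, R):
    """Return S∘R: pairs (x, z) with xRy and ySz for some y."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

# tom is a parent of mary, and mary is the mother of jo, so tom is a
# maternal grandparent of jo.
assert compose(mother, parent) == {('tom', 'jo')}
```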
For example,={\displaystyle =}is the converse of itself, as is≠{\displaystyle \neq }, and<{\displaystyle <}and>{\displaystyle >}are each other's converse, as are≤{\displaystyle \leq }and≥.{\displaystyle \geq .}A binary relation is equal to its converse if and only if it issymmetric. IfR{\displaystyle R}is a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}thenR¯={(x,y)∣¬xRy}{\displaystyle {\bar {R}}=\{(x,y)\mid \neg xRy\}}(also denoted by¬R{\displaystyle \neg R}) is thecomplementary relationofR{\displaystyle R}overX{\displaystyle X}andY{\displaystyle Y}. For example,={\displaystyle =}and≠{\displaystyle \neq }are each other's complement, as are⊆{\displaystyle \subseteq }and⊈{\displaystyle \not \subseteq },⊇{\displaystyle \supseteq }and⊉{\displaystyle \not \supseteq },∈{\displaystyle \in }and∉{\displaystyle \not \in }, and fortotal ordersalso<{\displaystyle <}and≥{\displaystyle \geq }, and>{\displaystyle >}and≤{\displaystyle \leq }. The complement of theconverse relationRT{\displaystyle R^{\textsf {T}}}is the converse of the complement:RT¯=R¯T.{\displaystyle {\overline {R^{\mathsf {T}}}}={\bar {R}}^{\mathsf {T}}.} IfX=Y,{\displaystyle X=Y,}the complement has the following properties: IfR{\displaystyle R}is a binaryhomogeneous relationover a setX{\displaystyle X}andS{\displaystyle S}is a subset ofX{\displaystyle X}thenR|S={(x,y)∣xRyandx∈Sandy∈S}{\displaystyle R_{\vert S}=\{(x,y)\mid xRy{\text{ and }}x\in S{\text{ and }}y\in S\}}is therestriction relationofR{\displaystyle R}toS{\displaystyle S}overX{\displaystyle X}. 
IfR{\displaystyle R}is a binary relation over setsX{\displaystyle X}andY{\displaystyle Y}and ifS{\displaystyle S}is a subset ofX{\displaystyle X}thenR|S={(x,y)∣xRyandx∈S}{\displaystyle R_{\vert S}=\{(x,y)\mid xRy{\text{ and }}x\in S\}}is theleft-restriction relationofR{\displaystyle R}toS{\displaystyle S}overX{\displaystyle X}andY{\displaystyle Y}.[clarification needed] If a relation isreflexive, irreflexive,symmetric,antisymmetric,asymmetric,transitive,total,trichotomous, apartial order,total order,strict weak order,total preorder(weak order), or anequivalence relation, then so too are its restrictions. However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general not equal. For example, restricting the relation "x{\displaystyle x}is parent ofy{\displaystyle y}" to females yields the relation "x{\displaystyle x}is mother of the womany{\displaystyle y}"; its transitive closure does not relate a woman with her paternal grandmother. On the other hand, the transitive closure of "is parent of" is "is ancestor of"; its restriction to females does relate a woman with her paternal grandmother. Also, the various concepts ofcompleteness(not to be confused with being "total") do not carry over to restrictions. For example, over thereal numbersa property of the relation≤{\displaystyle \leq }is that everynon-emptysubsetS⊆R{\displaystyle S\subseteq \mathbb {R} }with anupper boundinR{\displaystyle \mathbb {R} }has aleast upper bound(also called supremum) inR.{\displaystyle \mathbb {R} .}However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation≤{\displaystyle \leq }to the rational numbers. 
A binary relationR{\displaystyle R}over setsX{\displaystyle X}andY{\displaystyle Y}is said to becontained ina relationS{\displaystyle S}overX{\displaystyle X}andY{\displaystyle Y}, writtenR⊆S,{\displaystyle R\subseteq S,}ifR{\displaystyle R}is a subset ofS{\displaystyle S}, that is, for allx∈X{\displaystyle x\in X}andy∈Y,{\displaystyle y\in Y,}ifxRy{\displaystyle xRy}, thenxSy{\displaystyle xSy}. IfR{\displaystyle R}is contained inS{\displaystyle S}andS{\displaystyle S}is contained inR{\displaystyle R}, thenR{\displaystyle R}andS{\displaystyle S}are calledequalwrittenR=S{\displaystyle R=S}. IfR{\displaystyle R}is contained inS{\displaystyle S}butS{\displaystyle S}is not contained inR{\displaystyle R}, thenR{\displaystyle R}is said to besmallerthanS{\displaystyle S}, writtenR⊊S.{\displaystyle R\subsetneq S.}For example, on therational numbers, the relation>{\displaystyle >}is smaller than≥{\displaystyle \geq }, and equal to the composition>∘>{\displaystyle >\circ >}. Binary relations over setsX{\displaystyle X}andY{\displaystyle Y}can be represented algebraically bylogical matricesindexed byX{\displaystyle X}andY{\displaystyle Y}with entries in theBoolean semiring(addition corresponds to OR and multiplication to AND) wherematrix additioncorresponds to union of relations,matrix multiplicationcorresponds to composition of relations (of a relation overX{\displaystyle X}andY{\displaystyle Y}and a relation overY{\displaystyle Y}andZ{\displaystyle Z}),[19]theHadamard productcorresponds to intersection of relations, thezero matrixcorresponds to the empty relation, and thematrix of onescorresponds to the universal relation. Homogeneous relations (whenX=Y{\displaystyle X=Y}) form amatrix semiring(indeed, amatrix semialgebraover the Boolean semiring) where theidentity matrixcorresponds to the identity relation.[20] 
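The matrix representation can be sketched with plain nested lists over the Boolean semiring (the example relations are our own):

```python
# Relations as logical (Boolean) matrices: rows indexed by X, columns
# by Y.  Union is entrywise OR, intersection is the Hadamard
# (entrywise AND) product, composition is the Boolean matrix product.
X, Y, Z = [1, 2, 3], ['a', 'b'], ['u']

R = [[1, 0], [1, 1], [0, 0]]    # a relation over X and Y
S = [[0, 1], [1, 0], [0, 1]]    # another relation over X and Y
T = [[1], [0]]                  # a relation over Y and Z

union = [[R[i][j] | S[i][j] for j in range(len(Y))] for i in range(len(X))]
meet = [[R[i][j] & S[i][j] for j in range(len(Y))] for i in range(len(X))]
comp = [[int(any(R[i][k] & T[k][j] for k in range(len(Y))))
         for j in range(len(Z))] for i in range(len(X))]

assert union[0] == [1, 1] and meet[0] == [0, 0]
assert comp == [[1], [1], [0]]   # Boolean product: composition of R and T
```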
Some important types of binary relationsR{\displaystyle R}over setsX{\displaystyle X}andY{\displaystyle Y}are listed below. Uniqueness properties: Totality properties (only definable if the domainX{\displaystyle X}and codomainY{\displaystyle Y}are specified): Uniqueness and totality properties (only definable if the domainX{\displaystyle X}and codomainY{\displaystyle Y}are specified): If relations over proper classes are allowed: Certain mathematical "relations", such as "equal to", "subset of", and "member of", cannot be understood to be binary relations as defined above, because their domains and codomains cannot be taken to be sets in the usual systems ofaxiomatic set theory. For example, to model the general concept of "equality" as a binary relation={\displaystyle =}, take the domain and codomain to be the "class of all sets", which is not a set in the usual set theory. In most mathematical contexts, references to the relations of equality, membership and subset are harmless because they can be understood implicitly to be restricted to some set in the context. The usual work-around to this problem is to select a "large enough" setA{\displaystyle A}, that contains all the objects of interest, and work with the restriction=A{\displaystyle =_{A}}instead of={\displaystyle =}. Similarly, the "subset of" relation⊆{\displaystyle \subseteq }needs to be restricted to have domain and codomainP(A){\displaystyle P(A)}(the power set of a specific setA{\displaystyle A}): the resulting set relation can be denoted by⊆A.{\displaystyle \subseteq _{A}.}Also, the "member of" relation needs to be restricted to have domainA{\displaystyle A}and codomainP(A){\displaystyle P(A)}to obtain a binary relation∈A{\displaystyle \in _{A}}that is a set.Bertrand Russellhas shown that assuming∈{\displaystyle \in }to be defined over all sets leads to a contradiction innaive set theory, seeRussell's paradox. 
Another solution to this problem is to use a set theory with proper classes, such asNBGorMorse–Kelley set theory, and allow the domain and codomain (and so the graph) to beproper classes: in such a theory, equality, membership, and subset are binary relations without special comment. (A minor modification needs to be made to the concept of the ordered triple(X,Y,G){\displaystyle (X,Y,G)}, as normally a proper class cannot be a member of an ordered tuple; or of course one can identify the binary relation with its graph in this context.)[31]With this definition one can for instance define a binary relation over every set and its power set. Ahomogeneous relationover a setX{\displaystyle X}is a binary relation overX{\displaystyle X}and itself, i.e. it is a subset of the Cartesian productX×X.{\displaystyle X\times X.}[14][32][33]It is also simply called a (binary) relation overX{\displaystyle X}. A homogeneous relationR{\displaystyle R}over a setX{\displaystyle X}may be identified with adirected simple graph permitting loops, whereX{\displaystyle X}is the vertex set andR{\displaystyle R}is the edge set (there is an edge from a vertexx{\displaystyle x}to a vertexy{\displaystyle y}if and only ifxRy{\displaystyle xRy}). The set of all homogeneous relationsB(X){\displaystyle {\mathcal {B}}(X)}over a setX{\displaystyle X}is thepower set2X×X{\displaystyle 2^{X\times X}}which is aBoolean algebraaugmented with theinvolutionof mapping of a relation to itsconverse relation. Consideringcomposition of relationsas abinary operationonB(X){\displaystyle {\mathcal {B}}(X)}, it forms asemigroup with involution. Some important properties that a homogeneous relationR{\displaystyle R}over a setX{\displaystyle X}may have are: Apartial orderis a relation that is reflexive, antisymmetric, and transitive. Astrict partial orderis a relation that is irreflexive, asymmetric, and transitive. 
Atotal orderis a relation that is reflexive, antisymmetric, transitive and connected.[37]Astrict total orderis a relation that is irreflexive, asymmetric, transitive and connected. Anequivalence relationis a relation that is reflexive, symmetric, and transitive. For example, "x{\displaystyle x}dividesy{\displaystyle y}" is a partial, but not a total order onnatural numbersN,{\displaystyle \mathbb {N} ,}"x<y{\displaystyle x<y}" is a strict total order onN,{\displaystyle \mathbb {N} ,}and "x{\displaystyle x}is parallel toy{\displaystyle y}" is an equivalence relation on the set of all lines in theEuclidean plane. All operations defined in section§ Operationsalso apply to homogeneous relations. Beyond that, a homogeneous relation over a setX{\displaystyle X}may be subjected to closure operations like: Developments inalgebraic logichave facilitated usage of binary relations. Thecalculus of relationsincludes thealgebra of sets, extended bycomposition of relationsand the use ofconverse relations. The inclusionR⊆S,{\displaystyle R\subseteq S,}meaning thataRb{\displaystyle aRb}impliesaSb{\displaystyle aSb}, sets the scene in alatticeof relations. But sinceP⊆Q≡(P∩Q¯=∅)≡(P∩Q=P),{\displaystyle P\subseteq Q\equiv (P\cap {\bar {Q}}=\varnothing )\equiv (P\cap Q=P),}the inclusion symbol is superfluous. Nevertheless, composition of relations and manipulation of the operators according toSchröder rules, provides a calculus to work in thepower setofA×B.{\displaystyle A\times B.} In contrast to homogeneous relations, thecomposition of relationsoperation is only apartial function. The necessity of matching target to source of composed relations has led to the suggestion that the study of heterogeneous relations is a chapter ofcategory theoryas in thecategory of sets, except that themorphismsof this category are relations. 
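The divisibility example above can be checked mechanically: with a relation given as a set of pairs, each defining property is a short quantified test. A sketch under our own helper names:

```python
def is_reflexive(R, X):
    return all((x, x) in R for x in X)

def is_antisymmetric(R):
    return all((y, x) not in R or x == y for (x, y) in R)

def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

def is_connected(R, X):
    return all((x, y) in R or (y, x) in R for x in X for y in X)

X = set(range(1, 7))
divides = {(a, b) for a in X for b in X if b % a == 0}

# Reflexive, antisymmetric, transitive: a partial order ...
print(is_reflexive(divides, X), is_antisymmetric(divides), is_transitive(divides))
# ... but not connected (neither 2 divides 3 nor 3 divides 2): not a total order.
print(is_connected(divides, X))
```

Expected output is three `True`s followed by `False`, confirming that divisibility is a partial but not a total order on this sample.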
The objects of the category Rel are sets, and the relation-morphisms compose as required in a category. Binary relations have been described through their induced concept lattices: A concept C⊂R{\displaystyle C\subset R} satisfies two properties: For a given relation R⊆X×Y,{\displaystyle R\subseteq X\times Y,} the set of concepts, enlarged by their joins and meets, forms an "induced lattice of concepts", with inclusion ⊑{\displaystyle \sqsubseteq } forming a preorder. The MacNeille completion theorem (1937) (that any partial order may be embedded in a complete lattice) is cited in a 2013 survey article "Decomposition of relations on concept lattices".[38] Particular cases are considered below: E{\displaystyle E} total order corresponds to Ferrers type, and E{\displaystyle E} identity corresponds to difunctional, a generalization of equivalence relation on a set. Relations may be ranked by the Schein rank, which counts the number of concepts necessary to cover a relation.[39] Structural analysis of relations with concepts provides an approach for data mining.[40] The idea of a difunctional relation is to partition objects by distinguishing attributes, as a generalization of the concept of an equivalence relation. One way this can be done is with an intervening set Z={x,y,z,…}{\displaystyle Z=\{x,y,z,\ldots \}} of indicators. The partitioning relation R=FGT{\displaystyle R=FG^{\textsf {T}}} is a composition of relations using functional relations F⊆A×Z and G⊆B×Z.{\displaystyle F\subseteq A\times Z{\text{ and }}G\subseteq B\times Z.} Jacques Riguet named these relations difunctional since the composition FGT{\displaystyle FG^{\mathsf {T}}} involves functional relations, commonly called partial functions. In 1950 Riguet showed that such relations satisfy the inclusion RRTR⊆R.{\displaystyle RR^{\textsf {T}}R\subseteq R.}[41] In automata theory, the term rectangular relation has also been used to denote a difunctional relation.
This terminology recalls the fact that, when represented as a logical matrix, the columns and rows of a difunctional relation can be arranged as a block matrix with rectangular blocks of ones on the (asymmetric) main diagonal.[42] More formally, a relation R{\displaystyle R} on X×Y{\displaystyle X\times Y} is difunctional if and only if it can be written as the union of Cartesian products Ai×Bi{\displaystyle A_{i}\times B_{i}}, where the Ai{\displaystyle A_{i}} are a partition of a subset of X{\displaystyle X} and the Bi{\displaystyle B_{i}} likewise a partition of a subset of Y{\displaystyle Y}.[43] Using the notation {y∣xRy}=xR{\displaystyle \{y\mid xRy\}=xR}, a difunctional relation can also be characterized as a relation R{\displaystyle R} such that wherever x1R{\displaystyle x_{1}R} and x2R{\displaystyle x_{2}R} have a non-empty intersection, then these two sets coincide; formally x1R∩x2R≠∅{\displaystyle x_{1}R\cap x_{2}R\neq \varnothing } implies x1R=x2R.{\displaystyle x_{1}R=x_{2}R.}[44] In 1997 researchers found "utility of binary decomposition based on difunctional dependencies in database management."[45] Furthermore, difunctional relations are fundamental in the study of bisimulations.[46] In the context of homogeneous relations, a partial equivalence relation is difunctional. A strict order on a set is a homogeneous relation arising in order theory. In 1951 Jacques Riguet adopted the ordering of an integer partition, called a Ferrers diagram, to extend ordering to binary relations in general.[47] The corresponding logical matrix of a general binary relation has rows which finish with a sequence of ones. Thus the dots of a Ferrers diagram are changed to ones and aligned on the right in the matrix.
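The "overlapping images coincide" characterization of difunctionality translates directly into a test on a relation given as a set of pairs. A sketch; the example relation is our own:

```python
def image(R, x):
    """xR = {y : x R y}."""
    return {y for (a, y) in R if a == x}

def is_difunctional(R):
    """x1R ∩ x2R ≠ ∅ must imply x1R = x2R."""
    xs = {x for (x, _) in R}
    return all(image(R, a) == image(R, b)
               for a in xs for b in xs if image(R, a) & image(R, b))

# Union of the "rectangles" {1,2} x {'a','b'} and {3} x {'c'}: difunctional.
R = {(1, "a"), (1, "b"), (2, "a"), (2, "b"), (3, "c")}
print(is_difunctional(R))               # True
# Adding (2, 'c') makes the images of 1 and 2 overlap without coinciding.
print(is_difunctional(R | {(2, "c")}))  # False
```

The passing example is exactly a union of Cartesian products over partitioned subsets, as in the block-matrix description above.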
An algebraic statement required for a Ferrers type relation R isRR¯TR⊆R.{\displaystyle R{\bar {R}}^{\textsf {T}}R\subseteq R.} If any one of the relationsR,R¯,RT{\displaystyle R,{\bar {R}},R^{\textsf {T}}}is of Ferrers type, then all of them are.[48] SupposeB{\displaystyle B}is thepower setofA{\displaystyle A}, the set of allsubsetsofA{\displaystyle A}. Then a relationg{\displaystyle g}is acontact relationif it satisfies three properties: Theset membershiprelation,ϵ={\displaystyle \epsilon =}"is an element of", satisfies these properties soϵ{\displaystyle \epsilon }is a contact relation. The notion of a general contact relation was introduced byGeorg Aumannin 1970.[49][50] In terms of the calculus of relations, sufficient conditions for a contact relation includeCTC¯⊆∋C¯≡C∋C¯¯⊆C,{\displaystyle C^{\textsf {T}}{\bar {C}}\subseteq \ni {\bar {C}}\equiv C{\overline {\ni {\bar {C}}}}\subseteq C,}where∋{\displaystyle \ni }is the converse of set membership (∈{\displaystyle \in }).[51]: 280 Every relationR{\displaystyle R}generates apreorderR∖R{\displaystyle R\backslash R}which is theleft residual.[52]In terms of converse and complements,R∖R≡RTR¯¯.{\displaystyle R\backslash R\equiv {\overline {R^{\textsf {T}}{\bar {R}}}}.}Forming the diagonal ofRTR¯{\displaystyle R^{\textsf {T}}{\bar {R}}}, the corresponding row ofRT{\displaystyle R^{\textsf {T}}}and column ofR¯{\displaystyle {\bar {R}}}will be of opposite logical values, so the diagonal is all zeros. 
Then To showtransitivity, one requires that(R∖R)(R∖R)⊆R∖R.{\displaystyle (R\backslash R)(R\backslash R)\subseteq R\backslash R.}Recall thatX=R∖R{\displaystyle X=R\backslash R}is the largest relation such thatRX⊆R.{\displaystyle RX\subseteq R.}Then Theinclusionrelation Ω on thepower setofU{\displaystyle U}can be obtained in this way from themembership relation∈{\displaystyle \in }on subsets ofU{\displaystyle U}: Given a relationR{\displaystyle R}, itsfringeis the sub-relation defined asfringe⁡(R)=R∩RR¯TR¯.{\displaystyle \operatorname {fringe} (R)=R\cap {\overline {R{\bar {R}}^{\textsf {T}}R}}.} WhenR{\displaystyle R}is a partial identity relation, difunctional, or a block diagonal relation, thenfringe⁡(R)=R{\displaystyle \operatorname {fringe} (R)=R}. Otherwise thefringe{\displaystyle \operatorname {fringe} }operator selects a boundary sub-relation described in terms of its logical matrix:fringe⁡(R){\displaystyle \operatorname {fringe} (R)}is the side diagonal ifR{\displaystyle R}is an upper right triangularlinear orderorstrict order.fringe⁡(R){\displaystyle \operatorname {fringe} (R)}is the block fringe ifR{\displaystyle R}is irreflexive (R⊆I¯{\displaystyle R\subseteq {\bar {I}}}) or upper right block triangular.fringe⁡(R){\displaystyle \operatorname {fringe} (R)}is a sequence of boundary rectangles whenR{\displaystyle R}is of Ferrers type. On the other hand,fringe⁡(R)=∅{\displaystyle \operatorname {fringe} (R)=\emptyset }whenR{\displaystyle R}is adense, linear, strict order.[51] Given two setsA{\displaystyle A}andB{\displaystyle B}, the set of binary relations between themB(A,B){\displaystyle {\mathcal {B}}(A,B)}can be equipped with aternary operation[a,b,c]=abTc{\displaystyle [a,b,c]=ab^{\textsf {T}}c}wherebT{\displaystyle b^{\mathsf {T}}}denotes theconverse relationofb{\displaystyle b}. 
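The Ferrers condition RR̄ᵀR ⊆ R and the fringe operator fringe(R) = R ∩ complement(RR̄ᵀR) both reduce to a few Boolean-matrix operations, and can be sketched on small logical matrices (helper names are ours):

```python
def bmul(M, N):
    """Boolean matrix product (composition of relations)."""
    return [[int(any(M[i][k] and N[k][j] for k in range(len(N))))
             for j in range(len(N[0]))] for i in range(len(M))]

def comp(M):  # entrywise complement
    return [[1 - v for v in row] for row in M]

def conv(M):  # converse = transpose
    return [list(col) for col in zip(*M)]

def is_ferrers(R):
    """R is of Ferrers type iff R · R̄ᵀ · R ⊆ R."""
    P = bmul(bmul(R, conv(comp(R))), R)
    return all(P[i][j] <= R[i][j] for i in range(len(R)) for j in range(len(R[0])))

def fringe(R):
    """fringe(R) = R ∩ complement(R · R̄ᵀ · R)."""
    P = bmul(bmul(R, conv(comp(R))), R)
    return [[R[i][j] & (1 - P[i][j]) for j in range(len(R[0]))] for i in range(len(R))]

lt = [[0, 1, 1],   # strict order < on {1, 2, 3}: upper right triangle
      [0, 0, 1],
      [0, 0, 0]]
print(is_ferrers(lt))  # True
print(fringe(lt))      # [[0, 1, 0], [0, 0, 1], [0, 0, 0]] -- the side diagonal
```

For this strict order the fringe is the side diagonal of "successor" pairs, as described above, while a non-Ferrers relation such as the 2×2 identity fails the inclusion test.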
In 1953Viktor Wagnerused properties of this ternary operation to definesemiheaps, heaps, and generalized heaps.[53][54]The contrast of heterogeneous and homogeneous relations is highlighted by these definitions: There is a pleasant symmetry in Wagner's work between heaps, semiheaps, and generalised heaps on the one hand, and groups, semigroups, and generalised groups on the other. Essentially, the various types of semiheaps appear whenever we consider binary relations (and partial one-one mappings) betweendifferentsetsA{\displaystyle A}andB{\displaystyle B}, while the various types of semigroups appear in the case whereA=B{\displaystyle A=B}.
https://en.wikipedia.org/wiki/Binary_relation
Inmathematics, specificallyset theory, theCartesian productof twosetsAandB, denotedA×B, is the set of allordered pairs(a,b)whereais an element ofAandbis an element ofB.[1]In terms ofset-builder notation, that isA×B={(a,b)∣a∈Aandb∈B}.{\displaystyle A\times B=\{(a,b)\mid a\in A\ {\mbox{ and }}\ b\in B\}.}[2][3] A table can be created by taking the Cartesian product of a set of rows and a set of columns. If the Cartesian productrows×columnsis taken, the cells of the table contain ordered pairs of the form(row value, column value).[4] One can similarly define the Cartesian product ofnsets, also known as ann-fold Cartesian product, which can be represented by ann-dimensional array, where each element is ann-tuple. An ordered pair is a2-tuple or couple. More generally still, one can define the Cartesian product of anindexed familyof sets. The Cartesian product is named afterRené Descartes,[5]whose formulation ofanalytic geometrygave rise to the concept, which is further generalized in terms ofdirect product. A rigorous definition of the Cartesian product requires a domain to be specified in theset-builder notation. In this case the domain would have to contain the Cartesian product itself. For defining the Cartesian product of the setsA{\displaystyle A}andB{\displaystyle B}, with the typicalKuratowski's definitionof a pair(a,b){\displaystyle (a,b)}as{{a},{a,b}}{\displaystyle \{\{a\},\{a,b\}\}}, an appropriate domain is the setP(P(A∪B)){\displaystyle {\mathcal {P}}({\mathcal {P}}(A\cup B))}whereP{\displaystyle {\mathcal {P}}}denotes thepower set. Then the Cartesian product of the setsA{\displaystyle A}andB{\displaystyle B}would be defined as[6]A×B={x∈P(P(A∪B))∣∃a∈A∃b∈B:x=(a,b)}.{\displaystyle A\times B=\{x\in {\mathcal {P}}({\mathcal {P}}(A\cup B))\mid \exists a\in A\ \exists b\in B:x=(a,b)\}.} An illustrative example is thestandard 52-card deck. Thestandard playing cardranks {A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2} form a 13-element set. 
The card suits{♠,♥,♦, ♣} form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52ordered pairs, which correspond to all 52 possible playing cards. Ranks×Suitsreturns a set of the form {(A, ♠), (A,♥), (A,♦), (A, ♣), (K, ♠), ..., (3, ♣), (2, ♠), (2,♥), (2,♦), (2, ♣)}. Suits×Ranksreturns a set of the form {(♠, A), (♠, K), (♠, Q), (♠, J), (♠, 10), ..., (♣, 6), (♣, 5), (♣, 4), (♣, 3), (♣, 2)}. These two sets are distinct, evendisjoint, but there is a naturalbijectionbetween them, under which (3, ♣) corresponds to (♣, 3) and so on. The main historical example is theCartesian planeinanalytic geometry. In order to represent geometrical shapes in a numerical way, and extract numerical information from shapes' numerical representations,René Descartesassigned to each point in the plane a pair ofreal numbers, called itscoordinates. Usually, such a pair's first and second components are called itsxandycoordinates, respectively (see picture). The set of all such pairs (i.e., the Cartesian productR×R{\displaystyle \mathbb {R} \times \mathbb {R} }, withR{\displaystyle \mathbb {R} }denoting the real numbers) is thus assigned to the set of all points in the plane.[7] A formal definition of the Cartesian product fromset-theoreticalprinciples follows from a definition ofordered pair. The most common definition of ordered pairs,Kuratowski's definition, is(x,y)={{x},{x,y}}{\displaystyle (x,y)=\{\{x\},\{x,y\}\}}. Under this definition,(x,y){\displaystyle (x,y)}is an element ofP(P(X∪Y)){\displaystyle {\mathcal {P}}({\mathcal {P}}(X\cup Y))}, andX×Y{\displaystyle X\times Y}is a subset of that set, whereP{\displaystyle {\mathcal {P}}}represents thepower setoperator. Therefore, the existence of the Cartesian product of any two sets inZFCfollows from the axioms ofpairing,union,power set, andspecification. 
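The set-builder definition corresponds directly to a Python comprehension, and `itertools.product` enumerates the same pairs; the deck example can be reproduced exactly:

```python
from itertools import product

# The set-builder definition as a comprehension ...
A, B = {1, 2}, {"x", "y"}
assert {(a, b) for a in A for b in B} == set(product(A, B))

# ... and the 52-card deck as Ranks x Suits:
ranks = ["A", "K", "Q", "J", "10", "9", "8", "7", "6", "5", "4", "3", "2"]
suits = ["♠", "♥", "♦", "♣"]
deck = list(product(ranks, suits))

print(len(deck))          # 52
print(deck[0], deck[-1])  # ('A', '♠') ('2', '♣')

# Suits x Ranks is a different set, but in natural bijection with Ranks x Suits:
assert set(product(suits, ranks)) == {(s, r) for (r, s) in deck}
```

The final assertion is the swap bijection described above, under which (3, ♣) corresponds to (♣, 3).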
Sincefunctionsare usually defined as a special case ofrelations, and relations are usually defined as subsets of the Cartesian product, the definition of the two-set Cartesian product is necessarily prior to most other definitions. LetA,B,C, andDbe sets. The Cartesian productA×Bis notcommutative,A×B≠B×A,{\displaystyle A\times B\neq B\times A,}[4]because theordered pairsare reversed unless at least one of the following conditions is satisfied:[8] For example: Strictly speaking, the Cartesian product is notassociative(unless one of the involved sets is empty).(A×B)×C≠A×(B×C){\displaystyle (A\times B)\times C\neq A\times (B\times C)}If for exampleA= {1}, then(A×A) ×A= {((1, 1), 1)} ≠{(1, (1, 1))} =A× (A×A). A= [1,4],B= [2,5], andC= [4,7], demonstratingA× (B∩C)= (A×B) ∩ (A×C),A× (B∪C) = (A×B) ∪ (A×C), and A= [2,5],B= [3,7],C= [1,3],D= [2,4], demonstrating The Cartesian product satisfies the following property with respect tointersections(see middle picture).(A∩B)×(C∩D)=(A×C)∩(B×D){\displaystyle (A\cap B)\times (C\cap D)=(A\times C)\cap (B\times D)} In most cases, the above statement is not true if we replace intersection withunion(see rightmost picture).(A∪B)×(C∪D)≠(A×C)∪(B×D){\displaystyle (A\cup B)\times (C\cup D)\neq (A\times C)\cup (B\times D)} In fact, we have that:(A×C)∪(B×D)=[(A∖B)×C]∪[(A∩B)×(C∪D)]∪[(B∖A)×D]{\displaystyle (A\times C)\cup (B\times D)=[(A\setminus B)\times C]\cup [(A\cap B)\times (C\cup D)]\cup [(B\setminus A)\times D]} For the set difference, we also have the following identity:(A×C)∖(B×D)=[A×(C∖D)]∪[(A∖B)×C]{\displaystyle (A\times C)\setminus (B\times D)=[A\times (C\setminus D)]\cup [(A\setminus B)\times C]} Here are some rules demonstrating distributivity with other operators (see leftmost picture):[8]A×(B∩C)=(A×B)∩(A×C),A×(B∪C)=(A×B)∪(A×C),A×(B∖C)=(A×B)∖(A×C),{\displaystyle {\begin{aligned}A\times (B\cap C)&=(A\times B)\cap (A\times C),\\A\times (B\cup C)&=(A\times B)\cup (A\times C),\\A\times (B\setminus C)&=(A\times B)\setminus (A\times 
C),\end{aligned}}}(A×B)∁=(A∁×B∁)∪(A∁×B)∪(A×B∁),{\displaystyle (A\times B)^{\complement }=\left(A^{\complement }\times B^{\complement }\right)\cup \left(A^{\complement }\times B\right)\cup \left(A\times B^{\complement }\right)\!,}whereA∁{\displaystyle A^{\complement }}denotes theabsolute complementofA. Other properties related withsubsetsare: if bothA,B≠∅, thenA×B⊆C×D⟺A⊆CandB⊆D.{\displaystyle {\text{if both }}A,B\neq \emptyset {\text{, then }}A\times B\subseteq C\times D\!\iff \!A\subseteq C{\text{ and }}B\subseteq D.}[9] Thecardinalityof a set is the number of elements of the set. For example, defining two sets:A= {a, b}andB= {5, 6}. Both setAand setBconsist of two elements each. Their Cartesian product, written asA×B, results in a new set which has the following elements: where each element ofAis paired with each element ofB, and where each pair makes up one element of the output set. The number of values in each element of the resulting set is equal to the number of sets whose Cartesian product is being taken; 2 in this case. The cardinality of the output set is equal to the product of the cardinalities of all the input sets. That is, In this case,|A×B| = 4 Similarly, and so on. The setA×Bisinfiniteif eitherAorBis infinite, and the other set is not the empty set.[10] The Cartesian product can be generalized to then-ary Cartesian productovernsetsX1, ...,Xnas the setX1×⋯×Xn={(x1,…,xn)∣xi∈Xifor everyi∈{1,…,n}}{\displaystyle X_{1}\times \cdots \times X_{n}=\{(x_{1},\ldots ,x_{n})\mid x_{i}\in X_{i}\ {\text{for every}}\ i\in \{1,\ldots ,n\}\}} ofn-tuples. If tuples are defined asnested ordered pairs, it can be identified with(X1× ... ×Xn−1) ×Xn. If a tuple is defined as a function on{1, 2, ...,n} that takes its value atito be thei-th element of the tuple, then the Cartesian productX1× ... 
×Xnis the set of functions{x:{1,…,n}→X1∪⋯∪Xn|x(i)∈Xifor everyi∈{1,…,n}}.{\displaystyle \{x:\{1,\ldots ,n\}\to X_{1}\cup \cdots \cup X_{n}\ |\ x(i)\in X_{i}\ {\text{for every}}\ i\in \{1,\ldots ,n\}\}.} TheCartesian squareof a setXis the Cartesian productX2=X×X. An example is the 2-dimensionalplaneR2=R×RwhereRis the set ofreal numbers:[1]R2is the set of all points(x,y)wherexandyare real numbers (see theCartesian coordinate system). Then-ary Cartesian powerof a setX, denotedXn{\displaystyle X^{n}}, can be defined asXn=X×X×⋯×X⏟n={(x1,…,xn)|xi∈Xfor everyi∈{1,…,n}}.{\displaystyle X^{n}=\underbrace {X\times X\times \cdots \times X} _{n}=\{(x_{1},\ldots ,x_{n})\ |\ x_{i}\in X\ {\text{for every}}\ i\in \{1,\ldots ,n\}\}.} An example of this isR3=R×R×R, withRagain the set of real numbers,[1]and more generallyRn. Then-ary Cartesian power of a setXisisomorphicto the space of functions from ann-element set toX. As a special case, the 0-ary Cartesian power ofXmay be taken to be asingleton set, corresponding to theempty functionwithcodomainX. Let Cartesian products be givenA=A1×⋯×An{\displaystyle A=A_{1}\times \dots \times A_{n}}andB=B1×⋯×Bn{\displaystyle B=B_{1}\times \dots \times B_{n}}. Then Inn-tuple algebra(NTA),[12]such a matrix-like representation of Cartesian products is called aC-n-tuple. With this in mind, the union of some Cartesian products given in the same universe can be expressed as a matrix bounded by square brackets, in which the rows represent the Cartesian products involved in the union: Such a structure is called aC-systemin NTA. Then the complement of the Cartesian productA{\displaystyle A}will look like the followingC-system expressed as a matrix of the dimensionn×n{\displaystyle n\times n}: The diagonal components of this matrixAi∁{\displaystyle A_{i}^{\complement }}are equal correspondingly toXi∖Ai{\displaystyle X_{i}\setminus A_{i}}. 
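The distributivity, cardinality, and Cartesian-power facts above can be spot-checked on small finite sets; since the identities are theorems, any finite sample will do. A quick sketch using only the standard library:

```python
from itertools import product

def cartesian(X, Y):
    return {(x, y) for x in X for y in Y}

A, B, C, D = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {1, 3}

# Distributivity over intersection, union, and set difference:
assert cartesian(A, B & C) == cartesian(A, B) & cartesian(A, C)
assert cartesian(A, B | C) == cartesian(A, B) | cartesian(A, C)
assert cartesian(A, B - C) == cartesian(A, B) - cartesian(A, C)
# The product interacts with intersection componentwise:
assert cartesian(A & B, C & D) == cartesian(A, C) & cartesian(B, D)

# |A x B| = |A| * |B|:
assert len(cartesian(A, B)) == len(A) * len(B)

# n-ary Cartesian power: X^3 has 2**3 tuples, and X^0 is the singleton {()}.
X = {0, 1}
assert len(set(product(X, repeat=3))) == 8
assert set(product(X, repeat=0)) == {()}
print("all identities hold on this sample")
```

The `repeat=0` case mirrors the remark that the 0-ary power is a singleton containing the empty function (here, the empty tuple).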
In NTA, a diagonalC-systemA∁{\displaystyle A^{\complement }}, that represents the complement of aC-n-tupleA{\displaystyle A}, can be written concisely as a tuple of diagonal components bounded by inverted square brackets: This structure is called aD-n-tuple. Then the complement of theC-systemR{\displaystyle R}is a structureR∁{\displaystyle R^{\complement }}, represented by a matrix of the same dimension and bounded by inverted square brackets, in which all components are equal to the complements of the components of the initial matrixR{\displaystyle R}. Such a structure is called aD-system and is calculated, if necessary, as the intersection of theD-n-tuples contained in it. For instance, if the followingC-system is given: then its complement will be theD-system Let us consider some new relations for structures with Cartesian products obtained in the process of studying the properties of NTA.[12]The structures defined in the same universe are calledhomotypicones. It is possible to define the Cartesian product of an arbitrary (possiblyinfinite)indexed familyof sets. IfIis anyindex set, and{Xi}i∈I{\displaystyle \{X_{i}\}_{i\in I}}is a family of sets indexed byI, then the Cartesian product of the sets in{Xi}i∈I{\displaystyle \{X_{i}\}_{i\in I}}is defined to be∏i∈IXi={f:I→⋃i∈IXi|∀i∈I.f(i)∈Xi},{\displaystyle \prod _{i\in I}X_{i}=\left\{\left.f:I\to \bigcup _{i\in I}X_{i}\ \right|\ \forall i\in I.\ f(i)\in X_{i}\right\},}that is, the set of all functions defined on theindex setIsuch that the value of the function at a particular indexiis an element ofXi. 
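Elements of the product of an indexed family are choice functions on the index set; for a finite family they can be modelled as dicts, with evaluation at an index j giving the j-th projection map π_j. A sketch with an illustrative three-index family:

```python
from itertools import product

# An indexed family {X_i}: elements of the product are choice functions
# i -> X_i, modelled here as dicts over the index set.
family = {"i": {1, 2}, "j": {"a"}, "k": {True, False}}
indices = list(family)

elements = [dict(zip(indices, choice))
            for choice in product(*(family[i] for i in indices))]

def projection(j):
    """The j-th projection map: pi_j(f) = f(j)."""
    return lambda f: f[j]

print(len(elements))                           # 2 * 1 * 2 == 4
print({projection("j")(f) for f in elements})  # {'a'}
```

Because every factor here is non-empty and the family is finite, the product is non-empty without invoking the axiom of choice; the axiom is only needed for arbitrary infinite families.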
Even if each of theXiis nonempty, the Cartesian product may be empty if theaxiom of choice, which is equivalent to the statement that every such product is nonempty, is not assumed.∏i∈IXi{\displaystyle \prod _{i\in I}X_{i}}may also be denotedX{\displaystyle {\mathsf {X}}}i∈IXi{\displaystyle {}_{i\in I}X_{i}}.[13] For eachjinI, the functionπj:∏i∈IXi→Xj,{\displaystyle \pi _{j}:\prod _{i\in I}X_{i}\to X_{j},}defined byπj(f)=f(j){\displaystyle \pi _{j}(f)=f(j)}is called thej-thprojection map. Cartesian poweris a Cartesian product where all the factorsXiare the same setX. In this case,∏i∈IXi=∏i∈IX{\displaystyle \prod _{i\in I}X_{i}=\prod _{i\in I}X}is the set of all functions fromItoX, and is frequently denotedXI. This case is important in the study ofcardinal exponentiation. An important special case is when the index set isN{\displaystyle \mathbb {N} }, thenatural numbers: this Cartesian product is the set of all infinite sequences with thei-th term in its corresponding setXi. For example, each element of∏n=1∞R=R×R×⋯{\displaystyle \prod _{n=1}^{\infty }\mathbb {R} =\mathbb {R} \times \mathbb {R} \times \cdots }can be visualized as avectorwith countably infinite real number components. This set is frequently denotedRω{\displaystyle \mathbb {R} ^{\omega }}, orRN{\displaystyle \mathbb {R} ^{\mathbb {N} }}. If several sets are being multiplied together (e.g.,X1,X2,X3, ...), then some authors[14]choose to abbreviate the Cartesian product as simply×Xi. Iffis a function fromXtoAandgis a function fromYtoB, then their Cartesian productf×gis a function fromX×YtoA×Bwith(f×g)(x,y)=(f(x),g(y)).{\displaystyle (f\times g)(x,y)=(f(x),g(y)).} This can be extended totuplesand infinite collections of functions. This is different from the standard Cartesian product of functions considered as sets. LetA{\displaystyle A}be a set andB⊆A{\displaystyle B\subseteq A}. 
Then thecylinderofB{\displaystyle B}with respect toA{\displaystyle A}is the Cartesian productB×A{\displaystyle B\times A}ofB{\displaystyle B}andA{\displaystyle A}. Normally,A{\displaystyle A}is considered to be theuniverseof the context and is left away. For example, ifB{\displaystyle B}is a subset of the natural numbersN{\displaystyle \mathbb {N} }, then the cylinder ofB{\displaystyle B}isB×N{\displaystyle B\times \mathbb {N} }. Although the Cartesian product is traditionally applied to sets,category theoryprovides a more general interpretation of theproductof mathematical structures. This is distinct from, although related to, the notion of aCartesian squarein category theory, which is a generalization of thefiber product. Exponentiationis theright adjointof the Cartesian product; thus any category with a Cartesian product (and afinal object) is aCartesian closed category. Ingraph theory, theCartesian product of two graphsGandHis the graph denoted byG×H, whosevertexset is the (ordinary) Cartesian productV(G) ×V(H)and such that two vertices(u,v)and(u′,v′)are adjacent inG×H, if and only ifu=u′andvis adjacent withv′ inH,orv=v′anduis adjacent withu′ inG. The Cartesian product of graphs is not aproductin the sense of category theory. Instead, the categorical product is known as thetensor product of graphs.
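The adjacency rule for the Cartesian product of graphs can be sketched with adjacency-set dictionaries (a minimal illustration; the representation is our own choice):

```python
def graph_cartesian_product(G, H):
    """Cartesian product of graphs given as adjacency sets:
    (u, v) ~ (u2, v2) iff u == u2 and v ~ v2 in H, or v == v2 and u ~ u2 in G."""
    return {
        (u, v): {(u, w) for w in H[v]} | {(w, v) for w in G[u]}
        for u in G for v in H
    }

K2 = {0: {1}, 1: {0}}  # a single edge
square = graph_cartesian_product(K2, K2)

print(len(square))                                    # 4 vertices
print(sorted(len(nbrs) for nbrs in square.values()))  # [2, 2, 2, 2] -> a 4-cycle
```

The product of two single edges is the 4-cycle, the standard first example of this construction.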
https://en.wikipedia.org/wiki/Cartesian_product
Inmathematics, specificallyset theory, theCartesian productof twosetsAandB, denotedA×B, is the set of allordered pairs(a,b)whereais an element ofAandbis an element ofB.[1]In terms ofset-builder notation, that isA×B={(a,b)∣a∈Aandb∈B}.{\displaystyle A\times B=\{(a,b)\mid a\in A\ {\mbox{ and }}\ b\in B\}.}[2][3] A table can be created by taking the Cartesian product of a set of rows and a set of columns. If the Cartesian productrows×columnsis taken, the cells of the table contain ordered pairs of the form(row value, column value).[4] One can similarly define the Cartesian product ofnsets, also known as ann-fold Cartesian product, which can be represented by ann-dimensional array, where each element is ann-tuple. An ordered pair is a2-tuple or couple. More generally still, one can define the Cartesian product of anindexed familyof sets. The Cartesian product is named afterRené Descartes,[5]whose formulation ofanalytic geometrygave rise to the concept, which is further generalized in terms ofdirect product. A rigorous definition of the Cartesian product requires a domain to be specified in theset-builder notation. In this case the domain would have to contain the Cartesian product itself. For defining the Cartesian product of the setsA{\displaystyle A}andB{\displaystyle B}, with the typicalKuratowski's definitionof a pair(a,b){\displaystyle (a,b)}as{{a},{a,b}}{\displaystyle \{\{a\},\{a,b\}\}}, an appropriate domain is the setP(P(A∪B)){\displaystyle {\mathcal {P}}({\mathcal {P}}(A\cup B))}whereP{\displaystyle {\mathcal {P}}}denotes thepower set. Then the Cartesian product of the setsA{\displaystyle A}andB{\displaystyle B}would be defined as[6]A×B={x∈P(P(A∪B))∣∃a∈A∃b∈B:x=(a,b)}.{\displaystyle A\times B=\{x\in {\mathcal {P}}({\mathcal {P}}(A\cup B))\mid \exists a\in A\ \exists b\in B:x=(a,b)\}.} An illustrative example is thestandard 52-card deck. Thestandard playing cardranks {A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2} form a 13-element set. 
The card suits{♠,♥,♦, ♣} form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52ordered pairs, which correspond to all 52 possible playing cards. Ranks×Suitsreturns a set of the form {(A, ♠), (A,♥), (A,♦), (A, ♣), (K, ♠), ..., (3, ♣), (2, ♠), (2,♥), (2,♦), (2, ♣)}. Suits×Ranksreturns a set of the form {(♠, A), (♠, K), (♠, Q), (♠, J), (♠, 10), ..., (♣, 6), (♣, 5), (♣, 4), (♣, 3), (♣, 2)}. These two sets are distinct, evendisjoint, but there is a naturalbijectionbetween them, under which (3, ♣) corresponds to (♣, 3) and so on. The main historical example is theCartesian planeinanalytic geometry. In order to represent geometrical shapes in a numerical way, and extract numerical information from shapes' numerical representations,René Descartesassigned to each point in the plane a pair ofreal numbers, called itscoordinates. Usually, such a pair's first and second components are called itsxandycoordinates, respectively (see picture). The set of all such pairs (i.e., the Cartesian productR×R{\displaystyle \mathbb {R} \times \mathbb {R} }, withR{\displaystyle \mathbb {R} }denoting the real numbers) is thus assigned to the set of all points in the plane.[7] A formal definition of the Cartesian product fromset-theoreticalprinciples follows from a definition ofordered pair. The most common definition of ordered pairs,Kuratowski's definition, is(x,y)={{x},{x,y}}{\displaystyle (x,y)=\{\{x\},\{x,y\}\}}. Under this definition,(x,y){\displaystyle (x,y)}is an element ofP(P(X∪Y)){\displaystyle {\mathcal {P}}({\mathcal {P}}(X\cup Y))}, andX×Y{\displaystyle X\times Y}is a subset of that set, whereP{\displaystyle {\mathcal {P}}}represents thepower setoperator. Therefore, the existence of the Cartesian product of any two sets inZFCfollows from the axioms ofpairing,union,power set, andspecification. 
Since functions are usually defined as a special case of relations, and relations are usually defined as subsets of the Cartesian product, the definition of the two-set Cartesian product is necessarily prior to most other definitions.

Let A, B, C, and D be sets.

The Cartesian product A × B is not commutative, $A \times B \neq B \times A$,[4] because the ordered pairs are reversed, unless at least one of the following conditions is satisfied:[8] A = B, A is the empty set, or B is the empty set.

Strictly speaking, the Cartesian product is not associative (unless one of the involved sets is empty): $(A \times B) \times C \neq A \times (B \times C)$. If, for example, A = {1}, then (A × A) × A = {((1, 1), 1)} ≠ {(1, (1, 1))} = A × (A × A).

The figures use the intervals A = [1, 4], B = [2, 5], and C = [4, 7], demonstrating A × (B ∩ C) = (A × B) ∩ (A × C) and A × (B ∪ C) = (A × B) ∪ (A × C), and the intervals A = [2, 5], B = [3, 7], C = [1, 3], and D = [2, 4].

The Cartesian product satisfies the following property with respect to intersections (see middle picture):
$(A \cap B) \times (C \cap D) = (A \times C) \cap (B \times D)$

In most cases, the above statement is not true if we replace intersection with union (see rightmost picture):
$(A \cup B) \times (C \cup D) \neq (A \times C) \cup (B \times D)$

In fact, we have that:
$(A \times C) \cup (B \times D) = [(A \setminus B) \times C] \cup [(A \cap B) \times (C \cup D)] \cup [(B \setminus A) \times D]$

For the set difference, we also have the following identity:
$(A \times C) \setminus (B \times D) = [A \times (C \setminus D)] \cup [(A \setminus B) \times C]$

Here are some rules demonstrating distributivity with other operators (see leftmost picture):[8]
$A \times (B \cap C) = (A \times B) \cap (A \times C)$
$A \times (B \cup C) = (A \times B) \cup (A \times C)$
$A \times (B \setminus C) = (A \times B) \setminus (A \times C)$
$(A \times B)^{\complement} = (A^{\complement} \times B^{\complement}) \cup (A^{\complement} \times B) \cup (A \times B^{\complement})$
where $A^{\complement}$ denotes the absolute complement of A.

Other properties relate to subsets:
if both A, B ≠ ∅, then $A \times B \subseteq C \times D \iff A \subseteq C \text{ and } B \subseteq D$.[9]

The cardinality of a set is the number of elements of the set. For example, define two sets: A = {a, b} and B = {5, 6}. Both set A and set B consist of two elements each. Their Cartesian product, written as A × B, results in the new set {(a, 5), (a, 6), (b, 5), (b, 6)}, where each element of A is paired with each element of B, and each pair makes up one element of the output set. The number of values in each element of the resulting set is equal to the number of sets whose Cartesian product is being taken; 2 in this case. The cardinality of the output set is equal to the product of the cardinalities of all the input sets. That is, |A × B| = |A| · |B|; in this case, |A × B| = 4. Similarly, |A × B × C| = |A| · |B| · |C|, and so on.

The set A × B is infinite if either A or B is infinite, and the other set is not the empty set.[10]

The Cartesian product can be generalized to the n-ary Cartesian product over n sets X₁, ..., Xₙ as the set of n-tuples
$X_1 \times \cdots \times X_n = \{(x_1, \ldots, x_n) \mid x_i \in X_i \text{ for every } i \in \{1, \ldots, n\}\}.$
If tuples are defined as nested ordered pairs, it can be identified with (X₁ × ... × Xₙ₋₁) × Xₙ. If a tuple is defined as a function on {1, 2, ..., n} that takes its value at i to be the i-th element of the tuple, then the Cartesian product X₁ × ... × Xₙ is the set of functions
$\{x : \{1, \ldots, n\} \to X_1 \cup \cdots \cup X_n \mid x(i) \in X_i \text{ for every } i \in \{1, \ldots, n\}\}.$

The Cartesian square of a set X is the Cartesian product X² = X × X. An example is the 2-dimensional plane $\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}$, where $\mathbb{R}$ is the set of real numbers:[1] $\mathbb{R}^2$ is the set of all points (x, y) where x and y are real numbers (see the Cartesian coordinate system).

The n-ary Cartesian power of a set X, denoted $X^n$, can be defined as
$X^n = \underbrace{X \times X \times \cdots \times X}_{n} = \{(x_1, \ldots, x_n) \mid x_i \in X \text{ for every } i \in \{1, \ldots, n\}\}.$
An example of this is $\mathbb{R}^3 = \mathbb{R} \times \mathbb{R} \times \mathbb{R}$, with $\mathbb{R}$ again the set of real numbers,[1] and more generally $\mathbb{R}^n$.

The n-ary Cartesian power of a set X is isomorphic to the space of functions from an n-element set to X. As a special case, the 0-ary Cartesian power of X may be taken to be a singleton set, corresponding to the empty function with codomain X.

Let Cartesian products $A = A_1 \times \dots \times A_n$ and $B = B_1 \times \dots \times B_n$ be given. In n-tuple algebra (NTA),[12] such a matrix-like representation of Cartesian products is called a C-n-tuple. With this in mind, the union of some Cartesian products given in the same universe can be expressed as a matrix bounded by square brackets, in which the rows represent the Cartesian products involved in the union; such a structure is called a C-system in NTA. Then the complement of the Cartesian product A will look like a C-system expressed as a matrix of dimension n × n, whose diagonal components $A_i^{\complement}$ are equal correspondingly to $X_i \setminus A_i$.
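The algebraic properties and the counting facts above can be checked directly on small finite sets; a Python sketch (the particular sets A, B, C, D are illustrative choices, not from the source):

```python
from itertools import product

A, B, C, D = {1, 2}, {2, 3}, {0, 1}, {1, 4}

def cart(X, Y):
    """Cartesian product of two sets, as a set of ordered pairs."""
    return {(x, y) for x in X for y in Y}

# Not commutative: the ordered pairs are reversed.
assert cart(A, B) != cart(B, A)

# Not associative: ((a, b), c) and (a, (b, c)) are different tuples.
left  = {((a, b), c) for (a, b) in cart(A, B) for c in C}
right = {(a, (b, c)) for a in A for (b, c) in cart(B, C)}
assert left != right

# Distributes over intersection in both factors, but in general not over union:
assert cart(A & B, C & D) == cart(A, C) & cart(B, D)
assert cart(A | B, C | D) != cart(A, C) | cart(B, D)

# Cardinality multiplies: |A × B| = |A| · |B|.
assert len(cart(A, B)) == len(A) * len(B)

# n-ary Cartesian power X^n via itertools.product's repeat argument.
X = {0, 1}
cube = set(product(X, repeat=3))   # X³: all 3-tuples over {0, 1}
assert len(cube) == len(X) ** 3    # 2³ = 8
```

Each assertion mirrors one identity from the text; running the block raises no errors.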
In NTA, a diagonal C-system $A^{\complement}$ that represents the complement of a C-n-tuple A can be written concisely as a tuple of diagonal components bounded by inverted square brackets. This structure is called a D-n-tuple. Then the complement of a C-system R is a structure $R^{\complement}$, represented by a matrix of the same dimension bounded by inverted square brackets, in which all components are equal to the complements of the components of the initial matrix R. Such a structure is called a D-system and is calculated, if necessary, as the intersection of the D-n-tuples contained in it. For instance, the complement of a given C-system is the corresponding D-system.

Some new relations for structures with Cartesian products were obtained in the process of studying the properties of NTA;[12] structures defined in the same universe are called homotypic.

It is possible to define the Cartesian product of an arbitrary (possibly infinite) indexed family of sets. If I is any index set, and $\{X_i\}_{i \in I}$ is a family of sets indexed by I, then the Cartesian product of the sets in $\{X_i\}_{i \in I}$ is defined to be
$\prod_{i \in I} X_i = \left\{ f : I \to \bigcup_{i \in I} X_i \;\middle|\; \forall i \in I.\ f(i) \in X_i \right\},$
that is, the set of all functions defined on the index set I such that the value of the function at a particular index i is an element of $X_i$.
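For a finite index set, the "product as a set of choice functions" definition can be made concrete by representing each function as a dict from index to chosen value; a sketch under illustrative names:

```python
from itertools import product

# Indexed family {X_i}_{i in I} as a dict {index: set}.
family = {"i": {1, 2}, "j": {"a"}, "k": {True, False}}

indices = list(family)
choice_functions = [
    dict(zip(indices, values))
    for values in product(*(family[i] for i in indices))
]

# Every choice function picks f(i) from X_i, and there are ∏ |X_i| of them.
assert all(f[i] in family[i] for f in choice_functions for i in family)
assert len(choice_functions) == 2 * 1 * 2
```

For an infinite index set no such enumeration exists, which is exactly where the axiom of choice enters (see below).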
Even if each of the $X_i$ is nonempty, the Cartesian product may be empty if the axiom of choice, which is equivalent to the statement that every such product is nonempty, is not assumed. $\prod_{i \in I} X_i$ may also be denoted $\mathsf{X}_{i \in I}\, X_i$.[13]

For each j in I, the function
$\pi_j : \prod_{i \in I} X_i \to X_j,$
defined by $\pi_j(f) = f(j)$, is called the j-th projection map.

Cartesian power is a Cartesian product in which all the factors $X_i$ are the same set X. In this case, $\prod_{i \in I} X_i = \prod_{i \in I} X$ is the set of all functions from I to X, and is frequently denoted $X^I$. This case is important in the study of cardinal exponentiation. An important special case is when the index set is $\mathbb{N}$, the natural numbers: this Cartesian product is the set of all infinite sequences with the i-th term in its corresponding set $X_i$. For example, each element of
$\prod_{n=1}^{\infty} \mathbb{R} = \mathbb{R} \times \mathbb{R} \times \cdots$
can be visualized as a vector with countably infinite real number components. This set is frequently denoted $\mathbb{R}^{\omega}$ or $\mathbb{R}^{\mathbb{N}}$.

If several sets are being multiplied together (e.g., X₁, X₂, X₃, ...), then some authors[14] choose to abbreviate the Cartesian product as simply $\times X_i$.

If f is a function from X to A and g is a function from Y to B, then their Cartesian product f × g is a function from X × Y to A × B with
$(f \times g)(x, y) = (f(x), g(y)).$
This can be extended to tuples and infinite collections of functions. This is different from the standard Cartesian product of functions considered as sets.

Let A be a set and $B \subseteq A$.
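The projection maps and the product of functions translate into one-liners; a small sketch (function and variable names are illustrative):

```python
def proj(j):
    """The j-th projection map π_j: picks the j-th component of a tuple."""
    return lambda t: t[j]

pair = ("x", 42)
assert proj(0)(pair) == "x" and proj(1)(pair) == 42

def product_map(f, g):
    """The Cartesian product of functions: (f × g)(x, y) = (f(x), g(y))."""
    return lambda x, y: (f(x), g(y))

h = product_map(str.upper, abs)
assert h("ab", -3) == ("AB", 3)
```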
Then the cylinder of B with respect to A is the Cartesian product B × A of B and A. Normally, A is considered to be the universe of the context and is left away. For example, if B is a subset of the natural numbers $\mathbb{N}$, then the cylinder of B is $B \times \mathbb{N}$.

Although the Cartesian product is traditionally applied to sets, category theory provides a more general interpretation of the product of mathematical structures. This is distinct from, although related to, the notion of a Cartesian square in category theory, which is a generalization of the fiber product. Exponentiation is the right adjoint of the Cartesian product; thus any category with a Cartesian product (and a final object) is a Cartesian closed category.

In graph theory, the Cartesian product of two graphs G and H is the graph denoted by G × H, whose vertex set is the (ordinary) Cartesian product V(G) × V(H), and such that two vertices (u, v) and (u′, v′) are adjacent in G × H if and only if u = u′ and v is adjacent to v′ in H, or v = v′ and u is adjacent to u′ in G. The Cartesian product of graphs is not a product in the sense of category theory; instead, the categorical product is known as the tensor product of graphs.
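The adjacency rule for the graph Cartesian product can be sanity-checked on tiny graphs; a Python sketch (the example graphs, an edge K2 and a path P3, and all helper names are illustrative):

```python
from itertools import product

G_edges = {(1, 2)}                  # G: single edge 1–2 (K2)
H_edges = {("a", "b"), ("b", "c")}  # H: path a–b–c (P3)

def adjacent(edges, x, y):
    """Undirected adjacency from a set of edge pairs."""
    return (x, y) in edges or (y, x) in edges

def graph_product_edges(g_nodes, h_nodes, g_edges, h_edges):
    """(u, v) ~ (u2, v2) iff u == u2 and v ~ v2 in H, or v == v2 and u ~ u2 in G."""
    verts = list(product(g_nodes, h_nodes))
    return {
        ((u, v), (u2, v2))
        for (u, v) in verts for (u2, v2) in verts
        if (u == u2 and adjacent(h_edges, v, v2))
        or (v == v2 and adjacent(g_edges, u, u2))
    }

E = graph_product_edges({1, 2}, {"a", "b", "c"}, G_edges, H_edges)
# K2 with P3 gives the 2×3 grid graph: 7 undirected edges, 14 ordered pairs here.
assert len(E) == 14
assert ((1, "a"), (2, "a")) in E and ((1, "a"), (1, "b")) in E
```

The 2×3 grid is the expected result: one factor contributes the "rungs", the other the "rails".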
https://en.wikipedia.org/wiki/Cartesian_square
In mathematics, the notion of cylindric algebra, developed by Alfred Tarski, arises naturally in the algebraization of first-order logic with equality. This is comparable to the role Boolean algebras play for propositional logic. Cylindric algebras are Boolean algebras equipped with additional cylindrification operations that model quantification and equality. They differ from polyadic algebras in that the latter do not model equality. The cylindric algebra should not be confused with the measure-theoretic concept of cylindrical algebra, which arises in the study of cylinder set measures and the cylindrical σ-algebra.

A cylindric algebra of dimension $\alpha$ (where $\alpha$ is any ordinal number) is an algebraic structure $(A, +, \cdot, -, 0, 1, c_{\kappa}, d_{\kappa\lambda})_{\kappa, \lambda < \alpha}$ such that $(A, +, \cdot, -, 0, 1)$ is a Boolean algebra, $c_{\kappa}$ is a unary operator on A for every $\kappa$ (called a cylindrification), and $d_{\kappa\lambda}$ is a distinguished element of A for every $\kappa$ and $\lambda$ (called a diagonal), such that the following hold:

Assuming a presentation of first-order logic without function symbols, the operator $c_{\kappa} x$ models existential quantification over variable $\kappa$ in formula x, while the operator $d_{\kappa\lambda}$ models the equality of variables $\kappa$ and $\lambda$.
Hence, reformulated using standard logical notations, the axioms read as follows.

A cylindric set algebra of dimension $\alpha$ is an algebraic structure $(A, \cup, \cap, -, \emptyset, X^{\alpha}, c_{\kappa}, d_{\kappa\lambda})_{\kappa, \lambda < \alpha}$ such that $\langle X^{\alpha}, A \rangle$ is a field of sets, $c_{\kappa} S$ is given by $\{y \in X^{\alpha} \mid \exists x \in S\ \forall \beta \neq \kappa\ y(\beta) = x(\beta)\}$, and $d_{\kappa\lambda}$ is given by $\{x \in X^{\alpha} \mid x(\kappa) = x(\lambda)\}$.[1] It necessarily validates the axioms C1–C7 of a cylindric algebra, with $\cup$ instead of $+$, $\cap$ instead of $\cdot$, set complement for complement, the empty set as 0, $X^{\alpha}$ as the unit, and $\subseteq$ instead of $\leq$. The set X is called the base.

A representation of a cylindric algebra is an isomorphism from that algebra to a cylindric set algebra. Not every cylindric algebra has a representation as a cylindric set algebra.[2] It is easier to connect the semantics of first-order predicate logic with cylindric set algebra. (For more details, see § Further reading.)

Cylindric algebras have been generalized to the case of many-sorted logic (Caleiro and Gonçalves 2006), which allows for a better modeling of the duality between first-order formulas and terms.

When $\alpha = 1$ and $\kappa, \lambda$ are restricted to being only 0, then $c_{\kappa}$ becomes $\exists$, the diagonals can be dropped out, and the following theorem of cylindric algebra (Pinter 1973) turns into the axiom of monadic Boolean algebra; the axiom (C4) drops out (becomes a tautology).
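The cylindrification and diagonal operations of a cylindric set algebra can be made concrete for a finite base; a sketch in Python for dimension 2 over the base X = {0, 1} (all names and the particular checks are illustrative):

```python
from itertools import product

X = (0, 1)
alpha = 2
space = set(product(X, repeat=alpha))  # X^α: all α-sequences over the base

def c(k, S):
    """Cylindrification c_k: all points agreeing with some s in S off coordinate k."""
    return {y for y in space
            if any(all(y[b] == s[b] for b in range(alpha) if b != k) for s in S)}

def d(k, l):
    """Diagonal d_kl: the points whose k-th and l-th coordinates are equal."""
    return {x for x in space if x[k] == x[l]}

S = {(0, 1)}
assert c(0, S) == {(0, 1), (1, 1)}   # "quantify away" coordinate 0
assert d(0, 1) == {(0, 0), (1, 1)}
assert c(0, set()) == set()          # c_k ∅ = ∅
assert S <= c(0, S)                  # S ⊆ c_k S
```

The last two assertions check instances of the cylindric-algebra axioms in their set-algebra form, with ∪, ∩, and ⊆ standing in for +, ·, and ≤.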
Thus monadic Boolean algebra can be seen as a restriction of cylindric algebra to the one variable case.
https://en.wikipedia.org/wiki/Cylindric_algebra
The extension of a predicate – a truth-valued function – is the set of tuples of values that, used as arguments, satisfy the predicate. Such a set of tuples is a relation.

For example, the statement "d₂ follows the weekday d₁" can be seen as a truth function associating to each tuple (d₂, d₁) the value true or false. The extension of this truth function is, by convention, the set of all such tuples associated with the value true. By examining this extension, we can conclude that "Tuesday follows the weekday Saturday" (for example) is false.

Using set-builder notation, the extension of the n-ary predicate $\Phi$ can be written as
$\{(a_1, \ldots, a_n) \mid \Phi(a_1, \ldots, a_n)\}.$

If the values 0 and 1 in the range of a characteristic function are identified with the values false and true, respectively – making the characteristic function a predicate – then for all relations R and predicates $\Phi$ the following two statements are equivalent: $\Phi$ is the characteristic function of R, and R is the extension of $\Phi$.
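The weekday example translates directly into a set-builder comprehension; a Python sketch (the day list and helper name are illustrative):

```python
from itertools import product

days = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")

def follows(d2, d1):
    """Truth-valued function: does weekday d2 immediately follow d1?"""
    return days.index(d2) == (days.index(d1) + 1) % 7

# Extension of the predicate: { (d2, d1) | follows(d2, d1) }
extension = {(d2, d1) for d2, d1 in product(days, repeat=2) if follows(d2, d1)}

assert ("Wed", "Tue") in extension       # "Wednesday follows Tuesday" is true
assert ("Tue", "Sat") not in extension   # "Tuesday follows Saturday" is false
assert len(extension) == 7               # one true pair per day of the week
```

Here `follows` plays the role of the characteristic function, and `extension` is the relation it determines, illustrating the equivalence stated above.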
https://en.wikipedia.org/wiki/Extension_(predicate_logic)
Charles Sanders Peirce(/pɜːrs/[a][8]PURSS; September 10, 1839 – April 19, 1914) was an American scientist, mathematician,logician, and philosopher who is sometimes known as "the father ofpragmatism".[9][10]According to philosopherPaul Weiss, Peirce was "the most original and versatile of America's philosophers and America's greatest logician".[11]Bertrand Russellwrote "he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever".[12] Educated as a chemist and employed as a scientist for thirty years, Peirce meanwhile made major contributions to logic, such as theories ofrelationsandquantification.C. I. Lewiswrote, "The contributions of C. S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century." For Peirce, logic also encompassed much of what is now calledepistemologyand thephilosophy of science. He saw logic as the formal branch ofsemioticsor study ofsigns, of which he is a founder, which foreshadowed the debate amonglogical positivistsand proponents ofphilosophy of languagethat dominated 20th-century Western philosophy. Peirce's study of signs also included atripartite theory of predication. Additionally, he defined the concept ofabductive reasoning, as well as rigorously formulatingmathematical inductionanddeductive reasoning. He was one of thefounders of statistics. As early as 1886, he saw thatlogical operations could be carried out by electrical switching circuits. The same idea was used decades later to produce digital computers.[13] Inmetaphysics, Peirce was an "objective idealist" in the tradition of German philosopherImmanuel Kantas well as ascholastic realistabout universals. He also held a commitment to the ideas of continuity and chance as real features of the universe, views he labeledsynechismandtychismrespectively. Peirce believed an epistemicfallibilismand anti-skepticismwent along with these views. 
Peirce was born at 3 Phillips Place inCambridge, Massachusetts. He was the son of Sarah Hunt Mills andBenjamin Peirce, himself a professor of mathematics andastronomyatHarvard University.[b]At age 12, Charles read his older brother's copy ofRichard Whately'sElements of Logic, then the leading English-language text on the subject. So began his lifelong fascination with logic and reasoning.[14] He suffered from his late teens onward from a nervous condition then known as "facial neuralgia", which would today be diagnosed astrigeminal neuralgia. His biographer, Joseph Brent, says that when in the throes of its pain "he was, at first, almost stupefied, and then aloof, cold, depressed, extremely suspicious, impatient of the slightest crossing, and subject to violent outbursts of temper".[15]Its consequences may have led to the social isolation of his later life. Peirce went on to earn a Bachelor of Arts degree and a Master of Arts degree (1862) from Harvard. In 1863 theLawrence Scientific Schoolawarded him aBachelor of Sciencedegree, Harvard's firstsumma cum laudechemistrydegree.[16]His academic record was otherwise undistinguished.[17]At Harvard, he began lifelong friendships withFrancis Ellingwood Abbot,Chauncey Wright, andWilliam James.[18]One of his Harvard instructors,Charles William Eliot, formed an unfavorable opinion of Peirce. 
This proved fateful, because Eliot, while President of Harvard (1869–1909—a period encompassing nearly all of Peirce's working life), repeatedly vetoed Peirce's employment at the university.[19] Between 1859 and 1891, Peirce was intermittently employed in various scientific capacities by the United States Coast Survey, which in 1878 was renamed theUnited States Coast and Geodetic Survey,[20]where he enjoyed his highly influential father's protection until the latter's death in 1880.[21]At the Survey, he worked mainly ingeodesyandgravimetry, refining the use ofpendulumsto determine small local variations in the Earth'sgravity.[20] This employment exempted Peirce from having to take part in theAmerican Civil War; it would have been very awkward for him to do so, as theBoston BrahminPeirces sympathized with theConfederacy.[22]No members of the Peirce family volunteered or enlisted. Peirce grew up in a home where white supremacy was taken for granted, and slavery was considered natural.[23]Peirce's father had described himself as asecessionistuntil the outbreak of the war, after which he became aUnionpartisan, providing donations to theSanitary Commission, the leading Northern war charity. Peirce liked to use the followingsyllogismto illustrate the unreliability oftraditionalforms of logic (for the first premise arguablyassumes the conclusion):[24] All Men are equal in their political rights.Negroes are Men.Therefore, negroes are equal in political rights to whites. He was elected a resident fellow of theAmerican Academy of Arts and Sciencesin January 1867.[25]The Survey sent him to Europe five times,[26]first in 1871 as part of a group sent to observe asolar eclipse. There, he sought outAugustus De Morgan,William Stanley Jevons, andWilliam Kingdon Clifford,[27]British mathematicians and logicians whose turn of mind resembled his own. 
From 1869 to 1872, he was employed as an assistant in Harvard's astronomical observatory, doing important work on determining the brightness of stars and the shape of the Milky Way.[28] In January 1872 he founded the Metaphysical Club, a conversational philosophical club in Cambridge, Massachusetts, formed together with the future Supreme Court Justice Oliver Wendell Holmes Jr. and the philosopher and psychologist William James, amongst others; it dissolved in December 1872. Other members of the club included Chauncey Wright, John Fiske, Francis Ellingwood Abbot, Nicholas St. John Green, and Joseph Bangs Warner.[29] The discussions eventually birthed Peirce's notion of pragmatism.

On April 20, 1877, he was elected a member of the National Academy of Sciences.[31] Also in 1877, he proposed measuring the meter as so many wavelengths of light of a certain frequency,[32] the kind of definition employed from 1960 to 1983. In 1879 Peirce developed the Peirce quincuncial projection, having been inspired by H. A. Schwarz's 1869 conformal transformation of a circle onto a polygon of n sides (known as the Schwarz–Christoffel mapping).

During the 1880s, Peirce's indifference to bureaucratic detail waxed while his Survey work's quality and timeliness waned.
Peirce took years to write reports that he should have completed in months.[according to whom?]Meanwhile, he wrote entries, ultimately thousands, during 1883–1909 on philosophy, logic, science, and other subjects for the encyclopedicCentury Dictionary.[33]In 1885, an investigation by theAllisonCommission exonerated Peirce, but led to the dismissal of SuperintendentJulius Hilgardand several other Coast Survey employees for misuse of public funds.[34]In 1891, Peirce resigned from the Coast Survey at SuperintendentThomas Corwin Mendenhall's request.[35] In 1879, Peirce was appointed lecturer in logic atJohns Hopkins University, which had strong departments in areas that interested him, such as philosophy (RoyceandDeweycompleted their PhDs at Hopkins), psychology (taught byG. Stanley Halland studied byJoseph Jastrow, who coauthored a landmark empirical study with Peirce), and mathematics (taught byJ. J. Sylvester, who came to admire Peirce's work on mathematics and logic). HisStudies in Logic by Members of the Johns Hopkins University(1883) contained works by himself andAllan Marquand,Christine Ladd,Benjamin Ives Gilman, and Oscar Howard Mitchell,[36]several of whom were his graduate students.[7]Peirce's nontenured position at Hopkins was the only academic appointment he ever held. Brent documents something Peirce never suspected, namely that his efforts to obtain academic employment, grants, and scientific respectability were repeatedly frustrated by the covert opposition of a major Canadian-American scientist of the day,Simon Newcomb.[37]Newcomb had been a favourite student of Peirce's father; although "no doubt quite bright", "likeSalieriinPeter Shaffer's Amadeushe also had just enough talent to recognize he was not a genius and just enough pettiness to resent someone who was". 
Additionally "an intensely devout and literal-minded Christian of rigid moral standards", he was appalled by what he considered Peirce's personal shortcomings.[38]Peirce's efforts may also have been hampered by what Brent characterizes as "his difficult personality".[39]In contrast,Keith Devlinbelieves that Peirce's work was too far ahead of his time to be appreciated by the academic establishment of the day and that this played a large role in his inability to obtain a tenured position.[40] Peirce's personal life undoubtedly worked against his professional success. After his first wife,Harriet Melusina Fay("Zina"), left him in 1875,[41]Peirce, while still legally married, became involved withJuliette, whose last name, given variously as Froissy and Pourtalai,[42]and nationality (she spoke French)[43]remain uncertain.[44]When his divorce from Zina became final in 1883, he married Juliette.[45]That year, Newcomb pointed out to a Johns Hopkins trustee that Peirce, while a Hopkins employee, had lived and traveled with a woman to whom he was not married; the ensuing scandal led to his dismissal in January 1884.[46]Over the years Peirce sought academic employment at various universities without success.[47]He had no children by either marriage.[48] In 1887, Peirce spent part of his inheritance from his parents to buy 2,000 acres (8 km2) of rural land nearMilford, Pennsylvania, which never yielded an economic return.[49]There he had an 1854 farmhouse remodeled to his design.[50]The Peirces named the property "Arisbe". There they lived with few interruptions for the rest of their lives,[51]Charles writing prolifically, with much of his work remaining unpublished to this day (seeWorks). Living beyond their means soon led to grave financial and legal difficulties.[52]Charles spent much of his last two decades unable to afford heat in winter and subsisting on old bread donated by the local baker. Unable to afford new stationery, he wrote on theversoside of old manuscripts. 
An outstanding warrant for assault and unpaid debts led to his being a fugitive in New York City for a while.[53]Several people, including his brotherJames Mills Peirce[54]and his neighbors, relatives ofGifford Pinchot, settled his debts and paid his property taxes and mortgage.[55] Peirce did some scientific and engineering consulting and wrote much for meager pay, mainly encyclopedic dictionary entries, and reviews forThe Nation(with whose editor,Wendell Phillips Garrison, he became friendly). He did translations for theSmithsonian Institution, at its directorSamuel Langley's instigation. Peirce also did substantial mathematical calculations for Langley's research on powered flight. Hoping to make money, Peirce tried inventing.[56]He began but did not complete several books.[57]In 1888, PresidentGrover Clevelandappointed him to theAssay Commission.[58] From 1890 on, he had a friend and admirer in Judge Francis C. Russell of Chicago,[59]who introduced Peirce to editorPaul Carusand ownerEdward C. Hegelerof the pioneering American philosophy journalThe Monist, which eventually published at least 14 articles by Peirce.[60]He wrote many texts inJames Mark Baldwin'sDictionary of Philosophy and Psychology(1901–1905); half of those credited to him appear to have been written actually byChristine Ladd-Franklinunder his supervision.[61]He applied in 1902 to the newly formedCarnegie Institutionfor a grant to write a systematic book describing his life's work. 
The application was doomed; his nemesis, Newcomb, served on the Carnegie Institution executive committee, and its president had been president of Johns Hopkins at the time of Peirce's dismissal.[62]

The one who did the most to help Peirce in these desperate times was his old friend William James, who dedicated his Will to Believe (1897) to Peirce and arranged for Peirce to be paid to give two series of lectures at or near Harvard (1898 and 1903).[63] Most important, each year from 1907 until James's death in 1910, James wrote to his friends in the Boston intelligentsia to request financial aid for Peirce; the fund continued even after James died. Peirce reciprocated by designating James's eldest son as his heir should Juliette predecease him.[64] It has been believed that this was also why Peirce used "Santiago" ("St. James" in English) as a middle name, but he appeared in print as early as 1890 as Charles Santiago Peirce. (See Charles Santiago Sanders Peirce for discussion and references.)

Peirce died destitute in Milford, Pennsylvania, twenty years before his widow. Juliette Peirce kept the urn with Peirce's ashes at Arisbe. In 1934, Pennsylvania Governor Gifford Pinchot arranged for Juliette's burial in Milford Cemetery. The urn with Peirce's ashes was interred with Juliette.[c]

Bertrand Russell (1959) wrote, "Beyond doubt [...] he was one of the most original minds of the later nineteenth century and certainly the greatest American thinker ever".[12] Russell and Whitehead's Principia Mathematica, published from 1910 to 1913, does not mention Peirce (Peirce's work was not widely known until later).[65] A. N. Whitehead, while reading some of Peirce's unpublished manuscripts soon after arriving at Harvard in 1924, was struck by how Peirce had anticipated his own "process" thinking. (On Peirce and process metaphysics, see Lowe 1964.[28]) Karl Popper viewed Peirce as "one of the greatest philosophers of all times".[66] Yet Peirce's achievements were not immediately recognized.
His imposing contemporaries William James and Josiah Royce[67] admired him, and Cassius Jackson Keyser at Columbia and C. K. Ogden wrote about Peirce with respect, but to no immediate effect.

The first scholar to give Peirce his considered professional attention was Royce's student Morris Raphael Cohen, the editor of an anthology of Peirce's writings entitled Chance, Love, and Logic (1923), and the author of the first bibliography of Peirce's scattered writings.[68] John Dewey studied under Peirce at Johns Hopkins.[7] From 1916 onward, Dewey's writings repeatedly mention Peirce with deference. His 1938 Logic: The Theory of Inquiry is much influenced by Peirce.[69] The publication of the first six volumes of the Collected Papers (1931–1935) was the most important event to date in Peirce studies, and one that Cohen made possible by raising the needed funds;[70] however, it did not prompt an outpouring of secondary studies. The editors of those volumes, Charles Hartshorne and Paul Weiss, did not become Peirce specialists. Early landmarks of the secondary literature include the monographs by Buchler (1939), Feibleman (1946), and Goudge (1950), the 1941 PhD thesis by Arthur W. Burks (who went on to edit volumes 7 and 8), and the studies edited by Wiener and Young (1952). The Charles S. Peirce Society was founded in 1946. Its Transactions, an academic quarterly specializing in Peirce's pragmatism and American philosophy, has appeared since 1965.[71] (See Phillips 2014, 62 for discussion of Peirce and Dewey relative to transactionalism.)

By 1943 such was Peirce's reputation, in the US at least, that Webster's Biographical Dictionary said that Peirce was "now regarded as the most original thinker and greatest logician of his time".[72]

In 1949, while doing unrelated archival work, the historian of mathematics Carolyn Eisele (1902–2000) chanced on an autograph letter by Peirce. So began her forty years of research on Peirce, "the mathematician and scientist," culminating in Eisele (1976, 1979, 1985).
In 1952, theScottishphilosopherW. B. Galliehad his bookPeirce and Pragmatism[73]published, which introduced the work of Peirce to an international readership.A.J. Ayer, the English philosopher, provided the Editorial Foreword to Gallie's book. In it he credited Peirce's philosophy as being 'not only of great historical significance, as one of the original sources of American pragmatism, but also extremely important in itself.' Ayer concluded: 'it is clear from Professor Gallie’s exposition of his doctrines that he is a philosopher from whom we still have much to learn.'[74] Beginning around 1960, Max Fisch (1900-1995),[75]the philosopher andhistorian of ideas, emerged as an authority on Peirce (Fisch, 1986).[76]He included many of his relevant articles in a survey (Fisch 1986: 422–448) of the impact of Peirce's thought through 1983. Peirce has gained an international following, marked by university research centers devoted to Peirce studies andpragmatismin Brazil (CeneP/CIEPandCentro de Estudos de Pragmatismo), Finland (HPRCandCommens), Germany (Wirth's group,Hoffman's and Otte's group, and Deuser's and Härle's group[77]), France (L'I.R.S.C.E.), Spain (GEP), and Italy (CSP). His writings have been translated into several languages, including German, French, Finnish, Spanish, and Swedish. Since 1950, there have been French, Italian, Spanish, British, and Brazilian Peirce scholars of note. For many years, the North American philosophy department most devoted to Peirce was theUniversity of Toronto, thanks in part to the leadership ofThomas Goudgeand David Savan. In recent years, U.S. Peirce scholars have clustered atIndiana University – Purdue University Indianapolis, home of thePeirce Edition Project(PEP) –, andPennsylvania State University. Currently, considerable interest is being taken in Peirce's ideas by researchers wholly outside the arena of academic philosophy. 
The interest comes from industry, business, technology, intelligence organizations, and the military; and it has resulted in the existence of a substantial number of agencies, institutes, businesses, and laboratories in which ongoing research into and development of Peircean concepts are being vigorously undertaken. In recent years, Peirce'strichotomyof signs is exploited by a growing number of practitioners for marketing and design tasks. John Deelywrites that Peirce was the last of the "moderns" and "first of the postmoderns". He lauds Peirce's doctrine of signs as a contribution to the dawn of thePostmodernepoch. Deely additionally comments that "Peirce stands...in a position analogous to the position occupied byAugustineas last of the WesternFathersand first of the medievals".[78] Peirce's reputation rests largely on academic papers published in American scientific and scholarly journals such asProceedings of theAmerican Academy of Arts and Sciences, theJournal of Speculative Philosophy,The Monist,Popular ScienceMonthly, theAmerican Journal of Mathematics,Memoirs of theNational Academy of Sciences,The Nation, and others. SeeArticles by Peirce, published in his lifetimefor an extensive list with links to them online. The only full-length book (neither extract nor pamphlet) that Peirce authored and saw published in his lifetime[79]wasPhotometric Researches(1878), a 181-page monograph on the applications of spectrographic methods to astronomy. While at Johns Hopkins, he editedStudies in Logic(1883), containing chapters by himself and hisgraduate students. Besides lectures during his years (1879–1884) as lecturer in Logic at Johns Hopkins, he gave at least nine series of lectures, many now published; seeLectures by Peirce. After Peirce's death,Harvard Universityobtained from Peirce's widow the papers found in his study, but did not microfilm them until 1964. 
Only after Richard Robin (1967)[80] catalogued this Nachlass did it become clear that Peirce had left approximately 1,650 unpublished manuscripts, totaling over 100,000 pages,[81] mostly still unpublished except on microfilm. On the vicissitudes of Peirce's papers, see Houser (1989).[82] Reportedly the papers remain in unsatisfactory condition.[83] The first published anthology of Peirce's articles was the one-volume Chance, Love and Logic: Philosophical Essays, edited by Morris Raphael Cohen, 1923, still in print. Other one-volume anthologies were published in 1940, 1957, 1958, 1972, 1994, and 2009, most still in print. The main posthumous editions[84] of Peirce's works in their long trek to light, often multi-volume, and some still in print, have included: 1931–1958: Collected Papers of Charles Sanders Peirce (CP), 8 volumes, includes many published works, along with a selection of previously unpublished work and a smattering of his correspondence. This long-time standard edition drawn from Peirce's work from the 1860s to 1913 remains the most comprehensive survey of his prolific output from 1893 to 1913. It is organized thematically, but texts (including lecture series) are often split up across volumes, while texts from various stages in Peirce's development are often combined, requiring frequent visits to editors' notes.[85] Edited (1–6) by Charles Hartshorne and Paul Weiss and (7–8) by Arthur Burks, in print and online. 1975–1987: Charles Sanders Peirce: Contributions to The Nation, 4 volumes, includes Peirce's more than 300 reviews and articles published 1869–1908 in The Nation. Edited by Kenneth Laine Ketner and James Edward Cook, online. 1976: The New Elements of Mathematics by Charles S. Peirce, 4 volumes in 5, included many previously unpublished Peirce manuscripts on mathematical subjects, along with Peirce's important published mathematical articles. Edited by Carolyn Eisele, back in print. 1977: Semiotic and Significs: The Correspondence between C. S.
Peirce and Victoria Lady Welby (2nd edition 2001), included Peirce's entire correspondence (1903–1912) with Victoria, Lady Welby. Peirce's other published correspondence is largely limited to the 14 letters included in volume 8 of the Collected Papers, and the 20-odd pre-1890 items included so far in the Writings. Edited by Charles S. Hardwick with James Cook, out of print. 1982–now: Writings of Charles S. Peirce, A Chronological Edition (W), Volumes 1–6 & 8, of a projected 30. The limited coverage, and defective editing and organization, of the Collected Papers led Max Fisch and others in the 1970s to found the Peirce Edition Project (PEP), whose mission is to prepare a more complete critical chronological edition. Only seven volumes have appeared to date, but they cover the period from 1859 to 1892, when Peirce carried out much of his best-known work. Writings of Charles S. Peirce, 8 was published in November 2010; and work continues on Writings of Charles S. Peirce, 7, 9, and 11. In print and online. 1985: Historical Perspectives on Peirce's Logic of Science: A History of Science, 2 volumes. Auspitz has said,[86] "The extent of Peirce's immersion in the science of his day is evident in his reviews in the Nation [...] and in his papers, grant applications, and publishers' prospectuses in the history and practice of science", referring latterly to Historical Perspectives. Edited by Carolyn Eisele, back in print. 1992: Reasoning and the Logic of Things collects in one place Peirce's 1898 series of lectures invited by William James. Edited by Kenneth Laine Ketner, with commentary by Hilary Putnam, in print. 1992–1998: The Essential Peirce (EP), 2 volumes, is an important recent sampler of Peirce's philosophical writings. Edited (1) by Nathan Hauser and Christian Kloesel and (2) by Peirce Edition Project editors, in print.
1997: Pragmatism as a Principle and Method of Right Thinking collects Peirce's 1903 Harvard "Lectures on Pragmatism" in a study edition, including drafts of Peirce's lecture manuscripts, which had been previously published in abridged form; the lectures now also appear in The Essential Peirce, 2. Edited by Patricia Ann Turisi, in print. 2010: Philosophy of Mathematics: Selected Writings collects important writings by Peirce on the subject, many not previously in print. Edited by Matthew E. Moore, in print. Peirce's most important work in pure mathematics was in logical and foundational areas. He also worked on linear algebra, matrices, various geometries, topology and Listing numbers, Bell numbers, graphs, the four-color problem, and the nature of continuity. He worked on applied mathematics in economics, engineering, and map projections, and was especially active in probability and statistics.[87] Peirce made a number of striking discoveries in formal logic and foundational mathematics, nearly all of which came to be appreciated only long after he died: In 1860,[88] he suggested a cardinal arithmetic for infinite numbers, years before any work by Georg Cantor (who completed his dissertation in 1867) and without access to Bernard Bolzano's 1851 (posthumous) Paradoxien des Unendlichen. In 1880–1881,[89] he showed how Boolean algebra could be done via a repeated sufficient single binary operation (logical NOR), anticipating Henry M. Sheffer by 33 years. (See also De Morgan's Laws.) In 1881,[90] he set out the axiomatization of natural number arithmetic, a few years before Richard Dedekind and Giuseppe Peano. In the same paper Peirce gave, years before Dedekind, the first purely cardinal definition of a finite set in the sense now known as "Dedekind-finite", and implied by the same stroke an important formal definition of an infinite set (Dedekind-infinite), as a set that can be put into a one-to-one correspondence with one of its proper subsets.
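The sufficiency of a single binary operation can be shown in a few lines. The sketch below is a modern illustration (not Peirce's own notation): each of the standard connectives NOT, OR, and AND is defined by repeated application of NOR alone, and the derived definitions are checked against the built-in operators on all inputs.

```python
def nor(a: bool, b: bool) -> bool:
    """NOR: true only when both inputs are false."""
    return not (a or b)

# Every other classical connective falls out of repeated NOR:
def not_(a: bool) -> bool:
    return nor(a, a)

def or_(a: bool, b: bool) -> bool:
    return nor(nor(a, b), nor(a, b))

def and_(a: bool, b: bool) -> bool:
    return nor(nor(a, a), nor(b, b))

# Verify the derived connectives over the full truth table.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
```

Since NOT, OR, and AND together express every Boolean function, NOR by itself is functionally complete, which is the content of the claim credited above to Peirce (and later to Sheffer).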
In 1885,[91] he distinguished between first-order and second-order quantification.[92][d] In the same paper he set out what can be read as the first (primitive) axiomatic set theory, anticipating Zermelo by about two decades (Brady 2000,[93] pp. 132–133). In 1886, he saw that Boolean calculations could be carried out via electrical switches,[13] anticipating Claude Shannon by more than 50 years. By the later 1890s[94] he was devising existential graphs, a diagrammatic notation for the predicate calculus. Based on them are John F. Sowa's conceptual graphs and Sun-Joo Shin's diagrammatic reasoning. Peirce wrote drafts for an introductory textbook, with the working title The New Elements of Mathematics, that presented mathematics from an original standpoint. Those drafts and many other of his previously unpublished mathematical manuscripts finally appeared[87] in The New Elements of Mathematics by Charles S. Peirce (1976), edited by mathematician Carolyn Eisele. Peirce agreed with Auguste Comte in regarding mathematics as more basic than philosophy and the special sciences (of nature and mind). Peirce classified mathematics into three subareas: (1) mathematics of logic, (2) discrete series, and (3) pseudo-continua (as he called them, including the real numbers) and continua. Influenced by his father Benjamin, Peirce argued that mathematics studies purely hypothetical objects and is not just the science of quantity but is more broadly the science which draws necessary conclusions; that mathematics aids logic, not vice versa; and that logic itself is part of philosophy and is the science about drawing conclusions necessary and otherwise.[95] Peirce held that science achieves statistical probabilities, not certainties, and that spontaneity ("absolute chance") is real (see Tychism on his view).
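The 1886 observation that Boolean calculation can be done with electrical switches can be illustrated with a modern reconstruction (not Peirce's actual circuit): switches wired in series conduct only if all are closed, computing AND, while switches wired in parallel conduct if any is closed, computing OR. This is the correspondence Shannon later developed systematically.

```python
def series(*switches: bool) -> bool:
    """Current flows through a series chain only if every switch is closed (AND)."""
    return all(switches)

def parallel(*switches: bool) -> bool:
    """Current flows through parallel branches if any switch is closed (OR)."""
    return any(switches)

# Example circuit: a lamp lit when (a AND b) OR c has a closed path.
def lamp(a: bool, b: bool, c: bool) -> bool:
    return parallel(series(a, b), c)

assert lamp(True, True, False) == True    # both series switches closed
assert lamp(False, False, True) == True   # bypass branch closed
assert lamp(True, False, False) == False  # no complete path
```

Composing `series` and `parallel` in this way suffices to realize any monotone Boolean function; adding a normally-closed switch (negation) yields all of them.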
Most of his statistical writings promote the frequency interpretation of probability (objective ratios of cases), and many of his writings express skepticism about (and criticize the use of) probability when such models are not based on objective randomization.[e] Though Peirce was largely a frequentist, his possible world semantics introduced the "propensity" theory of probability before Karl Popper.[96][97] Peirce (sometimes with Joseph Jastrow) investigated the probability judgments of experimental subjects, "perhaps the very first" elicitation and estimation of subjective probabilities in experimental psychology and (what came to be called) Bayesian statistics.[2] Peirce formulated modern statistics in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). With a repeated measures design, Charles Sanders Peirce and Joseph Jastrow introduced blinded, controlled randomized experiments in 1884[98] (Hacking 1990:205)[1] (before Ronald A. Fisher).[2] He invented optimal design for experiments on gravity, in which he "corrected the means". He used correlation and smoothing. Peirce extended the work on outliers by Benjamin Peirce, his father.[2] He introduced the terms "confidence" and "likelihood" (before Jerzy Neyman and Fisher). (See Stephen Stigler's historical books and Ian Hacking 1990.[1]) Peirce was a working scientist for 30 years, and arguably was a professional philosopher only during the five years he lectured at Johns Hopkins. He learned philosophy mainly by reading, each day, a few pages of Immanuel Kant's Critique of Pure Reason, in the original German, while a Harvard undergraduate. His writings bear on a wide array of disciplines, including mathematics, logic, philosophy, statistics, astronomy,[28] metrology,[3] geodesy, experimental psychology,[4] economics,[5] linguistics,[6] and the history and philosophy of science.
This work has enjoyed renewed interest and approval, a revival inspired not only by his anticipations of recent scientific developments but also by his demonstration of how philosophy can be applied effectively to human problems. Peirce's philosophy includes a pervasive three-category system: belief that truth is immutable and is both independent from actual opinion (fallibilism) and discoverable (no radical skepticism); logic as formal semiotic on signs, on arguments, and on inquiry's ways—including philosophical pragmatism (which he founded), critical common-sensism, and scientific method—and, in metaphysics: Scholastic realism, e.g. John Duns Scotus, belief in God, freedom, and at least an attenuated immortality, objective idealism, and belief in the reality of continuity and of absolute chance, mechanical necessity, and creative love.[99] In his work, fallibilism and pragmatism may seem to work somewhat like skepticism and positivism, respectively, in others' work. However, for Peirce, fallibilism is balanced by an anti-skepticism and is a basis for belief in the reality of absolute chance and of continuity,[100] and pragmatism commits one to anti-nominalist belief in the reality of the general (CP 5.453–457). For Peirce, First Philosophy, which he also called cenoscopy, is less basic than mathematics and more basic than the special sciences (of nature and mind). It studies positive phenomena in general, phenomena available to any person at any waking moment, and does not settle questions by resorting to special experiences.[101] He divided such philosophy into (1) phenomenology (which he also called phaneroscopy or categorics), (2) normative sciences (esthetics, ethics, and logic), and (3) metaphysics; his views on them are discussed in order below.
Peirce did not write extensively in aesthetics and ethics,[102] but came by 1902 to hold that aesthetics, ethics, and logic, in that order, comprise the normative sciences.[103] He characterized aesthetics as the study of the good (grasped as the admirable), and thus of the ends governing all conduct and thought.[104] Umberto Eco described Peirce as "undoubtedly the greatest unpublished writer of our generation",[105] and Karl Popper called him "one of the greatest philosophers of all time".[106] The Internet Encyclopedia of Philosophy says of Peirce that although "long considered an eccentric figure whose contribution to pragmatism was to provide its name and whose importance was as an influence upon James and Dewey, Peirce's significance in his own right is now largely accepted."[107] Peirce's recipe for pragmatic thinking, which he called pragmatism and, later, pragmaticism, is recapitulated in several versions of the so-called pragmatic maxim. Here is one of his more emphatic reiterations of it: Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object. As a movement, pragmatism began in the early 1870s in discussions among Peirce, William James, and others in the Metaphysical Club. James among others regarded some articles by Peirce such as "The Fixation of Belief" (1877) and especially "How to Make Our Ideas Clear" (1878) as foundational to pragmatism.[108] Peirce (CP 5.11–12), like James (Pragmatism: A New Name for Some Old Ways of Thinking, 1907), saw pragmatism as embodying familiar attitudes, in philosophy and elsewhere, elaborated into a new deliberate method for fruitful thinking about problems. Peirce differed from James and the early John Dewey, in some of their tangential enthusiasms, in being decidedly more rationalistic and realistic, in several senses of those terms, throughout the preponderance of his own philosophical moods.
In 1905 Peirce coined the new name pragmaticism "for the precise purpose of expressing the original definition", saying that "all went happily" with James's and F. C. S. Schiller's variant uses of the old name "pragmatism" and that he coined the new name because of the old name's growing use in "literary journals, where it gets abused". Yet he cited as causes, in a 1906 manuscript, his differences with James and Schiller and, in a 1908 publication, his differences with James as well as literary author Giovanni Papini's declaration of pragmatism's indefinability. Peirce in any case regarded his views that truth is immutable and infinity is real as being opposed by the other pragmatists, but he remained allied with them on other issues.[109] Pragmatism begins with the idea that belief is that on which one is prepared to act. Peirce's pragmatism is a method of clarification of conceptions of objects. It equates any conception of an object to a conception of that object's effects to a general extent of the effects' conceivable implications for informed practice. It is a method of sorting out conceptual confusions occasioned, for example, by distinctions that make (sometimes needed) formal yet not practical differences. He formulated both pragmatism and statistical principles as aspects of scientific logic, in his "Illustrations of the Logic of Science" series of articles. In the second one, "How to Make Our Ideas Clear", Peirce discussed three grades of clearness of conception: By way of example of how to clarify conceptions, he addressed conceptions about truth and the real as questions of the presuppositions of reasoning in general. In clearness's second grade (the "nominal" grade), he defined truth as a sign's correspondence to its object, and the real as the object of such correspondence, such that truth and the real are independent of that which you or I or any actual, definite community of inquirers think.
After that needful but confined step, next in clearness's third grade (the pragmatic, practice-oriented grade) he defined truth as that opinion which would be reached, sooner or later but still inevitably, by research taken far enough, such that the real does depend on that ideal final opinion—a dependence to which he appeals in theoretical arguments elsewhere, for instance for the long-run validity of the rule of induction.[110] Peirce argued that even to argue against the independence and discoverability of truth and the real is to presuppose that there is, about that very question under argument, a truth with just such independence and discoverability. Peirce said that a conception's meaning consists in "all general modes of rational conduct" implied by "acceptance" of the conception—that is, if one were to accept, first of all, the conception as true, then what could one conceive to be consequent general modes of rational conduct by all who accept the conception as true?—the whole of such consequent general modes is the whole meaning. His pragmatism does not equate a conception's meaning, its intellectual purport, with the conceived benefit or cost of the conception itself, like a meme (or, say, propaganda), outside the perspective of its being true, nor, since a conception is general, is its meaning equated with any definite set of actual consequences or upshots corroborating or undermining the conception or its worth. His pragmatism also bears no resemblance to "vulgar" pragmatism, which misleadingly connotes a ruthless and Machiavellian search for mercenary or political advantage.
Instead the pragmatic maxim is the heart of his pragmatism as a method of experimentational mental reflection[111] arriving at conceptions in terms of conceivable confirmatory and disconfirmatory circumstances—a method hospitable to the formation of explanatory hypotheses, and conducive to the use and improvement of verification.[112] Peirce's pragmatism, as method and theory of definitions and conceptual clearness, is part of his theory of inquiry,[113] which he variously called speculative, general, formal or universal rhetoric, or simply methodeutic.[114] He applied his pragmatism as a method throughout his work. In "The Fixation of Belief" (1877), Peirce gives his take on the psychological origin and aim of inquiry. On his view, individuals are motivated to inquiry by desire to escape the feelings of anxiety and unease which Peirce takes to be characteristic of the state of doubt. Doubt is described by Peirce as an "uneasy and dissatisfied state from which we struggle to free ourselves and pass into the state of belief." Peirce uses words like "irritation" to describe the experience of being in doubt and to explain why he thinks we find such experiences to be motivating. The irritating feeling of doubt is appeased, Peirce says, through our efforts to achieve a settled state of satisfaction with what we land on as our answer to the question which led to that doubt in the first place. This settled state, namely belief, is described by Peirce as "a calm and satisfactory state which we do not wish to avoid." Our efforts to achieve the satisfaction of belief, by whichever methods we may pursue, are what Peirce calls "inquiry". Four methods which Peirce describes as having been actually pursued throughout the history of thought are summarized below in the section after next.
Critical common-sensism,[115] treated by Peirce as a consequence of his pragmatism, is his combination of Thomas Reid's common-sense philosophy with a fallibilism that recognizes that propositions of our more or less vague common sense now indubitable may later come into question, for example because of transformations of our world through science. It includes efforts to raise genuine doubts in tests for a core group of common indubitables that change slowly, if at all. In "The Fixation of Belief" (1877), Peirce described inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubt born of surprise, disagreement, and the like, and to reach a secure belief, belief being that on which one is prepared to act. That let Peirce frame scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal, quarrelsome, or hyperbolic doubt, which he held to be fruitless. Peirce sketched four methods of settling opinion, ordered from least to most successful: Peirce held that, in practical affairs, slow and stumbling ratiocination is often dangerously inferior to instinct and traditional sentiment, and that the scientific method is best suited to theoretical research,[116] which in turn should not be trammeled by the other methods and practical ends; reason's "first rule"[117] is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry. Scientific method excels over the others finally by being deliberately designed to arrive—eventually—at the most secure beliefs, upon which the most successful practices can be based.
Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential conduct correctly to its given goal, and wed themselves to the scientific method. Insofar as clarification by pragmatic reflection suits explanatory hypotheses and fosters predictions and testing, pragmatism points beyond the usual duo of foundational alternatives: deduction from self-evident truths, or rationalism; and induction from experiential phenomena, or empiricism. Based on his critique of three modes of argument and different from either foundationalism or coherentism, Peirce's approach seeks to justify claims by a three-phase dynamic of inquiry: Thereby, Peirce devised an approach to inquiry far more solid than the flatter image of inductive generalization simpliciter, which is a mere re-labeling of phenomenological patterns. Peirce's pragmatism was the first time the scientific method was proposed as an epistemology for philosophical questions. A theory that succeeds better than its rivals in predicting and controlling our world is said to be nearer the truth. This is an operational notion of truth used by scientists. Peirce extracted the pragmatic model or theory of inquiry from its raw materials in classical logic and refined it in parallel with the early development of symbolic logic to address problems about the nature of scientific reasoning. Abduction, deduction, and induction make incomplete sense in isolation from one another but comprise a cycle understandable as a whole insofar as they collaborate toward the common end of inquiry. In the pragmatic way of thinking about conceivable practical implications, every thing has a purpose, and, as possible, its purpose should first be denoted.
Abduction hypothesizes an explanation for deduction to clarify into implications to be tested so that induction can evaluate the hypothesis, in the struggle to move from troublesome uncertainty to more secure belief. No matter how traditional and needful it is to study the modes of inference in abstraction from one another, the integrity of inquiry strongly limits the effective modularity of its principal components. Peirce's outline of the scientific method in §III–IV of "A Neglected Argument"[118] is summarized below (except as otherwise noted). There he also reviewed plausibility and inductive precision (issues of critique of arguments). Peirce drew on the methodological implications of the four incapacities—no genuine introspection, no intuition in the sense of non-inferential cognition, no thought but in signs, and no conception of the absolutely incognizable—to attack philosophical Cartesianism, of which he said that:[125] On May 14, 1867, the 27-year-old Peirce presented a paper entitled "On a New List of Categories" to the American Academy of Arts and Sciences, which published it the following year. The paper outlined a theory of predication, involving three universal categories that Peirce developed in response to reading Aristotle, Immanuel Kant, and G. W. F. Hegel, categories that Peirce applied throughout his work for the rest of his life.[20] Peirce scholars generally regard the "New List" as foundational or breaking the ground for Peirce's "architectonic", his blueprint for a pragmatic philosophy. In the categories one will discern, concentrated, the pattern that one finds formed by the three grades of clearness in "How To Make Our Ideas Clear" (1878 paper foundational to pragmatism), and in numerous other trichotomies in his work. "On a New List of Categories" is cast as a Kantian deduction; it is short but dense and difficult to summarize.
The following table is compiled from that and later works.[126] In 1893, Peirce restated most of it for a less advanced audience.[127] *Note: An interpretant is an interpretation (human or otherwise) in the sense of the product of an interpretive process. In 1918, the logician C. I. Lewis wrote, "The contributions of C. S. Peirce to symbolic logic are more numerous and varied than those of any other writer—at least in the nineteenth century."[134] Beginning with his first paper on the "Logic of Relatives" (1870), Peirce extended the theory of relations pioneered by Augustus De Morgan.[h] Beginning in 1940, Alfred Tarski and his students rediscovered aspects of Peirce's larger vision of relational logic, developing the perspective of relation algebra. Relational logic gained applications. In mathematics, it influenced the abstract analysis of E. H. Moore and the lattice theory of Garrett Birkhoff. In computer science, the relational model for databases was developed with Peircean ideas in work of Edgar F. Codd, who was a doctoral student[135] of Arthur W. Burks, a Peirce scholar. In economics, relational logic was used by Frank P. Ramsey, John von Neumann, and Paul Samuelson to study preferences and utility and by Kenneth J. Arrow in Social Choice and Individual Values, following Arrow's association with Tarski at City College of New York. On Peirce and his contemporaries Ernst Schröder and Gottlob Frege, Hilary Putnam (1982)[92] documented that Frege's work on the logic of quantifiers had little influence on his contemporaries, although it was published four years before the work of Peirce and his student Oscar Howard Mitchell. Putnam found that mathematicians and logicians learned about the logic of quantifiers through the independent work of Peirce and Mitchell, particularly through Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation"[91] (1885), published in the premier American mathematical journal of the day, and cited by Peano and Schröder, among others, who ignored Frege.
They also adopted and modified Peirce's notations, typographical variants of those now used. Peirce apparently was ignorant of Frege's work, despite their overlapping achievements in logic, philosophy of language, and the foundations of mathematics. Peirce's work on formal logic had admirers besides Ernst Schröder: A philosophy of logic, grounded in his categories and semiotic, can be extracted from Peirce's writings and, along with Peirce's logical work more generally, is exposited and defended in Hilary Putnam (1982);[92] the Introduction in Nathan Houser et al. (1997);[137] and Randall Dipert's chapter in Cheryl Misak (2004).[138] Peirce regarded logic per se as a division of philosophy, as a normative science based on esthetics and ethics, as more basic than metaphysics,[117] and as "the art of devising methods of research".[139] More generally, as inference, "logic is rooted in the social principle", since inference depends on a standpoint that, in a sense, is unlimited.[140] Peirce called (with no sense of deprecation) "mathematics of logic" much of the kind of thing which, in current research and applications, is called simply "logic". He was productive in both (philosophical) logic and logic's mathematics, which were connected deeply in his work and thought. Peirce argued that logic is formal semiotic: the formal study of signs in the broadest sense, not only signs that are artificial, linguistic, or symbolic, but also signs that are semblances or are indexical such as reactions. Peirce held that "all this universe is perfused with signs, if it is not composed exclusively of signs",[141] along with their representational and inferential relations. He argued that, since all thought takes time, all thought is in signs[142] and sign processes ("semiosis") such as the inquiry process.
He divided logic into: (1) speculative grammar, or stechiology, on how signs can be meaningful and, in relation to that, what kinds of signs there are, how they combine, and how some embody or incorporate others; (2) logical critic, or logic proper, on the modes of inference; and (3) speculative or universal rhetoric, or methodeutic,[114] the philosophical theory of inquiry, including pragmatism. In his "F.R.L." [First Rule of Logic] (1899), Peirce states that the first, and "in one sense, the sole", rule of reason is that, to learn, one needs to desire to learn and desire it without resting satisfied with that which one is inclined to think.[117] So, the first rule is, to wonder. Peirce proceeds to a critical theme in research practices and the shaping of theories: ...there follows one corollary which itself deserves to be inscribed upon every wall of the city of philosophy: Do not block the way of inquiry. Peirce adds that method and economy are best in research but no outright sin inheres in trying any theory in the sense that the investigation via its trial adoption can proceed unimpeded and undiscouraged, and that "the one unpardonable offence" is a philosophical barricade against truth's advance, an offense to which "metaphysicians in all ages have shown themselves the most addicted". Peirce in many writings holds that logic precedes metaphysics (ontological, religious, and physical). Peirce goes on to list four common barriers to inquiry: (1) assertion of absolute certainty; (2) maintaining that something is absolutely unknowable; (3) maintaining that something is absolutely inexplicable because absolutely basic or ultimate; (4) holding that perfect exactitude is possible, especially such as to quite preclude unusual and anomalous phenomena. To refuse absolute theoretical certainty is the heart of fallibilism, which Peirce unfolds into refusals to set up any of the listed barriers.
Peirce elsewhere argues (1897) that logic's presupposition of fallibilism leads at length to the view that chance and continuity are very real (tychism and synechism).[100] The First Rule of Logic pertains to the mind's presuppositions in undertaking reason and logic; presuppositions, for instance, that truth and the real do not depend on your or my opinion of them but do depend on representational relation and consist in the destined end in investigation taken far enough (see below). He describes such ideas as, collectively, hopes which, in particular cases, one is unable seriously to doubt.[143] In three articles in 1868–1869,[142][125][144] Peirce rejected mere verbal or hyperbolic doubt and first or ultimate principles, and argued that we have (as he numbered them[125]): (The above sense of the term "intuition" is almost Kant's, said Peirce. It differs from the current looser sense that encompasses instinctive or anyway half-conscious inference.) Peirce argued that those incapacities imply the reality of the general and of the continuous, the validity of the modes of reasoning,[144] and the falsity of philosophical Cartesianism (see below). Peirce rejected the conception (usually ascribed to Kant) of the unknowable thing-in-itself[125] and later said that to "dismiss make-believes" is a prerequisite for pragmatism.[145] Peirce sought, through his wide-ranging studies through the decades, formal philosophical ways to articulate thought's processes, and also to explain the workings of science. These inextricably entangled questions of a dynamics of inquiry rooted in nature and nurture led him to develop his semiotic with very broadened conceptions of signs and inference, and, as its culmination, a theory of inquiry for the task of saying 'how science works' and devising research methods.
This would be logic by the medieval definition taught for centuries: art of arts, science of sciences, having the way to the principles of all methods.[139] Influences radiate from points on parallel lines of inquiry in Aristotle's work, in such loci as: the basic terminology of psychology in On the Soul; the founding description of sign relations in On Interpretation; and the differentiation of inference into three modes that are commonly translated into English as abduction, deduction, and induction, in the Prior Analytics, as well as inference by analogy (called paradeigma by Aristotle), which Peirce regarded as involving the other three modes. Peirce began writing on semiotic in the 1860s, around the time when he devised his system of three categories. He called it both semiotic and semeiotic. Both are current in singular and plural. He based it on the conception of a triadic sign relation, and defined semiosis as "action, or influence, which is, or involves, a cooperation of three subjects, such as a sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between pairs".[146] As to signs in thought, Peirce emphasized the reverse: "To say, therefore, that thought cannot happen in an instant, but requires a time, is but another way of saying that every thought must be interpreted in another, or that all thought is in signs."[142] Peirce held that all thought is in signs, issuing in and from interpretation, where sign is the word for the broadest variety of conceivable semblances, diagrams, metaphors, symptoms, signals, designations, symbols, texts, even mental concepts and ideas, all as determinations of a mind or quasi-mind, that which at least functions like a mind, as in the work of crystals or bees[147]—the focus is on sign action in general rather than on psychology, linguistics, or social studies (fields which he also pursued). Inquiry is a kind of inference process, a manner of thinking and semiosis.
Global divisions of ways for phenomena to stand as signs, and the subsumption of inquiry and thinking within inference as a sign process, enable the study of inquiry on semiotics' three levels. Peirce uses examples often from common experience, but defines and discusses such things as assertion and interpretation in terms of philosophical logic. In a formal vein, Peirce said:

On the Definition of Logic. Logic is formal semiotic. A sign is something, A, which brings something, B, its interpretant sign, determined or created by it, into the same sort of correspondence (or a lower implied sort) with something, C, its object, as that in which itself stands to C. This definition no more involves any reference to human thought than does the definition of a line as the place within which a particle lies during a lapse of time. It is from this definition that I deduce the principles of logic by mathematical reasoning, and by mathematical reasoning that, I aver, will support criticism of Weierstrassian severity, and that is perfectly evident. The word "formal" in the definition is also defined.[148]

Peirce's theory of signs is known to be one of the most complex semiotic theories due to its generalistic claim. Anything is a sign—not absolutely as itself, but instead in some relation or other. The sign relation is the key. It defines three roles encompassing (1) the sign, (2) the sign's subject matter, called its object, and (3) the sign's meaning or ramification as formed into a kind of effect called its interpretant (a further sign, for example a translation). It is an irreducible triadic relation, according to Peirce. The roles are distinct even when the things that fill those roles are not. The roles are but three; a sign of an object leads to one or more interpretants, and, as signs, they lead to further interpretants.
Extension × intension = information.Two traditional approaches to sign relation, necessary though insufficient, are the way ofextension(a sign's objects, also called breadth, denotation, or application) and the way ofintension(the objects' characteristics, qualities, attributes referenced by the sign, also called depth,comprehension, significance, or connotation). Peirce adds a third, the way ofinformation, including change of information, to integrate the other two approaches into a unified whole.[149]For example, because of the equation above, if a term's total amount of information stays the same, then the more that the term 'intends' or signifies about objects, the fewer are the objects to which the term 'extends' or applies. Determination.A sign depends on its object in such a way as to represent its object—the object enables and, in a sense, determines the sign. A physically causal sense of this stands out when a sign consists in an indicative reaction. The interpretant depends likewise on both the sign and the object—an object determines a sign to determine an interpretant. But this determination is not a succession of dyadic events, like a row of toppling dominoes; sign determination is triadic. For example, an interpretant does not merely represent something which represented an object; instead an interpretant represents somethingasa sign representing the object. The object (be it a quality or fact or law or even fictional) determines the sign to an interpretant through one's collateral experience[150]with the object, in which the object is found or from which it is recalled, as when a sign consists in a chance semblance of an absent object. 
Peirce used the word "determine" not in a strictly deterministic sense, but in a sense of "specializes",bestimmt,[151]involving variable amount, like an influence.[152]Peirce came to define representation and interpretation in terms of (triadic) determination.[153]The object determines the sign to determine another sign—the interpretant—to be related to the objectas the sign is related to the object, hence the interpretant, fulfilling its function as sign of the object, determines a further interpretant sign. The process is logically structured to perpetuate itself, and is definitive of sign, object, and interpretant in general.[152] Peirce held there are exactly three basic elements in semiosis (sign action): Some of the understanding needed by the mind depends on familiarity with the object. To know what a given sign denotes, the mind needs some experience of that sign's object, experience outside of, and collateral to, that sign or sign system. In that context Peirce speaks of collateral experience, collateral observation, collateral acquaintance, all in much the same terms.[150] Among Peirce's many sign typologies, three stand out, interlocked. The first typology depends on the sign itself, the second on how the sign stands for its denoted object, and the third on how the sign stands for its object to its interpretant. Also, each of the three typologies is a three-way division, atrichotomy, via Peirce's three phenomenologicalcategories: (1) quality of feeling, (2) reaction, resistance, and (3) representation, mediation.[157] I.Qualisign, sinsign, legisign(also calledtone, token, type,and also calledpotisign, actisign, famisign):[158]This typology classifies every sign according to the sign's own phenomenological category—the qualisign is a quality, a possibility, a "First"; the sinsign is a reaction or resistance, a singular object, an actual event or fact, a "Second"; and the legisign is a habit, a rule, a representational relation, a "Third". 
II.Icon, index, symbol: This typology, the best known one, classifies every sign according to the category of the sign's way of denoting its object—the icon (also called semblance or likeness) by a quality of its own, the index by factual connection to its object, and the symbol by a habit or rule for its interpretant. III.Rheme, dicisign, argument(also calledsumisign, dicisign, suadisign,alsoseme, pheme, delome,[158]and regarded as very broadened versions of the traditionalterm, proposition, argument): This typology classifies every sign according to the category which the interpretant attributes to the sign's way of denoting its object—the rheme, for example a term, is a sign interpreted to represent its object in respect of quality; the dicisign, for example a proposition, is a sign interpreted to represent its object in respect of fact; and the argument is a sign interpreted to represent its object in respect of habit or law. This is the culminating typology of the three, where the sign is understood as a structural element of inference. Every sign belongs to one class or another within (I)andwithin (II)andwithin (III). Thus each of the three typologies is a three-valued parameter for every sign. The three parameters are not independent of each other; many co-classifications are absent, for reasons pertaining to the lack of either habit-taking or singular reaction in a quality, and the lack of habit-taking in a singular reaction. The result is not 27 but instead ten classes of signs fully specified at this level of analysis. Borrowing a brace of concepts fromAristotle, Peirce examined three basic modes ofinference—abduction,deduction, andinduction—in his "critique of arguments" or "logic proper". Peirce also called abduction "retroduction", "presumption", and, earliest of all, "hypothesis". He characterized it as guessing and as inference to an explanatory hypothesis. 
He sometimes expounded the modes of inference by transformations of the categorical syllogism Barbara (AAA), for example in "Deduction, Induction, and Hypothesis" (1878).[159] He does this by rearranging the rule (Barbara's major premise), the case (Barbara's minor premise), and the result (Barbara's conclusion):

Deduction.
Rule: All the beans from this bag are white.
Case: These beans are beans from this bag.
∴ Result: These beans are white.

Induction.
Case: These beans are [randomly selected] from this bag.
Result: These beans are white.
∴ Rule: All the beans from this bag are white.

Hypothesis (Abduction).
Rule: All the beans from this bag are white.
Result: These beans [oddly] are white.
∴ Case: These beans are from this bag.

In 1883, in "A Theory of Probable Inference" (Studies in Logic), Peirce equated hypothetical inference with the induction of characters of objects (as he had done in effect before[125]). Eventually dissatisfied, by 1900 he distinguished them once and for all and also wrote that he now took the syllogistic forms and the doctrine of logical extension and comprehension as being less basic than he had thought. In 1903 he presented the following logical form for abductive inference:[160]

The surprising fact, C, is observed;
But if A were true, C would be a matter of course,
Hence, there is reason to suspect that A is true.

The logical form does not also cover induction, since induction neither depends on surprise nor proposes a new idea for its conclusion. Induction seeks facts to test a hypothesis; abduction seeks a hypothesis to account for facts.
"Deduction proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that something may be."[161] Peirce did not remain quite convinced that one logical form covers all abduction.[162] In his methodeutic or theory of inquiry (see below), he portrayed abduction as an economic initiative to further inference and study, and portrayed all three modes as clarified by their coordination in essential roles in inquiry: hypothetical explanation, deductive prediction, inductive testing. Peirce divided metaphysics into (1) ontology or general metaphysics, (2) psychical or religious metaphysics, and (3) physical metaphysics. On the issue of universals, Peirce was a scholastic realist, declaring the reality of generals as early as 1868.[163] According to Peirce, the category he called "thirdness", the more general facts about the world, comprises extra-mental realities. Regarding modalities (possibility, necessity, etc.), he came in later years to regard himself as having wavered earlier as to just how positively real the modalities are. In his 1897 "The Logic of Relatives" he wrote:

I formerly defined the possible as that which in a given state of information (real or feigned) we do not know not to be true. But this definition today seems to me only a twisted phrase which, by means of two negatives, conceals an anacoluthon. We know in advance of experience that certain things are not true, because we see they are impossible.
Peirce retained, as useful for some purposes, the definitions in terms of information states, but insisted that the pragmaticist is committed to a strongmodal realismby conceiving of objects in terms of predictive general conditional propositions about how theywouldbehave under certain circumstances.[164] Continuity andsynechismare central in Peirce's philosophy: "I did not at first suppose that it was, as I gradually came to find it, the master-Key of philosophy".[165] From a mathematical point of view, he embracedinfinitesimalsand worked long on the mathematics of continua. He long held that the real numbers constitute a pseudo-continuum;[166]that a true continuum is the real subject matter ofanalysis situs(topology); and that a true continuum of instants exceeds—and within any lapse of time has room for—anyAleph number(any infinitemultitudeas he called it) of instants.[167] In 1908 Peirce wrote that he found that a true continuum might have or lack such room. Jérôme Havenel (2008): "It is on 26 May 1908, that Peirce finally gave up his idea that in every continuum there is room for whatever collection of any multitude. 
From now on, there are different kinds of continua, which have different properties."[168] Peirce believed in God, and characterized such belief as founded in an instinct explorable in musing over the worlds of ideas, brute facts, and evolving habits—and it is a belief in God not as anactualorexistentbeing (in Peirce's sense of those words), but all the same as arealbeing.[169]In "A Neglected Argument for the Reality of God" (1908),[118]Peirce sketches, for God's reality, an argument to a hypothesis of God as the Necessary Being, a hypothesis which he describes in terms of how it would tend to develop and become compelling in musement and inquiry by a normal person who is led, by the hypothesis, to consider as being purposed the features of the worlds of ideas, brute facts, and evolving habits (for example scientific progress), such that the thought of such purposefulness will "stand or fall with the hypothesis"; meanwhile, according to Peirce, the hypothesis, in supposing an "infinitely incomprehensible" being, starts off at odds with its own nature as a purportively true conception, and so, no matter how much the hypothesis grows, it both (A) inevitably regards itself as partly true, partly vague, and as continuing to define itself without limit, and (B) inevitably has God appearing likewise vague but growing, though God as the Necessary Being is not vague or growing; but the hypothesis will hold it to bemorefalse to say the opposite, that God is purposeless. Peirce also argued that the will is free[170]and (seeSynechism) that there is at least an attenuated kind of immortality. 
Peirce held the view, which he calledobjective idealism, that "matter is effete mind, inveterate habits becoming physical laws".[171]Peirce observed that "Berkeley's metaphysical theories have at first sight an air of paradox and levity very unbecoming to a bishop".[172] Peirce asserted the reality of (1) "absolute chance" or randomness (histychistview), (2) "mechanical necessity" or physical laws (anancistview), and (3) what he called the "law of love" (agapistview), echoing hiscategoriesFirstness, Secondness, and Thirdness, respectively.[99]He held that fortuitous variation (which he also called "sporting"), mechanical necessity, and creative love are the three modes of evolution (modes called "tychasm", "anancasm", and "agapasm")[173]of the cosmos and its parts. He found his conception of agapasm embodied inLamarckian evolution; the overall idea in any case is that of evolution tending toward an end or goal, and it could also be the evolution of a mind or a society; it is the kind of evolution which manifests workings of mind in some general sense. He said that overall he was a synechist, holding with reality of continuity,[99]especially of space, time, and law.[174] Peirce outlined two fields, "Cenoscopy" and "Science of Review", both of which he called philosophy. Both included philosophy about science. In 1903 he arranged them, from more to less theoretically basic, thus:[101] Peirce placed, within Science of Review, the work and theory ofclassifying the sciences(including mathematics and philosophy). His classifications, on which he worked for many years, draw on argument and wide knowledge, and are of interest both as a map for navigating his philosophy and as an accomplished polymath's survey of research in his time. Consequently, to discover is simply to expedite an event that would occur sooner or later, if we had not troubled ourselves to make the discovery. Consequently, the art of discovery is purely a question of economics. 
The economics of research is, so far as logic is concerned, the leading doctrine with reference to the art of discovery. Consequently, the conduct of abduction, which is chiefly a question of heuretic and is the first question of heuretic, is to be governed by economical considerations. Thus, twenty skillful hypotheses will ascertain what 200,000 stupid ones might fail to do. Now logical terms are of three grand classes. The first embraces those whoselogical forminvolves only the conception of quality, and which therefore represent a thing simply as "a —." These discriminate objects in the most rudimentary way, which does not involve any consciousness of discrimination. They regard an object as it is in itself assuch(quale); for example, as horse, tree, or man. These areabsolute terms. (Peirce, 1870. But also see "Quale-Consciousness", 1898, in CP 6.222–237.) ... death makes the number of our risks, the number of our inferences, finite, and so makes their mean result uncertain. The very idea of probability and of reasoning rests on the assumption that this number is indefinitely great. ... logicality inexorably requires that our interests shallnotbe limited. ... Logic is rooted in the social principle. I define a Sign as anything which is so determined by something else, called its Object, and so determines an effect upon a person, which effect I call its Interpretant, that the latter is thereby mediately determined by the former. My insertion of "upon a person" is a sop to Cerberus, because I despair of making my own broader conception understood. I will also take the liberty of substituting "reality" for "existence." This is perhaps overscrupulosity; but I myself always useexistin its strict philosophical sense of "react with the other like things in the environment." Of course, in that sense, it would be fetichism to say that God "exists." The word "reality," on the contrary, is used in ordinary parlance in its correct philosophical sense. [....] 
I define therealas that which holds its characters on such a tenure that it makes not the slightest difference what any man or men may havethoughtthem to be, or ever will havethoughtthem to be, here using thought to include, imagining, opining, and willing (as long as forciblemeansare not used); but the real thing's characters will remain absolutely untouched.
https://en.wikipedia.org/wiki/Logic_of_relatives
A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science.

If R is a binary relation between the finite indexed sets X and Y (so R ⊆ X × Y), then R can be represented by the logical matrix M whose row and column indices index the elements of X and Y, respectively, such that the entries of M are defined by

m_{i,j} = 1 if (x_i, y_j) ∈ R, and m_{i,j} = 0 otherwise.

In order to designate the row and column numbers of the matrix, the sets X and Y are indexed with positive integers: i ranges from 1 to the cardinality (size) of X, and j ranges from 1 to the cardinality of Y. See the article on indexed sets for more detail. The transpose R^T of the logical matrix R of a binary relation corresponds to the converse relation.[1]

The binary relation R on the set {1, 2, 3, 4} is defined so that aRb holds if and only if a divides b evenly, with no remainder. For example, 2R4 holds because 2 divides 4 without leaving a remainder, but 3R4 does not hold because when 3 divides 4, there is a remainder of 1. The following set is the set of pairs for which the relation R holds:

{(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)}.

The corresponding representation as a logical matrix is

1 1 1 1
0 1 0 1
0 0 1 0
0 0 0 1

which includes a diagonal of ones, since each number divides itself.

The matrix representation of the equality relation on a finite set is the identity matrix I, that is, the matrix whose entries on the diagonal are all 1, while the others are all 0. More generally, if relation R satisfies I ⊆ R, then R is a reflexive relation.

If the Boolean domain is viewed as a semiring, where addition corresponds to logical OR and multiplication to logical AND, the matrix representation of the composition of two relations is equal to the matrix product of the matrix representations of these relations.
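The divisibility example and the semiring observation above can be sketched in Python (an illustrative sketch, not part of the article; the helper names are ours):

```python
# Logical (0,1) matrix for the divisibility relation R on {1, 2, 3, 4}:
# M[i][j] = 1 iff (i+1) divides (j+1).
n = 4
M = [[1 if (j + 1) % (i + 1) == 0 else 0 for j in range(n)] for i in range(n)]

def bool_matmul(A, B):
    """Matrix product over the Boolean semiring (OR as addition, AND as
    multiplication); the result represents the composition of the relations."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(inner)))
             for j in range(cols)] for i in range(rows)]

def transpose(A):
    """Transpose of a logical matrix = matrix of the converse relation."""
    return [list(col) for col in zip(*A)]
```

Since divisibility is reflexive and transitive, composing R with itself returns R: `bool_matmul(M, M) == M`.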
This product can be computed in expected time O(n²).[3] Frequently, operations on binary matrices are defined in terms of modular arithmetic mod 2—that is, the elements are treated as elements of the Galois field GF(2) = Z_2. They arise in a variety of representations and have a number of more restricted special forms. They are applied e.g. in XOR-satisfiability. The number of distinct m-by-n binary matrices is equal to 2^{mn}, and is thus finite. Let n and m be given and let U denote the set of all logical m × n matrices. Then U has a partial order given by entrywise comparison: A ≤ B when a_{ij} ≤ b_{ij} for all i, j. In fact, U forms a Boolean algebra with the operations AND and OR between two matrices applied component-wise. The complement of a logical matrix is obtained by swapping all zeros and ones for their opposite. Every logical matrix A = (a_{ij}) has a transpose A^T = (a_{ji}). Suppose A is a logical matrix with no columns or rows identically zero. Then the matrix product, using Boolean arithmetic, A^T A contains the m × m identity matrix, and the product A A^T contains the n × n identity. As a mathematical structure, the Boolean algebra U forms a lattice ordered by inclusion; additionally it is a multiplicative lattice due to matrix multiplication. Every logical matrix in U corresponds to a binary relation. These listed operations on U, and ordering, correspond to a calculus of relations, where the matrix multiplication represents composition of relations.[4] If m or n equals one, then the m × n logical matrix (m_{ij}) is a logical vector or bit string. If m = 1, the vector is a row vector, and if n = 1, it is a column vector. In either case the index equaling 1 is dropped from denotation of the vector. Suppose (P_i), i = 1, 2, …, m and (Q_j), j = 1, 2, …, n are two logical vectors.
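The component-wise Boolean-algebra structure on U and its partial order can be written out directly; a minimal sketch (our own function names, not from the article):

```python
def meet(A, B):
    """Component-wise AND of two logical matrices."""
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def join(A, B):
    """Component-wise OR of two logical matrices."""
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def complement(A):
    """Swap all zeros and ones for their opposites."""
    return [[1 - a for a in row] for row in A]

def leq(A, B):
    """Partial order on U: A <= B iff A is entrywise <= B."""
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))
```

For any A and B, `meet(A, B)` is below both arguments in this order and `join(A, B)` is above both, as in any lattice.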
The outer product of P and Q results in an m × n rectangular relation: m_{ij} = P_i ∧ Q_j. A reordering of the rows and columns of such a matrix can assemble all the ones into a rectangular part of the matrix.[5]

Let h be the vector of all ones. Then if v is an arbitrary logical vector, the relation R = v h^T has constant rows determined by v. In the calculus of relations such an R is called a vector.[5] A particular instance is the universal relation h h^T.

For a given relation R, a maximal rectangular relation contained in R is called a concept in R. Relations may be studied by decomposing into concepts, and then noting the induced concept lattice.

Consider the table of group-like structures, where "unneeded" can be denoted 0, and "required" denoted by 1, forming a logical matrix R. To calculate elements of R R^T, it is necessary to use the logical inner product of pairs of logical vectors in rows of this matrix. If this inner product is 0, then the rows are orthogonal. In fact, small category is orthogonal to quasigroup, and groupoid is orthogonal to magma. Consequently there are zeros in R R^T, and it fails to be a universal relation.

Adding up all the ones in a logical matrix may be accomplished in two ways: first summing the rows or first summing the columns. When the row sums are added, the sum is the same as when the column sums are added. In incidence geometry, the matrix is interpreted as an incidence matrix with the rows corresponding to "points" and the columns as "blocks" (generalizing lines made of points). A row sum is called its point degree, and a column sum is the block degree.
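The outer-product construction and the constant-row "vector" relation R = v h^T are easy to exhibit; a small sketch (illustrative names, not from the article):

```python
def outer(P, Q):
    """Outer product of two logical vectors: an m-by-n rectangular relation
    with entries P[i] AND Q[j]."""
    return [[p & q for q in Q] for p in P]

h = [1, 1, 1, 1]    # the vector of all ones
v = [1, 0, 1]       # an arbitrary logical vector
R = outer(v, h)     # R = v h^T: rows are constant, determined by v
```

Taking both factors to be all-ones vectors yields the universal relation h h^T, the matrix of all ones.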
The sum of point degrees equals the sum of block degrees.[6] An early problem in the area was "to find necessary and sufficient conditions for the existence of an incidence structure with given point degrees and block degrees; or in matrix language, for the existence of a (0, 1)-matrix of type v × b with given row and column sums".[6] This problem is solved by the Gale–Ryser theorem.
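The existence question that the Gale–Ryser theorem settles can be checked directly from the standard criterion: with row sums sorted nonincreasing, a (0, 1)-matrix exists iff the row and column sums have equal totals and each leading partial sum of the row sums is at most Σ_j min(c_j, k). A sketch (the function name is ours):

```python
def gale_ryser(row_sums, col_sums):
    """Decide, per the Gale-Ryser criterion, whether a (0,1)-matrix with the
    given row and column sums exists."""
    r = sorted(row_sums, reverse=True)
    if sum(r) != sum(col_sums):
        return False
    # Leading partial sums of row sums must be dominated by sum_j min(c_j, k).
    for k in range(1, len(r) + 1):
        if sum(r[:k]) > sum(min(c, k) for c in col_sums):
            return False
    return True
```

For instance, row sums (2, 2) and column sums (2, 1, 1) are realizable (e.g. by [[1,1,0],[1,0,1]]), while (2, 2) against (3, 1) is not, since no column of a 2-row matrix can sum to 3.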
https://en.wikipedia.org/wiki/Logical_matrix
Inmathematical logic,predicate functor logic(PFL) is one of several ways to expressfirst-order logic(also known aspredicate logic) by purely algebraic means, i.e., withoutquantified variables. PFL employs a small number of algebraic devices calledpredicate functors(orpredicate modifiers)[1]that operate on terms to yield terms. PFL is mostly the invention of thelogicianandphilosopherWillard Quine. The source for this section, as well as for much of this entry, is Quine (1976). Quine proposed PFL as a way of algebraizingfirst-order logicin a manner analogous to howBoolean algebraalgebraizespropositional logic. He designed PFL to have exactly theexpressive poweroffirst-order logicwithidentity. Hence themetamathematicsof PFL are exactly those of first-order logic with no interpreted predicate letters: both logics aresound,complete, andundecidable. Most work Quine published on logic and mathematics in the last 30 years of his life touched on PFL in some way.[citation needed] Quine took "functor" from the writings of his friendRudolf Carnap, the first to employ it inphilosophyandmathematical logic, and defined it as follows: "The wordfunctor, grammatical in import but logical in habitat... is a sign that attaches to one or more expressions of given grammatical kind(s) to produce an expression of a given grammatical kind." (Quine 1982: 129) Ways other than PFL to algebraizefirst-order logicinclude: PFL is arguably the simplest of these formalisms, yet also the one about which the least has been written. Quine had a lifelong fascination withcombinatory logic, attested to by his introduction to the translation in Van Heijenoort (1967) of the paper by the Russian logicianMoses Schönfinkelfounding combinatory logic. When Quine began working on PFL in earnest, in 1959, combinatory logic was commonly deemed a failure for the following reasons: The PFLsyntax, primitives, and axioms described in this section are largelySteven Kuhn's (1983). 
Thesemanticsof the functors are Quine's (1982). The rest of this entry incorporates some terminology from Bacon (1985). Anatomic termis an upper case Latin letter,IandSexcepted, followed by a numericalsuperscriptcalled itsdegree, or by concatenated lower case variables, collectively known as anargument list. The degree of a term conveys the same information as the number of variables following a predicate letter. An atomic term of degree 0 denotes aBoolean variableor atruth value. The degree ofIis invariably 2 and so is not indicated. The "combinatory" (the word is Quine's) predicate functors, all monadic and peculiar to PFL, areInv,inv,∃,+, andp. A term is either an atomic term, or constructed by the following recursive rule. If τ is a term, thenInvτ,invτ,∃τ,+τ, andpτ are terms. A functor with a superscriptn,nanatural number> 1, denotesnconsecutive applications (iterations) of that functor. A formula is either a term or defined by the recursive rule: if α and β are formulas, then αβ and ~(α) are likewise formulas. Hence "~" is another monadic functor, and concatenation is the sole dyadic predicate functor. Quine called these functors "alethic." The natural interpretation of "~" isnegation; that of concatenation is anyconnectivethat, when combined with negation, forms afunctionally completeset of connectives. Quine's preferred functionally complete set wasconjunctionandnegation. Thus concatenated terms are taken as conjoined. The notation+is Bacon's (1985); all other notation is Quine's (1976; 1982). The alethic part of PFL is identical to theBoolean term schemataof Quine (1982). As is well known, the two alethic functors could be replaced by a single dyadic functor with the followingsyntaxandsemantics: if α and β are formulas, then (αβ) is a formula whose semantics are "not (α and/or β)" (seeNANDandNOR). Quine set out neither axiomatization nor proof procedure for PFL. 
The following axiomatization of PFL, one of two proposed in Kuhn (1983), is concise and easy to describe, but makes extensive use offree variablesand so does not do full justice to the spirit of PFL. Kuhn gives another axiomatization dispensing with free variables, but that is harder to describe and that makes extensive use of defined functors. Kuhn proved both of his PFL axiomatizationssoundandcomplete. This section is built around the primitive predicate functors and a few defined ones. The alethic functors can be axiomatized by any set of axioms forsentential logicwhose primitives are negation and one of ∧ or ∨. Equivalently, alltautologiesof sentential logic can be taken as axioms. Quine's (1982) semantics for each predicate functor are stated below in terms ofabstraction(set builder notation), followed by either the relevant axiom from Kuhn (1983), or a definition from Quine (1976). The notation{x1⋯xn:Fx1⋯xn}{\displaystyle \{x_{1}\cdots x_{n}:Fx_{1}\cdots x_{n}\}}denotes the set ofn-tuplessatisfying the atomic formulaFx1⋯xn.{\displaystyle Fx_{1}\cdots x_{n}.} Identity isreflexive(Ixx),symmetric(Ixy→Iyx),transitive((Ixy∧Iyz) →Ixz), and obeys the substitution property: Croppingenables two useful defined functors: Sgeneralizes the notion of reflexivity to all terms of any finite degree greater than 2. N.B:Sshould not be confused with theprimitive combinatorSof combinatory logic. Here only, Quine adopted an infix notation, because this infix notation for Cartesian product is very well established in mathematics. Cartesian product allows restating conjunction as follows: Reorder the concatenated argument list so as to shift a pair of duplicate variables to the far left, then invokeSto eliminate the duplication. Repeating this as many times as required results in an argument list of length max(m,n). The next three functors enable reordering argument lists at will. 
Given an argument list consisting ofnvariables,pimplicitly treats the lastn−1 variables like a bicycle chain, with each variable constituting a link in the chain. One application ofpadvances the chain by one link.kconsecutive applications ofptoFnmoves thek+1 variable to the second argument position inF. Whenn=2,Invandinvmerely interchangex1andx2. Whenn=1, they have no effect. Hencephas no effect whenn< 3. Kuhn (1983) takesMajor inversionandMinor inversionas primitive. The notationpin Kuhn corresponds toinv; he has no analog toPermutationand hence has no axioms for it. If, following Quine (1976),pis taken as primitive,Invandinvcan be defined as nontrivial combinations of+,∃, and iteratedp. The following table summarizes how the functors affect the degrees of their arguments. All instances of a predicate letter may be replaced by another predicate letter of the same degree, without affecting validity. Therulesare: Instead of axiomatizing PFL, Quine (1976) proposed the following conjectures as candidate axioms. n−1 consecutive iterations ofprestores thestatus quo ante: +and∃annihilate each other: Negation distributes over+,∃, andp: +andpdistributes over conjunction: Identity has the interesting implication: Quine also conjectured the rule: Ifαis a PFL theorem, then so arepα, +α, and¬∃¬α{\displaystyle \lnot \exists \lnot \alpha }. Bacon (1985) takes theconditional,negation,Identity,Padding, andMajorandMinor inversionas primitive, andCroppingas defined. Employing terminology and notation differing somewhat from the above, Bacon (1985) sets out two formulations of PFL: Bacon also: The followingalgorithmis adapted from Quine (1976: 300–2). Given aclosed formulaoffirst-order logic, first do the following: Now apply the following algorithm to the preceding result: The reverse translation, from PFL to first-order logic, is discussed in Quine (1976: 302–4). 
The canonical foundation of mathematics is axiomatic set theory, with a background logic consisting of first-order logic with identity, with a universe of discourse consisting entirely of sets. There is a single predicate letter of degree 2, interpreted as set membership. The PFL translation of the canonical axiomatic set theory ZFC is not difficult, as no ZFC axiom requires more than 6 quantified variables.[2]
https://en.wikipedia.org/wiki/Predicate_functor_logic
In mathematics, quantales are certain partially ordered algebraic structures that generalize locales (point-free topologies) as well as various multiplicative lattices of ideals from ring theory and functional analysis (C*-algebras, von Neumann algebras).[1] Quantales are sometimes referred to as complete residuated semigroups.

A quantale is a complete lattice Q with an associative binary operation ∗ : Q × Q → Q, called its multiplication, satisfying the distributive properties

x ∗ (⋁_{i∈I} y_i) = ⋁_{i∈I} (x ∗ y_i)

and

(⋁_{i∈I} y_i) ∗ x = ⋁_{i∈I} (y_i ∗ x)

for all x, y_i ∈ Q and i ∈ I (here I is any index set). The quantale is unital if it has an identity element e for its multiplication:

x ∗ e = x = e ∗ x

for all x ∈ Q. In this case, the quantale is naturally a monoid with respect to its multiplication ∗. A unital quantale may be defined equivalently as a monoid in the category Sup of complete join-semilattices. A unital quantale is an idempotent semiring under join and multiplication. A unital quantale in which the identity is the top element of the underlying lattice is said to be strictly two-sided (or simply integral).

A commutative quantale is a quantale whose multiplication is commutative. A frame, with its multiplication given by the meet operation, is a typical example of a strictly two-sided commutative quantale. Another simple example is provided by the unit interval together with its usual multiplication.

An idempotent quantale is a quantale whose multiplication is idempotent. A frame is the same as an idempotent strictly two-sided quantale.

An involutive quantale is a quantale with an involution x ↦ x° that preserves joins:

(⋁_{i∈I} x_i)° = ⋁_{i∈I} (x_i)°.

A quantale homomorphism is a map f : Q₁ → Q₂ that preserves joins and multiplication for all x, y, x_i ∈ Q₁ and i ∈ I:

f(⋁_{i∈I} x_i) = ⋁_{i∈I} f(x_i) and f(x ∗ y) = f(x) ∗ f(y).
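The quantale axioms can be checked exhaustively in a small case. One standard source of examples is the powerset of a monoid, with join given by union and multiplication computed elementwise; the sketch below (our own construction and names, not from the article) uses subsets of the additive monoid of integers mod 3:

```python
from itertools import combinations

# Subsets of the monoid ({0,1,2}, + mod 3), ordered by inclusion.
# Join = union; multiplication X*Y = {x + y mod 3 : x in X, y in Y}.
# This forms a unital quantale with identity {0}. (Illustrative sketch.)

def mult(X, Y):
    return frozenset((x + y) % 3 for x in X for y in Y)

def big_join(sets):
    return frozenset().union(*sets)

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

P = powerset({0, 1, 2})

# Check the distributive law x * (join y_i) = join (x * y_i)
# over all elements x and all two-element families of y's.
distributive = all(
    mult(x, big_join([y1, y2])) == big_join([mult(x, y1), mult(x, y2)])
    for x in P for y1 in P for y2 in P)
```

Because multiplication is defined pointwise over the elements of each subset, it distributes over arbitrary unions, which is exactly the quantale condition.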
https://en.wikipedia.org/wiki/Quantale
Inmathematics, arelationdenotes some kind ofrelationshipbetween twoobjectsin aset, which may or may not hold.[1]As an example, "is less than" is a relation on the set ofnatural numbers; it holds, for instance, between the values1and3(denoted as1 < 3), and likewise between3and4(denoted as3 < 4), but not between the values3and1nor between4and4, that is,3 < 1and4 < 4both evaluate to false. As another example, "is sister of"is a relation on the set of all people, it holds e.g. betweenMarie CurieandBronisława Dłuska, and likewise vice versa. Set members may not be in relation "to a certain degree" – either they are in relation or they are not. Formally, a relationRover a setXcan be seen as a set ofordered pairs(x,y)of members ofX.[2]The relationRholds betweenxandyif(x,y)is a member ofR. For example, the relation "is less than" on the natural numbers is aninfinite setRlessof pairs of natural numbers that contains both(1,3)and(3,4), but neither(3,1)nor(4,4). The relation "is anontrivial divisorof"on the set of one-digit natural numbers is sufficiently small to be shown here:Rdv= { (2,4), (2,6), (2,8), (3,6), (3,9), (4,8) }; for example2is a nontrivial divisor of8, but not vice versa, hence(2,8) ∈Rdv, but(8,2) ∉Rdv. IfRis a relation that holds forxandy, one often writesxRy. For most common relations in mathematics, special symbols are introduced, like "<" for"is less than", and "|" for"is a nontrivial divisor of", and, most popular "=" for"is equal to". For example, "1 < 3", "1is less than3", and "(1,3) ∈Rless" mean all the same; some authors also write "(1,3) ∈ (<)". Various properties of relations are investigated. A relationRis reflexive ifxRxholds for allx, and irreflexive ifxRxholds for nox. It is symmetric ifxRyalways impliesyRx, and asymmetric ifxRyimplies thatyRxis impossible. It is transitive ifxRyandyRzalways impliesxRz. For example, "is less than" is irreflexive, asymmetric, and transitive, but neither reflexive nor symmetric. 
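The property definitions above translate directly into executable checks. A minimal sketch over the article's own examples, Rless ("is less than", restricted here to small numbers) and Rdv ("is a nontrivial divisor of" on the one-digit naturals):

```python
# The example relations from the text.
X = set(range(1, 10))
Rdv = {(2, 4), (2, 6), (2, 8), (3, 6), (3, 9), (4, 8)}
Rless = {(x, y) for x in X for y in X if x < y}

def reflexive(R, X):   return all((x, x) in R for x in X)
def irreflexive(R, X): return all((x, x) not in R for x in X)
def symmetric(R):      return all((y, x) in R for (x, y) in R)
def asymmetric(R):     return all((y, x) not in R for (x, y) in R)
def transitive(R):     return all((x, w) in R
                                  for (x, y) in R for (z, w) in R if y == z)

# (2,8) is in Rdv but (8,2) is not, as in the text
assert (2, 8) in Rdv and (8, 2) not in Rdv
# "is less than" is irreflexive, asymmetric, and transitive,
# but neither reflexive nor symmetric
assert irreflexive(Rless, X) and asymmetric(Rless) and transitive(Rless)
assert not reflexive(Rless, X) and not symmetric(Rless)
```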
"is sister of"is transitive, but neither reflexive (e.g.Pierre Curieis not a sister of himself), nor symmetric, nor asymmetric; while being irreflexive or not may be a matter of definition (is every woman a sister of herself?), "is ancestor of"is transitive, while "is parent of"is not. Mathematical theorems are known about combinations of relation properties, such as "a transitive relation is irreflexive if, and only if, it is asymmetric". Of particular importance are relations that satisfy certain combinations of properties. Apartial orderis a relation that is reflexive, antisymmetric, and transitive,[3]anequivalence relationis a relation that is reflexive, symmetric, and transitive,[4]afunctionis a relation that is right-unique and left-total (seebelow).[5][6] Since relations are sets, they can be manipulated using set operations, includingunion,intersection, andcomplementation, leading to thealgebra of sets. Furthermore, thecalculus of relationsincludes the operations of taking theconverseandcomposing relations.[7][8][9] The above concept of relation[a]has been generalized to admit relations between members of two different sets (heterogeneous relation, like "lies on" between the set of allpointsand that of alllinesin geometry), relations between three or more sets (finitary relation, like"personxlives in townyat timez"), and relations betweenclasses[b](like "is an element of"on the class of all sets, seeBinary relation § Sets versus classes). Given a setX, a relationRoverXis a set ofordered pairsof elements fromX, formally:R⊆ { (x,y) |x,y∈X}.[2][10] The statement(x,y) ∈Rreads "xisR-related toy" and is written ininfix notationasxRy.[7][8]The order of the elements is important; ifx≠ythenyRxcan be true or false independently ofxRy. For example,3divides9, but9does not divide3. 
A relation R on a finite set X may be represented as a directed graph or as a Boolean matrix; a transitive[c] relation R on a finite set X may also be represented as a Hasse diagram. For example, on the set of all divisors of 12, define the relation Rdiv by x Rdiv y if x is a proper divisor of y. Formally, X = { 1, 2, 3, 4, 6, 12 } and Rdiv = { (1,2), (1,3), (1,4), (1,6), (1,12), (2,4), (2,6), (2,12), (3,6), (3,12), (4,12), (6,12) }. The representation of Rdiv as a Boolean matrix is shown in the middle table; the representation both as a Hasse diagram and as a directed graph is shown in the left picture. The following are equivalent: As another example, a relation Rel on the set R of real numbers can be defined whose representation as a 2D-plot obtains an ellipse; see right picture. Since R is not finite, neither a directed graph, nor a finite Boolean matrix, nor a Hasse diagram can be used to depict Rel. Some important properties that a relation R over a set X may have are: The previous 2 alternatives are not exhaustive; e.g., the red relation y = x² given in the diagram below is neither irreflexive nor reflexive, since it contains the pair (0,0), but not (2,2), respectively. Again, the previous 3 alternatives are far from being exhaustive; as an example over the natural numbers, the relation x R y defined by x > 2 is neither symmetric (e.g. 5 R 1, but not 1 R 5) nor antisymmetric (e.g. 6 R 4, but also 4 R 6), let alone asymmetric. Uniqueness properties: Totality properties: Relations that satisfy certain combinations of the above properties are particularly useful, and thus have received names of their own. Orderings: Uniqueness properties: Uniqueness and totality properties: A relation R over sets X and Y is said to be contained in a relation S over X and Y, written R ⊆ S, if R is a subset of S, that is, for all x ∈ X and y ∈ Y, if x R y, then x S y. If R is contained in S and S is contained in R, then R and S are called equal, written R = S. If R is contained in S but S is not contained in R, then R is said to be smaller than S, written R ⊊ S. For example, on the rational numbers, the relation > is smaller than ≥, and equal to the composition > ∘ >.
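The Rdiv example and its Boolean-matrix representation can be rebuilt in a few lines:

```python
# Rebuilding Rdiv: divisibility with x != y on the divisors of 12,
# and its Boolean-matrix representation.
X = [1, 2, 3, 4, 6, 12]
Rdiv = {(x, y) for x in X for y in X if y % x == 0 and x != y}

assert Rdiv == {(1, 2), (1, 3), (1, 4), (1, 6), (1, 12), (2, 4), (2, 6),
                (2, 12), (3, 6), (3, 12), (4, 12), (6, 12)}

# rows and columns ordered as in X; entry 1 means (x, y) is in Rdiv
matrix = [[int((x, y) in Rdiv) for y in X] for x in X]
assert matrix[0] == [0, 1, 1, 1, 1, 1]   # 1 properly divides all other elements
assert matrix[5] == [0, 0, 0, 0, 0, 0]   # 12 properly divides nothing in X
```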
The above concept of relation has been generalized to admit relations between members of two different sets. Given setsXandY, aheterogeneous relationRoverXandYis a subset of{ (x,y) |x∈X,y∈Y}.[2][22]WhenX=Y, the relation concept described above is obtained; it is often calledhomogeneous relation(orendorelation)[23][24]to distinguish it from its generalization. The above properties and operations that are marked "[d]" and "[e]", respectively, generalize to heterogeneous relations. An example of a heterogeneous relation is "oceanxborders continenty". The best-known examples arefunctions[f]with distinct domains and ranges, such assqrt :N→R+.
https://en.wikipedia.org/wiki/Relation_(mathematics)
Inlogicandmathematics,relation constructionandrelational constructibilityhave to do with the ways that onerelationis determined by anindexed familyor asequenceof other relations, called therelation dataset. The relation in the focus of consideration is called thefaciendum. The relation dataset typically consists of a specified relation over sets of relations, called theconstructor, thefactor, or themethod of construction, plus a specified set of other relations, called thefaciens, theingredients, or themakings. Relation compositionand relation reduction are special cases of relation constructions.
https://en.wikipedia.org/wiki/Relation_construction
The relational calculus consists of two calculi, the tuple relational calculus and the domain relational calculus, which are part of the relational model for databases and provide a declarative way to specify database queries. The raison d'être of relational calculus is the formalization of query optimization, which is finding more efficient manners to execute the same query in a database. The relational calculus is similar to the relational algebra, which is also part of the relational model: while the relational calculus is meant as a declarative language that prescribes no execution order on the subexpressions of a relational calculus expression, the relational algebra is meant as an imperative language: the sub-expressions of a relational algebraic expression are meant to be executed from left-to-right and inside-out following their nesting. Per Codd's theorem, the relational algebra and the domain-independent relational calculus are logically equivalent. A relational algebra expression might prescribe the following steps to retrieve the phone numbers and names of book stores that supply Some Sample Book: A relational calculus expression would formulate this query in the following descriptive or declarative manner: The relational algebra and the domain-independent relational calculus are logically equivalent: for any algebraic expression, there is an equivalent expression in the calculus, and vice versa. This result is known as Codd's theorem. The raison d'être of the relational calculus is the formalization of query optimization. Query optimization consists in determining from a query the most efficient manner (or manners) to execute it. Query optimization can be formalized as translating a relational calculus expression delivering an answer A into efficient relational algebraic expressions delivering the same answer A. This database-related article is a stub. You can help Wikipedia by expanding it.
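The declarative style can be illustrated with a calculus-like query written as a Python comprehension: it describes which tuples belong to the answer without prescribing an execution order. The BookStore/Supplies rows below are invented for illustration:

```python
# Hypothetical tables for the "Some Sample Book" query from the text.
book_store = [
    {"store": "Alpha Books", "phone": "555-0100"},
    {"store": "Beta Books",  "phone": "555-0200"},
]
supplies = [
    {"store": "Alpha Books", "title": "Some Sample Book"},
    {"store": "Beta Books",  "title": "Another Book"},
]

# Declarative formulation: "the (store, phone) pairs such that there exists
# a Supplies tuple for that store with the given title".
answer = [
    (s["store"], s["phone"])
    for s in book_store
    if any(u["store"] == s["store"] and u["title"] == "Some Sample Book"
           for u in supplies)
]
assert answer == [("Alpha Books", "555-0100")]
```

An optimizer is free to evaluate such a specification however it likes, e.g. by first filtering `supplies` on the title; the answer is defined by the condition, not the order of evaluation.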
https://en.wikipedia.org/wiki/Relational_calculus
Indatabase theory,relational algebrais a theory that usesalgebraic structuresfor modeling data and defining queries on it with well foundedsemantics. The theory was introduced byEdgar F. Codd.[1] The main application of relational algebra is to provide a theoretical foundation forrelational databases, particularlyquery languagesfor such databases, chief among which isSQL. Relational databases store tabular data represented asrelations. Queries over relational databases often likewise return tabular data represented as relations. The main purpose of relational algebra is to defineoperatorsthat transform one or more input relations to an output relation. Given that these operators accept relations as input and produce relations as output, they can be combined and used to express complex queries that transform multiple input relations (whose data are stored in the database) into a single output relation (the query results). Unary operatorsaccept a single relation as input. Examples include operators to filter certain attributes (columns) ortuples(rows) from an input relation.Binary operatorsaccept two relations as input and combine them into a single output relation. For example, taking all tuples found in either relation (union), removing tuples from the first relation found in the second relation (difference), extending the tuples of the first relation with tuples in the second relation matching certain conditions, and so forth. Relational algebra received little attention outside of pure mathematics until the publication ofE.F. Codd'srelational model of datain 1970.[2]Codd proposed such an algebra as a basis for database query languages. Relational algebra operates on homogeneous sets of tuplesS={(sj1,sj2,...sjn)|j∈1...m}{\displaystyle S=\{(s_{j1},s_{j2},...s_{jn})|j\in 1...m\}}where we commonly interpretmto be the number of rows of tuples in a table andnto be the number of columns. All entries in each column have the sametype. 
A relation also has a unique tuple called theheaderwhich gives each column a unique name orattributeinside the relation. Attributes are used in projections and selections. The relational algebra usesset union,set difference, andCartesian productfrom set theory, and adds additional constraints to these operators to create new ones. For set union and set difference, the tworelationsinvolved must beunion-compatible—that is, the two relations must have the same set of attributes. Becauseset intersectionis defined in terms of set union and set difference, the two relations involved in set intersection must also be union-compatible. For the Cartesian product to be defined, the two relations involved must have disjoint headers—that is, they must not have a common attribute name. In addition, the Cartesian product is defined differently from the one insettheory in the sense that tuples are considered to be "shallow" for the purposes of the operation. That is, the Cartesian product of a set ofn-tuples with a set ofm-tuples yields a set of "flattened"(n+m)-tuples (whereas basic set theory would have prescribed a set of 2-tuples, each containing ann-tuple and anm-tuple). More formally,R×Sis defined as follows: R×S:={(r1,r2,…,rn,s1,s2,…,sm)|(r1,r2,…,rn)∈R,(s1,s2,…,sm)∈S}{\displaystyle R\times S:=\{(r_{1},r_{2},\dots ,r_{n},s_{1},s_{2},\dots ,s_{m})|(r_{1},r_{2},\dots ,r_{n})\in R,(s_{1},s_{2},\dots ,s_{m})\in S\}} The cardinality of the Cartesian product is the product of the cardinalities of its factors, that is, |R×S| = |R| × |S|. Aprojection(Π) is aunary operationwritten asΠa1,…,an(R){\displaystyle \Pi _{a_{1},\ldots ,a_{n}}(R)}wherea1,…,an{\displaystyle a_{1},\ldots ,a_{n}}is a set of attribute names. The result of such projection is defined as thesetthat is obtained when alltuplesinRare restricted to the set{a1,…,an}{\displaystyle \{a_{1},\ldots ,a_{n}\}}. 
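The "flattened" Cartesian product and its cardinality law can be checked directly; a small sketch:

```python
# The relational Cartesian product flattens tuples: combining an n-tuple
# with an m-tuple yields one (n+m)-tuple, not a pair of tuples.
R = {(1, "a"), (2, "b")}          # 2-tuples
S = {("x",), ("y",), ("z",)}      # 1-tuples

product = {r + s for r in R for s in S}   # tuple concatenation flattens

assert all(len(t) == 3 for t in product)  # (2+1)-tuples throughout
assert len(product) == len(R) * len(S)    # |R x S| = |R| * |S|
```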
Note: when implemented inSQLstandard the "default projection" returns amultisetinstead of a set, and theΠprojection to eliminate duplicate data is obtained by the addition of theDISTINCTkeyword. Ageneralized selection(σ) is aunary operationwritten asσφ(R){\displaystyle \sigma _{\varphi }(R)}whereφis apropositional formulathat consists ofatomsas allowed in thenormal selectionand the logical operators∧{\displaystyle \wedge }(and),∨{\displaystyle \lor }(or) and¬{\displaystyle \neg }(negation). This selection selects all thosetuplesinRfor whichφholds. To obtain a listing of all friends or business associates in an address book, the selection might be written asσisFriend = true∨isBusinessContact = true(addressBook){\displaystyle \sigma _{{\text{isFriend = true}}\,\lor \,{\text{isBusinessContact = true}}}({\text{addressBook}})}. The result would be a relation containing every attribute of every unique record whereisFriendis true or whereisBusinessContactis true. Arename(ρ) is aunary operationwritten asρa/b(R){\displaystyle \rho _{a/b}(R)}where the result is identical toRexcept that thebattribute in all tuples is renamed to anaattribute. This is commonly used to rename the attribute of arelationfor the purpose of a join. To rename the "isFriend" attribute to "isBusinessContact" in a relation,ρisBusinessContact / isFriend(addressBook){\displaystyle \rho _{\text{isBusinessContact / isFriend}}({\text{addressBook}})}might be used. There is also theρx(A1,…,An)(R){\displaystyle \rho _{x(A_{1},\ldots ,A_{n})}(R)}notation, whereRis renamed toxand the attributes{a1,…,an}{\displaystyle \{a_{1},\ldots ,a_{n}\}}are renamed to{A1,…,An}{\displaystyle \{A_{1},\ldots ,A_{n}\}}.[3] Natural join (⨝) is abinary operatorthat is written as (R⨝S) whereRandSarerelations.[a]The result of the natural join is the set of all combinations of tuples inRandSthat are equal on their common attribute names. 
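Projection and generalized selection can be sketched over the addressBook example (the concrete rows are invented; Π uses set semantics as described above, unlike SQL's default multiset):

```python
# Relation modeled as a list of attribute->value rows.
address_book = [
    {"name": "Ann", "isFriend": True,  "isBusinessContact": False},
    {"name": "Bob", "isFriend": False, "isBusinessContact": True},
    {"name": "Eve", "isFriend": False, "isBusinessContact": False},
]

def project(relation, *attrs):
    # set semantics: duplicates collapse, as with the DISTINCT-style projection
    return {tuple((a, row[a]) for a in attrs) for row in relation}

def select(relation, phi):
    # generalized selection: phi may combine atoms with and/or/not
    return [row for row in relation if phi(row)]

friends_or_business = select(
    address_book, lambda r: r["isFriend"] or r["isBusinessContact"])
assert [r["name"] for r in friends_or_business] == ["Ann", "Bob"]

# projecting on isFriend collapses the two False rows into one tuple
assert project(address_book, "isFriend") == {
    (("isFriend", True),), (("isFriend", False),)}
```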
For an example consider the tablesEmployeeandDeptand their natural join:[citation needed] Note that neither the employee named Mary nor the Production department appear in the result. Mary does not appear in the result because Mary's Department, "Human Resources", is not listed in the Dept relation and the Production department does not appear in the result because there are no tuples in the Employee relation that have "Production" as their DeptName attribute. This can also be used to definecomposition of relations. For example, the composition ofEmployeeandDeptis their join as shown above, projected on all but the common attributeDeptName. Incategory theory, the join is precisely thefiber product. The natural join is arguably one of the most important operators since it is the relational counterpart of the logical AND operator. Note that if the same variable appears in each of two predicates that are connected by AND, then that variable stands for the same thing and both appearances must always be substituted by the same value (this is a consequence of theidempotenceof the logical AND). In particular, natural join allows the combination of relations that are associated by aforeign key. For example, in the above example a foreign key probably holds fromEmployee.DeptNametoDept.DeptNameand then the natural join ofEmployeeandDeptcombines all employees with their departments. This works because the foreign key holds between attributes with the same name. If this is not the case such as in the foreign key fromDept.ManagertoEmployee.Namethen these columns must be renamed before taking the natural join. Such a join is sometimes also referred to as anequijoin. More formally the semantics of the natural join are defined as follows: whereFun(t)is apredicatethat is true for arelationt(in the mathematical sense)ifftis a function (that is,tdoes not map any attribute to multiple values). 
It is usually required thatRandSmust have at least one common attribute, but if this constraint is omitted, andRandShave no common attributes, then the natural join becomes exactly the Cartesian product. The natural join can be simulated with Codd's primitives as follows. Assume thatc1,...,cmare the attribute names common toRandS,r1,...,rnare the attribute names unique toRands1,...,skare the attribute names unique toS. Furthermore, assume that the attribute namesx1,...,xmare neither inRnor inS. In a first step the common attribute names inScan be renamed: Then we take the Cartesian product and select the tuples that are to be joined: Finally we take a projection to get rid of the renamed attributes: Consider tablesCarandBoatwhich list models of cars and boats and their respective prices. Suppose a customer wants to buy a car and a boat, but she does not want to spend more money for the boat than for the car. Theθ-join (⋈θ) on the predicateCarPrice≥BoatPriceproduces the flattened pairs of rows which satisfy the predicate. When using a condition where the attributes are equal, for example Price, then the condition may be specified asPrice=Priceor alternatively (Price) itself. In order to combine tuples from two relations where the combination condition is not simply the equality of shared attributes it is convenient to have a more general form of join operator, which is theθ-join (or theta-join). Theθ-join is a binary operator that is written asR⋈Saθb{\displaystyle {R\ \bowtie \ S \atop a\ \theta \ b}}orR⋈Saθv{\displaystyle {R\ \bowtie \ S \atop a\ \theta \ v}}whereaandbare attribute names,θis a binaryrelational operatorin the set{<, ≤, =, ≠, >, ≥},υis a value constant, andRandSare relations. The result of this operation consists of all combinations of tuples inRandSthat satisfyθ. The result of theθ-join is defined only if the headers ofSandRare disjoint, that is, do not contain a common attribute. 
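A direct implementation of the natural join, run on Employee/Dept rows consistent with the narrative in the text (the concrete values are assumed, since the tables themselves are not shown):

```python
employee = [
    {"Name": "Harry", "EmpId": 3415, "DeptName": "Finance"},
    {"Name": "Sally", "EmpId": 2241, "DeptName": "Sales"},
    {"Name": "Mary",  "EmpId": 1257, "DeptName": "Human Resources"},
]
dept = [
    {"DeptName": "Finance",    "Manager": "George"},
    {"DeptName": "Sales",      "Manager": "Harriet"},
    {"DeptName": "Production", "Manager": "Charles"},
]

def natural_join(R, S):
    common = set(R[0]) & set(S[0])
    return [
        {**r, **s}
        for r in R for s in S
        if all(r[a] == s[a] for a in common)  # Fun(t u s): agree on shared attrs
    ]

joined = natural_join(employee, dept)
# Mary ("Human Resources" not in Dept) and Production (no such employee)
# both drop out of the result, as described in the text:
assert sorted(row["Name"] for row in joined) == ["Harry", "Sally"]
```

With no shared attributes, `common` is empty, the condition is vacuously true, and the comprehension degenerates to the Cartesian product, matching the remark above.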
The simulation of this operation in the fundamental operations is therefore as follows: In case the operatorθis the equality operator (=) then this join is also called anequijoin. Note, however, that a computer language that supports the natural join and selection operators does not needθ-join as well, as this can be achieved by selection from the result of a natural join (which degenerates to Cartesian product when there are no shared attributes). In SQL implementations, joining on a predicate is usually called aninner join, and theonkeyword allows one to specify the predicate used to filter the rows. It is important to note: forming the flattened Cartesian product then filtering the rows is conceptually correct, but an implementation would use more sophisticated data structures to speed up the join query. The left semijoin (⋉ and ⋊) is a joining similar to the natural join and written asR⋉S{\displaystyle R\ltimes S}whereR{\displaystyle R}andS{\displaystyle S}arerelations.[b]The result is the set of all tuples inR{\displaystyle R}for which there is a tuple inS{\displaystyle S}that is equal on their common attribute names. The difference from a natural join is that other columns ofS{\displaystyle S}do not appear. For example, consider the tablesEmployeeandDeptand their semijoin:[citation needed] More formally the semantics of the semijoin can be defined as follows: R⋉S={t:t∈R∧∃s∈S(Fun⁡(t∪s))}{\displaystyle R\ltimes S=\{t:t\in R\land \exists s\in S(\operatorname {Fun} (t\cup s))\}} whereFun⁡(r){\displaystyle \operatorname {Fun} (r)}is as in the definition of natural join. The semijoin can be simulated using the natural join as follows. Ifa1,…,an{\displaystyle a_{1},\ldots ,a_{n}}are the attribute names ofR{\displaystyle R}, then R⋉S=Πa1,…,an(R⋈S).{\displaystyle R\ltimes S=\Pi _{a_{1},\ldots ,a_{n}}(R\bowtie S).} Since we can simulate the natural join with the basic operators it follows that this also holds for the semijoin. 
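The semijoin and the identity R ⋉ S = Π_{a1,…,an}(R ⋈ S) can be checked on the same kind of Employee/Dept data (rows assumed for illustration):

```python
def natural_join(R, S):
    common = set(R[0]) & set(S[0])
    return [{**r, **s} for r in R for s in S
            if all(r[a] == s[a] for a in common)]

def project(R, attrs):
    return {tuple(sorted((a, row[a]) for a in attrs)) for row in R}

def semijoin(R, S):
    # keep the R-tuples that have a matching S-tuple; no S columns appear
    common = set(R[0]) & set(S[0])
    return [r for r in R
            if any(all(r[a] == s[a] for a in common) for s in S)]

employee = [
    {"Name": "Harry", "DeptName": "Finance"},
    {"Name": "Sally", "DeptName": "Sales"},
    {"Name": "Mary",  "DeptName": "Human Resources"},
]
dept = [{"DeptName": "Finance"}, {"DeptName": "Sales"},
        {"DeptName": "Production"}]

assert [r["Name"] for r in semijoin(employee, dept)] == ["Harry", "Sally"]

# the simulation from the text: semijoin = projection of the natural join
attrs_R = ["Name", "DeptName"]
assert project(semijoin(employee, dept), attrs_R) == \
       project(natural_join(employee, dept), attrs_R)
```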
In Codd's 1970 paper, semijoin is called restriction.[1] The antijoin (▷), written asR▷SwhereRandSarerelations,[c]is similar to the semijoin, but the result of an antijoin is only those tuples inRfor which there isnotuple inSthat is equal on their common attribute names.[4] For an example consider the tablesEmployeeandDeptand their antijoin: The antijoin is formally defined as follows: or whereFun(t∪s)is as in the definition of natural join. The antijoin can also be defined as thecomplementof the semijoin, as follows: Given this, the antijoin is sometimes called the anti-semijoin, and the antijoin operator is sometimes written as semijoin symbol with a bar above it, instead of ▷. In the case where the relations have the same attributes (union-compatible), antijoin is the same as minus. The division (÷) is a binary operation that is written asR÷S. Division is not implemented directly in SQL. The result consists of the restrictions of tuples inRto the attribute names unique toR, i.e., in the header ofRbut not in the header ofS, for which it holds that all their combinations with tuples inSare present inR. IfDBProjectcontains all the tasks of the Database project, then the result of the division above contains exactly the students who have completed both of the tasks in the Database project. More formally the semantics of the division is defined as follows: where {a1,...,an} is the set of attribute names unique toRandt[a1,...,an] is the restriction oftto this set. It is usually required that the attribute names in the header ofSare a subset of those ofRbecause otherwise the result of the operation will always be empty. The simulation of the division with the basic operations is as follows. We assume thata1,...,anare the attribute names unique toRandb1,...,bmare the attribute names ofS. 
In the first step we project R on its unique attribute names and construct all combinations with tuples in S: In the prior example, T would represent a table such that every Student (because Student is the unique key / attribute of the Completed table) is combined with every given Task. So Eugene, for instance, would have two rows, Eugene → Database1 and Eugene → Database2 in T. In the next step we subtract R from the relation T: In U we have the possible combinations that "could have" been in R, but weren't. So if we now take the projection on the attribute names unique to R, then we have the restrictions of the tuples in R for which not all combinations with tuples in S were present in R: So what remains to be done is to take the projection of R on its unique attribute names and subtract those in V: In practice the classical relational algebra described above is extended with various operations such as outer joins, aggregate functions and even transitive closure.[5] Whereas the result of a join (or inner join) consists of tuples formed by combining matching tuples in the two operands, an outer join contains those tuples and additionally some tuples formed by extending an unmatched tuple in one of the operands by "fill" values for each of the attributes of the other operand. Outer joins are not considered part of the classical relational algebra discussed so far.[6] The operators defined in this section assume the existence of a null value, ω, which we do not define, to be used for the fill values; in practice this corresponds to the NULL in SQL. In order to make subsequent selection operations on the resulting table meaningful, a semantic meaning needs to be assigned to nulls; in Codd's approach the propositional logic used by the selection is extended to a three-valued logic, although we elide those details in this article. Three outer join operators are defined: left outer join, right outer join, and full outer join. (The word "outer" is sometimes omitted.)
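The four steps of the division simulation, executed on a Completed/DBProject instance matching the Fred/Eugene/Sarah narrative above (the exact rows are assumed):

```python
# R = Completed(Student, Task); S = DBProject(Task)
completed = {
    ("Fred",   "Database1"), ("Fred",   "Database2"), ("Fred", "Compiler1"),
    ("Eugene", "Database1"), ("Eugene", "Compiler1"),
    ("Sarah",  "Database1"), ("Sarah",  "Database2"),
}
db_project = {("Database1",), ("Database2",)}

students = {(s,) for (s, _) in completed}                 # project R on Student
T = {(s, t) for (s,) in students for (t,) in db_project}  # all combinations
U = T - completed                                         # combinations missing from R
V = {(s,) for (s, _) in U}                                # students missing some task
result = students - V                                     # R / S

# Eugene appears twice in T but lacks Database2 in Completed, so only
# Fred and Sarah completed every DBProject task:
assert result == {("Fred",), ("Sarah",)}
```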
The left outer join (⟕) is written asR⟕SwhereRandSarerelations.[d]The result of the left outer join is the set of all combinations of tuples inRandSthat are equal on their common attribute names, in addition (loosely speaking) to tuples inRthat have no matching tuples inS.[citation needed] For an example consider the tablesEmployeeandDeptand their left outer join: In the resulting relation, tuples inSwhich have no common values in common attribute names with tuples inRtake anullvalue,ω. Since there are no tuples inDeptwith aDeptNameofFinanceorExecutive,ωs occur in the resulting relation where tuples inEmployeehave aDeptNameofFinanceorExecutive. Letr1,r2, ...,rnbe the attributes of the relationRand let {(ω, ...,ω)} be the singleton relation on the attributes that areuniqueto the relationS(those that are not attributes ofR). Then the left outer join can be described in terms of the natural join (and hence using basic operators) as follows: The right outer join (⟖) behaves almost identically to the left outer join, but the roles of the tables are switched. The right outer join ofrelationsRandSis written asR⟖S.[e]The result of the right outer join is the set of all combinations of tuples inRandSthat are equal on their common attribute names, in addition to tuples inSthat have no matching tuples inR.[citation needed] For example, consider the tablesEmployeeandDeptand their right outer join: In the resulting relation, tuples inRwhich have no common values in common attribute names with tuples inStake anullvalue,ω. Since there are no tuples inEmployeewith aDeptNameofProduction,ωs occur in the Name and EmpId attributes of the resulting relation where tuples inDepthadDeptNameofProduction. Lets1,s2, ...,snbe the attributes of the relationSand let {(ω, ...,ω)} be the singleton relation on the attributes that areuniqueto the relationR(those that are not attributes ofS). 
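The left outer join as described, i.e. the natural join plus unmatched R-tuples padded with ω on S's unique attributes, in a short sketch (Employee/Dept rows assumed, following the text's Finance/Executive scenario; ω modeled as Python's None):

```python
OMEGA = None  # stands in for the null value ω

employee = [
    {"Name": "Harry", "DeptName": "Finance"},
    {"Name": "Sally", "DeptName": "Sales"},
    {"Name": "Tim",   "DeptName": "Executive"},
]
dept = [
    {"DeptName": "Sales",      "Manager": "Harriet"},
    {"DeptName": "Production", "Manager": "Charles"},
]

def left_outer_join(R, S):
    common = set(R[0]) & set(S[0])
    s_only = set(S[0]) - common
    out = []
    for r in R:
        matches = [s for s in S if all(r[a] == s[a] for a in common)]
        if matches:
            out.extend({**r, **s} for s in matches)
        else:
            out.append({**r, **{a: OMEGA for a in s_only}})  # pad with ω
    return out

rows = left_outer_join(employee, dept)
# Dept has no Finance or Executive tuple, so those employees get ω:
assert {r["Name"] for r in rows if r["Manager"] is OMEGA} == {"Harry", "Tim"}
assert {"Name": "Sally", "DeptName": "Sales", "Manager": "Harriet"} in rows
```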
Then, as with the left outer join, the right outer join can be simulated using the natural join as follows: The outer join (⟗) or full outer join in effect combines the results of the left and right outer joins. The full outer join is written as R ⟗ S where R and S are relations.[f] The result of the full outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition to tuples in S that have no matching tuples in R and tuples in R that have no matching tuples in S in their common attribute names.[citation needed] For an example consider the tables Employee and Dept and their full outer join: In the resulting relation, tuples in R which have no common values in common attribute names with tuples in S take a null value, ω. Tuples in S which have no common values in common attribute names with tuples in R also take a null value, ω. The full outer join can be simulated using the left and right outer joins (and hence the natural join and set union) as follows: There is nothing in relational algebra introduced so far that would allow computations on the data domains (other than evaluation of propositional expressions involving equality). For example, it is not possible using only the algebra introduced so far to write an expression that would multiply the numbers from two columns, e.g. a unit price with a quantity to obtain a total price. Practical query languages have such facilities, e.g. the SQL SELECT allows arithmetic operations to define new columns in the result (SELECT unit_price * quantity AS total_price FROM t), and a similar facility is provided more explicitly by Tutorial D's EXTEND keyword.[7] In database theory, this is called extended projection.[8]: 213 Furthermore, computing various functions on a column, like the summing up of its elements, is also not possible using the relational algebra introduced so far. There are five aggregate functions that are included with most relational database systems.
These operations are Sum, Count, Average, Maximum and Minimum. In relational algebra the aggregation operation over a schema (A1, A2, ..., An) is written as follows: where each Aj', 1 ≤ j ≤ k, is one of the original attributes Ai, 1 ≤ i ≤ n. The attributes preceding the g are grouping attributes, which function like a "group by" clause in SQL. Then there are an arbitrary number of aggregation functions applied to individual attributes. The operation is applied to an arbitrary relation r. The grouping attributes are optional, and if they are not supplied, the aggregation functions are applied across the entire relation to which the operation is applied. Let's assume that we have a table named Account with three columns, namely Account_Number, Branch_Name and Balance. We wish to find the maximum balance of each branch. This is accomplished by Branch_Name G Max(Balance) (Account). To find the highest balance of all accounts regardless of branch, we could simply write G Max(Balance) (Account). Grouping is often written as Branch_Name ɣ Max(Balance) (Account) instead.[8] Although relational algebra seems powerful enough for most practical purposes, there are some simple and natural operators on relations that cannot be expressed by relational algebra. One of them is the transitive closure of a binary relation. Given a domain D, let binary relation R be a subset of D × D. The transitive closure R+ of R is the smallest subset of D × D that contains R and satisfies the following condition: It can be proved that there is no relational algebra expression E(R), taking R as a variable argument, that produces R+.[9] SQL however officially supports such fixpoint queries since 1999, and it had vendor-specific extensions in this direction well before that. Relational database management systems often include a query optimizer which attempts to determine the most efficient way to execute a given query. Query optimizers enumerate possible query plans, estimate their cost, and pick the plan with the lowest estimated cost.
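Both extensions can be sketched concretely: the grouped maximum Branch_Name G Max(Balance) (Account), and a fixpoint computation of the transitive closure R+ that classical relational algebra cannot express (the Account rows are invented for illustration):

```python
account = [
    {"Account_Number": "A-101", "Branch_Name": "Downtown", "Balance": 500},
    {"Account_Number": "A-102", "Branch_Name": "Downtown", "Balance": 700},
    {"Account_Number": "A-201", "Branch_Name": "Uptown",   "Balance": 350},
]

# Branch_Name G Max(Balance) (Account): group by branch, take the maximum
max_by_branch = {}
for row in account:
    b = row["Branch_Name"]
    max_by_branch[b] = max(max_by_branch.get(b, row["Balance"]), row["Balance"])
assert max_by_branch == {"Downtown": 700, "Uptown": 350}

def transitive_closure(R):
    # iterate to a fixpoint: add (x, w) whenever (x, y) and (y, w) are present
    closure = set(R)
    while True:
        step = {(x, w) for (x, y) in closure for (z, w) in closure if y == z}
        if step <= closure:
            return closure
        closure |= step

assert transitive_closure({(1, 2), (2, 3), (3, 4)}) == \
       {(1, 2), (2, 3), (3, 4), (1, 3), (2, 4), (1, 4)}
```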
If queries are represented by operators from relational algebra, the query optimizer can enumerate possible query plans by rewriting the initial query using the algebraic properties of these operators. Queriescan be represented as atree, where The primary goal of the query optimizer is to transformexpression treesinto equivalent expression trees, where the average size of the relations yielded by subexpressions in the tree is smaller than it was before theoptimization. The secondary goal is to try to form common subexpressions within a single query, or if there is more than one query being evaluated at the same time, in all of those queries. The rationale behind the second goal is that it is enough to compute common subexpressions once, and the results can be used in all queries that contain that subexpression. Here are a set of rules that can be used in such transformations. Rules about selection operators play the most important role in query optimization. Selection is an operator that very effectively decreases the number of rows in its operand, so if the selections in an expression tree are moved towards the leaves, the internalrelations(yielded by subexpressions) will likely shrink. Selection isidempotent(multiple applications of the same selection have no additional effect beyond the first one), andcommutative(the order selections are applied in has no effect on the eventual result). A selection whose condition is aconjunctionof simpler conditions is equivalent to a sequence of selections with those same individual conditions, and selection whose condition is adisjunctionis equivalent to a union of selections. These identities can be used to merge selections so that fewer selections need to be evaluated, or to split them so that the component selections may be moved or optimized separately. Cross product is the costliest operator to evaluate. If the inputrelationshaveNandMrows, the result will containNM{\displaystyle NM}rows. 
Therefore, it is important to decrease the size of both operands before applying the cross product operator.

This can be done effectively if the cross product is followed by a selection operator, e.g. σA(R × P). Considering the definition of join, this is the most likely case. If the cross product is not followed by a selection operator, we can try to push down a selection from higher levels of the expression tree using the other selection rules.

In the above case the condition A is broken up into conditions B, C and D using the split rules about complex selection conditions, so that A = B ∧ C ∧ D, where B contains attributes only from R, C contains attributes only from P, and D contains the part of A that contains attributes from both R and P. Note that B, C or D are possibly empty. Then the following holds:

σA(R × P) = σD(σB(R) × σC(P))

Selection is distributive over the set difference, intersection, and union operators. The following three rules are used to push selection below set operations in the expression tree:

σA(R ∖ P) = σA(R) ∖ σA(P)
σA(R ∪ P) = σA(R) ∪ σA(P)
σA(R ∩ P) = σA(R) ∩ σA(P)

For the set difference and the intersection operators, it is possible to apply the selection operator to just one of the operands following the transformation. This can be beneficial where one of the operands is small, and the overhead of evaluating the selection operator outweighs the benefits of using a smaller relation as an operand.

Selection commutes with projection if and only if the fields referenced in the selection condition are a subset of the fields in the projection. Performing selection before projection may be useful if the operand is a cross product or join. In other cases, if the selection condition is relatively expensive to compute, moving selection outside the projection may reduce the number of tuples which must be tested (since projection may produce fewer tuples due to the elimination of duplicates resulting from omitted fields).
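The selection rules above can be checked directly by modeling relations as Python sets of tuples. This is a minimal sketch: the relations R(a, b) and P(c, d), the conditions, and the helper names are all our own illustrative choices.

```python
R = {(1, 10), (2, 20), (3, 30)}          # hypothetical relation R(a, b)
P = {(2, "x"), (3, "y"), (4, "z")}       # hypothetical relation P(c, d)

def select(pred, rel):
    """Selection: keep the tuples satisfying the condition."""
    return {t for t in rel if pred(t)}

def cross(r, p):
    """Cross product: concatenate every pair of tuples."""
    return {rt + pt for rt in r for pt in p}

B = lambda t: t[0] > 1                   # mentions only R's attribute a
C = lambda t: t[0] < 4                   # mentions only P's attribute c
D = lambda t: t[0] == t[2]               # mentions both (a = c), over R x P tuples
A = lambda t: t[0] > 1 and t[2] < 4 and t[0] == t[2]   # A = B ∧ C ∧ D

# Idempotence and commutativity of selection
assert select(B, select(B, R)) == select(B, R)
assert select(lambda t: t[0] > 1, select(lambda t: t[1] < 25, R)) == \
       select(lambda t: t[1] < 25, select(lambda t: t[0] > 1, R))

# Pushing B and C below the cross product leaves only D on top:
# sigma_A(R x P) = sigma_D(sigma_B(R) x sigma_C(P))
naive = select(A, cross(R, P))
pushed = select(D, cross(select(B, R), select(C, P)))
assert naive == pushed
```

The pushed form evaluates the same result from a smaller intermediate product (here 2×2 = 4 tuples rather than 3×3 = 9), which is the whole point of the rewrite.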
Projection is idempotent, so that a series of (valid) projections is equivalent to the outermost projection. Projection is distributive over set union:

πA(R ∪ S) = πA(R) ∪ πA(S)

Projection does not distribute over intersection and set difference. Counterexamples are given by:

πA({(a, b)} ∩ {(a, b′)}) = πA(∅) = ∅, whereas πA({(a, b)}) ∩ πA({(a, b′)}) = {(a)}
πA({(a, b)} ∖ {(a, b′)}) = {(a)}, whereas πA({(a, b)}) ∖ πA({(a, b′)}) = ∅

where b is assumed to be distinct from b′.

Successive renames of a variable can be collapsed into a single rename. Rename operations which have no variables in common can be arbitrarily reordered with respect to one another, which can be exploited to make successive renames adjacent so that they can be collapsed.

Rename is distributive over set difference, union, and intersection. Cartesian product is distributive over union.

The first query language to be based on Codd's algebra was Alpha, developed by Dr. Codd himself. Subsequently, ISBL was created, and this pioneering work has been acclaimed by many authorities[10] as having shown the way to make Codd's idea into a useful language. Business System 12 was a short-lived industry-strength relational DBMS that followed the ISBL example.

In 1998 Chris Date and Hugh Darwen proposed a language called Tutorial D intended for use in teaching relational database theory, and its query language also draws on ISBL's ideas.[11] Rel is an implementation of Tutorial D. Bmg is an implementation of relational algebra in Ruby which closely follows the principles of Tutorial D and The Third Manifesto.[12]

Even the query language of SQL is loosely based on a relational algebra, though the operands in SQL (tables) are not exactly relations and several useful theorems about the relational algebra do not hold in the SQL counterpart (arguably to the detriment of optimisers and/or users). The SQL table model is a bag (multiset), rather than a set. For example, the expression (R ∪ S) ∖ T = (R ∖ T) ∪ (S ∖ T) is a theorem for relational algebra on sets, but not for relational algebra on bags.[8]
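The failure of projection to distribute over set difference can be verified with the standard two-tuple counterexample; the concrete relations below are our own choice, matching the b ≠ b′ assumption in the text.

```python
def project(cols, rel):
    """Projection: keep only the listed column positions (sets drop duplicates)."""
    return {tuple(t[i] for i in cols) for t in rel}

a, b, b2 = "a", "b", "b'"     # b and b' are distinct
R = {(a, b)}
S = {(a, b2)}

# pi_A(R \ S): the difference is {(a, b)} because b != b', so projecting keeps (a,)
lhs = project((0,), R - S)
# pi_A(R) \ pi_A(S): both projections collapse to {(a,)}, so the difference is empty
rhs = project((0,), R) - project((0,), S)

assert lhs == {("a",)}
assert rhs == set()
assert lhs != rhs             # projection does not distribute over set difference
```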
https://en.wikipedia.org/wiki/Relational_algebra
In mathematics, a residuated Boolean algebra is a residuated lattice whose lattice structure is that of a Boolean algebra. Examples include Boolean algebras with the monoid taken to be conjunction, the set of all formal languages over a given alphabet Σ under concatenation, the set of all binary relations on a given set X under relational composition, and more generally the power set of any equivalence relation, again under relational composition. The original application was to relation algebras as a finitely axiomatized generalization of the binary relation example, but there exist interesting examples of residuated Boolean algebras that are not relation algebras, such as the language example.

A residuated Boolean algebra is an algebraic structure (L, ∧, ∨, ¬, 0, 1, •, I, \, /) such that (L, ∧, ∨, ¬, 0, 1) is a Boolean algebra, (L, •, I) is a monoid, and the residuation equivalences y ≤ x\z ⇔ x•y ≤ z ⇔ x ≤ z/y hold.

An equivalent signature better suited to the relation algebra application is (L, ∧, ∨, ¬, 0, 1, •, I, ▷, ◁), where the unary operations x\ and x▷ are intertranslatable in the manner of De Morgan's laws via

x▷z = ¬(x\¬z) and x\z = ¬(x▷¬z),

and dually /y and ◁y as

z◁y = ¬(¬z/y) and z/y = ¬(¬z◁y),

with the residuation axioms in the residuated lattice article reorganized accordingly (replacing z by ¬z) to read

(x•y) ∧ z = 0 ⇔ (x▷z) ∧ y = 0 ⇔ (z◁y) ∧ x = 0.

This De Morgan dual reformulation is motivated and discussed in more detail in the section below on conjugacy.

Since residuated lattices and Boolean algebras are each definable with finitely many equations, so are residuated Boolean algebras, whence they form a finitely axiomatizable variety.

The De Morgan duals ▷ and ◁ of residuation arise as follows. Among residuated lattices, Boolean algebras are special by virtue of having a complementation operation ¬. This permits an alternative expression of the three inequalities in the axiomatization of the two residuals in terms of disjointness, via the equivalence x ≤ y ⇔ x ∧ ¬y = 0.
Abbreviating x∧y = 0 to x#y as the expression of their disjointness, and substituting ¬z for z in the axioms, they become

y ≤ x\¬z ⇔ x•y ≤ ¬z ⇔ x ≤ ¬z/y

which, with a little Boolean manipulation, become

¬(x\¬z) # y ⇔ (x•y) # z ⇔ ¬(¬z/y) # x.

Now ¬(x\¬z) is reminiscent of De Morgan duality, suggesting that x\ be thought of as a unary operation f, defined by f(y) = x\y, that has a De Morgan dual ¬f(¬y), analogous to ∀xφ(x) = ¬∃x¬φ(x). Denoting this dual operation as x▷, we define x▷z as ¬(x\¬z). Similarly we define another operation z◁y as ¬(¬z/y). By analogy with x\ as the residual operation associated with the operation x•, we refer to x▷ as the conjugate operation, or simply conjugate, of x•. Likewise ◁y is the conjugate of •y. Unlike residuals, conjugacy is an equivalence relation between operations: if f is the conjugate of g then g is also the conjugate of f, i.e. the conjugate of the conjugate of f is f. Another advantage of conjugacy is that it becomes unnecessary to speak of right and left conjugates, that distinction now being inherited from the difference between x• and •x, which have as their respective conjugates x▷ and ◁x. (But this advantage accrues also to residuals when x\ is taken to be the residual operation to x•.)

All this yields (along with the Boolean algebra and monoid axioms) the following equivalent axiomatization of a residuated Boolean algebra:

(x•y) # z ⇔ (x▷z) # y ⇔ (z◁y) # x.

With this signature it remains the case that this axiomatization can be expressed as finitely many equations.

In Examples 2 and 3 it can be shown that x▷I = I◁x. In Example 2 both sides equal the converse x˘ of x, while in Example 3, both sides are I when x contains the empty word and 0 otherwise. In the former case x˘˘ = x. This is impossible for the latter because x▷I retains hardly any information about x. Hence in Example 2 we can substitute x˘ for x in x▷I = x˘ = I◁x and cancel (soundly) to give

x˘▷I = x = I◁x˘.

The equation x˘˘ = x can be proved from these two equations. Tarski's notion of a relation algebra can be defined as a residuated Boolean algebra having an operation x˘ satisfying these two equations.
The cancellation step in the above is not possible for Example 3, which therefore is not a relation algebra,x˘ being uniquely determined asx▷I. Consequences of this axiomatization of converse includex˘˘ =x, ¬(x˘) = (¬x)˘,(x∨y)˘ =x˘ ∨y˘, and (x•y)˘ =y˘•x˘.
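In the binary-relation model (Example 2), the conjugates of composition work out to x▷z = x˘•z and z◁y = z•y˘, so the disjointness equivalences (x•y)#z ⇔ (x▷z)#y ⇔ (z◁y)#x can be checked by brute force. A minimal sketch, with all helper names our own:

```python
from itertools import product
import random

U = range(3)
PAIRS = list(product(U, U))

def compose(x, y):
    """Relational composition x • y."""
    return {(a, c) for (a, b) in x for (b2, c) in y if b == b2}

def converse(x):
    """The converse x-breve."""
    return {(b, a) for (a, b) in x}

def disjoint(x, y):
    """x # y, i.e. x ∧ y = 0 in the power-set Boolean algebra."""
    return not (x & y)

random.seed(0)
for _ in range(200):
    x, y, z = ({p for p in PAIRS if random.random() < 0.4} for _ in range(3))
    lhs = disjoint(compose(x, y), z)
    mid = disjoint(compose(converse(x), z), y)   # (x ▷ z) # y
    rhs = disjoint(compose(z, converse(y)), x)   # (z ◁ y) # x
    assert lhs == mid == rhs

# And x ▷ I is the converse of x, as the text states for Example 2
I = {(a, a) for a in U}
x = {(0, 1), (1, 2)}
assert compose(converse(x), I) == converse(x)
```

These three equivalent disjointness statements are exactly the Schröder equivalences for binary relations, which is why the relation-algebra signature with ▷ and ◁ is the natural one for that application.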
https://en.wikipedia.org/wiki/Residuated_Boolean_algebra
Spatial–temporal reasoning is an area of artificial intelligence that draws from the fields of computer science, cognitive science, and cognitive psychology. The theoretic goal, on the cognitive side, involves representing and reasoning about spatial-temporal knowledge in the mind. The applied goal, on the computing side, involves developing high-level control systems of automata for navigating and understanding time and space.

A convergent result in cognitive psychology is that the connection relation is the first spatial relation that human babies acquire, followed by understanding orientation relations and distance relations. Internal relations among the three kinds of spatial relations can be computationally and systematically explained within the theory of cognitive prism.

Without addressing internal relations among spatial relations, AI researchers contributed many fragmentary representations. Examples of temporal calculi include Allen's interval algebra and Vilain and Kautz's point algebra. The most prominent spatial calculi are mereotopological calculi, Frank's cardinal direction calculus, Freksa's double cross calculus, Egenhofer and Franzosa's 4- and 9-intersection calculi, Ligozat's flip-flop calculus, various region connection calculi (RCC), and the Oriented Point Relation Algebra. Recently, spatio-temporal calculi have been designed that combine spatial and temporal information. For example, the spatiotemporal constraint calculus (STCC) by Gerevini and Nebel combines Allen's interval algebra with RCC-8. Moreover, the qualitative trajectory calculus (QTC) allows for reasoning about moving objects.

An emphasis in the literature has been on qualitative spatial-temporal reasoning, which is based on qualitative abstractions of temporal and spatial aspects of the common-sense background knowledge on which our human perspective of physical reality is based.
Methodologically, qualitative constraint calculi restrict the vocabulary of rich mathematical theories dealing with temporal or spatial entities such that specific aspects of these theories can be treated within decidable fragments with simple qualitative (non-metric) languages. Contrary to mathematical or physical theories about space and time, qualitative constraint calculi allow for rather inexpensive reasoning about entities located in space and time. For this reason, the limited expressiveness of qualitative representation formalisms is a benefit if such reasoning tasks need to be integrated in applications. For example, some of these calculi may be implemented for handling spatial GIS queries efficiently, and some may be used for navigating, and communicating with, a mobile robot. Most of these calculi can be formalized as abstract relation algebras, such that reasoning can be carried out at a symbolic level. For computing solutions of a constraint network, the path-consistency algorithm is an important tool.
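As a minimal sketch of how such symbolic reasoning works, the following instantiates the path-consistency algorithm for the point algebra (base relations {<, =, >} between time points). The composition table is the standard one for points on a line; the function names and the three-point example are our own.

```python
# Composition of base relations between time points: e.g. a<b and b<c forces
# a<c, while a<b and b>c constrains a vs c not at all.
COMP = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1, r2):
    """Composition of two disjunctive relations: union over base-pair compositions."""
    out = set()
    for a in r1:
        for b in r2:
            out |= COMP[(a, b)]
    return out

def path_consistency(n, c):
    """Refine c[(i, j)] (sets of allowed base relations between points i and j)
    by intersecting each constraint with every two-step path through a third point,
    until a fixed point is reached."""
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    if len({i, j, k}) < 3:
                        continue
                    refined = c[(i, j)] & compose(c[(i, k)], c[(k, j)])
                    if refined != c[(i, j)]:
                        c[(i, j)] = refined
                        changed = True
    return c

# Three points with x < y and y < z; the relation between x and z is unknown.
ALL = {"<", "=", ">"}
c = {(i, j): set(ALL) for i in range(3) for j in range(3)}
c[(0, 1)] = {"<"}; c[(1, 0)] = {">"}
c[(1, 2)] = {"<"}; c[(2, 1)] = {">"}
result = path_consistency(3, c)
print(result[(0, 2)])   # propagation forces x < z
```

Path consistency decides consistency for the point algebra; for richer calculi such as Allen's interval algebra it is a sound but incomplete filtering step.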
https://en.wikipedia.org/wiki/Spatial-temporal_reasoning
In mathematics, a finitary relation over a sequence of sets X1, ..., Xn is a subset of the Cartesian product X1 × ... × Xn; that is, it is a set of n-tuples (x1, ..., xn), each being a sequence of elements xi in the corresponding Xi.[1][2][3] Typically, the relation describes a possible connection between the elements of an n-tuple. For example, the relation "x is divisible by y and z" consists of the set of 3-tuples such that, when substituted for x, y and z, respectively, they make the sentence true.

The non-negative integer n that gives the number of "places" in the relation is called the arity, adicity or degree of the relation. A relation with n "places" is variously called an n-ary relation, an n-adic relation or a relation of degree n. Relations with a finite number of places are called finitary relations (or simply relations if the context is clear). It is also possible to generalize the concept to infinitary relations with infinite sequences.[4]

When two objects, qualities, classes, or attributes, viewed together by the mind, are seen under some connexion, that connexion is called a relation.

Since the definition is predicated on the underlying sets X1, ..., Xn, R may be more formally defined as the (n + 1)-tuple (X1, ..., Xn, G), where G, called the graph of R, is a subset of the Cartesian product X1 × ... × Xn. As is often done in mathematics, the same symbol is used to refer to the mathematical object and an underlying set, so the statement (x1, ..., xn) ∈ R is often used to mean (x1, ..., xn) ∈ G. The statement is read "x1, ..., xn are R-related" and is denoted using prefix notation by Rx1⋯xn and using postfix notation by x1⋯xnR. In the case where R is a binary relation, those statements are also denoted using infix notation by x1Rx2.

Let a Boolean domain B be a two-element set, say, B = {0, 1}, whose elements can be interpreted as logical values, typically 0 = false and 1 = true. The characteristic function of R, denoted by χR, is the Boolean-valued function χR: X1 × ...
×Xn→B, defined byχR((x1, ...,xn)) = 1ifRx1⋯xnandχR((x1, ...,xn)) = 0otherwise. In applied mathematics,computer scienceand statistics, it is common to refer to a Boolean-valued function as ann-arypredicate. From the more abstract viewpoint offormal logicandmodel theory, the relationRconstitutes alogical modelor arelational structure, that serves as one of many possibleinterpretationsof somen-ary predicate symbol. Because relations arise in many scientific disciplines, as well as in many branches ofmathematicsandlogic, there is considerable variation in terminology. Aside from theset-theoreticextensionof a relational concept or term, the term "relation" can also be used to refer to the corresponding logical entity, either thelogical comprehension, which is the totality ofintensionsor abstract properties shared by all elements in the relation, or else the symbols denoting these elements and intensions. Further, some writers of the latter persuasion introduce terms with more concrete connotations (such as "relational structure" for the set-theoretic extension of a given relational concept). Nullary (0-ary) relations count only two members: the empty nullary relation, which never holds, and the universal nullary relation, which always holds. This is because there is only one 0-tuple, the empty tuple (), and there are exactly two subsets of the (singleton) set of all 0-tuples. They are sometimes useful for constructing the base case of aninductionargument. Unary (1-ary) relations can be viewed as a collection of members (such as the collection ofNobel laureates) having some property (such as that of having been awarded theNobel Prize). Every nullary function is a unary relation. Binary(2-ary) relations are the most commonly studied form of finitary relations. Homogeneous binary relations (whereX1=X2) include Heterogeneous binary relations include Ternary(3-ary) relations include, for example, thebinary functions, which relate two inputs and the output. 
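The characteristic function and the divisibility relation mentioned earlier can be combined into one small sketch; the function names are our own illustrative choices.

```python
def R(x, y, z):
    """Membership test for the ternary relation "x is divisible by y and z"
    over the positive integers (the example relation from the text)."""
    return x % y == 0 and x % z == 0

def chi_R(t):
    """Characteristic function chi_R into the Boolean domain B = {0, 1}."""
    return 1 if R(*t) else 0

assert chi_R((12, 3, 4)) == 1   # 12 is divisible by 3 and by 4
assert chi_R((12, 5, 4)) == 0   # but not by 5
```

Viewed this way, an n-ary relation and its n-ary predicate carry the same information: the relation is the set of tuples on which the predicate returns 1.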
All three of the domains of a homogeneous ternary relation are the same set. Consider the ternary relation R "x thinks that y likes z" over the set of people P = { Alice, Bob, Charles, Denise }. Each row of its tabular representation is a triple of R; that is, it makes a statement of the form "x thinks that y likes z". For instance, the first row states that "Alice thinks that Bob likes Denise". All rows are distinct. The ordering of rows is insignificant but the ordering of columns is significant.[1]

Such a table is also a simple example of a relational database, a field with theory rooted in relational algebra and applications in data management.[6] Computer scientists, logicians, and mathematicians, however, tend to have different conceptions of what a general relation is and what it consists of. For example, databases are designed to deal with empirical data, which is by definition finite, whereas in mathematics, relations with infinite arity (i.e., infinitary relations) are also considered.

The logician Augustus De Morgan, in work published around 1860, was the first to articulate the notion of relation in anything like its present sense. He also stated the first formal results in the theory of relations (on De Morgan and relations, see Merrill 1990). Charles Peirce, Gottlob Frege, Georg Cantor, Richard Dedekind and others advanced the theory of relations. Many of their ideas, especially on relations called orders, were summarized in The Principles of Mathematics (1903), where Bertrand Russell made free use of these results.

In 1970, Edgar Codd proposed a relational model for databases, thus anticipating the development of database management systems.[1]
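The "x thinks that y likes z" relation can be sketched as a set of triples. Only the row quoted in the text ("Alice thinks that Bob likes Denise") is taken from the source; the other triples are hypothetical placeholders, since the table itself is not reproduced here.

```python
P = {"Alice", "Bob", "Charles", "Denise"}

# Each triple plays the roles (x, y, z): "x thinks that y likes z".
R = {
    ("Alice", "Bob", "Denise"),     # the row quoted in the text
    ("Charles", "Alice", "Bob"),    # assumed rows, for illustration only
    ("Denise", "Denise", "Denise"),
}

# Column order carries meaning; row order and duplicates do not (it is a set).
assert ("Alice", "Bob", "Denise") in R
assert ("Denise", "Bob", "Alice") not in R   # permuting a triple changes its claim
```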
https://en.wikipedia.org/wiki/Theory_of_relations
In mathematics, a ternary relation or triadic relation is a finitary relation in which the number of places in the relation is three. Ternary relations may also be referred to as 3-adic, 3-ary, 3-dimensional, or 3-place.

Just as a binary relation is formally defined as a set of pairs, i.e. a subset of the Cartesian product A × B of some sets A and B, so a ternary relation is a set of triples, forming a subset of the Cartesian product A × B × C of three sets A, B and C.

An example of a ternary relation in elementary geometry can be given on triples of points, where a triple is in the relation if the three points are collinear. Another geometric example can be obtained by considering triples consisting of two points and a line, where a triple is in the ternary relation if the two points determine (are incident with) the line.

A function f: A × B → C in two variables, mapping two values from sets A and B, respectively, to a value in C, associates to every pair (a, b) in A × B an element f(a, b) in C. Therefore, its graph consists of pairs of the form ((a, b), f(a, b)). Such pairs, in which the first element is itself a pair, are often identified with triples. This makes the graph of f a ternary relation between A, B and C, consisting of all triples (a, b, f(a, b)), satisfying a in A, b in B, and f(a, b) in C.

Given any set A whose elements are arranged on a circle, one can define a ternary relation R on A, i.e. a subset of A³ = A × A × A, by stipulating that R(a, b, c) holds if and only if the elements a, b and c are pairwise different and, when going from a to c in a clockwise direction, one passes through b. For example, if A = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 } represents the hours on a clock face, then R(8, 12, 4) holds and R(12, 8, 4) does not hold.

The ordinary congruence of arithmetic, which holds for three integers a, b, and m if and only if m divides a − b, formally may be considered as a ternary relation. However, usually this is instead considered as a family of binary relations between the a and the b, indexed by the modulus m.
For each fixed m, indeed this binary relation has some natural properties, like being an equivalence relation; the combined ternary relation, in contrast, is generally not studied as a single relation.

A typing relation Γ ⊢ e : σ indicates that e is a term of type σ in context Γ, and is thus a ternary relation between contexts, terms and types.

Given homogeneous relations A, B, and C on a set, a ternary relation (A, B, C) can be defined using composition of relations AB and inclusion AB ⊆ C. Within the calculus of relations each relation A has a converse relation Aᵀ and a complement relation Ā. Using these involutions, Augustus De Morgan and Ernst Schröder showed that (A, B, C) is equivalent to (C̄, Bᵀ, Ā) and also equivalent to (Aᵀ, C̄, B̄). The mutual equivalences of these forms, constructed from the ternary relation (A, B, C), are called the Schröder rules.[1]
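The clock-face relation R from the circular-ordering example above can be implemented directly; the modular-arithmetic helper is our own way of expressing "passes through b going clockwise".

```python
def clockwise_distance(a, b):
    """Number of clockwise steps from hour a to hour b on a 12-hour face (0..11)."""
    return (b - a) % 12

def R(a, b, c):
    """R(a, b, c): a, b, c pairwise different, and going clockwise from a to c
    one passes through b, i.e. b lies strictly before c on the clockwise path."""
    if len({a, b, c}) < 3:
        return False
    return clockwise_distance(a, b) < clockwise_distance(a, c)

assert R(8, 12, 4)        # clockwise from 8 to 4 one passes 12
assert not R(12, 8, 4)    # clockwise from 12 to 4 one does not pass 8
```

The asymmetry between R(8, 12, 4) and R(12, 8, 4) makes concrete why the order of places matters in a ternary relation.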
https://en.wikipedia.org/wiki/Triadic_relation
In axiomatic set theory, the gimel function is the following function mapping cardinal numbers to cardinal numbers:

ℷ(κ) = κ^cf(κ)

where cf denotes the cofinality function; the gimel function is used for studying the continuum function and the cardinal exponentiation function. The symbol ℷ is a serif form of the Hebrew letter gimel.

The gimel function has the property ℷ(κ) > κ for all infinite cardinals κ, by König's theorem.

For regular cardinals κ, ℷ(κ) = 2^κ, and by Easton's theorem little more can be proved in ZFC about the values of this function on the regular cardinals. For singular κ, upper bounds for ℷ(κ) can be found from Shelah's PCF theory.

The gimel hypothesis states that ℷ(κ) = max(2^cf(κ), κ+). In essence, this means that ℷ(κ) for singular κ is the smallest value allowed by the axioms of Zermelo–Fraenkel set theory (assuming consistency). Under this hypothesis cardinal exponentiation is simplified, though not to the extent of the continuum hypothesis (which implies the gimel hypothesis).

Bukovský (1965) showed that all cardinal exponentiation is determined (recursively) by the gimel function as follows. The remaining rules hold whenever κ and λ are both infinite:
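The recursion referred to here is not reproduced in the text. As a sketch, the standard textbook formulation (stated here as an assumption about the usual presentation, not a quotation from this article) determines κ^λ for infinite cardinals κ and λ as follows:

```latex
% Sketch of the standard recursion reducing cardinal exponentiation to the
% gimel function (assumed textbook formulation, not quoted from the article).
\kappa^{\lambda} =
\begin{cases}
2^{\lambda} & \text{if } \kappa \le \lambda,\\
\mu^{\lambda} & \text{if some } \mu < \kappa \text{ has } \mu^{\lambda} \ge \kappa,\\
\kappa & \text{if } \mu^{\lambda} < \kappa \text{ for all } \mu < \kappa
          \text{ and } \lambda < \operatorname{cf}(\kappa),\\
\gimel(\kappa) & \text{if } \mu^{\lambda} < \kappa \text{ for all } \mu < \kappa
          \text{ and } \operatorname{cf}(\kappa) \le \lambda.
\end{cases}
```

In each case the right-hand side is either a smaller instance of exponentiation or a value of the gimel function, which is what "determined recursively by the gimel function" means.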
https://en.wikipedia.org/wiki/Gimel_function
Inset theory, aregular cardinalis acardinal numberthat is equal to its owncofinality. More explicitly, this means thatκ{\displaystyle \kappa }is a regular cardinal if and only if everyunboundedsubsetC⊆κ{\displaystyle C\subseteq \kappa }has cardinalityκ{\displaystyle \kappa }. Infinitewell-orderedcardinals that are not regular are calledsingular cardinals. Finite cardinal numbers are typically not called regular or singular. In the presence of theaxiom of choice, any cardinal number can be well-ordered, and then the following are equivalent for a cardinalκ{\displaystyle \kappa }: Crudely speaking, this means that a regular cardinal is one that cannot be broken down into a small number of smaller parts. The situation is slightly more complicated in contexts where theaxiom of choicemight fail, as in that case not all cardinals are necessarily the cardinalities of well-ordered sets. In that case, the above equivalence holds for well-orderable cardinals only. An infiniteordinalα{\displaystyle \alpha }is aregular ordinalif it is alimit ordinalthat is not the limit of a set of smaller ordinals that as a set hasorder typeless thanα{\displaystyle \alpha }. A regular ordinal is always aninitial ordinal, though some initial ordinals are not regular, e.g.,ωω{\displaystyle \omega _{\omega }}(see the example below). The ordinals less thanω{\displaystyle \omega }are finite. A finite sequence of finite ordinals always has a finite maximum, soω{\displaystyle \omega }cannot be the limit of any sequence of type less thanω{\displaystyle \omega }whose elements are ordinals less thanω{\displaystyle \omega }, and is therefore a regular ordinal.ℵ0{\displaystyle \aleph _{0}}(aleph-null) is a regular cardinal because its initial ordinal,ω{\displaystyle \omega }, is regular. It can also be seen directly to be regular, as the cardinal sum of a finite number of finite cardinal numbers is itself finite. ω+1{\displaystyle \omega +1}is thenext ordinal numbergreater thanω{\displaystyle \omega }. 
It is singular, since it is not a limit ordinal.ω+ω{\displaystyle \omega +\omega }is the next limit ordinal afterω{\displaystyle \omega }. It can be written as the limit of the sequenceω{\displaystyle \omega },ω+1{\displaystyle \omega +1},ω+2{\displaystyle \omega +2},ω+3{\displaystyle \omega +3}, and so on. This sequence has order typeω{\displaystyle \omega }, soω+ω{\displaystyle \omega +\omega }is the limit of a sequence of type less thanω+ω{\displaystyle \omega +\omega }whose elements are ordinals less thanω+ω{\displaystyle \omega +\omega }; therefore it is singular. ℵ1{\displaystyle \aleph _{1}}is thenext cardinal numbergreater thanℵ0{\displaystyle \aleph _{0}}, so the cardinals less thanℵ1{\displaystyle \aleph _{1}}arecountable(finite or denumerable). Assuming the axiom of choice, the union of a countable set of countable sets is itself countable. Soℵ1{\displaystyle \aleph _{1}}cannot be written as the sum of a countable set of countable cardinal numbers, and is regular. ℵω{\displaystyle \aleph _{\omega }}is the next cardinal number after the sequenceℵ0{\displaystyle \aleph _{0}},ℵ1{\displaystyle \aleph _{1}},ℵ2{\displaystyle \aleph _{2}},ℵ3{\displaystyle \aleph _{3}}, and so on. Its initial ordinalωω{\displaystyle \omega _{\omega }}is the limit of the sequenceω{\displaystyle \omega },ω1{\displaystyle \omega _{1}},ω2{\displaystyle \omega _{2}},ω3{\displaystyle \omega _{3}}, and so on, which has order typeω{\displaystyle \omega }, soωω{\displaystyle \omega _{\omega }}is singular, and so isℵω{\displaystyle \aleph _{\omega }}. Assuming the axiom of choice,ℵω{\displaystyle \aleph _{\omega }}is the first infinite cardinal that is singular (the first infiniteordinalthat is singular isω+1{\displaystyle \omega +1}, and the first infinitelimit ordinalthat is singular isω+ω{\displaystyle \omega +\omega }). 
Proving the existence of singular cardinals requires the axiom of replacement, and in fact the inability to prove the existence of ℵ_ω in Zermelo set theory is what led Fraenkel to postulate this axiom.[1]

Uncountable (weak) limit cardinals that are also regular are known as (weakly) inaccessible cardinals. They cannot be proved to exist within ZFC, though their existence is not known to be inconsistent with ZFC. Their existence is sometimes taken as an additional axiom. Inaccessible cardinals are necessarily fixed points of the aleph function, though not all fixed points are regular. For instance, the first fixed point is the limit of the ω-sequence ℵ_0, ℵ_ω, ℵ_{ω_ω}, ... and is therefore singular.

If the axiom of choice holds, then every successor cardinal is regular. Thus the regularity or singularity of most aleph numbers can be checked depending on whether the cardinal is a successor cardinal or a limit cardinal. Some cardinalities cannot be proven to be equal to any particular aleph, for instance the cardinality of the continuum, whose value in ZFC may be any uncountable cardinal of uncountable cofinality (see Easton's theorem). The continuum hypothesis postulates that the cardinality of the continuum is equal to ℵ_1, which is regular assuming choice.

Without the axiom of choice, there would be cardinal numbers that were not well-orderable.[citation needed] Moreover, the cardinal sum of an arbitrary collection could not be defined.[citation needed] Therefore, only the aleph numbers could meaningfully be called regular or singular cardinals.[citation needed] Furthermore, a successor aleph need not be regular. For instance, the union of a countable set of countable sets would not necessarily be countable.
It is consistent with ZF that ω_1 be the limit of a countable sequence of countable ordinals, and that the set of real numbers be a countable union of countable sets.[citation needed] Furthermore, it is consistent with ZF without AC that every aleph bigger than ℵ_0 is singular (a result proved by Moti Gitik).

If κ is a limit ordinal, κ is regular iff the set of α < κ that are critical points of Σ_1-elementary embeddings j with j(α) = κ is club in κ.[2]

For cardinals κ < θ, call an elementary embedding j: M → H(θ) a small embedding if M is transitive and j(crit(j)) = κ. A cardinal κ is uncountable and regular iff there is an α > κ such that for every θ > α, there is a small embedding j: M → H(θ).[3] (Corollary 2.2)
https://en.wikipedia.org/wiki/Regular_cardinal
Infinityis something which is boundless, endless, or larger than anynatural number. It is denoted by∞{\displaystyle \infty }, calledthe infinity symbol. From the time of theancient Greeks, thephilosophical nature of infinityhas been the subject of many discussions among philosophers. In the 17th century, with the introduction of the infinity symbol[1]and theinfinitesimal calculus, mathematicians began to work withinfinite seriesand what some mathematicians (includingl'HôpitalandBernoulli)[2]regarded as infinitely small quantities, but infinity continued to be associated with endless processes. As mathematicians struggled with the foundation ofcalculus, it remained unclear whether infinity could be considered as a number ormagnitudeand, if so, how this could be done.[1]At the end of the 19th century,Georg Cantorenlarged the mathematical study of infinity by studyinginfinite setsandinfinite numbers, showing that they can be of various sizes.[1][3]For example, if a line is viewed as the set of all of its points, their infinite number (i.e., thecardinalityof the line) is larger than the number ofintegers.[4]In this usage, infinity is a mathematical concept, and infinitemathematical objectscan be studied, manipulated, and used just like any other mathematical object. The mathematical concept of infinity refines and extends the old philosophical concept, in particular by introducing infinitely many different sizes of infinite sets. Among the axioms ofZermelo–Fraenkel set theory, on which most of modern mathematics can be developed, is theaxiom of infinity, which guarantees the existence of infinite sets.[1]The mathematical concept of infinity and the manipulation of infinite sets are widely used in mathematics, even in areas such ascombinatoricsthat may seem to have nothing to do with them. 
For example,Wiles's proof of Fermat's Last Theoremimplicitly relies on the existence ofGrothendieck universes, very large infinite sets,[5]for solving a long-standing problem that is stated in terms ofelementary arithmetic. Inphysicsandcosmology, it is an open questionwhether the universe is spatially infinite or not. Ancient cultures had various ideas about the nature of infinity. Theancient Indiansand theGreeksdid not define infinity in precise formalism as does modern mathematics, and instead approached infinity as a philosophical concept. The earliest recorded idea of infinity in Greece may be that ofAnaximander(c. 610 – c. 546 BC) apre-SocraticGreek philosopher. He used the wordapeiron, which means "unbounded", "indefinite", and perhaps can be translated as "infinite".[1][6] Aristotle(350 BC) distinguishedpotential infinityfromactual infinity, which he regarded as impossible due to the variousparadoxesit seemed to produce.[7]It has been argued that, in line with this view, theHellenisticGreeks had a "horror of the infinite"[8][9]which would, for example, explain whyEuclid(c. 
300 BC) did not say that there are an infinity of primes but rather "Prime numbers are more than any assigned multitude of prime numbers."[10] It has also been maintained that, in proving the infinitude of the prime numbers, Euclid "was the first to overcome the horror of the infinite".[11] There is a similar controversy concerning Euclid's parallel postulate, sometimes translated:

If a straight line falling across two [other] straight lines makes internal angles on the same side [of itself whose sum is] less than two right angles, then the two [other] straight lines, being produced to infinity, meet on that side [of the original straight line] that the [sum of the internal angles] is less than two right angles.[12]

Other translators, however, prefer the translation "the two straight lines, if produced indefinitely ...",[13] thus avoiding the implication that Euclid was comfortable with the notion of infinity. Finally, it has been maintained that a reflection on infinity, far from eliciting a "horror of the infinite", underlay all of early Greek philosophy and that Aristotle's "potential infinity" is an aberration from the general trend of this period.[14]

Zeno of Elea (c. 495 – c. 430 BC) did not advance any views concerning the infinite. Nevertheless, his paradoxes,[15] especially "Achilles and the Tortoise", were important contributions in that they made clear the inadequacy of popular conceptions. The paradoxes were described by Bertrand Russell as "immeasurably subtle and profound".[16]

Achilles races a tortoise, giving the latter a head start. By the time Achilles reaches the tortoise's starting point, the tortoise has moved ahead; by the time he covers that new distance, the tortoise has advanced again; and so on. Apparently, Achilles never overtakes the tortoise, since however many steps he completes, the tortoise remains ahead of him.

Zeno was not attempting to make a point about infinity. As a member of the Eleatics school, which regarded motion as an illusion, he saw it as a mistake to suppose that Achilles could run at all.
Subsequent thinkers, finding this solution unacceptable, struggled for over two millennia to find other weaknesses in the argument. Finally, in 1821, Augustin-Louis Cauchy provided both a satisfactory definition of a limit and a proof that, for 0 < x < 1,[17] a+ax+ax2+ax3+ax4+ax5+⋯=a1−x.{\displaystyle a+ax+ax^{2}+ax^{3}+ax^{4}+ax^{5}+\cdots ={\frac {a}{1-x}}.} Suppose that Achilles is running at 10 meters per second, the tortoise is walking at 0.1 meters per second, and the latter has a 100-meter head start. The duration of the chase fits Cauchy's pattern with a = 10 seconds and x = 0.01. Achilles does overtake the tortoise; it takes him 10/(1 − 0.01) ≈ 10.1 seconds. The Jain mathematical text Surya Prajnapti (c. 4th–3rd century BCE) classifies all numbers into three sets: enumerable, innumerable, and infinite. Each of these was further subdivided into three orders:[18] In the 17th century, European mathematicians started using infinite numbers and infinite expressions in a systematic fashion. In 1655, John Wallis first used the notation ∞{\displaystyle \infty } for such a number in his De sectionibus conicis,[19] and exploited it in area calculations by dividing the region into infinitesimal strips of width on the order of 1∞.{\displaystyle {\tfrac {1}{\infty }}.}[20] But in Arithmetica infinitorum (1656),[21] he indicates infinite series, infinite products and infinite continued fractions by writing down a few terms or factors and then appending "&c.", as in "1, 6, 12, 18, 24, &c."[22] In 1699, Isaac Newton wrote about equations with an infinite number of terms in his work De analysi per aequationes numero terminorum infinitas.[23] Hermann Weyl opened a mathematico-philosophic address given in 1930 with:[24] Mathematics is the science of the infinite. The infinity symbol ∞{\displaystyle \infty } (sometimes called the lemniscate) is a mathematical symbol representing the concept of infinity.
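The chase arithmetic can be checked numerically: the partial sums of Cauchy's geometric series approach a/(1 − x). A minimal sketch (variable names are illustrative):

```python
# Partial sums of a + a*x + a*x^2 + ... converge to a / (1 - x) for 0 < x < 1.
a, x = 10.0, 0.01                     # the Achilles chase: a = 10 seconds, x = 0.01
closed_form = a / (1 - x)             # 10 / 0.99 = 10.1010... seconds
partial = sum(a * x**k for k in range(50))
assert abs(partial - closed_form) < 1e-9
print(round(closed_form, 3))          # 10.101
```

Fifty terms already agree with the closed form to well under a nanosecond, which is the sense in which the "infinitely many steps" sum to a finite duration.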
The symbol is encoded inUnicodeatU+221E∞INFINITY(&infin;)[25]and inLaTeXas\infty.[26] It was introduced in 1655 byJohn Wallis,[27][28]and since its introduction, it has also been used outside mathematics in modern mysticism[29]and literarysymbology.[30] Gottfried Leibniz, one of the co-inventors ofinfinitesimal calculus, speculated widely about infinite numbers and their use in mathematics. To Leibniz, both infinitesimals and infinite quantities were ideal entities, not of the same nature as appreciable quantities, but enjoying the same properties in accordance with theLaw of continuity.[31][2] Inreal analysis, the symbol∞{\displaystyle \infty }, called "infinity", is used to denote an unboundedlimit.[32]The notationx→∞{\displaystyle x\rightarrow \infty }means thatx{\displaystyle x}increases without bound, andx→−∞{\displaystyle x\to -\infty }means thatx{\displaystyle x}decreases without bound. For example, iff(t)≥0{\displaystyle f(t)\geq 0}for everyt{\displaystyle t}, then[33] Infinity can also be used to describeinfinite series, as follows: In addition to defining a limit, infinity can be also used as a value in the extended real number system. Points labeled+∞{\displaystyle +\infty }and−∞{\displaystyle -\infty }can be added to thetopological spaceof the real numbers, producing the two-pointcompactificationof the real numbers. Adding algebraic properties to this gives us theextended real numbers.[35]We can also treat+∞{\displaystyle +\infty }and−∞{\displaystyle -\infty }as the same, leading to theone-point compactificationof the real numbers, which is thereal projective line.[36]Projective geometryalso refers to aline at infinityin plane geometry, aplane at infinityin three-dimensional space, and ahyperplane at infinityfor generaldimensions, each consisting ofpoints at infinity.[37] Incomplex analysisthe symbol∞{\displaystyle \infty }, called "infinity", denotes an unsigned infinitelimit. 
The expressionx→∞{\displaystyle x\rightarrow \infty }means that the magnitude|x|{\displaystyle |x|}ofx{\displaystyle x}grows beyond any assigned value. Apoint labeled∞{\displaystyle \infty }can be added to the complex plane as atopological spacegiving theone-point compactificationof the complex plane. When this is done, the resulting space is a one-dimensionalcomplex manifold, orRiemann surface, called the extended complex plane or theRiemann sphere.[38]Arithmetic operations similar to those given above for the extended real numbers can also be defined, though there is no distinction in the signs (which leads to the one exception that infinity cannot be added to itself). On the other hand, this kind of infinity enablesdivision by zero, namelyz/0=∞{\displaystyle z/0=\infty }for any nonzerocomplex numberz{\displaystyle z}. In this context, it is often useful to considermeromorphic functionsas maps into the Riemann sphere taking the value of∞{\displaystyle \infty }at the poles. The domain of a complex-valued function may be extended to include the point at infinity as well. One important example of such functions is the group ofMöbius transformations(seeMöbius transformation § Overview). The original formulation ofinfinitesimal calculusbyIsaac Newtonand Gottfried Leibniz used infinitesimal quantities. In the second half of the 20th century, it was shown that this treatment could be put on a rigorous footing through variouslogical systems, includingsmooth infinitesimal analysisandnonstandard analysis. In the latter, infinitesimals are invertible, and their inverses are infinite numbers. The infinities in this sense are part of ahyperreal field; there is no equivalence between them as with the Cantoriantransfinites. For example, if H is an infinite number in this sense, then H + H = 2H and H + 1 are distinct infinite numbers. This approach tonon-standard calculusis fully developed inKeisler (1986). 
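The extended arithmetic described above (a single unsigned ∞, with poles mapping to ∞) can be made concrete for Möbius transformations. The following is a minimal sketch, with `INF` as my own marker for the point at infinity:

```python
INF = "inf"  # illustrative marker for the single point at infinity

def mobius(z, a, b, c, d):
    """Evaluate (a*z + b) / (c*z + d) on the Riemann sphere.

    Requires a*d - b*c != 0 so that the map is invertible."""
    assert a * d - b * c != 0
    if z == INF:                      # infinity maps to a/c (or stays at infinity)
        return a / c if c != 0 else INF
    denom = c * z + d
    if denom == 0:                    # the pole maps to infinity (z/0 = infinity)
        return INF
    return (a * z + b) / denom

# The inversion z -> 1/z swaps 0 and infinity:
print(mobius(0, 0, 1, 1, 0))    # inf
print(mobius(INF, 0, 1, 1, 0))  # 0.0
```

Representing ∞ as an explicit extra value, rather than relying on floating-point infinities, keeps the "no sign distinction" behavior of the one-point compactification visible in the code.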
A different form of "infinity" is theordinalandcardinalinfinities of set theory—a system oftransfinite numbersfirst developed byGeorg Cantor. In this system, the first transfinite cardinal isaleph-null(ℵ0), the cardinality of the set ofnatural numbers. This modern mathematical conception of the quantitative infinite developed in the late 19th century from works by Cantor,Gottlob Frege,Richard Dedekindand others—using the idea of collections or sets.[1] Dedekind's approach was essentially to adopt the idea ofone-to-one correspondenceas a standard for comparing the size of sets, and to reject the view of Galileo (derived fromEuclid) that the whole cannot be the same size as the part. (However, seeGalileo's paradoxwhere Galileo concludes that positive integers cannot be compared to the subset of positivesquare integerssince both are infinite sets.) An infinite set can simply be defined as one having the same size as at least one of itsproperparts; this notion of infinity is calledDedekind infinite. The diagram to the right gives an example: viewing lines as infinite sets of points, the left half of the lower blue line can be mapped in a one-to-one manner (green correspondences) to the higher blue line, and, in turn, to the whole lower blue line (red correspondences); therefore the whole lower blue line and its left half have the same cardinality, i.e. "size".[39] Cantor defined two kinds of infinite numbers:ordinal numbersandcardinal numbers. Ordinal numbers characterizewell-orderedsets, or counting carried on to any stopping point, including points after an infinite number have already been counted. Generalizing finite and (ordinary) infinitesequenceswhich are maps from the positiveintegersleads tomappingsfrom ordinal numbers to transfinite sequences. Cardinal numbers define the size of sets, meaning how many members they contain, and can be standardized by choosing the first ordinal number of a certain size to represent the cardinal number of that size. 
The smallest ordinal infinity is that of the positive integers, and any set which has the cardinality of the integers iscountably infinite. If a set is too large to be put in one-to-one correspondence with the positive integers, it is calleduncountable. Cantor's views prevailed and modern mathematics accepts actual infinity as part of a consistent and coherent theory.[40]Certain extended number systems, such as thehyperreal numbers, incorporate the ordinary (finite) numbers and infinite numbers of different sizes.[41] One of Cantor's most important results was that the cardinality of the continuumc{\displaystyle \mathbf {c} }is greater than that of the natural numbersℵ0{\displaystyle {\aleph _{0}}}; that is, there are more real numbersRthan natural numbersN. Namely, Cantor showed thatc=2ℵ0>ℵ0{\displaystyle \mathbf {c} =2^{\aleph _{0}}>{\aleph _{0}}}.[42] Thecontinuum hypothesisstates that there is nocardinal numberbetween the cardinality of the reals and the cardinality of the natural numbers, that is,c=ℵ1=ℶ1{\displaystyle \mathbf {c} =\aleph _{1}=\beth _{1}}. This hypothesis cannot be proved or disproved within the widely acceptedZermelo–Fraenkel set theory, even assuming theAxiom of Choice.[43] Cardinal arithmeticcan be used to show not only that the number of points in areal number lineis equal to the number of points in anysegment of that line, but also that this is equal to the number of points on a plane and, indeed, in anyfinite-dimensionalspace.[44] The first of these results is apparent by considering, for instance, thetangentfunction, which provides aone-to-one correspondencebetween theinterval(−⁠π/2⁠,⁠π/2⁠) andR. The second result was proved by Cantor in 1878, but only became intuitively apparent in 1890, whenGiuseppe Peanointroduced thespace-filling curves, curved lines that twist and turn enough to fill the whole of any square, orcube, orhypercube, or finite-dimensional space. 
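The tangent correspondence between (−π/2, π/2) and R mentioned above can be spot-checked numerically (a sampling sketch, not a proof):

```python
import math

# tan maps (-pi/2, pi/2) one-to-one onto the real line; atan is its inverse,
# so the round trip atan -> tan recovers any sampled real number.
for r in [-1e6, -1.5, 0.0, 2.718, 1e6]:
    t = math.atan(r)                          # t lies in (-pi/2, pi/2)
    assert -math.pi / 2 < t < math.pi / 2
    assert math.isclose(math.tan(t), r, rel_tol=1e-6, abs_tol=1e-12)
print("round trip verified on samples")
```

The loose tolerance near ±1e6 reflects floating-point sensitivity of tan near ±π/2, not any failure of the underlying bijection.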
These curves can be used to define a one-to-one correspondence between the points on one side of a square and the points in the square.[45] Until the end of the 19th century, infinity was rarely discussed ingeometry, except in the context of processes that could be continued without any limit. For example, alinewas what is now called aline segment, with the proviso that one can extend it as far as one wants; but extending itinfinitelywas out of the question. Similarly, a line was usually not considered to be composed of infinitely many points but was a location where a point may be placed. Even if there are infinitely many possible positions, only a finite number of points could be placed on a line. A witness of this is the expression "thelocusofa pointthat satisfies some property" (singular), where modern mathematicians would generally say "the set ofthe pointsthat have the property" (plural). One of the rare exceptions of a mathematical concept involvingactual infinitywasprojective geometry, wherepoints at infinityare added to theEuclidean spacefor modeling theperspectiveeffect that showsparallel linesintersecting "at infinity". Mathematically, points at infinity have the advantage of allowing one to not consider some special cases. For example, in aprojective plane, two distinctlinesintersect in exactly one point, whereas without points at infinity, there are no intersection points for parallel lines. So, parallel and non-parallel lines must be studied separately in classical geometry, while they need not be distinguished in projective geometry. Before the use ofset theoryfor thefoundation of mathematics, points and lines were viewed as distinct entities, and a point could belocated on a line. With the universal use of set theory in mathematics, the point of view has dramatically changed: a line is now considered asthe set of its points, and one says that a pointbelongs to a lineinstead ofis located on a line(however, the latter phrase is still used). 
In particular, in modern mathematics, lines areinfinite sets. Thevector spacesthat occur in classicalgeometryhave always a finitedimension, generally two or three. However, this is not implied by the abstract definition of a vector space, and vector spaces of infinite dimension can be considered. This is typically the case infunctional analysiswherefunction spacesare generally vector spaces of infinite dimension. In topology, some constructions can generatetopological spacesof infinite dimension. In particular, this is the case ofiterated loop spaces. The structure of afractalobject is reiterated in its magnifications. Fractals can be magnified indefinitely without losing their structure and becoming "smooth"; they have infinite perimeters and can have infinite or finite areas. One suchfractal curvewith an infinite perimeter and finite area is theKoch snowflake.[46] Leopold Kroneckerwas skeptical of the notion of infinity and how his fellow mathematicians were using it in the 1870s and 1880s. This skepticism was developed in thephilosophy of mathematicscalledfinitism, an extreme form of mathematical philosophy in the general philosophical and mathematical schools ofconstructivismandintuitionism.[47] Inphysics, approximations ofreal numbersare used forcontinuousmeasurements andnatural numbersare used fordiscretemeasurements (i.e., counting). Concepts of infinite things such as an infiniteplane waveexist, but there are no experimental means to generate them.[48] The first published proposal that the universe is infinite came from Thomas Digges in 1576.[49]Eight years later, in 1584, the Italian philosopher and astronomerGiordano Brunoproposed an unbounded universe inOn the Infinite Universe and Worlds: "Innumerable suns exist; innumerable earths revolve around these suns in a manner similar to the way the seven planets revolve around our sun. 
Living beings inhabit these worlds."[50] Cosmologists have long sought to discover whether infinity exists in our physical universe: Are there an infinite number of stars? Does the universe have infinite volume? Does space "go on forever"? This is still an open question of cosmology. The question of being infinite is logically separate from the question of having boundaries. The two-dimensional surface of the Earth, for example, is finite, yet has no edge. By travelling in a straight line with respect to the Earth's curvature, one will eventually return to the exact spot one started from. The universe, at least in principle, might have a similar topology. If so, one might eventually return to one's starting point after travelling in a straight line through the universe for long enough.[51] The curvature of the universe can be measured through multipole moments in the spectrum of the cosmic background radiation. To date, analysis of the radiation patterns recorded by the WMAP spacecraft hints that the universe is spatially flat. This would be consistent with an infinite physical universe.[52][53][54] However, the universe could be finite, even if its curvature is flat. An easy way to understand this is to consider two-dimensional examples, such as video games where items that leave one edge of the screen reappear on the other. The topology of such games is toroidal and the geometry is flat.
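The wrap-around behavior of such games amounts to modular arithmetic on coordinates; a minimal sketch (the screen size and function name are my own):

```python
# A finite, edgeless, flat 2-D world: coordinates wrap modulo the screen size,
# giving a toroidal topology with locally flat geometry.
WIDTH, HEIGHT = 80, 25

def move(x, y, dx, dy):
    """Step a point; leaving one edge re-enters at the opposite edge."""
    return (x + dx) % WIDTH, (y + dy) % HEIGHT

print(move(79, 0, 1, 0))   # (0, 0): off the right edge, back on the left
print(move(0, 0, -1, -1))  # (79, 24): off a corner, wraps to the opposite one
```

Nothing in the local rules of motion reveals the wrap; only a long enough straight-line journey does, which mirrors the cosmological point in the text.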
Many possible bounded, flat possibilities also exist for three-dimensional space.[55] The concept of infinity also extends to themultiversehypothesis, which, when explained by astrophysicists such asMichio Kaku, posits that there are an infinite number and variety of universes.[56]Also,cyclic modelsposit an infinite amount ofBig Bangs, resulting in an infinite variety of universes after each Big Bang event in an infinite cycle.[57] Inlogic, aninfinite regressargument is "a distinctively philosophical kind of argument purporting to show that a thesis is defective because it generates an infinite series when either (form A) no such series exists or (form B) were it to exist, the thesis would lack the role (e.g., of justification) that it is supposed to play."[58] TheIEEE floating-pointstandard (IEEE 754) specifies a positive and a negative infinity value (and alsoindefinitevalues). These are defined as the result ofarithmetic overflow,division by zero, and other exceptional operations.[59] Someprogramming languages, such asJava[60]andJ,[61]allow the programmer an explicit access to the positive and negative infinity values as language constants. These can be used asgreatest and least elements, as they compare (respectively) greater than or less than all other values. They have uses assentinel valuesinalgorithmsinvolvingsorting,searching, orwindowing.[citation needed] In languages that do not have greatest and least elements but do allowoverloadingofrelational operators, it is possible for a programmer tocreatethe greatest and least elements. In languages that do not provide explicit access to such values from the initial state of the program but do implement the floating-pointdata type, the infinity values may still be accessible and usable as the result of certain operations.[citation needed] In programming, aninfinite loopis aloopwhose exit condition is never satisfied, thus executing indefinitely. 
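In Python, for instance, the IEEE 754 infinities are available as `math.inf` and behave as the greatest and least elements described above, which is what makes them useful sentinels; a small sketch:

```python
import math

# math.inf compares greater than every finite float, so it is a natural
# initial "greater than everything" value in a minimum search.
def smallest(values):
    best = math.inf
    for v in values:
        if v < best:
            best = v
    return best

print(smallest([3.5, -2.0, 7.1]))            # -2.0
print(math.inf > 1e308, -math.inf < -1e308)  # True True
# Note: Python raises ZeroDivisionError for 1.0 / 0.0 instead of returning inf,
# even though the float type itself follows IEEE 754.
```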
Perspectiveartwork uses the concept ofvanishing points, roughly corresponding to mathematicalpoints at infinity, located at an infinite distance from the observer. This allows artists to create paintings that realistically render space, distances, and forms.[62]ArtistM.C. Escheris specifically known for employing the concept of infinity in his work in this and other ways.[63] Variations ofchessplayed on an unbounded board are calledinfinite chess.[64][65] Cognitive scientistGeorge Lakoffconsiders the concept of infinity in mathematics and the sciences as a metaphor. This perspective is based on the basic metaphor of infinity (BMI), defined as the ever-increasing sequence <1, 2, 3, …>.[66]
https://en.wikipedia.org/wiki/Infinity
Inmathematics,transfinite numbersorinfinite numbersare numbers that are "infinite" in the sense that they are larger than allfinitenumbers. These include thetransfinite cardinals, which arecardinal numbersused to quantify the size of infinite sets, and thetransfinite ordinals, which areordinal numbersused to provide an ordering of infinite sets.[1][2]The termtransfinitewas coined in 1895 byGeorg Cantor,[3][4][5][6]who wished to avoid some of the implications of the wordinfinitein connection with these objects, which were, nevertheless, notfinite.[citation needed]Few contemporary writers share these qualms; it is now accepted usage to refer to transfinite cardinals and ordinals asinfinite numbers. Nevertheless, the termtransfinitealso remains in use. Notable work on transfinite numbers was done byWacław Sierpiński:Leçons sur les nombres transfinis(1928 book) much expanded intoCardinal and Ordinal Numbers(1958,[7]2nd ed. 1965[8]). Any finitenatural numbercan be used in at least two ways: as an ordinal and as a cardinal. Cardinal numbers specify the size of sets (e.g., a bag offivemarbles), whereas ordinal numbers specify the order of a member within an ordered set[9](e.g., "thethirdman from the left" or "thetwenty-seventhday of January"). When extended to transfinite numbers, these two concepts are no longer inone-to-one correspondence. A transfinite cardinal number is used to describe the size of an infinitely large set,[2]while a transfinite ordinal is used to describe the location within an infinitely large set that is ordered.[9][failed verification]The most notable ordinal and cardinal numbers are, respectively: Thecontinuum hypothesisis the proposition that there are no intermediate cardinal numbers betweenℵ0{\displaystyle \aleph _{0}}and thecardinality of the continuum(the cardinality of the set ofreal numbers):[2]or equivalently thatℵ1{\displaystyle \aleph _{1}}is the cardinality of the set of real numbers. 
InZermelo–Fraenkel set theory, neither the continuum hypothesis nor its negation can be proved. Some authors, including P. Suppes and J. Rubin, use the termtransfinite cardinalto refer to the cardinality of aDedekind-infinite setin contexts where this may not be equivalent to "infinite cardinal"; that is, in contexts where theaxiom of countable choiceis not assumed or is not known to hold. Given this definition, the following are all equivalent: Although transfinite ordinals and cardinals both generalize only the natural numbers, other systems of numbers, including thehyperreal numbersandsurreal numbers, provide generalizations of thereal numbers.[10] In Cantor's theory of ordinal numbers, every integer number must have a successor.[11]The next integer after all the regular ones, that is the first infinite integer, is namedω{\displaystyle \omega }. In this context,ω+1{\displaystyle \omega +1}is larger thanω{\displaystyle \omega }, andω⋅2{\displaystyle \omega \cdot 2},ω2{\displaystyle \omega ^{2}}andωω{\displaystyle \omega ^{\omega }}are larger still. Arithmetic expressions containingω{\displaystyle \omega }specify an ordinal number, and can be thought of as the set of all integers up to that number. A given number generally has multiple expressions that represent it, however, there is a uniqueCantor normal formthat represents it,[11]essentially a finite sequence of digits that give coefficients of descending powers ofω{\displaystyle \omega }. 
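For ordinals below ω^ω, the Cantor normal form is just a polynomial in ω, and ordinal addition can be sketched directly (the representation and function name are my own; note that addition is not commutative):

```python
# Ordinals below omega^omega in Cantor normal form: a list of
# (exponent, coefficient) pairs with exponents strictly decreasing,
# e.g. omega^2 * 3 + omega + 5  ->  [(2, 3), (1, 1), (0, 5)].

def add(alpha, beta):
    """Ordinal addition: terms of alpha below beta's leading power are absorbed."""
    if not beta:
        return alpha
    lead, coeff = beta[0]
    kept = [(e, c) for e, c in alpha if e > lead]              # survive unchanged
    merged = next((c for e, c in alpha if e == lead), 0)       # equal powers merge
    return kept + [(lead, merged + coeff)] + beta[1:]

omega, one = [(1, 1)], [(0, 1)]
print(add(one, omega))  # [(1, 1)]          : 1 + omega = omega
print(add(omega, one))  # [(1, 1), (0, 1)]  : omega + 1, strictly larger
```

The asymmetry of the two printed results is exactly the non-commutativity of transfinite ordinal addition; a full implementation would need exponents that are themselves ordinals, which this sketch deliberately avoids.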
Not all infinite integers can be represented by a Cantor normal form however, and the first one that cannot is given by the limitωωω...{\displaystyle \omega ^{\omega ^{\omega ^{...}}}}and is termedε0{\displaystyle \varepsilon _{0}}.[11]ε0{\displaystyle \varepsilon _{0}}is the smallest solution toωε=ε{\displaystyle \omega ^{\varepsilon }=\varepsilon }, and the following solutionsε1,...,εω,...,εε0,...{\displaystyle \varepsilon _{1},...,\varepsilon _{\omega },...,\varepsilon _{\varepsilon _{0}},...}give larger ordinals still, and can be followed until one reaches the limitεεε...{\displaystyle \varepsilon _{\varepsilon _{\varepsilon _{...}}}}, which is the first solution toεα=α{\displaystyle \varepsilon _{\alpha }=\alpha }. This means that in order to be able to specify all transfinite integers, one must think up an infinite sequence of names: because if one were to specify a single largest integer, one would then always be able to mention its larger successor. But as noted by Cantor,[citation needed]even this only allows one to reach the lowest class of transfinite numbers: those whose size of sets correspond to the cardinal numberℵ0{\displaystyle \aleph _{0}}.
https://en.wikipedia.org/wiki/Transfinite_number
A calculation is a deliberate mathematical process that transforms one or more inputs into one or more outputs, known also as a result or results. The term is used in a variety of senses, from the very definite arithmetical calculation of using an algorithm, to the vague heuristics of calculating a strategy in a competition, or calculating the chance of a successful relationship between two people. For example, multiplying 7 by 6 is a simple algorithmic calculation. Extracting the square root or the cube root of a number using mathematical models is a more complex algorithmic calculation. Statistical estimations of the likely election results from opinion polls also involve algorithmic calculations, but produce ranges of possibilities rather than exact answers. To calculate means to determine mathematically in the case of a number or amount, or in the case of an abstract problem to deduce the answer using logic, reason or common sense.[1] The English word derives from the Latin calculus, which originally meant a pebble (from Latin calx), for instance the small stones used as counters on an abacus (Latin: abacus, Greek: ἄβαξ, romanized: abax). The abacus was an instrument used by Greeks and Romans for arithmetic calculations, preceding the slide rule and the electronic calculator, and consisted of perforated pebbles sliding on iron bars.
https://en.wikipedia.org/wiki/Calculation
In contract bridge, card reading (or counting the hand) is the process of inferring which remaining cards are held by each opponent. The reading is based on information gained in the bidding and the play to previous tricks.[1] The technique is used by the declarer and defenders primarily to determine the probable suit distribution and honor card holdings of each unseen hand; determination of the location of specific spot-cards may be critical as well. Card reading is based on the fact that there are thirteen cards in each of four suits and thirteen cards in each of four hands. As declarer, an efficient way of counting the trump cards is: instead of counting the number of trump rounds and cards trumped in, count the number of trumps in the opponents' hands. Once the dummy hand appears, calculate the number of trumps which the opponents hold, then reduce this number mentally as they are played from the opponents' hands. This means keeping track of one small number, and your own trumps do not enter the calculation. An even better way of counting trumps is to become familiar with common distribution patterns. For example, 5-3 and 4-4 are among the most common trump distributions between the declarer's and dummy's hands. In such cases, if an opponent shows out on the second trump round, that opponent began with a singleton trump, so the full pattern 5-3-4-1 or 4-4-4-1 comes up automatically, and the other defender is known to have begun with four.
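The counting method described above (track one small number: the opponents' remaining trumps) can be sketched as follows; the deal and the sequence of play here are hypothetical:

```python
# Count trumps by starting from the opponents' total and decrementing as
# their trumps appear, rather than tracking rounds and ruffs separately.
declarer, dummy = 5, 3                 # a hypothetical 5-3 trump fit
opponents = 13 - declarer - dummy      # 5 trumps outstanding

for played in [2, 2, 1]:               # opposing trumps seen on three rounds
    opponents -= played
print(opponents)                       # 0: all opposing trumps accounted for
```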
https://en.wikipedia.org/wiki/Card_reading_(bridge)
Inmathematics, acardinal number, orcardinalfor short, is what is commonly called the number of elements of aset. In the case of afinite set, its cardinal number, orcardinalityis therefore anatural number. For dealing with the case ofinfinite sets, theinfinite cardinal numbershave been introduced, which are often denoted with theHebrew letterℵ{\displaystyle \aleph }(aleph) marked with subscript indicating their rank among the infinite cardinals. Cardinality is defined in terms ofbijective functions. Two sets have the same cardinalityif, and only if, there is aone-to-one correspondence(bijection) between the elements of the two sets. In the case of finite sets, this agrees with the intuitive notion of number of elements. In the case of infinite sets, the behavior is more complex. A fundamental theorem due toGeorg Cantorshows that it is possible for infinite sets to have different cardinalities, and in particular the cardinality of the set ofreal numbersis greater than the cardinality of the set of natural numbers. It is also possible for aproper subsetof an infinite set to have the same cardinality as the original set—something that cannot happen with proper subsets of finite sets. There is atransfinite sequenceof cardinal numbers: This sequence starts with thenatural numbersincluding zero (finite cardinals), which are followed by thealeph numbers. The aleph numbers are indexed byordinal numbers. If theaxiom of choiceis true, this transfinite sequence includes every cardinal number. If the axiom of choice is not true (seeAxiom of choice § Independence), there are infinite cardinals that are not aleph numbers. Cardinalityis studied for its own sake as part ofset theory. It is also a tool used in branches of mathematics includingmodel theory,combinatorics,abstract algebraandmathematical analysis. Incategory theory, the cardinal numbers form askeletonof thecategory of sets. 
The notion of cardinality, as now understood, was formulated byGeorg Cantor, the originator ofset theory, in 1874–1884. Cardinality can be used to compare an aspect of finite sets. For example, the sets {1,2,3} and {4,5,6} are notequal, but have thesame cardinality, namely three. This is established by the existence of abijection(i.e., a one-to-one correspondence) between the two sets, such as the correspondence {1→4, 2→5, 3→6}. Cantor applied his concept of bijection to infinite sets[1](for example the set of natural numbersN= {0, 1, 2, 3, ...}). Thus, he called all sets having a bijection withNdenumerable (countably infinite) sets, which all share the same cardinal number. This cardinal number is calledℵ0{\displaystyle \aleph _{0}},aleph-null. He called the cardinal numbers of infinite setstransfinite cardinal numbers. Cantor proved that anyunbounded subsetofNhas the same cardinality asN, even though this might appear to run contrary to intuition. He also proved that the set of allordered pairsof natural numbers is denumerable; this implies that the set of allrational numbersis also denumerable, since every rational can be represented by a pair of integers. He later proved that the set of all realalgebraic numbersis also denumerable. Each real algebraic numberzmay be encoded as a finite sequence of integers, which are the coefficients in the polynomial equation of which it is a solution, i.e. the ordered n-tuple (a0,a1, ...,an),ai∈Ztogether with a pair of rationals (b0,b1) such thatzis the unique root of the polynomial with coefficients (a0,a1, ...,an) that lies in the interval (b0,b1). In his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers", Cantor proved that there exist higher-order cardinal numbers, by showing that the set of real numbers has cardinality greater than that ofN. His proof used an argument withnested intervals, but in an 1891 paper, he proved the same result using his ingenious and much simplerdiagonal argument. 
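The denumerability of the set of ordered pairs of naturals can be exhibited with an explicit bijection; a standard choice (not necessarily Cantor's original encoding) is the Cantor pairing function, which enumerates pairs along the diagonals x + y = w:

```python
# A bijection between N x N and N: walk the diagonals x + y = 0, 1, 2, ...
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # recover the diagonal index w = x + y, then the offset y along it
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)   # float sqrt: adequate for small z
    y = z - w * (w + 1) // 2
    return w - y, y

print(pair(0, 0), pair(1, 0), pair(0, 1))    # 0 1 2
assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
```

Since every pair gets a distinct natural number and every natural number decodes to a pair, the pairs — and hence, via numerator/denominator pairs, the rationals — share the cardinal number ℵ0.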
The new cardinal number of the set of real numbers is called the cardinality of the continuum and Cantor used the symbol c{\displaystyle {\mathfrak {c}}} for it. Cantor also developed a large portion of the general theory of cardinal numbers; he proved that there is a smallest transfinite cardinal number (ℵ0{\displaystyle \aleph _{0}}, aleph-null), and that for every cardinal number there is a next-larger cardinal. His continuum hypothesis is the proposition that the cardinality c{\displaystyle {\mathfrak {c}}} of the set of real numbers is the same as ℵ1{\displaystyle \aleph _{1}}. This hypothesis is independent of the standard axioms of mathematical set theory, that is, it can neither be proved nor disproved from them. This was shown in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940. In informal use, a cardinal number is what is normally referred to as a counting number, provided that 0 is included: 0, 1, 2, .... They may be identified with the natural numbers beginning with 0. The counting numbers are exactly what can be defined formally as the finite cardinal numbers. Infinite cardinals only occur in higher-level mathematics and logic. More formally, a non-zero number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. For finite sets and sequences it is easy to see that these two notions coincide, since for every number describing a position in a sequence we can construct a set that has exactly the right size. For example, 3 describes the position of 'c' in the sequence <'a','b','c','d',...>, and we can construct the set {a,b,c}, which has 3 elements. However, when dealing with infinite sets, it is essential to distinguish between the two, since the two notions are in fact different for infinite sets. Considering the position aspect leads to ordinal numbers, while the size aspect is generalized by the cardinal numbers described here.
The intuition behind the formal definition of cardinal is the construction of a notion of the relative size or "bigness" of a set, without reference to the kind of members which it has. For finite sets this is easy; one simply counts the number of elements a set has. In order to compare the sizes of larger sets, it is necessary to appeal to more refined notions. A set Y is at least as big as a set X if there is an injective mapping from the elements of X to the elements of Y. An injective mapping identifies each element of the set X with a unique element of the set Y. This is most easily understood by an example; suppose we have the sets X = {1,2,3} and Y = {a,b,c,d}; then using this notion of size, we would observe that there is a mapping 1 → a, 2 → b, 3 → c, which is injective, and hence conclude that Y has cardinality greater than or equal to that of X. The element d has no element mapping to it, but this is permitted as we only require an injective mapping, and not necessarily a bijective mapping. The advantage of this notion is that it can be extended to infinite sets. We can then extend this to an equality-style relation. Two sets X and Y are said to have the same cardinality if there exists a bijection between X and Y. By the Schroeder–Bernstein theorem, this is equivalent to there being both an injective mapping from X to Y, and an injective mapping from Y to X. We then write |X| = |Y|. The cardinal number of X itself is often defined as the least ordinal a with |a| = |X|.[2] This is called the von Neumann cardinal assignment; for this definition to make sense, it must be proved that every set has the same cardinality as some ordinal; this statement is the well-ordering principle. It is however possible to discuss the relative cardinality of sets without explicitly assigning names to objects. The classic example used is that of the infinite hotel paradox, also called Hilbert's paradox of the Grand Hotel. Supposing there is an innkeeper at a hotel with an infinite number of rooms. The hotel is full, and then a new guest arrives.
It is possible to fit the extra guest in by asking the guest who was in room 1 to move to room 2, the guest in room 2 to move to room 3, and so on, leaving room 1 vacant. We can explicitly write a segment of this mapping: 1 → 2, 2 → 3, 3 → 4, ..., n → n + 1, .... With this assignment, we can see that the set {1,2,3,...} has the same cardinality as the set {2,3,4,...}, since a bijection between the first and the second has been shown. This motivates the definition of an infinite set being any set that has a proper subset of the same cardinality (i.e., a Dedekind-infinite set); in this case {2,3,4,...} is a proper subset of {1,2,3,...}. When considering these large objects, one might also want to see if the notion of counting order coincides with that of cardinal defined above for these infinite sets. It happens that it does not; by considering the above example we can see that if some object "one greater than infinity" exists, then it must have the same cardinality as the infinite set we started out with. It is possible to use a different formal notion for number, called ordinals, based on the ideas of counting and considering each number in turn, and we discover that the notions of cardinality and ordinality are divergent once we move out of the finite numbers. It can be proved that the cardinality of the real numbers is greater than that of the natural numbers just described. This can be visualized using Cantor's diagonal argument; classic questions of cardinality (for instance the continuum hypothesis) are concerned with discovering whether there is some cardinal between some pair of other infinite cardinals. In more recent times, mathematicians have been describing the properties of larger and larger cardinals. Since cardinality is such a common concept in mathematics, a variety of names are in use. Sameness of cardinality is sometimes referred to as equipotence, equipollence, or equinumerosity. It is thus said that two sets with the same cardinality are, respectively, equipotent, equipollent, or equinumerous. 
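The hotel's shift map n ↦ n + 1 can be inspected on any finite prefix of the rooms; a minimal sketch (purely illustrative, since no finite check can replace the infinite argument):

```python
def shift(n):
    """Hilbert's hotel move: the guest in room n goes to room n + 1."""
    return n + 1

# On a finite prefix of the rooms the shift is injective, and its
# image misses exactly room 1, which is freed up for the new guest.
rooms = list(range(1, 1001))
images = {shift(n) for n in rooms}
print(len(images) == len(rooms))  # injective on this prefix: True
print(1 in images)                # room 1 is left vacant: False
```

On the full infinite set of rooms the same map is a bijection from {1,2,3,...} onto {2,3,4,...}, which is the proper subset of equal cardinality described above.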
Formally, assuming theaxiom of choice, the cardinality of a setXis the leastordinal numberα such that there is a bijection betweenXand α. This definition is known as thevon Neumann cardinal assignment. If the axiom of choice is not assumed, then a different approach is needed. The oldest definition of the cardinality of a setX(implicit in Cantor and explicit in Frege andPrincipia Mathematica) is as the class [X] of all sets that are equinumerous withX. This does not work inZFCor other related systems ofaxiomatic set theorybecause ifXis non-empty, this collection is too large to be a set. In fact, forX≠ ∅ there is an injection from the universe into [X] by mapping a setmto {m} ×X, and so by theaxiom of limitation of size, [X] is a proper class. The definition does work however intype theoryand inNew Foundationsand related systems. However, if we restrict from this class to those equinumerous withXthat have the leastrank, then it will work (this is a trick due toDana Scott:[3]it works because the collection of objects with any given rank is a set). Von Neumann cardinal assignment implies that the cardinal number of a finite set is the common ordinal number of all possible well-orderings of that set, and cardinal and ordinal arithmetic (addition, multiplication, power, proper subtraction) then give the same answers for finite numbers. However, they differ for infinite numbers. For example,2ω=ω<ω2{\displaystyle 2^{\omega }=\omega <\omega ^{2}}in ordinal arithmetic while2ℵ0>ℵ0=ℵ02{\displaystyle 2^{\aleph _{0}}>\aleph _{0}=\aleph _{0}^{2}}in cardinal arithmetic, although the von Neumann assignment putsℵ0=ω{\displaystyle \aleph _{0}=\omega }. On the other hand, Scott's trick implies that the cardinal number 0 is{∅}{\displaystyle \{\emptyset \}}, which is also the ordinal number 1, and this may be confusing. 
A possible compromise (to take advantage of the alignment in finite arithmetic while avoiding reliance on the axiom of choice and confusion in infinite arithmetic) is to apply von Neumann assignment to the cardinal numbers of finite sets (those which can be well ordered and are not equipotent to proper subsets) and to use Scott's trick for the cardinal numbers of other sets. Formally, the order among cardinal numbers is defined as follows: |X| ≤ |Y| means that there exists aninjectivefunction fromXtoY. TheCantor–Bernstein–Schroeder theoremstates that if |X| ≤ |Y| and |Y| ≤ |X| then |X| = |Y|. The axiom of choice is equivalent to the statement that given two setsXandY, either |X| ≤ |Y| or |Y| ≤ |X|.[4][5] A setXisDedekind-infiniteif there exists aproper subsetYofXwith |X| = |Y|, andDedekind-finiteif such a subset does not exist. Thefinitecardinals are just thenatural numbers, in the sense that a setXis finite if and only if |X| = |n| =nfor some natural numbern. Any other set isinfinite. Assuming the axiom of choice, it can be proved that the Dedekind notions correspond to the standard ones. It can also be proved that the cardinalℵ0{\displaystyle \aleph _{0}}(aleph nullor aleph-0, where aleph is the first letter in theHebrew alphabet, representedℵ{\displaystyle \aleph }) of the set of natural numbers is the smallest infinite cardinal (i.e., any infinite set has a subset of cardinalityℵ0{\displaystyle \aleph _{0}}). The next larger cardinal is denoted byℵ1{\displaystyle \aleph _{1}}, and so on. For every ordinal α, there is a cardinal numberℵα,{\displaystyle \aleph _{\alpha },}and this list exhausts all infinite cardinal numbers. We can definearithmeticoperations on cardinal numbers that generalize the ordinary operations for natural numbers. It can be shown that for finite cardinals, these operations coincide with the usual operations for natural numbers. Furthermore, these operations share many properties with ordinary arithmetic. 
If the axiom of choice holds, then every cardinal κ has a successor, denoted κ+, where κ+ > κ and there are no cardinals between κ and its successor. (Without the axiom of choice, using Hartogs' theorem, it can be shown that for any cardinal number κ, there is a minimal cardinal κ+ such that κ+≰κ.{\displaystyle \kappa ^{+}\nleq \kappa .}) For finite cardinals, the successor is simply κ + 1. For infinite cardinals, the successor cardinal differs from the successor ordinal. If X and Y are disjoint, addition is given by the union of X and Y: |X| + |Y| = |X ∪ Y|. If the two sets are not already disjoint, then they can be replaced by disjoint sets of the same cardinality (e.g., replace X by X×{0} and Y by Y×{1}). Zero is an additive identity: κ + 0 = 0 + κ = κ. Addition is associative: (κ + μ) + ν = κ + (μ + ν). Addition is commutative: κ + μ = μ + κ. Addition is non-decreasing in both arguments: κ ≤ μ → (κ + ν ≤ μ + ν and ν + κ ≤ ν + μ). Assuming the axiom of choice, addition of infinite cardinal numbers is easy. If either κ or μ is infinite, then κ+μ=max(κ,μ).{\displaystyle \kappa +\mu =\max(\kappa ,\mu ).} Assuming the axiom of choice, given an infinite cardinal σ and a cardinal μ, there exists a cardinal κ such that μ + κ = σ if and only if μ ≤ σ. It will be unique (and equal to σ) if and only if μ < σ. The product of cardinals comes from the Cartesian product: |X| · |Y| = |X × Y|. Zero is a multiplicative absorbing element: κ·0 = 0·κ = 0. There are no nontrivial zero divisors: κ·μ = 0 → (κ = 0 or μ = 0). One is a multiplicative identity: κ·1 = 1·κ = κ. Multiplication is associative: (κ·μ)·ν = κ·(μ·ν). Multiplication is commutative: κ·μ = μ·κ. Multiplication is non-decreasing in both arguments: κ ≤ μ → (κ·ν ≤ μ·ν and ν·κ ≤ ν·μ). Multiplication distributes over addition: κ·(μ + ν) = κ·μ + κ·ν and (μ + ν)·κ = μ·κ + ν·κ. Assuming the axiom of choice, multiplication of infinite cardinal numbers is also easy. If either κ or μ is infinite and both are non-zero, then κ⋅μ=max(κ,μ).{\displaystyle \kappa \cdot \mu =\max(\kappa ,\mu ).} Thus the product of two infinite cardinal numbers is equal to their sum. Assuming the axiom of choice, given an infinite cardinal π and a non-zero cardinal μ, there exists a cardinal κ such that μ·κ = π if and only if μ ≤ π. 
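For finite sets, the definitions of cardinal addition and multiplication via disjoint union and Cartesian product can be verified directly; a small Python sketch (the tagging trick with 0 and 1 mirrors the disjointification described above):

```python
from itertools import product

# |X| + |Y| is realized by a disjoint union (tagging makes the copies
# disjoint even when X and Y overlap); |X| * |Y| by the Cartesian product.
X, Y = {1, 2, 3}, {2, 3}

disjoint_union = {(x, 0) for x in X} | {(y, 1) for y in Y}
cartesian = set(product(X, Y))

print(len(disjoint_union))  # 5 == |X| + |Y|, despite X and Y overlapping
print(len(cartesian))       # 6 == |X| * |Y|
```

The tagging step matters: the plain union X ∪ Y here has only 3 elements, which is why addition is defined on disjoint copies.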
It will be unique (and equal to π) if and only if μ < π. Exponentiation is given by |X||Y|=|XY|,{\displaystyle |X|^{|Y|}=|X^{Y}|,} where XY is the set of all functions from Y to X.[6] It is easy to check that the right-hand side depends only on |X|{\displaystyle {|X|}} and |Y|{\displaystyle {|Y|}}. Exponentiation is non-decreasing in both arguments: if 1 ≤ ν and κ ≤ μ, then νκ≤νμ{\displaystyle \nu ^{\kappa }\leq \nu ^{\mu }} and κν≤μν{\displaystyle \kappa ^{\nu }\leq \mu ^{\nu }}. 2|X| is the cardinality of the power set of the set X, and Cantor's diagonal argument shows that 2|X| > |X| for any set X. This proves that no largest cardinal exists (because for any cardinal κ, we can always find a larger cardinal 2κ). In fact, the class of cardinals is a proper class. (This proof fails in some set theories, notably New Foundations.) All the remaining propositions in this section assume the axiom of choice: If 2 ≤ κ and 1 ≤ μ and at least one of them is infinite, then κμ=max(κ,2μ).{\displaystyle \kappa ^{\mu }=\max(\kappa ,2^{\mu }).} Using König's theorem, one can prove κ < κcf(κ) and κ < cf(2κ) for any infinite cardinal κ, where cf(κ) is the cofinality of κ. Assuming the axiom of choice, given an infinite cardinal κ and a finite cardinal μ greater than 0, the cardinal ν satisfying νμ=κ{\displaystyle \nu ^{\mu }=\kappa } will be κ{\displaystyle \kappa }. Assuming the axiom of choice, given an infinite cardinal κ and a finite cardinal μ greater than 1, there may or may not be a cardinal λ satisfying μλ=κ{\displaystyle \mu ^{\lambda }=\kappa }. However, if such a cardinal exists, it is infinite and less than κ, and any finite cardinality ν greater than 1 will also satisfy νλ=κ{\displaystyle \nu ^{\lambda }=\kappa }. The logarithm of an infinite cardinal number κ is defined as the least cardinal number μ such that κ ≤ 2μ. 
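Both facts about 2|X| can be exercised on a small finite set: the power set has exactly 2**|X| elements, and Cantor's diagonal set witnesses that no map from X to its power set is surjective. A Python sketch (the particular map f is an arbitrary illustrative choice):

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s; there are exactly 2**len(s) of them."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

X = {0, 1, 2}
P = power_set(X)
print(len(P))  # 8 == 2 ** |X| > |X|

# Cantor's diagonal set: for any f: X -> P(X), the set
# D = {x in X : x not in f(x)} is missed by f, so f is never surjective.
f = {0: {1}, 1: set(), 2: {0, 1, 2}}   # an arbitrary attempt at a surjection
D = {x for x in X if x not in f[x]}
print(D in f.values())  # False: D is not in the image of f
```

The same diagonal construction, applied to an infinite X, is exactly the argument that 2|X| > |X| for every set.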
Logarithms of infinite cardinals are useful in some fields of mathematics, for example in the study of cardinal invariants of topological spaces, though they lack some of the properties that logarithms of positive real numbers possess.[7][8][9] The continuum hypothesis (CH) states that there are no cardinals strictly between ℵ0{\displaystyle \aleph _{0}} and 2ℵ0.{\displaystyle 2^{\aleph _{0}}.} The latter cardinal number is also often denoted by c{\displaystyle {\mathfrak {c}}}; it is the cardinality of the continuum (the set of real numbers). In this case 2ℵ0=ℵ1.{\displaystyle 2^{\aleph _{0}}=\aleph _{1}.} Similarly, the generalized continuum hypothesis (GCH) states that for every infinite cardinal κ{\displaystyle \kappa }, there are no cardinals strictly between κ{\displaystyle \kappa } and 2κ{\displaystyle 2^{\kappa }}. Both the continuum hypothesis and the generalized continuum hypothesis have been proved to be independent of the usual axioms of set theory, the Zermelo–Fraenkel axioms together with the axiom of choice (ZFC). Indeed, Easton's theorem shows that, for regular cardinals κ{\displaystyle \kappa }, the only restrictions ZFC places on the cardinality of 2κ{\displaystyle 2^{\kappa }} are that κ<cf⁡(2κ){\displaystyle \kappa <\operatorname {cf} (2^{\kappa })}, and that the exponential function is non-decreasing.
https://en.wikipedia.org/wiki/Cardinal_number
In statistics, count data is a statistical data type describing countable quantities, data which can take only the counting numbers, non-negative integer values {0, 1, 2, 3, ...}, and where these integers arise from counting rather than ranking. The statistical treatment of count data is distinct from that of binary data, in which the observations can take only two values, usually represented by 0 and 1, and from ordinal data, which may also consist of integers but where the individual values fall on an arbitrary scale and only the relative ranking is important. An individual piece of count data is often termed a count variable. When such a variable is treated as a random variable, the Poisson, binomial and negative binomial distributions are commonly used to represent its distribution. Graphical examination of count data may be aided by the use of data transformations chosen to have the property of stabilising the sample variance. In particular, the square root transformation might be used when data can be approximated by a Poisson distribution (although other transformations have modestly improved properties), while an inverse sine transformation is available when a binomial distribution is preferred. Here the count variable would be treated as a dependent variable. Statistical methods such as least squares and analysis of variance are designed to deal with continuous dependent variables. These can be adapted to deal with count data by using data transformations such as the square root transformation, but such methods have several drawbacks; they are approximate at best and estimate parameters that are often hard to interpret. The Poisson distribution can form the basis for some analyses of count data and in this case Poisson regression may be used. 
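The variance-stabilising effect of the square root transformation can be seen in simulation: the raw variance of Poisson counts grows with the mean, while the variance of the square-rooted counts stays near 1/4 regardless of the mean. A standard-library sketch (the simple Poisson sampler is Knuth's algorithm, adequate for moderate means):

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Knuth's simple Poisson sampler (fine for moderate lambda)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(0)
for lam in (4, 16, 64):
    xs = [poisson(lam, rng) for _ in range(20000)]
    raw_var = statistics.variance(xs)                          # grows with lambda
    sqrt_var = statistics.variance(math.sqrt(x) for x in xs)   # stays near 0.25
    print(lam, round(raw_var, 2), round(sqrt_var, 3))
```

This is why the square-rooted counts are better suited to methods, such as least squares, that assume constant variance, though, as noted above, such transformations remain approximate.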
This is a special case of the class ofgeneralized linear modelswhich also contains specific forms of model capable of using thebinomial distribution(binomial regression,logistic regression) or thenegative binomial distributionwhere the assumptions of the Poisson model are violated, in particular when the range of count values is limited or whenoverdispersionis present.
https://en.wikipedia.org/wiki/Count_data
In music, counting is a system of regularly occurring sounds that serve to assist with the performance or audition of music by allowing the easy identification of the beat. Commonly, this involves verbally counting the beats in each measure as they occur, whether there be 2 beats, 3 beats, 4 beats, or even 5 beats. In addition to helping to normalize the time taken up by each beat, counting allows easier identification of the beats that are stressed. Counting is most commonly used with rhythm (often to decipher a difficult rhythm) and form, and often involves subdivision. The method involving numbers may be termed count chant, "to identify it as a unique instructional process."[1] In lieu of simply counting the beats of a measure, other systems can be used which may be more appropriate to the particular piece of music. Depending on the tempo, the divisions of a beat may be vocalized as well (for slower times), or numbers may be skipped altogether (for faster times). As an alternative to counting, a metronome can be used to accomplish the same function. Triple meter, such as 3/4, is often counted 1 2 3, while compound meter, such as 6/8, is often counted in two and subdivided "One-and-ah-Two-and-ah"[2] but may be articulated as "One-la-lee-Two-la-lee".[2] For each subdivision employed a new syllable is used. For example, sixteenth notes in 4/4 are counted 1 e & a 2 e & a 3 e & a 4 e & a, using numbers for the quarter note, "&" for the eighth note, and "e" and "a" for the sixteenth note level. Triplets may be counted "1 tri ple 2 tri ple 3 tri ple 4 tri ple" and sixteenth note triplets "1 la li + la li 2 la li + la li".[3] Quarter note triplets, due to their different rhythmic feel, may be articulated differently as "1 dra git 3 dra git".[3] Rather than numbers or nonsense syllables, a random word may be assigned to a rhythm to clearly count each beat. 
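The number-based scheme just described (numbers on the beat, "&" on eighth subdivisions, "e" and "a" on sixteenths) is regular enough to generate mechanically. A small illustrative sketch, not part of any published counting method:

```python
def count_chant(beats, level):
    """Spoken count for one measure with evenly subdivided beats:
    level 1 = quarters only, 2 = eighths ('&'), 4 = sixteenths ('e','&','a')."""
    subs = {1: [''], 2: ['', '&'], 4: ['', 'e', '&', 'a']}[level]
    syllables = []
    for beat in range(1, beats + 1):
        for s in subs:
            syllables.append(str(beat) if s == '' else s)
    return ' '.join(syllables)

print(count_chant(4, 4))  # 1 e & a 2 e & a 3 e & a 4 e & a
print(count_chant(2, 2))  # 1 & 2 &
```

Triplet and compound-meter syllables would need their own tables, since, as the article notes, musicians do not agree on a single scheme for them.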
An example is with a triplet, so that a triplet subdivision is often counted "tri-pl-et".[4] The Kodály Method uses "Ta" for quarter notes and "Ti-Ti" for eighth notes. For sextuplets, simply say triplet twice, while quintuplets may be articulated as "un-i-vers-i-ty", or other five-syllable words such as "hip-po-pot-a-mus".[4] In some approaches, "rote-before-note",[5] the fractional definitions of notes are not taught to children until after they are able to perform syllable or phrase-based versions of these rhythms.[6] "However the counting may be syllabized, the important skill is to keep the pulse steady and the division exact."[2] There are various ways to count rhythm, from simple numbers to counting syllables to beat placement syllables. Here are a few examples. Ultimately, musicians count using numbers, "ands" and vowel sounds. Downbeats within a measure are called 1, 2, 3… Upbeats are represented with a plus sign and are called "and" (i.e. 1 + 2 +), and further subdivisions receive the sounds "ee" and "uh" (i.e. 1 e + a 2 e + a). Musicians do not agree on what to call triplets: some simply say the word triplet ("trip-a-let"), or another three-syllable word (like pineapple or elephant) with an antepenultimate accent. Some use numbers along with the word triplet (i.e. "1-trip-let"). Still others have devised sounds like "ah-lee" or "la-li" added after the number (i.e. 1-la-li, 2-la-li or 1-tee-duh, 2-tee-duh). Example: The folk song lyric "This Old Man, he played one, he played knick-knack on my thumb, with a knick-knack paddy whack, give my dog a bone, this old man came rolling home" in 2/4 time would be said, "one and two one and two one and two and one and two and uh one and two ee and one ee and uh two one and two and one and two." 
1 e and uh 2 e and uh 3 e and uh 4 e and uh Counts the beat number on the tactus, & on the half beat, and n-e-&-a for four sixteenth notes, n-&-a for a triplet or three eighth notes in compound meter, where n is the beat number.[7] The beat numbers are used for the tactus, te for the half beat, and n-ti-te-ta for four sixteenths. Triplets or three eighth notes in compound meter are n-la-li and six sixteenth notes in compound meter is n-ta-la-ta-li-ta.[7] Counting system using n-ne, n-ta-ne-ta, n-na-ni, and n-ta-na-ta-ni-ta. All three systems have internal consistency for all divisions of the beat except the tactus, which changes according to the beat number.[7] Syllables systems are categorized as "Beat Function Systems" - when the tactus (pulse) has certain syllable A, and the half-beat is always certain syllable B, regardless of how the rest of the measure is filled out.[8] The "Galin-Paris-Chevé system" or French "Time-Names system", originally used French words. Toward the middle of the 19th century the American musicianLowell Mason(affectionately named the "Father of Music Education") adapted the French Time-Names system for use in the United States, and instead of using the French names of the notes, he replaced these with a system that identified the value of each note within a meter and the measure.[9] Usual duple meter Usual triple meter Unusual meters pair the duple and triple meter syllables, and employ the "b" consonant. The beat is always called ta. Insimple meters, the division and subdivision are always ta-di and ta-ka-di-mi. Any note value can be the beat, depending on thetime signature. In compound meters (wherein the beat is generally notated withdotted notes), the division and subdivision are always ta-ki-da and ta-va-ki-di-da-ma. Thenote valuedoes not receive a particular name; the note’s position within the beat gets the name. 
This system allows children to internalize a steady beat and to naturally discover the subdivisions of beat, similar to the down-ee-up-ee system. Example The folk song lyric "This Old Man, he played one, he played knick-knack on my thumb, with a knick-knack paddy whack, give my dog a bone, this old man came rolling home" would be said, "tadi ta tadi ta tadi tadi tadi tadimi tadi takadi takadimi ta tadi tadi tadi ta." Eighth Rest + Eighth Note = X-Di Eighth Note + Two Sixteenth Notes = Taaa-Di-Mi Two Sixteenth Notes + Eighth Note = Ta-Ka-Diii Three Eighth Notes Beamed Together = Ta-Ki-Da Eighth Note + Eighth Rest + Eighth Note = Ta-X-Da Six Sixteenth Notes = Ta-Va-Ki-Di-Da-Ma Eighth Note + Four Sixteenth Notes = Ta-aa-Ki-Di-Da-Ma Four Sixteenth Notes + Eighth Note = Ta-Va-Ki-Di-Da-aa Two Sixteenth Notes + Eighth Note + Two Sixteenth Notes = Ta-Va-Ki-ii-Da-Ma This is a beat-function system used by some Kodály teachers that was developed by Laurdella Foulkes-Levy, and was designed to be easier to say than Gordon's system or the Takadimi system while still honoring the beat-function. The beat is said as "Ta" in both duple and triple meters, but the beat divisions are performed differently between the two meters. The "t" consonant always falls on the main beat and beat division, and the "k" consonant is always when the beat divides again. Alternating "t" and "k" in quick succession is easy to say, as they fall on two different parts of the tongue, making it very easy to say these syllables at a fast tempo (much like tonguing on recorder or flute). It is also a logical system since it always alternates between the same two consonants. 
Duple meter Triple meter This system allows the value of each note to be clearly represented no matter its placement within the beat/measure. Example: The folk song lyric "This Old Man, he played one, he played knick-knack on my thumb, with a knick-knack paddy whack, give my dog a bone, this old man came rolling home" would be said, "titi ta titi ta titi titi titi ti-tiri titi tiriti tiritiri ta titi titi titi ta". In the down-up system, beats are counted "down", up-beats "up", and further subdivisions "ee". Example: The folk song lyric "This Old Man, he played one, he played knick-knack on my thumb, with a knick-knack paddy whack, give my dog a bone, this old man came rolling home" would be said, "down up down down up down down up down up down up down up-ee down up down-ee-up down-ee-up-ee down down up down up down up down." Orff rhythm syllables don't have a specified system. Often, they'll encourage teachers to use whatever they prefer, and many choose to use the Kodaly syllable system.[10] Outside of this, Orff teachers will often use a language-based model in which the rhythms are replaced with a word which matches the number of sounds in the rhythm. For example, two paired eighth notes may become "Jackie" or "Apple." Often, a teacher will stick with a theme and encourage students to create their own words within said theme.[11]
https://en.wikipedia.org/wiki/Counting_(music)
In computational complexity theory and computability theory, a counting problem is a type of computational problem. If R is a search problem then cR(x) = |{y : R(x, y)}| is the corresponding counting function and #R = {(x, k) : cR(x) ≥ k} denotes the corresponding decision problem. Note that cR is a search problem while #R is a decision problem; however, cR can be C-Cook-reduced to #R (for appropriate C) using a binary search (the reason #R is defined the way it is, rather than being the graph of cR, is to make this binary search possible). Just as NP has NP-complete problems via many-one reductions, #P has #P-complete problems via parsimonious reductions, problem transformations that preserve the number of solutions.[1]
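The binary-search reduction from cR to #R can be sketched concretely. In the Python sketch below, the oracle decide(x, k) plays the role of #R (answering "is cR(x) ≥ k?"), and the toy relation R, whose witnesses for x are the divisors of x, is purely illustrative:

```python
def count_via_decision(decide, x, upper):
    """Recover c_R(x) from the decision oracle for #R by binary search.
    decide(x, k) answers 'is c_R(x) >= k?'; upper is any bound with
    c_R(x) < upper. Uses O(log upper) oracle queries."""
    lo, hi = 0, upper  # invariant: decide(x, lo) holds, decide(x, hi) fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if decide(x, mid):
            lo = mid
        else:
            hi = mid
    return lo  # the largest k with c_R(x) >= k, i.e. c_R(x) itself

# Toy R: witnesses for x are its divisors, so c_R(x) = number of divisors.
def decide(x, k):
    return sum(1 for d in range(1, x + 1) if x % d == 0) >= k

print(count_via_decision(decide, 12, 13))  # 6 divisors: 1, 2, 3, 4, 6, 12
```

This logarithmic number of oracle calls is exactly what the threshold definition of #R makes possible; with only the graph of cR available, recovering the count would instead require probing each candidate value.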
https://en.wikipedia.org/wiki/Counting_problem_(complexity)