In mathematics, a sheaf (pl.: sheaves) is a tool for systematically tracking data (such as sets, abelian groups, rings) attached to the open sets of a topological space and defined locally with regard to them. For example, for each open set, the data could be the ring of continuous functions defined on that open set. Such data are well behaved in that they can be restricted to smaller open sets, and also the data assigned to an open set are equivalent to all collections of compatible data assigned to collections of smaller open sets covering the original open set (intuitively, every datum is the sum of its constituent data). The field of mathematics that studies sheaves is called sheaf theory.

Sheaves are understood conceptually as general and abstract objects. Their precise definition is rather technical. They are specifically defined as sheaves of sets or as sheaves of rings, for example, depending on the type of data assigned to the open sets. There are also maps (or morphisms) from one sheaf to another; sheaves (of a specific type, such as sheaves of abelian groups) with their morphisms on a fixed topological space form a category. On the other hand, to each continuous map there is associated both a direct image functor, taking sheaves and their morphisms on the domain to sheaves and morphisms on the codomain, and an inverse image functor operating in the opposite direction. These functors, and certain variants of them, are essential parts of sheaf theory.

Due to their general nature and versatility, sheaves have several applications in topology and especially in algebraic and differential geometry. First, geometric structures such as that of a differentiable manifold or a scheme can be expressed in terms of a sheaf of rings on the space. In such contexts, several geometric constructions such as vector bundles or divisors are naturally specified in terms of sheaves.
Second, sheaves provide the framework for a very general cohomology theory, which also encompasses the "usual" topological cohomology theories such as singular cohomology. Especially in algebraic geometry and the theory of complex manifolds, sheaf cohomology provides a powerful link between topological and geometric properties of spaces. Sheaves also provide the basis for the theory of D-modules, which have applications to the theory of differential equations. In addition, generalisations of sheaves to more general settings than topological spaces, such as the notion of a sheaf on a category with respect to some Grothendieck topology, have provided applications to mathematical logic and to number theory.

In many mathematical branches, several structures defined on a topological space $X$ (e.g., a differentiable manifold) can be naturally localised or restricted to open subsets $U \subseteq X$: typical examples include continuous real-valued or complex-valued functions, $n$-times differentiable (real-valued or complex-valued) functions, bounded real-valued functions, vector fields, and sections of any vector bundle on the space. The ability to restrict data to smaller open subsets gives rise to the concept of presheaves. Roughly speaking, sheaves are then those presheaves where local data can be glued to global data.

Let $X$ be a topological space. A presheaf $\mathcal{F}$ of sets on $X$ consists of the following data:

- for each open set $U \subseteq X$, a set $\mathcal{F}(U)$, whose elements are called the sections of $\mathcal{F}$ over $U$;
- for each inclusion of open sets $V \subseteq U$, a map $\operatorname{res}_{V,U} \colon \mathcal{F}(U) \to \mathcal{F}(V)$, called a restriction morphism.

The restriction morphisms are required to satisfy two additional (functorial) properties:

- for every open set $U$ of $X$, the restriction morphism $\operatorname{res}_{U,U} \colon \mathcal{F}(U) \to \mathcal{F}(U)$ is the identity;
- for open sets $W \subseteq V \subseteq U$, the composite $\operatorname{res}_{W,V} \circ \operatorname{res}_{V,U}$ equals $\operatorname{res}_{W,U}$.

Informally, the second axiom says it does not matter whether we restrict to $W$ in one step or restrict first to $V$, then to $W$. A concise functorial reformulation of this definition is given further below.
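As a concrete illustration, the presheaf of functions on a small finite topological space can be modelled directly and the two functorial properties checked by brute force. The encoding below (function names, the particular four-open-set topology) is illustrative, not standard:

```python
# A minimal sketch of a presheaf of functions on a finite topological space.
# The space is X = {1, 2, 3} with open sets: {}, {1}, {1,2}, {1,2,3}.
# F(U) is the set of {0,1}-valued functions on U, represented as dicts,
# and restriction simply forgets the values outside the smaller open set.

from itertools import product

opens = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]

def sections(U, values=(0, 1)):
    """All functions U -> values, each encoded as a dict {point: value}."""
    pts = sorted(U)
    return [dict(zip(pts, vals)) for vals in product(values, repeat=len(pts))]

def res(V, U, s):
    """Restriction morphism F(U) -> F(V) for V an open subset of U."""
    assert V <= U
    return {x: s[x] for x in V}

U, V, W = opens[3], opens[2], opens[1]
for s in sections(U):
    # second axiom: restricting in one step agrees with restricting in two
    assert res(W, U, s) == res(W, V, res(V, U, s))
    # first axiom: res_{U,U} is the identity on F(U)
    assert res(U, U, s) == s
```

The same dictionary encoding reappears naturally for any presheaf of functions; only the topology and the value set change.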
Many examples of presheaves come from different classes of functions: to any $U$, one can assign the set $C^0(U)$ of continuous real-valued functions on $U$. The restriction maps are then just given by restricting a continuous function on $U$ to a smaller open subset $V \subseteq U$, which again is a continuous function. The two presheaf axioms are immediately checked, thereby giving an example of a presheaf. This can be extended to a presheaf of holomorphic functions $\mathcal{H}(-)$ and a presheaf of smooth functions $C^{\infty}(-)$.

Another common class of examples is assigning to $U$ the set of constant real-valued functions on $U$. This presheaf is called the constant presheaf associated to $\mathbb{R}$ and is denoted $\underline{\mathbb{R}}^{\text{psh}}$.

Given a presheaf, a natural question to ask is to what extent its sections over an open set $U$ are specified by their restrictions to open subsets of $U$. A sheaf is a presheaf whose sections are, in a technical sense, uniquely determined by their restrictions. Axiomatically, a sheaf is a presheaf that satisfies both of the following axioms:

1. (Locality) Suppose $U$ is an open set, $\{U_i\}_{i \in I}$ is an open cover of $U$, and $s, t \in \mathcal{F}(U)$ are sections. If $s|_{U_i} = t|_{U_i}$ for all $i \in I$, then $s = t$.
2. (Gluing) Suppose $U$ is an open set, $\{U_i\}_{i \in I}$ is an open cover of $U$, and for each $i$ a section $s_i \in \mathcal{F}(U_i)$ is given. If the sections agree on the overlaps, i.e. $s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j}$ for all $i, j \in I$, then there is a section $s \in \mathcal{F}(U)$ with $s|_{U_i} = s_i$ for all $i \in I$.

In both of these axioms, the hypothesis on the open cover is equivalent to the assumption that $\bigcup_{i \in I} U_i = U$. The section $s$ whose existence is guaranteed by axiom 2 is called the gluing, concatenation, or collation of the sections $s_i$. By axiom 1 it is unique. Sections $s_i$ and $s_j$ satisfying the agreement precondition of axiom 2 are often called compatible; thus axioms 1 and 2 together state that any collection of pairwise compatible sections can be uniquely glued together.
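The gluing axiom, and the way the constant presheaf fails it, can both be seen on the smallest possible example: a two-point discrete space covered by two disjoint opens. The sketch below uses an illustrative dictionary encoding:

```python
# A sketch of the gluing axiom on the discrete space X = {1, 2},
# covered by the disjoint opens U1 = {1} and U2 = {2}.
# Sections of the presheaf of all functions glue; the constant
# presheaf fails, because two different constants are compatible
# (the overlap U1 ∩ U2 is empty) yet admit no constant gluing.

def glue(s1, s2):
    """Glue two compatible sections (dicts) over a two-set cover."""
    # compatibility: the sections agree on the overlap of their domains
    assert all(s1[x] == s2[x] for x in s1.keys() & s2.keys())
    return {**s1, **s2}

s1, s2 = {1: 0.0}, {2: 1.0}          # sections over U1 and U2
s = glue(s1, s2)                     # exists in the presheaf of all functions
assert {x: s[x] for x in s1} == s1 and {x: s[x] for x in s2} == s2

is_constant = len(set(s.values())) <= 1
assert not is_constant               # no constant function restricts to both
```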
A separated presheaf, or monopresheaf, is a presheaf satisfying axiom 1.[2]

The presheaf consisting of continuous functions mentioned above is a sheaf. This assertion reduces to checking that, given continuous functions $f_i : U_i \to \mathbb{R}$ which agree on the intersections $U_i \cap U_j$, there is a unique continuous function $f : U \to \mathbb{R}$ whose restriction equals the $f_i$. By contrast, the constant presheaf is usually not a sheaf, as it fails to satisfy the locality axiom on the empty set (this is explained in more detail at constant sheaf).

Presheaves and sheaves are typically denoted by capital letters, $F$ being particularly common, presumably for the French word for sheaf, faisceau. Use of calligraphic letters such as $\mathcal{F}$ is also common.

It can be shown that to specify a sheaf, it is enough to specify its restriction to the open sets of a basis for the topology of the underlying space. Moreover, it can also be shown that it is enough to verify the sheaf axioms above relative to the open sets of a covering. This observation is used to construct another example which is crucial in algebraic geometry, namely quasi-coherent sheaves. Here the topological space in question is the spectrum of a commutative ring $R$, whose points are the prime ideals $\mathfrak{p}$ in $R$. The open sets $D_f := \{\mathfrak{p} \subseteq R, f \notin \mathfrak{p}\}$ form a basis for the Zariski topology on this space. Given an $R$-module $M$, there is a sheaf, denoted by $\tilde{M}$ on $\operatorname{Spec} R$, that satisfies
$$\tilde{M}(D_f) := M[f^{-1}],$$
the localization of $M$ at $f$.

There is another characterization of sheaves that is equivalent to the one previously discussed.
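A minimal worked case of this assignment, taking $R = M = \mathbb{Z}$ and $f = 2$ (a standard computation, spelled out here for concreteness):

```latex
% Worked example: R = M = \mathbb{Z}, f = 2.
% The basic open D_2 consists of the primes (p) with 2 \notin (p),
% i.e. all primes except (2), together with the generic point (0).
\tilde{\mathbb{Z}}(D_2) \;=\; \mathbb{Z}[2^{-1}]
  \;=\; \left\{ \tfrac{a}{2^k} \;\middle|\; a \in \mathbb{Z},\ k \geq 0 \right\},
\qquad
\tilde{\mathbb{Z}}(\operatorname{Spec}\mathbb{Z}) \;=\; \mathbb{Z}.
```

Inverting $f$ is forced by the sheaf axioms: a section over $D_f$ must be divisible by every power of $f$, since $f$ vanishes at no point of $D_f$.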
A presheaf $\mathcal{F}$ is a sheaf if and only if for any open $U$ and any open cover $\{U_a\}$ of $U$, $\mathcal{F}(U)$ is the fibre product $\mathcal{F}(U) \cong \mathcal{F}(U_a) \times_{\mathcal{F}(U_a \cap U_b)} \mathcal{F}(U_b)$. This characterization is useful in the construction of sheaves. For example, if $\mathcal{F}, \mathcal{G}$ are abelian sheaves, then the kernel of a morphism of sheaves $\mathcal{F} \to \mathcal{G}$ is a sheaf, since projective limits commute with projective limits. On the other hand, the cokernel is not always a sheaf, because an inductive limit does not necessarily commute with projective limits. One way to fix this is to consider Noetherian topological spaces; there every open set is compact, so that the cokernel is a sheaf, since finite projective limits commute with inductive limits.

Any continuous map $f : Y \to X$ of topological spaces determines a sheaf $\Gamma(Y/X)$ on $X$ by setting
$$\Gamma(Y/X)(U) = \{s : U \to Y \mid s \text{ continuous and } f \circ s = \operatorname{id}_U\}.$$
Any such $s$ is commonly called a section of $f$, and this example is the reason why the elements in $\mathcal{F}(U)$ are generally called sections. This construction is especially important when $f$ is the projection of a fiber bundle onto its base space. For example, the sheaves of smooth functions are the sheaves of sections of the trivial bundle. Another example: the sheaf of sections of the complex exponential $\exp \colon \mathbb{C} \to \mathbb{C} \setminus \{0\}$ is the sheaf which assigns to any $U \subseteq \mathbb{C} \setminus \{0\}$ the set of branches of the complex logarithm on $U$.

Given a point $x$ and an abelian group $S$, the skyscraper sheaf $S_x$ is defined as follows: if $U$ is an open set containing $x$, then $S_x(U) = S$.
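For a concrete instance of the last example, the sections over a connected open set can be written down completely; this is the standard fact that two branches of the logarithm differ by an integer multiple of $2\pi i$:

```latex
% Branches of the logarithm as sections of exp : \mathbb{C} \to \mathbb{C}\setminus\{0\}.
% If s is one section over a connected open U, i.e. \exp(s(z)) = z for z \in U,
% then the full set of sections over U is
\Gamma\bigl(\mathbb{C}/(\mathbb{C}\setminus\{0\})\bigr)(U)
  \;=\; \{\, s + 2\pi i k \mid k \in \mathbb{Z} \,\},
% while over U = \mathbb{C}\setminus\{0\} itself no section exists at all.
```

That no global section exists, while sections abound locally, is exactly the local-to-global tension that sheaf cohomology later measures.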
If $U$ does not contain $x$, then $S_x(U) = 0$, the trivial group. The restriction maps are either the identity on $S$, if both open sets contain $x$, or the zero map otherwise.

On an $n$-dimensional $C^k$-manifold $M$, there are a number of important sheaves, such as the sheaf of $j$-times continuously differentiable functions $\mathcal{O}_M^j$ (with $j \leq k$). Its sections on some open $U$ are the $C^j$-functions $U \to \mathbb{R}$. For $j = k$, this sheaf is called the structure sheaf and is denoted $\mathcal{O}_M$. The nonzero $C^k$ functions also form a sheaf, denoted $\mathcal{O}_M^{\times}$. Differential forms (of degree $p$) also form a sheaf $\Omega_M^p$. In all these examples, the restriction morphisms are given by restricting functions or forms.

The assignment sending $U$ to the compactly supported functions on $U$ is not a sheaf, since there is, in general, no way to preserve this property by passing to a smaller open subset. Instead, this forms a cosheaf, a dual concept where the restriction maps go in the opposite direction than with sheaves.[3] However, taking the dual of these vector spaces does give a sheaf, the sheaf of distributions.

In addition to the constant presheaf mentioned above, which is usually not a sheaf, there are further examples of presheaves that are not sheaves.

One of the historical motivations for sheaves has come from studying complex manifolds,[4] complex analytic geometry,[5] and scheme theory from algebraic geometry.
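The skyscraper sheaf's data is simple enough to model directly. A minimal sketch over a finite space, with illustrative names and $S = \mathbb{Z}/5$ represented by the integers $0$ through $4$:

```python
# Sketch of a skyscraper sheaf S_x on X = {1, 2, 3} with x = 2 and S = Z/5,
# modelled over the finite topology {}, {1}, {1,2}, {1,2,3}.
# S_x(U) = S when x is in U, and the trivial group 0 otherwise.

x = 2
S = set(range(5))          # the group Z/5 as a set of representatives
ZERO = {0}                 # the trivial group

def sections(U):
    return S if x in U else ZERO

def res(V, U, s):
    """Restriction S_x(U) -> S_x(V): identity if both opens contain x, else zero."""
    assert V <= U and s in sections(U)
    return s if (x in U and x in V) else 0

big, small, tiny = frozenset({1, 2, 3}), frozenset({1, 2}), frozenset({1})
assert sections(big) == S and sections(tiny) == ZERO
assert res(small, big, 3) == 3     # identity: both opens contain x = 2
assert res(tiny, big, 3) == 0      # zero map: {1} does not contain x
```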
This is because in all of the previous cases, we consider a topological space $X$ together with a structure sheaf $\mathcal{O}$ giving it the structure of a complex manifold, complex analytic space, or scheme. This perspective of equipping a topological space with a sheaf is essential to the theory of locally ringed spaces (see below).

One of the main historical motivations for introducing sheaves was constructing a device which keeps track of holomorphic functions on complex manifolds. For example, on a compact complex manifold $X$ (like complex projective space or the vanishing locus in projective space of a homogeneous polynomial), the only holomorphic functions $f : X \to \mathbb{C}$ are the constant functions.[6][7] This means there exist two compact complex manifolds $X, X'$ which are not isomorphic, but nevertheless their rings of global holomorphic functions, denoted $\mathcal{H}(X), \mathcal{H}(X')$, are isomorphic. Contrast this with smooth manifolds, where every manifold $M$ can be embedded inside some $\mathbb{R}^n$, hence its ring of smooth functions $C^{\infty}(M)$ comes from restricting the smooth functions from $C^{\infty}(\mathbb{R}^n)$, of which there exist plenty.

Another complexity when considering the ring of holomorphic functions on a complex manifold $X$ is that, given a small enough open set $U \subseteq X$, the holomorphic functions will be isomorphic to $\mathcal{H}(U) \cong \mathcal{H}(\mathbb{C}^n)$. Sheaves are a direct tool for dealing with this complexity, since they make it possible to keep track of the holomorphic structure on the underlying topological space of $X$ on arbitrary open subsets $U \subseteq X$.
This means that as $U$ becomes more complex topologically, the ring $\mathcal{H}(U)$ can be expressed by gluing the $\mathcal{H}(U_i)$. Note that sometimes this sheaf is denoted $\mathcal{O}(-)$ or just $\mathcal{O}$, or even $\mathcal{O}_X$ when we want to emphasize the space to which the structure sheaf is associated.

Another common example of sheaves can be constructed by considering a complex submanifold $Y \hookrightarrow X$. There is an associated sheaf $\mathcal{O}_Y$ which takes an open subset $U \subseteq X$ and gives the ring of holomorphic functions on $U \cap Y$. This kind of formalism was found to be extremely powerful, and it motivates a lot of homological algebra such as sheaf cohomology, since an intersection theory can be built using these kinds of sheaves from the Serre intersection formula.

Morphisms of sheaves are, roughly speaking, analogous to functions between them. In contrast to a function between sets, which is simply an assignment of outputs to inputs, morphisms of sheaves are also required to be compatible with the local–global structures of the underlying sheaves. This idea is made precise in the following definition.

Let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves of sets (respectively abelian groups, rings, etc.) on $X$. A morphism $\varphi : \mathcal{F} \to \mathcal{G}$ consists of a morphism $\varphi_U : \mathcal{F}(U) \to \mathcal{G}(U)$ of sets (respectively abelian groups, rings, etc.) for each open set $U$ of $X$, subject to the condition that this morphism is compatible with restrictions. In other words, for every open subset $V$ of an open set $U$, the following diagram is commutative.
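The compatibility square required by this definition can be written out as follows (writing $\operatorname{res}_{V,U}$ for the restriction maps of both sheaves):

```latex
% Compatibility of a sheaf morphism with restriction: for V \subseteq U,
% restricting after applying \varphi agrees with applying \varphi after restricting.
\begin{array}{ccc}
\mathcal{F}(U) & \xrightarrow{\ \varphi_U\ } & \mathcal{G}(U) \\[2pt]
{\scriptstyle \operatorname{res}_{V,U}} \big\downarrow & & \big\downarrow {\scriptstyle \operatorname{res}_{V,U}} \\[2pt]
\mathcal{F}(V) & \xrightarrow{\ \varphi_V\ } & \mathcal{G}(V)
\end{array}
\qquad\text{i.e.}\qquad
\varphi_V \circ \operatorname{res}_{V,U} \;=\; \operatorname{res}_{V,U} \circ \varphi_U .
```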
For example, taking the derivative gives a morphism of sheaves on $\mathbb{R}$: $\frac{\mathrm{d}}{\mathrm{d}x} \colon \mathcal{O}_{\mathbb{R}}^{n} \to \mathcal{O}_{\mathbb{R}}^{n-1}$. Indeed, given an ($n$-times continuously differentiable) function $f : U \to \mathbb{R}$ (with $U$ in $\mathbb{R}$ open), the restriction (to a smaller open subset $V$) of its derivative equals the derivative of $f|_V$.

With this notion of morphism, sheaves of sets (respectively abelian groups, rings, etc.) on a fixed topological space $X$ form a category. The general categorical notions of mono-, epi- and isomorphisms can therefore be applied to sheaves.

A morphism $\varphi \colon \mathcal{F} \to \mathcal{G}$ of sheaves on $X$ is an isomorphism (respectively monomorphism) if and only if there exists an open cover $\{U_\alpha\}$ of $X$ such that $\varphi|_{U_\alpha} \colon \mathcal{F}(U_\alpha) \to \mathcal{G}(U_\alpha)$ are isomorphisms (respectively injective morphisms) of sets (respectively abelian groups, rings, etc.) for all $\alpha$. These statements give examples of how to work with sheaves using local information, but it is important to note that we cannot check whether a morphism of sheaves is an epimorphism in the same manner. Indeed, the statement that the maps on the level of open sets $\varphi_U \colon \mathcal{F}(U) \to \mathcal{G}(U)$ are not always surjective for epimorphisms of sheaves is equivalent to non-exactness of the global sections functor, or equivalently, to non-triviality of sheaf cohomology.
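The classic instance of this phenomenon, a standard fact from complex analysis, is the exponential sequence on $X = \mathbb{C} \setminus \{0\}$:

```latex
% The exponential sequence of sheaves on X = \mathbb{C} \setminus \{0\}:
0 \longrightarrow \underline{2\pi i\,\mathbb{Z}}
  \longrightarrow \mathcal{O}_X
  \xrightarrow{\ f \,\mapsto\, e^{f}\ } \mathcal{O}_X^{\times}
  \longrightarrow 0.
% The map to \mathcal{O}_X^\times is an epimorphism of sheaves: every
% nonvanishing holomorphic function has a logarithm locally. Yet on global
% sections the identity function z \in \mathcal{O}_X^\times(X) has no
% preimage, since a preimage would be a global branch of \log z on X.
```

The failure of surjectivity on global sections here is measured exactly by the first sheaf cohomology group of the kernel.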
The stalk $\mathcal{F}_x$ of a sheaf $\mathcal{F}$ captures the properties of a sheaf "around" a point $x \in X$, generalizing the germs of functions. Here, "around" means that, conceptually speaking, one looks at smaller and smaller neighborhoods of the point. Of course, no single neighborhood will be small enough, which requires considering a limit of some sort. More precisely, the stalk is defined by the direct limit
$$\mathcal{F}_x := \varinjlim_{U \ni x} \mathcal{F}(U),$$
the limit being over all open subsets of $X$ containing the given point $x$. In other words, an element of the stalk is given by a section over some open neighborhood of $x$, and two such sections are considered equivalent if their restrictions agree on a smaller neighborhood.

The natural morphism $\mathcal{F}(U) \to \mathcal{F}_x$ takes a section $s$ in $\mathcal{F}(U)$ to its germ $s_x$ at $x$. This generalises the usual definition of a germ.

In many situations, knowing the stalks of a sheaf is enough to control the sheaf itself. For example, whether or not a morphism of sheaves is a monomorphism, epimorphism, or isomorphism can be tested on the stalks. In this sense, a sheaf is determined by its stalks, which are local data. By contrast, the global information present in a sheaf, i.e., the global sections, i.e., the sections $\mathcal{F}(X)$ on the whole space $X$, typically carry less information. For example, for a compact complex manifold $X$, the global sections of the sheaf of holomorphic functions are just $\mathbb{C}$, since any holomorphic function is constant by Liouville's theorem.[6]

It is frequently useful to take the data contained in a presheaf and to express it as a sheaf. It turns out that there is a best possible way to do this.
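On a finite topological space, every point has a smallest open neighborhood, so the direct limit collapses: the stalk is simply the set of sections over that minimal open set. A sketch, with an illustrative topology and the sheaf of $\{0,1\}$-valued functions:

```python
# Stalks over a finite topological space: the direct limit over all opens
# containing x is realized by the minimal open neighborhood of x, which is
# cofinal in the system of neighborhoods.
# Here X = {1, 2, 3} with opens {}, {1}, {1,2}, {1,2,3}, and F(U) is the
# set of all functions U -> {0, 1}.

opens = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]

def minimal_open(x):
    """Smallest open set containing x -- exists on any finite space."""
    return min((U for U in opens if x in U), key=len)

def stalk_size(x, values=2):
    # |F_x| = |F(U_x)| = values ** |U_x| for this sheaf of functions
    return values ** len(minimal_open(x))

assert minimal_open(1) == frozenset({1})
assert minimal_open(3) == frozenset({1, 2, 3})
assert stalk_size(1) == 2 and stalk_size(3) == 8
```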
It takes a presheaf $\mathcal{F}$ and produces a new sheaf $a\mathcal{F}$ called the sheafification or sheaf associated to the presheaf $\mathcal{F}$. For example, the sheafification of the constant presheaf (see above) is called the constant sheaf. Despite its name, its sections are locally constant functions.

The sheaf $a\mathcal{F}$ can be constructed using the étalé space $E$ of $\mathcal{F}$, namely as the sheaf of sections of the natural map $E \to X$.

Another construction of the sheaf $a\mathcal{F}$ proceeds by means of a functor $L$ from presheaves to presheaves that gradually improves the properties of a presheaf: for any presheaf $\mathcal{F}$, $L\mathcal{F}$ is a separated presheaf, and for any separated presheaf $\mathcal{F}$, $L\mathcal{F}$ is a sheaf. The associated sheaf $a\mathcal{F}$ is given by $LL\mathcal{F}$.[8]

The idea that the sheaf $a\mathcal{F}$ is the best possible approximation to $\mathcal{F}$ by a sheaf is made precise using the following universal property: there is a natural morphism of presheaves $i \colon \mathcal{F} \to a\mathcal{F}$ so that for any sheaf $\mathcal{G}$ and any morphism of presheaves $f \colon \mathcal{F} \to \mathcal{G}$, there is a unique morphism of sheaves $\tilde{f} \colon a\mathcal{F} \to \mathcal{G}$ such that $f = \tilde{f} i$. In fact, $a$ is the left adjoint functor to the inclusion functor (or forgetful functor) from the category of sheaves to the category of presheaves, and $i$ is the unit of the adjunction. In this way, the category of sheaves turns into a Giraud subcategory of presheaves.
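A small computation showing why sheafification genuinely changes the sections: take $X = U_1 \sqcup U_2$, a space with two connected components (a standard example):

```latex
% The constant presheaf assigns \underline{\mathbb{R}}^{\text{psh}}(X) = \mathbb{R},
% but the two sections c_1 on U_1 and c_2 on U_2 are always compatible
% (the overlap is empty) while admitting no single constant gluing when
% c_1 \neq c_2. The sheafification -- the constant sheaf -- has the locally
% constant functions as sections, so
\bigl(a\,\underline{\mathbb{R}}^{\text{psh}}\bigr)(X)
  \;=\; \underline{\mathbb{R}}(X)
  \;\cong\; \mathbb{R} \times \mathbb{R},
% one constant value for each connected component.
```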
This categorical situation is the reason why the sheafification functor appears in constructing cokernels of sheaf morphisms or tensor products of sheaves, but not for kernels, say.

If $K$ is a subsheaf of a sheaf $F$ of abelian groups, then the quotient sheaf $Q$ is the sheaf associated to the presheaf $U \mapsto F(U)/K(U)$; in other words, the quotient sheaf fits into an exact sequence of sheaves of abelian groups
$$0 \to K \to F \to Q \to 0$$
(this is also called a sheaf extension).

Let $F, G$ be sheaves of abelian groups. The set $\operatorname{Hom}(F, G)$ of morphisms of sheaves from $F$ to $G$ forms an abelian group (by the abelian group structure of $G$). The sheaf hom of $F$ and $G$, denoted by $\mathcal{H}om(F, G)$, is the sheaf of abelian groups $U \mapsto \operatorname{Hom}(F|_U, G|_U)$, where $F|_U$ is the sheaf on $U$ given by $(F|_U)(V) = F(V)$ (note that sheafification is not needed here). The direct sum of $F$ and $G$ is the sheaf given by $U \mapsto F(U) \oplus G(U)$, and the tensor product of $F$ and $G$ is the sheaf associated to the presheaf $U \mapsto F(U) \otimes G(U)$.

All of these operations extend to sheaves of modules over a sheaf of rings $A$; the above is the special case when $A$ is the constant sheaf $\underline{\mathbf{Z}}$.

Since the data of a (pre-)sheaf depends on the open subsets of the base space, sheaves on different topological spaces are unrelated to each other in the sense that there are no morphisms between them.
However, given a continuous map $f : X \to Y$ between two topological spaces, pushforward and pullback relate sheaves on $X$ to those on $Y$ and vice versa.

The pushforward (also known as direct image) of a sheaf $\mathcal{F}$ on $X$ is the sheaf defined by
$$(f_* \mathcal{F})(V) := \mathcal{F}(f^{-1}(V)).$$
Here $V$ is an open subset of $Y$, so that its preimage is open in $X$ by the continuity of $f$. This construction recovers the skyscraper sheaf $S_x$ mentioned above:
$$S_x = i_* S,$$
where $i : \{x\} \to X$ is the inclusion, and $S$ is regarded as a sheaf on the singleton by $S(\{*\}) = S$, $S(\emptyset) = \emptyset$.

For a map between locally compact spaces, the direct image with compact support is a subsheaf of the direct image.[9] By definition, $(f_! \mathcal{F})(V)$ consists of those $s \in \mathcal{F}(f^{-1}(V))$ whose support is mapped properly. If $f$ is proper itself, then $f_! \mathcal{F} = f_* \mathcal{F}$, but in general they disagree.

The pullback or inverse image goes the other way: it produces a sheaf on $X$, denoted $f^{-1} \mathcal{G}$, out of a sheaf $\mathcal{G}$ on $Y$. If $f$ is the inclusion of an open subset, then the inverse image is just a restriction, i.e., it is given by $(f^{-1} \mathcal{G})(U) = \mathcal{G}(U)$ for an open $U$ in $X$.

A sheaf $\mathcal{F}$ (on some space $X$) is called locally constant if $X = \bigcup_{i \in I} U_i$ for some open subsets $U_i$ such that the restriction of $\mathcal{F}$ to each of these open subsets is constant.
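The defining formula for the direct image is mechanical enough to model on finite spaces; the encoding below is illustrative (in particular, $X$ carries the topology $\{\emptyset, \{3\}, X\}$, so preimages of the chosen opens of $Y$ are indeed open):

```python
# Sketch of the direct image: f : X -> Y collapses X = {1, 2, 3} onto
# Y = {a, b} by f(1) = f(2) = a, f(3) = b, where Y is the Sierpinski-type
# space with opens {}, {b}, {a, b}. F is the sheaf of {0,1}-valued
# functions on X, and (f_* F)(V) := F(f^{-1}(V)) for V open in Y.

from itertools import product

f = {1: 'a', 2: 'a', 3: 'b'}
opens_Y = [frozenset(), frozenset({'b'}), frozenset({'a', 'b'})]

def preimage(V):
    return frozenset(x for x, y in f.items() if y in V)

def F(U):
    """Sections of the sheaf of {0,1}-valued functions on U ⊆ X."""
    pts = sorted(U)
    return [dict(zip(pts, vals)) for vals in product((0, 1), repeat=len(pts))]

def pushforward(V):
    return F(preimage(V))

assert preimage(frozenset({'b'})) == frozenset({3})
assert len(pushforward(frozenset({'b'}))) == 2          # functions on {3}
assert len(pushforward(frozenset({'a', 'b'}))) == 8     # functions on {1,2,3}
```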
On a wide range of topological spaces $X$, such sheaves are equivalent to representations of the fundamental group $\pi_1(X)$.

For general maps $f$, the definition of $f^{-1} \mathcal{G}$ is more involved; it is detailed at inverse image functor. The stalk is an essential special case of the pullback in view of a natural identification, where $i$ is as above:
$$\mathcal{G}_x = \left(i^{-1} \mathcal{G}\right)(\{x\}).$$
More generally, stalks satisfy $(f^{-1} \mathcal{G})_x = \mathcal{G}_{f(x)}$.

For the inclusion $j : U \to X$ of an open subset, the extension by zero $j_! \mathcal{F}$ (pronounced "j lower shriek of F") of a sheaf $\mathcal{F}$ of abelian groups on $U$ is the sheafification of the presheaf defined by
$$V \mapsto \begin{cases} \mathcal{F}(V) & \text{if } V \subseteq U, \\ 0 & \text{otherwise.} \end{cases}$$
For a sheaf $\mathcal{G}$ on $X$, this construction is in a sense complementary to $i_*$, where $i : X \setminus U \to X$ is the inclusion of the complement of $U$: the stalks of $j_! \mathcal{F}$ are $\mathcal{F}_x$ for $x \in U$ and $0$ otherwise, whereas the stalks of $i_* \mathcal{G}$ vanish on $U$.

More generally, if $A \subset X$ is a locally closed subset, then there exists an open $U$ of $X$ containing $A$ such that $A$ is closed in $U$. Let $f : A \to U$ and $j : U \to X$ be the natural inclusions. Then the extension by zero of a sheaf $\mathcal{F}$ on $A$ is defined by $j_! f_* \mathcal{F}$.

Due to its nice behavior on stalks, the extension by zero functor is useful for reducing sheaf-theoretic questions on $X$ to ones on the strata of a stratification, i.e., a decomposition of $X$ into smaller, locally closed subsets.

In addition to (pre-)sheaves as introduced above, where $\mathcal{F}(U)$ is merely a set, it is in many cases important to keep track of additional structure on these sections.
For example, the sections of the sheaf of continuous functions naturally form a real vector space, and restriction is a linear map between these vector spaces.

Presheaves with values in an arbitrary category $C$ are defined by first considering the category of open sets on $X$ to be the posetal category $O(X)$ whose objects are the open sets of $X$ and whose morphisms are inclusions. Then a $C$-valued presheaf on $X$ is the same as a contravariant functor from $O(X)$ to $C$. Morphisms in this category of functors, also known as natural transformations, are the same as the morphisms defined above, as can be seen by unraveling the definitions.

If the target category $C$ admits all limits, a $C$-valued presheaf is a sheaf if the following diagram is an equalizer for every open cover $\mathcal{U} = \{U_i\}_{i \in I}$ of any open set $U$:
$$\mathcal{F}(U) \to \prod_{i \in I} \mathcal{F}(U_i) \rightrightarrows \prod_{i,j \in I} \mathcal{F}(U_i \cap U_j).$$
Here the first map is the product of the restriction maps $\operatorname{res}_{U_i, U}$, and the pair of arrows are the products of the two sets of restrictions $\operatorname{res}_{U_i \cap U_j,\, U_i}$ and $\operatorname{res}_{U_i \cap U_j,\, U_j}$. If $C$ is an abelian category, this condition can also be rephrased by requiring that there is an exact sequence
$$0 \to \mathcal{F}(U) \to \prod_{i \in I} \mathcal{F}(U_i) \to \prod_{i,j \in I} \mathcal{F}(U_i \cap U_j),$$
the last map being the difference of the two restriction maps.

A particular case of this sheaf condition occurs for $U$ being the empty set and the index set $I$ also being empty. In this case, the sheaf condition requires $\mathcal{F}(\emptyset)$ to be the terminal object in $C$.

In several geometrical disciplines, including algebraic geometry and differential geometry, the spaces come along with a natural sheaf of rings, often called the structure sheaf and denoted by $\mathcal{O}_X$. Such a pair $(X, \mathcal{O}_X)$ is called a ringed space. Many types of spaces can be defined as certain types of ringed spaces.
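The equalizer condition can be checked by brute force for the sheaf of $\{0,1\}$-valued functions on a small discrete space; the sketch below verifies that restriction maps $\mathcal{F}(U)$ bijectively onto the compatible pairs:

```python
# Brute-force check of the equalizer sheaf condition for the sheaf of
# {0,1}-valued functions on the discrete space X = {1, 2, 3}, with the
# cover U1 = {1, 2}, U2 = {2, 3} of U = X (overlap U1 ∩ U2 = {2}).

from itertools import product

def F(U):
    pts = sorted(U)
    return [dict(zip(pts, vals)) for vals in product((0, 1), repeat=len(pts))]

def res(V, s):
    return {x: s[x] for x in V}

U, U1, U2 = {1, 2, 3}, {1, 2}, {2, 3}
overlap = U1 & U2

# The equalizer: pairs (s1, s2) whose restrictions agree on the overlap.
pairs = [(s1, s2) for s1 in F(U1) for s2 in F(U2)
         if res(overlap, s1) == res(overlap, s2)]

# The restriction map F(U) -> F(U1) x F(U2) lands bijectively on these pairs.
image = [(res(U1, s), res(U2, s)) for s in F(U)]
assert sorted(map(str, image)) == sorted(map(str, pairs))
assert len(pairs) == len(F(U)) == 8
```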
Commonly, all the stalks $\mathcal{O}_{X,x}$ of the structure sheaf are local rings, in which case the pair is called a locally ringed space. For example, an $n$-dimensional $C^k$ manifold $M$ is a locally ringed space whose structure sheaf consists of $C^k$-functions on the open subsets of $M$. The property of being a locally ringed space translates into the fact that such a function, which is nonzero at a point $x$, is also non-zero on a sufficiently small open neighborhood of $x$. Some authors actually define real (or complex) manifolds to be locally ringed spaces that are locally isomorphic to the pair consisting of an open subset of $\mathbb{R}^n$ (respectively $\mathbb{C}^n$) together with the sheaf of $C^k$ (respectively holomorphic) functions.[10] Similarly, schemes, the foundational notion of spaces in algebraic geometry, are locally ringed spaces that are locally isomorphic to the spectrum of a ring.

Given a ringed space, a sheaf of modules is a sheaf $\mathcal{M}$ such that on every open set $U$ of $X$, $\mathcal{M}(U)$ is an $\mathcal{O}_X(U)$-module and for every inclusion of open sets $V \subseteq U$, the restriction map $\mathcal{M}(U) \to \mathcal{M}(V)$ is compatible with the restriction map $\mathcal{O}(U) \to \mathcal{O}(V)$: the restriction of $fs$ is the restriction of $f$ times the restriction of $s$ for any $f$ in $\mathcal{O}(U)$ and $s$ in $\mathcal{M}(U)$.

Most important geometric objects are sheaves of modules.
For example, there is a one-to-one correspondence between vector bundles and locally free sheaves of $\mathcal{O}_X$-modules. This paradigm applies to real vector bundles, complex vector bundles, or vector bundles in algebraic geometry (where $\mathcal{O}$ consists of smooth functions, holomorphic functions, or regular functions, respectively). Sheaves of solutions to differential equations are $D$-modules, that is, modules over the sheaf of differential operators. On any topological space, modules over the constant sheaf $\underline{\mathbf{Z}}$ are the same as sheaves of abelian groups in the sense above.

There is a different inverse image functor for sheaves of modules over sheaves of rings. This functor is usually denoted $f^*$ and it is distinct from $f^{-1}$. See inverse image functor.

Finiteness conditions for modules over commutative rings give rise to similar finiteness conditions for sheaves of modules: $\mathcal{M}$ is called finitely generated (respectively finitely presented) if, for every point $x$ of $X$, there exists an open neighborhood $U$ of $x$, a natural number $n$ (possibly depending on $U$), and a surjective morphism of sheaves $\mathcal{O}_X^n|_U \to \mathcal{M}|_U$ (respectively, in addition a natural number $m$, and an exact sequence $\mathcal{O}_X^m|_U \to \mathcal{O}_X^n|_U \to \mathcal{M}|_U \to 0$).
Paralleling the notion of a coherent module, M is called a coherent sheaf if it is of finite type and if, for every open set U and every morphism of sheaves φ : O_X^n → M (not necessarily surjective), the kernel of φ is of finite type. O_X is coherent if it is coherent as a module over itself. As for modules, coherence is in general a strictly stronger condition than finite presentation. The Oka coherence theorem states that the sheaf of holomorphic functions on a complex manifold is coherent. In the examples above it was noted that some sheaves occur naturally as sheaves of sections. In fact, all sheaves of sets can be represented as sheaves of sections of a topological space called the étalé space, from the French word étalé [etale], meaning roughly "spread out". If F ∈ Sh(X) is a sheaf over X, then the étalé space (sometimes called the étale space) of F is a topological space E together with a local homeomorphism π : E → X such that the sheaf of sections Γ(π, −) of π is F. The space E is usually very strange, and even if the sheaf F arises from a natural topological situation, E may not have any clear topological interpretation. For example, if F is the sheaf of sections of a continuous function f : Y → X, then E = Y if and only if f is a local homeomorphism. The étalé space E is constructed from the stalks of F over X. As a set, it is their disjoint union, and π is the obvious map that takes the value x on the stalk of F over x ∈ X.
The topology of E is defined as follows. For each element s ∈ F(U) and each x ∈ U, we get a germ of s at x, denoted [s]_x or s_x. These germs determine points of E. For any U and s ∈ F(U), the union of these points (for all x ∈ U) is declared to be open in E. Notice that each stalk has the discrete topology as its subspace topology. A morphism between two sheaves determines a continuous map of the corresponding étalé spaces that is compatible with the projection maps (in the sense that every germ is mapped to a germ over the same point). This makes the construction into a functor. The construction above determines an equivalence of categories between the category of sheaves of sets on X and the category of étalé spaces over X. The construction of an étalé space can also be applied to a presheaf, in which case the sheaf of sections of the étalé space recovers the sheaf associated to the given presheaf. This construction makes all sheaves into representable functors on certain categories of topological spaces. As above, let F be a sheaf on X, let E be its étalé space, and let π : E → X be the natural projection. Consider the overcategory Top/X of topological spaces over X, that is, the category of topological spaces together with fixed continuous maps to X. Every object of this category is a continuous map f : Y → X, and a morphism from Y → X to Z → X is a continuous map Y → Z that commutes with the two maps to X.
There is a functor Γ : Top/X → Sets sending an object f : Y → X to f^{-1}F(Y). For example, if i : U ↪ X is the inclusion of an open subset, then Γ(i) = f^{-1}F(U) = F(U) = Γ(F, U), and for the inclusion of a point i : {x} ↪ X, Γ(i) = f^{-1}F({x}) = F|_x is the stalk of F at x. There is a natural isomorphism (f^{-1}F)(Y) ≅ Hom_{Top/X}(f, π), which shows that π : E → X (for the étalé space) represents the functor Γ. E is constructed so that the projection map π is a covering map. In algebraic geometry, the natural analog of a covering map is called an étale morphism. Despite its similarity to "étalé", the word étale [etal] has a different meaning in French. It is possible to turn E into a scheme and π into a morphism of schemes in such a way that π retains the same universal property, but π is not in general an étale morphism because it is not quasi-finite. It is, however, formally étale. The definition of sheaves by étalé spaces is older than the definition given earlier in the article. It is still common in some areas of mathematics such as mathematical analysis. In contexts where the open set U is fixed and the sheaf is regarded as a variable, the set F(U) is also often denoted Γ(U, F). As was noted above, this functor does not preserve epimorphisms.
Instead, an epimorphism of sheaves F → G is a map with the following property: for any section g ∈ G(U) there is a covering U = ⋃_{i∈I} U_i of U by open subsets such that the restrictions g|_{U_i} are in the image of F(U_i). However, g itself need not be in the image of F(U). A concrete example of this phenomenon is the exponential map between the sheaf of holomorphic functions and the sheaf of non-zero holomorphic functions. This map is an epimorphism, which amounts to saying that any non-zero holomorphic function g (on some open subset of C, say) admits a complex logarithm locally, i.e., after restricting g to appropriate open subsets. However, g need not have a logarithm globally. Sheaf cohomology captures this phenomenon. More precisely, for an exact sequence of sheaves of abelian groups (i.e., an epimorphism F_2 → F_3 whose kernel is F_1), there is a long exact sequence

0 → Γ(U, F_1) → Γ(U, F_2) → Γ(U, F_3) → H^1(U, F_1) → H^1(U, F_2) → H^1(U, F_3) → H^2(U, F_1) → …

By means of this sequence, the first cohomology group H^1(U, F_1) is a measure of the non-surjectivity of the map between sections of F_2 and F_3.
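The logarithm example fits this pattern via the standard exponential sequence on a complex manifold X: a short exact sequence of sheaves of abelian groups whose first map is the inclusion of locally constant functions with values in 2πiZ and whose second map is the exponential,

```latex
0 \longrightarrow 2\pi i\,\underline{\mathbf{Z}}
  \longrightarrow \mathcal{O}_X
  \xrightarrow{\ \exp\ } \mathcal{O}_X^{\times}
  \longrightarrow 0 .
```

Exactness on the right expresses exactly the local statement above: exp is surjective on sufficiently small open sets (a local logarithm always exists), even though it need not be surjective on global sections.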
There are several different ways of constructing sheaf cohomology. Grothendieck (1957) introduced them by defining sheaf cohomology as the derived functor of Γ. This method is theoretically satisfactory but, being based on injective resolutions, of little use in concrete computations. Godement resolutions are another general, but practically inaccessible, approach. Especially in the context of sheaves on manifolds, sheaf cohomology can often be computed using resolutions by soft sheaves, fine sheaves, and flabby sheaves (also known as flasque sheaves, from the French flasque, meaning flabby). For example, a partition-of-unity argument shows that the sheaf of smooth functions on a manifold is soft. The higher cohomology groups H^i(U, F) for i > 0 vanish for soft sheaves, which gives a way of computing the cohomology of other sheaves. For example, the de Rham complex is a resolution of the constant sheaf R̲ on any smooth manifold, so the sheaf cohomology of R̲ is equal to its de Rham cohomology. A different approach is Čech cohomology. Čech cohomology was the first cohomology theory developed for sheaves, and it is well suited to concrete calculations, such as computing the coherent sheaf cohomology of complex projective space P^n.[11] It relates sections on open subsets of the space to cohomology classes on the space. In most cases, Čech cohomology computes the same cohomology groups as the derived functor cohomology. However, for some pathological spaces, Čech cohomology will give the correct H^1 but incorrect higher cohomology groups. To get around this, Jean-Louis Verdier developed hypercoverings. Hypercoverings not only give the correct higher cohomology groups but also allow the open subsets mentioned above to be replaced by certain morphisms from another space.
This flexibility is necessary in some applications, such as the construction of Pierre Deligne's mixed Hodge structures. Many other coherent sheaf cohomology groups are found using an embedding i : X ↪ Y of a space X into a space with known cohomology, such as P^n or some weighted projective space. In this way, the known sheaf cohomology groups on these ambient spaces can be related to the sheaves i_*F, giving H^i(Y, i_*F) ≅ H^i(X, F). For example, this makes computing the coherent sheaf cohomology of projective plane curves straightforward. One big theorem in this area is the Hodge decomposition, proved by Deligne using a spectral sequence associated to sheaf cohomology groups.[12][13] Essentially, the E_1-page with terms E_1^{p,q} = H^p(X, Ω_X^q), the sheaf cohomology of a smooth projective variety X, degenerates, meaning E_1 = E_∞. This gives the canonical Hodge structure on the cohomology groups H^k(X, C). It was later found that these cohomology groups can be computed easily and explicitly using Griffiths residues. See Jacobian ideal. These kinds of theorems lead to one of the deepest theorems about the cohomology of algebraic varieties, the decomposition theorem, paving the path for mixed Hodge modules. Another clean approach to the computation of some cohomology groups is the Borel–Bott–Weil theorem, which identifies the cohomology groups of some line bundles on flag manifolds with irreducible representations of Lie groups. This theorem can be used, for example, to easily compute the cohomology groups of all line bundles on projective space and Grassmann manifolds. In many cases there is a duality theory for sheaves that generalizes Poincaré duality.
See Grothendieck duality and Verdier duality. The derived category of the category of sheaves of, say, abelian groups on some space X, denoted here D(X), is the conceptual haven for sheaf cohomology. The adjunction between f^{-1} and f_* (which holds already on the level of sheaves of abelian groups, f^{-1} being the left adjoint) gives rise to an adjunction on derived categories, where Rf_* is the derived functor. This latter functor encompasses the notion of sheaf cohomology, since H^n(X, F) = R^n f_* F for f : X → {∗}. Like f_*, the direct image with compact support f_! can also be derived, and the resulting Rf_!F parametrizes the cohomology with compact support of the fibers of f; this is an example of a base change theorem. There is another adjunction, involving the twisted (or exceptional) inverse image functor f^!. Unlike all the functors considered above, f^! is in general only defined on the level of derived categories, i.e., it is not obtained as the derived functor of some functor between abelian categories. If f : X → {∗} and X is a smooth orientable manifold of dimension n, then f^! of the constant sheaf is a shift of it into degree n. This computation, and the compatibility of the functors with duality (see Verdier duality), can be used to obtain a high-brow explanation of Poincaré duality. In the context of quasi-coherent sheaves on schemes, there is a similar duality known as coherent duality. Perverse sheaves are certain objects in D(X), i.e., complexes of sheaves (but not in general sheaves proper).
They are an important tool in studying the geometry of singularities.[16] Another important application of derived categories of sheaves is the derived category of coherent sheaves on a scheme X, denoted D_{Coh}(X). This was used by Grothendieck in his development of intersection theory[17] using derived categories and K-theory: the intersection product of subschemes Y_1, Y_2 is represented in K-theory as

[Y_1] · [Y_2] = [O_{Y_1} ⊗^L_{O_X} O_{Y_2}] ∈ K(Coh(X)),

where the O_{Y_i} are coherent sheaves defined by the O_X-modules given by their structure sheaves. André Weil's Weil conjectures stated that there was a cohomology theory for algebraic varieties over finite fields that would give an analogue of the Riemann hypothesis. The cohomology of a complex manifold can be defined as the sheaf cohomology of the locally constant sheaf C̲ in the Euclidean topology, which suggests defining a Weil cohomology theory in positive characteristic as the sheaf cohomology of a constant sheaf. But the only classical topology on such a variety is the Zariski topology, and the Zariski topology has very few open sets, so few that the cohomology of any Zariski-constant sheaf on an irreducible variety vanishes (except in degree zero). Alexandre Grothendieck solved this problem by introducing Grothendieck topologies, which axiomatize the notion of covering. Grothendieck's insight was that the definition of a sheaf depends only on the open sets of a topological space, not on the individual points. Once he had axiomatized the notion of covering, open sets could be replaced by other objects.
A presheaf takes each one of these objects to data, just as before, and a sheaf is a presheaf that satisfies the gluing axiom with respect to the new notion of covering. This allowed Grothendieck to define étale cohomology and ℓ-adic cohomology, which were eventually used to prove the Weil conjectures. A category with a Grothendieck topology is called a site. A category of sheaves on a site is called a topos or a Grothendieck topos. The notion of a topos was later abstracted by William Lawvere and Miles Tierney to define an elementary topos, which has connections to mathematical logic. The first origins of sheaf theory are hard to pin down – they may be co-extensive with the idea of analytic continuation. It took about 15 years for a recognisable, free-standing theory of sheaves to emerge from the foundational work on cohomology. At this point sheaves had become a mainstream part of mathematics, with use by no means restricted to algebraic topology. It was later discovered that the logic in categories of sheaves is intuitionistic logic (this observation is now often referred to as Kripke–Joyal semantics, but probably should be attributed to a number of authors).
https://en.wikipedia.org/wiki/Sheaf_(mathematics)
X10 is a programming language being developed by IBM at the Thomas J. Watson Research Center as part of the Productive, Easy-to-use, Reliable Computing System (PERCS) project funded by DARPA's High Productivity Computing Systems (HPCS) program. Its primary authors are Kemal Ebcioğlu, Saravanan Arumugam (Aswath), Vijay Saraswat, and Vivek Sarkar.[1] X10 is designed specifically for parallel computing using the partitioned global address space (PGAS) model. A computation is divided among a set of places, each of which holds some data and hosts one or more activities that operate on those data. It has a constrained type system for object-oriented programming, a form of dependent types. Other features include user-defined primitive struct types, globally distributed arrays, and structured and unstructured parallelism.[2] X10 uses the concept of parent and child relationships for activities to prevent the lock stalemate that can occur when two or more processes wait for each other to finish before they can complete. An activity may spawn one or more child activities, which may themselves have children. Children cannot wait for a parent to finish, but a parent can wait for a child using the finish command.[3][4]
https://en.wikipedia.org/wiki/X10_(programming_language)
Structured concurrency is a programming paradigm aimed at improving the clarity, quality, and development time of a computer program by using a structured approach to concurrent programming. The core concept is the encapsulation of concurrent threads of execution (here encompassing kernel and userland threads and processes) by way of control-flow constructs that have clear entry and exit points and that ensure all spawned threads have completed before exit. Such encapsulation allows errors in concurrent threads to be propagated to the control structure's parent scope and managed by the native error-handling mechanisms of each particular computer language. It allows control flow to remain readily evident from the structure of the source code despite the presence of concurrency. To be effective, this model must be applied consistently throughout all levels of the program – otherwise concurrent threads may leak out, become orphaned, or fail to have runtime errors correctly propagated. Structured concurrency is analogous to structured programming, which uses control-flow constructs that encapsulate sequential statements and subroutines. The fork–join model from the 1960s, embodied by multiprocessing tools like OpenMP, is an early example of a system ensuring all threads have completed before exit. However, Smith argues that this model is not true structured concurrency, as the programming language is unaware of the joining behavior and is thus unable to enforce safety.[1] The concept was formulated in 2016 by Martin Sústrik (a developer of ZeroMQ) with his C library libdill, with goroutines as a starting point.[2] It was further refined in 2017 by Nathaniel J.
Smith, who introduced a "nursery pattern" in his Python implementation called Trio.[3] Meanwhile, Roman Elizarov independently came upon the same ideas while developing an experimental coroutine library for the Kotlin language,[4][5] which later became a standard library.[6] In 2021, Swift adopted structured concurrency.[7] Later that year, a draft proposal was published to add structured concurrency to Java.[8] A major point of variation is how an error in one member of a concurrent thread tree is handled. Simple implementations will merely wait until the children and siblings of the failing thread run to completion before propagating the error to the parent scope. However, that could take an indefinite amount of time. The alternative is to employ a general cancellation mechanism (typically a cooperative scheme allowing program invariants to be honored) to terminate the child and sibling threads in an expedient manner.
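The nursery pattern described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration built on the standard threading module (it is not Trio's API): a scope object guarantees that every thread spawned inside it has completed before the `with` block exits, and it re-raises the first child error in the parent scope.

```python
import threading

class Nursery:
    """Sketch of a structured-concurrency scope: clear entry and exit points,
    all spawned threads joined before exit, child errors propagated."""

    def __init__(self):
        self._threads = []
        self._errors = []
        self._lock = threading.Lock()

    def start_soon(self, fn, *args):
        def runner():
            try:
                fn(*args)
            except BaseException as e:  # capture the child's error for the parent
                with self._lock:
                    self._errors.append(e)
        t = threading.Thread(target=runner)
        self._threads.append(t)
        t.start()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        for t in self._threads:   # the single exit point: join every child
            t.join()
        if exc is None and self._errors:
            raise self._errors[0]  # propagate a child failure to this scope
        return False

results = []
with Nursery() as n:
    n.start_soon(results.append, 1)
    n.start_soon(results.append, 2)
# Both children are guaranteed to have finished here; no thread can leak out.
```

A production design would also need the cancellation mechanism discussed above, so that siblings of a failing thread are terminated promptly rather than merely awaited.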
https://en.wikipedia.org/wiki/Structured_concurrency
A content-addressable parallel processor (CAPP), also known as an associative processor,[1] is a type of parallel processor that uses content-addressable memory (CAM) principles. CAPPs are intended for bulk computation. The syntactic structure of their computing algorithms is simple, whereas the number of concurrent processes may be very large, limited only by the number of locations in the CAM. The best-known CAPP may be STARAN, completed in 1972; several similar systems were later built in other countries. A CAPP is distinctly different from a von Neumann architecture or classical computer, which stores data in cells addressed individually by numeric address. The CAPP executes a stream of instructions that address memory based on the content (stored values) of the memory cells. As a parallel processor, it acts on all of the cells containing that content at once. The content of all matching cells can be changed simultaneously. A typical CAPP might consist of an array of content-addressable memory of fixed word length, a sequential instruction store, and a general-purpose computer of the von Neumann architecture that is used to interface with peripherals.
https://en.wikipedia.org/wiki/Content_Addressable_Parallel_Processor
This is a selected list of international academic conferences in the fields of distributed computing, parallel computing, and concurrent computing. The conferences listed here are major conferences of the area; they have been selected using several criteria. For the first criterion, references are provided; criteria 2–3 are usually clear from the name of the conference.
https://en.wikipedia.org/wiki/List_of_distributed_computing_conferences
Loop-level parallelism is a form of parallelism in software programming that is concerned with extracting parallel tasks from loops. The opportunity for loop-level parallelism often arises in computing programs where data is stored in random-access data structures. Where a sequential program will iterate over the data structure and operate on indices one at a time, a program exploiting loop-level parallelism will use multiple threads or processes that operate on some or all of the indices at the same time. Such parallelism provides a speedup to the overall execution time of the program, typically in line with Amdahl's law. For simple loops, where each iteration is independent of the others, loop-level parallelism can be embarrassingly parallel, as parallelizing only requires assigning a process to handle each iteration. However, many algorithms are designed to run sequentially and fail when parallel processes race due to dependences within the code. Sequential algorithms are sometimes applicable to parallel contexts with slight modification. Usually, though, they require process synchronization. Synchronization can be either implicit, via message passing, or explicit, via synchronization primitives like semaphores. Consider a loop operating on a list L of length n, in which each iteration (statement S1) takes the value at the current index of L and increments it by 10. If statement S1 takes T time to execute, then the loop takes time n * T to execute sequentially, ignoring time taken by loop constructs. Now, consider a system with p processors where p > n. If n threads run in parallel, the time to execute all n steps is reduced to T. Less simple cases produce inconsistent, i.e. non-serializable, outcomes. Consider a second loop on the same list L, in which each iteration sets the current index to be the value of the previous index plus ten. When run sequentially, each iteration is guaranteed that the previous iteration will already have the correct value.
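The original code for these two loops did not survive extraction; the following is a hypothetical Python sketch of the two cases just described, keeping the statement label S1 as a comment.

```python
# Case 1: independent iterations. Each index of L could be handled by a
# separate thread, since no iteration reads another iteration's result.
L = [1, 2, 3, 4]
for i in range(len(L)):
    L[i] += 10            # S1: no loop-carried dependence
# L is now [11, 12, 13, 14]

# Case 2: dependent iterations. Each index needs the *updated* value of the
# previous index, so naive parallel execution races and is non-serializable.
M = [1, 2, 3, 4]
for i in range(1, len(M)):
    M[i] = M[i - 1] + 10  # S1: loop-carried true dependence on iteration i-1
# M is now [1, 11, 21, 31]
```

In the second loop, a thread executing iteration i before iteration i-1 has written M[i-1] would read a stale value, which is exactly the scheduling hazard discussed next.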
With multiple threads, process scheduling and other considerations prevent the execution order from guaranteeing that an iteration will execute only after its dependence is met. It may very well happen before, leading to unexpected results. Serializability can be restored by adding synchronization to preserve the dependence on previous iterations. There are several types of dependences that can be found within code.[1][2] In order to preserve the sequential behaviour of a loop when run in parallel, true dependence must be preserved. Anti-dependence and output dependence can be dealt with by giving each process its own copy of variables (known as privatization).[1] S2 ->T S3 means that S2 has a true dependence on S3 because S2 writes to the variable a, which S3 reads from. S2 ->A S3 means that S2 has an anti-dependence on S3 because S2 reads from the variable b before S3 writes to it. S2 ->O S3 means that S2 has an output dependence on S3 because both write to the variable a. S2 ->I S3 means that S2 has an input dependence on S3 because S2 and S3 both read from the variable c. Loops can have two types of dependence: loop-independent dependence and loop-carried dependence. In loop-independent dependence, loops have intra-iteration dependence but do not have dependence between iterations. Each iteration may be treated as a block and performed in parallel without other synchronization efforts. In code used for swapping the values of two arrays of length n, there is a loop-independent dependence S1 ->T S3. In loop-carried dependence, statements in an iteration of a loop depend on statements in another iteration of the loop. Loop-carried dependence uses a modified version of the dependence notation seen earlier. An example of loop-carried dependence is S1[i] ->T S1[i + 1], where i indicates the current iteration and i + 1 indicates the next iteration. A loop-carried dependence graph graphically shows the loop-carried dependences between iterations.
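The four dependence relations between S2 and S3 listed above can each be illustrated with a pair of statements. The snippets below are hypothetical Python stand-ins (the original statement snippets were lost in extraction); the initial values of b and c are chosen arbitrarily to make the fragments runnable.

```python
b, c = 1, 1  # arbitrary initial values so each pair is self-contained

# True dependence (S2 ->T S3): S2 writes a, which S3 then reads.
a = 2          # S2
b = a + 40     # S3  (reordering S2/S3 changes the result)

# Anti-dependence (S2 ->A S3): S2 reads b before S3 overwrites it.
a = b + 3      # S2
b = 10         # S3  (fixable by giving S2 its own private copy of b)

# Output dependence (S2 ->O S3): both statements write a.
a = 5          # S2
a = b + 1      # S3  (the final value of a depends on which runs last)

# Input dependence (S2 ->I S3): both statements only read c.
a = c + 1      # S2
b = c + 2      # S3  (safe to reorder; no constraint on parallel execution)
```

Only the true dependence forces an ordering; the anti and output dependences can be removed by privatization, as noted above.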
Each iteration is listed as a node on the graph, and directed edges show the true, anti, and output dependences between iterations. There are a variety of methodologies for parallelizing loops. Each implementation varies slightly in how threads synchronize, if at all. In addition, parallel tasks must somehow be mapped to processes. These tasks can be allocated either statically or dynamically. Research has shown that load balancing can be achieved better through some dynamic allocation algorithms than when done statically.[4] The process of parallelizing a sequential program can be broken down into a series of discrete steps;[1] each concrete loop-parallelization technique below implicitly performs them. When a loop has a loop-carried dependence, one way to parallelize it is to distribute the loop into several different loops. Statements that are not dependent on each other are separated so that the distributed loops can be executed in parallel. For example, consider a loop with the loop-carried dependence S1[i] ->T S1[i+1], in which S2 and S1 have no loop-independent dependence; the code can then be rewritten so that S1 and S2 run in two separate loops, loop1 and loop2, which can be executed in parallel. Instead of a single instruction being performed in parallel on different data, as in data-level parallelism, here different loops perform different tasks on different data. Say the execution times of S1 and S2 are T_{S1} and T_{S2}; then the execution time for the sequential form of the above code is n * (T_{S1} + T_{S2}). Because the two statements are split into two different loops, the execution time becomes n * T_{S1} + T_{S2}. We call this type of parallelism either function or task parallelism.
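The loop-distribution transformation described above might look like the following. This is a hypothetical Python sketch (the arrays a, b, c, d and their sizes are invented for illustration); the loops are shown sequentially, but after distribution loop1 and loop2 could run on different processors.

```python
n = 5
a = [1] * n
b = [2] * n
c = [0] * n
d = [3] * n

# Original fused loop (shown as comments):
# for i in range(1, n):
#     a[i] = a[i - 1] + b[i]   # S1: loop-carried true dependence
#     c[i] = c[i] + d[i]       # S2: independent of S1 and of other iterations

# After distribution, the statements live in separate loops:
for i in range(1, n):            # loop1: S1, still sequential internally
    a[i] = a[i - 1] + b[i]
for i in range(1, n):            # loop2: S2, a DOALL candidate
    c[i] = c[i] + d[i]
```

Because S2 never touches a or b, loop2 can start without waiting for loop1, which is what makes this a form of task parallelism.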
DOALL parallelism exists when statements within a loop can be executed independently (situations where there is no loop-carried dependence).[1] For example, in a loop whose body does not read from the array a and does not update the arrays b and c, no iteration has a dependence on any other iteration. Say the time of one execution of S1 is T_{S1}; then the execution time for the sequential form of the above code is n * T_{S1}. Because all iterations are independent, speedup may be achieved by executing all iterations in parallel, giving an execution time of T_{S1}, the time taken for one iteration of the sequential execution. Using simplified pseudocode, such a loop can be parallelized simply by executing each iteration independently. DOACROSS parallelism exists where iterations of a loop are parallelized by extracting calculations that can be performed independently and running them simultaneously.[5] Synchronization exists to enforce loop-carried dependence. Consider a synchronous loop with the dependence S1[i] ->T S1[i+1], in which each iteration calculates the value a[i-1] + b[i] + 1 and then assigns it to a[i]. This can be decomposed into two statements, S1 and S2. The first, int tmp = b[i] + 1;, has no loop-carried dependence. The loop can then be parallelized by computing the temporary value in parallel and synchronizing only the assignment to a[i]. Say the execution times of S1 and S2 are T_{S1} and T_{S2}; then the execution time for the sequential form of the above code is n * (T_{S1} + T_{S2}). Because DOACROSS parallelism exists, speedup may be achieved by executing iterations in a pipelined fashion, giving an execution time of T_{S1} + n * T_{S2}.
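The DOACROSS decomposition just described can be sketched as follows. This hypothetical Python version (array contents invented for illustration) shows the split into S1, which is free of loop-carried dependence and could run across iterations in parallel, and S2, which remains serialized by the dependence on a[i-1].

```python
n = 5
a = [0] * n
b = list(range(n))     # b = [0, 1, 2, 3, 4], arbitrary sample data

# Original loop:  a[i] = a[i-1] + b[i] + 1   (loop-carried true dependence)

tmp = [0] * n
for i in range(1, n):
    tmp[i] = b[i] + 1          # S1: independent -> all iterations parallelizable

for i in range(1, n):
    a[i] = a[i - 1] + tmp[i]   # S2: must wait for iteration i-1's a[i-1]
```

In a real DOACROSS schedule the two statements of iteration i overlap with other iterations, so only the chain of S2 assignments runs in series, matching the T_{S1} + n * T_{S2} estimate above.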
DOPIPE parallelism implements pipelined parallelism for loop-carried dependence, where a loop iteration is distributed over multiple, synchronized loops.[1] The goal of DOPIPE is to act like an assembly line, where one stage is started as soon as there is sufficient data available for it from the previous stage.[6] Consider synchronous code with the dependence S1[i] ->T S1[i+1], in which S1 must be executed sequentially but S2 has no loop-carried dependence. S2 could be executed in parallel using DOALL parallelism after performing all calculations needed by S1 in series, but the speedup is limited if this is done. A better approach is to parallelize such that the S2 corresponding to each S1 executes as soon as that S1 has finished. Implementing pipelined parallelism results in a set of two loops, where the second loop may execute for an index as soon as the first loop has finished its corresponding index. Say the execution times of S1 and S2 are T_{S1} and T_{S2}; then the execution time for the sequential form of the above code is n * (T_{S1} + T_{S2}). Because DOPIPE parallelism exists, speedup may be achieved by executing iterations in a pipelined fashion, giving an execution time of n * T_{S1} + (n/p) * T_{S2}, where p is the number of processors in parallel.
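The two pipeline stages described above can be sketched with two threads and a queue. This is a hypothetical Python illustration (array contents and the statements S1, S2 are invented for the example): stage 2 consumes each index as soon as stage 1 has produced it, rather than waiting for the whole first loop to finish.

```python
import queue
import threading

n = 5
a = [1] * n
b = [2] * n
c = [3] * n

ready = queue.Queue()  # signals stage2 that a[i] is final

def stage1():
    # S1: a[i] = a[i-1] + b[i] has a loop-carried dependence, so it runs in order.
    for i in range(1, n):
        a[i] = a[i - 1] + b[i]
        ready.put(i)           # hand index i down the pipeline immediately

def stage2():
    # S2: c[i] = c[i] + a[i] has no loop-carried dependence; it only needs
    # the finished a[i], which the queue guarantees.
    for _ in range(1, n):
        i = ready.get()
        c[i] = c[i] + a[i]

t1 = threading.Thread(target=stage1)
t2 = threading.Thread(target=stage2)
t1.start(); t2.start()
t1.join(); t2.join()
```

The queue provides exactly the per-index synchronization DOPIPE requires: stage2's read of a[i] happens after stage1's write, while the two stages otherwise overlap like an assembly line.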
https://en.wikipedia.org/wiki/Loop-level_parallelism
Dataflow architecture is a dataflow-based computer architecture that directly contrasts with the traditional von Neumann architecture or control flow architecture. Dataflow architectures have no program counter, in concept: the executability and execution of instructions is determined solely by the availability of input arguments to the instructions,[1] so that the order of instruction execution may be hard to predict. Although no commercially successful general-purpose computer hardware has used a dataflow architecture, it has been successfully implemented in specialized hardware such as digital signal processing, network routing, graphics processing, telemetry, and more recently in data warehousing and artificial intelligence (as: polymorphic dataflow[2] Convolution Engine,[3] structure-driven,[4] dataflow scheduling[5]). It is also very relevant in many software architectures today, including database engine designs and parallel computing frameworks.[citation needed] Synchronous dataflow architectures tune to match the workload presented by real-time data path applications such as wire-speed packet forwarding. Dataflow architectures that are deterministic in nature enable programmers to manage complex tasks such as processor load balancing, synchronization, and accesses to common resources.[6] Meanwhile, there is a clash of terminology, since the term dataflow is also used for a subarea of parallel programming: dataflow programming. Hardware architectures for dataflow were a major topic in computer architecture research in the 1970s and early 1980s. Jack Dennis of MIT pioneered the field of static dataflow architectures, while the Manchester Dataflow Machine[7] and the MIT Tagged Token architecture were major projects in dynamic dataflow. The research, however, never overcame one central problem: instructions and their data dependencies proved to be too fine-grained to be effectively distributed in a large network.
That is, the time for the instructions and tagged results to travel through a large connection network was longer than the time needed to do many computations. Maurice Wilkes wrote in 1995 that "Data flow stands apart as being the most radical of all approaches to parallelism and the one that has been least successful. ... If any practical machine based on data flow ideas and offering real power ever emerges, it will be very different from what the originators of the concept had in mind."[8]

Out-of-order execution (OOE) has become the dominant computing paradigm since the 1990s. It is a form of restricted dataflow. This paradigm introduced the idea of an execution window. The execution window follows the sequential order of the von Neumann architecture, but within the window, instructions are allowed to be completed in data-dependency order. This is accomplished in CPUs that dynamically tag the data dependencies of the code in the execution window. The logical complexity of dynamically keeping track of the data dependencies restricts OOE CPUs to a small number of execution units (2–6) and limits the execution window sizes to the range of 32 to 200 instructions, much smaller than envisioned for full dataflow machines.[citation needed]

Designs that use conventional memory addresses as data dependency tags are called static dataflow machines. These machines did not allow multiple instances of the same routines to be executed simultaneously, because the simple tags could not differentiate between them. Designs that use content-addressable memory (CAM) are called dynamic dataflow machines. They use tags in memory to facilitate parallelism.

Normally, in the control flow architecture, compilers analyze program source code for data dependencies between instructions in order to better organize the instruction sequences in the binary output files. The instructions are organized sequentially, but the dependency information itself is not recorded in the binaries.
Binaries compiled for a dataflow machine contain this dependency information. A dataflow compiler records these dependencies by creating unique tags for each dependency instead of using variable names. Giving each dependency a unique tag allows the non-dependent code segments in the binary to be executed out of order and in parallel. The compiler also detects loops, break statements, and other control syntax for data flow.

Programs are loaded into the CAM of a dynamic dataflow computer. When all of the tagged operands of an instruction become available (that is, output from previous instructions and/or user input), the instruction is marked as ready for execution by an execution unit. This is known as activating or firing the instruction. Once an instruction is completed by an execution unit, its output data is sent (with its tag) to the CAM. Any instructions that are dependent upon this particular datum (identified by its tag value) are then marked as ready for execution. In this way, subsequent instructions are executed in proper order, avoiding race conditions. This order may differ from the sequential order envisioned by the human programmer, the programmed order.

An instruction, along with its required data operands, is transmitted to an execution unit as a packet, also called an instruction token. Similarly, output data is transmitted back to the CAM as a data token. The packetization of instructions and results allows for parallel execution of ready instructions on a large scale. Dataflow networks deliver the instruction tokens to the execution units and return the data tokens to the CAM. In contrast to the conventional von Neumann architecture, data tokens are not permanently stored in memory; rather, they are transient messages that only exist when in transit to the instruction storage.
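The firing rule described above can be illustrated with a minimal software sketch: each instruction names the tags of its input operands and the tag of its output, and it fires as soon as all of its inputs have arrived, regardless of program order. The tag names, the dictionary standing in for the CAM, and the instruction tuples are our own illustrative choices, not features of any real dataflow machine.

```python
# Minimal sketch of dynamic dataflow "firing": an instruction becomes ready
# as soon as all of its tagged input operands are available in the token store.

def run_dataflow(instructions, initial_tokens):
    tokens = dict(initial_tokens)        # tag -> value (stands in for the CAM)
    pending = list(instructions)
    while pending:
        for instr in list(pending):
            op, inputs, out = instr
            if all(tag in tokens for tag in inputs):  # all operands arrived?
                vals = [tokens[t] for t in inputs]
                tokens[out] = op(*vals)               # fire; emit a data token
                pending.remove(instr)
    return tokens

# Compute (a + b) * (c - d); the instructions are deliberately listed out of
# "program order" -- execution order is driven purely by data availability.
program = [
    (lambda x, y: x * y, ("t1", "t2"), "result"),  # can only fire last
    (lambda x, y: x - y, ("c", "d"), "t2"),
    (lambda x, y: x + y, ("a", "b"), "t1"),
]
toks = run_dataflow(program, {"a": 2, "b": 3, "c": 10, "d": 4})
print(toks["result"])  # 30
```

Note how the multiply instruction, although listed first, cannot fire until the tokens tagged "t1" and "t2" have been produced, mirroring the tag-matching behaviour of dynamic dataflow machines.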
https://en.wikipedia.org/wiki/Dataflow_architecture
Manycore processors are special kinds of multi-core processors designed for a high degree of parallel processing, containing numerous simpler, independent processor cores (from a few tens of cores to thousands or more). Manycore processors are used extensively in embedded computers and high-performance computing.

Manycore processors are distinct from multi-core processors in being optimized from the outset for a higher degree of explicit parallelism, and for higher throughput (or lower power consumption) at the expense of latency and lower single-thread performance. The broader category of multi-core processors, by contrast, are usually designed to efficiently run both parallel and serial code, and therefore place more emphasis on high single-thread performance (e.g. devoting more silicon to out-of-order execution, deeper pipelines, more superscalar execution units, and larger, more general caches) and on shared memory. These techniques devote runtime resources toward figuring out implicit parallelism in a single thread. They are used in systems where they have evolved continuously (with backward compatibility) from single-core processors. They usually have a 'few' cores (e.g. 2, 4, 8) and may be complemented by a manycore accelerator (such as a GPU) in a heterogeneous system.

Cache coherency is an issue limiting the scaling of multicore processors. Manycore processors may bypass this with methods such as message passing,[1] scratchpad memory, DMA,[2] partitioned global address space,[3] or read-only/non-coherent caches. A manycore processor using a network on a chip and local memories gives software the opportunity to explicitly optimise the spatial layout of tasks (e.g. as seen in tooling developed for TrueNorth).[4]

Manycore processors may have more in common (conceptually) with technologies originating in high-performance computing such as clusters and vector processors.[5] GPUs may be considered a form of manycore processor, having multiple shader processing units and only being suitable for highly parallel code (high throughput, but extremely poor single-thread performance).

A number of computers built from multicore processors have one million or more individual CPU cores, and quite a few supercomputers have over 5 million CPU cores. If coprocessor cores (e.g. those of attached GPUs) were also counted, quite a few more computers would reach those totals.
https://en.wikipedia.org/wiki/Manycore_processor
In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality: how well a range of different problems can be expressed for a variety of different architectures, and its performance: how efficiently the compiled programs can execute.[1] The implementation of a parallel programming model can take the form of a library invoked from a programming language, or of an extension to an existing language.

Consensus around a particular programming model is important because it leads to different parallel computers being built with support for the model, thereby facilitating portability of software. In this sense, programming models are referred to as bridging between hardware and software.[2]

Classifications of parallel programming models can be divided broadly into two areas: process interaction and problem decomposition.[3][4][5] Process interaction relates to the mechanisms by which parallel processes are able to communicate with each other. The most common forms of interaction are shared memory and message passing, but interaction can also be implicit (invisible to the programmer).

Shared memory is an efficient means of passing data between processes. In a shared-memory model, parallel processes share a global address space that they read and write to asynchronously. Asynchronous concurrent access can lead to race conditions, and mechanisms such as locks, semaphores and monitors can be used to avoid these. Conventional multi-core processors directly support shared memory, which many parallel programming languages and libraries, such as Cilk, OpenMP and Threading Building Blocks, are designed to exploit.

In a message-passing model, parallel processes exchange data by passing messages to one another.
These communications can be asynchronous, where a message can be sent before the receiver is ready, or synchronous, where the receiver must be ready. The Communicating sequential processes (CSP) formalisation of message passing uses synchronous communication channels to connect processes, and led to important languages such as Occam, Limbo and Go. In contrast, the actor model uses asynchronous message passing and has been employed in the design of languages such as D, Scala and SALSA.

Partitioned global address space (PGAS) models provide a middle ground between shared memory and message passing. PGAS provides a global memory address space abstraction that is logically partitioned, with a portion local to each process. Parallel processes communicate by asynchronously performing operations (e.g. reads and writes) on the global address space, in a manner reminiscent of shared-memory models. However, by semantically partitioning the global address space into portions, each with affinity to a particular process, PGAS models allow programmers to exploit locality of reference and enable efficient implementation on distributed-memory parallel computers. PGAS is offered by many parallel programming languages and libraries, such as Fortran 2008, Chapel, UPC++, and SHMEM.

In an implicit model, no process interaction is visible to the programmer; instead, the compiler and/or runtime is responsible for performing it. Two examples of implicit parallelism are domain-specific languages, where the concurrency within high-level operations is prescribed, and functional programming languages, because the absence of side effects allows non-dependent functions to be executed in parallel.[6] However, this kind of parallelism is difficult to manage,[7] and functional languages such as Concurrent Haskell and Concurrent ML provide features to manage parallelism explicitly and correctly.

A parallel program is composed of simultaneously executing processes.
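The two main interaction styles above can be contrasted in a short Python sketch: in the shared-memory half, two threads update one counter and a lock prevents the race condition that unsynchronized read-modify-write would cause; in the message-passing half, the same work is expressed as messages on a channel, so the receiver owns the state and no lock is needed. The counter, channel, and sentinel convention are illustrative choices of ours.

```python
import threading
import queue

# --- Shared-memory interaction: a lock guards concurrent updates ---
counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # without the lock, increments could be lost
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000

# --- Message-passing interaction: state is owned by the receiver ---
chan = queue.Queue()

def sender(n):
    for _ in range(n):
        chan.put(1)         # each message carries one unit of work
    chan.put(None)          # end-of-stream sentinel

total = 0
threading.Thread(target=sender, args=(10_000,)).start()
while (msg := chan.get()) is not None:
    total += msg
print(total)  # 10000
```

The queue here plays the role of an asynchronous channel (the sender never waits for the receiver to be ready); a CSP-style synchronous channel would instead block the sender until the receiver takes the message.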
Problem decomposition relates to the way in which the constituent processes are formulated.[8][5]

A task-parallel model focuses on processes, or threads of execution. These processes will often be behaviourally distinct, which emphasises the need for communication. Task parallelism is a natural way to express message-passing communication. In Flynn's taxonomy, task parallelism is usually classified as MIMD/MPMD or MISD.

A data-parallel model focuses on performing operations on a data set, typically a regularly structured array. A set of tasks will operate on this data, but independently, on disjoint partitions. In Flynn's taxonomy, data parallelism is usually classified as MIMD/SPMD or SIMD.

Stream parallelism, also known as pipeline parallelism, focuses on dividing a computation into a sequence of stages, where each stage processes a portion of the input data. Each stage operates independently and concurrently, and the output of one stage serves as the input to the next stage. Stream parallelism is particularly suitable for applications with continuous data streams or pipelined computations.

As with implicit process interaction, an implicit model of parallelism reveals nothing to the programmer, as the compiler, the runtime or the hardware is responsible. For example, in compilers, automatic parallelization is the process of converting sequential code into parallel code, and in computer architecture, superscalar execution is a mechanism whereby instruction-level parallelism is exploited to perform operations in parallel.

Parallel programming models are closely related to models of computation. A model of parallel computation is an abstraction used to analyze the cost of computational processes, but it does not necessarily need to be practical, in that it need not be efficiently implementable in hardware and/or software.
A programming model, in contrast, does specifically imply the practical considerations of hardware and software implementation.[9] A parallel programming language may be based on one or a combination of programming models. For example, High Performance Fortran is based on shared-memory interactions and data-parallel problem decomposition, and Go provides mechanisms for shared-memory and message-passing interaction.
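The data-parallel decomposition described above can be sketched in a few lines of Python: the same operation is applied independently to disjoint partitions of one data set, and the partial results are then combined. The partitioning scheme and the worker function are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# Data-parallel sketch: one operation, applied independently to disjoint
# partitions of a single data set, with the partial results combined at the end.
def partial_sum(chunk):
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]   # 4 disjoint partitions (strided)
with ThreadPoolExecutor(max_workers=4) as ex:
    total = sum(ex.map(partial_sum, chunks))

print(total == sum(x * x for x in data))  # True
```

A task-parallel program would instead run behaviourally distinct functions in each worker; here every worker runs the same code on different data, which is what places this sketch on the SPMD side of Flynn's taxonomy.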
https://en.wikipedia.org/wiki/Parallel_programming_model
The parallelization contract, or PACT, programming model is a generalization of the MapReduce programming model and uses second-order functions to perform concurrent computations on large (petabyte-scale) data sets in parallel. As in MapReduce, arbitrary user code is handed to and executed by PACTs. However, PACT generalizes several of MapReduce's concepts. Apache Flink, an open-source parallel data processing platform, has implemented PACTs. Flink allows users to specify user functions with annotations.

Parallelization contracts (PACTs) are data processing operators in a data flow. A PACT therefore has one or more data inputs and one or more outputs. A PACT consists of two components: an Input Contract and user code, which may carry optional code annotations. The figure below shows how those components work together. Input Contracts split the input data into independently processable subsets. The user code is called for each of these independent subsets. All calls can be executed in parallel, because the subsets are independent. Optionally, the user code can be annotated with additional information. These annotations disclose some information on the behavior of the black-box user function. The PACT compiler can utilize the information to obtain more efficient execution plans. However, while a missing annotation will not change the result of the execution, an incorrect Output Contract produces wrong results. The currently supported Input Contracts and annotations are presented and discussed in the following.

Input Contracts split the input data of a PACT into independently processable subsets that are handed to the user function of the PACT. Input Contracts vary in the number of data inputs and in the way the independent subsets are generated. More formally, Input Contracts are second-order functions whose parameters are a first-order function (the user code), one or more input sets, and zero or more key fields per input. The first-order function is called one or more times with subsets of the input set(s).
Since the first-order functions have no side effects, each call is independent from every other, and all calls can be done in parallel. The second-order functions map() and reduce() of the MapReduce programming model are Input Contracts in the context of the PACT programming model.

The Map Input Contract works in the same way as in MapReduce. It has a single input and assigns each input record to its own subset. Hence, all records are processed independently from each other.

The Reduce Input Contract has the same semantics as the reduce function in MapReduce. It has a single input and groups together all records that have identical key fields. Each of these groups is handed as a whole to the user code and processed by it (see figure below). The PACT programming model also supports optional Combiners, e.g. for partial aggregations.

The Cross Input Contract works on two inputs. It builds the Cartesian product of the records of both inputs. Each element of the Cartesian product (pair of records) is handed to the user code.

The Match Input Contract works on two inputs. It matches those records from different inputs that are identical on their key fields. Hence, it resembles an equality join where the keys of both inputs are the attributes to join on. Each matched pair of records is handed to the user code.

The CoGroup Input Contract works on two inputs as well. It can be seen as a Reduce on two inputs. On each input, the records are grouped by key (as Reduce does) and handed to the user code. In contrast to Match, the user code is also called for a key if only one input has records with it.

In contrast to MapReduce, PACT uses a more generic data model of records (the Pact Record) to pass data between functions. The Pact Record can be thought of as a tuple with a free schema. The interpretation of the fields of a record is up to the user function.
A key/value pair (as in MapReduce) is a special case of such a record with only two fields (the key and the value). For Input Contracts that operate on keys (like Reduce, Match, or CoGroup), one specifies which combination of the record's fields makes up the key. An arbitrary combination of fields may be used. See the Query Example for how programs that define Reduce and Match contracts on one or more fields can be written to move as little data as possible between fields. The record may be sparsely filled, i.e. it may have fields with null values. It is legal to produce a record where, for example, only fields 2 and 5 are set; fields 1, 3, and 4 are interpreted to be null. Fields that are used by a contract as key fields may, however, not be null, or an exception is raised.

User code annotations are optional in the PACT programming model. They allow the developer to make certain behaviors of the user code explicit to the optimizer. The PACT optimizer can utilize that information to obtain more efficient execution plans. Omitting a valid annotation will not impact the correctness of the result; on the other hand, invalidly specified annotations might cause the computation of wrong results. In the following, we list the current set of available Output Contracts.

The Constant Fields annotation marks fields that are not modified by the user code function. Note that for every input record, a constant field may not change its content or position in any output record! In the case of binary second-order functions such as Cross, Match, and CoGroup, the user can specify one annotation per input.

The Constant Fields Except annotation is the inverse of the Constant Fields annotation. It annotates all fields that might be modified by the annotated user function; hence the optimizer considers any non-annotated field as constant. This annotation should be used very carefully!
Again, for binary second-order functions (Cross, Match, CoGroup), one annotation per input can be defined. Note that either the Constant Fields or the Constant Fields Except annotation may be used for a given input, but not both.

PACT programs are constructed as data flow graphs that consist of data sources, PACTs, and data sinks. One or more data sources read files that contain the input data and generate records from those files. Those records are processed by one or more PACTs, each consisting of an Input Contract, user code, and optional code annotations. Finally, the results are written back to output files by one or more data sinks. In contrast to the MapReduce programming model, a PACT program can be arbitrarily complex and has no fixed structure. The figure below shows a PACT program with two data sources, four PACTs, and one data sink. Each data source reads data from a specified location in the file system. Both sources forward the data to respective PACTs with Map Input Contracts. (The user code is not shown in the figure.) The output of both Map PACTs streams into a PACT with a Match Input Contract. The last PACT has a Reduce Input Contract and forwards its result to the data sink.

[Figure: pactProgram.png — an example PACT program]

For a more detailed comparison of the MapReduce and PACT programming models, you can read our paper "MapReduce and PACT - Comparing Data Parallel Programming Models" (see our page).
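The Input Contracts described above are second-order functions, and their semantics can be sketched as plain Python functions over lists of records. The function names, record shapes, and key selectors below are our own illustrative choices, not the API of any PACT implementation.

```python
from collections import defaultdict

def pact_map(user_fn, records):
    # Map: every record is its own independently processable subset.
    return [out for r in records for out in user_fn(r)]

def pact_reduce(user_fn, records, key):
    # Reduce: group records with identical key fields; hand each group whole.
    groups = defaultdict(list)
    for r in records:
        groups[key(r)].append(r)
    return [out for g in groups.values() for out in user_fn(g)]

def pact_match(user_fn, left, right, key_l, key_r):
    # Match: equi-join; pair up records from different inputs with equal keys.
    index = defaultdict(list)
    for r in right:
        index[key_r(r)].append(r)
    return [out for l in left for r in index[key_l(l)] for out in user_fn(l, r)]

# Usage: word count via Map + Reduce, then a Match against a lookup table.
words = pact_map(lambda line: [(w, 1) for w in line.split()],
                 ["a b a", "b a"])
counts = pact_reduce(lambda grp: [(grp[0][0], sum(c for _, c in grp))],
                     words, key=lambda r: r[0])
print(sorted(counts))  # [('a', 3), ('b', 2)]

table = [("a", "vowel"), ("b", "consonant")]
joined = pact_match(lambda c, t: [(c[0], c[1], t[1])],
                    counts, table,
                    key_l=lambda r: r[0], key_r=lambda r: r[0])
print(sorted(joined))  # [('a', 3, 'vowel'), ('b', 2, 'consonant')]
```

Because each user-function call receives an independent subset (a single record, a key group, or a matched pair), every call in these sketches could in principle run in parallel, which is exactly the property the contracts are designed to expose.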
https://en.wikipedia.org/wiki/Parallelization_contract
In the fields of databases and transaction processing (transaction management), a schedule (or history) of a system is an abstract model used to describe the order of executions in a set of transactions running in the system. Often it is a list of operations (actions) ordered by time, performed by a set of transactions that are executed together in the system. If the order in time between certain operations is not determined by the system, then a partial order is used. Examples of such operations are requesting a read operation, reading, writing, aborting, committing, requesting a lock, locking, and so on. Often, only a subset of the transaction operation types are included in a schedule. Schedules are fundamental concepts in database concurrency control theory. In practice, most general-purpose database systems employ conflict-serializable and strict recoverable schedules.

A schedule is commonly written in a grid notation, with one column per transaction and the operations (actions) ordered by time. Alternatively, a schedule can be represented with a directed acyclic graph (DAG) in which there is an arc (i.e., directed edge) between each ordered pair of operations.

The following is an example of a schedule, in which the columns represent the different transactions. Schedule D consists of three transactions T1, T2, T3. First T1 reads and writes object X and then commits. Then T2 reads and writes object Y and commits, and finally T3 reads and writes object Z and commits. Schedule D can be represented as a list in the following way:

D = R1(X) W1(X) Com1 R2(Y) W2(Y) Com2 R3(Z) W3(Z) Com3

Usually, for the purpose of reasoning about concurrency control in databases, an operation is modelled as atomic, occurring at a point in time, without duration. Real executed operations always have some duration. Operations of transactions in a schedule can interleave (i.e., transactions can be executed concurrently), but the time order between the operations of each individual transaction must remain unchanged.
A schedule is a partial order when the operations of its transactions interleave, and a total order when they do not (i.e., when the schedule is serial). A complete schedule is one that contains either an abort (a.k.a. rollback) or a commit action for each of its transactions. A transaction's last action is either to commit or to abort. To maintain atomicity, a transaction must undo all its actions if it is aborted.

A schedule is serial if the executed transactions are non-interleaved (i.e., a serial schedule is one in which no transaction starts until a running transaction has ended). Schedule D is an example of a serial schedule. A schedule is serializable if it is equivalent (in its outcome) to a serial schedule. In schedule E, the order in which the actions of the transactions are executed is not the same as in D, but in the end, E gives the same result as D. Serializability is used to keep the data in the data items in a consistent state. It is the major criterion for the correctness of a schedule of concurrent transactions, and is thus supported in all general-purpose database systems. Schedules that are not serializable are likely to generate erroneous outcomes, which can be extremely harmful (e.g., when dealing with money within banks).[1][2][3]

If any specific order between some transactions is requested by an application, then it is enforced independently of the underlying serializability mechanisms. These mechanisms are typically indifferent to any specific order, and generate some unpredictable partial order that is typically compatible with multiple serial orders of these transactions.

Two actions are said to be in conflict (a conflicting pair) if and only if all three of the following conditions are satisfied: the actions belong to different transactions, they operate on the same data item, and at least one of them is a write. Equivalently, two actions are considered conflicting if and only if they are noncommutative.
Equivalently, two actions are considered conflicting if and only if they form a read-write, write-read, or write-write conflict. For example, R1(X) and W2(X) are conflicting, while pairs such as R1(X) and R2(X), or R1(X) and W2(Y), are not. Reducing conflicts, such as through commutativity, enhances performance, because conflicts are the fundamental cause of delays and aborts.

A conflict is materialized if the requested conflicting operation is actually executed. In many cases, a conflicting operation requested or issued by a transaction is delayed, and possibly never executed, typically by a lock on the operation's object held by another transaction, or because the write goes to a transaction's temporary private workspace and is materialized, i.e. copied to the database itself, only upon commit. As long as a requested/issued conflicting operation is not executed upon the database itself, the conflict is non-materialized; non-materialized conflicts are not represented by an edge in the precedence graph.

Two schedules S1 and S2 are said to be conflict-equivalent if and only if they consist of the same actions and every pair of conflicting actions occurs in the same order in both. Equivalently, two schedules are conflict-equivalent if and only if one can be transformed into the other by swapping pairs of non-conflicting operations (whether adjacent or not) while maintaining the order of actions within each transaction.[4] Equivalently, two schedules are conflict-equivalent if and only if one can be transformed into the other by swapping pairs of non-conflicting adjacent operations of different transactions.[7]

A schedule is said to be conflict-serializable when it is conflict-equivalent to one or more serial schedules. Equivalently, a schedule is conflict-serializable if and only if its precedence graph is acyclic when only committed transactions are considered. Note that if the graph is defined to also include uncommitted transactions, then cycles involving uncommitted transactions may occur without a conflict-serializability violation.
The schedule K is conflict-equivalent to the serial schedule <T1, T2>, but not to <T2, T1>. Conflict serializability can be enforced by restarting any transaction within a cycle in the precedence graph, or by implementing two-phase locking, timestamp ordering, or serializable snapshot isolation.[8]

Two schedules S1 and S2 are said to be view-equivalent when the following conditions are satisfied: both schedules involve the same set of transactions (such that each transaction has the same actions in the same order), every read operation reads the value produced by the same write (or the same initial value) in both schedules, and the final write to each data item is performed by the same transaction in both schedules. In the example below, the schedules S1 and S2 are view-equivalent, but neither S1 nor S2 is view-equivalent to the schedule S3, because the conditions for S3 to be view-equivalent to S1 and S2 are not satisfied at the corresponding superscripts. To quickly analyze whether two schedules are view-equivalent, write both schedules as lists, with each action's subscript representing which view-equivalence condition it matches: the schedules are view-equivalent if and only if all the actions have the same subscript (or lack thereof) in both schedules.

A schedule is view-serializable if it is view-equivalent to some serial schedule. Note that by definition, all conflict-serializable schedules are view-serializable. The above example (the same as the example in the discussion of conflict serializability) is both view-serializable and conflict-serializable. There are, however, view-serializable schedules that are not conflict-serializable: schedules in which a transaction performs a blind write. Such a schedule is not conflict-serializable, but it is view-serializable, since it has a view-equivalent serial schedule <T1, T2, T3>. Since determining whether a schedule is view-serializable is NP-complete, view-serializability has little practical interest.[citation needed]

In a recoverable schedule, transactions only commit after all transactions whose changes they read have committed.
A schedule becomes unrecoverable if a transaction T_i reads and relies on changes from another transaction T_j, and then T_i commits while T_j aborts.

Schedules F and F2 are recoverable. Schedule F is recoverable because T1 commits before T2, which makes the value read by T2 correct; then T2 can commit itself. In schedule F2, if T1 aborts, T2 has to abort as well, because the value of A it read is incorrect. In both cases, the database is left in a consistent state.

Schedule J is unrecoverable because T2 committed before T1 despite previously reading a value written by T1. Because T1 aborted after T2 committed, the value read by T2 is wrong, and since a transaction cannot be rolled back after it commits, the schedule is unrecoverable.

Cascadeless schedules (a.k.a. "avoiding cascading aborts (ACA)" schedules) are schedules that avoid cascading aborts by disallowing dirty reads. Cascading aborts occur when one transaction's abort causes another transaction to abort, because the latter read and relied on the first transaction's changes to an object. A dirty read occurs when a transaction reads data from an uncommitted write in another transaction.[9]

The following examples are the same as the ones in the discussion on recoverability. Although F2 is recoverable, it does not avoid cascading aborts: if T1 aborts, T2 will have to be aborted too in order to maintain the correctness of the schedule, as T2 has already read the uncommitted value written by T1. Schedule F3 is a recoverable schedule that avoids cascading aborts. Note, however, that the update of A by T1 is always lost (since T1 is aborted), and that this schedule would not be serializable if T1 were committed. Avoidance of cascading aborts is sufficient, but not necessary, for a schedule to be recoverable.
A schedule is strict if, for any two transactions T1 and T2, whenever a write operation of T1 precedes a conflicting operation of T2 (either a read or a write), the commit or abort event of T1 also precedes that conflicting operation of T2. For example, schedule F3 above is strict. Any strict schedule is cascadeless, but not the converse. Strictness allows efficient recovery of databases from failure.

The classes form hierarchical (containment) relationships: serial ⊂ strict ⊂ cascadeless (ACA) ⊂ recoverable ⊂ all schedules, and likewise serial ⊂ conflict-serializable ⊂ view-serializable ⊂ all schedules. A Venn diagram can illustrate these relationships between the serializability and recoverability classes graphically.
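The precedence-graph test for conflict serializability described above can be sketched directly in Python: build an edge Ti → Tj for every conflicting pair in which Ti's operation comes first, then check the graph for cycles. The schedule encoding as (transaction, operation, object) triples is our own; commits and aborts are omitted for brevity.

```python
def precedence_graph(schedule):
    """Edges Ti -> Tj for each conflicting pair where Ti's action comes first."""
    edges = set()
    for i, (ti, op_i, x) in enumerate(schedule):
        for tj, op_j, y in schedule[i + 1:]:
            # conflicting pair: same object, different transactions,
            # and at least one of the two operations is a write
            if x == y and ti != tj and "W" in (op_i, op_j):
                edges.add((ti, tj))
    return edges

def is_conflict_serializable(schedule):
    """A schedule is conflict-serializable iff its precedence graph is acyclic."""
    edges = precedence_graph(schedule)
    nodes = {t for t, _, _ in schedule}
    while nodes:
        # repeatedly remove nodes with no incoming edge (Kahn's algorithm)
        free = [n for n in nodes if all(d != n for _, d in edges)]
        if not free:
            return False            # a cycle remains: not conflict-serializable
        for n in free:
            nodes.discard(n)
            edges = {(s, d) for s, d in edges if s != n}
    return True

# Schedule D from the text: serial, hence trivially conflict-serializable.
D = [("T1", "R", "X"), ("T1", "W", "X"), ("T2", "R", "Y"), ("T2", "W", "Y"),
     ("T3", "R", "Z"), ("T3", "W", "Z")]
print(is_conflict_serializable(D))  # True

# A classic non-serializable interleaving: W1(X) W2(X) W2(Y) W1(Y).
G = [("T1", "W", "X"), ("T2", "W", "X"), ("T2", "W", "Y"), ("T1", "W", "Y")]
print(is_conflict_serializable(G))  # False
```

In the second schedule, W1(X) before W2(X) forces T1 before T2, while W2(Y) before W1(Y) forces T2 before T1; the resulting cycle is exactly what the acyclicity test rejects.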
https://en.wikipedia.org/wiki/Serializability
A synchronous programming language is a computer programming language optimized for programming reactive systems. Computer systems can be sorted into three main classes: transformational, interactive, and reactive systems.

Synchronous programming, also called synchronous reactive programming (SRP), is a computer programming paradigm supported by synchronous programming languages. The principle of SRP is to make the same abstraction for programming languages as the synchronous abstraction in digital circuits. Synchronous circuits are indeed designed at a high level of abstraction where the timing characteristics of the electronic transistors are neglected. Each gate of the circuit (or, and, ...) is therefore assumed to compute its result instantaneously, and each wire is assumed to transmit its signal instantaneously. A synchronous circuit is clocked, and at each tick of its clock it instantaneously computes its output values and the new values of its memory cells (latches) from its input values and the current values of its memory cells. In other words, the circuit behaves as if the electrons were flowing infinitely fast.

The first synchronous programming languages were invented in France in the 1980s: Esterel, Lustre, and SIGNAL. Since then, many other synchronous languages have emerged.

The synchronous abstraction makes reasoning about time in a synchronous program a lot easier, thanks to the notion of logical ticks: a synchronous program reacts to its environment in a sequence of ticks, and computations within a tick are assumed to be instantaneous, i.e., as if the processor executing them were infinitely fast. The statement "a||b" is therefore abstracted as the package "ab", where "a" and "b" are simultaneous. To take a concrete example, the Esterel statement "every 60 second emit minute" specifies that the signal "minute" is exactly synchronous with the 60th occurrence of the signal "second".
At a more fundamental level, the synchronous abstraction eliminates the non-determinism resulting from the interleaving of concurrent behaviors. This allows deterministic semantics, making synchronous programs amenable to formal analysis, verification and certified code generation, and usable as formal specification formalisms. In contrast, in the asynchronous model of computation, on a sequential processor the statement "a||b" can be implemented either as "a;b" or as "b;a". This is known as interleaving-based non-determinism. The drawback of an asynchronous model is that it intrinsically forbids deterministic semantics (e.g., race conditions), which makes formal reasoning such as analysis and verification more complex. Nonetheless, asynchronous formalisms are very useful for modelling, designing and verifying distributed systems, because such systems are intrinsically asynchronous.

Also in contrast are systems with processes that basically interact synchronously. An example would be systems based on the Communicating sequential processes (CSP) model, which allows deterministic (external) and nondeterministic (internal) choice.
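The logical-tick abstraction can be sketched as a plain loop: each iteration is one tick, everything computed within the iteration is treated as simultaneous, and a derived signal ("minute") is emitted exactly at every 60th occurrence of a base signal ("second"), loosely echoing the Esterel example above. The function and signal names are illustrative, not Esterel code.

```python
# Minimal sketch of the synchronous abstraction: the program reacts in
# discrete logical ticks, and all computation within one tick is treated
# as instantaneous and simultaneous.

def reactive_clock(seconds_per_minute=60, ticks=180):
    minutes = 0
    seconds = 0
    for _ in range(ticks):            # one iteration = one logical tick
        seconds += 1                  # signal "second" is present this tick
        if seconds == seconds_per_minute:
            minutes += 1              # "minute" is synchronous with the
            seconds = 0               # 60th occurrence of "second"
    return minutes

print(reactive_clock())  # 3  (180 second-ticks yield 3 minutes)
```

Because there is no interleaving between ticks, the output is fully determined by the sequence of inputs, which is the deterministic-semantics property the synchronous model is built around.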
https://en.wikipedia.org/wiki/Synchronous_programming
The transputer is a series of pioneering microprocessors from the 1980s, intended for parallel computing. To support this, each transputer had its own integrated memory and serial communication links to exchange data with other transputers. They were designed and produced by Inmos, a semiconductor company based in Bristol, United Kingdom.[1] For some time in the late 1980s, many[2] considered the transputer to be the next great design for the future of computing. While the transputer did not achieve this expectation, the transputer architecture was highly influential in provoking new ideas in computer architecture, several of which have re-emerged in different forms in modern systems.[3] In the early 1980s, conventional central processing units (CPUs) appeared to have reached a performance limit. Up to that time, manufacturing difficulties limited the amount of circuitry that could fit on a chip. Continued improvements in the fabrication process had largely removed this restriction. Within a decade, chips could hold more circuitry than the designers knew how to use. Traditional complex instruction set computer (CISC) designs were reaching a performance plateau, and it was not clear that it could be overcome.[4] It seemed that the only way forward was to increase the use of parallelism, the use of several CPUs that would work together to solve several tasks at the same time. This depended on such machines being able to run several tasks at once, a process termed multitasking. This had generally been too difficult for prior microprocessor designs to handle, but more recent designs were able to accomplish it effectively. It was clear that in the future, this would be a feature of all operating systems (OSs). A side effect of most multitasking designs is that they often also allow the processes to be run on physically different CPUs, in which case this is termed multiprocessing.
A low-cost CPU built for multiprocessing could allow the speed of a machine to be raised by adding more CPUs, potentially far more cheaply than by using one faster CPU design. The first transputer designs were due to computer scientist David May and telecommunications consultant Robert Milne. In 1990, May received an honorary DSc from the University of Southampton, followed in 1991 by his election as a Fellow of the Royal Society and the award of the Patterson Medal of the Institute of Physics in 1992. Tony Fuge, then a leading engineer at Inmos, was awarded the Prince Philip Designers Prize in 1987 for his work on the T414 transputer.[5] The transputer was the first general-purpose microprocessor designed specifically to be used in parallel computing systems. The goal was to produce a family of chips ranging in power and cost that could be wired together to form a complete parallel computer. The name, from "transistor" and "computer",[6] was selected to indicate the role the individual transputers would play: numbers of them would be used as basic building blocks in a larger integrated system, just as transistors had been used in earlier designs. Originally the plan was to make the transputer cost only a few dollars per unit. Inmos saw them being used for practically everything, from operating as the main CPU for a computer to acting as a channel controller for disk drives in the same machine. In a traditional machine, the processing capability of a disk controller, for instance, would be idle when the disk was not being accessed. In contrast, in a transputer system, spare cycles on any of these transputers could be used for other tasks, greatly increasing the overall performance of the machines. The transputer had large on-chip memory, making it essentially a processor-in-memory. Even one transputer would have all the circuitry needed to work by itself, a feature more commonly associated with microcontrollers.
The intent was to allow transputers to be connected together as easily as possible, with no need for a complex bus or motherboard. Power and a simple clock signal had to be supplied, but little else: random-access memory (RAM), a RAM controller, bus support, and even a real-time operating system (RTOS) were all built in. In this way, the last of the transputers were single Reusable Micro Cores (RMC) in the then-emerging SoC market. The original transputer used a very simple and rather unusual architecture to achieve high performance in a small area. It used microcode as the main method to control the data path, but unlike other designs of the time, many instructions took only one cycle to execute. Instruction opcodes were used as the entry points to the microcode read-only memory (ROM), and the outputs from the ROM were fed directly to the data path. For multi-cycle instructions, while the data path was performing the first cycle, the microcode decoded four possible options for the second cycle. The decision as to which of these options would actually be used could be made near the end of the first cycle. This allowed for very fast operation while keeping the architecture generic.[7] The clock rate of 20 MHz was quite high for the era, and the designers were very concerned about the practicality of distributing such a fast clock signal on a board. A slower external clock of 5 MHz was used, and this was multiplied up to the needed internal frequency using a phase-locked loop (PLL). The internal clock actually had four non-overlapping phases, and designers were free to use whichever combination of these they wanted, so it could be argued that the transputer actually ran at 80 MHz. Dynamic logic was used in many parts of the design to reduce area and increase speed. Unfortunately, these methods are difficult to combine with automatic test pattern generation scan testing, so they fell out of favour for later designs. Prentice-Hall published a book[8] on the general principles of the transputer.
The basic design of the transputer included serial links known as "os-links"[9][10] that allowed it to communicate with up to four other transputers, each at 5, 10, or 20 Mbit/s – which was very fast for the 1980s. Any number of transputers could be connected together over links (which could run tens of metres) to form one computing farm. A hypothetical desktop machine might have two of the "low end" transputers handling input/output (I/O) tasks on some of their serial lines (hooked up to appropriate hardware) while they talked to one of their larger cousins acting as a CPU on another. There were limits to the size of a system that could be built in this fashion. Since each transputer was linked to another in a fixed point-to-point layout, sending messages to a more distant transputer required that messages be relayed by each chip in the line. This introduced a delay with every "hop" over a link, leading to long delays on large nets. To solve this problem, Inmos also provided a zero-delay switch that connected up to 32 transputers (or switches) into even larger networks. Transputers could boot from memory, as is the case for most computers, but could also be booted over their network links. A special pin on the chips, BootFromROM, indicated which method a chip should use. If BootFromROM was asserted when the chip was reset, it would begin processing at the instruction two bytes from the top of memory, which was normally used to perform a backward jump into the boot code. If this pin was not asserted, the chip would instead wait for bytes to be received on any network link. The first byte to be received was the length of the code to follow. Following bytes were copied into low memory and then jumped into once that number of bytes had been received. The general concept for the system was to have one transputer act as the central authority for booting a system containing a number of connected transputers.
The selected transputer would have BootFromROM permanently asserted, which would cause it to begin running a booter process from ROM on startup. The other transputers would have BootFromROM tied low, and would simply wait. The loader would boot the central transputer, which would then begin sending boot code to the other transputers in the network, and could customize the code sent to each one, for instance, sending a device driver to the transputer connected to the hard drives. The system also included the 'special' code lengths of 0 and 1, which were reserved for PEEK and POKE. These allowed inspection and changing of RAM in an unbooted transputer. After a peek, followed by a memory address, or a poke, with an address and a single word of data, the transputer would return to waiting for a bootstrap. This mechanism was generally used for debugging. Added circuitry scheduled traffic over the links. Processes waiting for communications would automatically pause while the networking circuitry finished its reads or writes. Other processes running on the transputer would then be given that processing time. The transputer included two priority levels to improve real-time and multiprocessor operation. The same logical system was used to communicate between programs running on one transputer, implemented as virtual network links in memory. So programs asking for any input or output automatically paused while the operation completed, a task that normally required an operating system to handle as the arbiter of hardware. Operating systems on the transputer did not need to handle scheduling; the chip could be considered to have an OS inside it. To include all this function on one chip, the transputer's core logic was simpler than most CPUs.
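The boot-over-link framing described above (first byte = length of boot code; lengths 0 and 1 reserved for POKE and PEEK) can be sketched as a small decoder. This is an illustrative reconstruction from the description in the text, not Inmos firmware: the assignment of 0 to POKE and 1 to PEEK, and the 16-bit little-endian address/word framing, are assumptions made for the sketch.

```python
# Illustrative decoder for the transputer link bootstrap protocol as
# described in the text. The 0 = POKE / 1 = PEEK assignment and the
# 16-bit little-endian framing are assumptions, not documented
# Inmos behaviour.

def decode_link_message(data: bytes):
    tag = data[0]
    if tag >= 2:                        # boot: 'tag' bytes of code follow
        return ("boot", data[1:1 + tag])
    if tag == 0:                        # assumed POKE: address + one word
        addr = int.from_bytes(data[1:3], "little")
        word = int.from_bytes(data[3:5], "little")
        return ("poke", addr, word)
    # tag == 1, assumed PEEK: address only; the transputer replies
    # with the word at that address, then waits for a bootstrap again.
    addr = int.from_bytes(data[1:3], "little")
    return ("peek", addr)

msg = bytes([3, 0x21, 0x22, 0x43])      # a 3-byte boot image
assert decode_link_message(msg) == ("boot", bytes([0x21, 0x22, 0x43]))
```

The appeal of the scheme is that a completely blank chip needs no ROM at all: a neighbour can inspect it, patch its RAM, and finally feed it a program, all over the same serial link.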
While some have called it a reduced instruction set computer (RISC) due to its rather sparse nature, and because that was then a desirable marketing buzzword, it was heavily microcoded, had a limited register set, and had complex memory-to-memory instructions, all of which place it firmly in the CISC camp. Unlike register-heavy load/store RISC CPUs, the transputer had only three data registers, which behaved as a stack. In addition, a workspace pointer pointed to a conventional memory stack, easily accessible via the instructions Load Local and Store Local. This allowed for very fast context switching by simply changing the workspace pointer to the memory used by another process (a method used in a number of contemporary designs, such as the TMS9900). The three register stack contents were not preserved past certain instructions, like Jump, when the transputer could do a context switch. The transputer instruction set consisted of 8-bit instructions assembled from opcode and operand nibbles. The upper nibble contained the 16 possible primary instruction codes, making it one of the very few commercialized minimal instruction set computers. The lower nibble contained the one immediate constant operand, commonly used as an offset relative to the workspace (memory stack) pointer. Two prefix instructions allowed construction of larger constants by prepending their lower nibbles to the operands of following instructions. Further instructions were supported via the instruction code Operate (Opr), which decoded the constant operand as an extended zero-operand opcode, providing for almost endless and easy instruction set expansion as newer implementations of the transputer were introduced. All 16 'primary' one-operand instructions take a constant, representing an offset or an arithmetic constant. If this constant was less than 16, these instructions coded to one byte.
The first 16 'secondary' zero-operand instructions are encoded using the OPR primary instruction. To provide an easy means of prototyping, constructing, and configuring multiple-transputer systems, Inmos introduced the TRAM (TRAnsputer Module) standard in 1987. A TRAM was essentially a building-block daughterboard comprising a transputer and, optionally, external memory and/or peripheral devices, with simple standardised connectors providing power, transputer links, clock, and system signals. Various sizes of TRAM were defined, from the basic Size 1 TRAM (3.66 in by 1.05 in) up to Size 8 (3.66 in by 8.75 in). Inmos produced a range of TRAM motherboards for various host buses such as Industry Standard Architecture (ISA), MicroChannel, and VMEbus. TRAM links operate at 10 Mbit/s or 20 Mbit/s.[11] Transputers were intended to be programmed using the programming language occam, based on the communicating sequential processes (CSP) process calculus.[12] The transputer was built to run Occam specifically, more than contemporary CISC designs were built to run languages like Pascal or C. Occam supported concurrency and channel-based inter-process or inter-processor communication as a fundamental part of the language. With the parallelism and communications built into the chip, and the language interacting with it directly, writing code for things like device controllers became a triviality; even the most basic code could watch the serial ports for I/O, and would automatically sleep when there was no data. The initial Occam development environment for the transputer was the Inmos D700 Transputer Development System (TDS). This was an unorthodox integrated development environment incorporating an editor, compiler, linker, and (post-mortem) debugger. The TDS was a transputer application written in Occam. The TDS text editor was notable in that it was a folding editor, allowing blocks of code to be hidden and revealed, to make the structure of the code more apparent.
Unfortunately, the combination of an unfamiliar programming language and an equally unfamiliar development environment did nothing for the early popularity of the transputer. Later, Inmos would release more conventional Occam cross-compilers, the Occam 2 Toolsets. Implementations of more mainstream programming languages, such as C, FORTRAN, Ada, Forth, and Pascal, were also later released by both Inmos and third-party vendors. These usually included language extensions or libraries providing, in a less elegant way, Occam-like concurrency and channel-based communication. The transputer's lack of support for virtual memory inhibited the porting of mainstream variants of the Unix operating system, though ports of Unix-like operating systems (such as Minix and Idris from Whitesmiths) were produced. An advanced Unix-like distributed operating system, Helios, was also designed specifically for multi-transputer systems by Perihelion Software. The first transputers were announced in 1983 and released in 1984. In keeping with their role as microcontroller-like devices, they included on-board RAM and a built-in RAM controller which enabled more memory to be added with no added hardware. Unlike other designs, transputers did not include I/O lines: these were to be added with hardware attached to the existing serial links. There was one 'Event' line, similar to a conventional processor's interrupt line. Treated as a channel, a program could 'input' from the event channel, and proceed only after the event line was asserted. All transputers ran from an external 5 MHz clock input; this was multiplied to provide the processor clock. The transputer did not include a memory management unit (MMU) or a virtual memory system. Transputer variants (except the cancelled T9000) can be categorised into three groups: the 16-bit T2 series, the 32-bit T4 series, and the 32-bit T8 series with 64-bit IEEE 754 floating-point support.
The prototype 16-bit transputer was the S43, which lacked the scheduler and DMA-controlled block transfer on the links. At launch, the T212 and M212 (the latter with an on-board disk controller) were the 16-bit offerings. The T212 was available in 17.5 and 20 MHz processor clock speed ratings. The T212 was superseded by the T222, with on-chip RAM expanded from 2 KB to 4 KB, and, later, the T225. This added debugging breakpoint support (by extending the instruction "J 0") plus some extra instructions from the T800 instruction set. Both the T222 and T225 ran at 20 MHz. Launched in October 1985, the T414 employed the equivalent of 900,000 transistors and was fabricated with a 1.5 micrometre feature size. It was a 32-bit design, able to process 32-bit units of data and to address up to 4 GB of main memory.[13] Originally, the first 32-bit variant was to be the T424, but fabrication difficulties meant that this was redesigned as the T414 with 2 KB on-board RAM instead of the intended 4 KB. The T414 was available in 15 and 20 MHz varieties. The RAM was later reinstated to 4 KB on the T425 (in 20, 25, and 30 MHz varieties), which also added the J 0 breakpoint support and extra T800 instructions. The T400, released in September 1989, was a low-cost 20 MHz T425 derivative with 2 KB of RAM and two instead of four links, intended for the embedded systems market. The second-generation T800 transputer, introduced in 1987, had an extended instruction set. The most important addition was a 64-bit floating-point unit (FPU) and three added registers for floating point, implementing the IEEE 754-1985 floating-point standard. It also had 4 KB of on-board RAM and was available in 20 or 25 MHz versions. Breakpoint support was added in the later T801 and T805, the former featuring separate address and data buses to improve performance. The T805 was also later available as a 30 MHz part.
An enhanced T810 was planned, which would have had more RAM, more and faster links, extra instructions, and improved microcode, but this was cancelled around 1990. Inmos also produced a variety of support chips for the transputer processors, such as the C004 32-way link switch and the C011 and C012 "link adapters" which allowed transputer links to be interfaced to an 8-bit data bus. Part of the original Inmos strategy was to make CPUs so small and cheap that they could be combined with other logic in one device. Although systems on a chip (SoC), as they are commonly termed, are ubiquitous now, the concept was almost unheard of in the early 1980s. Two projects were started in around 1983, the M212 and the TV-toy. The M212 was based on a standard T212 core with the addition of a disk controller for the ST 506 and ST 412 Shugart standards. TV-toy was to be the basis for a video game console and was a joint project between Inmos and Sinclair Research. The links in the T212 and T414/T424 transputers had hardware DMA engines so that transfers could happen in parallel with execution of other processes. A variant of the design, termed the T400 (not to be confused with a later transputer of the same name), was designed in which the CPU handled these transfers. This reduced the size of the device considerably, since the four link engines were approximately the same size as the whole CPU. The T400 was intended to be used as a core in what were then called systems on silicon (SOS) devices, now termed and better known as system on a chip (SoC). It was this design that was to form part of TV-toy. The project was canceled in 1985. Although the prior SoC projects had had only limited success (the M212 was sold for a time), many designers still firmly believed in the concept, and in 1987 a new project, the T100, was started, which combined an 8-bit version of the transputer CPU with configurable logic based on state machines.
The transputer instruction set is based on 8-bit instructions and can easily be used with any word size which is a multiple of 8 bits. The target market for the T100 was to be bus controllers such as Futurebus, and an upgrade for the standard link adapters (C011 etc.). The project was stopped when the T840 (later to become the basis of the T9000) was started. TPCORE is an implementation of the transputer, including the os-links, that runs in a field-programmable gate array (FPGA).[9][14] Inmos improved on the performance of the T8 series transputers with the introduction of the T9000 (code-named H1 during development). The T9000 shared most features with the T800, but moved several pieces of the design into hardware and added several features for superscalar support. Unlike the earlier models, the T9000 had a true 16 KB high-speed cache (using random replacement) instead of RAM, but also allowed it to be used as memory, and included MMU-like functionality to handle all of this (termed the PMI). For more speed, the T9000 cached the top 32 locations of the stack, instead of three as in earlier versions. The T9000 used a five-stage pipeline for even more speed. An interesting addition was the grouper,[15] which would collect instructions out of the cache and group them into larger packages of up to 8 bytes to feed the pipeline faster. Groups then completed in one cycle, as if they were single larger instructions working on a faster CPU. The link system was upgraded to a new 100 MHz mode, but unlike the prior systems, the links were no longer downwardly compatible. This new packet-based link protocol was called DS-Link,[16] and later formed the basis of the IEEE 1355 serial interconnect standard. The T9000 also added link-routing hardware called the VCP (Virtual Channel Processor), which changed the links from point-to-point to a true network, allowing for the creation of any number of virtual channels on the links.
This meant programs no longer had to be aware of the physical layout of the connections. A range of DS-Link support chips were also developed, including the C104 32-way crossbar switch and the C101 link adapter. Long delays in the T9000's development meant that the faster load/store designs were already outperforming it by the time it was to be released. It consistently failed to reach its own performance goal of beating the T800 by a factor of ten. When the project was finally cancelled, it was still achieving only about 36 MIPS at 50 MHz. The production delays gave rise to the quip that the best host architecture for a T9000 was an overhead projector. This was too much for Inmos, which did not have the funding needed to continue development. By this time, the company had been sold to SGS-Thomson (now STMicroelectronics), whose focus was the embedded systems market, and eventually the T9000 project was abandoned. However, a comprehensively redesigned 32-bit transputer intended for embedded applications, the ST20 series, was later produced, using some technology developed for the T9000. The ST20 core was incorporated into chipsets for set-top box and Global Positioning System (GPS) applications. Although not strictly a transputer, the ST20 was heavily influenced by the T4 and T9 and formed the basis of the T450, which was arguably the last of the transputers. The mission of the ST20 was to be a reusable core in the then-emerging SoC market. The original name of the ST20 was the Reusable Micro Core (RMC). The architecture was loosely based on the original T4 architecture with a microcode-controlled data path. However, it was a full redesign, using VHDL as the design language and with an optimized (and rewritten) microcode compiler. The project was conceived as early as 1990, when it was realized that the T9 would be too big for many applications. Actual design work started in mid-1992.
Several trial designs were done, ranging from a very simple RISC-style CPU with complex instructions implemented in software via traps, to a rather complex superscalar design similar in concept to the Tomasulo algorithm. The final design looked very similar to the original T4 core, although some simple instruction grouping and a workspace cache were added to help with performance. While the transputer was simple but powerful compared to many contemporary designs, it never came close to meeting its goal of being used universally in both CPU and microcontroller roles. The microcontroller market was dominated by 8-bit machines where cost was the most serious consideration. Here, even the T2s were too powerful and costly for most users. In the desktop computer and workstation field, the transputer was fairly fast (operating at about 10 million instructions per second (MIPS) at 20 MHz). This was excellent performance for the early 1980s, but by the time the floating-point unit (FPU) equipped T800 was shipping, other RISC designs had surpassed it. This could have been mitigated to a large extent if machines had used multiple transputers as planned, but T800s cost about $400 each when introduced, which meant a poor price/performance ratio. Few transputer-based workstation systems were designed; the most notable was likely the Atari Transputer Workstation. The transputer was more successful in the field of massively parallel computing, where several vendors produced transputer-based systems in the late 1980s. These included Meiko Scientific (founded by ex-Inmos employees), Floating Point Systems, Parsytec,[17] and Parsys. Several British academic institutions founded research activities in the application of transputer-based parallel systems, including Bristol Polytechnic's Bristol Transputer Centre and the University of Edinburgh's Edinburgh Concurrent Supercomputer Project.
Also, the data acquisition and second-level trigger systems of the high-energy physics ZEUS experiment for the Hadron Elektron Ring Anlage (HERA) collider at DESY were based on a network of over 300 synchronously clocked transputers divided into several subsystems. These controlled both the readout of the custom detector electronics and ran reconstruction algorithms for physics event selection. The parallel processing abilities of the transputer were put to use commercially for image processing by the world's largest printing company, RR Donnelley & Sons, in the early 1990s. The ability to quickly transform digital images in preparation for print gave the firm a significant edge over its competitors. This development was led by Michael Bengtson in the RR Donnelley Technology Center. Within a few years, the processing ability of even desktop computers ended the need for custom multi-processing systems for the firm.[citation needed] The German company Jäger Messtechnik used transputers for their early ADwin real-time data acquisition and control products.[18] A French company built the Archipel Volvox supercomputer with up to 144 T800 and T400 transputers. It was controlled by a Silicon Graphics Indigo2 running UNIX and a special card that interfaced to the Volvox backplanes. Transputers also found use in protocol analysers such as the Siemens/Tektronix K1103, and in military applications, where the array architecture suited applications such as radar, and the serial links (high speed for the 1980s) served well to save cost and weight in sub-system communications.
The transputer also appeared in products related to virtual reality, such as the ProVision 100 system made by Division Limited of Bristol, featuring a combination of Intel i860, 80486/33 and Toshiba HSP processors, together with T805 or T425 transputers, implementing a rendering engine that could then be accessed as a server by PC, Sun SPARCstation or VAX systems.[19][20] Myriade, a European miniaturized satellite platform developed by Astrium Satellites and CNES and used by satellites such as Picard, is based on the T805, yielding around 4 MIPS, and is scheduled to stay in production until about 2015.[21][22] The asynchronous operation of the communications and computation allowed the development of asynchronous algorithms, such as Bane's "Asynchronous Polynomial Zero Finding" algorithm.[23] The field of asynchronous algorithms, and the asynchronous implementation of current algorithms, is likely to play a key role in the move to exascale computing. The High Energy Transient Explorer 2 (HETE-2) spacecraft used 4× T805 transputers and 8× DSP56001, yielding about 100 million instructions per second (MIPS) of performance.[24] Growing internal parallelism has been one driving force behind improvements in conventional CPU designs. Instead of explicit thread-level parallelism (as is used in the transputer), CPU designs exploited implicit parallelism at the instruction level, inspecting code sequences for data dependencies and issuing multiple independent instructions to different execution units. This is termed superscalar processing. Superscalar processors are suited for optimising the execution of sequentially constructed fragments of code. The combination of superscalar processing and speculative execution delivered a tangible performance increase on existing bodies of code – which were mostly written in Pascal, Fortran, C and C++.
Given these substantial and regular performance improvements to existing code, there was little incentive to rewrite software in languages or coding styles which expose more task-level parallelism. Nevertheless, the model of cooperating concurrent processors can still be found in cluster computing systems that dominate supercomputer design in the 21st century. Unlike the transputer architecture, the processing units in these systems typically use superscalar CPUs with access to substantial amounts of memory and disk storage, running conventional operating systems and network interfaces. Resulting from the more complex nodes, the software architecture used for coordinating the parallelism in such systems is typically far more heavyweight than in the transputer architecture. The fundamental transputer motive remains, yet was masked for over 20 years by the repeated doubling of transistor counts. Inevitably, microprocessor designers finally ran out of uses for the greater physical resources, almost at the same time as technology scaling began to hit its limits. Power consumption, and thus heat dissipation needs, render further clock rate increases unfeasible. These factors led the industry towards solutions little different in essence from those proposed by Inmos. Some of the most powerful supercomputers in the world, based on designs from Columbia University and built as IBM Blue Gene, are real-world incarnations of the transputer dream. They are vast assemblies of identical, relatively low-performance SoCs. Recent trends have also tried to solve the transistor dilemma in ways that would have been too futuristic even for Inmos. On top of adding components to the CPU die and placing multiple dies in one system, modern processors increasingly place multiple cores in one die. The transputer designers struggled to fit even one core into their transistor budget. Today, designers working with a 1000-fold increase in transistor densities can typically place many.
One of the most recent commercial developments has emerged from the firm XMOS, which has developed a family of embedded multi-core multi-threaded processors which resonate strongly with the transputer and Inmos. There is an emerging class of multicore/manycore processors taking the approach of a network on a chip (NoC), such as the Cell processor, Adapteva's Epiphany architecture, Tilera, etc. The transputer and Inmos helped establish Bristol, UK, as a hub for microelectronic design and innovation.
https://en.wikipedia.org/wiki/Transputer
In computing, a vector processor or array processor is a central processing unit (CPU) that implements an instruction set where its instructions are designed to operate efficiently and effectively on large one-dimensional arrays of data called vectors. This is in contrast to scalar processors, whose instructions operate on single data items only, and in contrast to some of those same scalar processors having additional single instruction, multiple data (SIMD) or SIMD within a register (SWAR) arithmetic units. Vector processors can greatly improve performance on certain workloads, notably numerical simulation, compression, and similar tasks.[1] Vector processing techniques also operate in video-game console hardware and in graphics accelerators. Vector machines appeared in the early 1970s and dominated supercomputer design through the 1970s into the 1990s, notably the various Cray platforms. The rapid fall in the price-to-performance ratio of conventional microprocessor designs led to a decline in vector supercomputers during the 1990s. Vector processing development began in the early 1960s at the Westinghouse Electric Corporation in their Solomon project. Solomon's goal was to dramatically increase math performance by using a large number of simple coprocessors under the control of a single master central processing unit (CPU). The CPU fed a single common instruction to all of the arithmetic logic units (ALUs), one per cycle, but with a different data point for each one to work on. This allowed the Solomon machine to apply a single algorithm to a large data set, fed in the form of an array.[citation needed] In 1962, Westinghouse cancelled the project, but the effort was restarted by the University of Illinois at Urbana–Champaign as the ILLIAC IV. Their version of the design originally called for a 1 GFLOPS machine with 256 ALUs, but, when it was finally delivered in 1972, it had only 64 ALUs and could reach only 100 to 150 MFLOPS.
Nevertheless, it showed that the basic concept was sound, and, when used on data-intensive applications, such as computational fluid dynamics, the ILLIAC was the fastest machine in the world. The ILLIAC approach of using separate ALUs for each data element is not common in later designs, and is often referred to under a separate category, massively parallel computing. Around this time Flynn categorized this type of processing as an early form of single instruction, multiple threads (SIMT).[citation needed] International Computers Limited sought to avoid many of the difficulties with the ILLIAC concept with its own Distributed Array Processor (DAP) design, categorising the ILLIAC and DAP as cellular array processors that potentially offered substantial performance benefits over conventional vector processor designs such as the CDC STAR-100 and Cray-1.[2] A computer for operations with functions was presented and developed by Kartsev in 1967.[3] The first vector supercomputers were the Control Data Corporation STAR-100 and Texas Instruments Advanced Scientific Computer (ASC), which were introduced in 1974 and 1972, respectively. The basic ASC (i.e., "one pipe") ALU used a pipeline architecture that supported both scalar and vector computations, with peak performance reaching approximately 20 MFLOPS, readily achieved when processing long vectors. Expanded ALU configurations supported "two pipes" or "four pipes" with a corresponding 2X or 4X performance gain. Memory bandwidth was sufficient to support these expanded modes. The STAR-100 was otherwise slower than CDC's own supercomputers like the CDC 7600, but at data-related tasks it could keep up while being much smaller and less expensive. However, the machine also took considerable time decoding the vector instructions and getting ready to run the process, so it required very specific data sets to work on before it actually sped anything up. The vector technique was first fully exploited in 1976 by the famous Cray-1.
Instead of leaving the data in memory like the STAR-100 and ASC, the Cray design had eight vector registers, which held sixty-four 64-bit words each. The vector instructions were applied between registers, which is much faster than talking to main memory. Whereas the STAR-100 would apply a single operation across a long vector in memory and then move on to the next operation, the Cray design would load a smaller section of the vector into registers and then apply as many operations as it could to that data, thereby avoiding many of the much slower memory access operations. The Cray design used pipeline parallelism to implement vector instructions rather than multiple ALUs. In addition, the design had completely separate pipelines for different instructions; for example, addition/subtraction was implemented in different hardware than multiplication. This allowed a batch of vector instructions to be pipelined into each of the ALU subunits, a technique they called vector chaining. The Cray-1 normally had a performance of about 80 MFLOPS, but with up to three chains running it could peak at 240 MFLOPS and averaged around 150 – far faster than any machine of the era. Other examples followed. Control Data Corporation tried to re-enter the high-end market with its ETA-10 machine, but it sold poorly and they took that as an opportunity to leave the supercomputing field entirely. In the early and mid-1980s Japanese companies (Fujitsu, Hitachi and Nippon Electric Corporation (NEC)) introduced register-based vector machines similar to the Cray-1, typically being slightly faster and much smaller. Oregon-based Floating Point Systems (FPS) built add-on array processors for minicomputers, later building their own minisupercomputers. Throughout, Cray continued to be the performance leader, continually beating the competition with a series of machines that led to the Cray-2, Cray X-MP and Cray Y-MP.
Since then, the supercomputer market has focused much more on massively parallel processing rather than better implementations of vector processors. However, recognising the benefits of vector processing, IBM developed Virtual Vector Architecture for use in supercomputers coupling several scalar processors to act as a vector processor. Although vector supercomputers resembling the Cray-1 are less popular these days, NEC has continued to make this type of computer up to the present day with their SX series of computers. Most recently, the SX-Aurora TSUBASA places the processor and either 24 or 48 gigabytes of memory on an HBM2 module within a card that physically resembles a graphics coprocessor, but instead of serving as a co-processor, it is the main computer, with the PC-compatible computer into which it is plugged serving support functions. Modern graphics processing units (GPUs) include an array of shader pipelines which may be driven by compute kernels, and can be considered vector processors (using a similar strategy for hiding memory latencies). As shown in Flynn's 1972 paper, the key distinguishing factor of SIMT-based GPUs is that they have a single instruction decoder-broadcaster, while the cores receiving and executing that same instruction are otherwise reasonably normal: they have their own ALUs, their own register files, their own load/store units and their own independent L1 data caches. Thus although all cores simultaneously execute the exact same instruction in lock-step with each other, they do so with completely different data from completely different memory locations. This is significantly more complex and involved than "Packed SIMD", which is strictly limited to execution of parallel pipelined arithmetic operations only.
Although the exact internal details of today's commercial GPUs are proprietary secrets, the MIAOW[4] team was able to piece together anecdotal information sufficient to implement a subset of the AMDGPU architecture.[5] Several modern CPU architectures are being designed as vector processors. The RISC-V vector extension follows similar principles as the early vector processors, and is being implemented in commercial products such as the Andes Technology AX45MPV.[6] There are also several open source vector processor architectures being developed, including ForwardCom and Libre-SOC. As of 2016[update] most commodity CPUs implement architectures that feature fixed-length SIMD instructions. On first inspection these can be considered a form of vector processing because they operate on multiple (vectorized, explicit length) data sets, and borrow features from vector processors. However, by definition, the addition of SIMD cannot, by itself, qualify a processor as an actual vector processor, because SIMD is fixed-length, and vectors are variable-length. The difference is illustrated below with examples, showing and comparing the three categories: Pure SIMD, Predicated SIMD, and Pure Vector Processing.[citation needed] Other CPU designs include some multiple instructions for vector processing on multiple (vectorized) data sets, typically known as MIMD (Multiple Instruction, Multiple Data) and realized with VLIW (Very Long Instruction Word) and EPIC (Explicitly Parallel Instruction Computing). The Fujitsu FR-V VLIW/vector processor combines both technologies. SIMD instruction sets lack crucial features when compared to vector instruction sets. The most important of these is that vector processors, inherently by definition and design, have always been variable-length since their inception.
Whereas pure (fixed-width, no predication) SIMD is often mistakenly claimed to be "vector" (because SIMD processes data which happens to be vectors), through close analysis and comparison of historic and modern ISAs, actual vector ISAs may be observed to have the following features that no SIMD ISA has.[citation needed] Predicated SIMD (part of Flynn's taxonomy), which provides comprehensive individual element-level predicate masks on every vector instruction, as is now available in ARM SVE2[10] and AVX-512, almost qualifies as a vector processor.[how?] Predicated SIMD uses fixed-width SIMD ALUs but allows locally controlled (predicated) activation of units to provide the appearance of variable-length vectors. Examples below help explain these categorical distinctions. SIMD, because it uses fixed-width batch processing, is unable by design to cope with iteration and reduction. This is illustrated further with examples, below. Additionally, vector processors can be more resource-efficient by using slower hardware and saving power, but still achieving throughput and having less latency than SIMD, through vector chaining.[11][12] Consider both a SIMD processor and a vector processor working on 4 64-bit elements, doing a LOAD, ADD, MULTIPLY and STORE sequence. If the SIMD width is 4, then the SIMD processor must LOAD four elements entirely before it can move on to the ADDs, must complete all the ADDs before it can move on to the MULTIPLYs, and likewise must complete all of the MULTIPLYs before it can start the STOREs. This is by definition and by design.[13] Having to perform 4-wide simultaneous 64-bit LOADs and 64-bit STOREs is very costly in hardware (256-bit data paths to memory). Having 4x 64-bit ALUs, especially MULTIPLY, likewise. To avoid these high costs, a SIMD processor would have to have 1-wide 64-bit LOAD, 1-wide 64-bit STORE, and only 2-wide 64-bit ALUs.
As shown in the diagram, which assumes a multi-issue execution model, the consequences are that the operations now take longer to complete. If multi-issue is not possible, then the operations take even longer because the LD may not be issued (started) at the same time as the first ADDs, and so on. If there are only 4-wide 64-bit SIMD ALUs, the completion time is even worse: only when all four LOADs have completed may the SIMD operations start, and only when all ALU operations have completed may the STOREs begin. A vector processor, by contrast, even if it is single-issue and uses no SIMD ALUs, only having 1-wide 64-bit LOAD, 1-wide 64-bit STORE (and, as in the Cray-1, the ability to run MULTIPLY simultaneously with ADD), may complete the four operations faster than a SIMD processor with 1-wide LOAD, 1-wide STORE, and 2-wide SIMD. This more efficient resource utilization, due to vector chaining, is a key advantage and difference compared to SIMD. SIMD, by design and definition, cannot perform chaining except to the entire group of results.[14] In general terms, CPUs are able to manipulate one or two pieces of data at a time. For instance, most CPUs have an instruction that essentially says "add A to B and put the result in C". The data for A, B and C could be—in theory at least—encoded directly into the instruction. However, in efficient implementation things are rarely that simple. The data is rarely sent in raw form, and is instead "pointed to" by passing in an address to a memory location that holds the data. Decoding this address and getting the data out of the memory takes some time, during which the CPU traditionally would sit idle waiting for the requested data to show up. As CPU speeds have increased, this memory latency has historically become a large impediment to performance; see Random-access memory § Memory wall.
In order to reduce the amount of time consumed by these steps, most modern CPUs use a technique known as instruction pipelining in which the instructions pass through several sub-units in turn. The first sub-unit reads the address and decodes it, the next "fetches" the values at those addresses, and the next does the math itself. With pipelining the "trick" is to start decoding the next instruction even before the first has left the CPU, in the fashion of an assembly line, so the address decoder is constantly in use. Any particular instruction takes the same amount of time to complete, a time known as the latency, but the CPU can process an entire batch of operations, in an overlapping fashion, much faster and more efficiently than if it did so one at a time. Vector processors take this concept one step further. Instead of pipelining just the instructions, they also pipeline the data itself. The processor is fed instructions that say not just to add A to B, but to add all of the numbers "from here to here" to all of the numbers "from there to there". Instead of constantly having to decode instructions and then fetch the data needed to complete them, the processor reads a single instruction from memory, and it is simply implied in the definition of the instruction itself that the instruction will operate again on another item of data, at an address one increment larger than the last. This allows for significant savings in decoding time. To illustrate what a difference this can make, consider the simple task of adding two groups of 10 numbers together. In a normal programming language one would write a "loop" that picked up each of the pairs of numbers in turn, and then added them.
To the CPU, this would look something like this: But to a vector processor, this task looks considerably different: Note the complete lack of looping in the instructions, because it is the hardware which has performed 10 sequential operations: effectively the loop count is on an explicit per-instruction basis. Cray-style vector ISAs take this a step further and provide a global "count" register, called vector length (VL): There are several savings inherent in this approach.[15] Additionally, in more modern vector processor ISAs, "Fail on First" or "Fault First" has been introduced (see below) which brings even more advantages. But more than that, a high performance vector processor may have multiple functional units adding those numbers in parallel. The checking of dependencies between those numbers is not required, as a vector instruction specifies multiple independent operations. This simplifies the control logic required, and can further improve performance by avoiding stalls. The math operations thus complete far faster overall, the limiting factor being the time required to fetch the data from memory. Not all problems can be attacked with this sort of solution. Including these types of instructions necessarily adds complexity to the core CPU. That complexity typically makes other instructions run slower—i.e., whenever it is not adding up many numbers in a row. The more complex instructions also add to the complexity of the decoders, which might slow down the decoding of the more common instructions such as normal adding. (This can be somewhat mitigated by keeping the entire ISA to RISC principles: RVV only adds around 190 vector instructions even with the advanced features.[16]) Vector processors were traditionally designed to work best only when there are large amounts of data to be worked on.
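The assembly listings referred to above do not survive in this extraction. A minimal C-level sketch of the same contrast may help; the function names and the modelling of a "vector instruction" as a single call with a length operand are illustrative assumptions, not any particular ISA:

```c
#include <stddef.h>

/* Scalar view: one add per instruction, plus loop overhead (fetch,
   decode, increment, compare, branch) on every single element. */
void add_scalar(const int *a, const int *b, int *c, size_t n) {
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];      /* "add A to B and put the result in C" */
}

/* Vector view: conceptually ONE instruction, "add these n numbers to
   those n numbers".  The repetition is implied by the instruction
   itself, so no per-element fetch/decode occurs; the loop below only
   models what the hardware does internally. */
void add_vector(const int *a, const int *b, int *c, size_t vl) {
    for (size_t i = 0; i < vl; i++)  /* hardware-supplied repetition */
        c[i] = a[i] + b[i];
}
```

The point of the contrast is not the arithmetic (identical in both) but where the loop lives: in the scalar case it is program instructions, in the vector case it is the hardware.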
For this reason, these sorts of CPUs were found primarily in supercomputers, as the supercomputers themselves were, in general, found in places such as weather prediction centers and physics labs, where huge amounts of data are "crunched". However, as shown above and demonstrated by RISC-V RVV, the efficiency of vector ISAs brings other benefits which are compelling even for embedded use-cases. The vector pseudocode example above comes with a big assumption that the vector computer can process more than ten numbers in one batch. For a greater quantity of numbers in the vector register, it becomes unfeasible for the computer to have a register that large. As a result, the vector processor either gains the ability to perform loops itself, or exposes some sort of vector control (status) register to the programmer, usually known as a vector length. The self-repeating instructions are found in early vector computers like the STAR-100, where the above action would be described in a single instruction (somewhat like vadd c, a, b, $10). They are also found in the x86 architecture as the REP prefix. However, only very simple calculations can be done effectively in hardware this way without a very large cost increase. Since all operands have to be in memory for the STAR-100 architecture, the latency caused by access became huge too. Broadcom included space in all vector operations of the Videocore IV ISA for a REP field, but unlike the STAR-100 which uses memory for its repeats, the Videocore IV repeats are on all operations including arithmetic vector operations. The repeat length can be a small power-of-two range or sourced from one of the scalar registers.[17] The Cray-1 introduced the idea of using processor registers to hold vector data in batches.
The batch lengths (vector length, VL) could be dynamically set with a special instruction, the significance compared to Videocore IV (and, crucially as will be shown below, SIMD as well) being that the repeat length does not have to be part of the instruction encoding. This way, significantly more work can be done in each batch; the instruction encoding is much more elegant and compact as well. The only drawback is that in order to take full advantage of this extra batch processing capacity, the memory load and store speed correspondingly had to increase as well. This is sometimes claimed[by whom?] to be a disadvantage of Cray-style vector processors: in reality it is part of achieving high performance throughput, as seen in GPUs, which face exactly the same issue. Modern SIMD computers claim to improve on the early Cray by directly using multiple ALUs, for a higher degree of parallelism compared to only using the normal scalar pipeline. Modern vector processors (such as the SX-Aurora TSUBASA) combine both, by issuing multiple data to multiple internal pipelined SIMD ALUs, the number issued being dynamically chosen by the vector program at runtime. Masks can be used to selectively load and store data in memory locations, and to selectively disable processing elements of SIMD ALUs. Some processors with SIMD (AVX-512, ARM SVE2) are capable of this kind of selective, per-element ("predicated") processing, and it is these which somewhat deserve the nomenclature "vector processor" or at least deserve the claim of being capable of "vector processing". SIMD processors without per-element predication (MMX, SSE, AltiVec) categorically do not. Modern GPUs, which have many small compute units each with their own independent SIMD ALUs, use Single Instruction Multiple Threads (SIMT). SIMT units run from a shared single broadcast synchronised Instruction Unit. The "vector registers" are very wide and the pipelines tend to be long.
The "threading" part of SIMT involves the way data is handled independently on each of the compute units. In addition, GPUs such as the Broadcom Videocore IV and other external vector processors like the NEC SX-Aurora TSUBASA may use fewer vector units than the width implies: instead of having 64 units for a 64-number-wide register, the hardware might instead do a pipelined loop over 16 units for a hybrid approach. The Broadcom Videocore IV is also capable of this hybrid approach: nominally stating that its SIMD QPU Engine supports 16-long FP array operations in its instructions, it actually does them 4 at a time, as (another) form of "threads".[18] This example starts with an algorithm ("IAXPY"), first showing it in scalar instructions, then SIMD, then predicated SIMD, and finally vector instructions. This incrementally helps illustrate the difference between a traditional vector processor and a modern SIMD one. The example starts with a 32-bit integer variant of the "DAXPY" function, in C: In each iteration, every element of y has an element of x multiplied by a and added to it. The program is expressed in scalar linear form for readability. The scalar version of this would load one of each of x and y, process one calculation, store one result, and loop: The STAR-like code remains concise, but because the STAR-100's vectorisation was by design based around memory accesses, an extra slot of memory is now required to process the information. Twice the latency is also needed due to the extra requirement of memory access. A modern packed SIMD architecture, known by many names (listed in Flynn's taxonomy), can do most of the operation in batches. The code is mostly similar to the scalar version. It is assumed that both x and y are properly aligned here (only start on a multiple of 16) and that n is a multiple of 4, as otherwise some setup code would be needed to calculate a mask or to run a scalar version.
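The C listing for the IAXPY function referred to above appears to have been lost in extraction. The following is a reconstruction consistent with the description ("every element of y has an element of x multiplied by a and added to it", with 32-bit integers); the exact parameter order is an assumption:

```c
#include <stddef.h>
#include <stdint.h>

/* IAXPY: 32-bit integer variant of DAXPY, y[i] = a*x[i] + y[i],
   written in scalar linear form for readability. */
void iaxpy(size_t n, int32_t a, const int32_t *x, int32_t *y) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

Each of the scalar, SIMD, predicated-SIMD and vector variants discussed below computes exactly this function; only the instruction-level strategy differs.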
It can also be assumed, for simplicity, that the SIMD instructions have an option to automatically repeat scalar operands, like ARM NEON can.[19] If it does not, a "splat" (broadcast) must be used, to copy the scalar argument across a SIMD register: The time taken would be basically the same as a vector implementation of y = mx + c described above. Note that both x and y pointers are incremented by 16, because that is how long (in bytes) four 32-bit integers are. The decision was made that the algorithm shall only cope with 4-wide SIMD, therefore the constant is hard-coded into the program. Unfortunately for SIMD, the clue was in the assumption above, "that n is a multiple of 4" as well as "aligned access", which, clearly, is a limited specialist use-case. Realistically, for general-purpose loops such as in portable libraries, where n cannot be limited in this way, the overhead of setup and cleanup for SIMD in order to cope with non-multiples of the SIMD width can far exceed the instruction count inside the loop itself. Assuming worst-case that the hardware cannot do misaligned SIMD memory accesses, a real-world algorithm will: Eight-wide SIMD requires repeating the inner loop algorithm first with four-wide SIMD elements, then two-wide SIMD, then one (scalar), with a test and branch in between each one, in order to cover the first and last remaining SIMD elements (0 <= n <= 7). This more than triples the size of the code; in fact, in extreme cases it results in an order-of-magnitude increase in instruction count! This can easily be demonstrated by compiling the iaxpy example for AVX-512, using the options "-O3 -march=knl" to gcc. Over time as the ISA evolves to keep increasing performance, it results in ISA architects adding 2-wide SIMD, then 4-wide SIMD, then 8-wide and upwards. It can therefore be seen why AVX-512 exists in x86.
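The setup-and-cleanup burden described above can be sketched in plain C. This is a hedged illustration, not real intrinsics: the 4-wide "batch" is simulated with unrolled scalar code, and the scalar tail loop stands in for the extra cleanup instructions a fixed-width SIMD ISA forces on the programmer when n is not a multiple of the SIMD width:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of a 4-wide packed-SIMD IAXPY when n is NOT guaranteed to be
   a multiple of 4: a batched main loop plus a scalar cleanup tail. */
void iaxpy_simd4(size_t n, int32_t a, const int32_t *x, int32_t *y) {
    size_t i = 0;
    /* Main loop: whole 4-element batches (one SIMD op each, in
       hardware; unrolled here to show the fixed width). */
    for (; i + 4 <= n; i += 4) {
        y[i+0] = a * x[i+0] + y[i+0];
        y[i+1] = a * x[i+1] + y[i+1];
        y[i+2] = a * x[i+2] + y[i+2];
        y[i+3] = a * x[i+3] + y[i+3];
    }
    /* Cleanup tail: the 0..3 leftover elements, one at a time.  On a
       real fixed-width SIMD ISA this is extra code with its own tests
       and branches, duplicated again for each wider SIMD width. */
    for (; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

For 8-wide SIMD the tail itself would need a 4-wide pass, then a 2-wide pass, then scalar, which is exactly the code-size explosion the text describes.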
Without predication, the wider the SIMD width the worse the problems get, leading to massive opcode proliferation, degraded performance, extra power consumption and unnecessary software complexity.[20] Vector processors on the other hand are designed to issue computations of variable length for an arbitrary count, n, and thus require very little setup, and no cleanup. Even compared to those SIMD ISAs which have masks (but no setvl instruction), vector processors produce much more compact code because they do not need to perform explicit mask calculation to cover the last few elements (illustrated below). Assuming a hypothetical predicated (mask-capable) SIMD ISA, and again assuming that the SIMD instructions can cope with misaligned data, the instruction loop would look like this: Here it can be seen that the code is much cleaner, if a little more complex: at least, however, there is no setup or cleanup: on the last iteration of the loop, the predicate mask will be set to either 0b0000, 0b0001, 0b0011, 0b0111 or 0b1111, resulting in between 0 and 4 SIMD element operations being performed, respectively. One additional potential complication: some RISC ISAs do not have a "min" instruction, needing instead to use a branch or scalar predicated compare. It is clear how predicated SIMD at least merits the term "vector capable", because it can cope with variable-length vectors by using predicate masks. The final evolving step to a "true" vector ISA, however, is to not have any evidence in the ISA at all of a SIMD width, leaving that entirely up to the hardware. For Cray-style vector ISAs such as RVV, an instruction called "setvl" (set vector length) is used. The hardware first defines how many data values it can process in one "vector": this could be either actual registers or it could be an internal loop (the hybrid approach, mentioned above). This maximum amount (the number of hardware "lanes") is termed "MVL" (Maximum Vector Length).
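The predicated-SIMD assembly loop referred to above was evidently lost in extraction. A hedged C model of the same scheme follows: the inner per-lane loop and the explicit mask variable stand in for a single masked SIMD instruction, and the mask values 0b0001 through 0b1111 correspond to those named in the text:

```c
#include <stddef.h>
#include <stdint.h>

/* Model of a hypothetical 4-wide predicated SIMD IAXPY: every pass
   processes up to 4 elements under a per-element predicate mask, so
   no separate cleanup loop is needed. */
void iaxpy_predicated(size_t n, int32_t a, const int32_t *x, int32_t *y) {
    for (size_t i = 0; i < n; i += 4) {
        size_t remaining = n - i;
        /* Mask: 0b1111 for a full batch, else 0b0111/0b0011/0b0001
           for the final partial batch (this is the explicit mask
           calculation that a true vector ISA avoids). */
        unsigned mask = (remaining >= 4) ? 0xFu : (1u << remaining) - 1u;
        for (unsigned lane = 0; lane < 4; lane++)   /* one masked SIMD op */
            if (mask & (1u << lane))                /* lane enabled?      */
                y[i + lane] = a * x[i + lane] + y[i + lane];
    }
}
```

Note how the mask computation sits inside the loop: compact compared to non-predicated cleanup code, but still visible ISA-level evidence of the fixed width 4.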
Note that, as seen in the SX-Aurora and Videocore IV, MVL may be an actual hardware lane quantity or a virtual one. (Note: as mentioned in the ARM SVE2 Tutorial, programmers must not make the mistake of assuming a fixed vector width: consequently MVL is not a quantity that the programmer needs to know. This can be a little disconcerting after years of SIMD mindset.)[tone] On calling setvl with the number of outstanding data elements to be processed, "setvl" is permitted (essentially required) to limit that to the Maximum Vector Length (MVL) and thus returns the actual number that can be processed by the hardware in subsequent vector instructions, and sets the internal special register, "VL", to that same amount. ARM refers to this technique as "vector length agnostic" programming in its tutorials on SVE2.[21] Below is the Cray-style vector assembler for the same SIMD-style loop, above. Note that t0 (which, containing a convenient copy of VL, can vary) is used instead of hard-coded constants: This is essentially not very different from the SIMD version (which processes 4 data elements per loop), or from the initial scalar version (which processes just the one). n still contains the number of data elements remaining to be processed, but t0 contains the copy of VL – the number that is going to be processed in each iteration. t0 is subtracted from n after each iteration, and if n is zero then all elements have been processed. A number of things to note, when comparing against the Predicated SIMD assembly variant: Thus it can be seen, very clearly, how vector ISAs reduce the number of instructions. Also note that, just like the predicated SIMD variant, the pointers to x and y are advanced by t0 times four because they both point to 32-bit data, but n is decremented by straight t0.
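Since the Cray-style assembler listing itself did not survive extraction, the setvl scheme can be modelled in C. Everything here is illustrative: `MVL` is an assumed hardware constant the program never needs to know, `setvl` models the min(n, MVL) behaviour described above, and the inner loop stands in for the vector instructions operating on VL elements:

```c
#include <stddef.h>
#include <stdint.h>

#define MVL 8  /* assumed hardware Maximum Vector Length (illustrative) */

/* Model of "setvl": request n elements, receive min(n, MVL) — the
   count the hardware will actually process in subsequent vector ops. */
static size_t setvl(size_t n) { return n < MVL ? n : MVL; }

/* Vector-length-agnostic IAXPY in the Cray/RVV style: the same code
   is correct whether the hardware processes 4, 64 or 10,000 elements
   per batch, and there is no cleanup code at all. */
void iaxpy_vector(size_t n, int32_t a, const int32_t *x, int32_t *y) {
    while (n > 0) {
        size_t vl = setvl(n);            /* VL = min(n, MVL)            */
        for (size_t i = 0; i < vl; i++)  /* one vector op over VL elems */
            y[i] = a * x[i] + y[i];
        x += vl;                         /* advance pointers by VL ...  */
        y += vl;
        n -= vl;                         /* ... and decrement n by VL   */
    }
}
```

If n is zero on entry, the loop body never runs — the no-op behaviour the text describes for zero-length vectors — again with no special-case code.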
Compared to the fixed-size SIMD assembler there is very little apparent difference: x and y are advanced by the hard-coded constant 16, n is decremented by a hard-coded 4, so initially it is hard to appreciate the significance. The difference comes in the realisation that the vector hardware could be capable of doing 4 simultaneous operations, or 64, or 10,000; it would be the exact same vector assembler for all of them, and there would still be no SIMD cleanup code. Even compared to the predicate-capable SIMD, it is still more compact, clearer, more elegant and uses fewer resources. Not only is it a much more compact program (saving on L1 cache size), but as previously mentioned, the vector version can issue far more data processing to the ALUs, again saving power because instruction decode and issue can sit idle. Additionally, the number of elements going in to the function can start at zero. This sets the vector length to zero, which effectively disables all vector instructions, turning them into no-ops, at runtime. Thus, unlike non-predicated SIMD, even when there are no elements to process there is still no wasted cleanup code. This example starts with an algorithm which involves reduction. Just as with the previous example, it will be first shown in scalar instructions, then SIMD, and finally vector instructions, starting in C: Here, an accumulator (y) is used to sum up all the values in the array, x. The scalar version of this would load each of x, add it to y, and loop: This is very straightforward. "y" starts at zero, 32-bit integers are loaded one at a time into r1, added to y, and the address of the array "x" moved on to the next element in the array. This is where the problems start. SIMD by design is incapable of doing arithmetic operations "inter-element". Element 0 of one SIMD register may be added to Element 0 of another register, but Element 0 may not be added to anything other than another Element 0.
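The C listing for the reduction appears to have been stripped; a reconstruction consistent with the description ("an accumulator (y) is used to sum up all the values in the array, x") follows. The function name and parameter order are assumptions:

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar reduction: sum all elements of x into the accumulator y. */
int32_t sum_scalar(size_t n, const int32_t *x) {
    int32_t y = 0;                 /* accumulator starts at zero     */
    for (size_t i = 0; i < n; i++)
        y += x[i];                 /* load one element, add, advance */
    return y;
}
```

Note that every add depends on the previous one through y — exactly the inter-element dependency that packed SIMD, as explained below, cannot express directly.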
This places some severe limitations on potential implementations. For simplicity it can be assumed that n is exactly 8: At this point four adds have been performed: but with 4-wide SIMD being incapable by design of adding x[0]+x[1], for example, things go rapidly downhill just as they did with the general case of using SIMD for general-purpose IAXPY loops. To sum the four partial results, two-wide SIMD can be used, followed by a single scalar add, to finally produce the answer, but, frequently, the data must be transferred out of dedicated SIMD registers before the last scalar computation can be performed. Even with a general loop (n not fixed), the only way to use 4-wide SIMD is to assume four separate "streams", each offset by four elements. Finally, the four partial results have to be summed. Other techniques involve shuffle: examples online can be found for AVX-512 of how to do "Horizontal Sum".[22][23] Aside from the size of the program and the complexity, an additional potential problem arises if floating-point computation is involved: the fact that the values are not being summed in strict order (four partial results) could result in rounding errors. Vector instruction sets have arithmetic reduction operations built in to the ISA. If it is assumed that n is less than or equal to the maximum vector length, only three instructions are required: The code when n is larger than the maximum vector length is not that much more complex, and follows a similar pattern to the first example ("IAXPY"). The simplicity of the algorithm is stark in comparison to SIMD. Again, just as with the IAXPY example, the algorithm is length-agnostic (even on embedded implementations where the maximum vector length could be only one). Implementations in hardware may, if they are certain that the right answer will be produced, perform the reduction in parallel.
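The n = 8 SIMD reduction described above (one 4-wide add producing four partial sums, then a 2-wide step, then a final scalar add) can be sketched in C; as before, the unrolled loops stand in for SIMD instructions and the names are illustrative:

```c
#include <stdint.h>

/* 4-wide SIMD-style reduction for n == 8, as described in the text:
   one lane-wise 4-wide add produces four partial sums, which must
   then be combined "horizontally" — a 2-wide step, then a final
   scalar add.  Note the summation order differs from strict
   left-to-right, which matters for floating point. */
int32_t sum_simd4_n8(const int32_t x[8]) {
    int32_t partial[4];
    for (int lane = 0; lane < 4; lane++)   /* one 4-wide SIMD add:   */
        partial[lane] = x[lane] + x[lane + 4]; /* lane k = x[k]+x[k+4] */
    int32_t lo = partial[0] + partial[2];  /* 2-wide horizontal step */
    int32_t hi = partial[1] + partial[3];
    return lo + hi;                        /* final scalar add       */
}
```

Even for this fixed, friendly n the code needs three distinct stages; a general n reintroduces all of the stream-splitting and cleanup problems seen with IAXPY.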
Some vector ISAs offer a parallel reduction mode as an explicit option, for when the programmer knows that any potential rounding errors do not matter, and low latency is critical.[24] This example again highlights a fundamental difference between true vector processors and those SIMD processors, including most commercial GPUs, which are inspired by features of vector processors. Compared to any SIMD processor claiming to be a vector processor, the order-of-magnitude reduction in program size is almost shocking. However, this level of elegance at the ISA level has quite a high price tag at the hardware level: overall, then, there is a choice to be made between the two approaches, and these stark differences are what distinguish a vector processor from one that has SIMD. Where many SIMD ISAs borrow or are inspired by the list below, the typical features that a vector processor will have are:[25][26][27] With many 3D shader applications needing trigonometric operations as well as short vectors for common operations (RGB, ARGB, XYZ, XYZW), support for the following is typically present in modern GPUs, in addition to those found in vector processors: Introduced in ARM SVE2 and RISC-V RVV is the concept of speculative sequential vector loads. ARM SVE2 has a special register named "First Fault Register",[36] whereas RVV modifies (truncates) the vector length (VL).[37] The basic principle of ffirst is to attempt a large sequential vector load, but to allow the hardware to arbitrarily truncate the actual amount loaded to either the amount that would succeed without raising a memory fault or simply to an amount (greater than zero) that is most convenient. The important factor is that subsequent instructions are notified or may determine exactly how many loads actually succeeded, using that quantity to only carry out work on the data that has actually been loaded.
Contrast this situation with SIMD, which has a fixed (inflexible) load width and fixed data processing width, is unable to cope with loads that cross page boundaries, and, even if it could, is unable to adapt to what actually succeeded; yet, paradoxically, if the SIMD program were to even attempt to find out in advance (in each inner loop, every time) what might optimally succeed, those instructions only serve to hinder performance because they would, by necessity, be part of the critical inner loop. This begins to hint at the reason why ffirst is so innovative, and is best illustrated by memcpy or strcpy when implemented with standard 128-bit non-predicated non-ffirst SIMD. For IBM POWER9 the number of hand-optimised instructions to implement strncpy is in excess of 240.[38] By contrast, the same strncpy routine in hand-optimised RVV assembler is a mere 22 instructions.[39] The above SIMD example could potentially fault and fail at the end of memory, due to attempts to read too many values: it could also cause significant numbers of page or misaligned faults by similarly crossing over boundaries. In contrast, by allowing the vector architecture the freedom to decide how many elements to load, the first part of a strncpy, if beginning initially on a sub-optimal memory boundary, may return just enough loads such that on subsequent iterations of the loop the batches of vectorised memory reads are optimally aligned with the underlying caches and virtual memory arrangements. Additionally, the hardware may choose to use the opportunity to end any given loop iteration's memory reads exactly on a page boundary (avoiding a costly second TLB lookup), with speculative execution preparing the next virtual memory page whilst data is still being processed in the current loop. All of this is determined by the hardware, not the program itself.[40] Let r be the vector speed ratio and f be the vectorization ratio.
If the time taken for the vector unit to add an array of 64 numbers is 10 times faster than its equivalent scalar counterpart, r = 10. Also, if the total number of operations in a program is 100, out of which only 10 are scalar (after vectorization), then f = 0.9, i.e., 90% of the work is done by the vector unit. The achievable speedup follows as: r / [(1 − f)·r + f]. So, even if the performance of the vector unit is very high (r = ∞), the speedup is less than 1/(1 − f), which shows that the ratio f is crucial to performance. This ratio depends on the efficiency of the compilation, such as the adjacency of the elements in memory.
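The speedup formula above can be checked numerically. A short sketch (the function name is my own) evaluating it for the example values in the text, and for a very large r to show the 1/(1 − f) ceiling:

```python
def vector_speedup(r, f):
    """Speedup of a partly vectorized program.

    r: vector/scalar speed ratio; f: fraction of work done by the
    vector unit. Implements speedup = r / [(1 - f)*r + f].
    """
    return r / ((1 - f) * r + f)

# The example from the text: r = 10, f = 0.9
print(vector_speedup(10, 0.9))        # ~5.26, i.e. 10/1.9

# Even with an effectively infinite vector unit, the speedup is capped
# at 1 / (1 - f) = 10 when f = 0.9:
print(vector_speedup(1e12, 0.9))      # ~10.0
```

This is the same structure as Amdahl's law: the scalar fraction (1 − f) dominates once r is large, which is why raising f (better vectorization) matters more than raising r.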
https://en.wikipedia.org/wiki/Vector_processing
Deadloch is an Australian black comedy crime mystery television series that premiered on Amazon Prime Video on 2 June 2023. Created by Kate McCartney and Kate McLennan, the series is set in Deadloch, a fictional town in Tasmania, and stars Kate Box, Madeleine Sami, Alicia Gardiner, and Nina Oyama. Deadloch was produced by Amazon Studios. The series was renewed for a second season in July 2024.[1] The beguilingly sleepy settlement of Deadloch, on Tasmania's coastline, is shaken when the body of a local man turns up on the beach. Two female detectives reluctantly take charge of the investigation together: the fastidious Senior Sergeant Dulcie Collins and the brash, reckless Detective Eddie Redcliffe from Darwin, aided by overeager Constable Abby Matsuda and ditsy Officer Sven Alderman. The murder coincides with the town's annual "Winter Feastival", a celebration of local arts, cuisine and culture. The investigation forces Dulcie and Eddie to cope with each other's drastically opposite investigation styles as they discover secrets being hidden in a town struggling to disguise the deep rift that is slowly splitting it and the lives of its residents.[2][need quotation to verify] Kate McCartney and Kate McLennan are the showrunners and producers of the series. "The Kates", as they are nicknamed, were inspired to write a comedy from a set-up similar to that of the UK series Broadchurch after they both watched that series, so much so that the working title of the project was "Funny Broadchurch". Actress Nina Oyama told the Sydney Morning Herald: "The show is first and foremost a crime show, because of the way it's laid out, and the way people will keep returning to it will be for the crime-based and mystery-based reasons... But it's also very funny." There was also an intention in the production to subvert some of the typical genre tropes and to reverse who are usually considered the victims in society.
There is a sub-plot of a First Nations storyline around local teenagers played by Leonie Whyman and Kartanya Maynard.[6] Deadloch was written by McCartney and McLennan along with Kim Wilson, Christian White, Anchuli Felicia King, Kirsty Fisher, and Madeleine Sami. Production on the series got underway in February 2022. Directors on the series included Ben Chessell, Gracie Otto, and Beck Cole. Production is by Andy Walker for Prime Video, Guesswork Television, and OK Great Productions, with Fiona McConaghy as co-producer. McCartney, McLennan, Kevin Whyte, and Tanya Phegan were executive producers.[7] The score was written by Amanda Brown.[8] The series was renewed for a second season on 8 July 2024.[1] Filming took place in southern Tasmania, outside Hobart, around Cygnet, Snug and Kingston.[9] Filming for season 2 moved from Tasmania to the Northern Territory.[1] The series premiered on Amazon Prime Video on 2 June 2023 with three episodes, with new episodes available weekly up to 7 July 2023.[10][11] The series was received positively. On the review aggregator website Rotten Tomatoes, 100% of 22 critics' reviews are positive, with an average rating of 8.0/10. The website's consensus reads: "An irreverent twist on the crime procedural, Deadloch's addictive mixture of mystery and mordant humor makes most of its corpse-strewn competition look comparably stiff."[12] In a favourable review, Luke Buckmaster of The Guardian gave the series four out of five stars and praised creators Kate McCartney and Kate McLennan: "They are moving into the next phase of their career, with Deadloch, a narratively richer series that's dark and dramatic, and often also very funny."[13] In a positive review for Grazia magazine, Pemi Bakshi said that "the eight-part series blends humour and commentary to bring us a wickedly entertaining take on the detective show genre."[14] In a somewhat more mixed review for the website Screen Hub, Stephen A.
Russell gave a rating of three stars out of five and commented that "While Deadloch's far from dead on arrival, its enervating lack of structural ambition did kill a lot of the buzz I had going in."[15]
https://en.wikipedia.org/wiki/Deadloch
Deathlock may refer to:
https://en.wikipedia.org/wiki/Deathlock_(disambiguation)
Dreadlocks, also known as dreads or locs, are a hairstyle made of rope-like strands of matted hair. Dreadlocks can form naturally in very curly hair, or they can be created with techniques like twisting, backcombing, or crochet.[1][2][3][4] The word dreadlocks is usually understood to come from Jamaican Creole dread, "member of the Rastafarian movement who wears his hair in dreadlocks" (compare Nazirite), referring to their dread or awe of God.[5] An older name for dreadlocks was elflocks, from the notion that elves had matted the locks in people's sleep. Other origins have been proposed. Some authors trace the term to the Mau Mau, a group of whom apparently adopted it from British colonialists in 1959 as a reference to their "dreadful" hair. In their 2014 book Hair Story: Untangling the Roots of Black Hair in America, Ayana Byrd and Lori Tharps claimed that the name dredlocs originated in the time of the slave trade: when transported Africans disembarked from the slave ships after spending months confined in unhygienic conditions, whites would report that their undressed and matted kinky hair was "dreadful". According to them, it is due to these circumstances that many people wearing the style today drop the a in dreadlock to avoid negative implications.[6] The word dreadlocks refers to locks of entangled hair.[7] Several languages have names for these locks: According to Sherrow in Encyclopedia of Hair: A Cultural History, dreadlocks date back to ancient times in various cultures. In ancient Egypt, Egyptians wore locked hairstyles, and wigs appeared on bas-reliefs, statuary and other artifacts.[14] Mummified remains of Egyptians with locked wigs have also been recovered from archaeological sites.[15] According to Maria Delongoria, braided hair was worn by people in the Sahara desert since 3000 BCE. Dreadlocks were also worn by followers of Abrahamic religions. For example, Ethiopian Coptic Bahatowie priests adopted dreadlocks as a hairstyle before the fifth century CE (400 or 500 CE).
Locking hair was practiced by some ethnic groups in East, Central, West, and Southern Africa.[16][17][18] Pre-Columbian Aztec priests were described in Aztec codices (including the Durán Codex, the Codex Tudela and the Codex Mendoza) as wearing their hair untouched, allowing it to grow long and matted.[19] Bernal Díaz del Castillo records: There were priests with long robes of black cloth... The hair of these priests was very long and so matted that it could not be separated or disentangled, and most of them had their ears scarified, and their hair was clotted with blood. The earliest known possible depictions of dreadlocks in Europe date back as far as 1600–1500 BCE in the Minoan Civilization, centered in Crete (now part of Greece).[21] Frescoes discovered on the Aegean island of Thera (modern Santorini, Greece) portray individuals with long braided hair or long dreadlocks.[20][23][24][25] Another source describes the hair of the boys in the Akrotiri Boxer Fresco as long tresses, not dreadlocks. Tresses of hair are defined by Collins Dictionary as braided hair, braided plaits, or long loose curls of hair.[26][27][28] In Senegal, the Baye Fall, followers of the Mouride movement, a Sufi movement of Islam founded in 1887 CE by Shaykh Aamadu Bàmba Mbàkke, are famous for growing dreadlocks and wearing multi-colored gowns.[29] Cheikh Ibra Fall, founder of the Baye Fall school of the Mouride Brotherhood, popularized the style by adding a mystic touch to it.[30] This sect of Islam in Senegal, whose Muslims wear ndjan (dreadlocks), aimed to Africanize Islam. Dreadlocks to this group of Islamic followers symbolize their religious orientation.[31][32] Jamaican Rastas also reside in Senegal and have settled in areas near Baye Fall communities. Baye Fall and Jamaican Rastas have similar cultural beliefs regarding dreadlocks.
Both groups wear knitted caps to cover their locs, and wear locs for religious and spiritual purposes.[33] Male members of the Baye Fall religion wear locs to detach from mainstream Western ideals.[34] In the 1970s, Americans and Britons attended reggae concerts and were exposed to various aspects of Jamaican culture, including dreadlocks. Hippies related to the Rastafarian idea of rejecting capitalism and colonialism, symbolized by the name "Babylon". Rastafarians rejected Babylon in multiple ways, including by wearing their hair naturally in locs to defy Western standards of beauty. The 1960s was the height of the civil rights movement in the U.S., and some White Americans joined Black people in the fight against inequality and segregation and were inspired by Black culture. As a result, some White people joined the Rastafarian movement. Dreadlocks were not a common hairstyle in the United States, but by the 1970s, some White Americans were inspired by reggae music, the Rastafarian movement, and African-American hair culture and started wearing dreadlocks.[35][36] According to authors Bronner and Dell Clark, the clothing styles worn by hippies in the 1960s and 1970s were copied from African-American culture. The word hippie comes from the African-American slang word hip. African-American dress, hairstyles such as braids (often decorated with beads) and dreadlocks, and language were copied by hippies and developed into a new countercultural movement.[37][38] In Europe in the 1970s, hundreds of Jamaicans and other Caribbean people immigrated to the metropolitan centers of London, Birmingham, Paris, and Amsterdam. Communities of Jamaicans, Caribbeans, and Rastas emerged in these areas. Thus Europeans in these metropolitan cities were introduced to Black cultures from the Caribbean and Rastafarian practices and were inspired by Caribbean culture, leading some of them to adopt Black hair culture, music, and religion.
However, the strongest influence of the Rastafari religion is among Europe's Black population.[39][40] When reggae music, which espoused Rastafarian ideals, gained popularity and mainstream acceptance in the 1970s, thanks to Bob Marley's music and cultural influence, dreadlocks (often called "dreads") became a notable fashion statement worldwide, and have been worn by prominent authors, actors, athletes, and rappers.[41][42] Rastafari influenced its members worldwide to embrace dreadlocks. Black Rastas loc their hair to embrace their African heritage and accept African features as beautiful, such as dark skin tones, Afro-textured hair, and African facial features.[43] Hip hop and rap artists such as Lauryn Hill, Lil Wayne, T-Pain, Snoop Dogg, J. Cole, Wiz Khalifa, Chief Keef, Lil Jon, and others wear dreadlocks, which further popularized the hairstyle in the 1990s, early 2000s, and the present day. Dreadlocks are a part of hip-hop fashion and reflect Black cultural music of liberation and identity.[44][45][46][47] Many rappers and Afrobeat artists in Uganda wear locs, such as Navio, Delivad Julio, Fik Fameica, Vyper Ranking, Byaxy, Liam Voice, and other artists. From reggae music to hip hop, rap, and Afrobeat, Black artists in the African diaspora wear locs to display their Black identity and culture.[48][49][50] Youth in Kenya who are fans of rap and hip hop music, and Kenyan rappers and musicians, wear locs to connect to the history of the Mau Mau freedom fighters, who wore locs as symbols of anti-colonialism, and to Bob Marley, who was a Rasta.[51] Hip hop and reggae fashion spread to Ghana and fused with traditional Ghanaian culture. Ghanaian musicians wear dreadlocks incorporating reggae symbols and hip hop clothes mixed with traditional Ghanaian textiles, such as wearing Ghanaian headwraps to hold their locs.[52][53] Ghanaian women wear locs as a symbol of African beauty.
The beauty industry in Ghana believes locs are a traditional African hair practice and markets hair care products to promote natural African hairstyles such as afros and locs.[54] Previous generations of Black artists have inspired younger contemporary Black actresses to loc their hair, such as Chloe Bailey, Halle Bailey, and R&B and pop music singer Willow Smith. More Black actors in Hollywood are choosing to loc their hair to embrace their Black heritage.[55] Although more Black women in Hollywood and the beauty and music industries are wearing locs, there has never been a Black Miss America winner with locs, because there is pushback in the fashion industry against Black women's natural hair. For example, model Adesuwa Aighewi locked her hair and was told she might not receive any casting calls because of her dreadlocks. Some Black women in modeling agencies are forced to straighten their hair. However, more Black women are resisting and choosing to wear Black hairstyles such as afros and dreadlocks in fashion shows and beauty pageants.[56][57] For example, in 2007 Miss Universe Jamaica and Rastafarian Zahra Redwood was the first Black woman to break the barrier on a world pageant stage when she wore locs, paving the way and influencing other Black women to wear locs in beauty pageants. In 2015, Miss Jamaica World Sanneta Myrie was the first contestant to wear locs to the Miss World Pageant.[58] In 2018, Dee-Ann Kentish-Rogers of Britain was crowned Miss Universe Great Britain wearing her locs and became the first Black British woman to win the competition with natural locs.[59][60] Hollywood cinema often uses the dreadlock hairstyle as a prop in movies for villains and pirates. According to author Steinhoff, this appropriates dreadlocks and removes them from their original meaning of Black heritage to one of dread and otherness. In the movie Pirates of the Caribbean, the pirate Jack Sparrow wears dreadlocks.
Dreadlocks are used in Hollywood to mystify a character and make them appear threatening or living a life of danger. In the movie The Curse of the Black Pearl, pirates were dressed in dreadlocks to signify their cursed lives.[61] Locks have been worn for various reasons in many cultures and ethnic groups around the world throughout history. Their use has also been raised in debates about cultural appropriation.[62][63][64][65][66][67] The practice of wearing braids and dreadlocks in Africa dates back to 3000 BCE in the Sahara Desert. It has been commonly thought that other cultures influenced the dreadlock tradition in Africa. The Kikuyu and Somali wear braided and locked hairstyles.[68][69] Warriors among the Fulani, Wolof, and Serer in Mauritania, and the Mandinka in Mali, were known for centuries to have worn cornrows when young and dreadlocks when old. In West Africa, the water spirit Mami Wata is said to have long locked hair. Mami Wata's spiritual powers of fertility and healing come from her dreadlocks.[70][71] West African spiritual priests called Dada wear dreadlocks to venerate Mami Wata in her honor as spiritual consecrations.[72] Some Ethiopian Christian monks and Bahatowie priests of the Ethiopian Coptic Church lock their hair for religious purposes.[73][74] In Yorubaland, Aladura church prophets called woolii mat their hair into locs, wear long blue, red, white, or purple garments with caps, and carry iron rods used as staffs.[75] The prophets lock their hair in accordance with the Nazarene vow in the Christian Bible. This is not to be confused with the Rastafari religion, which was started in the 1930s. The Aladura church was founded in 1925 and syncretizes indigenous Yoruba beliefs about dreadlocks with Christianity.[76] Moses Orimolade Tunolase was the founder of the first African Pentecostal movement, started in 1925 in Nigeria.
Tunolase wore dreadlocks, and members of his church wear dreadlocks in his honor and for spiritual protection.[77] The Yoruba word Dada is given to children in Nigeria born with dreadlocks.[78][79] Some Yoruba people believe children born with dreadlocks have innate spiritual powers, and that cutting their hair might cause serious illness. Only the child's mother can touch their hair. "Dada children are believed to be young gods; they are often offered at spiritual altars for chief priests to decide their fate. Some children end up becoming spiritual healers and serve at the shrine for the rest of their lives." If their hair is cut, it must be cut by a chief priest and placed in a pot of water with herbs, and the mixture is used to heal the child if they get sick. Among the Igbo, Dada children are said to be reincarnated Jujuists of great spiritual power because of their dreadlocks.[80][81] Children born with dreadlocks are viewed as special. However, adults with dreadlocks are viewed negatively. Yoruba Dada children's dreadlocks are shaved at a river, and their hair is grown back "tamed" into a hairstyle that conforms to societal standards. The child continues to be recognized as mysterious and special.[82] It is believed that the hair of Dada children was braided in heaven before they were born and will bring good fortune and wealth to their parents. When the child is older, the hair is cut during a special ritual.[83] In Yoruba mythology, the Orisha Yemoja gave birth to a Dada who is a deified king in Yoruba.[84][85] However, dreadlocks are viewed in a negative light in Nigeria due to their stereotypical association with gangs and criminal activity; men with dreadlocks face profiling from Nigerian police.[86][87] In Ghana, among the Ashanti people, Okomfo priests are identified by their dreadlocks. They are not allowed to cut their hair and must allow it to mat and lock naturally.
Locs are symbols of higher power reserved for priests.[88][89][90] Other spiritual people in Southern Africa who wear dreadlocks are Sangomas. Sangomas wear red and white beaded dreadlocks to connect to ancestral spirits. Two African men were interviewed, explaining why they chose to wear dreadlocks. "One – Mr. Ngqula – said he wore his dreadlocks to obey his ancestors' call, given through dreams, to become a 'sangoma' in accordance with his Xhosa culture. Another – Mr. Kamlana – said he was instructed to wear his dreadlocks by his ancestors and did so to overcome 'intwasa', a condition understood in African culture as an injunction from the ancestors to become a traditional healer, from which he had suffered since childhood."[91][92] In Zimbabwe, there is a tradition of locking hair called mhotsi, worn by spirit mediums called svikiro. The Rastafarian religion spread to Zimbabwe and influenced some women in Harare to wear locs because they believe in the Rastafari's pro-Black teachings and rejection of colonialism.[93] Maasai warriors in Kenya are known for their long, thin, red dreadlocks, dyed with red root extracts or red ochre (red earth clay).[94] The Himba women in Namibia are also known for their red-colored dreadlocks. Himba women use red earth clay mixed with butterfat and roll their hair with the mixture. They use natural moisturizers to maintain the health of their hair. Hamar women in Ethiopia wear red-colored locs made using red earth clay.[95] In Angola, Mwila women create thick dreadlocks covered in herbs, crushed tree bark, dried cow dung, butter, and oil. The thick dreadlocks are dyed using oncula, an ochre of red crushed rock.[96][97][98] In Southern, Eastern, and Northern Africa, Africans use red ochre as sunscreen and cover their dreadlocks and braids with ochre to hold their hair in styles, and as a hair moisturizer by mixing it with fats.
Red ochre has a spiritual meaning of fertility, and in Maasai culture the color red symbolizes bravery and is used in ceremonies and dreadlock hair traditions.[99][100] Historians note that West and Central African people braid their hair to signify age, gender, rank, role in society, and ethnic affiliation. It is believed braided and locked hair provides spiritual protection, connects people to the spirit of the earth, bestows spiritual power, and enables people to communicate with the gods and spirits.[101][102][103] In the 15th and 16th centuries, the Atlantic slave trade saw Black Africans forcibly transported from Sub-Saharan Africa to North America and, upon their arrival in the New World, their heads would be shaved in an effort to erase their culture.[104][105][106][107] Enslaved Africans spent months in slave ships, and their hair matted into dreadlocks that European slave traders called "dreadful".[108][109] In the African diaspora, people loc their hair to have a connection to the spirit world and receive messages from spirits. It is believed locs of hair are antennas making the wearer receptive to spiritual messages.[110] Other reasons people loc their hair are for fashion and to maintain the health of natural hair, also called kinky hair.[111] In the 1960s and 1970s in the United States, the Black Power movement, the Black is Beautiful movement, and the natural hair movement inspired many Black Americans to wear their hair natural in afros, braids, and locked hairstyles.[112][113] The Black is Beautiful cultural movement spread to Black communities in Britain. In the 1960s and 1970s, Black people in Britain were aware of the civil rights movement and other cultural movements in Black America and the social and political changes occurring at the time.
The Black is Beautiful movement and Rastafari culture in Europe influenced Afro-Britons to wear their hair in natural loc styles and afros as a way to fight against racism and Western standards of beauty, and to develop unity among Black people of diverse backgrounds.[114][115] From the twentieth century to the present day, dreadlocks have been symbols of Black liberation and are worn by revolutionaries, activists, womanists, and radical artists in the diaspora.[116][117] For example, Black American literary author Toni Morrison wore locs, and Alice Walker wears locs to reconnect with their African heritage.[118] Natural Black hairstyles worn by Black women are seen as not feminine and unprofessional in some American businesses.[119] Wearing locs in the diaspora signifies a person's racial identity and defiance of European standards of beauty, such as straight blond hair.[120] Locs encourage Black people to embrace other aspects of their culture that are tied to Black hair, such as wearing African ornaments like cowrie shells, beads, and African headwraps that are sometimes worn with locs.[121][122] Some Black Canadian women wear locs to connect to the global Black culture. Dreadlocks unite Black people in the diaspora because wearing locs has the same meaning in areas of the world where there are Black people: opposing Eurocentric standards of beauty and sharing a Black and African diaspora identity.[123][124] For many Black women in the diaspora, locs are a fashion statement to express individuality and the beauty and versatility of Black hair. Locs are also a protective hairstyle that maintains the health of hair by keeping kinky hair in natural locs or faux locs. To protect their natural hair from the elements during the changing seasons, Black women wear certain hairstyles to protect and retain the moisture in their hair. Black women wear soft locs as a protective hairstyle because they enclose natural hair inside them, protecting it from environmental damage.
This protective soft loc style is created by "wrapping hair around the natural hair or crocheting pre-made soft locs into cornrows."[125] In the diaspora, Black men and women wear different styles of dreadlocks, each requiring a different method of care. Freeform locs are formed organically by not combing or manipulating the hair. There are also goddess locs, faux locs, sister locs, twisted locs, Rasta locs, crinkle locs, invisible locs, and other loc styles.[126][127][128] Some Indigenous Australians of North West and North Central Australia, as well as the Gold Coast region of Eastern Australia, have historically worn their hair in a locked style, sometimes also having long beards that are fully or partially locked. Traditionally, some wear the dreadlocks loose, while others wrap the dreadlocks around their heads or bind them at the back of the head.[129] In North Central Australia, the tradition is for the dreadlocks to be greased with fat and coated with red ochre, which assists in their formation.[130] In 1931 in Warburton Range, Western Australia, a photograph was taken of an Aboriginal Australian man with dreadlocks.[131] In the 1970s, hippies from Australia's southern region moved to Kuranda, where they introduced the Rastafari movement, as expressed in the reggae music of Peter Tosh and Bob Marley, to the Buluwai people. Aboriginal Australians found parallels between the struggles of Black people in the Americas and their own racial struggles in Australia. Willie Brim, a Buluwai man born in the 1960s in Kuranda, identified with Tosh's and Marley's spiritually conscious music, and, inspired particularly by Peter Tosh's album Bush Doctor, in 1978 he founded a reggae band called Mantaka, after the area alongside the Barron River where he grew up. He combined his people's cultural traditions with the reggae guitar he had played since he was young, and his band's music reflects Buluwai culture and history.
Now a leader of the Buluwai people and a cultural steward, Brim and his band send an "Aboriginal message" to the world. He and other Buluwai people wear dreadlocks as a native part of their culture and not as an influence from the Rastafari religion. Although Brim was inspired by reggae music, he is not a Rastafarian, as he and his people have their own spirituality.[132] Foreigners visiting Australia think the Buluwai people wearing dreadlocks were influenced by the Rastafarian movement, but the Buluwai say their ancestors wore dreadlocks before the movement began.[133] Some Indigenous Australians wear an Australian Aboriginal flag (a symbol of unity and Indigenous identity in Australia) tied around their head to hold their dreadlocks.[134] Within Tibetan Buddhism and other more esoteric forms of Buddhism, locks have occasionally been substituted for the more traditional shaved head. The most recognizable of these groups are known as the Ngagpas of Tibet. For Buddhists of these particular sects and degrees of initiation, their locked hair is not only a symbol of their vows but an embodiment of the particular powers they are sworn to carry.[135] Hevajra Tantra 1.4.15 states that the practitioner of particular ceremonies "should arrange his piled up hair" as part of the ceremonial protocol.[136] Archeologists found a statue of a male deity, Shiva, with dreadlocks in Stung Treng province in Cambodia.[137] In a sect of tantric Buddhism, some initiates wear dreadlocks.[138][139] This sect, in which initiates wear dreadlocks, is called weikza and Passayana or Vajrayana Buddhism, and is practiced in Burma.
The initiates spend years in the forest with this practice, and when they return to the temples, they do not shave their heads to reintegrate.[140] The practice of wearing a jaṭā (dreadlocks) is observed in modern-day Hinduism,[142][143][144] most notably by sadhus who worship Shiva.[145][146] The Kapalikas, first commonly referenced in the 6th century CE, were known to wear the jaṭā[147] as a form of deity imitation of the deva Bhairava-Shiva.[148] Shiva is often depicted with dreadlocks. According to Ralph Trueb, "Shiva's dreadlocks represent the potent power of his mind that enables him to catch and bind the unruly and wild river goddess Ganga."[149] In a village in Pune, some women, such as Savitha Uttam Thorat, hesitate to cut their long dreadlocks because it is believed doing so will cause misfortune or bring down divine wrath. Dreadlocks worn by the women in this region of India are believed to be possessed by the goddess Yellamma. Cutting off the hair is believed to bring misfortune onto the woman, because having dreadlocks is considered a gift from the goddess Yellamma (also known as Renuka).[150] Some of the women have long and heavy dreadlocks that put a lot of weight on their necks, causing pain and limited mobility.[151][152] Some in local government and the police in the Maharashtra region demand the women cut their hair, because the religious practice of Yellamma forbids women from washing and cutting their dreadlocks, causing health issues.[153] These locks of hair dedicated to Yellamma are called jade, believed to be evidence of divine presence. However, in Southern India, people advocate for the end of the practice.[154] The goddess Angala Parameshvari in Indian mythology is said to have cataik-kari, matted hair (dreadlocks). Women healers in India are identified by their locs of hair and are respected in spiritual rituals because they are believed to be connected to goddesses.
A woman who has a jata is believed to derive her spiritual powers, or shakti, from her dreadlocks.[155] Rastafari movement dreadlocks are symbolic of the Lion of Judah and were inspired by the Nazarites of the Bible.[156] Jamaicans locked their hair after seeing images of Ethiopians with locs fighting Italian soldiers during the Second Italo-Ethiopian War. The afro is the preferred hairstyle worn by Ethiopians. During the Italian invasion, Ethiopians vowed not to cut their hair, following the Biblical example of Samson, who got his strength from his seven locks of hair, until emperor Ras Tafari Makonnen (Haile Selassie) was returned from exile and Ethiopia was liberated.[157] Scholars also state that another, indirect Ethiopian influence for Rastas locking their hair was the Bahatowie priests in Ethiopia and their tradition of wearing dreadlocks for religious reasons since the 5th century AD.[158] Another African influence for Rastas wearing locs was seeing photos of Mau Mau freedom fighters with locs in Kenya fighting against the British authorities in the 1950s. Dreadlocks to the Mau Mau freedom fighters were a symbol of anti-colonialism, and this symbolism was an inspiration for Rastas to loc their hair in opposition to racism and to promote an African identity.[159][160][161] The branch of Rastafari that was inspired to loc their hair after the Mau Mau freedom fighters was the Nyabinghi Order, previously called Young Black Faith. Young Black Faith were considered a radical group of younger Rastafari members. Eventually, other Rastafari groups started locking their hair.[162] In the Rastafarian belief, people wear locs for a spiritual connection to the universe and the spirit of the earth. It is believed that by shaking their locs, they will bring down the destruction of Babylon.
Babylon in the Rastafarian belief is systemic racism, colonialism, and any system of economic and social oppression of Black people.[163][164] Locs are also worn to defy European standards of beauty and to help develop a sense of Black pride and acceptance of African features as beautiful.[165][166] In another branch of Rastafari, the Boboshanti Order of Rastafari, dreadlocks are worn to display a Black person's identity and as social protest against racism.[167] The Bobo Ashanti are one of the strictest Mansions of Rastafari. They cover their locs with bright turbans and wear long robes, and can usually be distinguished from other Rastafari members because of this.[168] Other Rastas wear a Rastacap to tuck their locs under the cap.[169] The Bobo Ashanti ("Bobo" meaning "black" in Iyaric;[170] "Ashanti" in reference to the Ashanti people of Ghana, whom the Bobos claim are their ancestors)[171] were founded by Emmanuel Charles Edwards in 1959 during the period known as the "groundation", when many protests took place calling for the repatriation of African descendants and slaves to Kingston. A Boboshanti branch spread to Ghana because of repatriated Jamaicans and other Black Rastas moving to Ghana. Prior to Rastas living in Ghana, Ghanaians and West Africans already had their own beliefs about locked hair. Dreadlocks in West Africa are believed to bestow children born with locked hair with spiritual power, and Dada children, that is, those born with dreadlocks, were believed to have been given to their parents by water deities. Rastas and Ghanaians have similar beliefs about the spiritual significance of dreadlocks, such as not touching a person's or child's locs, maintaining clean locs, locs' connections to spirits, and locs bestowing spiritual powers on the wearer.[172] Dreadlocks have become a popular hairstyle among professional athletes. However, some athletes have faced discrimination and been forced to cut their dreadlocks.
For example, in December 2018, a Black high school wrestler in New Jersey was forced to cut his dreadlocks 90 seconds before his match, sparking a civil rights case that led to the passage of the CROWN Act in 2019.[173] In professional American football, the number of players with dreadlocks has increased since Al Harris and Ricky Williams first wore the style during the 1990s. In 2012, about 180 National Football League players wore dreadlocks. A significant number of these players are defensive backs, who are less likely to be tackled than offensive players. According to the NFL's rulebook, a player's hair is considered part of the "uniform", meaning the locks are fair game when attempting to bring a player down.[174][175] In the NBA, Brooklyn Nets guard Jeremy Lin, an Asian American, garnered mild controversy over his choice of dreadlocks. Former NBA player Kenyon Martin accused Lin of appropriating African-American culture in a since-deleted social media post, after which Lin pointed out that Martin has multiple Chinese characters tattooed on his body.[176] David Diamante, an American boxing ring announcer of Italian American heritage, sports prominent dreadlocks. Dreadlocks can be formed through several methods. Very curly hair forms single-strand knots that can naturally entangle into dreadlocks.[177] For other types of hair, methods used to create dreadlocks include crochet hooks and backcombing. Dreadlocks should not be confused with matting, which results from the unintentional neglect and damage of any type of hair.[178] On 3 July 2019, California became the first US state to prohibit discrimination over natural hair.
Governor Gavin Newsom signed the CROWN Act into law, banning employers and schools from discriminating against hairstyles such as dreadlocks, braids, afros, and twists.[179] Likewise, later in 2019, Assembly Bill 07797 became law in New York state; it "prohibits race discrimination based on natural hair or hairstyles".[180][181] Scholars call discrimination based on hair "hairism". Despite the passage of the CROWN Act, hairism continues, with some Black people being fired from work or not hired because of their dreadlocks.[182][183][184] According to the CROWN 2023 Workplace Research Study, sixty-six percent of Black women change their hairstyle for job interviews, and twenty-five percent of Black women said they were denied a job because of their hairstyle.[185] The CROWN Act was passed to challenge the idea that Black people must emulate other hairstyles to be accepted in public and educational spaces.[186] As of 2023, 24 states have passed the CROWN Act. July 3 is recognized as National CROWN Day, also called Black Hair Independence Day.[187][188][189] The Perception Institute conducted a "Good Hair Study" using images of Black women wearing natural styles in locs, afros, twists, and other Black hairstyles. The Perception Institute is "a consortium of researchers, advocates and strategists" that uses psychological and emotional test studies to make participants aware of their racial biases. A Black-owned hair supply company, Shea Moisture, partnered with the Perception Institute to conduct the study. The tests were done to reduce hair- and racially-based discrimination in education, civil justice, and law enforcement settings. The study used an implicit-association test on 4,000 participants of all racial backgrounds and showed that most of the participants had negative views about natural Black hairstyles. The study also showed Millennials were the most accepting of kinky hair texture on Black people.
"Noliwe Rooks, a Cornell University professor who writes about the intersection of beauty and race, says for some reason, natural Black hair just frightens some White people."[190][191] In September 2016, a lawsuit was filed by the Equal Employment Opportunity Commission against the company Catastrophe Management Solutions, located in Mobile, Alabama. The court case ended with the decision that it was not a discriminatory practice for the company to refuse to hire an African American because they wore dreadlocks.[192] In some Texas public schools, dreadlocks are prohibited, especially for male students, because long braided hair is considered unmasculine according to Western standards of masculinity, which define masculine hair as "short, tidy hair." Black and Native American boys are stereotyped and receive negative treatment and negative labeling for wearing dreadlocks, cornrows, and long braids. Non-white students are prohibited from practicing the traditional hairstyles that are a part of their culture.[193][194] The policing of Black hairstyles also occurs in London, England. Black students in England are prohibited from wearing natural hairstyles such as dreadlocks, afros, braids, twists, and other African and Black hairstyles. Black students are suspended from school, are stereotyped, and receive negative treatment from teachers.[195] In Midrand, north of Johannesburg in South Africa, a Black girl was kicked out of school for wearing her hair in a natural dreadlock style.[citation needed] Hair and dreadlock discrimination is experienced by people of color all over the world who do not conform to Western standards of beauty.[196][197] At Pretoria High School for Girls in Gauteng province in South Africa, Black girls are discriminated against for wearing African hairstyles and are forced to straighten their hair.[198] In 2017, the United States Army lifted its ban on dreadlocks.
In the army, Black women can now wear braids and locs on the condition that they are groomed, clean, and meet the length requirements.[199] From slavery into the present day, the policing of Black women's hair continues to be controlled by some institutions and people. Even when Black women wear locs that are clean and well kept, some people do not consider locs to be feminine and professional because of the natural kinky texture of Black hair.[200][201] Four African countries have approved the wearing of dreadlocks in their courts: Kenya, Malawi, South Africa, and Zimbabwe. However, hairism continues despite the approval. Although locked hairstyles are a traditional practice on the African continent, some Africans disapprove of the hairstyle because of cultural taboos or pressure from Europeans in African schools and local African governments to conform to Eurocentric standards of beauty.[202][203] According to a 2011 article from The New Republic, Black men who wear locs are racially profiled, watched more closely by the police, and more often believed to be "thugs" or involved in gangs and violent crimes than Black men who do not wear dreadlocks.[204] On 10 December 2010, the Guinness Book of World Records rested its "longest dreadlocks" category after investigating its first and only female title holder, Asha Mandela, with this official statement: Following a review of our guidelines for the longest dreadlock, we have taken expert advice and made the decision to rest this category. The reason for this is that it is difficult, and in many cases impossible, to measure the authenticity of the locks due to expert methods employed in the attachment of hair extensions/re-attachment of broken-off dreadlocks. Effectively the dreadlock can become an extension and therefore impossible to adjudicate accurately. It is for this reason Guinness World Records has decided to rest the category and will no longer be monitoring the category for longest dreadlock.[205]
https://en.wikipedia.org/wiki/Dreadlock
A bargaining impasse (French pronunciation: [ɛ̃pas]) occurs when the two sides negotiating an agreement are unable to reach an agreement and become deadlocked. An impasse is almost invariably mutually harmful, either as a result of direct action which may be taken, such as a strike in employment negotiation or sanctions/military action in international relations, or simply due to the resulting delay in negotiating a mutually beneficial agreement. The word impasse may also refer to any situation in which no progress can be made. Impasses provide opportunities for problem solving to produce an insight that leads to progress. Impasse can provide a credible signal that a party's position is genuine and not merely an ambit claim. Impasse may also arise if parties suffer from self-serving bias. Most disputes arise in situations where facts can be interpreted in multiple ways, and if parties interpret the facts to their own benefit, they may be unable to accept the opposing party's claim as reasonable. They may believe the other side is either bluffing or acting unfairly and deserves to be "punished". As bargaining impasse is mutually harmful, it may be beneficial for the parties to accept binding arbitration or mediation to settle their dispute, or the state may impose such a solution. Indeed, compulsory arbitration following impasse is a common feature of industrial relations law in the United States[1] and elsewhere. The word impasse is taken from the French impasse.
https://en.wikipedia.org/wiki/Impasse
In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior. Thus, the parts of the program where the shared resource is accessed need to be protected in ways that avoid the concurrent access. One way to do so is known as a critical section or critical region. This protected section cannot be entered by more than one process or thread at a time; others are suspended until the first leaves the critical section. Typically, the critical section accesses a shared resource, such as a data structure, peripheral device, or network connection, that would not operate correctly in the context of multiple concurrent accesses.[1] Different pieces of code or different processes may use the same variable or other resources that must be read or written but whose results depend on the order in which the actions occur. For example, if a variable x is to be read by process A while process B must write to the variable x at the same time, process A might get either the old or the new value of x. In cases where a locking mechanism with finer granularity is not needed, a critical section is important. In the above case, if A needs to read the updated value of x, executing processes A and B at the same time may not give the required result. To prevent this, the variable x is protected by a critical section. First, B gets access to the section. Once B finishes writing the value, A gets access to the critical section, and the variable x can be read. By carefully controlling which variables are modified inside and outside the critical section, concurrent access to the shared variable is prevented. A critical section is typically used when a multi-threaded program must update multiple related variables without a separate thread making conflicting changes to that data. In a related situation, a critical section may be used to ensure that a shared resource, for example a printer, can only be accessed by one process at a time.
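The protection of a shared variable described above can be sketched in Python with a lock guarding the critical section. This is an illustrative sketch, not code from the article; the names (counter, increment) are assumptions.

```python
# A minimal sketch of a critical section in Python, using threading.Lock.
# Without the lock, the read-modify-write of the shared counter could interleave
# between threads; with it, only one thread at a time enters the section.
import threading

counter = 0
lock = threading.Lock()

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        # Everything inside "with lock" is the critical section.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: every increment took effect exactly once
```

The `with lock:` block plays the role of the entry and exit protocol: the lock is acquired on entry and released on exit, even if the body raises an exception.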
The implementation of critical sections varies among operating systems. A critical section will usually terminate in finite time,[2] and a thread, task, or process must wait only a bounded time to enter it (bounded waiting). To ensure exclusive use of critical sections, some synchronization mechanism is required at the entry and exit of the program. A critical section is a piece of a program that requires mutual exclusion of access. As shown in the figure,[3] in the case of mutual exclusion (mutex), one thread blocks a critical section by using locking techniques when it needs to access the shared resource, and other threads must wait their turn to enter the section. This prevents conflicts when two or more threads share the same memory space and want to access a common resource.[2] The simplest method to prevent any change of processor control inside the critical section is implementing a semaphore. In uniprocessor systems, this can be done by disabling interrupts on entry into the critical section, avoiding system calls that can cause a context switch while inside the section, and restoring interrupts to their previous state on exit. With this implementation, any execution thread entering any critical section in the system will prevent any other thread, including an interrupt, from being granted processing time on the CPU until the original thread leaves its critical section. This brute-force approach can be improved by using semaphores. To enter a critical section, a thread must obtain a semaphore, which it releases on leaving the section. Other threads are prevented from entering the critical section at the same time as the original thread, but are free to gain control of the CPU and execute other code, including other critical sections that are protected by different semaphores.
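The semaphore approach described above can be sketched as follows in Python. A binary semaphore (initial value 1) guards one section; other sections could be guarded by separate semaphores independently. All names here are illustrative.

```python
# Sketch of semaphore-guarded critical sections (threading.Semaphore).
# A thread must obtain the semaphore to enter the section and releases it on
# leaving; other threads remain free to run unrelated code meanwhile.
import threading

sem = threading.Semaphore(1)   # binary semaphore: at most one holder at a time
shared = []

def worker(item: str) -> None:
    sem.acquire()              # obtain the semaphore to enter the section
    try:
        shared.append(item)    # critical section: mutate shared state
    finally:
        sem.release()          # release on leaving, even if an error occurred

threads = [threading.Thread(target=worker, args=(f"item-{i}",)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 8: each worker entered the section exactly once
```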
Semaphore locking also has a time limit to prevent a deadlock condition in which a lock is acquired by a single process for an infinite time, stalling the other processes that need to use the shared resource protected by the critical section. Typically, critical sections prevent thread and process migration between processors and the preemption of processes and threads by interrupts and by other processes and threads. Critical sections often allow nesting. Nesting allows multiple critical sections to be entered and exited at little cost. If the scheduler interrupts the current process or thread in a critical section, the scheduler will either allow the currently executing process or thread to run to completion of the critical section, or it will schedule the process or thread for another complete quantum. The scheduler will not migrate the process or thread to another processor, and it will not schedule another process or thread to run while the current process or thread is in a critical section. Similarly, if an interrupt occurs in a critical section, the interrupt information is recorded for future processing, and execution is returned to the process or thread in the critical section.[citation needed] Once the critical section is exited, and in some cases the scheduled quantum completed, the pending interrupt will be executed. The concept of the scheduling quantum applies to "round-robin" and similar scheduling policies. Since critical sections may execute only on the processor on which they are entered, synchronization is only required within the executing processor. This allows critical sections to be entered and exited at almost no cost. No inter-processor synchronization is required; only instruction-stream synchronization is needed.[4] Most processors provide the required amount of synchronization by interrupting the current execution state. This allows critical sections in most cases to be nothing more than a per-processor count of critical sections entered.
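The nesting of critical sections mentioned above can be sketched in user-level code with a reentrant lock. This is a simplified illustration (the text is describing kernel-level nesting); the names and the bank-balance scenario are assumptions.

```python
# Sketch of nested critical sections using a reentrant lock (threading.RLock).
# An RLock may be re-acquired by the thread that already holds it, so one locked
# operation can call another locked operation without deadlocking itself.
import threading

rlock = threading.RLock()
balance = 100

def withdraw(amount: int) -> None:
    global balance
    with rlock:                # inner critical section
        balance -= amount

def transfer_out(amount: int) -> None:
    with rlock:                # outer critical section
        withdraw(amount)       # nested entry re-acquires rlock without blocking

transfer_out(30)
print(balance)  # 70
```

With a plain `threading.Lock`, the nested `with rlock:` inside `withdraw` would deadlock; the reentrant lock is what makes entering and exiting nested sections cheap.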
Performance enhancements include executing pending interrupts at the exit of all critical sections and allowing the scheduler to run at the exit of all critical sections. Furthermore, pending interrupts may be transferred to other processors for execution. Critical sections should not be used as a long-lasting locking primitive. They should be kept short enough that they can be entered, executed, and exited without any interrupts occurring from the hardware or the scheduler. Kernel-level critical sections are the basis of the software lockout issue. In parallel programming, the code is divided into threads. The read-write conflicting variables are split between threads, and each thread has a copy of them. Data structures such as linked lists, trees, and hash tables have data variables that are linked and cannot be split between threads; hence, implementing parallelism is very difficult.[5] To improve the efficiency of implementing data structures, multiple operations such as insertion, deletion, and search can be executed in parallel. While performing these operations, there may be scenarios where the same element is being searched by one thread and being deleted by another. In such cases, the output may be erroneous: the thread searching for the element may have a hit, whereas the other thread may subsequently delete it. These scenarios will cause issues in the running program by providing false data. To prevent this, one method is to keep the entire data structure under a critical section so that only one operation is handled at a time. Another method is to lock the node in use under a critical section, so that other operations do not use the same node. Using critical sections thus ensures that the code provides the expected outputs.[5] Critical sections also occur in code which manipulates external peripherals, such as I/O devices. The registers of a peripheral must be programmed with certain values in a certain sequence.
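The first of the two protection strategies above, keeping the entire data structure under one critical section, can be sketched with a linked list. This is an illustrative sketch, not from the article; per-node (fine-grained) locking would instead place a lock in each node, at the cost of more complex hand-over-hand acquisition.

```python
# Coarse-grained locking: one lock guards the whole linked list, so a search
# can never observe a node mid-deletion. All names are illustrative.
import threading

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class CoarseLockedList:
    """Entire structure under one critical section per operation."""
    def __init__(self):
        self.head = None
        self.lock = threading.Lock()

    def insert(self, value) -> None:
        with self.lock:                      # whole-structure critical section
            self.head = Node(value, self.head)

    def search(self, value) -> bool:
        with self.lock:                      # a concurrent delete cannot race this scan
            node = self.head
            while node is not None:
                if node.value == value:
                    return True
                node = node.next
            return False

    def delete(self, value) -> None:
        with self.lock:
            prev, node = None, self.head
            while node is not None:
                if node.value == value:
                    if prev is None:
                        self.head = node.next
                    else:
                        prev.next = node.next
                    return
                prev, node = node, node.next

lst = CoarseLockedList()
for v in (1, 2, 3):
    lst.insert(v)
lst.delete(2)
print(lst.search(2), lst.search(3))  # False True
```

The simplicity comes at the price of serializing all operations; the per-node approach allows more parallelism but is considerably harder to get right.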
If two or more processes control a device simultaneously, neither process will have the device in the state it requires and incorrect behavior will ensue. When a complex unit of information must be produced on an output device by issuing multiple output operations, exclusive access is required so that another process does not corrupt the datum by interleaving its own bits of output. In the input direction, exclusive access is required when reading a complex datum via multiple separate input operations. This prevents another process from consuming some of the pieces, causing corruption. Storage devices provide a form of memory. The concept of critical sections is equally relevant to storage devices as to shared data structures in main memory. A process which performs multiple access or update operations on a file is executing a critical section that must be guarded with an appropriate file locking mechanism.
https://en.wikipedia.org/wiki/Critical_section
In software engineering, double-checked locking (also known as "double-checked locking optimization"[1]) is a software design pattern used to reduce the overhead of acquiring a lock by testing the locking criterion (the "lock hint") before acquiring the lock. Locking occurs only if the locking criterion check indicates that locking is required. The original form of the pattern, appearing in Pattern Languages of Program Design 3,[2] has data races, depending on the memory model in use, and it is hard to get right. Some consider it to be an anti-pattern.[3] There are valid forms of the pattern, including the use of the volatile keyword in Java and explicit memory barriers in C++.[4] The pattern is typically used to reduce locking overhead when implementing "lazy initialization" in a multi-threaded environment, especially as part of the Singleton pattern. Lazy initialization avoids initializing a value until the first time it is accessed. Consider, for example, this code segment in the Java programming language:[4] The problem is that this does not work when using multiple threads. A lock must be obtained in case two threads call getHelper() simultaneously. Otherwise, either they may both try to create the object at the same time, or one may wind up getting a reference to an incompletely initialized object. Synchronizing with a lock can fix this, as is shown in the following example: This is correct and will most likely have sufficient performance. However, the first call to getHelper() will create the object, and only the few threads trying to access it during that time need to be synchronized; after that, all calls just get a reference to the member variable. Since synchronizing a method could in some extreme cases decrease performance by a factor of 100 or more,[5] the overhead of acquiring and releasing a lock every time this method is called seems unnecessary: once the initialization has been completed, acquiring and releasing the locks would appear unnecessary.
Many programmers, including the authors of the double-checked locking design pattern, have attempted to optimize this situation in the following manner: Intuitively, this algorithm is an efficient solution to the problem. But if the pattern is not written carefully, it will have a data race. For example, consider the following sequence of events: Most runtimes have memory barriers or other methods for managing memory visibility across execution units. Without a detailed understanding of the language's behavior in this area, the algorithm is difficult to implement correctly. One of the dangers of using double-checked locking is that even a naive implementation will appear to work most of the time: it is not easy to distinguish a correct implementation of the technique from one that has subtle problems. Depending on the compiler, the interleaving of threads by the scheduler, and the nature of other concurrent system activity, failures resulting from an incorrect implementation of double-checked locking may only occur intermittently. Reproducing the failures can be difficult. For the singleton pattern in C++11, double-checked locking is not needed: if control enters the declaration of a static local variable concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization. C++11 and beyond also provide a built-in double-checked locking pattern in the form of std::once_flag and std::call_once: If one truly wishes to use the double-checked idiom instead of the trivially working example above (for instance, because Visual Studio before the 2015 release did not implement the C++11 standard's language about concurrent initialization quoted above[7]), one needs to use acquire and release fences:[8] pthread_once() must be used to initialize library (or sub-module) code when its API does not have a dedicated initialization procedure required to be called in single-threaded mode. As of J2SE 5.0, the volatile keyword is defined to create a memory barrier.
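The structure of the double-checked idiom can be sketched in Python. Note this is only an illustration of the pattern's shape: the article's original examples are in Java, where the field must be volatile to be correct (and C++ needs fences, as discussed above); the names Helper and get_helper here mirror the Java example but are assumptions.

```python
# Sketch of double-checked locking for lazy singleton initialization.
# First check avoids the lock on the common fast path; the second check,
# performed under the lock, resolves the race between contending initializers.
import threading

class Helper:
    def __init__(self):
        self.ready = True

_helper = None
_lock = threading.Lock()

def get_helper() -> Helper:
    global _helper
    if _helper is None:            # first check, without the lock (lock hint)
        with _lock:                # lock only when the instance may be missing
            if _helper is None:    # second check, now race-free
                _helper = Helper()
    return _helper

a = get_helper()
b = get_helper()
print(a is b)  # True: both callers observe the same instance
```

In languages with weak memory models, the unsynchronized first check is exactly where the data race described above hides; the sketch relies on Python-level guarantees that Java and C++ do not provide without volatile/fences.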
This allows a solution that ensures that multiple threads handle the singleton instance correctly. This new idiom is described in [3] and [4]. Note the local variable localRef, which seems unnecessary. The effect of this is that in cases where helper is already initialized (i.e., most of the time), the volatile field is only accessed once (due to "return localRef;" instead of "return helper;"), which can improve the method's overall performance by as much as 40 percent.[9] Java 9 introduced the VarHandle class, which allows the use of relaxed atomics to access fields, giving somewhat faster reads on machines with weak memory models, at the cost of more difficult mechanics and loss of sequential consistency (field accesses no longer participate in the synchronization order, the global order of accesses to volatile fields).[10] If the helper object is static (one per class loader), an alternative is the initialization-on-demand holder idiom[11] (see Listing 16.6[12] from the previously cited text). This relies on the fact that nested classes are not loaded until they are referenced. The semantics of final fields in Java 5 can be employed to safely publish the helper object without using volatile:[13] The local variable tempWrapper is required for correctness: simply using helperWrapper for both null checks and the return statement could fail due to read reordering allowed under the Java Memory Model.[14] Performance of this implementation is not necessarily better than the volatile implementation. In .NET Framework 4.0, the Lazy<T> class was introduced, which internally uses double-checked locking by default (ExecutionAndPublication mode) to store either the exception that was thrown during construction, or the result of the function that was passed to Lazy<T>:[15]
https://en.wikipedia.org/wiki/Double-checked_locking
File locking is a mechanism that restricts access to a computer file, or to a region of a file, by allowing only one user or process to modify or delete it at a specific time, and by preventing reading of the file while it is being modified or deleted. Systems implement locking to prevent an interceding update scenario, which is an example of a race condition, by enforcing the serialization of update processes to any given file. The following example illustrates the interceding update problem: Most operating systems support the concept of record locking, which means that individual records within any given file may be locked, thereby increasing the number of concurrent update processes. Database maintenance uses file locking, whereby it can serialize access to the entire physical file underlying a database. Although this does prevent any other process from accessing the file, it can be more efficient than individually locking many regions in the file by removing the overhead of acquiring and releasing each lock. Poor use of file locks, like any computer lock, can result in poor performance or in deadlocks. File locking may also refer to additional security applied by a computer user, either by using Windows security, NTFS permissions, or by installing third-party file locking software. IBM pioneered file locking in 1963 for use in mainframe computers using OS/360, where it was termed "exclusive control".[1] Microsoft Windows uses three distinct mechanisms to manage access to shared files: Windows inherits the semantics of share-access controls from the MS-DOS system, where sharing was introduced in MS-DOS 3.3. Thus, an application must explicitly allow sharing when it opens a file; otherwise it has exclusive read, write, and delete access to the file until closed (other types of access, such as those to retrieve the attributes of a file, are allowed). For a file opened with shared access, applications may then use byte-range locking to control access to specific regions of the file.
Such byte-range locks specify a region of the file (offset and length) and the type of lock (shared or exclusive). Note that the region of the file being locked is not required to have data within the file, and applications sometimes exploit this ability to implement their functionality. For applications that use the file read/write APIs in Windows, byte-range locks are enforced (also referred to as mandatory locks) by the file systems that execute within Windows. For applications that use the file mapping APIs in Windows, byte-range locks are not enforced (also referred to as advisory locks). Byte-range locking may also have other side effects on the Windows system. For example, the Windows file-sharing mechanism will typically disable client-side caching of a file for all clients when byte-range locks are used by any client. The client will observe slower access because read and write operations must be sent to the server where the file is stored. Improper error handling in an application program can lead to a scenario where a file is locked (either using "share" access or with byte-range file locking) and cannot be accessed by other applications. If so, the user may be able to restore file access by manually terminating the malfunctioning program. This is typically done through the Task Manager utility. The sharing mode (dwShareMode) parameter of the CreateFile[2] function (used to open files) determines file sharing. The sharing mode can be specified to allow sharing the file for read, write, or delete access, or any combination of these. Subsequent attempts to open the file must be compatible with all previously granted sharing access to the file. When the file is closed, sharing-access restrictions are adjusted to remove the restrictions imposed by that specific file open. The byte-range locking type is determined by the dwFlags parameter in the LockFileEx[4] function used to lock a region of a file.
The Windows API function LockFile[5] can also be used; it acquires an exclusive lock on the region of the file. Any file containing an executable program that is currently running on the computer system as a program (e.g. an EXE, COM, DLL, CPL, or other binary program file format) is normally locked by the operating system itself, preventing any application from modifying or deleting it. Any attempt to do so will be denied with a sharing violation error, despite the fact that the program file is not opened by any application. However, some access is still allowed. For example, a running application file can be renamed or copied (read) even while executing. Files are accessed by applications in Windows by using file handles. These file handles can be explored with the Process Explorer utility, which can also be used to force-close handles without terminating the application holding them. This can cause undefined behavior, since the program will receive an unexpected error when using the force-closed handle and may even operate on an unexpected file, since the handle number may be recycled.[citation needed] Microsoft Windows XP and Server 2003 editions introduced volume snapshot (VSS) capability to NTFS, allowing open files to be accessed by backup software despite any exclusive locks. However, unless software is rewritten to specifically support this feature, the snapshot will be crash-consistent only, while properly supported applications can assist the operating system in creating "transactionally consistent" snapshots. Other commercial software for accessing locked files under Windows includes File Access Manager and Open File Manager. These work by installing their own drivers to access the files in kernel mode. Unix-like operating systems (including Linux and Apple's macOS) do not normally automatically lock open files.
Several kinds of file-locking mechanisms are available in different flavors of Unix, and many operating systems support more than one kind for compatibility. The most common mechanism is fcntl. Two other such mechanisms are flock(2) and lockf(3), each of which may be implemented atop fcntl or may be implemented separately from fcntl. Although some types of locks can be configured to be mandatory, file locks under Unix are by default advisory. This means that cooperating processes may use locks to coordinate access to a file among themselves, but uncooperative processes are also free to ignore locks and access the file in any way they choose. In other words, file locks lock out other file lockers only, not I/O. Two kinds of locks are offered: shared locks and exclusive locks. In the case of fcntl, different kinds of locks may be applied to different sections (byte ranges) of a file, or else to the whole file. Shared locks can be held by multiple processes at the same time, but an exclusive lock can only be held by one process and cannot coexist with a shared lock. To acquire a shared lock, a process must wait until no processes hold any exclusive locks. To acquire an exclusive lock, a process must wait until no processes hold either kind of lock. Unlike locks created by fcntl, those created by flock are preserved across forks, making them useful in forking servers. It is therefore possible for more than one process to hold an exclusive lock on the same file, provided these processes share a filial relationship and the exclusive lock was initially created in a single process before being duplicated across a fork. Shared locks are sometimes called "read locks" and exclusive locks are sometimes called "write locks". However, because locks on Unix are advisory, this isn't enforced. Thus it is possible for a database to have a concept of "shared writes" vs.
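The advisory, exclusive-lock behavior described above can be sketched on a Unix-like system with Python's fcntl module. Two independent opens of the same file behave like two cooperating processes (on Linux, flock treats separately opened descriptors independently even within one process); the file path is illustrative.

```python
# Sketch of advisory exclusive locking with flock (Unix-like systems only).
# Once one descriptor holds an exclusive lock, a non-blocking attempt on a
# second, independently opened descriptor is denied instead of waiting.
import fcntl
import tempfile

path = tempfile.NamedTemporaryFile(delete=False).name

f1 = open(path, "w")
f2 = open(path, "w")

fcntl.flock(f1, fcntl.LOCK_EX)   # first "process" takes an exclusive lock

blocked = False
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking attempt
except BlockingIOError:
    blocked = True               # denied: the advisory lock is held elsewhere

fcntl.flock(f1, fcntl.LOCK_UN)   # release the lock
fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)      # now succeeds
fcntl.flock(f2, fcntl.LOCK_UN)

print(blocked)  # True
f1.close()
f2.close()
```

Because the locks are advisory, nothing stops a third party from simply writing to the file without calling flock at all: the lock only locks out other lockers.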
"exclusive writes"; for example, changing a field in place may be permitted under shared access, whereas garbage-collecting and rewriting the database may require exclusive access. File locks apply to the actual file, rather than the file name. This is important since Unix allows multiple names to refer to the same file. Together with non-mandatory locking, this leads to great flexibility in accessing files from multiple processes. On the other hand, the cooperative locking approach can lead to problems when a process writes to a file without obeying file locks set by other processes. For this reason, some Unix-like operating systems also offer limited support for mandatory locking.[6] On such systems, a file whose setgid bit is on but whose group execution bit is off when that file is opened will be subject to automatic mandatory locking if the underlying filesystem supports it. However, non-local NFS partitions tend to disregard this bit.[7] If a file is subject to mandatory locking, attempts to read from a region that is locked with an exclusive lock, or to write to a region that is locked with a shared or exclusive lock, will block until the lock is released. This strategy first originated in System V and can be seen today in the Solaris, HP-UX, and Linux operating systems. It is not part of POSIX, however, and BSD-derived operating systems such as FreeBSD, OpenBSD, NetBSD, and Apple's macOS do not support it.[8] Linux also supports mandatory locking through the special -o mand parameter for file system mounting (mount(8)), but this is rarely used. Some Unix-like operating systems prevent attempts to open the executable file of a running program for writing; this is a third form of locking, separate from those provided by fcntl and flock. More than one process can hold an exclusive flock on a given file if the exclusive lock was duplicated across a later fork. This simplifies coding for network servers and helps prevent race conditions, but can be confusing to the unaware.
Mandatory locks have no effect on the unlink system call. Consequently, certain programs may, effectively, circumvent mandatory locking. Stevens & Rago (2005) observed that the ed editor indeed did that.[9] Whether and how flock locks work on network filesystems, such as NFS, is implementation dependent. On BSD systems, flock calls on a file descriptor open to a file on an NFS-mounted partition are successful no-ops. On Linux prior to 2.6.12, flock calls on NFS files would act only locally. Kernel 2.6.12 and above implement flock calls on NFS files using POSIX byte-range locks. These locks will be visible to other NFS clients that implement fcntl-style POSIX locks, but invisible to those that do not.[10] Lock upgrades and downgrades release the old lock before applying the new lock. If an application downgrades an exclusive lock to a shared lock while another application is blocked waiting for an exclusive lock, the latter application may get the exclusive lock and lock the first application out. This means that lock downgrades can block, which may be counter-intuitive. All fcntl locks associated with a file for a given process are removed when any file descriptor for that file is closed by that process, even if a lock was never requested for that file descriptor. Also, fcntl locks are not inherited by a child process. The fcntl close semantics are particularly troublesome for applications that call subroutine libraries that may access files. Neither of these "bugs" occurs using real flock-style locks. Preservation of the lock status on open file descriptors passed to another process using a Unix domain socket is implementation dependent. One source of lock failure occurs when buffered I/O has buffers assigned in the user's local workspace, rather than in an operating system buffer pool. fread and fwrite are commonly used to do buffered I/O, and once a section of a file is read, another attempt to read that same section will, most likely, obtain the data from the local buffer.
The problem is that another user attached to the same file has their own local buffers, and the same thing is happening for them. An fwrite of data obtained from the buffer by fread will not be reading the data from the file itself, and some other user could have changed it. Both could use flock to ensure exclusive access, which prevents simultaneous writes, but since the reads are reading from the buffer and not the file itself, any data changed by user #1 can be lost by user #2 (over-written). The best solution to this problem is to use unbuffered I/O (read and write) with flock, which also means using lseek instead of fseek and ftell. Of course, adjustments will have to be made for function parameters and results returned. Generally speaking, buffered I/O is unsafe when used with shared files. In AmigaOS, a lock on a file (or directory) can be acquired using the Lock function (in the dos.library). A lock can be shared (other processes can read the file/directory, but can't modify or delete it), or exclusive, so that only the process which successfully acquires the lock can access or modify the object. The lock is on the whole object and not part of it. The lock must be released with the UnLock function: unlike in Unix, the operating system does not implicitly unlock the object when the process terminates. Shell scripts and other programs often use a strategy similar to the use of file locking: creation of lock files, which are files whose contents are irrelevant (although often one will find the process identifier of the holder of the lock in the file) and whose sole purpose is to signal by their presence that some resource is locked. A lock file is often the best approach if the resource to be controlled is not a regular file at all, so that file-locking methods do not apply.
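The recommended pattern, unbuffered I/O combined with flock, can be sketched as follows (Python on Unix; the counter file and worker count are illustrative). Two hypothetical worker processes increment a shared counter file; each re-reads the real file contents under an exclusive lock, so no update is lost to a stale buffer.

```python
import fcntl
import multiprocessing
import os
import tempfile

def increment(path, times):
    """Read-modify-write a counter file safely: flock provides mutual
    exclusion, and unbuffered os.read/os.write ensure the real file
    contents, not a stale user-space buffer, are read and written."""
    for _ in range(times):
        fd = os.open(path, os.O_RDWR)
        fcntl.flock(fd, fcntl.LOCK_EX)   # advisory: all writers must do this
        value = int(os.read(fd, 32))     # unbuffered read of the actual file
        os.lseek(fd, 0, os.SEEK_SET)
        os.write(fd, str(value + 1).encode().ljust(32))
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

path = tempfile.NamedTemporaryFile(delete=False).name
with open(path, "w") as f:
    f.write("0".ljust(32))

workers = [multiprocessing.Process(target=increment, args=(path, 50))
           for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

with open(path) as f:
    total = int(f.read())   # no increment was lost
```

With buffered fread/fwrite in place of os.read/os.write, increments could silently overwrite each other even with the lock held, which is the failure mode described above.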
For example, a lock file might govern access to a set of related resources, such as several different files, directories, a group of disk partitions, or selected access to higher level protocols like servers or database connections. When using lock files, care must be taken to ensure that operations are atomic. To obtain a lock, the process must verify that the lock file does not exist and then create it, whilst preventing another process from creating it in the meantime. Various methods exist to do this, such as creating the lock file with a system call that fails if the file already exists. Lock files are often named with a tilde (~) prefixed to the name of the file they are locking, or a duplicate of the full file name suffixed with .LCK. If they are locking a resource other than a file, they may be named more arbitrarily. An unlocker is a utility used to determine what process is locking a file; it displays a list of processes as well as choices on what to do with the process (kill task, unlock, etc.) along with a list of file options such as delete or rename. Its purpose is to remove improper or stale file locks, which often arise from anomalous situations, such as crashed or hung processes, that leave file locks persisting even though the owning process has died. On some Unix-like systems, utilities such as fstat and lockf can be used to inspect the state of file locks by process, by filename, or both.[citation needed] On Windows systems, if a file is locked, it's possible to schedule its moving or deletion to be performed on the next reboot. This approach is typically used by installers to replace locked system files. In version control systems, file locking is used to prevent two users from changing the same file version in parallel, where the second user to save would otherwise overwrite the first user's changes. This is implemented by marking locked files as read-only in the file system.
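One common atomic method is to create the lock file with the O_CREAT|O_EXCL flags, which make "create only if it does not already exist" a single atomic kernel operation, closing the check-then-create race. A minimal Python sketch (the file name and helper names are illustrative):

```python
import os
import tempfile

def acquire_lockfile(path):
    """Try to take the lock. O_CREAT | O_EXCL fails atomically if the
    file already exists, so two processes cannot both succeed."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False                         # someone else holds the lock
    os.write(fd, str(os.getpid()).encode())  # conventionally record the holder's PID
    os.close(fd)
    return True

def release_lockfile(path):
    os.remove(path)

# Demo: second acquisition fails while the lock file exists.
lock = os.path.join(tempfile.mkdtemp(), "myresource.LCK")
first = acquire_lockfile(lock)
second = acquire_lockfile(lock)
release_lockfile(lock)
third = acquire_lockfile(lock)
release_lockfile(lock)
```

Storing the PID lets an administrator (or an unlocker utility) detect a stale lock whose owning process has died.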
A user wanting to change the file performs an unlock (also called checkout) operation, and until a check-in (store) operation is done, or the lock is reverted, nobody else is allowed to unlock the file.
https://en.wikipedia.org/wiki/File_locking
In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread;[1] for some operations, these algorithms provide a useful alternative to traditional blocking implementations. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.[2] The word "non-blocking" was traditionally used to describe telecommunications networks that could route a connection through a set of relays "without having to re-arrange existing calls"[This quote needs a citation] (see Clos network). Also, if the telephone exchange "is not defective, it can always make the connection"[This quote needs a citation] (see nonblocking minimal spanning switch). The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources. Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free. Blocking a thread can be undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything: if the blocked thread had been performing a high-priority or real-time task, it would be highly undesirable to halt its progress. Other problems are less obvious. For example, certain interactions between locks can lead to error conditions such as deadlock, livelock, and priority inversion.
Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities for parallelism, and fine-grained locking, which requires more careful design, increases locking overhead, and is more prone to bugs. Unlike blocking algorithms, non-blocking algorithms do not suffer from these downsides, and in addition are safe for use in interrupt handlers: even though the preempted thread cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in an interrupt handler, as the preempted thread may be the one holding the lock. While this can be rectified by masking interrupt requests during the critical section, this requires the code in the critical section to have bounded (and preferably short) running time, or excessive interrupt latency may be observed.[3] A lock-free data structure can be used to improve performance: it increases the amount of time spent in parallel execution rather than serial execution, improving performance on a multi-core processor, because access to the shared data structure does not need to be serialized to stay coherent.[4] With few exceptions, non-blocking algorithms use atomic read-modify-write primitives that the hardware must provide, the most notable of which is compare-and-swap (CAS). Critical sections are almost always implemented using standard interfaces over these primitives (in the general case, critical sections will be blocking, even when implemented with these primitives). In the 1990s, all non-blocking algorithms had to be written "natively" with the underlying primitives to achieve acceptable performance. However, the emerging field of software transactional memory promises standard abstractions for writing efficient non-blocking code.[5][6] Much research has also been done in providing basic data structures such as stacks, queues, sets, and hash tables.
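The CAS retry loop at the heart of most lock-free algorithms looks roughly as follows. Python exposes no hardware CAS, so this sketch simulates one word of atomic memory with an internal lock; only the retry-loop pattern, not the simulation itself, reflects how real lock-free code behaves.

```python
import threading

class Word:
    """A machine word supporting compare-and-swap. Real CAS is a single
    atomic hardware instruction; the lock inside this class merely
    simulates that atomicity so the pattern can be shown in Python."""
    def __init__(self, value=0):
        self._value = value
        self._guard = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self._value == expected:
                self._value = new
                return True
            return False

def increment(word):
    # The canonical lock-free pattern: read, compute, try to publish,
    # and retry if another thread's CAS landed first.
    while True:
        old = word.load()
        if word.compare_and_swap(old, old + 1):
            return

counter = Word(0)
threads = [threading.Thread(target=lambda: [increment(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every failed CAS implies that some other thread's CAS succeeded, which is exactly the system-wide progress guarantee that defines lock-freedom.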
These allow programs to easily exchange data between threads asynchronously. Additionally, some non-blocking data structures are weak enough to be implemented without special atomic primitives. Several libraries internally use lock-free techniques,[7][8][9] but it is difficult to write lock-free code that is correct.[10][11][12][13] Non-blocking algorithms generally involve a series of read, read-modify-write, and write instructions in a carefully designed order. Optimizing compilers can aggressively re-arrange operations. Even when they don't, many modern CPUs often re-arrange such operations (they have a "weak consistency model"), unless a memory barrier is used to tell the CPU not to reorder. C++11 programmers can use std::atomic in <atomic>, and C11 programmers can use <stdatomic.h>, both of which supply types and functions that tell the compiler not to re-arrange such instructions, and to insert the appropriate memory barriers.[14] Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with starvation-freedom. An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes.[15] This property is critical for real-time systems and is always nice to have as long as the performance cost is not too high. It was shown in the 1980s[16] that all algorithms can be implemented wait-free, and many transformations from serial code, called universal constructions, have been demonstrated. However, the resulting performance does not in general match even naïve blocking designs. Several papers have since improved the performance of universal constructions, but still, their performance is far below blocking designs. Several papers have investigated the difficulty of creating wait-free algorithms.
For example, it has been shown[17] that the widely available atomic conditional primitives, CAS and LL/SC, cannot provide starvation-free implementations of many common data structures without memory costs growing linearly in the number of threads. However, these lower bounds do not present a real barrier in practice, as spending a cache line or exclusive reservation granule (up to 2 KB on ARM) of store per thread in the shared memory is not considered too costly for practical systems. Typically, the amount of store logically required is a word, but physically CAS operations on the same cache line will collide, and LL/SC operations in the same exclusive reservation granule will collide, so the amount of store physically required[citation needed] is greater.[clarification needed] Wait-free algorithms were rare until 2011, both in research and in practice. However, in 2011 Kogan and Petrank[18] presented a wait-free queue building on the CAS primitive, generally available on common hardware. Their construction expanded the lock-free queue of Michael and Scott,[19] which is an efficient queue often used in practice. A follow-up paper by Kogan and Petrank[20] provided a method for making wait-free algorithms fast and used this method to make the wait-free queue practically as fast as its lock-free counterpart. A subsequent paper by Timnat and Petrank[21] provided an automatic mechanism for generating wait-free data structures from lock-free ones. Thus, wait-free implementations are now available for many data structures. Under reasonable assumptions, Alistarh, Censor-Hillel, and Shavit showed that lock-free algorithms are practically wait-free.[22] Thus, in the absence of hard deadlines, wait-free algorithms may not be worth the additional complexity that they introduce. Lock-freedom allows individual threads to starve but guarantees system-wide throughput.
An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). All wait-free algorithms are lock-free. In particular, if one thread is suspended, then a lock-free algorithm guarantees that the remaining threads can still make progress. Hence, if two threads can contend for the same mutex lock or spinlock, then the algorithm is not lock-free. (If we suspend one thread that holds the lock, then the second thread will block.) Equivalently, an algorithm is lock-free if, infinitely often, operations by some processors succeed in a finite number of steps. For instance, if N processors are trying to execute an operation, some of the N processes will succeed in finishing the operation in a finite number of steps, while others might fail and retry on failure. The difference between wait-free and lock-free is that a wait-free operation by each process is guaranteed to succeed in a finite number of steps, regardless of the other processors. In general, a lock-free algorithm can run in four phases: completing one's own operation, assisting an obstructing operation, aborting an obstructing operation, and waiting. Completing one's own operation is complicated by the possibility of concurrent assistance and abortion, but is invariably the fastest path to completion. The decision about when to assist, abort or wait when an obstruction is met is the responsibility of a contention manager. This may be very simple (assist higher priority operations, abort lower priority ones), or may be more optimized to achieve better throughput, or to lower the latency of prioritized operations. Correct concurrent assistance is typically the most complex part of a lock-free algorithm, and often very costly to execute: not only does the assisting thread slow down, but, thanks to the mechanics of shared memory, the thread being assisted will be slowed too, if it is still running.
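As a concrete example of a lock-free structure, a Treiber-style stack retries a CAS on the head pointer until it succeeds; whichever thread's CAS lands makes progress, so the system as a whole always advances even if individual threads retry. The CAS is simulated here for illustration, since Python exposes no such hardware instruction; class and method names are invented for the sketch.

```python
import threading

class _Node:
    __slots__ = ("value", "next")
    def __init__(self, value, next):
        self.value, self.next = value, next

class TreiberStack:
    """Lock-free stack sketch: push and pop each build a proposed new
    head, then try to install it with a CAS on the head pointer,
    retrying on contention. The lock in _cas only simulates the
    atomicity of a hardware CAS instruction."""
    def __init__(self):
        self._head = None
        self._guard = threading.Lock()

    def _cas(self, expected, new):
        with self._guard:
            if self._head is expected:
                self._head = new
                return True
            return False

    def push(self, value):
        while True:
            head = self._head
            if self._cas(head, _Node(value, head)):
                return

    def pop(self):
        while True:
            head = self._head
            if head is None:
                return None          # empty stack
            if self._cas(head, head.next):
                return head.value

stack = TreiberStack()
for v in (1, 2, 3):
    stack.push(v)
pops = [stack.pop(), stack.pop(), stack.pop()]
```

No thread ever blocks holding a resource: a suspended thread can delay only its own operation, never the other threads' CAS attempts.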
Obstruction-freedom is the weakest natural non-blocking progress guarantee. An algorithm is obstruction-free if at any point, a single thread executed in isolation (i.e., with all obstructing threads suspended) for a bounded number of steps will complete its operation.[15] All lock-free algorithms are obstruction-free. Obstruction-freedom demands only that any partially completed operation can be aborted and the changes made rolled back. Dropping concurrent assistance can often result in much simpler algorithms that are easier to validate. Preventing the system from continually live-locking is the task of a contention manager. Some obstruction-free algorithms use a pair of "consistency markers" in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again.
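The consistency-marker technique can be sketched with a single sequence counter sampled before and after the read, serving as the pair of markers (a seqlock-style scheme; class and field names are invented for the sketch, and the writers' lock only serializes writers, not readers).

```python
import threading

class MarkedBox:
    """Consistency-marker pattern: readers sample the sequence counter
    before and after copying the data, and retry if the two samples
    differ (or are odd, meaning a write was in progress)."""
    def __init__(self, data):
        self._seq = 0                     # even: consistent; odd: write underway
        self._data = data
        self._writers = threading.Lock()  # writers still exclude one another

    def write(self, data):
        with self._writers:
            self._seq += 1                # mark "update in progress"
            self._data = data
            self._seq += 1                # mark "consistent again"

    def read(self):
        while True:
            before = self._seq
            data = self._data             # copy into a local buffer
            after = self._seq
            if before == after and before % 2 == 0:
                return data               # markers match: snapshot is consistent
            # markers differ: a writer interfered, discard and retry

box = MarkedBox(("x", 1))
box.write(("y", 2))
snapshot = box.read()
```

A reader running in isolation always finishes in a bounded number of steps, which is the obstruction-freedom guarantee; a reader racing a continual stream of writers, however, can retry indefinitely, which is why a contention manager may be needed to prevent livelock.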
https://en.wikipedia.org/wiki/Lock-free_and_wait-free_algorithms
In concurrent programming, a monitor is a synchronization construct that prevents threads from concurrently accessing a shared object's state and allows them to wait for the state to change. They provide a mechanism for threads to temporarily give up exclusive access in order to wait for some condition to be met, before regaining exclusive access and resuming their task. A monitor consists of a mutex (lock) and at least one condition variable. A condition variable is explicitly 'signalled' when the object's state is modified, temporarily passing the mutex to another thread 'waiting' on the condition variable. Another definition of monitor is a thread-safe class, object, or module that wraps around a mutex in order to safely allow access to a method or variable by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion: at each point in time, at most one thread may be executing any of its methods. By using one or more condition variables it can also provide the ability for threads to wait on a certain condition (thus using the above definition of a "monitor"). For the rest of this article, this sense of "monitor" will be referred to as a "thread-safe object/class/module". Monitors were invented by Per Brinch Hansen[1] and C. A. R. Hoare,[2] and were first implemented in Brinch Hansen's Concurrent Pascal language.[3] While a thread is executing a method of a thread-safe object, it is said to occupy the object, by holding its mutex (lock). Thread-safe objects are implemented to enforce that, at each point in time, at most one thread may occupy the object. The lock, which is initially unlocked, is locked at the start of each public method, and is unlocked at each return from each public method. Upon calling one of the methods, a thread must wait until no other thread is executing any of the thread-safe object's methods before starting execution of its method.
Consider, for example, a bank-account object with a method for withdrawing money: without this mutual exclusion, two threads could cause money to be lost or gained for no reason. For example, two threads withdrawing 1000 from the account could both return true, while causing the balance to drop by only 1000, as follows: first, both threads fetch the current balance, find it greater than 1000, and subtract 1000 from it; then, both threads store the balance and return. For many applications, mutual exclusion is not enough. Threads attempting an operation may need to wait until some condition P holds true. A busy-waiting loop will not work, as mutual exclusion will prevent any other thread from entering the monitor to make the condition true. Other "solutions" exist, such as having a loop that unlocks the monitor, waits a certain amount of time, locks the monitor, and checks for the condition P. Theoretically, it works and will not deadlock, but issues arise. It is hard to decide an appropriate amount of waiting time: too small and the thread will hog the CPU, too big and it will be apparently unresponsive. What is needed is a way to signal the thread when the condition P is true (or could be true). A classic concurrency problem is that of the bounded producer/consumer, in which there is a queue or ring buffer of tasks with a maximum size, with one or more threads being "producer" threads that add tasks to the queue, and one or more other threads being "consumer" threads that take tasks out of the queue. The queue is assumed to be non-thread-safe itself, and it can be empty, full, or between empty and full. Whenever the queue is full of tasks, then we need the producer threads to block until there is room from consumer threads dequeueing tasks. On the other hand, whenever the queue is empty, then we need the consumer threads to block until more tasks are available due to producer threads adding them.
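The withdrawal race described above disappears once the account is a thread-safe object whose public methods all hold the object's mutex. A minimal Python sketch (class and method names are illustrative):

```python
import threading

class Account:
    """A thread-safe object in the monitor sense: every public method
    body runs while holding the object's mutex, so the fetch-check-
    subtract-store sequence of withdraw can never be interleaved."""
    def __init__(self, balance):
        self._balance = balance
        self._lock = threading.Lock()

    def withdraw(self, amount):
        with self._lock:              # occupy the monitor
            if self._balance >= amount:
                self._balance -= amount
                return True
            return False

    def balance(self):
        with self._lock:
            return self._balance

account = Account(1000)
results = []
threads = [threading.Thread(target=lambda: results.append(account.withdraw(1000)))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Exactly one withdrawal succeeds; the balance ends at 0, never -1000.
```

Mutual exclusion alone, however, cannot make a thread wait until the balance is sufficient; that requires the condition variables discussed next.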
As the queue is a concurrent object shared between threads, accesses to it must be made atomic, because the queue can be put into an inconsistent state during the course of the queue access that should never be exposed between threads. Thus, any code that accesses the queue constitutes a critical section that must be synchronized by mutual exclusion. If code and processor instructions in critical sections of code that access the queue could be interleaved by arbitrary context switches between threads on the same processor or by simultaneously-running threads on multiple processors, then there is a risk of exposing inconsistent state and causing race conditions. A naive approach is to design the code with busy-waiting and no synchronization, making the code subject to race conditions. Such code has a serious problem, in that accesses to the queue can be interrupted and interleaved with other threads' accesses to the queue. The queue.enqueue and queue.dequeue methods likely have instructions to update the queue's member variables such as its size, beginning and ending positions, assignment and allocation of queue elements, etc. In addition, the queue.isEmpty() and queue.isFull() methods read this shared state as well. If producer/consumer threads are allowed to be interleaved during the calls to enqueue/dequeue, then inconsistent state of the queue can be exposed, leading to race conditions. In addition, if one consumer makes the queue empty in between another consumer's exiting the busy-wait and calling "dequeue", then the second consumer will attempt to dequeue from an empty queue, leading to an error. Likewise, if a producer makes the queue full in between another producer's exiting the busy-wait and calling "enqueue", then the second producer will attempt to add to a full queue, leading to an error.
One naive approach to achieve synchronization, as alluded to above, is to use "spin-waiting", in which a mutex is used to protect the critical sections of code and busy-waiting is still used, with the lock being acquired and released in between each busy-wait check. This method assures that an inconsistent state does not occur, but wastes CPU resources due to the unnecessary busy-waiting. Even if the queue is empty and producer threads have nothing to add for a long time, consumer threads are always busy-waiting unnecessarily. Likewise, even if consumers are blocked for a long time on processing their current tasks and the queue is full, producers are always busy-waiting. This is a wasteful mechanism. What is needed is a way to make producer threads block until the queue is non-full, and a way to make consumer threads block until the queue is non-empty. (N.B.: Mutexes themselves can also be spin-locks, which involve busy-waiting in order to get the lock, but in order to solve this problem of wasted CPU resources, we assume that queueLock is not a spin-lock and properly uses a blocking lock queue itself.) The solution is to use condition variables. Conceptually, a condition variable is a queue of threads, associated with a mutex, on which a thread may wait for some condition to become true. Thus each condition variable c is associated with an assertion Pc. While a thread is waiting on a condition variable, that thread is not considered to occupy the monitor, and so other threads may enter the monitor to change the monitor's state. In most types of monitors, these other threads may signal the condition variable c to indicate that assertion Pc is true in the current state. Thus there are three main operations on condition variables: wait, signal, and broadcast. As a design rule, multiple condition variables can be associated with the same mutex, but not vice versa. (This is a one-to-many correspondence.)
This is because the predicate Pc is the same for all threads using the monitor and must be protected with mutual exclusion from all other threads that might cause the condition to be changed or that might read it while the thread in question causes it to be changed, but there may be different threads that want to wait for a different condition on the same variable, requiring the same mutex to be used. In the producer-consumer example described above, the queue must be protected by a unique mutex object, m. The "producer" threads will want to wait on a monitor using lock m and a condition variable c_full which blocks until the queue is non-full. The "consumer" threads will want to wait on a different monitor using the same mutex m but a different condition variable c_empty which blocks until the queue is non-empty. It would (usually) not make sense to have different mutexes for the same condition variable, but this classic example shows why it often certainly makes sense to have multiple condition variables using the same mutex. A mutex used by one or more condition variables (one or more monitors) may also be shared with code that does not use condition variables (and which simply acquires/releases it without any wait/signal operations), if those critical sections do not happen to require waiting for a certain condition on the concurrent data. The proper basic usage of a monitor is to acquire the mutex, wait in a loop for the guarding condition while it is false, perform the guarded operation, signal any conditions that the operation may have made true, and release the mutex. Having introduced the usage of condition variables, let us use it to revisit and solve the classic bounded producer/consumer problem.
The classic solution is to use two monitors, comprising two condition variables sharing one lock on the queue. This ensures concurrency between the producer and consumer threads sharing the task queue, and blocks the threads that have nothing to do, rather than busy-waiting as in the aforementioned approach using spin-locks. A variant of this solution could use a single condition variable for both producers and consumers, perhaps named "queueFullOrEmptyCV" or "queueSizeChangedCV". In this case, more than one condition is associated with the condition variable, such that the condition variable represents a weaker condition than the conditions being checked by individual threads. The condition variable represents threads that are waiting for the queue to be non-full and ones waiting for it to be non-empty. However, doing this would require using broadcast in all the threads using the condition variable; a regular signal cannot be used. This is because the regular signal might wake up a thread of the wrong type whose condition has not yet been met, and that thread would go back to sleep without a thread of the correct type getting signaled. For example, a producer might make the queue full and wake up another producer instead of a consumer, and the woken producer would go back to sleep. In the complementary case, a consumer might make the queue empty and wake up another consumer instead of a producer, and the consumer would go back to sleep. Using broadcast ensures that some thread of the right type will proceed as expected by the problem statement. Monitors are implemented using an atomic read-modify-write primitive and a waiting primitive. The read-modify-write primitive (usually test-and-set or compare-and-swap) is usually in the form of a memory-locking instruction provided by the ISA, but can also be composed of non-locking instructions on single-processor devices when interrupts are disabled.
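The classic two-condition-variable solution can be sketched in Python, whose threading.Condition lets two condition variables share one underlying mutex (class and method names are illustrative):

```python
import threading

class BoundedQueue:
    """Classic bounded producer/consumer monitor: one mutex on the
    queue, shared by a 'not full' and a 'not empty' condition."""
    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity
        lock = threading.Lock()                   # the single shared mutex m
        self._not_full = threading.Condition(lock)
        self._not_empty = threading.Condition(lock)

    def put(self, item):
        with self._not_full:                      # acquires the shared mutex
            while len(self._items) == self._capacity:
                self._not_full.wait()             # releases the mutex while blocked
            self._items.append(item)
            self._not_empty.notify()              # wake one waiting consumer

    def get(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            item = self._items.pop(0)
            self._not_full.notify()               # wake one waiting producer
            return item

q = BoundedQueue(4)
consumed = []
consumer = threading.Thread(
    target=lambda: [consumed.append(q.get()) for _ in range(20)])
consumer.start()
for i in range(20):
    q.put(i)        # blocks whenever the queue is full
consumer.join()
```

Because each waiter re-checks its condition in a while loop, a regular notify suffices here; only the single-condition-variable variant described above needs broadcast.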
The waiting primitive can be a busy-wait loop or an OS-provided primitive that prevents the thread from being scheduled until it is ready to proceed. The original proposals by C. A. R. Hoare and Per Brinch Hansen were for blocking condition variables. With a blocking condition variable, the signaling thread must wait outside the monitor (at least) until the signaled thread relinquishes occupancy of the monitor by either returning or by again waiting on a condition variable. Monitors using blocking condition variables are often called Hoare-style monitors or signal-and-urgent-wait monitors. We assume there are two queues of threads associated with each monitor object: e, the entrance queue, and s, a queue of threads that have signaled. In addition, we assume that for each condition variable c, there is a queue c.q of threads waiting on c. All queues are typically guaranteed to be fair and, in some implementations, may be guaranteed to be first in, first out. The implementation of each operation is as follows. (We assume that each operation runs in mutual exclusion to the others; thus restarted threads do not begin executing until the operation is complete.) The schedule routine selects the next thread to occupy the monitor or, in the absence of any candidate threads, unlocks the monitor. The resulting signaling discipline is known as "signal and urgent wait", as the signaler must wait, but is given priority over threads on the entrance queue. An alternative is "signal and wait", in which there is no s queue and the signaler waits on the e queue instead. Some implementations provide a signal and return operation that combines signaling with returning from a procedure.
In either case ("signal and urgent wait" or "signal and wait"), when a condition variable is signaled and there is at least one thread waiting on the condition variable, the signaling thread hands occupancy over to the signaled thread seamlessly, so that no other thread can gain occupancy in between. If Pc is true at the start of each signal c operation, it will be true at the end of each wait c operation. This is summarized by contracts in which I is the monitor's invariant; it is assumed that I and Pc do not depend on the contents or lengths of any queues. (When the condition variable can be queried as to the number of threads waiting on its queue, more sophisticated contracts can be given; for example, a useful pair of contracts allows occupancy to be passed without establishing the invariant. See Howard[4] and Buhr et al.[5] for more.) It is important to note here that the assertion Pc is entirely up to the programmer; he or she simply needs to be consistent about what it is. We conclude this section with an example of a thread-safe class using a blocking monitor that implements a bounded, thread-safe stack. Note that, in this example, the thread-safe stack is internally providing a mutex, which, as in the earlier producer/consumer example, is shared by both condition variables, which are checking different conditions on the same concurrent data. The only difference is that the producer/consumer example assumed a regular non-thread-safe queue and was using a standalone mutex and condition variables, without these details of the monitor abstracted away as is the case here. In this example, when the "wait" operation is called, it must somehow be supplied with the thread-safe stack's mutex, such as if the "wait" operation is an integrated part of the "monitor class".
Aside from this kind of abstracted functionality, when a "raw" monitor is used, it will always have to include a mutex and a condition variable, with a unique mutex for each condition variable. With nonblocking condition variables (also called "Mesa style" condition variables or "signal and continue" condition variables), signaling does not cause the signaling thread to lose occupancy of the monitor. Instead, the signaled threads are moved to the e queue. There is no need for the s queue. With nonblocking condition variables, the signal operation is often called notify, a terminology we will follow here. It is also common to provide a notify all operation that moves all threads waiting on a condition variable to the e queue. (We assume that each operation runs in mutual exclusion to the others; thus restarted threads do not begin executing until the operation is complete.) As a variation on this scheme, the notified thread may be moved to a queue called w, which has priority over e. See Howard[4] and Buhr et al.[5] for further discussion. It is possible to associate an assertion Pc with each condition variable c such that Pc is sure to be true upon return from wait c. However, one must ensure that Pc is preserved from the time the notifying thread gives up occupancy until the notified thread is selected to re-enter the monitor. Between these times there could be activity by other occupants. Thus it is common for Pc to simply be true. For this reason, it is usually necessary to enclose each wait operation in a loop of the form "while not P do wait c", where P is some condition stronger than Pc. The operations notify c and notify all c are treated as "hints" that P may be true for some waiting thread. Every iteration of such a loop past the first represents a lost notification; thus with nonblocking monitors, one must be careful to ensure that too many notifications cannot be lost.
As an example of "hinting," consider a bank account in which a withdrawing thread will wait until the account has sufficient funds before proceeding. In this example, the condition being waited for is a function of the amount to be withdrawn, so it is impossible for a depositing thread to know that it made such a condition true. It makes sense in this case to allow each waiting thread into the monitor (one at a time) to check whether its assertion is true. In the Java language, each object may be used as a monitor. Methods requiring mutual exclusion must be explicitly marked with the synchronized keyword. Blocks of code may also be marked by synchronized.[6] Rather than having explicit condition variables, each monitor (i.e., object) is equipped with a single wait queue in addition to its entrance queue. All waiting is done on this single wait queue, and all notify and notifyAll operations apply to this queue.[7] This approach has been adopted in other languages, for example C#. Another approach to signaling is to omit the signal operation. Whenever a thread leaves the monitor (by returning or waiting), the assertions of all waiting threads are evaluated until one is found to be true. In such a system, condition variables are not needed, but the assertions must be explicitly coded. The contract for wait is Brinch Hansen and Hoare developed the monitor concept in the early 1970s, based on earlier ideas of their own and of Edsger Dijkstra.[8] Brinch Hansen published the first monitor notation, adopting the class concept of Simula 67,[1] and invented a queueing mechanism.[9] Hoare refined the rules of process resumption.[2] Brinch Hansen created the first implementation of monitors, in Concurrent Pascal.[8] Hoare demonstrated their equivalence to semaphores.
Monitors (and Concurrent Pascal) were soon used to structure process synchronization in the Solo operating system.[10][11] Programming languages that have supported monitors include: A number of libraries have been written that allow monitors to be constructed in languages that do not support them natively. When library calls are used, it is up to the programmer to explicitly mark the start and end of code executed with mutual exclusion. Pthreads is one such library.
https://en.wikipedia.org/wiki/Monitor_(synchronization)
In computer science, mutual exclusion is a property of concurrency control, which is instituted for the purpose of preventing race conditions. It is the requirement that one thread of execution never enters a critical section while a concurrent thread of execution is already accessing said critical section, where a critical section refers to an interval of time during which a thread of execution accesses a shared resource or shared memory. The shared resource is a data object which two or more concurrent threads are trying to modify (two concurrent read operations are permitted, but no two concurrent writes, nor one read and one write, are permitted, since these lead to data inconsistency). Mutual exclusion algorithms ensure that if a process is already performing a write operation on a data object (critical section), no other process or thread is allowed to access or modify the same object until the first process has finished writing to the data object and released it for other processes to read and write. The requirement of mutual exclusion was first identified and solved by Edsger W. Dijkstra in his seminal 1965 paper "Solution of a problem in concurrent programming control",[1][2] which is credited as the first topic in the study of concurrent algorithms.[3] A simple example of why mutual exclusion is important in practice can be visualized using a singly linked list of four items, where the second and third are to be removed. The removal of a node that sits between two other nodes is performed by changing the next pointer of the previous node to point to the next node (in other words, if node i is being removed, then the next pointer of node i − 1 is changed to point to node i + 1, thereby removing from the linked list any reference to node i).
When such a linked list is being shared between multiple threads of execution, two threads of execution may attempt to remove two different nodes simultaneously: one thread of execution changes the next pointer of node i − 1 to point to node i + 1, while another thread of execution changes the next pointer of node i to point to node i + 2. Although both removal operations complete successfully, the desired state of the linked list is not achieved: node i + 1 remains in the list, because the next pointer of node i − 1 points to node i + 1. This problem (called a race condition) can be avoided by using the requirement of mutual exclusion to ensure that simultaneous updates to the same part of the list cannot occur. The term mutual exclusion is also used in reference to the simultaneous writing of a memory address by one thread while the aforementioned memory address is being manipulated or read by one or more other threads. The problem which mutual exclusion addresses is a problem of resource sharing: how can a software system control multiple processes' access to a shared resource, when each process needs exclusive control of that resource while doing its work? The mutual-exclusion solution to this makes the shared resource available only while the process is in a specific code segment called the critical section. It controls access to the shared resource by controlling each process's execution of that part of its program where the resource would be used. A successful solution to this problem must have at least these two properties: Deadlock freedom can be expanded to implement one or both of these properties: Every process's program can be partitioned into four sections, resulting in four states. Program execution cycles through these four states in order:[5] If a process wishes to enter the critical section, it must first execute the trying section and wait until it acquires access to the critical section.
After the process has executed its critical section and is finished with the shared resources, it needs to execute the exit section to release them for other processes' use. The process then returns to its non-critical section. On uniprocessor systems, the simplest solution to achieve mutual exclusion is to disable interrupts during a process's critical section. This prevents any interrupt service routines from running (effectively preventing a process from being preempted). Although this solution is effective, it leads to many problems. If a critical section is long, then the system clock will drift every time a critical section is executed, because the timer interrupt is no longer serviced, so tracking time is impossible during the critical section. Also, if a process halts during its critical section, control will never be returned to another process, effectively halting the entire system. A more elegant method for achieving mutual exclusion is the busy-wait. Busy-waiting is effective for both uniprocessor and multiprocessor systems. The use of shared memory and an atomic test-and-set instruction provides the mutual exclusion. A process can test-and-set on a location in shared memory, and since the operation is atomic, only one process can set the flag at a time. Any process that is unsuccessful in setting the flag can either go on to do other tasks and try again later, release the processor to another process and try again later, or continue to loop while checking the flag until it is successful in acquiring it. Preemption is still possible, so this method allows the system to continue to function even if a process halts while holding the lock. Several other atomic operations can be used to provide mutual exclusion of data structures; most notable of these is compare-and-swap (CAS). CAS can be used to achieve wait-free mutual exclusion for any shared data structure by creating a linked list where each node represents the desired operation to be performed.
CAS is then used to change the pointers in the linked list[6] during the insertion of a new node. Only one process can be successful in its CAS; all other processes attempting to add a node at the same time will have to try again. Each process can then keep a local copy of the data structure and, upon traversing the linked list, can perform each operation from the list on its local copy. In addition to hardware-supported solutions, some software solutions exist that use busy waiting to achieve mutual exclusion. Examples include: These algorithms do not work if out-of-order execution is used on the platform that executes them. Programmers have to specify strict ordering on the memory operations within a thread.[8] It is often preferable to use synchronization facilities provided by an operating system's multithreading library, which will take advantage of hardware solutions if possible but will use software solutions if no hardware solutions exist. For example, when the operating system's lock library is used and a thread tries to acquire an already acquired lock, the operating system could suspend the thread using a context switch and swap it out with another thread that is ready to be run, or could put that processor into a low-power state if there is no other thread that can be run. Therefore, most modern mutual exclusion methods attempt to reduce latency and busy-waits by using queuing and context switches. However, if the time spent suspending a thread and then restoring it can be proven to be always more than the time that must be waited for a thread to become ready to run after being blocked in a particular situation, then spinlocks are an acceptable solution (for that situation only).[citation needed] One binary test&set register is sufficient to provide the deadlock-free solution to the mutual exclusion problem.
But a solution built with a test&set register can possibly lead to the starvation of some processes which become caught in the trying section.[4] In fact, Ω(√n) distinct memory states are required to avoid lockout. To avoid unbounded waiting, n distinct memory states are required.[9] Most algorithms for mutual exclusion are designed with the assumption that no failure occurs while a process is running inside the critical section. However, in reality such failures may be commonplace. For example, a sudden loss of power or a faulty interconnect might cause a process in a critical section to experience an unrecoverable error or otherwise be unable to continue. If such a failure occurs, conventional, non-failure-tolerant mutual exclusion algorithms may deadlock or otherwise fail key liveness properties. To deal with this problem, several solutions using crash-recovery mechanisms have been proposed.[10] The solutions explained above can be used to build the synchronization primitives below: Many forms of mutual exclusion have side effects. For example, classic semaphores permit deadlocks, in which one process gets a semaphore, another process gets a second semaphore, and then each waits for the other's semaphore to be released. Other common side effects include starvation, in which a process never gets sufficient resources to run to completion; priority inversion, in which a higher-priority thread waits for a lower-priority thread; and high latency, in which response to interrupts is not prompt. Much research is aimed at eliminating the above effects, often with the goal of guaranteeing non-blocking progress. No perfect scheme is known. Historically, blocking system calls put an entire process to sleep. Until such calls became threadsafe, there was no proper mechanism for sleeping a single thread within a process (see polling).[citation needed]
https://en.wikipedia.org/wiki/Mutual_exclusion
In computer science, a readers–writer lock (also known as a single-writer lock,[1] a multi-reader lock,[2] a push lock,[3] or an MRSW lock) is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, whereas write operations require exclusive access. This means that multiple threads can read the data in parallel, but an exclusive lock is needed for writing or modifying data. When a writer is writing the data, all other writers and readers will be blocked until the writer is finished writing. A common use might be to control access to a data structure in memory that cannot be updated atomically and is invalid (and should not be read by another thread) until the update is complete. Readers–writer locks are usually constructed on top of mutexes and condition variables, or on top of semaphores. Some RW locks allow the lock to be atomically upgraded from being locked in read-mode to write-mode, as well as being downgraded from write-mode to read-mode.[1] Upgrading a lock from read-mode to write-mode is prone to deadlocks, since whenever two threads holding reader locks both attempt to upgrade to writer locks, a deadlock is created that can only be broken by one of the threads releasing its reader lock. The deadlock can be avoided by allowing only one thread to acquire the lock in "read-mode with intent to upgrade to write" while there are no threads in write mode and possibly non-zero threads in read mode. RW locks can be designed with different priority policies for reader vs. writer access. The lock can either be designed to always give priority to readers (read-preferring), to always give priority to writers (write-preferring), or be unspecified with regard to priority. These policies lead to different trade-offs with regard to concurrency and starvation. Several implementation strategies for readers–writer locks exist, reducing them to synchronization primitives that are assumed to pre-exist.
Raynal demonstrates how to implement an R/W lock using two mutexes and a single integer counter. The counter, b, tracks the number of blocking readers. One mutex, r, protects b and is only used by readers; the other, g (for "global"), ensures mutual exclusion of writers. This requires that a mutex acquired by one thread can be released by another. The following is pseudocode for the operations:

    Initialize:  set b to 0; r and g are unlocked.
    Begin Read:  lock r; increment b; if b = 1, lock g; unlock r.
    End Read:    lock r; decrement b; if b = 0, unlock g; unlock r.
    Begin Write: lock g.
    End Write:   unlock g.

This implementation is read-preferring.[4]: 76  Alternatively, an RW lock can be implemented in terms of a condition variable, cond, an ordinary (mutex) lock, g, and various counters and flags describing the threads that are currently active or waiting.[7][8][9] For a write-preferring RW lock one can use two integer counters and one Boolean flag: num_readers_active, num_writers_waiting, and writer_active. Initially num_readers_active and num_writers_waiting are zero and writer_active is false. The lock and release operations can be implemented as:

    Begin Read:  lock g; while num_writers_waiting > 0 or writer_active, wait on cond;
                 increment num_readers_active; unlock g.
    End Read:    lock g; decrement num_readers_active;
                 if num_readers_active = 0, notify cond (broadcast); unlock g.
    Begin Write: lock g; increment num_writers_waiting;
                 while num_readers_active > 0 or writer_active, wait on cond;
                 decrement num_writers_waiting; set writer_active to true; unlock g.
    End Write:   lock g; set writer_active to false; notify cond (broadcast); unlock g.

The read-copy-update (RCU) algorithm is one solution to the readers–writers problem. RCU is wait-free for readers. The Linux kernel implements a special solution for few writers called seqlock.
https://en.wikipedia.org/wiki/Read/write_lock_pattern
These tables provide a comparison of operating systems for computer devices, listing general and technical information for a number of widely used and currently available PC or handheld (including smartphone and tablet computer) operating systems. The article "Usage share of operating systems" provides a broader, and more general, comparison of operating systems that includes servers, mainframes and supercomputers. Because of the large number and variety of available Linux distributions, they are all grouped under a single entry; see comparison of Linux distributions for a detailed comparison. There is also a variety of BSD and DOS operating systems, covered in comparison of BSD operating systems and comparison of DOS operating systems. The nomenclature for operating systems varies among providers and sometimes within providers. For purposes of this article, the terms used are defined below. License and pricing policies also vary among different systems; the tables below use the following terms. For POSIX-compliant (or partly compliant) systems like FreeBSD, Linux, macOS or Solaris, the basic commands are the same because they are standardized. NOTE: Linux systems may vary by distribution in which specific program, or even 'command', is called, via the POSIX alias function. For example, if you wanted the DOS dir command to give you a directory listing with one detailed file listing per line, you could use alias dir='ls -lahF' (e.g. in a session configuration file).
https://en.wikipedia.org/wiki/Comparison_of_operating_systems
DBOS (Database-Oriented Operating System) is a database-oriented operating system meant to simplify and improve the scalability, security and resilience of large-scale distributed applications.[1][2] It started in 2020 as a joint open source project with MIT, Stanford and Carnegie Mellon University, after a brainstorm between Michael Stonebraker and Matei Zaharia on how to scale and improve scheduling and performance of millions of Apache Spark tasks.[2] The basic idea is to run a multi-node, multi-core, transactional, highly available distributed database, such as VoltDB, as the only application for a microkernel, and then to implement scheduling, messaging, file systems and other operating system services on top of the database. The architectural philosophy is described by this quote from the abstract of their initial preprint: "All operating system state should be represented uniformly as database tables, and operations on this state should be made via queries from otherwise stateless tasks. This design makes it easy to scale and evolve the OS without whole-system refactoring, inspect and debug system state, upgrade components without downtime, manage decisions using machine learning, and implement sophisticated security features."[3] Stonebraker claims a variety of security benefits, from a "smaller, less porous attack surface" to the ability to log and analyze how the system state changes in real time due to the transactional nature of the OS.[1] Recovery from a severe bug or an attack can be as simple as rolling back the database to a previous state. And since the database is already distributed, the complexity of orchestration systems like Kubernetes can be avoided. A prototype was built with competitive performance to existing systems.[4] In March 2024, DBOS Cloud became the first commercial service from DBOS Inc. It provides transactional Functions as a Service (FaaS) and is positioned as a competitor to serverless computing architectures like AWS Lambda.
DBOS Cloud is currently based on FoundationDB, a fast ACID NoSQL database, running on the Firecracker microVM service from AWS. It provides built-in support for features like multinode scaling and a "time-traveler" debugger that can help track down elusive heisenbugs and works in Visual Studio Code. Another feature is reliable execution, allowing a program to continue running even if the operating system needs to be restarted, and ensuring that no work is repeated.[5] Firecracker runs on a stripped-down Linux microkernel via a stripped-down KVM hypervisor, so parts of the Linux kernel are still under the covers, but work is ongoing to eliminate them.[6] DBOS Cloud has been tested running applications across 1,000 cores. The first API provided is for TypeScript, via the open-source DBOS Transact framework.[6] It provides a runtime with built-in reliable message delivery and idempotency.[7] Holger Mueller of Constellation Research wondered how well DBOS the company can scale: "Will a small team at DBOS be able to run an OS, database, observability, workflow and cyber stack as good as the combination of the best of breed vendors?"[8]
https://en.wikipedia.org/wiki/DBOS
This is a list of operating systems. Computer operating systems can be categorized by technology, ownership, licensing, working state, usage, and many other characteristics. In practice, many of these groupings may overlap. The criterion for inclusion is notability, as shown either through an existing Wikipedia article or a citation to a reliable source. Non-Unix operating systems include the Multiple Console Time Sharing System (MCTS), from General Motors Research.[20]
https://en.wikipedia.org/wiki/List_of_operating_systems
This is a list of people who made transformative breakthroughs in the creation, development and imagining of what computers could do. Items marked with a tilde (~) are circa dates.
https://en.wikipedia.org/wiki/List_of_pioneers_in_computer_science
This page is a glossary of operating systems terminology.[1][2]
https://en.wikipedia.org/wiki/Glossary_of_operating_systems_terms
A microcontroller (MC, uC, or μC) or microcontroller unit (MCU) is a small computer on a single integrated circuit. A microcontroller contains one or more CPUs (processor cores) along with memory and programmable input/output peripherals. Program memory in the form of NOR flash, OTP ROM, or ferroelectric RAM is also often included on the chip, as well as a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications consisting of various discrete chips. In modern terminology, a microcontroller is similar to, but less sophisticated than, a system on a chip (SoC). A SoC may include a microcontroller as one of its components but usually integrates it with advanced peripherals like a graphics processing unit (GPU), a Wi-Fi module, or one or more coprocessors. Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys, and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make digital control of more devices and processes practical. Mixed-signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems. In the context of the Internet of Things, microcontrollers are an economical and popular means of data collection, sensing and actuating the physical world as edge devices. Some microcontrollers may use four-bit words and operate at frequencies as low as 4 kHz for low power consumption (single-digit milliwatts or microwatts).
They generally have the ability to retain functionality while waiting for an event such as a button press or other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just nanowatts, making many of them well suited for long-lasting battery applications. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor (DSP), with higher clock speeds and power consumption. The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, released on a single MOS LSI chip in 1971. It was developed by Federico Faggin, using his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima.[1] It was followed by the 4-bit Intel 4040, the 8-bit Intel 8008, and the 8-bit Intel 8080. All of these processors required several external chips to implement a working system, including memory and peripheral interface chips. As a result, the total system cost was several hundred (1970s US) dollars, making it impossible to economically computerize small appliances. MOS Technology introduced its sub-$100 microprocessors in 1975, the 6501 and 6502. Their chief aim was to reduce this cost barrier, but these microprocessors still required external support, memory, and peripheral chips, which kept the total system cost in the hundreds of dollars. One book credits TI engineers Gary Boone and Michael Cochran with the successful creation of the first microcontroller in 1971. The result of their work was the TMS 1000, which became commercially available in 1974.
It combined read-only memory, read/write memory, processor and clock on one chip and was targeted at embedded systems.[2] During the early-to-mid-1970s, Japanese electronics manufacturers began producing microcontrollers for automobiles, including 4-bit MCUs for in-car entertainment, automatic wipers, electronic locks, and dashboards, and 8-bit MCUs for engine control.[3] Partly in response to the existence of the single-chip TMS 1000,[4] Intel developed a computer system on a chip optimized for control applications, the Intel 8048, with commercial parts first shipping in 1977.[4] It combined RAM and ROM on the same chip with a microprocessor. Among numerous applications, this chip would eventually find its way into over one billion PC keyboards. At that time Intel's president, Luke J. Valenter, stated that the microcontroller was one of the most successful products in the company's history, and he expanded the microcontroller division's budget by over 25%. Most microcontrollers at this time had concurrent variants. One had EPROM program memory, with a transparent quartz window in the lid of the package to allow it to be erased by exposure to ultraviolet light. These erasable chips were often used for prototyping. The other variant was either a mask-programmed ROM or a PROM variant which was only programmable once. For the latter, sometimes the designation OTP was used, standing for "one-time programmable". In an OTP microcontroller, the PROM was usually of identical type to the EPROM, but the chip package had no quartz window; because there was no way to expose the EPROM to ultraviolet light, it could not be erased. Because the erasable versions required ceramic packages with quartz windows, they were significantly more expensive than the OTP versions, which could be made in lower-cost opaque plastic packages.
For the erasable variants, quartz was required, instead of less expensive glass, for its transparency to ultraviolet light (to which glass is largely opaque), but the main cost differentiator was the ceramic package itself. Piggyback microcontrollers were also used.[5][6][7] In 1993, the introduction of EEPROM memory allowed microcontrollers (beginning with the Microchip PIC16C84)[8] to be electrically erased quickly without the expensive package required for EPROM, allowing both rapid prototyping and in-system programming. (EEPROM technology had been available prior to this time,[9] but the earlier EEPROM was more expensive and less durable, making it unsuitable for low-cost mass-produced microcontrollers.) The same year, Atmel introduced the first microcontroller using Flash memory, a special type of EEPROM.[10] Other companies rapidly followed suit, with both memory types. Nowadays microcontrollers are cheap and readily available for hobbyists, with large online communities around certain processors. In 2002, about 55% of all CPUs sold in the world were 8-bit microcontrollers and microprocessors.[11] Over two billion 8-bit microcontrollers were sold in 1997,[12] and according to Semico, over four billion 8-bit microcontrollers were sold in 2006.[13] More recently, Semico has claimed the MCU market grew 36.5% in 2010 and 12% in 2011.[14] A typical home in a developed country is likely to have only four general-purpose microprocessors but around three dozen microcontrollers. A typical mid-range automobile has about 30 microcontrollers. They can also be found in many electrical devices such as washing machines, microwave ovens, and telephones. Historically, the 8-bit segment has dominated the MCU market [..] 16-bit microcontrollers became the largest-volume MCU category in 2011, overtaking 8-bit devices for the first time that year [..]
IC Insights believes the makeup of the MCU market will undergo substantial changes in the next five years, with 32-bit devices steadily grabbing a greater share of sales and unit volumes. By 2017, 32-bit MCUs are expected to account for 55% of microcontroller sales [..] In terms of unit volumes, 32-bit MCUs are expected to account for 38% of microcontroller shipments in 2017, while 16-bit devices will represent 34% of the total, and 4-/8-bit designs are forecast to be 28% of units sold that year. The 32-bit MCU market is expected to grow rapidly due to increasing demand for higher levels of precision in embedded-processing systems and the growth in connectivity using the Internet. [..] In the next few years, complex 32-bit MCUs are expected to account for over 25% of the processing power in vehicles. Cost to manufacture can be under US$0.10 per unit. Cost has plummeted over time, with the cheapest 8-bit microcontrollers being available for under US$0.03 in 2018,[16] and some 32-bit microcontrollers around US$1 for similar quantities.
In 2012, following a global crisis (a worst-ever annual sales decline and recovery, with the average sales price plunging 17% year-over-year, the biggest reduction since the 1980s), the average price for a microcontroller was US$0.88 (US$0.69 for 4-/8-bit, US$0.59 for 16-bit, US$1.76 for 32-bit).[15] In 2012, worldwide sales of 8-bit microcontrollers were around US$4 billion, while 4-bit microcontrollers also saw significant sales.[17] In 2015, 8-bit microcontrollers could be bought for US$0.311 (1,000 units),[18] 16-bit for US$0.385 (1,000 units),[19] and 32-bit for US$0.378 (1,000 units, but at US$0.35 for 5,000).[20] In 2018, 8-bit microcontrollers could be bought for US$0.03,[16] 16-bit for US$0.393 (1,000 units, but at US$0.563 for 100 or US$0.349 for a full reel of 2,000),[21] and 32-bit for US$0.503 (1,000 units, but at US$0.466 for 5,000).[22] In 2018, the low-priced microcontrollers above from 2015 were all more expensive (with inflation calculated between 2018 and 2015 prices for those specific units): the 8-bit microcontroller could be bought for US$0.319 (1,000 units), or 2.6% higher,[18] the 16-bit one for US$0.464 (1,000 units), or 21% higher,[19] and the 32-bit one for US$0.503 (1,000 units, but at US$0.466 for 5,000), or 33% higher.[20] On 21 June 2018, the "world's smallest computer" was announced by the University of Michigan. The device is a "0.04 mm³ 16 nW wireless and batteryless sensor system with integrated Cortex-M0+ processor and optical communication for cellular temperature measurement." It "measures just 0.3 mm to a side—dwarfed by a grain of rice. [...] In addition to the RAM and photovoltaics, the new computing devices have processors and wireless transmitters and receivers. Because they are too small to have conventional radio antennae, they receive and transmit data with visible light.
A base station provides light for power and programming, and it receives the data."[23] The device is one-tenth the size of IBM's previously claimed world-record-sized computer from March 2018,[24] which is "smaller than a grain of salt",[25] has a million transistors, costs less than $0.10 to manufacture, and, combined with blockchain technology, is intended for logistics and "crypto-anchor" digital fingerprint applications.[26] A microcontroller can be considered a self-contained system with a processor, memory and peripherals, and can be used as an embedded system.[27] The majority of microcontrollers in use today are embedded in other machinery, such as automobiles, telephones, appliances, and peripherals for computer systems. While some embedded systems are very sophisticated, many have minimal requirements for memory and program length, with no operating system, and low software complexity. Typical input and output devices include switches, relays, solenoids, LEDs, small or custom liquid-crystal displays, radio-frequency devices, and sensors for data such as temperature, humidity, and light level. Embedded systems usually have no keyboard, screen, disks, printers, or other recognizable I/O devices of a personal computer, and may lack human interaction devices of any kind. Microcontrollers must provide real-time (predictable, though not necessarily fast) response to events in the embedded system they are controlling. When certain events occur, an interrupt system can signal the processor to suspend processing the current instruction sequence and to begin an interrupt service routine (ISR, or "interrupt handler") which will perform any processing required based on the source of the interrupt, before returning to the original instruction sequence.
Possible interrupt sources are device-dependent and often include events such as an internal timer overflow, completion of an analog-to-digital conversion, a logic-level change on an input (such as from a button being pressed), and data received on a communication link. Where power consumption is important, as in battery-powered devices, interrupts may also wake a microcontroller from a low-power sleep state, in which the processor is halted until required to do something by a peripheral event.

Typically microcontroller programs must fit in the available on-chip memory, since it would be costly to provide a system with external, expandable memory. Compilers and assemblers are used to convert both high-level and assembly language code into a compact machine code for storage in the microcontroller's memory. Depending on the device, the program memory may be permanent read-only memory that can only be programmed at the factory, or it may be field-alterable flash or erasable read-only memory.

Manufacturers have often produced special versions of their microcontrollers in order to help the hardware and software development of the target system. Originally these included EPROM versions that have a "window" on the top of the device through which program memory can be erased by ultraviolet light, ready for reprogramming after a programming ("burn") and test cycle. Since 1998, EPROM versions have been rare, replaced by EEPROM and flash, which are easier to use (they can be erased electronically) and cheaper to manufacture. Other versions may be available where the ROM is accessed as an external device rather than as internal memory; however, these are becoming rare due to the widespread availability of cheap microcontroller programmers.

The use of field-programmable devices on a microcontroller may allow field update of the firmware or permit late factory revisions to products that have been assembled but not yet shipped. Programmable memory also reduces the lead time required for deployment of a new product.
Where hundreds of thousands of identical devices are required, using parts programmed at the time of manufacture can be economical. These "mask-programmed" parts have the program laid down in the same way as the logic of the chip, at the same time.

A customized microcontroller incorporates a block of digital logic that can be personalized for additional processing capability, peripherals and interfaces that are adapted to the requirements of the application. One example is the AT91CAP from Atmel.

Microcontrollers usually contain from several to dozens of general-purpose input/output pins (GPIO). GPIO pins are software-configurable to either an input or an output state. When GPIO pins are configured to an input state, they are often used to read sensors or external signals. Configured to the output state, GPIO pins can drive external devices such as LEDs or motors, often indirectly, through external power electronics.

Many embedded systems need to read sensors that produce analog signals. However, because microcontrollers are built to interpret and process digital data, i.e. 1s and 0s, they cannot do anything with the analog signals that may be sent to them by a device. So an analog-to-digital converter (ADC) is used to convert the incoming data into a form that the processor can recognize. A less common feature on some microcontrollers is a digital-to-analog converter (DAC) that allows the processor to output analog signals or voltage levels.

In addition to the converters, many embedded microcontrollers include a variety of timers as well. One of the most common types of timers is the programmable interval timer (PIT). A PIT may either count down from some value to zero, or up to the capacity of the count register, overflowing to zero. Once it reaches zero, it sends an interrupt to the processor indicating that it has finished counting.
This is useful for devices such as thermostats, which periodically test the temperature around them to see if they need to turn the air conditioner or the heater on or off.

A dedicated pulse-width modulation (PWM) block makes it possible for the CPU to control power converters, resistive loads, motors, etc., without using many CPU resources in tight timer loops. A universal asynchronous receiver/transmitter (UART) block makes it possible to receive and transmit data over a serial line with very little load on the CPU. Dedicated on-chip hardware also often includes capabilities to communicate with other devices (chips) in digital formats such as Inter-Integrated Circuit (I²C), Serial Peripheral Interface (SPI), Universal Serial Bus (USB), and Ethernet.[28]

Microcontrollers may not implement an external address or data bus, as they integrate RAM and non-volatile memory on the same chip as the CPU. Using fewer pins, the chip can be placed in a much smaller, cheaper package. Integrating the memory and other peripherals on a single chip and testing them as a unit increases the cost of that chip, but often results in decreased net cost of the embedded system as a whole. Even if the cost of a CPU that has integrated peripherals is slightly more than the cost of a CPU and external peripherals, having fewer chips typically allows a smaller and cheaper circuit board, and reduces the labor required to assemble and test the circuit board, in addition to tending to decrease the defect rate for the finished assembly.

A microcontroller is a single integrated circuit, commonly with the following features: This integration drastically reduces the number of chips and the amount of wiring and circuit-board space that would be needed to produce equivalent systems using separate chips. Furthermore, on low-pin-count devices in particular, each pin may interface to several internal peripherals, with the pin function selected by software.
This allows a part to be used in a wider variety of applications than if pins had dedicated functions. Microcontrollers have proved to be highly popular in embedded systems since their introduction in the 1970s.

Some microcontrollers use a Harvard architecture: separate memory buses for instructions and data, allowing accesses to take place concurrently. Where a Harvard architecture is used, instruction words for the processor may be a different bit size than the length of internal memory and registers; for example, 12-bit instructions used with 8-bit data registers.

The decision of which peripherals to integrate is often difficult. Microcontroller vendors often trade operating frequencies and system design flexibility against time-to-market requirements from their customers and overall lower system cost. Manufacturers have to balance the need to minimize the chip size against additional functionality.

Microcontroller architectures vary widely. Some designs include general-purpose microprocessor cores, with one or more ROM, RAM, or I/O functions integrated onto the package. Other designs are purpose-built for control applications. A microcontroller instruction set usually has many instructions intended for bit manipulation (bit-wise operations) to make control programs more compact.[29] For example, a general-purpose processor might require several instructions to test a bit in a register and branch if the bit is set, where a microcontroller could have a single instruction to provide that commonly required function.

Microcontrollers historically have not had math coprocessors, so floating-point arithmetic has been performed by software. However, some recent designs do include FPUs and DSP-optimized features. An example would be Microchip's PIC32 MIPS-based line.
Microcontrollers were originally programmed only in assembly language, but various high-level programming languages, such as C, Python and JavaScript, are now also in common use to target microcontrollers and embedded systems.[30] Compilers for general-purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid in developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.

Microcontrollers with specialty hardware may require their own non-standard dialects of C, such as SDCC for the 8051, which prevent using standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters may also contain nonstandard features, such as MicroPython, although a fork, CircuitPython, has looked to move hardware dependencies to libraries and have the language adhere more closely to the CPython standard.

Interpreter firmware is also available for some microcontrollers. For example, BASIC on the early microcontroller Intel 8052,[31] and BASIC and FORTH on the Zilog Z8,[32] as well as some modern devices. Typically these interpreters support interactive programming.

Simulators are available for some microcontrollers. These allow a developer to analyze what the behavior of the microcontroller and their program should be if they were using the actual part. A simulator will show the internal processor state and also that of the outputs, and allows input signals to be generated. While most simulators are limited in being unable to simulate much other hardware in a system, they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyze problems.
Recent microcontrollers are often integrated with on-chip debug circuitry that, when accessed by an in-circuit emulator (ICE) via JTAG, allows debugging of the firmware with a debugger. A real-time ICE may allow viewing and/or manipulating of internal states while running. A tracing ICE can record the executed program and MCU states before/after a trigger point.

As of 2008, there are several dozen microcontroller architectures and vendors including: Many others exist, some of which are used in a very narrow range of applications or are more like application processors than microcontrollers. The microcontroller market is extremely fragmented, with numerous vendors, technologies, and markets. Note that many vendors sell or have sold multiple architectures.

In contrast to general-purpose computers, microcontrollers used in embedded systems often seek to optimize interrupt latency over instruction throughput. Issues include both reducing the latency and making it more predictable (to support real-time control).

When an electronic device causes an interrupt, during the context switch the intermediate results (registers) have to be saved before the software responsible for handling the interrupt can run. They must also be restored after that interrupt handler is finished. If there are more processor registers, this saving and restoring process may take more time, increasing the latency. (If an ISR does not require the use of some registers, it may simply leave them alone rather than saving and restoring them, so in that case those registers are not involved with the latency.) Ways to reduce such context save/restore latency include having relatively few registers in the central processing unit (undesirable because it slows down most non-interrupt processing substantially), or at least having the hardware not save them all (this fails if the software then needs to compensate by saving the rest "manually").
Another technique involves spending silicon gates on "shadow registers": one or more duplicate registers used only by the interrupt software, perhaps supporting a dedicated stack. Other factors affecting interrupt latency include: Lower-end microcontrollers tend to support fewer interrupt-latency controls than higher-end ones.

Two different kinds of memory are commonly used with microcontrollers: a non-volatile memory for storing firmware and a read–write memory for temporary data.

From the earliest microcontrollers to today, six-transistor SRAM is almost always used as the read/write working memory, with a few more transistors per bit used in the register file. In addition to the SRAM, some microcontrollers also have internal EEPROM and/or NVRAM for data storage; ones that do not have any (such as the BASIC Stamp), or where the internal memory is insufficient, are often connected to an external EEPROM or flash memory chip. A few microcontrollers beginning in 2003 have "self-programmable" flash memory.[10]

The earliest microcontrollers used mask ROM to store firmware. Later microcontrollers (such as the early versions of the Freescale 68HC11 and early PIC microcontrollers) had EPROM memory, which used a translucent window to allow erasure via UV light, while production versions had no such window, being OTP (one-time programmable). Firmware updates were equivalent to replacing the microcontroller itself; thus many products were not upgradeable.

The Motorola MC68HC805[9] was the first microcontroller to use EEPROM to store the firmware. EEPROM microcontrollers became more popular in 1993 when Microchip introduced the PIC16C84[8] and Atmel introduced an 8051-core microcontroller that was the first to use NOR flash memory to store the firmware.[10] Today's microcontrollers almost all use flash memory, with a few models using FRAM and some ultra-low-cost parts still using OTP or mask ROM.
https://en.wikipedia.org/wiki/Microcontroller
A network operating system (NOS) is a specialized operating system for a network device such as a router, switch or firewall. Historically, operating systems with networking capabilities were described as network operating systems, because they allowed personal computers (PCs) to participate in computer networks and shared file and printer access within a local area network (LAN). This description of operating systems is now largely historical, as common operating systems include a network stack to support a client–server model.

Network operating systems are responsible for managing various network activities. Key functions include creating and managing user accounts, controlling access to resources such as files and printers, and facilitating communication between devices. A NOS also monitors network performance, addresses issues, and manages resources to ensure efficient and secure operation of the network.[1]

Packet switching networks were developed to share hardware resources, such as a mainframe computer, a printer or a large and expensive hard disk.[2]: 318

Historically, a network operating system was an operating system for a computer which implemented network capabilities. Operating systems with a network stack allowed personal computers to participate in a client–server architecture, in which a server enables multiple clients to share resources, such as printers.[3][4][5] These limited client/server networks were gradually replaced by peer-to-peer networks, which used networking capabilities to share resources and files located on a variety of computers of all sizes. A peer-to-peer network sets all connected computers as equals; they all share the same abilities to use resources available on the network.[4] Today, distributed computing and groupware applications have become the norm.
Computer operating systems include a networking stack as a matter of course.[2]: 318 During the 1980s the need to integrate dissimilar computers with network capabilities grew, and the number of networked devices grew rapidly. Partly because it allowed for multi-vendor interoperability and could route packets globally rather than being restricted to a single building, the Internet protocol suite became almost universally adopted in network architectures. Thereafter, computer operating systems and the firmware of network devices tended to support Internet protocols.[2]: 305

Network operating systems can be embedded in a router or hardware firewall that operates the functions in the network layer (layer 3).[6] Notable network operating systems include:
https://en.wikipedia.org/wiki/Network_operating_system
An object-oriented operating system[1] is an operating system that is designed, structured, and operated using object-oriented programming principles. An object-oriented operating system is in contrast to an object-oriented user interface or programming framework, which can be run on a non-object-oriented operating system like DOS or Unix.

There are already object-based language concepts involved in the design of a more typical operating system such as Unix. While a more traditional language like C does not support object-orientation as fluidly as more recent languages, the notion of, for example, a file, stream, or device driver (in Unix, each represented as a file descriptor) can be considered a good example of objects. They are, after all, abstract data types, with various methods in the form of system calls, whose behavior varies based on the type of object and whose implementation details are hidden from the caller.

Object-orientation has been defined as objects + inheritance, and inheritance is only one approach to the more general problem of delegation that occurs in every operating system.[2] Object-orientation has been more widely used in the user interfaces of operating systems than in their kernels.

An object is an instance of a class, which provides a certain set of functionalities. Two objects can be differentiated based on the functionalities (or methods) they support. In an operating-system context, objects are associated with a resource. Historically, object-oriented design principles were used in operating systems to provide several protection mechanisms.[1]

Protection mechanisms in an operating system help provide a clear separation between different user programs. They also protect the operating system from malicious user program behavior. For example, consider the case of user profiles in an operating system. A user should not have access to the resources of another user. The object model deals with these protection issues by having each resource act as an object.
Every object can perform only a set of operations. In the context of user profiles, the set of operations is limited by the privilege level of a user.[1] Present-day operating systems use object-oriented design principles for many components of the system, including protection.
https://en.wikipedia.org/wiki/Object-oriented_operating_system
Lisp machines are general-purpose computers designed to efficiently run Lisp as their main software and programming language, usually via hardware support. They are an example of a high-level language computer architecture. In a sense, they were the first commercial single-user workstations. Despite being modest in number (perhaps 7,000 units total as of 1988[1]), Lisp machines commercially pioneered many now-commonplace technologies, including windowing systems, computer mice, high-resolution bit-mapped raster graphics, computer graphic rendering, laser printing, networking innovations such as Chaosnet, and effective garbage collection.[2] Several firms built and sold Lisp machines in the 1980s: Symbolics (3600, 3640, XL1200, MacIvory, and other models), Lisp Machines Incorporated (LMI Lambda), Texas Instruments (Explorer, MicroExplorer), and Xerox (Interlisp-D workstations). The operating systems were written in Lisp Machine Lisp, Interlisp (Xerox), and later partly in Common Lisp.

Artificial intelligence (AI) computer programs of the 1960s and 1970s intrinsically required what was then considered a huge amount of computer power, as measured in processor time and memory space. The power requirements of AI research were exacerbated by the Lisp symbolic programming language, when commercial hardware was designed and optimized for assembly- and Fortran-like programming languages. At first, the cost of such computer hardware meant that it had to be shared among many users. As integrated circuit technology shrank the size and cost of computers in the 1960s and early 1970s, and the memory needs of AI programs began to exceed the address space of the most common research computer, the Digital Equipment Corporation (DEC) PDP-10, researchers considered a new approach: a computer designed specifically to develop and run large artificial intelligence programs, and tailored to the semantics of the Lisp language.
To keep the operating system (relatively) simple, these machines would often not be shared, but would be dedicated to a single user at a time.[citation needed]

In 1973, Richard Greenblatt and Thomas Knight, programmers at the Massachusetts Institute of Technology (MIT) Artificial Intelligence Laboratory (AI Lab), began what would become the MIT Lisp Machine Project when they first began building a computer hardwired to run certain basic Lisp operations, rather than run them in software, in a 24-bit tagged architecture. The machine also did incremental (or arena) garbage collection.[citation needed] More specifically, since Lisp variables are typed at runtime rather than compile time, a simple addition of two variables could take five times as long on conventional hardware, due to test and branch instructions. Lisp machines ran the tests in parallel with the more conventional single-instruction additions. If the simultaneous tests failed, then the result was discarded and recomputed; this meant in many cases a speed increase by several factors. This simultaneous checking approach was used as well in testing the bounds of arrays when referenced, and for other memory-management necessities (not merely garbage collection or arrays).

Type checking was further improved and automated when the conventional word of 32 bits was lengthened to 36 bits for Symbolics 3600-model Lisp machines[3] and eventually to 40 bits or more (usually, the excess bits not otherwise accounted for were used for error-correcting codes). The first group of extra bits was used to hold type data, making the machine a tagged architecture, and the remaining bits were used to implement CDR coding (wherein the usual linked-list elements are compressed to occupy roughly half the space), aiding garbage collection by reportedly an order of magnitude.
A further improvement was two microcode instructions which specifically supported Lisp functions, reducing the cost of calling a function to as little as 20 clock cycles in some Symbolics implementations.

The first machine was called the CONS machine (named after the list-construction operator cons in Lisp). Often it was affectionately referred to as the Knight machine, perhaps since Knight wrote his master's thesis on the subject; it was extremely well received.[citation needed] It was subsequently improved into a version called CADR (a pun; in Lisp, the cadr function, which returns the second item of a list, is pronounced /ˈkeɪ.dəɹ/ or /ˈkɑ.dəɹ/, as some pronounce the word "cadre"), which was based on essentially the same architecture. About 25 of what were essentially prototype CADRs were sold within and without MIT for ~$50,000 each; it quickly became the favorite machine for hacking, and many of the most favored software tools were quickly ported to it (e.g. Emacs was ported from ITS in 1975[disputed–discuss]). It was so well received at an AI conference held at MIT in 1978 that the Defense Advanced Research Projects Agency (DARPA) began funding its development.

In 1979, Russell Noftsker, convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology.[citation needed] In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, hoping perhaps that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to them, and so the choice should be left to the hackers.
The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a commercial venture-fund-backed firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle.

It was at this juncture that Symbolics, Noftsker's enterprise, slowly came together. While Noftsker was paying his staff a salary, he had no building or any equipment for the hackers to work on. He bargained with Patrick Winston that, in exchange for allowing Symbolics' staff to keep working out of MIT, Symbolics would let MIT use internally and freely all the software Symbolics developed.

A consultant from CDC, who was trying to put together a natural-language computer application with a group of West Coast programmers, came to Greenblatt about eight months after the disastrous conference with Noftsker, seeking a Lisp machine for his group to work with. Greenblatt had decided to start his own rival Lisp machine firm, but he had done nothing. The consultant, Alexander Jacobson, decided that the only way Greenblatt was going to start the firm and build the Lisp machines that Jacobson desperately needed was if Jacobson pushed and otherwise helped Greenblatt launch the firm. Jacobson pulled together business plans, a board, and a partner for Greenblatt (one F. Stephen Wyle). The newfound firm was named LISP Machine, Inc. (LMI), and was funded by CDC orders, via Jacobson.

Around this time Symbolics (Noftsker's firm) began operating. It had been hindered by Noftsker's promise to give Greenblatt a year's head start, and by severe delays in procuring venture capital. Symbolics still had the major advantage that while 3 or 4 of the AI Lab hackers had gone to work for Greenblatt, 14 other hackers had signed on to Symbolics. Two AI Lab people were not hired by either: Richard Stallman and Marvin Minsky.
Stallman, however, blamed Symbolics for the decline of the hacker community that had centered around the AI Lab. For two years, from 1982 to the end of 1983, Stallman worked by himself to clone the output of the Symbolics programmers, with the aim of preventing them from gaining a monopoly on the lab's computers.[4]

Regardless, after a series of internal battles, Symbolics did get off the ground in 1980/1981, selling the CADR as the LM-2, while Lisp Machines, Inc. sold it as the LMI-CADR. Symbolics did not intend to produce many LM-2s, since the 3600 family of Lisp machines was supposed to ship quickly, but the 3600s were repeatedly delayed, and Symbolics ended up producing ~100 LM-2s, each of which sold for $70,000. Both firms developed second-generation products based on the CADR: the Symbolics 3600 and the LMI-LAMBDA (of which LMI managed to sell ~200). The 3600, which shipped a year late, expanded on the CADR by widening the machine word to 36 bits, expanding the address space to 28 bits,[5] and adding hardware to accelerate certain common functions that were implemented in microcode on the CADR. The LMI-LAMBDA, which came out a year after the 3600, in 1983, was compatible with the CADR (it could run CADR microcode), but hardware differences existed.

Texas Instruments (TI) joined the fray when it licensed the LMI-LAMBDA design and produced its own variant, the TI Explorer. Some of the LMI-LAMBDAs and the TI Explorer were dual systems with both a Lisp and a Unix processor. TI also developed a 32-bit microprocessor version of its Lisp CPU for the TI Explorer. This Lisp chip was also used for the MicroExplorer, a NuBus board for the Apple Macintosh II (NuBus was initially developed at MIT for use in Lisp machines).

Symbolics continued to develop the 3600 family and its operating system, Genera, and produced the Ivory, a VLSI implementation of the Symbolics architecture.
Starting in 1987, several machines based on the Ivory processor were developed: boards for Suns and Macs, stand-alone workstations, and even embedded systems (I-Machine Custom LSI, 32-bit address, Symbolics XL-400, UX-400, MacIvory II; in 1989 available platforms were the Symbolics XL-1200, MacIvory III, UX-1200, Zora, and the NXP1000 "pizza box"). Texas Instruments shrank the Explorer into silicon as the MicroExplorer, which was offered as a card for the Apple Mac II. LMI abandoned the CADR architecture and developed its own K-Machine,[6] but LMI went bankrupt before the machine could be brought to market. Before its demise, LMI was working on a distributed system for the LAMBDA using Moby space.[7]

These machines had hardware support for various primitive Lisp operations (data type testing, CDR coding) and also hardware support for incremental garbage collection. They ran large Lisp programs very efficiently. The Symbolics machine was competitive against many commercial superminicomputers, but was never adapted for conventional purposes. The Symbolics Lisp machines were also sold to some non-AI markets like computer graphics, modeling, and animation.

The MIT-derived Lisp machines ran a Lisp dialect named Lisp Machine Lisp, descended from MIT's Maclisp. The operating systems were written from the ground up in Lisp, often using object-oriented extensions. Later, these Lisp machines also supported various versions of Common Lisp (with Flavors, New Flavors, and the Common Lisp Object System (CLOS)).

Bolt, Beranek and Newman (BBN) developed its own Lisp machine, named Jericho,[8] which ran a version of Interlisp. It was never marketed. Frustrated, the whole AI group resigned, and most were hired by Xerox. Thus Xerox's Palo Alto Research Center had, simultaneously with Greenblatt's own development at MIT, developed its own Lisp machines, which were designed to run Interlisp (and later Common Lisp).
The same hardware was used with different software also as Smalltalk machines and as the Xerox Star office system. These included the Xerox 1100, Dolphin (1979); the Xerox 1132, Dorado; the Xerox 1108, Dandelion (1981); the Xerox 1109, Dandetiger; and the Xerox 1186/6085, Daybreak.[9] The operating system of the Xerox Lisp machines has also been ported to a virtual machine and is available for several platforms as a product named Medley. The Xerox machine was well known for its advanced development environment (Interlisp-D), the ROOMS window manager, its early graphical user interface, and novel applications like NoteCards (one of the first hypertext applications).

Xerox also worked on a Lisp machine based on reduced instruction set computing (RISC), using the "Xerox Common Lisp Processor", and planned to bring it to market by 1987,[10] which did not occur.

In the mid-1980s, Integrated Inference Machines (IIM) built prototypes of Lisp machines named Inferstar.[11] In 1984–85 a UK firm, Racal-Norsk, a joint subsidiary of Racal and Norsk Data, attempted to repurpose Norsk Data's ND-500 supermini as a microcoded Lisp machine running CADR software: the Knowledge Processing System (KPS).[12]

There were several attempts by Japanese manufacturers to enter the Lisp machine market: the Fujitsu Facom-alpha[13] mainframe co-processor, NTT's Elis,[14][15] Toshiba's AI processor (AIP)[16] and NEC's LIME.[17] Several university research efforts produced working prototypes, among them Kobe University's TAKITAC-7,[18] RIKEN's FLATS,[19] and Osaka University's EVLIS.[20] In France, two Lisp machine projects arose: M3L[21] at Toulouse Paul Sabatier University and later MAIA.[22] In Germany, Siemens designed the RISC-based Lisp co-processor COLIBRI.[23][24][25][26]

With the onset of the AI winter and the early beginnings of the microcomputer revolution, which would sweep away the minicomputer and workstation makers, cheaper desktop PCs soon could run Lisp programs even faster than Lisp machines, with no use of special-purpose hardware. Their high-profit-margin hardware business eliminated, most Lisp machine makers had gone out of business by the early 1990s, leaving only software-based firms like Lucid Inc. or hardware makers who had switched to software and services to avoid the crash. As of January 2015, besides Xerox and TI, Symbolics is the only Lisp machine firm still operating, selling the Open Genera Lisp machine software environment and the Macsyma computer algebra system.[27][28]

Several attempts to write open-source emulators for various Lisp machines have been made: CADR emulation,[29] Symbolics L Lisp machine emulation,[30] the E3 Project (TI Explorer II emulation),[31] Meroko (TI Explorer I),[32] and Nevermore (TI Explorer I).[33] On 3 October 2005, MIT released the CADR Lisp machine source code as open source.[34] In September 2014, Alexander Burger, developer of PicoLisp, announced PilMCU, an implementation of PicoLisp in hardware.[35] The Bitsavers' PDF Document Archive[36] has PDF versions of the extensive documentation for the Symbolics Lisp machines,[37] the TI Explorer[38] and MicroExplorer[39] Lisp machines, and the Xerox Interlisp-D Lisp machines.[40]

Domains using the Lisp machines were mostly in the wide field of artificial intelligence applications, but also in computer graphics, medical image processing, and many others. The main commercial expert systems of the 1980s were available: IntelliCorp's Knowledge Engineering Environment (KEE), Knowledge Craft from The Carnegie Group Inc., and ART (Automated Reasoning Tool) from Inference Corporation.[41]

Initially the Lisp machines were designed as personal workstations for software development in Lisp. They were used by one person and offered no multi-user mode. The machines provided a large, black-and-white bitmap display, keyboard and mouse, network adapter, local hard disks, more than 1 MB of RAM, serial interfaces, and a local bus for extension cards.
Color graphics cards, tape drives, and laser printers were optional.

The processor did not run Lisp directly, but was a stack machine with instructions optimized for compiled Lisp. The early Lisp machines used microcode to provide the instruction set. For several operations, type checking and dispatching were done in hardware at runtime: for example, a single addition operation could be used with various numeric types (integer, float, rational, and complex numbers). The result was a very compact compiled representation of Lisp code. The following example uses a function that counts the number of elements of a list for which a predicate returns true. The disassembled machine code for the above function (for the Ivory microprocessor from Symbolics):

The operating system used virtual memory to provide a large address space. Memory management was done with garbage collection. All code shared a single address space. All data objects were stored with a tag in memory, so that the type could be determined at runtime. Multiple execution threads were supported and termed processes; all processes ran in the one address space.

All operating system software was written in Lisp. Xerox used Interlisp; Symbolics, LMI, and TI used Lisp Machine Lisp (a descendant of MacLisp). With the appearance of Common Lisp, it was supported on the Lisp machines, and some system software was ported to Common Lisp or later written in it.

Some later Lisp machines (such as the TI MicroExplorer, the Symbolics MacIvory or the Symbolics UX400/1200) were no longer complete workstations, but boards designed to be embedded in host computers: the Apple Macintosh II and the Sun-3 or Sun-4. Some Lisp machines, such as the Symbolics XL1200, had extensive graphics abilities using special graphics boards. These machines were used in domains such as medical image processing, 3D animation, and CAD.
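The predicate-counting function mentioned above (whose Lisp source and Ivory disassembly are not reproduced here) can be sketched in a few lines; the Python version below is an illustrative stand-in, not the original Lisp code:

```python
def count_if(predicate, items):
    """Count the elements of a list for which the predicate returns true.

    On a Lisp machine the equivalent Lisp function compiled to a handful of
    stack-machine instructions, with numeric type dispatch done in hardware.
    """
    count = 0
    for item in items:
        if predicate(item):
            count += 1
    return count

# Example: count the even numbers in a list.
print(count_if(lambda n: n % 2 == 0, [1, 2, 3, 4, 5, 6]))  # → 3
```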
https://en.wikipedia.org/wiki/Lisp_machine
OSP, an Environment for Operating System Projects, is a teaching operating system designed to provide an environment for an introductory course in operating systems. By selectively omitting specific modules of the operating system and having the students re-implement the missing functionality, an instructor can generate projects that require students to understand fundamental operating system concepts. The distribution includes the OSP project generator, which can be used to package a project and produce stubs (files that are empty except for required components, and that can be compiled) for the files that the students must implement. OSP includes a simulator that the student code runs on.
https://en.wikipedia.org/wiki/Operating_System_Projects
System Commander (SC for short) is a graphical boot manager/loader software application developed by VCOM. The software allowed multiple operating systems to be installed on a machine at once, providing a menu from which the user selected the operating system to boot. Other software with similar functionality includes NTLDR, LILO, GRUB, and Graphical Boot Manager. One of its components was named Partition Commander.

System Commander was introduced in 1993 by V Communications,[1] sometimes referred to as VCOM.[2][3]

A major feature of System Commander is its ability to hide partitions. An operating system could be configured not to recognize other partitions, preventing the data of other installed operating systems from being accessed or tampered with. System Commander requires either 32-bit DOS or Windows (95/98/Me/2000/XP/2003/Vista) to install.[4] The software can run in screen resolutions up to 1600x1200. The package also included a partition manager (VCOM's Partition Commander) for creating, resizing and deleting partitions.[5] Partition Commander was not the only software in this market; PartitionMagic was a recognized alternative.[6][7] A 2005 review noted that Partition Commander lacked some of PartitionMagic's conveniences, such as the ability to walk away and let a series of operations take place; with Partition Commander "you have to wait until each task is complete before initiating the next."[4]
https://en.wikipedia.org/wiki/System_Commander
In computing, a system image is a serialized copy of the entire state of a computer system, stored in some non-volatile form such as a binary executable file. If a system has all its state written to a disk (i.e., on a disk image), then a system image can be produced by copying the disk to a file elsewhere, often with disk cloning applications. On many systems a complete system image cannot be created by a disk cloning program running within that system, because information can be held outside of disks and volatile memory, for example in non-volatile memory such as boot ROMs.

A system is said to be capable of using system images if it can be shut down and later restored to exactly the same state. In such cases, system images can be used for backup. Hibernation is an example that uses an image of the entire machine's RAM.

A process image is a copy of a given process's state at a given point in time. It is often used to create persistence within an otherwise volatile system. A common example is a database management system (DBMS). Most DBMSs can store the state of their databases to a file before being closed down (see database dump). The DBMS can then be restarted later with the information in the database intact and proceed as though the software had never stopped. Another example is the hibernate feature of many operating systems: the state of all RAM is stored to disk, the computer is brought into an energy-saving mode, and later restored to normal operation. Some emulators provide a facility to save an image of the system being emulated; in video gaming this is often referred to as a savestate. Another use is code mobility: a mobile agent can migrate between machines by having its state saved, then copying the data to another machine and restarting there.

Some programming languages provide a command to take a system image of a program. This is normally a standard feature in Smalltalk (inspired by FLEX) and Lisp, among other languages.
Development in these languages is often quite different from that in many other programming languages. For example, in Lisp the programmer may load packages or other code into a running Lisp implementation using the read-eval-print loop, which usually compiles the programs. Data is loaded into the running Lisp system. The programmer may then dump a system image containing that pre-compiled and possibly customized code, along with all loaded application data. Often this image is an executable and can be run on other machines. This system image can be the form in which executable programs are distributed; this method has often been used by programs (such as TeX and Emacs) largely implemented in Lisp, Smalltalk, or idiosyncratic languages, to avoid spending time repeating the same initialization work every time they start up.

Similarly, Lisp machines were booted from Lisp images, called Worlds. A World contains the complete operating system, its applications and its data in a single file. It was also possible to save incremental Worlds, containing only the changes from some base World. Before saving the World, the Lisp machine operating system could optimize the contents of memory (better memory layout, compacting data structures, sorting data, ...).

Although its purpose is different, a "system image" is often similar in structure to a core dump.
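The "build state at runtime, dump it, reload it later" workflow described above can be imitated on a small scale with ordinary serialization. The Python sketch below uses the standard pickle module purely as an analogy; it is not how Lisp machine Worlds were actually encoded:

```python
import pickle

# Build up some "application state" at runtime, as a Lisp session would
# by loading packages and data into the running image.
state = {"packages": ["editor", "compiler"], "counter": 42}

# Dump an image of that state to a byte string (a file in practice).
blob = pickle.dumps(state)

# Later, "boot" from the saved image and resume with the state intact.
restored = pickle.loads(blob)
print(restored["counter"])  # → 42
```

A real system image goes further: it captures compiled code and the whole heap, so the program resumes without repeating its initialization work.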
https://en.wikipedia.org/wiki/System_image
This article presents a timeline of events in the history of computer operating systems from 1951 to the current day. For a narrative explaining the overall developments, see the History of operating systems. Among the most recent releases in the timeline: NetBSD 8.1, iOS 18.0, iPadOS 18, watchOS 11, tvOS 18.
https://en.wikipedia.org/wiki/Timeline_of_operating_systems
In statistics, the frequency or absolute frequency of an event i is the number n_i of times the observation has occurred or been recorded in an experiment or study.[1]: 12–19 These frequencies are often depicted graphically or in tabular form.

The cumulative frequency is the total of the absolute frequencies of all events at or below a certain point in an ordered list of events.[1]: 17–19

The relative frequency (or empirical probability) of an event is the absolute frequency normalized by the total number N of events:

f_i = n_i / N

The values of f_i for all events i can be plotted to produce a frequency distribution. In the case when n_i = 0 for certain i, pseudocounts can be added.

A frequency distribution shows a summarized grouping of data divided into mutually exclusive classes and the number of occurrences in a class. It is a way of showing unorganized data, notably to show the results of an election, incomes of people in a certain region, sales of a product within a certain period, student loan amounts of graduates, etc. Some of the graphs that can be used with frequency distributions are histograms, line charts, bar charts and pie charts. Frequency distributions are used for both qualitative and quantitative data.

Generally the class interval or class width is the same for all classes. The classes taken together must cover at least the distance from the lowest value (minimum) in the data to the highest (maximum) value.
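The three definitions above (absolute frequency n_i, relative frequency f_i = n_i / N, and cumulative frequency) can be computed directly with the standard library; a minimal sketch with illustrative sample data:

```python
from collections import Counter
from itertools import accumulate

observations = ["a", "b", "a", "c", "a", "b"]

# Absolute frequency n_i: how often each event was recorded.
absolute = Counter(observations)

# Relative frequency f_i = n_i / N, normalized by the total count N.
total = len(observations)
relative = {event: n / total for event, n in absolute.items()}

# Cumulative frequency: running total over the ordered list of events.
events = sorted(absolute)
cumulative = dict(zip(events, accumulate(absolute[e] for e in events)))

print(absolute["a"], relative["a"], cumulative["c"])  # → 3 0.5 6
```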
Equal class intervals are preferred in frequency distributions, while unequal class intervals (for example logarithmic intervals) may be necessary in certain situations to produce a good spread of observations between the classes and avoid a large number of empty, or almost empty, classes.[2]

The following are some commonly used methods of depicting frequency:[3]

A histogram is a representation of tabulated frequencies, shown as adjacent rectangles or squares (in some situations), erected over discrete intervals (bins), with an area proportional to the frequency of the observations in the interval. The height of a rectangle is also equal to the frequency density of the interval, i.e., the frequency divided by the width of the interval. The total area of the histogram is equal to the number of data points. A histogram may also be normalized to display relative frequencies; it then shows the proportion of cases that fall into each of several categories, with the total area equaling 1. The categories are usually specified as consecutive, non-overlapping intervals of a variable. The categories (intervals) must be adjacent, and are often chosen to be of the same size.[4] The rectangles of a histogram are drawn so that they touch each other to indicate that the original variable is continuous.[5]

A bar chart or bar graph is a chart with rectangular bars with lengths proportional to the values that they represent. The bars can be plotted vertically or horizontally. A vertical bar chart is sometimes called a column bar chart.

A frequency distribution table is an arrangement of the values that one or more variables take in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval, and in this way the table summarizes the distribution of values in the sample.

This is an example of a univariate (single-variable) frequency table. The frequency of each response to a survey question is depicted.
A different tabulation scheme aggregates values into bins such that each bin encompasses a range of values. For example, the heights of the students in a class could be organized into such a frequency table.

Bivariate joint frequency distributions are often presented as (two-way) contingency tables. The total row and total column report the marginal frequencies or marginal distribution, while the body of the table reports the joint frequencies.[6]

Under the frequency interpretation of probability, it is assumed that the source is ergodic, i.e., as the length of a series of trials increases without bound, the fraction of experiments in which a given event occurs will approach a fixed value, known as the limiting relative frequency.[7][8] This interpretation is often contrasted with Bayesian probability. The term frequentist was first used by M. G. Kendall in 1949, to contrast with Bayesians, whom he called "non-frequentists".[9][10]

Managing and operating on frequency-tabulated data is much simpler than operating on raw data. There are simple algorithms to calculate the median, mean, standard deviation, etc. from these tables.

Statistical hypothesis testing is founded on the assessment of differences and similarities between frequency distributions. This assessment involves measures of central tendency or averages, such as the mean and median, and measures of variability or statistical dispersion, such as the standard deviation or variance.

A frequency distribution is said to be skewed when its mean and median are significantly different, or more generally when it is asymmetric. The kurtosis of a frequency distribution is a measure of the proportion of extreme values (outliers), which appear at either end of the histogram. If the distribution is more outlier-prone than the normal distribution it is said to be leptokurtic; if less outlier-prone it is said to be platykurtic.
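The binned tabulation described above (each bin covering a range of values, with the marginal total equal to the sample size) can be sketched as follows; the heights and bin edges are arbitrary illustrative values, not data from the article:

```python
heights = [150, 154, 158, 160, 163, 167, 171, 174, 178, 181]

# Aggregate values into bins, each covering a half-open 10 cm range.
bins = {(150, 160): 0, (160, 170): 0, (170, 180): 0, (180, 190): 0}
for h in heights:
    for (lo, hi) in bins:
        if lo <= h < hi:
            bins[(lo, hi)] += 1

# The marginal total over all bins must equal the number of observations.
print(bins[(150, 160)], sum(bins.values()))  # → 3 10
```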
Letter frequency distributions are also used in frequency analysis to crack ciphers, and to compare the relative frequencies of letters in different languages, including historical languages such as Greek and Latin.
https://en.wikipedia.org/wiki/Frequency_(statistics)
Quasi-set theory is a formal mathematical theory for dealing with collections of objects, some of which may be indistinguishable from one another. Quasi-set theory is mainly motivated by the assumption that certain objects treated in quantum physics are indistinguishable and do not have individuality.

The American Mathematical Society sponsored a 1974 meeting to evaluate the resolution and consequences of the 23 problems Hilbert proposed in 1900. An outcome of that meeting was a new list of mathematical problems, the first of which, due to Manin (1976, p. 36), questioned whether classical set theory was an adequate paradigm for treating collections of indistinguishable elementary particles in quantum mechanics. He suggested that such collections cannot be sets in the usual sense, and that the study of such collections required a "new language".

The use of the term quasi-set follows a suggestion in da Costa's 1980 monograph Ensaio sobre os Fundamentos da Lógica (see da Costa and Krause 1994), in which he explored possible semantics for what he called "Schrödinger Logics". In these logics, the concept of identity is restricted to some objects of the domain, and is motivated by Schrödinger's claim that the concept of identity does not make sense for elementary particles (Schrödinger 1952). Thus, in order to provide a semantics that fits the logic, da Costa submitted that "a theory of quasi-sets should be developed", encompassing "standard sets" as particular cases, yet da Costa did not develop this theory in any concrete way. To the same end, and independently of da Costa, Dalla Chiara and di Francia (1993) proposed a theory of quasets to enable a semantic treatment of the language of microphysics. The first quasi-set theory was proposed by D. Krause in his PhD thesis in 1990 (see Krause 1992). A related physics theory, based on the logic of adding fundamental indistinguishability to equality and inequality, was developed and elaborated independently in the book The Theory of Indistinguishables by A.
F. Parker-Rhodes.[1]

We now expound Krause's (1992) axiomatic theory Q, the first quasi-set theory; other formulations and improvements have since appeared. For an updated paper on the subject, see French and Krause (2010). Krause builds on the set theory ZFU, consisting of Zermelo-Fraenkel set theory with an ontology extended to include two kinds of urelements:

Quasi-sets (q-sets) are collections resulting from applying axioms, very similar to those for ZFU, to a basic domain composed of m-atoms, M-atoms, and aggregates of these. The axioms of Q include equivalents of extensionality, but in a weaker form, termed the "weak extensionality axiom"; axioms asserting the existence of the empty set, unordered pair, union set, and power set; the axiom of separation; an axiom stating that the image of a q-set under a q-function is also a q-set; and q-set equivalents of the axioms of infinity, regularity, and choice. Q-set theories based on other set-theoretical frameworks are, of course, possible.

Q has a primitive concept of quasi-cardinal, governed by eight additional axioms, intuitively standing for the quantity of objects in a collection. The quasi-cardinal of a quasi-set is not defined in the usual sense (by means of ordinals) because the m-atoms are assumed (absolutely) indistinguishable. Furthermore, it is possible to define a translation from the language of ZFU into the language of Q in such a way that there is a 'copy' of ZFU in Q. In this copy, all the usual mathematical concepts can be defined, and the 'sets' (in reality, the 'Q-sets') turn out to be those q-sets whose transitive closure contains no m-atoms.
In Q there may exist q-sets, called "pure" q-sets, whose elements are all m-atoms, and the axiomatics of Q provides the grounds for saying that, for certain pure q-sets, nothing in Q distinguishes their elements from one another. Within the theory, the idea that there is more than one entity in x is expressed by an axiom stating that the power quasi-set of x has quasi-cardinal 2^qc(x), where qc(x) is the quasi-cardinal of x (which is a cardinal obtained in the 'copy' of ZFU just mentioned).

What exactly does this mean? Consider the level 2p of a sodium atom, in which there are six indiscernible electrons. Even so, physicists reason as if there are in fact six entities in that level, and not only one. In this way, by saying that the quasi-cardinal of the power quasi-set of x is 2^qc(x) (suppose that qc(x) = 6, to follow the example), we are not excluding the hypothesis that there can exist six sub-quasi-sets of x that are 'singletons', although we cannot distinguish among them. Whether or not there are six elements in x is something that cannot be ascribed by the theory (although the notion is compatible with it). If the theory could answer this question, the elements of x would be individualized and hence counted, contradicting the basic assumption that they cannot be distinguished. In other words, we may consistently (within the axiomatics of Q) reason as if there are six entities in x, but x must be regarded as a collection whose elements cannot be discerned as individuals. Using quasi-set theory, we can express some facts of quantum physics without introducing symmetry conditions (Krause et al. 1999, 2005). As is well known, in order to express indistinguishability, the particles are deemed to be individuals, say by attaching them to coordinates or to adequate functions/vectors like |ψ⟩.
Thus, given two quantum systems labeled |ψ1⟩ and |ψ2⟩ at the outset, we need to consider a function like |ψ12⟩ = |ψ1⟩|ψ2⟩ ± |ψ2⟩|ψ1⟩ (up to certain constants), which keeps the quanta indistinguishable by permutations; the probability density of the joint system is independent of which is quantum #1 and which is quantum #2. (Note that precision requires that we talk of "two" quanta without distinguishing them, which is impossible in conventional set theories.) In Q, we can dispense with this "identification" of the quanta; for details, see Krause et al. (1999, 2005) and French and Krause (2006). Quasi-set theory is a way to operationalize Heinz Post's (1963) claim that quanta should be deemed indistinguishable "right from the start."
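The permutation invariance of the symmetrized state |ψ12⟩ can be checked numerically. Below is a plain-Python sketch (no physics library; the single-particle states are two-component amplitude lists, and the combination is the unnormalized symmetric case of the ± formula above):

```python
def tensor(a, b):
    """Tensor product of two single-particle amplitude vectors."""
    return [x * y for x in a for y in b]

psi1 = [1.0, 0.0]  # one particle in basis state |0>
psi2 = [0.0, 1.0]  # the other in basis state |1>

# |psi12> = |psi1>|psi2> + |psi2>|psi1>  (symmetric combination)
sym = [u + v for u, v in zip(tensor(psi1, psi2), tensor(psi2, psi1))]

# Swapping the two labels reproduces the same joint state, so the
# probability density cannot depend on "which quantum is #1".
swapped = [u + v for u, v in zip(tensor(psi2, psi1), tensor(psi1, psi2))]
print(sym == swapped)  # → True
```

This is exactly the "identification then symmetrization" bookkeeping that quasi-set theory aims to make unnecessary.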
https://en.wikipedia.org/wiki/Quasi-set_theory
In mathematics, the algebra of sets, not to be confused with the mathematical structure of an algebra of sets, defines the properties and laws of sets, the set-theoretic operations of union, intersection, and complementation, and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions and performing calculations involving these operations and relations.

Any set of sets closed under the set-theoretic operations forms a Boolean algebra, with the join operator being union, the meet operator being intersection, the complement operator being set complement, the bottom being ∅ and the top being the universe set under consideration.

The algebra of sets is the set-theoretic analogue of the algebra of numbers. Just as arithmetic addition and multiplication are associative and commutative, so are set union and intersection; just as the arithmetic relation "less than or equal" is reflexive, antisymmetric and transitive, so is the set relation of "subset". It is the algebra of the set-theoretic operations of union, intersection and complementation, and the relations of equality and inclusion. For a basic introduction to sets see the article on sets, for a fuller account see naive set theory, and for a full rigorous axiomatic treatment see axiomatic set theory.

The binary operations of set union (∪) and intersection (∩) satisfy many identities. Several of these identities or "laws" have well-established names.[2] The union and intersection of sets may be seen as analogous to the addition and multiplication of numbers. Like addition and multiplication, the operations of union and intersection are commutative and associative, and intersection distributes over union. However, unlike addition and multiplication, union also distributes over intersection.
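These laws are easy to spot-check on concrete finite sets; a quick Python sketch (the sample sets are arbitrary):

```python
A, B, C = {1, 2}, {2, 3}, {3, 4}

# Commutativity of union and intersection.
assert A | B == B | A and A & B == B & A

# Associativity of union and intersection.
assert (A | B) | C == A | (B | C)
assert (A & B) & C == A & (B & C)

# Intersection distributes over union, and union over intersection.
assert A & (B | C) == (A & B) | (A & C)
assert A | (B & C) == (A | B) & (A | C)

print("all sampled set identities hold")
```

Of course a finite check is only an illustration; the laws themselves hold for arbitrary sets.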
Two additional pairs of properties involve the special sets called the empty set ∅ and the universe set U, together with the complement operator (A∁ denotes the complement of A; this can also be written as A′, read as "A prime"). The empty set has no members, and the universe set has all possible members (in a particular context).

The identity laws (together with the commutative laws) say that, just like 0 and 1 for addition and multiplication, ∅ and U are the identity elements for union and intersection, respectively. Unlike addition and multiplication, union and intersection do not have inverse elements. However, the complement laws give the fundamental properties of the somewhat inverse-like unary operation of set complementation.

The preceding five pairs of formulae (the commutative, associative, distributive, identity and complement formulae) encompass all of set algebra, in the sense that every valid proposition in the algebra of sets can be derived from them. Note that if the complement formulae are weakened to the rule (A∁)∁ = A, then this is exactly the algebra of propositional linear logic[clarification needed].

Each of the identities stated above is one of a pair of identities such that each can be transformed into the other by interchanging ∪ and ∩, while also interchanging ∅ and U.
These are examples of an extremely important and powerful property of set algebra, namely the principle of duality for sets, which asserts that for any true statement about sets, the dual statement obtained by interchanging unions and intersections, interchanging U and ∅, and reversing inclusions is also true. A statement is said to be self-dual if it is equal to its own dual.

The following proposition states six more important laws of set algebra, involving unions and intersections.

PROPOSITION 3: For any subsets A and B of a universe set U, the following identities hold:

As noted above, each of the laws stated in proposition 3 can be derived from the five fundamental pairs of laws stated above. As an illustration, a proof is given below for the idempotent law for union.

Proof:

The following proof illustrates that the dual of the above proof is the proof of the dual of the idempotent law for union, namely the idempotent law for intersection.

Proof:

Intersection can be expressed in terms of set difference:

The following proposition states five more important laws of set algebra, involving complements.

PROPOSITION 4: Let A and B be subsets of a universe U; then:

Notice that the double complement law is self-dual. The next proposition, which is also self-dual, says that the complement of a set is the only set that satisfies the complement laws. In other words, complementation is characterized by the complement laws.

PROPOSITION 5: Let A and B be subsets of a universe U; then:

The following proposition says that inclusion, that is, the binary relation of one set being a subset of another, is a partial order.
PROPOSITION 6: If A, B and C are sets, then the following hold:

The following proposition says that for any set S, the power set of S, ordered by inclusion, is a bounded lattice, and hence, together with the distributive and complement laws above, shows that it is a Boolean algebra.

PROPOSITION 7: If A, B and C are subsets of a set S, then the following hold:

The following proposition says that the statement A ⊆ B is equivalent to various other statements involving unions, intersections and complements.

PROPOSITION 8: For any two sets A and B, the following are equivalent:

The above proposition shows that the relation of set inclusion can be characterized by either of the operations of set union or set intersection, which means that the notion of set inclusion is axiomatically superfluous.

The following proposition lists several identities concerning relative complements and set-theoretic differences.

PROPOSITION 9: For any universe U and subsets A, B and C of U, the following identities hold:
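Proposition 8's equivalences include, for instance, A ⊆ B ⟺ A ∪ B = B ⟺ A ∩ B = A. These can be checked on examples; a small sketch (the helper name and sample sets are illustrative, not from the article):

```python
def inclusion_three_ways(A, B):
    """Truth values of three equivalent formulations of A ⊆ B:
    the subset relation itself, A ∪ B = B, and A ∩ B = A."""
    return (A <= B, A | B == B, A & B == A)

# All three formulations agree, whether or not the inclusion holds.
print(inclusion_three_ways({1, 2}, {1, 2, 3}))  # → (True, True, True)
print(inclusion_three_ways({1, 4}, {1, 2, 3}))  # → (False, False, False)
```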
https://en.wikipedia.org/wiki/Algebra_of_sets
In mathematical logic, an alternative set theory is any of the alternative mathematical approaches to the concept of set, and any alternative to the de facto standard set theory described in axiomatic set theory by the axioms of Zermelo–Fraenkel set theory. Alternative set theories include:[1]
https://en.wikipedia.org/wiki/Alternative_set_theory
In the mathematical field of category theory, the category of sets, denoted by Set, is the category whose objects are sets. The arrows or morphisms between sets A and B are the functions from A to B, and the composition of morphisms is the composition of functions.

Many other categories (such as the category of groups, with group homomorphisms as arrows) add structure to the objects of the category of sets or restrict the arrows to functions of a particular kind (or both).

The axioms of a category are satisfied by Set because composition of functions is associative, and because every set X has an identity function idX : X → X which serves as the identity element for function composition.

The epimorphisms in Set are the surjective maps, the monomorphisms are the injective maps, and the isomorphisms are the bijective maps.

The empty set serves as the initial object in Set, with empty functions as morphisms. Every singleton is a terminal object, with the functions mapping all elements of the source sets to the single target element as morphisms. There are thus no zero objects in Set.

The category Set is complete and cocomplete. The product in this category is given by the cartesian product of sets. The coproduct is given by the disjoint union: given sets Ai where i ranges over some index set I, we construct the coproduct as the union of Ai × {i} (the cartesian product with i serves to ensure that all the components stay disjoint).

Set is the prototype of a concrete category; other categories are concrete if they are "built on" Set in some well-defined way.

Every two-element set serves as a subobject classifier in Set. The power object of a set A is given by its power set, and the exponential object of the sets A and B is given by the set of all functions from A to B. Set is thus a topos (and in particular cartesian closed and exact in the sense of Barr). Set is not abelian, additive nor preadditive. Every non-empty set is an injective object in Set. Every set is a projective object in Set (assuming the axiom of choice). The finitely presentable objects in Set are the finite sets.
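The category axioms quoted above amount to concrete facts about functions: composition is associative, and identity functions are neutral for it. A minimal Python sketch, checked pointwise on a sample of inputs (the particular functions are illustrative):

```python
def compose(g, f):
    """Composition g ∘ f: apply f first, then g (morphism composition in Set)."""
    return lambda x: g(f(x))

def f(x): return x + 1
def g(x): return 2 * x
def h(x): return x - 3

def identity(x): return x  # id_X, the identity morphism on a set X

# Associativity: h ∘ (g ∘ f) = (h ∘ g) ∘ f, checked on sample inputs.
assert all(compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x)
           for x in range(10))

# Identity laws: id ∘ f = f = f ∘ id.
assert all(compose(identity, f)(x) == f(x) == compose(f, identity)(x)
           for x in range(10))

print("category axioms hold on the sampled inputs")
```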
Since every set is a direct limit of its finite subsets, the category Set is a locally finitely presentable category.

If C is an arbitrary category, the contravariant functors from C to Set are often an important object of study. If A is an object of C, then the functor from C to Set that sends X to HomC(X, A) (the set of morphisms in C from X to A) is an example of such a functor. If C is a small category (i.e. the collection of its objects forms a set), then the contravariant functors from C to Set, together with natural transformations as morphisms, form a new category, a functor category known as the category of presheaves on C.

In Zermelo–Fraenkel set theory the collection of all sets is not a set; this follows from the axiom of foundation. One refers to collections that are not sets as proper classes. One cannot handle proper classes as one handles sets; in particular, one cannot write that those proper classes belong to a collection (either a set or a proper class). This is a problem because it means that the category of sets cannot be formalized straightforwardly in this setting. Categories like Set whose collection of objects forms a proper class are known as large categories, to distinguish them from the small categories whose objects form a set.

One way to resolve the problem is to work in a system that gives formal status to proper classes, such as NBG set theory. In this setting, categories formed from sets are said to be small and those (like Set) that are formed from proper classes are said to be large.

Another solution is to assume the existence of Grothendieck universes. Roughly speaking, a Grothendieck universe is a set which is itself a model of ZF(C) (for instance, if a set belongs to a universe, its elements and its power set will belong to the universe).
The existence of Grothendieck universes (other than the empty set and the set V_ω of all hereditarily finite sets) is not implied by the usual ZF axioms; it is an additional, independent axiom, roughly equivalent to the existence of strongly inaccessible cardinals. Assuming this extra axiom, one can limit the objects of Set to the elements of a particular universe. (There is no "set of all sets" within the model, but one can still reason about the class U of all inner sets, i.e., elements of U.)

In one variation of this scheme, the class of sets is the union of the entire tower of Grothendieck universes. (This is necessarily a proper class, but each Grothendieck universe is a set because it is an element of some larger Grothendieck universe.) However, one does not work directly with the "category of all sets". Instead, theorems are expressed in terms of the category SetU whose objects are the elements of a sufficiently large Grothendieck universe U, and are then shown not to depend on the particular choice of U. As a foundation for category theory, this approach is well matched to a system like Tarski–Grothendieck set theory in which one cannot reason directly about proper classes; its principal disadvantage is that a theorem can be true of all SetU but not of Set.

Various other solutions, and variations on the above, have been proposed.[1][2][3]

The same issues arise with other concrete categories, such as the category of groups or the category of topological spaces.
https://en.wikipedia.org/wiki/Category_of_sets
In set theory and its applications throughout mathematics, a class is a collection of sets (or sometimes other mathematical objects) that can be unambiguously defined by a property that all its members share. Classes provide set-like collections while differing from sets so as to avoid paradoxes, especially Russell's paradox (see § Paradoxes). The precise definition of "class" depends on the foundational context. In work on Zermelo–Fraenkel set theory, the notion of class is informal, whereas other set theories, such as von Neumann–Bernays–Gödel set theory, axiomatize the notion of "proper class", e.g., as entities that are not members of another entity. A class that is not a set (informally, in Zermelo–Fraenkel) is called a proper class, and a class that is a set is sometimes called a small class. For instance, the class of all ordinal numbers and the class of all sets are proper classes in many formal systems. In Quine's set-theoretical writing, the phrase "ultimate class" is often used instead of "proper class", emphasising that in the systems he considers, certain classes cannot be members and are thus the final term in any membership chain to which they belong. Outside set theory, the word "class" is sometimes used synonymously with "set". This usage dates from a historical period when classes and sets were not distinguished as they are in modern set-theoretic terminology.[1] Many discussions of "classes" in the 19th century and earlier are really referring to sets, or perhaps proceed without considering that certain classes can fail to be sets.[non-primary source needed] The collection of all algebraic structures of a given type will usually be a proper class. Examples include the class of all groups, the class of all vector spaces, and many others. In category theory, a category whose collection of objects forms a proper class (or whose collection of morphisms forms a proper class) is called a large category.
The surreal numbers are a proper class of objects that have the properties of a field. Within set theory, many collections of sets turn out to be proper classes. Examples include the class of all sets (the universal class), the class of all ordinal numbers, and the class of all cardinal numbers. One way to prove that a class is proper is to place it in bijection with the class of all ordinal numbers. This method is used, for example, in the proof that there is no free complete lattice on three or more generators. The paradoxes of naive set theory can be explained in terms of the inconsistent tacit assumption that "all classes are sets". With a rigorous foundation, these paradoxes instead suggest proofs that certain classes are proper (i.e., that they are not sets). For example, Russell's paradox suggests a proof that the class of all sets which do not contain themselves is proper, and the Burali-Forti paradox suggests that the class of all ordinal numbers is proper. The paradoxes do not arise with classes because there is no notion of classes containing classes. Otherwise, one could, for example, define a class of all classes that do not contain themselves, which would lead to a Russell paradox for classes. A conglomerate, on the other hand, can have proper classes as members.[2] ZF set theory does not formalize the notion of classes, so each formula with classes must be reduced syntactically to a formula without classes.[3] For example, one can reduce the formula A = {x | x = x} to ∀x (x ∈ A ↔ x = x). For a class A and a set variable symbol x, it is necessary to be able to expand each of the formulas x ∈ A, x = A, A ∈ x, and A = x into a formula without an occurrence of a class.[4]: 339  Semantically, in a metalanguage, classes can be described as equivalence classes of logical formulas: if 𝒜 is a structure interpreting ZF, then the object-language "class-builder expression" {x | φ} is interpreted in 𝒜 by the collection of all elements of the domain of 𝒜 on which λx.φ holds; thus, the class can be described as the set of all predicates equivalent to φ (including φ itself). In particular, one can identify the "class of all sets" with the set of all predicates equivalent to x = x.[citation needed] Because classes have no formal status in the theory of ZF, the axioms of ZF do not immediately apply to them. However, if an inaccessible cardinal κ is assumed, then the sets of smaller rank form a model of ZF (a Grothendieck universe), and its subsets can be thought of as "classes". In ZF, the concept of a function can also be generalised to classes. A class function is not a function in the usual sense, since it is not a set; it is rather a formula Φ(x, y) with the property that for any set x there is at most one set y such that the pair (x, y) satisfies Φ. For example, the class function mapping each set to its power set may be expressed as the formula y = P(x). The fact that the ordered pair (x, y) satisfies Φ may be expressed with the shorthand notation Φ(x) = y. Another approach is taken by the von Neumann–Bernays–Gödel axioms (NBG); classes are the basic objects in this theory, and a set is then defined to be a class that is an element of some other class.
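The power-set class function y = P(x) described above can be illustrated concretely. The following sketch (not part of the article, just an illustration) restricts the function to hereditarily finite sets modelled as Python frozensets; like the formula Φ(x, y), it associates exactly one output with each input.

```python
from itertools import combinations

def power_set(s: frozenset) -> frozenset:
    """The 'class function' y = P(x), restricted to finite sets:
    returns the set of all subsets of s."""
    subsets = []
    for k in range(len(s) + 1):
        for combo in combinations(sorted(s), k):
            subsets.append(frozenset(combo))
    return frozenset(subsets)

# For each input there is exactly one output, mirroring the
# "at most one y for each x" requirement on Φ(x, y).
s = frozenset({1, 2})
ps = power_set(s)
assert len(ps) == 2 ** len(s)          # |P(x)| = 2^|x|
assert frozenset() in ps and s in ps   # ∅ and x itself are subsets
```

Of course, no program can model the full class function, since its domain is the proper class of all sets; the restriction to finite sets is what makes the sketch possible.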
However, the class existence axioms of NBG are restricted so that they quantify only over sets, rather than over all classes. This makes NBG a conservative extension of ZFC. Morse–Kelley set theory admits proper classes as basic objects, like NBG, but also allows quantification over all proper classes in its class existence axioms. This makes MK strictly stronger than both NBG and ZFC. In other set theories, such as New Foundations or the theory of semisets, the concept of "proper class" still makes sense (not all classes are sets), but the criterion of sethood is not closed under subsets. For example, any set theory with a universal set has proper classes which are subclasses of sets.
https://en.wikipedia.org/wiki/Class_(set_theory)
In set theory and related branches of mathematics, a family (or collection) can mean, depending upon the context, any of the following: set, indexed set, multiset, or class. A collection F of subsets of a given set S is called a family of subsets of S, or a family of sets over S. More generally, a collection of any sets whatsoever is called a family of sets, a set family, or a set system. Additionally, a family of sets may be defined as a function from a set I, known as the index set, to F, in which case the sets of the family are indexed by members of I.[1] In some contexts a family of sets may be allowed to contain repeated copies of any given member,[2][3][4] and in other contexts it may form a proper class. A finite family of subsets of a finite set S is also called a hypergraph. The subject of extremal set theory concerns the largest and smallest examples of families of sets satisfying certain restrictions. The set of all subsets of a given set S is called the power set of S and is denoted by ℘(S). The power set ℘(S) of a given set S is a family of sets over S. A subset of S having k elements is called a k-subset of S. The k-subsets S^(k) of a set S form a family of sets. Let S = {a, b, c, 1, 2}. An example of a family of sets over S (in the multiset sense) is given by F = {A1, A2, A3, A4}, where A1 = {a, b, c}, A2 = {1, 2}, A3 = {1, 2}, and A4 = {a, b, 1}. The class Ord of all ordinal numbers is a large family of sets.
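The example family above, and the family of k-subsets, can be sketched directly (an illustration only, not part of the article). A Python list models the multiset sense, since it keeps the repeated member A2 = A3.

```python
from itertools import combinations

S = {"a", "b", "c", 1, 2}

# The family F = {A1, A2, A3, A4} from the text, with the
# repeated member {1, 2} kept, matching the multiset sense.
F = [{"a", "b", "c"}, {1, 2}, {1, 2}, {"a", "b", 1}]
assert all(A <= S for A in F)  # every member is a subset of S

# The k-subsets S^(k) form a family of sets; here k = 2.
two_subsets = [set(c) for c in combinations(S, 2)]
assert len(two_subsets) == 10  # C(5, 2) = 10
```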
That is, Ord is not itself a set but a proper class. Any family of subsets of a set S is itself a subset of the power set ℘(S) if it has no repeated members. Any family of sets without repetitions is a subclass of the proper class of all sets (the universe). Hall's marriage theorem, due to Philip Hall, gives necessary and sufficient conditions for a finite family of non-empty sets (repetitions allowed) to have a system of distinct representatives. If 𝓕 is any family of sets, then ∪𝓕 := ⋃_{F ∈ 𝓕} F denotes the union of all sets in 𝓕, where in particular ∪∅ = ∅. Any family 𝓕 of sets is a family over ∪𝓕, and also a family over any superset of ∪𝓕. Certain types of objects from other areas of mathematics are equivalent to families of sets, in that they can be described purely as a collection of sets of objects of some type. A family of sets is said to cover a set X if every point of X belongs to some member of the family. A subfamily of a cover of X that is also a cover of X is called a subcover. A family is called a point-finite collection if every point of X lies in only finitely many members of the family. If every point of a cover lies in exactly one member of the family, the cover is a partition of X. When X is a topological space, a cover whose members are all open sets is called an open cover. A family is called locally finite if each point in the space has a neighborhood that intersects only finitely many members of the family. A σ-locally finite (or countably locally finite) collection is a family that is the union of countably many locally finite families.
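For finite families, the cover and partition conditions just defined are directly checkable. A minimal sketch (illustrative names, not part of the article):

```python
def is_cover(family, X):
    """Every point of X belongs to some member of the family."""
    return all(any(x in A for A in family) for x in X)

def is_partition(family, X):
    """A cover in which every point lies in exactly one member."""
    return all(sum(x in A for A in family) == 1 for x in X)

X = {1, 2, 3, 4}
C = [{1, 2}, {2, 3}, {4}]
assert is_cover(C, X) and not is_partition(C, X)  # 2 lies in two members
P = [{1, 2}, {3}, {4}]
assert is_partition(P, X)                         # a partition is a cover
```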
A cover 𝓕 is said to refine another (coarser) cover 𝓒 if every member of 𝓕 is contained in some member of 𝓒. A star refinement is a particular type of refinement. A Sperner family is a set family in which none of the sets contains any of the others. Sperner's theorem bounds the maximum size of a Sperner family. A Helly family is a set family such that any minimal subfamily with empty intersection has bounded size. Helly's theorem states that convex sets in Euclidean spaces of bounded dimension form Helly families. An abstract simplicial complex is a set family F (consisting of finite sets) that is downward closed; that is, every subset of a set in F is also in F. A matroid is an abstract simplicial complex with an additional property called the augmentation property. Every filter is a family of sets. A convexity space is a set family closed under arbitrary intersections and under unions of chains (with respect to the inclusion relation). Other examples of set families are independence systems, greedoids, antimatroids, and bornological spaces. Additionally, a semiring (of sets) is a π-system where every complement B ∖ A is equal to a finite disjoint union of sets in 𝓕, and a semialgebra is a semiring where every complement Ω ∖ A is equal to a finite disjoint union of sets in 𝓕. Here A, B, A1, A2, … are arbitrary elements of 𝓕, and it is assumed that 𝓕 ≠ ∅.
https://en.wikipedia.org/wiki/Family_of_sets
In mathematics, fuzzy sets (also known as uncertain sets) are sets whose elements have degrees of membership. Fuzzy sets were introduced independently by Lotfi A. Zadeh in 1965 as an extension of the classical notion of set.[1][2] At the same time, Salii (1965) defined a more general kind of structure called an "L-relation", which he studied in an abstract algebraic context; fuzzy relations are special cases of L-relations when L is the unit interval [0, 1]. They are now used throughout fuzzy mathematics, with applications in areas such as linguistics (De Cock, Bodenhofer & Kerre 2000), decision-making (Kuzmin 1982), and clustering (Bezdek 1978). In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition: an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, namely those that take only the values 0 and 1.[3] In fuzzy set theory, classical bivalent sets are usually called crisp sets. Fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.[4] A fuzzy set is a pair (U, m) where U is a set (often required to be non-empty) and m : U → [0, 1] is a membership function. The reference set U (sometimes denoted by Ω or X) is called the universe of discourse, and for each x ∈ U, the value m(x) is called the grade of membership of x in (U, m).
The function m = μ_A is called the membership function of the fuzzy set A = (U, m). For a finite set U = {x1, …, xn}, the fuzzy set (U, m) is often denoted by {m(x1)/x1, …, m(xn)/xn}. Let x ∈ U. Then x is called not included in the fuzzy set (U, m) if m(x) = 0, fully included if m(x) = 1, and partially included if 0 < m(x) < 1. The (crisp) set of all fuzzy sets on a universe U is denoted by SF(U) (or sometimes just F(U)).[citation needed] For any fuzzy set A = (U, m) and α ∈ [0, 1], the following crisp sets are defined: the α-cut A^{≥α} = {x ∈ U : m(x) ≥ α}, the strong α-cut A^{>α} = {x ∈ U : m(x) > α}, the support Supp(A) = A^{>0}, and the core (or kernel) Core(A) = A^{≥1}. Note that some authors understand "kernel" in a different way; see below. Although the complement of a fuzzy set has a single most common definition, the other main operations, union and intersection, do have some ambiguity. By the definition of the t-norm, the union and intersection are commutative, monotonic, and associative, and have both a null and an identity element. For the intersection, these are ∅ and U, respectively, while for the union they are reversed. However, the union of a fuzzy set and its complement may not result in the full universe U, and their intersection may not give the empty set ∅. Since intersection and union are associative, it is natural to define the intersection and union of a finite family of fuzzy sets recursively. The generally accepted standard operators for the union and intersection of fuzzy sets are the max and min operators: μ_{A∪B}(x) = max(μ_A(x), μ_B(x)) and μ_{A∩B}(x) = min(μ_A(x), μ_B(x)). The case of exponent two is special enough to be given a name.
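The standard max/min operators, and the failure of the laws of excluded middle and non-contradiction mentioned above, can be demonstrated on a small finite universe (an illustrative sketch, not part of the article):

```python
U = ["x1", "x2", "x3"]
mu_A = {"x1": 0.2, "x2": 0.7, "x3": 1.0}
mu_B = {"x1": 0.5, "x2": 0.3, "x3": 0.0}

complement   = {x: 1 - mu_A[x] for x in U}               # standard negation
union        = {x: max(mu_A[x], mu_B[x]) for x in U}     # standard s-norm
intersection = {x: min(mu_A[x], mu_B[x]) for x in U}     # standard t-norm

# Unlike crisp sets, A ∪ Aᶜ need not be the whole universe U,
# and A ∩ Aᶜ need not be the empty set:
a_or_not_a  = {x: max(mu_A[x], complement[x]) for x in U}
a_and_not_a = {x: min(mu_A[x], complement[x]) for x in U}
assert a_or_not_a["x1"] == 0.8   # < 1, so not the full universe
assert a_and_not_a["x1"] == 0.2  # > 0, so not the empty set
```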
Taking 0^0 = 1, we have A^0 = U and A^1 = A. In contrast to the general ambiguity of the intersection and union operations, there is clarity for disjoint fuzzy sets: two fuzzy sets A, B are disjoint iff for every x ∈ U either μ_A(x) = 0 or μ_B(x) = 0, which is equivalent to min(μ_A(x), μ_B(x)) = 0 for all x ∈ U, and also equivalent to Supp(A) ∩ Supp(B) = ∅. We keep in mind that min/max is a t-norm/s-norm pair, and any other such pair will work here as well. Fuzzy sets are disjoint if and only if their supports are disjoint according to the standard definition for crisp sets. For disjoint fuzzy sets A, B, any intersection will give ∅, and any union will give the same result, with membership function given by the sum μ_A(x) + μ_B(x); note that at each point only one of the two summands is greater than zero. For disjoint fuzzy sets A, B the following holds true: This can be generalized to finite families of fuzzy sets as follows: given a family A = (A_i)_{i∈I} of fuzzy sets with index set I (e.g. I = {1, 2, 3, …, n}), this family is (pairwise) disjoint iff the family of underlying supports Supp∘A = (Supp(A_i))_{i∈I} is disjoint in the standard sense for families of crisp sets. Independent of the t-norm/s-norm pair, the intersection of a disjoint family of fuzzy sets will again give ∅, while the union has no ambiguity, with membership function given by the sum of the members' membership functions; again, at each point only one of the summands is greater than zero. For disjoint families of fuzzy sets A = (A_i)_{i∈I} the following holds true: For a fuzzy set A with finite support Supp(A) (i.e.
a "finite fuzzy set"), its cardinality (aka scalar cardinality or sigma-count) is given by Card(A) = Σ_{x∈U} μ_A(x). In the case that U itself is a finite set, the relative cardinality is given by RelCard(A) = Card(A) / |U|. This can be generalized for the divisor to be a non-empty fuzzy set: for fuzzy sets A, G with G ≠ ∅, we can define the relative cardinality by RelCard(A, G) = Card(A ∩ G) / Card(G), which looks very similar to the expression for conditional probability. Note: For any fuzzy set A, the membership function μ_A : U → [0, 1] can be regarded as a family μ_A = (μ_A(x))_{x∈U} ∈ [0, 1]^U. The latter is a metric space with several known metrics d. A metric can be derived from a norm (vector norm) ‖·‖ via d(α, β) = ‖α − β‖. For instance, if U is finite, i.e. U = {x1, x2, …, xn}, such a metric may be defined by d(α, β) = max{|α(xi) − β(xi)| : i = 1, …, n}; for infinite U, the maximum can be replaced by a supremum. Because fuzzy sets are unambiguously defined by their membership functions, this metric can be used to measure distances between fuzzy sets on the same universe: d(A, B) = d(μ_A, μ_B), which becomes, in the above sample, d(A, B) = max{|μ_A(xi) − μ_B(xi)| : i = 1, …, n}. Again, for infinite U the maximum must be replaced by a supremum. Other distances (like the canonical 2-norm) may diverge if infinite fuzzy sets are too different, e.g., ∅ and U. Similarity measures (here denoted by S) may then be derived from the distance, e.g. after a proposal by Koczy, S = 1 / (1 + d(A, B)), or after Williams and Steele, S = exp(−α d(A, B)), where α > 0 is a steepness parameter.[citation needed] Sometimes, more general variants of the notion of fuzzy set are used, with membership functions taking values in a (fixed or variable) algebra or structure L of a given kind; usually it is required that L be at least a poset or lattice.
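The sigma-count and the sup-metric just described are easy to compute for finite universes. A minimal sketch (illustrative data, not part of the article):

```python
mu_A = {"x1": 0.2, "x2": 0.7, "x3": 1.0}
mu_B = {"x1": 0.5, "x2": 0.7, "x3": 0.0}

# Scalar cardinality (sigma-count): the sum of the membership grades.
card_A = sum(mu_A.values())
assert abs(card_A - 1.9) < 1e-9

# Relative cardinality over the finite universe U, |U| = 3.
rel_card_A = card_A / len(mu_A)

# Sup-distance between two fuzzy sets on the same universe.
d = max(abs(mu_A[x] - mu_B[x]) for x in mu_A)
assert abs(d - 1.0) < 1e-9  # attained at x3: |1.0 - 0.0|
```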
These are usually called L-fuzzy sets, to distinguish them from those valued over the unit interval. The usual membership functions with values in [0, 1] are then called [0, 1]-valued membership functions. These kinds of generalizations were first considered in 1967 by Joseph Goguen, who was a student of Zadeh.[8] A classical corollary may be indicating truth and membership values by {f, t} instead of {0, 1}. An extension of fuzzy sets has been provided by Atanassov. An intuitionistic fuzzy set (IFS) A is characterized by two functions, a degree of membership μ_A and a degree of non-membership ν_A, with μ_A, ν_A : U → [0, 1] and ∀x ∈ U: μ_A(x) + ν_A(x) ≤ 1. This resembles a situation like some person denoted by x voting: after all, we have a percentage of approvals, a percentage of denials, and a percentage of abstentions. For this situation, special "intuitionistic fuzzy" negators, t-norms and s-norms can be defined. With D* = {(α, β) ∈ [0, 1]² : α + β ≤ 1} and by combining both functions to (μ_A, ν_A) : U → D*, this situation resembles a special kind of L-fuzzy set. Once more, this has been expanded by defining picture fuzzy sets (PFS) as follows: a PFS A is characterized by three functions mapping U to [0, 1], μ_A, η_A, ν_A, the "degree of positive membership", "degree of neutral membership", and "degree of negative membership" respectively, with the additional condition ∀x ∈ U: μ_A(x) + η_A(x) + ν_A(x) ≤ 1. This expands the voting example above by an additional possibility of "refusal of voting".
With D* = {(α, β, γ) ∈ [0, 1]³ : α + β + γ ≤ 1} and special "picture fuzzy" negators, t-norms and s-norms, this again resembles a type of L-fuzzy set.[9] One extension of IFS is what are known as Pythagorean fuzzy sets. Such sets satisfy the constraint μ_A(x)² + ν_A(x)² ≤ 1, which is reminiscent of the Pythagorean theorem.[10][11][12] Pythagorean fuzzy sets are applicable to real-life situations in which the previous condition μ_A(x) + ν_A(x) ≤ 1 does not hold; the less restrictive condition μ_A(x)² + ν_A(x)² ≤ 1 may then still be suitable.[13][14] As an extension of the case of multi-valued logic, valuations (μ : V₀ → W) of propositional variables (V₀) into a set of membership degrees (W) can be thought of as membership functions mapping predicates into fuzzy sets (or more formally, into an ordered set of fuzzy pairs, called a fuzzy relation). With these valuations, many-valued logic can be extended to allow for fuzzy premises from which graded conclusions may be drawn.[15] This extension is sometimes called "fuzzy logic in the narrow sense", as opposed to "fuzzy logic in the wider sense", which originated in the engineering fields of automated control and knowledge engineering, and which encompasses many topics involving fuzzy sets and "approximate reasoning".[16] Industrial applications of fuzzy sets in the context of "fuzzy logic in the wider sense" can be found at fuzzy logic. A fuzzy number[17] is a fuzzy set that satisfies all of the following conditions: If these conditions are not satisfied, then A is not a fuzzy number.
The core of this fuzzy number is a singleton; its location is: Fuzzy numbers can be likened to the funfair game "guess your weight", where someone guesses the contestant's weight, with closer guesses being more correct, and where the guesser "wins" if the guess is near enough to the contestant's weight, the actual weight being completely correct (mapping to 1 by the membership function). The kernel K(A) = Kern(A) of a fuzzy interval A is defined as the 'inner' part, without the 'outbound' parts where the membership value is constant ad infinitum: the smallest subset of ℝ outside of which μ_A(x) is constant is defined as the kernel. However, there are other concepts of fuzzy numbers and intervals, as some authors do not insist on convexity. The use of set membership as a key component of category theory can be generalized to fuzzy sets. This approach, which began in 1968 shortly after the introduction of fuzzy set theory,[18] led to the development of Goguen categories in the 21st century.[19][20] In these categories, rather than two-valued set membership, more general intervals are used, which may be lattices, as in L-fuzzy sets.[20][21] There are numerous mathematical extensions similar to or more general than fuzzy sets. Since fuzzy sets were introduced in 1965 by Zadeh, many new mathematical constructions and theories treating imprecision, inaccuracy, vagueness, uncertainty and vulnerability have been developed. Some of these constructions and theories are extensions of fuzzy set theory, while others attempt to mathematically model inaccuracy/vagueness and uncertainty in a different way. The diversity of such constructions and corresponding theories includes: The fuzzy relation equation is an equation of the form A · R = B, where A and B are fuzzy sets, R is a fuzzy relation, and A · R stands for the composition of A with R.[citation needed]
A measure d of fuzziness for fuzzy sets of universe U should fulfill the following conditions for all x ∈ U: In this case d(A) is called the entropy of the fuzzy set A. For finite U = {x1, x2, …, xn}, the entropy of a fuzzy set A is given by d(A) = k Σ_{i=1}^{n} S(μ_A(xi)), where S(x) = H_e(x) = −x ln x − (1 − x) ln(1 − x) is Shannon's function (natural entropy function) and k is a constant depending on the measure unit and the logarithm base used (here we have used the natural base e). The physical interpretation of k is the Boltzmann constant k_B. Let A be a fuzzy set with a continuous membership function (fuzzy variable). Then its entropy is d(A) = k ∫ S(μ_A(x)) dx.
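Assuming the Shannon-function form of the entropy stated above (the article's own list of conditions on d did not survive extraction, so this is a sketch of that one formula only), the finite case is straightforward to compute:

```python
import math

def shannon(p: float) -> float:
    """Natural-base entropy function S(p) = -p ln p - (1-p) ln(1-p)."""
    if p in (0.0, 1.0):
        return 0.0  # limit value: 0 ln 0 is taken as 0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def fuzzy_entropy(mu: dict, k: float = 1.0) -> float:
    """Entropy of a finite fuzzy set: d(A) = k * sum over U of S(mu(x))."""
    return k * sum(shannon(m) for m in mu.values())

# A crisp set (all grades 0 or 1) has entropy 0; the fuzziest possible
# grade 0.5 contributes the maximum S(0.5) = ln 2 per element.
assert fuzzy_entropy({"x1": 0.0, "x2": 1.0}) == 0.0
assert math.isclose(fuzzy_entropy({"x1": 0.5}), math.log(2))
```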
https://en.wikipedia.org/wiki/Fuzzy_set
Mereology (/mɪəriˈɒlədʒi/; from Greek μέρος 'part' (root: μερε-, mere-) and the suffix -logy, 'study, discussion, science') is the philosophical study of part-whole relationships, also called parthood relationships.[1][2] As a branch of metaphysics, mereology examines the connections between parts and their wholes, exploring how components interact within a system. The theory has roots in ancient philosophy, with significant contributions from Plato, Aristotle, and, later, medieval and Renaissance thinkers like Thomas Aquinas and John Duns Scotus.[3] Mereology was formally axiomatized in the 20th century by the Polish logician Stanisław Leśniewski, who introduced it as part of a comprehensive framework for logic and mathematics and coined the word "mereology".[2] Mereological ideas were influential in early § Set theory, and formal mereology has continued to be used by a minority in works on the § Foundations of mathematics. Different axiomatizations of mereology have been applied in § Metaphysics, used in § Linguistic semantics to analyze "mass terms", used in the cognitive sciences,[1] and developed in § General systems theory. Mereology has been combined with topology; for more on this, see the article on mereotopology. Mereology is also used in the foundation of Whitehead's point-free geometry, on which see Tarski 1956 and Gerla 1995.
Mereology is used in discussions of entities as varied as musical groups, geographical regions, and abstract concepts, demonstrating its applicability to a wide range of philosophical and scientific discourses.[1] In metaphysics, mereology is used to formulate the thesis of "composition as identity", the theory that individuals or objects are identical to mereological sums (also called fusions) of their parts.[3] A metaphysical thesis called "mereological monism" holds that the version of mereology developed by Stanisław Leśniewski and Nelson Goodman (commonly called Classical Extensional Mereology, or CEM) serves as the general and exhaustive theory of parthood and composition, at least for a large and significant domain of things.[4] This thesis is controversial, since parthood may not seem to be a transitive relation (as claimed by CEM) in some cases, such as the parthood between organisms and their organs.[5] Nevertheless, CEM's assumptions are very common in mereological frameworks, due largely to Leśniewski's influence as the first to coin the word and formalize the theory: mereological theories commonly assume that everything is a part of itself (reflexivity), that a part of a part of a whole is itself a part of that whole (transitivity), and that two distinct entities cannot each be a part of the other (antisymmetry), so that the parthood relation is a partial order. An alternative is to assume instead that parthood is irreflexive (nothing is ever a part of itself) but still transitive, in which case asymmetry follows automatically. Informal part-whole reasoning was consciously invoked in metaphysics and ontology from Plato (in particular, in the second half of the Parmenides) and Aristotle onwards, and more or less unwittingly in 19th-century mathematics until the triumph of set theory around 1910. Metaphysical ideas of this era that discuss the concepts of parts and wholes include divine simplicity and the classical conception of beauty.
Ivor Grattan-Guinness (2001) sheds much light on part-whole reasoning during the 19th and early 20th centuries, and reviews how Cantor and Peano devised set theory. It appears that the first to reason consciously and at length about parts and wholes[citation needed] was Edmund Husserl, in 1901, in the second volume of Logical Investigations (Third Investigation: "On the Theory of Wholes and Parts"; Husserl 1970 is the English translation). However, the word "mereology" is absent from his writings, and he employed no symbolism, even though his doctorate was in mathematics. Stanisław Leśniewski coined "mereology" in 1927, from the Greek word μέρος (méros, "part"), to refer to a formal theory of part-whole that he devised in a series of highly technical papers published between 1916 and 1931, translated in Leśniewski (1992). Leśniewski's student Alfred Tarski, in his Appendix E to Woodger (1937) and the paper translated as Tarski (1984), greatly simplified Leśniewski's formalism. Other students (and students of students) of Leśniewski elaborated this "Polish mereology" over the course of the 20th century. For a good selection of the literature on Polish mereology, see Srzednicki and Rickey (1984); for a survey, see Simons (1987). Since about 1980, however, research on Polish mereology has been almost entirely historical in nature. A. N. Whitehead planned a fourth volume of Principia Mathematica, on geometry, but never wrote it. His 1914 correspondence with Bertrand Russell reveals that his intended approach to geometry can be seen, with the benefit of hindsight, as mereological in essence. This work culminated in Whitehead (1916) and the mereological systems of Whitehead (1919, 1920). In 1930, Henry S. Leonard completed a Harvard PhD dissertation in philosophy setting out a formal theory of the part-whole relation. This evolved into the "calculus of individuals" of Goodman and Leonard (1940).
Goodman revised and elaborated this calculus in the three editions of Goodman (1951). The calculus of individuals is the starting point for the post-1970 revival of mereology among logicians, ontologists, and computer scientists, a revival well surveyed in Simons (1987), Casati and Varzi (1999), and Cotnoir and Varzi (2021). A basic choice in defining a mereological system is whether to allow things to be considered parts of themselves (reflexivity of parthood). In naive set theory a similar question arises: whether a set is to be considered a "member" of itself. In both cases, "yes" gives rise to paradoxes analogous to Russell's paradox: let there be an object O such that every object that is not a proper part of itself is a proper part of O. Is O a proper part of itself? No, because no object is a proper part of itself; and yes, because it meets the specified requirement for inclusion as a proper part of O. (In set theory, a set is often termed an improper subset of itself.) Given such paradoxes, mereology requires an axiomatic formulation. A mereological "system" is a first-order theory (with identity) whose universe of discourse consists of wholes and their respective parts, collectively called objects. Mereology is a collection of nested and non-nested axiomatic systems, not unlike the case with modal logic. The treatment, terminology, and hierarchical organization below follow Casati and Varzi (1999: Ch. 3) closely; for a more recent treatment, correcting certain misconceptions, see Hovda (2008). Lower-case letters denote variables ranging over objects. Following each symbolic axiom or definition is the number of the corresponding formula in Casati and Varzi, written in bold. A mereological system requires at least one primitive binary relation (dyadic predicate). The most conventional choice for such a relation is parthood (also called "inclusion"): "x is a part of y", written Pxy. Nearly all systems require that parthood partially order the universe.
The following defined relations, required for the axioms below, follow immediately from parthood alone: Overlap and Underlap are reflexive, symmetric, and not transitive. Systems vary in what relations they take as primitive and as defined. For example, in extensional mereologies (defined below), parthood can be defined from Overlap as follows: The axioms are: Simons (1987), Casati and Varzi (1999) and Hovda (2008) describe many mereological systems whose axioms are taken from the above list. We adopt the boldface nomenclature of Casati and Varzi. The best-known such system is the one called classical extensional mereology, hereinafter abbreviated CEM (other abbreviations are explained below). In CEM, P.1 through P.8' hold as axioms or are theorems. M9, Top, and Bottom are optional. The systems in the table below are partially ordered by inclusion, in the sense that, if all the theorems of system A are also theorems of system B, but the converse is not necessarily true, then B includes A. The resulting Hasse diagram is similar to Fig. 3.2 in Casati and Varzi (1999: 48). There are two equivalent ways of asserting that the universe is partially ordered: assume either M1–M3, or that Proper Parthood is transitive and asymmetric, hence a strict partial order. Either axiomatization results in the system M. M2 rules out closed loops formed using Parthood, so that the part relation is well-founded. (Sets are well-founded if the axiom of regularity is assumed.) The literature contains occasional philosophical and common-sense objections to the transitivity of Parthood. M4 and M5 are two ways of asserting supplementation, the mereological analog of set complementation, with M5 being stronger because M4 is derivable from M5. M and M4 yield minimal mereology, MM. Reformulated in terms of Proper Part, MM is Simons's (1987) preferred minimal system. In any system in which M5 or M5' is assumed or can be derived, it can be proved that two objects having the same proper parts are identical.
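The defined relations and the two supplementation axioms just mentioned can be written out as follows; the formulas were displayed material in the original and are absent here, so this is a standard rendering consistent with Casati and Varzi's usage, not a verbatim copy:

```latex
\begin{align*}
PPxy &\;\leftrightarrow\; Pxy \wedge \neg Pyx
      && \text{(proper part)}\\
Oxy  &\;\leftrightarrow\; \exists z\,(Pzx \wedge Pzy)
      && \text{(overlap)}\\
Uxy  &\;\leftrightarrow\; \exists z\,(Pxz \wedge Pyz)
      && \text{(underlap)}\\
Pxy  &\;\leftrightarrow\; \forall z\,(Ozx \rightarrow Ozy)
      && \text{(parthood from overlap, in extensional systems)}\\
\text{(M4)}\quad & PPxy \rightarrow \exists z\,(Pzy \wedge \neg Ozx)
      && \text{(weak supplementation)}\\
\text{(M5)}\quad & \neg Pyx \rightarrow \exists z\,(Pzy \wedge \neg Ozx)
      && \text{(strong supplementation)}
\end{align*}
```

M4 follows from M5 together with the ordering axioms, which is the sense in which M5 is the stronger principle.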
This property is known as Extensionality, a term borrowed from set theory, for which extensionality is the defining axiom. Mereological systems in which Extensionality holds are termed extensional, a fact denoted by including the letter E in their symbolic names. M6 asserts that any two underlapping objects have a unique sum; M7 asserts that any two overlapping objects have a unique product. If the universe is finite or if Top is assumed, then the universe is closed under Sum. Universal closure of Product and of supplementation relative to W requires Bottom. W and N are, evidently, the mereological analogs of the universal and empty sets, and Sum and Product are, likewise, the analogs of set-theoretical union and intersection. If M6 and M7 are either assumed or derivable, the result is a mereology with closure. Because Sum and Product are binary operations, M6 and M7 admit the sum and product of only a finite number of objects. The Unrestricted Fusion axiom, M8, enables taking the sum of infinitely many objects. The same holds for Product, when defined. At this point, mereology often invokes set theory, but any recourse to set theory is eliminable by replacing a formula with a quantified variable ranging over a universe of sets by a schematic formula with one free variable. The formula comes out true (is satisfied) whenever the name of an object that would be a member of the set (if it existed) replaces the free variable. Hence any axiom with sets can be replaced by an axiom schema with monadic atomic subformulae. M8 and M8' are schemas of just this sort. The syntax of a first-order theory can describe only a denumerable number of sets; hence, only denumerably many sets may be eliminated in this fashion, but this limitation is not binding for the sort of mathematics contemplated here. If M8 holds, then W exists for infinite universes. Hence, Top need be assumed only if the universe is infinite and M8 does not hold. Top (postulating W) is not controversial, but Bottom (postulating N) is.
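These axioms can be checked mechanically in a small finite model. The sketch below is our own illustration, not from the source: it takes the nonempty subsets of a three-element set, with parthood as set inclusion, which is a standard finite model of classical extensional mereology, and verifies the ordering axioms, weak supplementation, extensionality, and closure under sum by brute force.

```python
from itertools import chain, combinations

# Universe: all nonempty subsets of {0, 1, 2}, with parthood P(x, y) = x <= y.
# (The empty set is excluded, matching the usual rejection of Bottom/N.)
atoms = {0, 1, 2}
universe = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(atoms), r) for r in range(1, len(atoms) + 1))]

P  = lambda x, y: x <= y        # parthood: subset
PP = lambda x, y: x < y         # proper parthood: strict subset
O  = lambda x, y: bool(x & y)   # overlap: a common part exists

# M1-M3: parthood partially orders the universe.
assert all(P(x, x) for x in universe)                              # reflexive
assert all(x == y for x in universe for y in universe
           if P(x, y) and P(y, x))                                 # antisymmetric
assert all(P(x, z) for x in universe for y in universe
           for z in universe if P(x, y) and P(y, z))               # transitive

# M4 (weak supplementation): every proper part is supplemented by
# a disjoint remainder.
assert all(any(P(z, y) and not O(z, x) for z in universe)
           for x in universe for y in universe if PP(x, y))

# Extensionality: composite objects with the same proper parts are identical.
def proper_parts(x):
    return frozenset(z for z in universe if PP(z, x))

composites = [x for x in universe if proper_parts(x)]
assert all(x == y for x in composites for y in composites
           if proper_parts(x) == proper_parts(y))

# Sums always exist (union), but the product of disjoint objects is
# undefined here, since the empty set is not in the universe.
assert all((x | y) in universe for x in universe for y in universe)
print("All CEM checks pass on the 7-element model.")
```

The same script fails the extensionality check if, say, duplicate composite objects are added, which is one way to see why Extensionality is a substantive axiom rather than a triviality.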
Leśniewski rejected Bottom, and most mereological systems follow his example (an exception is the work of Richard Milton Martin). Hence, while the universe is closed under sum, the product of objects that do not overlap is typically undefined. A system with W but not N is isomorphic to a Boolean algebra with the zero element deleted. Postulating N renders all possible products definable, but also transforms classical extensional mereology into a set-free model of Boolean algebra. If sets are admitted, M8 asserts the existence of the fusion of all members of any nonempty set. Any mereological system in which M8 holds is called general, and its name includes G. In any general mereology, M6 and M7 are provable. Adding M8 to an extensional mereology results in general extensional mereology, abbreviated GEM; moreover, the extensionality renders the fusion unique. Conversely, if the fusion asserted by M8 is assumed unique, so that M8' replaces M8, then—as Tarski (1929) showed—M3 and M8' suffice to axiomatize GEM, a remarkably economical result. Simons (1987: 38–41) lists a number of GEM theorems. M2 and a finite universe necessarily imply Atomicity, namely that everything either is an atom or includes atoms among its proper parts. If the universe is infinite, Atomicity requires M9. Adding M9 to any mereological system X results in the atomistic variant thereof, denoted AX. Atomicity permits economies; for instance, assuming M5' implies Atomicity and extensionality, and yields an alternative axiomatization of AGEM. From the beginnings of set theory, there has been a dispute between conceiving of sets "mereologically", where a set is the mereological sum of its elements, and conceiving of sets "collectively", where a set is something "over and above" its elements.[6] The latter conception is now dominant, but some of the earliest set theorists adhered to the mereological conception: Richard Dedekind, in "Was sind und was sollen die Zahlen?"
(1888), avoided the empty set and used the same symbol for set membership and set inclusion,[7] which are two signs that he conceived of sets mereologically.[6] Similarly, Ernst Schröder, in "Vorlesungen über die Algebra der Logik" (1890),[8] also used the mereological conception.[6] It was Gottlob Frege, in an 1895 review of Schröder's work,[9] who first laid out the difference between collections and mereological sums.[6] The fact that Ernst Zermelo adopted the collective conception when he wrote his influential 1908 axiomatization of set theory[10][11] is certainly significant for, though it does not fully explain, its current popularity.[6] In set theory, singletons are "atoms" that have no (non-empty) proper parts; a set theory in which sets cannot be built up from unit sets is a nonstandard type of set theory, called non-well-founded set theory. The calculus of individuals was once thought to require that an object either have no proper parts, in which case it is an "atom", or be the mereological sum of atoms. Eberle (1970), however, showed how to construct a calculus of individuals lacking "atoms", i.e., one where every object has a "proper part", so that the universe is infinite.
A detailed comparison between mereology, set theory, and a semantic "ensemble theory" is presented in chapter 13 of Bunt (1985);[12] when David Lewis wrote his famous Parts of Classes, he found that "its main thesis had been anticipated in" Bunt's ensemble theory.[13] Philosopher David Lewis, in his 1991 work Parts of Classes,[13] axiomatized Zermelo-Fraenkel (ZFC) set theory using only classical mereology, plural quantification, and a primitive singleton-forming operator,[14] governed by axioms that resemble the axioms for "successor" in Peano arithmetic.[15] This contrasts with more usual axiomatizations of ZFC, which use only the primitive notion of membership.[16] Lewis's work is named after his thesis that a class's subclasses are mereological parts of the class (in Lewis's usage, this means that a set's subsets, not counting the empty set, are parts of the set); this thesis has been disputed.[17] Michael Potter, a creator of Scott–Potter set theory, has criticized Lewis's work for failing to make set theory any more easily comprehensible, since Lewis says of his primitive singleton operator that, given the necessity (perceived by Lewis) of avoiding philosophically motivated mathematical revisionism, "I have to say, gritting my teeth, that somehow, I know not how, we do understand what it means to speak of singletons."[18] Potter says Lewis "could just as easily have said, gritting his teeth, that somehow, he knows not how, we do understand what it means to speak of membership, in which case there would have been no need for the rest of the book."[16] Forrest (2002) revised Lewis's analysis by first formulating a generalization of CEM, called "Heyting mereology", whose sole nonlogical primitive is Proper Part, assumed transitive and antireflexive. According to this theory, there exists a "fictitious" null individual that is a proper part of every individual; two schemas assert that every lattice join exists (lattices are complete) and that meet distributes over join.
On this Heyting mereology, Forrest erects a theory of pseudosets, adequate for all purposes to which sets have been put. Mereology was influential in early conceptions of set theory (see the discussion of set theory above), which is currently thought of as a foundation for all mathematical theories.[19][20] Even after the currently dominant "collective" conception of sets became prevalent, mereology has sometimes been developed as an alternative foundation, especially by authors who were nominalists and therefore rejected abstract objects such as sets. The advantage of mereology for nominalists is that mereological sums, unlike collective sets, are thought to be nothing "over and above" their (possibly concrete) parts.[3] Mereology may still be valuable to non-nominalists: Eberle (1970) defended the "ontological innocence" of mereology, which is the idea that one can employ mereology regardless of one's ontological stance regarding sets. This innocence results from mereology being formalizable in either of two equivalent ways: quantified variables ranging over a universe of sets, or schematic predicates with a single free variable. Still, Stanisław Leśniewski and Nelson Goodman, who developed Classical Extensional Mereology, were nominalists,[21] and consciously developed mereology as an alternative to set theory as a foundation of mathematics.[4] Goodman[22] defended the Principle of Nominalism, which states that whenever two entities have the same basic constituents, they are identical.[23] Most mathematicians and philosophers have accepted set theory as a legitimate and valuable foundation for mathematics, effectively rejecting the Principle of Nominalism in favor of some other theory, such as mathematical platonism.[23] David Lewis, whose Parts of Classes attempted to reconstruct set theory using mereology, was also a nominalist.[24] Richard Milton Martin, who was also a nominalist, employed a version of the calculus of individuals throughout his career, starting in 1941.
Goodman and Quine (1947) tried to develop the natural and real numbers using the calculus of individuals, but were mostly unsuccessful; Quine did not reprint that article in his Selected Logic Papers. In a series of chapters in the books he published in the last decade of his life, Richard Milton Martin set out to do what Goodman and Quine had abandoned 30 years prior. A recurring problem with attempts to ground mathematics in mereology is how to build up the theory of relations while abstaining from set-theoretic definitions of the ordered pair. Martin argued that Eberle's (1970) theory of relational individuals solved this problem. Burgess and Rosen (1997) provide a survey of attempts to found mathematics without using set theory, such as using mereology. In general systems theory, mereology refers to formal work on system decomposition and parts, wholes and boundaries (by, e.g., Mihajlo D. Mesarovic (1970), Gabriel Kron (1963), or Maurice Jessel; see Bowden (1989, 1998)). A hierarchical version of Gabriel Kron's network tearing was published by Keith Bowden (1991), reflecting David Lewis's ideas on gunk. Such ideas appear in theoretical computer science and physics, often in combination with sheaf theory, topos theory, or category theory. See also the work of Steve Vickers on (parts of) specifications in computer science, Joseph Goguen on physical systems, and Tom Etter (1996, 1998) on link theory and quantum mechanics. Bunt (1985), a study of the semantics of natural language, shows how mereology can help understand such phenomena as the mass–count distinction and verb aspect. But Nicolas (2008) argues that a different logical framework, called plural logic, should be used for that purpose. Also, natural language often employs "part of" in ambiguous ways (Simons 1987 discusses this at length). Hence, it is unclear how, if at all, one can translate certain natural language expressions into mereological predicates.
Steering clear of such difficulties may require limiting the interpretation of mereology to mathematics and natural science. Casati and Varzi (1999), for example, limit the scope of mereology to physical objects. In metaphysics there are many troubling questions pertaining to parts and wholes. One question addresses constitution and persistence, another asks about composition. In metaphysics, there are several puzzles concerning cases of mereological constitution, that is, what makes up a whole.[25] There is still a concern with parts and wholes, but instead of looking at what parts make up a whole, the emphasis is on what a thing is made of, such as its materials, e.g., the bronze in a bronze statue. Below are two of the main puzzles that philosophers use to discuss constitution. Ship of Theseus: Briefly, the puzzle goes something like this. There is a ship called the Ship of Theseus. Over time, the boards start to rot, so we remove the boards and place them in a pile. First question: is the ship made of the new boards the same as the ship that had all the old boards? Second: if we reconstruct a ship using all of the old planks, etc. from the Ship of Theseus, and we also have a ship that was built out of new boards (each added one by one over time to replace old decaying boards), which ship is the real Ship of Theseus? Statue and Lump of Clay: Roughly, a sculptor decides to mold a statue out of a lump of clay. At time t1 the sculptor has a lump of clay. After many manipulations, at time t2 there is a statue. The question asked is: are the lump of clay and the statue (numerically) identical? If so, how and why?[26] Constitution typically has implications for views on persistence: how does an object persist over time if any of its parts (materials) change or are removed? This is the case with humans, who lose cells and change height, hair color, and memories, and yet are said to be the same person today as when first born.
For example, Ted Sider is the same today as he was when he was born—he just changed. But how can this be if many parts of Ted today did not exist when Ted was just born? Is it possible for things, such as organisms, to persist? And if so, how? There are several views that attempt to answer this question. Some of the views are as follows (note that there are several other views):[27][28]
(a) Constitution view. This view accepts cohabitation; that is, two objects share exactly the same matter. It follows that there are no temporal parts.
(b) Mereological essentialism, which states that the only objects that exist are quantities of matter, which are things defined by their parts. The object persists if matter is removed (or the form changes), but the object ceases to exist if any matter is destroyed.
(c) Dominant sorts. This is the view that tracing is determined by which sort is dominant; proponents reject cohabitation. For example, the lump does not equal the statue because they are different "sorts".
(d) Nihilism—which claims that no objects exist except simples, so there is no persistence problem.
(e) 4-dimensionalism or temporal parts (may also go by the names perdurantism or exdurantism), which roughly states that aggregates of temporal parts are intimately related. For example, two roads merging, momentarily and spatially, are still one road, because they share a part.
(f) 3-dimensionalism (may also go by the name endurantism), where the object is wholly present. That is, the persisting object retains numerical identity.
One question that is addressed by philosophers is which is more fundamental: parts, wholes, or neither?[29][30][31][32][33][34][35][36][37][38] Another pressing question is called the special composition question (SCQ): for any Xs, when is it the case that there is a Y such that the Xs compose Y?[27][39][40][41][42][43][44] This question has caused philosophers to run in three different directions: nihilism, universal composition (UC), or a moderate view (restricted composition). The first two views are considered extreme, since the first denies composition and the second allows any and all non-spatially-overlapping objects to compose another object. The moderate view encompasses several theories that try to make sense of the SCQ without saying 'no' to composition or 'yes' to unrestricted composition. There are philosophers who are concerned with the question of fundamentality: that is, which is more ontologically fundamental, the parts or their wholes? There are several responses to this question, though one of the default assumptions is that the parts are more fundamental: that is, the whole is grounded in its parts. This is the mainstream view. Another view, explored by Schaffer (2010), is monism, where the parts are grounded in the whole. Schaffer does not just mean that, say, the parts that make up my body are grounded in my body. Rather, Schaffer argues that the whole cosmos is more fundamental and everything else is a part of the cosmos. Then there is the identity theory, which claims that there is no hierarchy or fundamentality to parts and wholes; instead, wholes are just (or equivalent to) their parts. There can also be a two-object view, which says that the wholes are not equal to the parts—they are numerically distinct from one another. Each of these theories has benefits and costs associated with it.[29][30][31][32] Philosophers want to know when some Xs compose something Y.
There are several kinds of response:
(a) Contact—Xs compose a complex Y if and only if the Xs are in contact;
(b) Fastening—Xs compose a complex Y if and only if the Xs are fastened;
(c) Cohesion—Xs compose a complex Y if and only if the Xs cohere (cannot be pulled apart or moved in relation to each other without breaking);
(d) Fusion—Xs compose a complex Y if and only if the Xs are fused (joined together such that there is no boundary);
(e) Organicism—Xs compose a complex Y if and only if either the activities of the Xs constitute a life or there is only one of the Xs;[45] and
(f) Brutal Composition—"It's just the way things are." There is no true, nontrivial, and finitely long answer.[46]
Many more hypotheses continue to be explored. A common problem with these theories is that they are vague. It remains unclear what "fastened" or "life" means, for example. And there are other problems with the restricted composition responses, many of which depend on which theory is being discussed.[40]
https://en.wikipedia.org/wiki/Mereology
I can remember Bertrand Russell telling me of a horrible dream. He was in the top floor of the University Library, about A.D. 2100. A library assistant was going round the shelves carrying an enormous bucket, taking down books, glancing at them, restoring them to the shelves or dumping them into the bucket. At last he came to three large volumes which Russell could recognize as the last surviving copy of Principia Mathematica. He took down one of the volumes, turned over a few pages, seemed puzzled for a moment by the curious symbolism, closed the volume, balanced it in his hand and hesitated.... He [Russell] said once, after some contact with the Chinese language, that he was horrified to find that the language of Principia Mathematica was an Indo-European one. The Principia Mathematica (often abbreviated PM) is a three-volume work on the foundations of mathematics written by the mathematician–philosophers Alfred North Whitehead and Bertrand Russell and published in 1910, 1912, and 1913. In 1925–1927, it appeared in a second edition with an important Introduction to the Second Edition, an Appendix A that replaced ✱9, and a new Appendix B and Appendix C. PM was conceived as a sequel to Russell's 1903 The Principles of Mathematics, but as PM states, this became an unworkable suggestion for practical and philosophical reasons: "The present work was originally intended by us to be comprised in a second volume of Principles of Mathematics... But as we advanced, it became increasingly evident that the subject is a very much larger one than we had supposed; moreover on many fundamental questions which had been left obscure and doubtful in the former work, we have now arrived at what we believe to be satisfactory solutions."
PM, according to its introduction, had three aims: (1) to analyze to the greatest possible extent the ideas and methods of mathematical logic and to minimize the number of primitive notions, axioms, and inference rules; (2) to precisely express mathematical propositions in symbolic logic using the most convenient notation that precise expression allows; (3) to solve the paradoxes that plagued logic and set theory at the turn of the 20th century, like Russell's paradox.[3] This third aim motivated the adoption of the theory of types in PM. The theory of types adopts grammatical restrictions on formulas that rule out the unrestricted comprehension of classes, properties, and functions. The effect of this is that formulas that would allow the comprehension of objects like the Russell set turn out to be ill-formed: they violate the grammatical restrictions of the system of PM. PM sparked interest in symbolic logic and advanced the subject, popularizing it and demonstrating its power.[4] The Modern Library placed PM 23rd in their list of the top 100 English-language nonfiction books of the twentieth century.[5] The Principia covered only set theory, cardinal numbers, ordinal numbers, and real numbers. Deeper theorems from real analysis were not included, but by the end of the third volume it was clear to experts that a large amount of known mathematics could in principle be developed in the adopted formalism. It was also clear how lengthy such a development would be. A fourth volume on the foundations of geometry had been planned, but the authors admitted to intellectual exhaustion upon completion of the third. As noted in the criticism of the theory by Kurt Gödel (below), unlike a formalist theory, the "logicistic" theory of PM has no "precise statement of the syntax of the formalism".
Furthermore, in the theory it is almost immediately observable that interpretations (in the sense of model theory) are presented in terms of truth-values for the behaviour of the symbols "⊢" (assertion of truth), "~" (logical not), and "V" (logical inclusive OR). Truth-values: PM embeds the notions of "truth" and "falsity" in the notion "primitive proposition". A raw (pure) formalist theory would not provide the meaning of the symbols that form a "primitive proposition"—the symbols themselves could be absolutely arbitrary and unfamiliar. The theory would specify only how the symbols behave based on the grammar of the theory. Then later, by assignment of "values", a model would specify an interpretation of what the formulas are saying. Thus in the formal Kleene symbol set below, the "interpretation" of what the symbols commonly mean, and by implication how they end up being used, is given in parentheses, e.g., "¬ (not)". But this is not a pure formalist theory. The following formalist theory is offered as contrast to the logicistic theory of PM. A contemporary formal system would be constructed as follows: The theory of PM has both significant similarities to, and significant differences from, a contemporary formal theory. Kleene states that "this deduction of mathematics from logic was offered as intuitive axiomatics. The axioms were intended to be believed, or at least to be accepted as plausible hypotheses concerning the world".[10] Indeed, unlike a formalist theory that manipulates symbols according to rules of grammar, PM introduces the notion of "truth-values", i.e., truth and falsity in the real-world sense, and the "assertion of truth", almost immediately as the fifth and sixth elements in the structure of the theory (PM 1962:4–36). Cf. PM 1962:90–94, for the first edition: The first edition (see discussion relative to the second edition, below) begins with a definition of the sign "⊃":
✱1.01. p ⊃ q . = . ~p ∨ q. Df.
✱1.1.
Anything implied by a true elementary proposition is true. Pp (modus ponens)
(✱1.11 was abandoned in the second edition.)
✱1.2. ⊦ : p ∨ p . ⊃ . p. Pp (principle of tautology)
✱1.3. ⊦ : q . ⊃ . p ∨ q. Pp (principle of addition)
✱1.4. ⊦ : p ∨ q . ⊃ . q ∨ p. Pp (principle of permutation)
✱1.5. ⊦ : p ∨ (q ∨ r) . ⊃ . q ∨ (p ∨ r). Pp (associative principle)
✱1.6. ⊦ :. q ⊃ r . ⊃ : p ∨ q . ⊃ . p ∨ r. Pp (principle of summation)
✱1.7. If p is an elementary proposition, ~p is an elementary proposition. Pp
✱1.71. If p and q are elementary propositions, p ∨ q is an elementary proposition. Pp
✱1.72. If φp and ψp are elementary propositional functions which take elementary propositions as arguments, φp ∨ ψp is an elementary proposition. Pp
Together with the "Introduction to the Second Edition", the second edition's Appendix A abandons the entire section ✱9, including six primitive propositions ✱9 through ✱9.15 together with the Axioms of reducibility. The revised theory is made difficult by the introduction of the Sheffer stroke ("|") to symbolise "incompatibility" (i.e., if both elementary propositions p and q are true, their "stroke" p | q is false), the contemporary logical NAND (not-AND). In the revised theory, the Introduction presents the notion of "atomic proposition", a "datum" that "belongs to the philosophical part of logic". These have no parts that are propositions and do not contain the notions "all" or "some". For example: "this is red", or "this is earlier than that". Such things can exist ad finitum, i.e., even an "infinite enumeration" of them to replace "generality" (i.e., the notion of "for all").[12] PM then "advance[s] to molecular propositions" that are all linked by "the stroke". Definitions give equivalences for "~", "∨", "⊃", and ".". The new introduction defines "elementary propositions" as atomic and molecular propositions together.
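Each of the primitive propositions ✱1.2–✱1.6 is, in modern terms, a propositional tautology, and the second edition's Sheffer-stroke definitions can be confirmed the same way. The brute-force truth-table check below is our own encoding, using ✱1.01's definition of p ⊃ q as ~p ∨ q:

```python
from itertools import product

def taut(f, n):
    """True if f evaluates to True under all 2**n truth assignments."""
    return all(f(*vals) for vals in product([True, False], repeat=n))

imp = lambda p, q: (not p) or q   # material implication, per *1.01

# The primitive propositions *1.2 - *1.6 of the first edition:
assert taut(lambda p: imp(p or p, p), 1)                             # *1.2
assert taut(lambda p, q: imp(q, p or q), 2)                          # *1.3
assert taut(lambda p, q: imp(p or q, q or p), 2)                     # *1.4
assert taut(lambda p, q, r: imp(p or (q or r), q or (p or r)), 3)    # *1.5
assert taut(lambda p, q, r: imp(imp(q, r), imp(p or q, p or r)), 3)  # *1.6

# The second edition's Sheffer stroke p|q (NAND): ~ and ∨ become definable.
stroke = lambda p, q: not (p and q)
assert taut(lambda p: (not p) == stroke(p, p), 1)                    # ~p = p|p
assert taut(lambda p, q:
            (p or q) == stroke(stroke(p, p), stroke(q, q)), 2)       # p∨q
print("All PM tautology checks pass.")
```

The last two assertions are the truth-table content of the claim that a single stroke primitive suffices to recover the first edition's connectives.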
It then replaces all the primitive propositions ✱1.2 to ✱1.72 with a single primitive proposition framed in terms of the stroke. The new introduction keeps the notation for "there exists" (now recast as "sometimes true") and "for all" (recast as "always true"). Appendix A strengthens the notion of "matrix" or "predicative function" (a "primitive idea", PM 1962:164) and presents four new primitive propositions as ✱8.1–✱8.13.
✱88. Multiplicative axiom
✱120. Axiom of infinity
In simple type theory, objects are elements of various disjoint "types". Types are implicitly built up as follows. If τ1, ..., τm are types then there is a type (τ1, ..., τm) that can be thought of as the class of propositional functions of τ1, ..., τm (which in set theory is essentially the set of subsets of τ1 × ... × τm). In particular, there is a type () of propositions, and there may be a type ι (iota) of "individuals" from which other types are built. Russell and Whitehead's notation for building up types from other types is rather cumbersome, and the notation here is due to Church. In the ramified type theory of PM, all objects are elements of various disjoint ramified types. Ramified types are implicitly built up as follows. If τ1, ..., τm, σ1, ..., σn are ramified types then, as in simple type theory, there is a type (τ1, ..., τm, σ1, ..., σn) of "predicative" propositional functions of τ1, ..., τm, σ1, ..., σn. However, there are also ramified types (τ1, ..., τm | σ1, ..., σn) that can be thought of as the classes of propositional functions of τ1, ..., τm obtained from propositional functions of type (τ1, ..., τm, σ1, ..., σn) by quantifying over σ1, ..., σn. When n = 0 (so there are no σs) these propositional functions are called predicative functions or matrices. This can be confusing because modern mathematical practice does not distinguish between predicative and non-predicative functions, and in any case PM never defines exactly what a "predicative function" actually is: this is taken as a primitive notion.
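In Church-style notation, the two formation rules just described can be summarised as follows (our paraphrase of the text above, not a formula from PM itself):

```latex
\begin{align*}
\text{simple types:}   \quad & \tau ::= \iota \;\mid\; (\tau_1,\ldots,\tau_m)\\
\text{ramified types:} \quad & (\tau_1,\ldots,\tau_m \mid \sigma_1,\ldots,\sigma_n)
\end{align*}
```

Here the ramified type is obtained from the predicative type (τ1, ..., τm, σ1, ..., σn) by quantifying over the σi; for n = 0 it reduces to the predicative type (τ1, ..., τm).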
Russell and Whitehead found it impossible to develop mathematics while maintaining the difference between predicative and non-predicative functions, so they introduced the axiom of reducibility, saying that for every non-predicative function there is a predicative function taking the same values. In practice this axiom essentially means that the elements of type (τ1, ..., τm | σ1, ..., σn) can be identified with the elements of type (τ1, ..., τm), which causes the hierarchy of ramified types to collapse down to simple type theory. (Strictly speaking, PM allows two propositional functions to be different even if they take the same values on all arguments; this differs from modern mathematical practice, where one normally identifies two such functions.) In Zermelo set theory one can model the ramified type theory of PM as follows. One picks a set ι to be the type of individuals. For example, ι might be the set of natural numbers, or the set of atoms (in a set theory with atoms) or any other set one is interested in. Then if τ1, ..., τm are types, the type (τ1, ..., τm) is the power set of the product τ1 × ... × τm, which can also be thought of informally as the set of (propositional predicative) functions from this product to a 2-element set {true, false}. The ramified type (τ1, ..., τm | σ1, ..., σn) can be modeled as the product of the type (τ1, ..., τm, σ1, ..., σn) with the set of sequences of n quantifiers (∀ or ∃) indicating which quantifier should be applied to each variable σi. (One can vary this slightly by allowing the σs to be quantified in any order, or allowing them to occur before some of the τs, but this makes little difference except to the bookkeeping.) The introduction to the second edition cautions: One point in regard to which improvement is obviously desirable is the axiom of reducibility ... . This axiom has a purely pragmatic justification ... but it is clearly not the sort of axiom with which we can rest content.
On this subject, however, it cannot be said that a satisfactory solution is yet obtainable. Dr Leon Chwistek [Theory of Constructive Types] took the heroic course of dispensing with the axiom without adopting any substitute; from his work it is clear that this course compels us to sacrifice a great deal of ordinary mathematics. There is another course, recommended by Wittgenstein† (†Tractatus Logico-Philosophicus, *5.54ff) for philosophical reasons. This is to assume that functions of propositions are always truth-functions, and that a function can only occur in a proposition through its values. (...) [Working through the consequences] ... the theory of inductive cardinals and ordinals survives; but it seems that the theory of infinite Dedekindian and well-ordered series largely collapses, so that irrationals, and real numbers generally, can no longer be adequately dealt with. Also Cantor's proof that 2^n > n breaks down unless n is finite.[13] It might be possible to sacrifice infinite well-ordered series to logical rigour, but the theory of real numbers is an integral part of ordinary mathematics, and can hardly be the subject of reasonable doubt. We are therefore justified in supposing that some logical axiom which is true will justify it. The axiom required may be more restricted than the axiom of reducibility, but if so, it remains to be discovered.[14] One author[4] observes that "The notation in that work has been superseded by the subsequent development of logic during the 20th century, to the extent that the beginner has trouble reading PM at all"; while much of the symbolic content can be converted to modern notation, the original notation itself is "a subject of scholarly dispute", and some notation "embodies substantive logical doctrines so that it cannot simply be replaced by contemporary symbolism".[15] Kurt Gödel was harshly critical of the notation: "What is missing, above all, is a precise statement of the syntax of the formalism.
Syntactical considerations are omitted even in cases where they are necessary for the cogency of the proofs."[16] This is reflected in the example below of the symbols "p", "q", "r" and "⊃" that can be formed into the string "p⊃q⊃r". PM requires a definition of what this symbol-string means in terms of other symbols; in contemporary treatments the "formation rules" (syntactical rules leading to "well formed formulas") would have prevented the formation of this string. Source of the notation: Chapter I, "Preliminary Explanations of Ideas and Notations", begins with the source of the elementary parts of the notation (the symbols =⊃≡−ΛVε and the system of dots): PM changed Peano's Ɔ to ⊃, and also adopted a few of Peano's later symbols, such as ℩ and ι, and Peano's practice of turning letters upside down. PM adopts the assertion sign "⊦" from Frege's 1879 Begriffsschrift:[18] Thus to assert a proposition p, PM writes: (Observe that, as in the original, the left dot is square and of greater size than the full stop on the right.) Most of the rest of the notation in PM was invented by Whitehead.[20] PM's dots[21] are used in a manner similar to parentheses. Each dot (or multiple dot) represents either a left or right parenthesis or the logical symbol ∧. More than one dot indicates the "depth" of the parentheses, for example ".", ":", ":." or "::". However, the position of the matching right or left parenthesis is not indicated explicitly in the notation but has to be deduced from some rules that are complex and at times ambiguous. Moreover, when the dots stand for the logical symbol ∧, its left and right operands have to be deduced using similar rules. First one has to decide based on context whether the dots stand for a left or right parenthesis or a logical symbol.
Then one has to decide how far away the matching parenthesis is: here one carries on until one meets either a larger number of dots, or the same number of dots that have equal or greater "force", or the end of the line. Dots next to the signs ⊃, ≡, ∨, =Df have greater force than dots next to (x), (∃x) and so on, which have greater force than dots indicating a logical product ∧. Example 1. The line corresponds to The two dots standing together immediately following the assertion-sign indicate that what is asserted is the entire line: since there are two of them, their scope is greater than that of any of the single dots to their right. They are replaced by a left parenthesis standing where the dots are and a right parenthesis at the end of the formula, thus: (In practice, these outermost parentheses, which enclose an entire formula, are usually suppressed.) The first of the single dots, standing between two propositional variables, represents conjunction. It belongs to the third group and has the narrowest scope. Here it is replaced by the modern symbol for conjunction "∧". The two remaining single dots pick out the main connective of the whole formula. They illustrate the utility of the dot notation in picking out those connectives which are relatively more important than the ones which surround them. The one to the left of the "⊃" is replaced by a pair of parentheses; the right one goes where the dot is, and the left one goes as far to the left as it can without crossing a group of dots of greater force, in this case the two dots which follow the assertion-sign. The dot to the right of the "⊃" is replaced by a left parenthesis which goes where the dot is and a right parenthesis which goes as far to the right as it can without going beyond the scope already established by a group of dots of greater force (in this case the two dots which followed the assertion-sign).
So the right parenthesis which replaces the dot to the right of the "⊃" is placed in front of the right parenthesis which replaced the two dots following the assertion-sign. Example 2, with double, triple, and quadruple dots: stands for Example 3, with a double dot indicating a logical symbol (from volume 1, page 10): stands for where the double dot represents the logical symbol ∧ and can be viewed as having higher priority than a non-logical single dot. Later in section ✱14, brackets "[ ]" appear, and in sections ✱20 and following, braces "{ }" appear. Whether these symbols have specific meanings or are just for visual clarification is unclear. Unfortunately, the single dot (but also ":", ":.", "::", etc.) is also used to symbolise "logical product" (contemporary logical AND, often symbolised by "&" or "∧"). Logical implication is represented by Peano's "Ɔ" simplified to "⊃", logical negation is symbolised by an elongated tilde, i.e., "~" (contemporary "~" or "¬"), and the logical OR by "v". The symbol "=" together with "Df" is used to indicate "is defined as", whereas in sections ✱13 and following, "=" is defined as (mathematically) "identical with", i.e., contemporary mathematical "equality" (cf. discussion in section ✱13). Logical equivalence is represented by "≡" (contemporary "if and only if"); "elementary" propositional functions are written in the customary way, e.g., "f(p)", but later the function sign appears directly before the variable without parenthesis, e.g., "φx", "χx", etc. For example, PM introduces the definition of "logical product" as follows: Translation of the formulas into contemporary symbols: various authors use alternate symbols, so no definitive translation can be given. However, because of criticisms such as that of Kurt Gödel below, the best contemporary treatments will be very precise with respect to the "formation rules" (the syntax) of the formulas.
The first formula might be converted into modern symbolism as follows:[22] alternately alternately etc. The second formula might be converted as follows: But note that this is not (logically) equivalent to (p → (q → r)) nor to ((p → q) → r), and these two are not logically equivalent either. These sections concern what is now known as predicate logic, and predicate logic with identity (equality). Section ✱10: The existential and universal "operators": PM adds "(x)" to represent the contemporary symbolism "for all x", i.e., "∀x", and it uses a backwards serifed E to represent "there exists an x", i.e., "(Ǝx)", i.e., the contemporary "∃x". The typical notation would be similar to the following: Sections ✱10, ✱11, ✱12: Properties of a variable extended to all individuals: Section ✱10 introduces the notion of "a property" of a "variable". PM gives the example: φ is a function that indicates "is a Greek", ψ indicates "is a man", and χ indicates "is a mortal"; these functions then apply to a variable x. PM can now write, and evaluate: The notation above means "for all x, x is a man". Given a collection of individuals, one can evaluate the above formula for truth or falsity. For example, given the restricted collection of individuals { Socrates, Plato, Russell, Zeus }, the above evaluates to "true" if we allow Zeus to be a man. But it fails for: because Russell is not Greek. And it fails for: because Zeus is not a mortal. Equipped with this notation, PM can create formulas to express the following: "If all Greeks are men and if all men are mortals then all Greeks are mortals" (PM 1962:138). Another example: the formula: means "The symbols representing the assertion 'There exists at least one x that satisfies function φ' is defined by the symbols representing the assertion 'It is not true that, given all values of x, there are no values of x satisfying φ'". The symbolisms ⊃x and "≡x" appear at ✱10.02 and ✱10.03.
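The claimed non-equivalence of the two readings of the ambiguous string "p ⊃ q ⊃ r" can be checked mechanically with a small truth table. This is a sketch; the helper `imp` is introduced here purely for illustration of material implication:

```python
from itertools import product

def imp(a, b):
    """Material implication: a ⊃ b is false only when a is true and b is false."""
    return (not a) or b

right_assoc = []  # truth values of p ⊃ (q ⊃ r)
left_assoc = []   # truth values of (p ⊃ q) ⊃ r

for p, q, r in product([True, False], repeat=3):
    right_assoc.append(imp(p, imp(q, r)))
    left_assoc.append(imp(imp(p, q), r))

# The two parenthesizations disagree: e.g. with p = q = r = False,
# p ⊃ (q ⊃ r) is true but (p ⊃ q) ⊃ r is false.
print(right_assoc == left_assoc)  # False
```

This is exactly why a definition (or formation rules) for the unparenthesized string is required before it can be assigned a meaning.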
Both are abbreviations for universality (i.e., "for all") that bind the variable x to the logical operator. Contemporary notation would have simply used parentheses outside of the equality ("=") sign: PM attributes the first symbolism to Peano. Section ✱11 applies this symbolism to two variables. Thus the notations ⊃x, ⊃y, ⊃x, y could all appear in a single formula. Section ✱12 reintroduces the notion of "matrix" (contemporary truth table), the notion of logical types, and in particular the notions of first-order and second-order functions and propositions. The new symbolism "φ!x" represents any value of a first-order function. If a circumflex "^" is placed over a variable, then this is an "individual" value of y, meaning that "ŷ" indicates "individuals" (e.g., a row in a truth table); this distinction is necessary because of the matrix/extensional nature of propositional functions. Now equipped with the matrix notion, PM can assert its controversial axiom of reducibility: a function of one or two variables (two being sufficient for PM's use) where all its values are given (i.e., in its matrix) is (logically) equivalent ("≡") to some "predicative" function of the same variables. The one-variable definition is given below as an illustration of the notation (PM 1962:166–167): ✱12.1 ⊢ :(Ǝf):φx.≡x.f!x Pp; This means: "We assert the truth of the following: There exists a function f with the property that, given all values of x, their evaluations in function φ (i.e., resulting in their matrix) is logically equivalent to some f evaluated at those same values of x (and vice versa, hence logical equivalence)". In other words: given a matrix determined by property φ applied to variable x, there exists a function f that, when applied to the x, is logically equivalent to the matrix. Or: every matrix φx can be represented by a function f applied to x, and vice versa.
✱13: The identity operator "=": This is a definition that uses the sign in two different ways, as noted by the quote from PM: means: The not-equals sign "≠" makes its appearance as a definition at ✱13.02. ✱14: Descriptions: From this, PM employs two new symbols, a forward "E" and an inverted iota "℩". Here is an example: This has the meaning: The text leaps from section ✱14 directly to the foundational sections ✱20 GENERAL THEORY OF CLASSES and ✱21 GENERAL THEORY OF RELATIONS. "Relations" are what is known in contemporary set theory as sets of ordered pairs. Sections ✱20 and ✱22 introduce many of the symbols still in contemporary usage. These include the symbols "ε", "⊂", "∩", "∪", "–", "Λ", and "V": "ε" signifies "is an element of" (PM 1962:188); "⊂" (✱22.01) signifies "is contained in", "is a subset of"; "∩" (✱22.02) signifies the intersection (logical product) of classes (sets); "∪" (✱22.03) signifies the union (logical sum) of classes (sets); "–" (✱22.03) signifies negation of a class (set); "Λ" signifies the null class; and "V" signifies the universal class or universe of discourse. Small Greek letters (other than "ε", "ι", "π", "φ", "ψ", "χ", and "θ") represent classes (e.g., "α", "β", "γ", "δ", etc.) (PM 1962:188). When applied to relations in section ✱23 CALCULUS OF RELATIONS, the symbols "⊂", "∩", "∪", and "–" acquire a dot: for example, "⊍", "∸".[26] The notion, and notation, of "a class" (set): In the first edition PM asserts that no new primitive ideas are necessary to define what is meant by "a class", and only two new "primitive propositions", called the axioms of reducibility for classes and relations respectively (PM 1962:25).[27] But before this notion can be defined, PM feels it necessary to create a peculiar notation "ẑ(φz)" that it calls a "fictitious object".
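The class symbols listed above map directly onto modern set operations. A toy illustration in Python (the small universe V and the classes α, β are invented here for the example):

```python
# PM's class symbols rendered as modern set operations over a toy
# universe of discourse. V, alpha, beta are hypothetical example values.
V = {1, 2, 3, 4, 5}          # "V": the universal class (universe of discourse)
alpha = {1, 2, 3}            # small Greek letters denote classes
beta = {2, 3, 4}

print(2 in alpha)            # "ε": is an element of
print(alpha <= V)            # "⊂": is contained in (is a subset of)
print(alpha & beta)          # "∩": intersection (logical product)
print(alpha | beta)          # "∪": union (logical sum)
print(V - alpha)             # "–": negation of a class, relative to V
Lambda = set()               # "Λ": the null class
```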
(PM 1962:188) At least PM can tell the reader how these fictitious objects behave, because "A class is wholly determinate when its membership is known, that is, there cannot be two different classes having the same membership" (PM 1962:26). This is symbolised by the following equality (similar to ✱13.01 above): Perhaps the above can be made clearer by the discussion of classes in the Introduction to the Second Edition, which disposes of the Axiom of Reducibility and replaces it with the notion: "All functions of functions are extensional" (PM 1962:xxxix), i.e., This has the reasonable meaning that "IF for all values of x the truth-values of the functions φ and ψ of x are [logically] equivalent, THEN the function ƒ of a given φẑ and ƒ of ψẑ are [logically] equivalent." PM asserts this is "obvious": Observe the change to the equality "=" sign on the right. PM goes on to state that it will continue to hang onto the notation "ẑ(φz)", but this is merely equivalent to φẑ, and this is a class (all quotes: PM 1962:xxxix). According to Carnap's "Logicist Foundations of Mathematics", Russell wanted a theory that could plausibly be said to derive all of mathematics from purely logical axioms. However, Principia Mathematica required, in addition to the basic axioms of type theory, three further axioms that seemed not to be true as mere matters of logic, namely the axiom of infinity, the axiom of choice, and the axiom of reducibility. Since the first two were existential axioms, Russell phrased mathematical statements depending on them as conditionals. But reducibility was required to be sure that the formal statements even properly express statements of real analysis, so that statements depending on it could not be reformulated as conditionals. Frank Ramsey tried to argue that Russell's ramification of the theory of types was unnecessary, so that reducibility could be removed, but these arguments seemed inconclusive.
Beyond the status of the axioms as logical truths, one can ask the following questions about any system such as PM: Propositional logic itself was known to be consistent, but the same had not been established for Principia's axioms of set theory. (See Hilbert's second problem.) Russell and Whitehead suspected that the system in PM is incomplete: for example, they pointed out that it does not seem powerful enough to show that the cardinal ℵω exists. However, one can ask if some recursively axiomatizable extension of it is complete and consistent. In 1930, Gödel's completeness theorem showed that first-order predicate logic itself was complete in a much weaker sense—that is, any sentence that is unprovable from a given set of axioms must actually be false in some model of the axioms. However, this is not the stronger sense of completeness desired for Principia Mathematica, since a given system of axioms (such as those of Principia Mathematica) may have many models, in some of which a given statement is true and in others of which that statement is false, so that the statement is left undecided by the axioms. Gödel's incompleteness theorems cast unexpected light on these two related questions. Gödel's first incompleteness theorem showed that no recursive extension of Principia could be both consistent and complete for arithmetic statements. (As mentioned above, Principia itself was already known to be incomplete for some non-arithmetic statements.) According to the theorem, within every sufficiently powerful recursive logical system (such as Principia), there exists a statement G that essentially reads, "The statement G cannot be proved." Such a statement is a sort of Catch-22: if G is provable, then it is false, and the system is therefore inconsistent; and if G is not provable, then it is true, and the system is therefore incomplete. Gödel's second incompleteness theorem (1931) shows that no formal system extending basic arithmetic can be used to prove its own consistency.
Thus, the statement "there are no contradictions in the Principia system" cannot be proven in the Principia system unless there are contradictions in the system (in which case it can be proven both true and false). By the second edition of PM, Russell had replaced his axiom of reducibility with a new axiom (although he does not state it as such). Gödel 1944:126 describes it this way: This change is connected with the new axiom that functions can occur in propositions only "through their values", i.e., extensionally (...) [this is] quite unobjectionable even from the constructive standpoint (...) provided that quantifiers are always restricted to definite orders. This change from a quasi-intensional stance to a fully extensional stance also restricts predicate logic to the second order, i.e., functions of functions: "We can decide that mathematics is to confine itself to functions of functions which obey the above assumption". This new proposal resulted in a dire outcome. An "extensional stance" and restriction to a second-order predicate logic means that a propositional function extended to all individuals, such as "All 'x' are blue", now has to list all of the 'x' that satisfy (are true in) the proposition, listing them in a possibly infinite conjunction: e.g., x1 ∧ x2 ∧ ... ∧ xn ∧ .... Ironically, this change came about as the result of criticism from Ludwig Wittgenstein in his 1919 Tractatus Logico-Philosophicus. As described by Russell in the Introduction to the Second Edition of PM: There is another course, recommended by Wittgenstein† (†Tractatus Logico-Philosophicus, *5.54ff) for philosophical reasons. This is to assume that functions of propositions are always truth-functions, and that a function can only occur in a proposition through its values. (...) [Working through the consequences] it appears that everything in Vol.
I remains true (though often new proofs are required); the theory of inductive cardinals and ordinals survives; but it seems that the theory of infinite Dedekindian and well-ordered series largely collapses, so that irrationals, and real numbers generally, can no longer be adequately dealt with. Also Cantor's proof that 2^n > n breaks down unless n is finite." In other words, the fact that an infinite list cannot realistically be specified means that the concept of "number" in the infinite sense (i.e., the continuum) cannot be described by the new theory proposed in PM's second edition. Wittgenstein, in his Lectures on the Foundations of Mathematics, Cambridge 1939, criticised Principia on various grounds, such as: Wittgenstein did, however, concede that Principia may nonetheless make some aspects of everyday arithmetic clearer. Gödel offered a "critical but sympathetic discussion of the logicistic order of ideas" in his 1944 article "Russell's Mathematical Logic".[28] He wrote: It is to be regretted that this first comprehensive and thorough-going presentation of a mathematical logic and the derivation of mathematics from it [is] so greatly lacking in formal precision in the foundations (contained in ✱1–✱21 of Principia [i.e., sections ✱1–✱5 (propositional logic), ✱8–✱14 (predicate logic with identity/equality), ✱20 (introduction to set theory), and ✱21 (introduction to relations theory)]) that it represents in this respect a considerable step backwards as compared with Frege. What is missing, above all, is a precise statement of the syntax of the formalism. Syntactical considerations are omitted even in cases where they are necessary for the cogency of the proofs ... The matter is especially doubtful for the rule of substitution and of replacing defined symbols by their definiens ... it is chiefly the rule of substitution which would have to be proved.[16] This section describes the propositional and predicate calculus, and gives the basic properties of classes, relations, and types.
This part covers various properties of relations, especially those needed for cardinal arithmetic. This covers the definition and basic properties of cardinals. A cardinal is defined to be an equivalence class of similar classes (as opposed to ZFC, where a cardinal is a special sort of von Neumann ordinal). Each type has its own collection of cardinals associated with it, and there is a considerable amount of bookkeeping necessary for comparing cardinals of different types. PM defines addition, multiplication and exponentiation of cardinals, and compares different definitions of finite and infinite cardinals. ✱120.03 is the Axiom of infinity. A "relation-number" is an equivalence class of isomorphic relations. PM defines analogues of addition, multiplication, and exponentiation for arbitrary relations. The addition and multiplication are similar to the usual definitions of addition and multiplication of ordinals in ZFC, though the definition of exponentiation of relations in PM is not equivalent to the usual one used in ZFC. This covers series, which is PM's term for what is now called a totally ordered set. In particular it covers complete series, continuous functions between series with the order topology (though of course they do not use this terminology), well-ordered series, and series without "gaps" (those with a member strictly between any two given members). This section constructs the ring of integers, the fields of rational and real numbers, and "vector-families", which are related to what are now called torsors over abelian groups. This section compares the system in PM with the usual mathematical foundations of ZFC. The system of PM is roughly comparable in strength with Zermelo set theory (or more precisely a version of it where the axiom of separation has all quantifiers bounded). Apart from corrections of misprints, the main text of PM is unchanged between the first and second editions.
The main text in Volumes 1 and 2 was reset, so that it occupies fewer pages in each. In the second edition, Volume 3 was not reset, being photographically reprinted with the same page numbering; corrections were still made. The total number of pages (excluding the endpapers) in the first edition is 1,996; in the second, 2,000. Volume 1 has five new additions: In 1962, Cambridge University Press published a shortened paperback edition containing parts of the second edition of Volume 1: the new introduction (and the old), the main text up to ✱56, and Appendices A and C. The first edition was reprinted in 2009 by Merchant Books, ISBN 978-1-60386-182-3, ISBN 978-1-60386-183-0, ISBN 978-1-60386-184-7. Andrew D. Irvine says that PM sparked interest in symbolic logic and advanced the subject by popularizing it; it showcased the powers and capacities of symbolic logic; and it showed how advances in philosophy of mathematics and symbolic logic could go hand-in-hand with tremendous fruitfulness.[4] PM was in part brought about by an interest in logicism, the view on which all mathematical truths are logical truths. Though flawed, PM would be influential in several later advances in meta-logic, including Gödel's incompleteness theorems.[citation needed] The logical notation in PM was not widely adopted, possibly because its foundations are often considered a form of Zermelo–Fraenkel set theory.[citation needed] Scholarly, historical, and philosophical interest in PM is great and ongoing, and mathematicians continue to work with PM, whether for the historical reason of understanding the text or its authors, or for furthering insight into the formalizations of math and logic.[citation needed] The Modern Library placed PM 23rd in their list of the top 100 English-language nonfiction books of the twentieth century.[5]
https://en.wikipedia.org/wiki/Principia_Mathematica
In computer science, corecursion is a type of operation that is dual to recursion. Whereas recursion works analytically, starting on data further from a base case, breaking it down into smaller data, and repeating until one reaches a base case, corecursion works synthetically, starting from a base case and building it up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as they become available and needed, to produce further bits of data. A similar but distinct concept is generative recursion, which may lack a definite "direction" inherent in corecursion and recursion. Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as streams, so long as they can be produced from simple data (base cases) in a sequence of finite steps. Where recursion may not terminate, never reaching a base state, corecursion starts from a base state, and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-productive. Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example recurrence relations such as the factorial. Corecursion can produce both finite and infinite data structures as results, and may employ self-referential data structures. Corecursion is often used in conjunction with lazy evaluation, to produce only a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once).
Corecursion is a particularly important concept in functional programming, where corecursion and codata allow total languages to work with infinite data structures. Corecursion can be understood by contrast with recursion, which is more familiar. While corecursion is primarily of interest in functional programming, it can be illustrated using imperative programming, which is done below using the generator facility in Python. In these examples, local variables are used and assigned values imperatively (destructively), though these are not necessary in corecursion in pure functional programming. In pure functional programming, rather than assigning to local variables, these computed values form an invariable sequence, and prior values are accessed by self-reference (later values in the sequence reference earlier values in the sequence to be computed). The assignments simply express this in the imperative paradigm and explicitly specify where the computations happen, which serves to clarify the exposition. A classic example of recursion is computing the factorial, which is defined recursively by 0! := 1 and n! := n × (n − 1)!. To recursively compute its result on a given input, a recursive function calls (a copy of) itself with a different ("smaller" in some way) input and uses the result of this call to construct its result. The recursive call does the same, unless the base case has been reached. Thus a call stack develops in the process. For example, to compute fac(3), this recursively calls in turn fac(2), fac(1), fac(0) ("winding up" the stack), at which point recursion terminates with fac(0) = 1, and then the stack unwinds in reverse order and the results are calculated on the way back along the call stack to the initial call frame fac(3), which uses the result of fac(2) = 2 to calculate the final result as 3 × 2 = 3 × fac(2) =: fac(3) and finally returns fac(3) = 6. In this example a function returns a single value.
This stack unwinding can be explicated, defining the factorial corecursively, as an iterator, where one starts with the case of 1 =: 0!, then from this starting value constructs factorial values for increasing numbers 1, 2, 3, ... as in the above recursive definition with the "time arrow" reversed, as it were, by reading it backwards as n! × (n + 1) =: (n + 1)!. The corecursive algorithm thus defined produces a stream of all factorials. This may be concretely implemented as a generator. Symbolically, noting that computing the next factorial value requires keeping track of both n and f (a previous factorial value), this can be represented as: or in Haskell, meaning, "starting from n, f = 0, 1, on each step the next values are calculated as n + 1, f × (n + 1)". This is mathematically equivalent and almost identical to the recursive definition, but the +1 emphasizes that the factorial values are being built up, going forwards from the starting case, rather than being computed after first going backwards, down to the base case, with a −1 decrement. The direct output of the corecursive function does not simply contain the factorial n! values, but also includes for each value the auxiliary data of its index n in the sequence, so that any one specific result can be selected among them all, as and when needed. There is a connection with denotational semantics, where the denotations of recursive programs are built up corecursively in this way. In Python, a recursive factorial function can be defined as:[a] This could then be called for example as factorial(5) to compute 5!.
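The recursive Python listing referred to here did not survive in this copy; a standard definition along the lines the text describes is:

```python
def factorial(n):
    """Recursive factorial: winds up a call stack down to the base case 0! = 1."""
    if n == 0:
        return 1                      # base case
    return n * factorial(n - 1)       # recursive call on a smaller input

print(factorial(5))  # 120
```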
A corresponding corecursive generator can be defined as: This generates an infinite stream of factorials in order; a finite portion of it can be produced by: This could then be called to produce the factorials up to 5! via: If we are only interested in a certain factorial, just the last value can be taken, or we can fuse the production and the access into one function. As can be readily seen here, this is practically equivalent (just by substituting return for the only yield there) to the accumulator argument technique for tail recursion, unwound into an explicit loop. Thus it can be said that the concept of corecursion is an explication of the embodiment of iterative computation processes by recursive definitions, where applicable. In the same way, the Fibonacci sequence can be represented as: Because the Fibonacci sequence is a recurrence relation of order 2, the corecursive relation must track two successive terms, with the (b, −) corresponding to shifting forward by one step, and the (−, a + b) corresponding to computing the next term. This can then be implemented as follows (using parallel assignment): In Haskell, Tree traversal via a depth-first approach is a classic example of recursion. Dually, breadth-first traversal can very naturally be implemented via corecursion. Iteratively, one may traverse a tree by placing its root node in a data structure, then iterating with that data structure while it is non-empty, on each step removing the first node from it and placing the removed node's child nodes back into that data structure. If the data structure is a stack (LIFO), this yields depth-first traversal, and if the data structure is a queue (FIFO), this yields breadth-first traversal: Using recursion, a depth-first traversal of a tree is implemented simply as recursively traversing each of the root node's child nodes in turn. Thus the second child subtree is not processed until the first child subtree is finished.
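The generator listings described here were also dropped in this copy; sketches consistent with the description (corecursive factorial tracking the pair (n, f), and the order-2 Fibonacci recurrence using parallel assignment) are:

```python
from itertools import islice

def factorials():
    """Corecursive generator: builds factorial values up from the base case,
    tracking both the index n and the previous factorial value f."""
    n, f = 0, 1
    while True:
        yield f
        n, f = n + 1, f * (n + 1)

def fibonacci():
    """Order-2 recurrence: track two successive terms, (a, b) -> (b, a + b)."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b   # parallel assignment shifts the window forward

print(list(islice(factorials(), 6)))  # [1, 1, 2, 6, 24, 120]
print(list(islice(fibonacci(), 8)))   # [0, 1, 1, 2, 3, 5, 8, 13]
```

Substituting `return` for the `yield` (and looping a fixed number of times) recovers exactly the accumulator-argument formulation the text compares this to.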
The root node's value is handled separately, whether before the first child is traversed (resulting in pre-order traversal), after the first is finished and before the second (in-order), or after the second child node is finished (post-order) — assuming the tree is binary, for simplicity of exposition. The call stack (of the recursive traversal function invocations) corresponds to the stack that would be iterated over with the explicit LIFO structure manipulation mentioned above. Symbolically, "Recursion" has two meanings here. First, the recursive invocations of the tree traversal functions df_xyz. More pertinently, we need to contend with how the resulting list of values is built here. Recursive, bottom-up output creation will result in the right-to-left tree traversal. To have it actually performed in the intended left-to-right order, the sequencing would need to be enforced by some extraneous means, or it would be automatically achieved if the output were to be built in the top-down fashion, i.e., corecursively. A breadth-first traversal creating its output in the top-down order, corecursively, can also be implemented by starting at the root node, outputting its value,[b] then breadth-first traversing the subtrees – i.e., passing on the whole list of subtrees to the next step (not a single subtree, as in the recursive approach) – at the next step outputting the values of all of their root nodes, then passing on their child subtrees, etc.[c] In this case the generator function, indeed the output sequence itself, acts as the queue. As in the factorial example above, where the auxiliary information of the index (which step one was at, n) was pushed forward, in addition to the actual output of n!, in this case the auxiliary information of the remaining subtrees is pushed forward, in addition to the actual output. Symbolically, meaning that at each step, one outputs the list of values in this level's nodes, then proceeds to the next level's nodes.
Generating just the node values from this sequence simply requires discarding the auxiliary child-tree data, then flattening the list of lists (values are initially grouped by level (depth); flattening (ungrouping) yields a flat linear list). This is extensionally equivalent to the aux_bf specification above.

Notably, given an infinite tree,[d] the corecursive breadth-first traversal will traverse all nodes, just as for a finite tree, while the recursive depth-first traversal will go down one branch and not traverse all nodes; indeed, if traversing post-order, as in this example (or in-order), it will visit no nodes at all, because it never reaches a leaf. This shows the usefulness of corecursion rather than recursion for dealing with infinite data structures. One caveat still remains for trees with an infinite branching factor, which need a more attentive interlacing to explore the space better. See dovetailing.

In Python, this can be implemented as follows.[e] The usual post-order depth-first traversal can be defined as:[f] This can then be called by df(t) to print the values of the nodes of the tree in post-order depth-first order. The breadth-first corecursive generator can be defined as:[g] This can then be called to print the values of the nodes of the tree in breadth-first order.

Initial data types can be defined as being the least fixpoint (up to isomorphism) of some type equation; the isomorphism is then given by an initial algebra. Dually, final (or terminal) data types can be defined as being the greatest fixpoint of a type equation; the isomorphism is then given by a final coalgebra.
If the domain of discourse is the category of sets and total functions, then final data types may contain infinite, non-wellfounded values, whereas initial types do not.[1][2] On the other hand, if the domain of discourse is the category of complete partial orders and continuous functions, which corresponds roughly to the Haskell programming language, then final types coincide with initial types, and the corresponding final coalgebra and initial algebra form an isomorphism.[3] Corecursion is then a technique for recursively defining functions whose range (codomain) is a final data type, dual to the way that ordinary recursion recursively defines functions whose domain is an initial data type.[4]

The discussion below provides several examples in Haskell that distinguish corecursion. Roughly speaking, if one were to port these definitions to the category of sets, they would still be corecursive. This informal usage is consistent with existing textbooks about Haskell.[5] The examples used in this article predate the attempts to define corecursion and explain what it is.

The rule for primitive corecursion on codata is the dual to that for primitive recursion on data. Instead of descending on the argument by pattern-matching on its constructors (that were called up before, somewhere, so we receive a ready-made datum and get at its constituent sub-parts, i.e. "fields"), we ascend on the result by filling in its "destructors" (or "observers", that will be called afterwards, somewhere - so we're actually calling a constructor, creating another bit of the result to be observed later on). Thus corecursion creates (potentially infinite) codata, whereas ordinary recursion analyses (necessarily finite) data. Ordinary recursion might not be applicable to the codata because it might not terminate. Conversely, corecursion is not strictly necessary if the result type is data, because data must be finite.
In "Programming with streams in Coq: a case study: the Sieve of Eratosthenes"[6] we find that primes "are obtained by applying the primes operation to the stream (Enu 2)". Following the above notation, the sequence of primes (with a throwaway 0 prefixed to it) and the number streams being progressively sieved can be represented accordingly, in Coq or in Haskell. The authors discuss how the definition of sieve is not guaranteed always to be productive, and could become stuck, e.g. if called with [5,10..] as the initial stream.

Here is another example in Haskell. The following definition produces the list of Fibonacci numbers in linear time. This infinite list depends on lazy evaluation; elements are computed on an as-needed basis, and only finite prefixes are ever explicitly represented in memory. This feature allows algorithms on parts of codata to terminate; such techniques are an important part of Haskell programming. This can be done in Python as well.[7] The definition of zipWith can be inlined, leading to a version that employs a self-referential data structure. Ordinary recursion makes use of self-referential functions, but does not accommodate self-referential data. However, this is not essential to the Fibonacci example: it can be rewritten using only a self-referential function to construct the result. If that were used with a strict list constructor it would be an example of runaway recursion, but with a non-strict list constructor this guarded recursion gradually produces an indefinitely defined list.

Corecursion need not produce an infinite object; a corecursive queue[8] is a particularly good example of this phenomenon. The following definition produces a breadth-first traversal of a binary tree in the top-down manner, in linear time (already incorporating the flattening mentioned above), taking a tree and producing a list of its sub-trees (nodes and leaves).
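A rough Python analogue of this list-as-queue technique, in which the output list itself plays the role of the queue. The tree encoding, `(value, children)` pairs, is an assumption made for the example; an index chasing the end of the growing list replaces the Haskell version's `gen len p` bookkeeping.

```python
# Breadth-first traversal where the result list doubles as the queue.
# A read index chases the end of the list being built; comparing it with
# the current length stands in for tracking the queue length, so the
# loop terminates exactly when the tree is finite.

def bft(tree):
    out = [tree]              # list of subtrees: both queue and result
    i = 0                     # back-pointer into the list being built
    while i < len(out):
        _value, children = out[i]
        out.extend(children)  # "enqueue" by appending to the output
        i += 1
    return [value for value, _ in out]

t = (1, [(2, [(4, []), (5, [])]), (3, [])])
print(bft(t))   # → [1, 2, 3, 4, 5]
```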
This list serves a dual purpose, as both the input queue and the result (gen len p produces its output len notches ahead of its input back-pointer, p, along the list). It is finite if and only if the initial tree is finite. The length of the queue must be explicitly tracked in order to ensure termination; this can safely be elided if this definition is applied only to infinite trees. This Haskell code uses a self-referential data structure, but does not essentially depend on lazy evaluation. It can be straightforwardly translated into, e.g., Prolog, which is not a lazy language. What is essential is the ability to build a list (used as the queue) in the top-down manner. For that, Prolog has tail recursion modulo cons (i.e. open-ended lists), which is also emulatable in Scheme, C, etc. using linked lists with a mutable tail sentinel pointer.

Another particular example gives a solution to the problem of breadth-first labeling.[9] The function label visits every node in a binary tree in breadth-first fashion, replacing each label with an integer, each subsequent integer bigger than the last by 1. This solution employs a self-referential data structure, and the binary tree can be finite or infinite; a Prolog version can be written for comparison.

An apomorphism (such as an anamorphism, such as unfold) is a form of corecursion in the same way that a paramorphism (such as a catamorphism, such as fold) is a form of recursion. The Coq proof assistant supports corecursion and coinduction using the CoFixpoint command.

Corecursion, referred to as circular programming, dates at least to (Bird 1984), who credits John Hughes and Philip Wadler; more general forms were developed in (Allison 1989). The original motivations included producing more efficient algorithms (allowing a single pass over data in some cases, instead of requiring multiple passes) and implementing classical data structures, such as doubly linked lists and queues, in functional languages.
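The sieve and Fibonacci streams discussed above can be sketched with Python generators; these are analogues under assumed encodings, not the article's own listings. In both, the working state is pushed forward at each step alongside the actual output, and callers only ever force a finite prefix.

```python
from itertools import islice

def enu(n):
    # the stream n, n+1, n+2, ... (analogue of "Enu n" / [n..])
    while True:
        yield n
        n += 1

def sieve(stream):
    # corecursively emit the head, then sieve its multiples out of the
    # tail; like the definition discussed above, this is only productive
    # if every filtered stream keeps yielding survivors (it would get
    # stuck on the stream 5, 10, 15, ...)
    p = next(stream)
    yield p
    yield from sieve(n for n in stream if n % p != 0)

def fibs():
    # corecursive Fibonacci stream: the pair of "last two values" is
    # pushed forward at each step, in addition to the actual output
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(sieve(enu(2)), 10)))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(list(islice(fibs(), 10)))         # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```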
https://en.wikipedia.org/wiki/Corecursion
In computability theory, course-of-values recursion is a technique for defining number-theoretic functions by recursion. In a definition of a function f by course-of-values recursion, the value of f(n) is computed from the sequence ⟨f(0), f(1), …, f(n−1)⟩. The fact that such definitions can be converted into definitions using a simpler form of recursion is often used to prove that functions defined by course-of-values recursion are primitive recursive. Contrary to course-of-values recursion, in primitive recursion the computation of a value of a function requires only the previous value; for example, for a 1-ary primitive recursive function g the value of g(n+1) is computed only from g(n) and n.

The factorial function n! is recursively defined by the rules

0! = 1,
(n+1)! = (n+1) · n!.

This recursion is a primitive recursion because it computes the next value (n+1)! of the function based on the value of n and the previous value n! of the function. On the other hand, the function Fib(n), which returns the nth Fibonacci number, is defined with the recursion equations

Fib(0) = 0,
Fib(1) = 1,
Fib(n+2) = Fib(n+1) + Fib(n).

In order to compute Fib(n+2), the last two values of the Fib function are required. Finally, consider a function g whose defining recursion equations are such that, to compute g(n+1), all the previous values of g must be computed; no fixed finite number of previous values is sufficient in general for the computation of g. The functions Fib and g are examples of functions defined by course-of-values recursion.

In general, a function f is defined by course-of-values recursion if there is a fixed primitive recursive function h such that for all n,

f(n) = h(n, ⟨f(0), f(1), …, f(n−1)⟩),

where ⟨f(0), f(1), …, f(n−1)⟩ is a Gödel number encoding the indicated sequence. In particular,

f(0) = h(0, ⟨ ⟩)

provides the initial value of the recursion.
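The general scheme (each value computed by a fixed function h from the sequence of all previous values) can be sketched in Python, with an ordinary list standing in for the Gödel-encoded sequence. The helper names `course_of_values` and `h_fib` are illustrative, not from the text.

```python
def course_of_values(h, n):
    history = []                       # <f(0), ..., f(k-1)> so far
    for k in range(n + 1):
        history.append(h(k, history))  # f(k) = h(k, <f(0), ..., f(k-1)>)
    return history[n]

def h_fib(k, prev):
    # h tests its first argument to supply the initial values Fib(0), Fib(1)
    return k if k < 2 else prev[k - 1] + prev[k - 2]

print([course_of_values(h_fib, n) for n in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```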
The function h might test its first argument to provide explicit initial values; for instance, for Fib one could use the function defined by

h(n, s) = n                  if n < 2,
h(n, s) = s[n−1] + s[n−2]    otherwise,

where s[i] denotes extraction of the element i from an encoded sequence s; this is easily seen to be a primitive recursive function (assuming an appropriate Gödel numbering is used).

In order to convert a definition by course-of-values recursion into a primitive recursion, an auxiliary (helper) function is used. Suppose that one wants to have

f(n) = h(n, ⟨f(0), f(1), …, f(n−1)⟩).

To define f using primitive recursion, first define the auxiliary course-of-values function f̄ that should satisfy

f̄(n) = ⟨f(0), f(1), …, f(n−1)⟩,

where the right-hand side is taken to be a Gödel numbering for sequences. Thus f̄(n) encodes the first n values of f. The function f̄ can be defined by primitive recursion because f̄(n+1) is obtained by appending to f̄(n) the new element h(n, f̄(n)):

f̄(0) = ⟨ ⟩,
f̄(n+1) = append(n, f̄(n), h(n, f̄(n))),

where append(n, s, x) computes, whenever s encodes a sequence of length n, a new sequence t of length n + 1 such that t[n] = x and t[i] = s[i] for all i < n. This is a primitive recursive function, under the assumption of an appropriate Gödel numbering; h is assumed primitive recursive to begin with. Thus the recursion relation can be written as primitive recursion,

f̄(n+1) = g(n, f̄(n)),

where g is itself primitive recursive, being the composition of two such functions: g(i, j) = append(i, j, h(i, j)). Given f̄, the original function f can be defined by f(n) = f̄(n+1)[n], which shows that it is also a primitive recursive function.

In the context of primitive recursive functions, it is convenient to have a means to represent finite sequences of natural numbers as single natural numbers. One such method, Gödel's encoding, represents a sequence of positive integers ⟨n0, n1, n2, …, nk⟩ as

2^(n0) · 3^(n1) · 5^(n2) · ⋯ · p_k^(n_k),

where p_i represents the ith prime.
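The conversion just described can be sketched concretely in Python, with lists standing in for Gödel-encoded sequences; the names `f_bar` and `append` mirror the text, and everything else is illustrative.

```python
def append(n, s, x):
    # given s of length n, return t of length n+1 with t[i] = s[i] for
    # i < n and t[n] = x; primitive recursive under a suitable numbering
    assert len(s) == n
    return s + [x]

def f_bar(h, n):
    # primitive recursion on n:
    #   f_bar(0)   = <>
    #   f_bar(k+1) = append(k, f_bar(k), h(k, f_bar(k)))
    s = []
    for k in range(n):
        s = append(k, s, h(k, s))
    return s

def f(h, n):
    # recover the original function: f(n) = f_bar(n+1)[n]
    return f_bar(h, n + 1)[n]

h_fib = lambda k, s: k if k < 2 else s[k - 1] + s[k - 2]
print([f(h_fib, n) for n in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```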
It can be shown that, with this representation, the ordinary operations on sequences are all primitive recursive. These operations include, for example, determining the length of an encoded sequence and extracting a given element from it. Using this representation of sequences, it can be seen that if h(m) is primitive recursive then the function

f(n) = h(⟨f(0), f(1), …, f(n−1)⟩)

is also primitive recursive. When the sequence ⟨n0, n1, n2, …, nk⟩ is allowed to include zeros, it is instead represented as

2^(n0+1) · 3^(n1+1) · ⋯ · p_k^(n_k+1),

which makes it possible to distinguish the codes for the sequences ⟨0⟩ and ⟨0, 0⟩.

Not every recursive definition can be transformed into a primitive recursive definition. One known example is Ackermann's function, which is of the form A(m, n) and is provably not primitive recursive. Indeed, every new value A(m+1, n) depends on a sequence of previously defined values A(i, j), but the i's and j's for which values should be included in this sequence depend themselves on previously computed values of the function; namely (i, j) = (m, A(m+1, n)). Thus one cannot encode the previously computed sequence of values in a primitive recursive way in the manner suggested above (and, as it turns out, not at all, since the function is not primitive recursive).
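The zero-tolerant prime-power coding above can be sketched directly; `encode` and `decode` are illustrative helper names, and the naive prime generator is adequate for small examples.

```python
# <n0, ..., nk> is coded as 2^(n0+1) * 3^(n1+1) * ... * pk^(nk+1); the
# +1 in each exponent distinguishes <0> (code 2) from <0, 0> (code 6).

def primes():
    # naive prime generator, adequate for the example
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    code, gen = 1, primes()
    for x in seq:
        code *= next(gen) ** (x + 1)
    return code

def decode(code):
    seq, gen = [], primes()
    while code > 1:
        p, e = next(gen), 0
        while code % p == 0:
            code, e = code // p, e + 1
        seq.append(e - 1)   # undo the +1 in the exponent
    return seq

print(encode([0]), encode([0, 0]))  # → 2 6
print(decode(encode([3, 1, 4])))    # → [3, 1, 4]
```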
https://en.wikipedia.org/wiki/Course-of-values_recursion
Digital infinity is a technical term in theoretical linguistics. Alternative formulations are "discrete infinity" and "the infinite use of finite means". The idea is that all human languages follow a simple logical principle, according to which a limited set of digits—irreducible atomic sound elements—are combined to produce an infinite range of potentially meaningful expressions.

Language is, at its core, a system that is both digital and infinite. To my knowledge, there is no other biological system with these properties....

It remains for us to examine the spiritual element of speech ... this marvelous invention of composing from twenty-five or thirty sounds an infinite variety of words, which, although not having any resemblance in themselves to that which passes through our minds, nevertheless do not fail to reveal to others all of the secrets of the mind, and to make intelligible to others who cannot penetrate into the mind all that we conceive and all of the diverse movements of our souls.

Noam Chomsky cites Galileo as perhaps the first to recognise the significance of digital infinity. This principle, notes Chomsky, is "the core property of human language, and one of its most distinctive properties: the use of finite means to express an unlimited array of thoughts". In his Dialogo, Galileo describes with wonder the discovery of a means to communicate one's "most secret thoughts to any other person ... with no greater difficulty than the various collocations of twenty-four little characters upon a paper." "This is the greatest of all human inventions," Galileo continues, noting it to be "comparable to the creations of a Michelangelo".[1]

'Digital infinity' corresponds to Noam Chomsky's 'universal grammar' mechanism, conceived as a computational module inserted somehow into Homo sapiens' otherwise 'messy' (non-digital) brain.
This conception of human cognition—central to the so-called 'cognitive revolution' of the 1950s and 1960s—is generally attributed to Alan Turing, who was the first scientist to argue that a man-made machine might truly be said to 'think'. His often forgotten conclusion, however, was in line with previous observations that a "thinking" machine would be absurd, since we have no formal idea what "thinking" is, and indeed we still don't. Chomsky frequently pointed this out. Chomsky agreed that while a mind can be said to "compute", as we have some idea of what computing is and some good evidence the brain is doing it on at least some level, we cannot claim that a computer or any other machine is "thinking", because we have no coherent definition of what thinking is. Taking the example of what is called 'consciousness', Chomsky said that "We don't even have bad theories", echoing the famous physics criticism that a theory is "not even wrong." From Turing's seminal 1950 article, "Computing Machinery and Intelligence", published in Mind, Chomsky provides the example of a submarine being said to "swim." Turing clearly derided the idea. "If you want to call that swimming, fine," Chomsky says, repeatedly explaining in print and video how Turing is consistently misunderstood on this, one of his most cited observations.

Previously, the idea of a thinking machine had been famously dismissed by René Descartes as theoretically impossible. Neither animals nor machines can think, insisted Descartes, since they lack a God-given soul.[3] Turing was well aware of this traditional theological objection, and explicitly countered it.[4]

Today's digital computers are instantiations of Turing's theoretical breakthrough in conceiving the possibility of a man-made universal thinking machine—known nowadays as a 'Turing machine'. No physical mechanism can be intrinsically 'digital', Turing explained, since—examined closely enough—its possible states will vary without limit.
But if most of these states can be profitably ignored, leaving only a limited set of relevant distinctions, then functionally the machine may be considered 'digital':[4]

The digital computers considered in the last section may be classified amongst the "discrete-state machines." These are the machines which move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking, there are no such machines. Everything really moves continuously. But there are many kinds of machine which can profitably be thought of as being discrete-state machines. For instance in considering the switches for a lighting system it is a convenient fiction that each switch must be definitely on or definitely off. There must be intermediate positions, but for most purposes we can forget about them.

An implication is that 'digits' don't exist: they and their combinations are no more than convenient fictions, operating on a level quite independent of the material, physical world. In the case of a binary digital machine, the choice at each point is restricted to 'off' versus 'on'. Crucially, the intrinsic properties of the medium used to encode signals then have no effect on the message conveyed. 'Off' (or alternatively 'on') remains unchanged regardless of whether the signal consists of smoke, electricity, sound, light or anything else. In the case of analog (more-versus-less) gradations, this is not so, because the range of possible settings is unlimited. Moreover, in the analog case it does matter which particular medium is being employed: equating a certain intensity of smoke with a corresponding intensity of light, sound or electricity is just not possible. In other words, only in the case of digital computation and communication can information be truly independent of the physical, chemical or other properties of the materials used to encode and transmit messages.
In this way, digital computation and communication operate independently of the physical properties of the computing machine. As scientists and philosophers during the 1950s digested the implications, they exploited the insight to explain why 'mind' apparently operates on so different a level from 'matter'. Descartes's celebrated distinction between immortal 'soul' and mortal 'body' was conceptualised, following Turing, as no more than the distinction between (digitally encoded) information on the one hand and, on the other, the particular physical medium—light, sound, electricity or whatever—chosen to transmit the corresponding signals. Note that the Cartesian assumption of mind's independence of matter implied—in the human case at least—the existence of some kind of digital computer operating inside the human brain.

Information and computation reside in patterns of data and in relations of logic that are independent of the physical medium that carries them. When you telephone your mother in another city, the message stays the same as it goes from your lips to her ears even as it physically changes its form, from vibrating air, to electricity in a wire, to charges in silicon, to flickering light in a fibre optic cable, to electromagnetic waves, and then back again in reverse order. ... Likewise, a given programme can run on computers made of vacuum tubes, electromagnetic switches, transistors, integrated circuits, or well-trained pigeons, and it accomplishes the same things for the same reasons. This insight, first expressed by the mathematician Alan Turing, the computer scientists Alan Newell, Herbert Simon, and Marvin Minsky, and the philosophers Hilary Putnam and Jerry Fodor, is now called the computational theory of mind.
It is one of the great ideas in intellectual history, for it solves one of the puzzles that make up the 'mind-body problem': how to connect the ethereal world of meaning and intention, the stuff of our mental lives, with a physical hunk of matter like the brain. ... For millennia this has been a paradox. ... The computational theory of mind resolves the paradox.

Turing did not claim that the human mind really is a digital computer. More modestly, he proposed that digital computers might one day qualify in human eyes as machines endowed with "mind". However, it was not long before philosophers (most notably Hilary Putnam) took what seemed to be the next logical step—arguing that the human mind itself is a digital computer, or at least that certain mental "modules" are best understood that way.

Noam Chomsky rose to prominence as one of the most audacious champions of this 'cognitive revolution'. Language, he proposed, is a computational 'module' or 'device' unique to the human brain. Previously, linguists had thought of language as learned cultural behaviour: chaotically variable, inseparable from social life and therefore beyond the remit of natural science. The Swiss linguist Ferdinand de Saussure, for example, had defined linguistics as a branch of 'semiotics', this in turn being inseparable from anthropology, sociology and the study of man-made conventions and institutions. By picturing language instead as the natural mechanism of 'digital infinity', Chomsky promised to bring scientific rigour to linguistics as a branch of strictly natural science.

In the 1950s, phonology was generally considered the most rigorously scientific branch of linguistics. For phonologists, "digital infinity" was made possible by the human vocal apparatus conceptualised as a kind of machine consisting of a small number of binary switches. For example, "voicing" could be switched 'on' or 'off', as could palatisation, nasalisation and so forth.
Take the consonant [b], for example, and switch voicing to the 'off' position—and you get [p]. Every possible phoneme in any of the world's languages might in this way be generated by specifying a particular on/off configuration of the switches ('articulators') constituting the human vocal apparatus. This approach became celebrated as 'distinctive features' theory, in large part credited to the Russian linguist and polymath Roman Jakobson. The basic idea was that every phoneme in every natural language could in principle be reduced to its irreducible atomic components—a set of 'on' or 'off' choices ('distinctive features') allowed by the design of a digital apparatus consisting of the human tongue, soft palate, lips, larynx and so forth.

Chomsky's original work was in morphophonemics. During the 1950s, he became inspired by the prospect of extending Roman Jakobson's 'distinctive features' approach—by then hugely successful—far beyond its original field of application. Jakobson had already persuaded a young social anthropologist—Claude Lévi-Strauss—to apply distinctive features theory to the study of kinship systems, in this way inaugurating 'structural anthropology'. Chomsky—who got his job at the Massachusetts Institute of Technology thanks to the intervention of Jakobson and his student, Morris Halle—hoped to explore the extent to which similar principles might be applied to the various sub-disciplines of linguistics, including syntax and semantics.[6] If the phonological component of language was demonstrably rooted in a digital biological 'organ' or 'device', why not the syntactic and semantic components as well? Might not language as a whole prove to be a digital organ or device? This led some of Chomsky's early students to the idea of 'generative semantics'—the proposal that the speaker generates word and sentence meanings by combining irreducible constituent elements of meaning, each of which can be switched 'on' or 'off'.
To produce 'bachelor', using this logic, the relevant component of the brain must switch 'animate', 'human' and 'male' to the 'on' (+) position while keeping 'married' switched 'off' (−). The underlying assumption here is that the requisite conceptual primitives—irreducible notions such as 'animate', 'male', 'human', 'married' and so forth—are genetically determined internal components of the human language organ. This idea would rapidly encounter intellectual difficulties—sparking controversies culminating in the so-called 'linguistics wars', as described in Randy Allen Harris's 1993 publication of that name.[7] The linguistics wars attracted young and ambitious scholars impressed by the recent emergence of computer science and its promise of scientific parsimony and unification. If the theory worked, the simple principle of digital infinity would apply to language as a whole. Linguistics in its entirety might then lay claim to the coveted status of natural science. No part of the discipline—not even semantics—need be "contaminated" any longer by association with such 'un-scientific' disciplines as cultural anthropology or social science.[8][9]: 3[10]
https://en.wikipedia.org/wiki/Digital_infinity
Take this kiss upon the brow!
And, in parting from you now,
Thus much let me avow—
You are not wrong, who deem
That my days have been a dream;
Yet if hope has flown away
In a night, or in a day,
In a vision, or in none,
Is it therefore the less gone?
All that we see or seem
Is but a dream within a dream.

I stand amid the roar
Of a surf-tormented shore,
And I hold within my hand
Grains of the golden sand—
How few! yet how they creep
Through my fingers to the deep,
While I weep—while I weep!
O God! can I not grasp
Them with a tighter clasp?
O God! can I not save
One from the pitiless wave?
Is all that we see or seem
But a dream within a dream?

"A Dream Within a Dream" is a poem written by American poet Edgar Allan Poe, first published in 1849. The poem has 24 lines, divided into two stanzas. The poem dramatizes the confusion felt by the narrator as he watches the important things in life slip away.[1] Realizing he cannot hold on to even one grain of sand, he is led to his final question: whether all things are just a dream.[2]

It has been suggested that the "golden sand" referenced in the 15th line signifies that which is to be found in an hourglass, consequently time itself.[3] Another interpretation holds that the expression evokes an image derived from the 1848 finding of gold in California.[1] The latter interpretation seems unlikely, however, given the presence of the four almost identical lines describing the sand in another poem, "To ——," which is regarded as a blueprint for "A Dream Within a Dream" and precedes its publication by two decades.[3]

The poem was first published in the March 31, 1849, edition of the Boston-based story paper The Flag of Our Union.[2] The same publication had, only two weeks before, first published Poe's short story "Hop-Frog." The next month, owner Frederick Gleason announced it could no longer pay for whatever articles or poems it published.
https://en.wikipedia.org/wiki/A_Dream_Within_a_Dream_(poem)
The Droste effect (Dutch pronunciation: [ˈdrɔstə]), known in art as an example of mise en abyme, is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appear. This produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows. The effect is named after Droste, a Dutch brand of cocoa, with an image designed by Jan Misset in 1904. The Droste effect has since been used in the packaging of a variety of products. Apart from advertising, the effect is also seen in the Dutch artist M. C. Escher's 1956 lithograph Print Gallery, which portrays a gallery that depicts itself. The effect has been widely used on the covers of comic books, mainly in the 1940s.

The Droste effect is named after the image on the tins and boxes of Droste cocoa powder, which displayed a nurse carrying a serving tray with a cup of hot chocolate and a box bearing the same image, designed by Jan Misset.[2] This familiar image was introduced in 1904 and maintained for decades, with slight variations from 1912 by artists including Adolphe Mouron. The poet and columnist Nico Scheepmaker introduced wider usage of the term in the late 1970s.[3]

The appearance is recursive: the smaller version contains an even smaller version of the picture, and so on.[4] Only in theory could this go on forever, as fractals do; practically, it continues only as long as the resolution of the picture allows, which is relatively short, since each iteration geometrically reduces the picture's size.[5][6]

The Droste effect was anticipated by Giotto early in the 14th century, in his Stefaneschi Triptych. The altarpiece portrays in its centre panel Cardinal Giacomo Gaetani Stefaneschi offering the triptych itself to St.
Peter.[7] There are also several examples from medieval times of books featuring images containing the book itself, or window panels in churches depicting miniature copies of the window panel itself.[8]

The Dutch artist M. C. Escher made use of the Droste effect in his 1956 lithograph Print Gallery, which portrays a gallery containing a print which depicts the gallery, each time both reduced and rotated, but with a void at the centre of the image. The work has attracted the attention of mathematicians including Hendrik Lenstra. They devised a method of filling in the artwork's central void in an additional application of the Droste effect, by successively rotating and shrinking an image of the artwork.[4][9][10]

In the 20th century, the Droste effect was used to market a variety of products. The packaging of Land O'Lakes butter featured a Native American woman holding a package of butter with a picture of herself.[4] Morton Salt similarly made use of the effect.[11] The cover of the 1969 vinyl album Ummagumma by Pink Floyd shows the band members sitting in various places, with a picture on the wall showing the same scene, but with the order of the band members rotated.[12] The logo of The Laughing Cow cheese spread brand pictures a cow with earrings. On closer inspection, these are seen to be images of the circular cheese spread package, each bearing the image of the mascot itself.[4] The Droste effect is a theme in Russell Hoban's children's novel The Mouse and His Child, appearing in the form of a label on a can of "Bonzo Dog Food" which depicts itself.[13][14]

The Droste effect has been a motif for the covers of comic books for many years, known as an "infinity cover". Such covers were especially popular during the 1940s. Examples include Batman #8 (December 1941 – January 1942), Action Comics #500 (October 1979), and Bongo Comics' Free For All! (2007 ed.). Little Giant Comics #1 (July 1938) is said to be the first-published example of an infinity cover.[15]
https://en.wikipedia.org/wiki/Droste_effect
A false awakening is a vivid and convincing dream about awakening from sleep, while the dreamer in reality continues to sleep. After a false awakening, subjects often dream they are performing their daily morning routine, such as showering or eating breakfast. False awakenings, mainly those in which one dreams that they have awoken from a sleep that featured dreams, take on aspects of a double dream or a dream within a dream. A classic example in fiction is the double false awakening of the protagonist in Gogol's Portrait (1835).

Studies have shown that false awakening is closely related to lucid dreaming, and that the two often transform into one another. The only differentiating feature between them is that the dreamer has a logical understanding of the dream in a lucid dream, while that is not the case in a false awakening.[1] Once one realizes they are falsely awakened, they either wake up or begin lucid dreaming.[1]

A false awakening may occur following a dream or following a lucid dream (one in which the dreamer has been aware of dreaming). In particular, if the false awakening follows a lucid dream, the false awakening may turn into a "pre-lucid dream",[2] that is, one in which the dreamer may start to wonder if they are really awake and may or may not come to the correct conclusion. In a study by Harvard psychologist Deirdre Barrett, 2,000 dreams from 200 subjects were examined, and it was found that false awakenings and lucidity were significantly more likely to occur within the same dream or within different dreams of the same night. False awakenings often preceded lucidity as a cue, but they could also follow the realization of lucidity, often losing it in the process.[3]

Because the mind still dreams after a false awakening, there may be more than one false awakening in a single dream. Subjects may dream they wake up, eat breakfast, brush their teeth, and so on; suddenly awaken again in bed (still in a dream), begin morning rituals again, awaken again, and so forth.
The philosopher Bertrand Russell claimed to have experienced "about a hundred" false awakenings in succession while coming around from a general anesthetic.[4] Giorgio Buzzi suggests that false awakenings may indicate the occasional reappearance of a vestigial (or otherwise anomalous) form of REM sleep in the context of disturbed or hyperaroused sleep (lucid dreaming, sleep paralysis, or situations of high anticipation). This peculiar form of REM sleep permits the replay of unaltered experiential memories, thus providing a unique opportunity to study how waking experiences interact with the hypothesized predictive model of the world. In particular, it could permit a glimpse of the protoconscious world without the distorting effect of ordinary REM sleep.[5] In accordance with the proposed hypothesis, a high prevalence of false awakenings could be expected in children, whose "REM sleep machinery" might be less developed.[5] Gibson's dream protoconsciousness theory states that false awakening is shaped on some fixed patterns depicting real activities, especially the day-to-day routine. False awakening is often associated with highly realistic environmental details of familiar events, such as day-to-day activities or autobiographic and episodic moments.[5] Certain aspects of life may be dramatized or out of place in false awakenings. Things may seem wrong: details, like the painting on a wall, not being able to talk, or difficulty reading (reportedly, reading in lucid dreams is often difficult or impossible).[6] A common theme in false awakenings is visiting the bathroom, upon which the dreamer will see that their reflection in the mirror is distorted (which can be an opportunity for lucidity, but usually results in wakefulness). Celia Green suggested a distinction should be made between two types of false awakening:[2] Type 1 is the more common, in which the dreamer seems to wake up, but not necessarily in realistic surroundings; that is, not in their own bedroom. A pre-lucid dream may ensue.
More commonly, dreamers will believe they have awakened, and then either genuinely wake up in their own bed or "fall back asleep" in the dream. A common false awakening is a "late for work" scenario. A person may "wake up" in a typical room, with most things looking normal, and realize they overslept and missed the start time at work or school. Clocks, if found in the dream, will show a time indicating that fact. The resulting panic is often strong enough to truly awaken the dreamer (much like from a nightmare). Another common Type 1 example of false awakening can result in bedwetting. In this scenario, the dreamer has had a false awakening and, while in the dream state, has performed all the traditional behaviors that precede urinating: arising from bed, walking to the bathroom, and sitting down on the toilet or walking up to a urinal. The dreamer may then urinate and suddenly wake up to find they have wet themselves. The Type 2 false awakening seems to be considerably less common. Green characterized it as follows: The subject appears to wake up in a realistic manner but to an atmosphere of suspense.... The dreamer's surroundings may at first appear normal, and they may gradually become aware of something uncanny in the atmosphere, and perhaps of unwanted [unusual] sounds and movements, or they may "awake" immediately to a "stressed" and "stormy" atmosphere. In either case, the end result would appear to be characterized by feelings of suspense, excitement or apprehension.[7] Charles McCreery draws attention to the similarity between this description and the description by the German psychopathologist Karl Jaspers (1923) of the so-called "primary delusionary experience" (a general feeling that precedes more specific delusory belief).[8] Jaspers wrote: Patients feel uncanny and that there is something suspicious afoot. Everything gets a new meaning.
The environment is somehow different—not to a gross degree—perception is unaltered in itself but there is some change which envelops everything with a subtle, pervasive and strangely uncertain light.... Something seems in the air which the patient cannot account for, a distrustful, uncomfortable, uncanny tension invades him.[9] McCreery suggests this phenomenological similarity is not coincidental and results from the idea that both phenomena, the Type 2 false awakening and the primary delusionary experience, are phenomena of sleep.[10] He suggests that the primary delusionary experience, like other phenomena of psychosis such as hallucinations and secondary or specific delusions, represents an intrusion into waking consciousness of processes associated with stage 1 sleep. It is suggested that the reason for these intrusions is that the psychotic subject is in a state of hyperarousal, a state that can lead to what Ian Oswald called "microsleeps" in waking life.[11] Other researchers doubt that these are clearly distinguished types, as opposed to being points on a subtle spectrum.[12] Clinical and neurophysiological descriptions of false awakening are rare. One notable report, by Takeuchi et al.,[13] was considered by some experts to be a case of false awakening. It describes a hypnagogic hallucination of an unpleasant and fearful feeling of presence in a sleep laboratory, with the perception of having risen from the bed. The polysomnography showed abundant trains of alpha rhythm on EEG (sometimes blocked by REMs mixed with slow eye movements and low muscle tone). Conversely, the two experiences of false awakening monitored here were close to regular REM sleep. Quantitative analysis clearly shows predominantly theta waves, suggesting that these two experiences are a product of a dreaming rather than a fully conscious brain.[14]
https://en.wikipedia.org/wiki/False_awakening
In combinatory logic for computer science, a fixed-point combinator (or fixpoint combinator)[1]: p.26 is a higher-order function (i.e., a function which takes a function as argument) that returns some fixed point (a value that is mapped to itself) of its argument function, if one exists. Formally, if fix is a fixed-point combinator and the function f has one or more fixed points, then fix f is one of these fixed points, i.e., fix f = f (fix f). Fixed-point combinators can be defined in the lambda calculus and in functional programming languages, and provide a means to allow for recursive definitions. In the classical untyped lambda calculus, every function has a fixed point. A particular implementation of fix is Haskell Curry's paradoxical combinator Y, given by[2]: 131[note 1][note 2] Y = λf.(λx.f (x x)) (λx.f (x x)). (Here, using the standard notations and conventions of lambda calculus: Y is a function that takes one argument f and returns the entire expression following the first period; the expression λx.f (x x) denotes a function that takes one argument x, thought of as a function, and returns the expression f (x x), where (x x) denotes x applied to itself. Juxtaposition of expressions denotes function application, is left-associative, and has higher precedence than the period.) The following calculation verifies that Y g is indeed a fixed point of the function g: Y g = (λx.g (x x)) (λx.g (x x)) = g ((λx.g (x x)) (λx.g (x x))) = g (Y g). The lambda term g (Y g) may not, in general, β-reduce to the term Y g. However, both terms β-reduce to the same term, as shown. Applied to a function with one variable, the Y combinator usually does not terminate. More interesting results are obtained by applying the Y combinator to functions of two or more variables. The added variables may be used as a counter, or index.
The resulting function behaves like a while or a for loop in an imperative language. Used in this way, the Y combinator implements simple recursion. The lambda calculus does not allow a function to appear as a term in its own definition, as is possible in many programming languages, but a function can be passed as an argument to a higher-order function that applies it in a recursive manner. The Y combinator may also be used in implementing Curry's paradox. The heart of Curry's paradox is that untyped lambda calculus is unsound as a deductive system, and the Y combinator demonstrates this by allowing an anonymous expression to represent zero, or even many, values. This is inconsistent in mathematical logic. An example implementation of Y in the language R is presented below: This can then be used to implement factorial as follows: Y is only needed when function names are absent. Substituting all the definitions into one line, so that function names are not required, gives: This works because R uses lazy evaluation. Languages that use strict evaluation, such as Python, C++, and other strict programming languages, can often express Y; however, any implementation is useless in practice since it loops indefinitely until terminating via a stack overflow. The Y combinator is an implementation of a fixed-point combinator in lambda calculus. Fixed-point combinators may also be easily defined in other functional and imperative languages; the implementation in lambda calculus is more difficult due to limitations of lambda calculus. The fixed-point combinator may be used in a number of different areas: Fixed-point combinators may be applied to a range of different functions, but normally will not terminate unless there is an extra parameter. When the function to be fixed refers to its parameter, another call to the function is invoked, so the calculation never gets started. Instead, the extra parameter is used to trigger the start of the calculation.
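Since Python uses strict evaluation, a direct transcription of Y loops forever; the eta-expanded Z form defers the self-application until an argument arrives. A minimal sketch of the idea (the names `Z`, `fact` and `rec` are illustrative, not from the source):

```python
# Z combinator: eta-expanded Y, safe under strict (eager) evaluation.
# The inner lambda v delays x(x) until a value is actually supplied.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial written without naming itself: the recursive call goes
# through the argument supplied by the combinator.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```

Note that `fact` is an anonymous recursive definition: nothing inside the lambda refers to the name `fact`.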
The type of the fixed point is the return type of the function being fixed. This may be a real number, a function, or any other type. In the untyped lambda calculus, the function to which the fixed-point combinator is applied may be expressed using an encoding, like Church encoding. In this case particular lambda terms (which define functions) are considered as values. "Running" (beta-reducing) the fixed-point combinator on the encoding gives a lambda term for the result, which may then be interpreted as a fixed-point value. Alternately, a function may be considered as a lambda term defined purely in lambda calculus. These different approaches affect how a mathematician and a programmer may regard a fixed-point combinator. A mathematician may see the Y combinator applied to a function as an expression satisfying the fixed-point equation, and therefore a solution. In contrast, a person only wanting to apply a fixed-point combinator to some general programming task may see it only as a means of implementing recursion. Many functions do not have any fixed points, for instance f : ℕ → ℕ with f(n) = n + 1. Using Church encoding, natural numbers can be represented in lambda calculus, and this function f can be defined in lambda calculus. However, its domain will now contain all lambda expressions, not just those representing natural numbers. The Y combinator, applied to f, will yield a fixed point for f, but this fixed point won't represent a natural number. If trying to compute Y f in an actual programming language, an infinite loop will occur. The fixed-point combinator may be defined in mathematics and then implemented in other languages. General mathematics defines a function based on its extensional properties.[3] That is, two functions are equal if they perform the same mapping. Lambda calculus and programming languages regard function identity as an intensional property. A function's identity is based on its implementation.
A lambda calculus function (or term) is an implementation of a mathematical function. In the lambda calculus there are a number of combinators (implementations) that satisfy the mathematical definition of a fixed-point combinator. Combinatory logic is a higher-order function theory. A combinator is a closed lambda expression, meaning that it has no free variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables. Fixed-point combinators can be used to implement recursive definitions of functions. However, they are rarely used in practical programming.[4] Strongly normalizing type systems such as the simply typed lambda calculus disallow non-termination, and hence fixed-point combinators often cannot be assigned a type or require complex type system features. Furthermore, fixed-point combinators are often inefficient compared to other strategies for implementing recursion, as they require more function reductions and construct and take apart a tuple for each group of mutually recursive definitions.[1]: page 232 The factorial function provides a good example of how a fixed-point combinator may be used to define recursive functions. The standard recursive definition of the factorial function in mathematics can be written as fact(n) = 1 if n = 0, else n × fact(n − 1), where n is a non-negative integer. Implementing this in lambda calculus, where integers are represented using Church encoding, encounters the problem that the lambda calculus disallows the name of a function ("fact") to be used in the function's definition. This can be circumvented using a fixed-point combinator fix as follows. Define a function F of two arguments f and n: F = λf.λn.(IsZero n) 1 (n × (f (pred n))). (Here (IsZero n) is a function that takes two arguments and returns its first argument if n = 0, and its second argument otherwise; pred n evaluates to n − 1.)
Now define fact = fix F. Then fact is a fixed point of F, which gives fact n = (F fact) n = (IsZero n) 1 (n × (fact (pred n))), as desired. The Y combinator, discovered by Haskell Curry, is defined as Y = λf.(λx.f (x x)) (λx.f (x x)). In untyped lambda calculus fixed-point combinators are not especially rare. In fact there are infinitely many of them.[5] In 2005 Mayer Goldberg showed that the set of fixed-point combinators of untyped lambda calculus is recursively enumerable.[6] The Y combinator can be expressed in the SKI-calculus as Y = S (K (S I I)) (S (S (K S) K) (K (S I I))). Additional combinators (the B, C, K, W system) allow for much shorter encodings. With U = S I I the self-application combinator, since S (K x) y z = x (y z) = B x y z and S x (K y) z = x z y = C x y z, the above becomes Y = S (K U) (S B (K U)). The shortest fixed-point combinator in the SK-calculus using S and K combinators only, found by John Tromp, is Y' = S S K (S (K (S S (S (S S K)))) K), although note that it is not in normal form; its normal form is longer. This combinator corresponds to the lambda expression Y' = (λx.λy.x y x) (λy.λx.y (x y x)). The following fixed-point combinator is simpler than the Y combinator, and β-reduces into the Y combinator; it is sometimes cited as the Y combinator itself: X = λf.(λx.x x) (λx.f (x x)). Another common fixed-point combinator is the Turing fixed-point combinator (named after its discoverer, Alan Turing):[7][2]: 132 Θ = (λx.λy.y (x x y)) (λx.λy.y (x x y)). Its advantage over Y is that Θ f beta-reduces to f (Θ f),[note 3] whereas Y f and f (Y f) only beta-reduce to a common term. Θ also has a simple call-by-value form: Θ_v = (λx.λy.y (λz.x x y z)) (λx.λy.y (λz.x x y z)). The analog for mutual recursion is a polyvariadic fixed-point combinator,[8][9][10] which may be denoted Y*.
In a strict programming language the Y combinator will expand until stack overflow, or never halt in the case of tail call optimization.[11] The Z combinator will work in strict languages (also called eager languages, where applicative evaluation order is applied). The Z combinator has the next argument defined explicitly, preventing the expansion of Z g in the right-hand side of the definition:[12] Z = λf.(λx.f (λv.x x v)) (λx.f (λv.x x v)), and in lambda calculus it is an eta-expansion of the Y combinator, so that Z g v = g (Z g) v. If F is a fixed-point combinator in untyped lambda calculus, then there is: Terms that have the same Böhm tree as a fixed-point combinator, i.e., have the same infinite extension λx.x (x (x ⋯)), are called non-standard fixed-point combinators. Any fixed-point combinator is also a non-standard one, but not all non-standard fixed-point combinators are fixed-point combinators, because some of them fail to satisfy the fixed-point equation that defines the "standard" ones. These combinators are called strictly non-standard fixed-point combinators; an example is the following combinator: N = B M (B (B M) B), where B = λx.λy.λz.x (y z) and M = λx.x x. In its reduction there appear terms N_i, modifications of N created on the fly, which add i instances of x at once into the chain while being replaced with N_{i+1}. The set of non-standard fixed-point combinators is not recursively enumerable.[6] The Y combinator is a particular implementation of a fixed-point combinator in lambda calculus. Its structure is determined by the limitations of lambda calculus. It is not necessary or helpful to use this structure in implementing the fixed-point combinator in other languages. Simple examples of fixed-point combinators implemented in some programming paradigms are given below.
In a language that supports lazy evaluation, such as Haskell, it is possible to define a fixed-point combinator using the defining equation of the fixed-point combinator; it is conventionally named fix. Since Haskell has lazy data types, this combinator can also be used to define fixed points of data constructors (and not only to implement recursive functions). The definition is given here, followed by some usage examples. In Hackage, the original sample is:[13] In a strict functional language, as illustrated below with OCaml, the argument to f is expanded beforehand, yielding an infinite call sequence. This may be resolved by defining fix with an extra parameter. In a multi-paradigm functional language (one decorated with imperative features), such as Lisp, Peter Landin suggested the use of a variable assignment to create a fixed-point combinator,[14] as in the below example using Scheme: Using a lambda calculus with axioms for assignment statements, it can be shown that Y! satisfies the same fixed-point law as the call-by-value Y combinator:[15][16] In more idiomatic modern Scheme usage, this would typically be handled via a letrec expression, as lexical scope was introduced to Lisp in the 1970s: Or without the internal label: This example is a slightly interpretive implementation of a fixed-point combinator. A class is used to contain the fix function, called fixer. The function to be fixed is contained in a class that inherits from fixer. The fix function accesses the function to be fixed as a virtual function. As for the strict functional definition, fix is explicitly given an extra parameter x, which means that lazy evaluation is not needed. Another example can be shown to demonstrate SKI combinator calculus (with given bird names from combinatory logic) being used to build up the Z combinator to achieve tail-call-like behavior through trampolining: In System F (polymorphic lambda calculus) a polymorphic fixed-point combinator has type[17] ∀a.(a → a) → a, where a is a type variable.
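The strict-language workaround described above (defining fix with an extra parameter, so that the recursive unfolding is deferred until an argument is supplied) can be sketched in Python; the names `fix` and `fib` are illustrative:

```python
# fix f x = f (fix f) x: the extra parameter x delays the unfolding,
# so fix(f) builds a callable instead of expanding immediately.
def fix(f):
    return lambda x: f(fix(f))(x)

# A function to fix: it receives "itself" as its first argument.
fib = fix(lambda rec: lambda n: n if n < 2 else rec(n - 1) + rec(n - 2))

print(fib(10))  # 55
```

This mirrors the OCaml resolution `let rec fix f x = f (fix f) x`; without the eta-expansion, evaluating `f(fix(f))` eagerly would recurse forever before any argument arrived.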
That is, if the type of fix f fulfilling the equation fix f = f (fix f) is a (the most general type), then the type of f is a → a. So then, fix takes a function which maps a to a and uses it to return a value of type a. In the simply typed lambda calculus extended with recursive data types, fixed-point operators can be written, but the type of a "useful" fixed-point operator (one whose application always returns) may be restricted. In the simply typed lambda calculus, the fixed-point combinator Y cannot be assigned a type,[18] because at some point it would deal with the self-application sub-term x x by the application rule, where x would have the infinite type t₁ = t₁ → t₂. In fact, no fixed-point combinator can be typed; in those systems, any support for recursion must be explicitly added to the language. In programming languages that support recursive data types, the unbounded recursion in t = t → a, which creates the infinite type t, is broken by marking the t type explicitly as a recursive type Rec a, which is defined so as to be isomorphic to (or just to be a synonym of) Rec a → a. The Rec a type value is created by simply tagging a function value of type Rec a → a with the data constructor tag Rec (or any other of our choosing).
For example, in the following Haskell code, Rec and app are the names of the two directions of the isomorphism, with types:[19][20] which lets us write: Or equivalently in OCaml: Alternatively: Because fixed-point combinators can be used to implement recursion, it is possible to use them to describe specific types of recursive computations, such as those in fixed-point iteration, iterative methods, recursive join in relational databases, data-flow analysis, FIRST and FOLLOW sets of non-terminals in a context-free grammar, transitive closure, and other types of closure operations. A function for which every input is a fixed point is called an identity function. Formally: f x = x for all x. In contrast to universal quantification over all x, a fixed-point combinator constructs one value that is a fixed point of f. The remarkable property of a fixed-point combinator is that it constructs a fixed point for an arbitrary given function f. Other functions have the special property that, after being applied once, further applications don't have any effect. More formally: f (f x) = f x for all x. Such functions are called idempotent (see also Projection (mathematics)). An example of such a function is the function that returns 0 for all even integers, and 1 for all odd integers. In lambda calculus, from a computational point of view, applying a fixed-point combinator to an identity function or an idempotent function typically results in non-terminating computation. For example, one obtains Y (λx.x) = (λx.x) (Y (λx.x)) = Y (λx.x), where the resulting term can only reduce to itself and represents an infinite loop. Fixed-point combinators do not necessarily exist in more restrictive models of computation. For instance, they do not exist in simply typed lambda calculus.
The Y combinator allows recursion to be defined as a set of rewrite rules,[21] without requiring native recursion support in the language.[22] In programming languages that support anonymous functions, fixed-point combinators allow the definition and use of anonymous recursive functions, i.e., without having to bind such functions to identifiers. In this setting, the use of fixed-point combinators is sometimes called anonymous recursion.[note 4][23]
https://en.wikipedia.org/wiki/Fixed_point_combinator
In mathematics, infinite compositions of analytic functions (ICAF) offer alternative formulations of analytic continued fractions, series, products and other infinite expansions, and the theory evolving from such compositions may shed light on the convergence or divergence of these expansions. Some functions can actually be expanded directly as infinite compositions. In addition, it is possible to use ICAF to evaluate solutions of fixed-point equations involving infinite expansions. Complex dynamics offers another venue for iteration of systems of functions rather than a single function. For infinite compositions of a single function see Iterated function. For compositions of a finite number of functions, useful in fractal theory, see Iterated function system. Although the title of this article specifies analytic functions, there are results for more general functions of a complex variable as well. There are several notations describing infinite compositions, including the following: Forward compositions: F_{k,n}(z) = f_k ∘ f_{k+1} ∘ ⋯ ∘ f_{n−1} ∘ f_n(z). Backward compositions: G_{k,n}(z) = f_n ∘ f_{n−1} ∘ ⋯ ∘ f_{k+1} ∘ f_k(z). In each case convergence is interpreted as the existence of the limits lim_{n→∞} F_{k,n}(z) and lim_{n→∞} G_{k,n}(z). For convenience, set F_n(z) = F_{1,n}(z) and G_n(z) = G_{1,n}(z). One may also write F_n(z) = R_{k=1}^{n} f_k(z) = f_1 ∘ f_2 ∘ ⋯ ∘ f_n(z) and G_n(z) = L_{k=1}^{n} g_k(z) = g_n ∘ g_{n−1} ∘ ⋯ ∘ g_1(z). Many results can be considered extensions of the following result: Contraction Theorem for Analytic Functions.[1] Let f be analytic in a simply-connected region S and continuous on the closure S̄ of S. Suppose f(S) is a bounded set contained in S.
Then for all z in S there exists an attractive fixed point α of f in S such that F_n(z) = (f ∘ f ∘ ⋯ ∘ f)(z) → α. Let {f_n} be a sequence of functions analytic on a simply-connected domain S. Suppose there exists a compact set Ω ⊂ S such that for each n, f_n(S) ⊂ Ω. Forward (inner or right) Compositions Theorem: {F_n} converges uniformly on compact subsets of S to a constant function F(z) = λ.[2] Backward (outer or left) Compositions Theorem: {G_n} converges uniformly on compact subsets of S to γ ∈ Ω if and only if the sequence of fixed points {γ_n} of the {f_n} converges to γ.[3] Additional theory resulting from investigations based on these two theorems, particularly the Forward Compositions Theorem, includes location analysis for the limits obtained in the following reference.[4] For a different approach to the Backward Compositions Theorem, see the following reference.[5] Regarding the Backward Compositions Theorem, the example f_{2n}(z) = 1/2 and f_{2n−1}(z) = −1/2 for S = {z : |z| < 1} demonstrates the inadequacy of simply requiring contraction into a compact subset, as in the Forward Compositions Theorem.
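The Forward Compositions Theorem can be checked numerically. A minimal sketch, assuming the illustrative maps f_n(z) = (z + a_n)/4 with |a_n| = 1, each of which sends the unit disk S into the compact set {|w| ≤ 1/2} ⊂ S; the theorem predicts F_n(z) tends to a constant λ independent of z:

```python
# Forward compositions F_n = f_1 ∘ f_2 ∘ ... ∘ f_n of analytic maps
# f_n(z) = (z + a_n)/4, contracting the unit disk into {|w| <= 1/2}.
import cmath

def f(n, z):
    a_n = cmath.exp(1j * n)            # arbitrary coefficients with |a_n| = 1
    return (z + a_n) / 4

def F(n, z):
    for k in range(n, 0, -1):          # innermost factor is f_n, outermost f_1
        z = f(k, z)
    return z

z1, z2 = 0.9j, -0.5 + 0.3j             # two starting points in the unit disk
print(abs(F(60, z1) - F(60, z2)))      # ~0: the limit does not depend on z
```

Each outer layer contracts distances by 1/4, so |F_n(z1) − F_n(z2)| = (1/4)^n |z1 − z2|, which is why the limit function is constant.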
For functions not necessarily analytic the Lipschitz condition suffices: Theorem.[6] Suppose S is a simply connected compact subset of ℂ and let t_n : S → S be a family of functions satisfying |t_n(z₁) − t_n(z₂)| ≤ ρ|z₁ − z₂| for all n and all z₁, z₂ ∈ S, with ρ < 1. Define G_n(z) = (t_n ∘ t_{n−1} ∘ ⋯ ∘ t_1)(z) and F_n(z) = (t_1 ∘ t_2 ∘ ⋯ ∘ t_n)(z). Then F_n(z) → β ∈ S uniformly on S. If α_n is the unique fixed point of t_n, then G_n(z) → α uniformly on S if and only if |α_n − α| = ε_n → 0. Results involving entire functions include the following, as examples. Set f_n(z) = a_n z + c_{2,n} z² + c_{3,n} z³ + ⋯ and ρ_n = sup_{r≥2} |c_{r,n}|^{1/(r−1)}. Then the following results hold: Theorem E1.[7] If a_n ≡ 1 and Σ_{n=1}^∞ ρ_n < ∞, then F_n → F, entire. Theorem E2.[8] Set ε_n = |a_n − 1| and suppose there exist non-negative δ_n, M₁, M₂, R such that the following holds: Σ_{n=1}^∞ ε_n < ∞, Σ_{n=1}^∞ δ_n < ∞, Π_{n=1}^∞ (1 + δ_n) < M₁, Π_{n=1}^∞ (1 + ε_n) < M₂, and ρ_n < δ_n/(R M₁ M₂). Then G_n(z) → G(z), analytic for |z| < R. Convergence is uniform on compact subsets of {z : |z| < R}.
Additional elementary results include: Theorem GF3.[6] Suppose f_k(z) = z + ρ_k φ_k(z), where there exist R, M > 0 such that |z| < R implies |φ_k(z)| < M for all k. Furthermore, suppose ρ_k ≥ 0, Σ_{k=1}^∞ ρ_k < ∞ and R > M Σ_{k=1}^∞ ρ_k. Then for R* < R − M Σ_{k=1}^∞ ρ_k, G_n(z) ≡ (f_n ∘ f_{n−1} ∘ ⋯ ∘ f_1)(z) → G(z) for {z : |z| < R*}. Theorem GF4.[6] Suppose f_k(z) = z + ρ_k φ_k(z), where there exist R, M > 0 such that |z| < R and |ζ| < R imply |φ_k(z)| < M and |φ_k(z) − φ_k(ζ)| ≤ r|z − ζ| for all k. Furthermore, suppose ρ_k ≥ 0, Σ_{k=1}^∞ ρ_k < ∞ and R > M Σ_{k=1}^∞ ρ_k. Then for R* < R − M Σ_{k=1}^∞ ρ_k, F_n(z) ≡ (f_1 ∘ f_2 ∘ ⋯ ∘ f_n)(z) → F(z) for {z : |z| < R*}. Results[8] for compositions of linear fractional (Möbius) transformations include the following, as examples: Theorem LFT1. On the set of convergence of a sequence {F_n} of non-singular LFTs, the limit function is either (a) a non-singular LFT, (b) a function taking on two distinct values, or (c) a constant. In (a), the sequence converges everywhere in the extended plane. In (b), the sequence converges either everywhere, and to the same value everywhere except at one point, or it converges at only two points.
Case (c) can occur with every possible set of convergence.[9] Theorem LFT2.[10] If {F_n} converges to an LFT, then the f_n converge to the identity function f(z) = z. Theorem LFT3.[11] If f_n → f and all functions are hyperbolic or loxodromic Möbius transformations, then F_n(z) → λ, a constant, for all z ≠ β = lim_{n→∞} β_n, where {β_n} are the repulsive fixed points of the {f_n}. Theorem LFT4.[12] Suppose f_n → f, where f is parabolic with fixed point γ. Let the fixed points of the {f_n} be {γ_n} and {β_n}. If Σ_{n=1}^∞ |γ_n − β_n| < ∞ and Σ_{n=1}^∞ n|β_{n+1} − β_n| < ∞, then F_n(z) → λ, a constant in the extended complex plane, for all z. The value of the infinite continued fraction may be expressed as the limit of the sequence {F_n(0)}, where f_n(z) = a_n/(b_n + z). As a simple example, a well-known result (Worpitzky's circle theorem[13]) follows from an application of Theorem (A): Consider the continued fraction with f_n(z) = a_n ζ/(1 + z). Stipulate that |ζ| < 1 and |z| < R < 1. Then for 0 < r < 1, the bound |a_n| ≤ r yields uniform convergence. Example. F(z) = ((i−1)z)/(1+i+z +) ((2−i)z)/(1+2i+z +) ((3−i)z)/(1+3i+z +) ⋯. Example.[8] A fixed-point continued fraction form (a single variable). Examples illustrating the conversion of a function directly into a composition follow: Example 1.[7][14] Suppose φ is an entire function satisfying the following conditions: Then Example 2.[7] Example 3.[6] Example 4.[6] Theorem (B) can be applied to determine the fixed points of functions defined by infinite expansions or certain integrals.
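The continued-fraction evaluation via forward compositions F_n(0) can be sketched numerically. Assuming the illustrative constant coefficients a_k = 1/4 and b_k = 1 (which satisfy Worpitzky's bound |a_k| ≤ 1/4), the limit solves x = (1/4)/(1 + x), i.e. x = (√2 − 1)/2:

```python
# Evaluate a continued fraction as the limit of forward compositions
# F_n(0) with f_k(z) = a_k / (1 + z); here a_k = 1/4 for all k.
import math

def F(n, a=0.25):
    z = 0.0
    for _ in range(n):        # compose from the inside out: f_n first, f_1 last
        z = a / (1.0 + z)
    return z

# The limit x solves x = a/(1+x), i.e. x^2 + x - 1/4 = 0 for a = 1/4.
exact = (math.sqrt(2) - 1) / 2
print(F(100), exact)          # both approximately 0.2071067811865
```

The iteration is a contraction near the limit (|f'(x)| ≈ 0.17 there), so F_n(0) converges geometrically.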
The following examples illustrate the process: Example FP1.[3] For |ζ| ≤ 1, let G(ζ) be given by an infinite expansion. To find α = G(α), first define a sequence {f_n}; then calculate G_n(ζ) = f_n ∘ ⋯ ∘ f_1(ζ) with ζ = 1, which gives α = 0.087118118... to ten decimal places after ten iterations. Theorem FP2.[8] Let φ(ζ, t) be analytic in S = {z : |z| < R} for all t in [0, 1] and continuous in t. Set f_n(ζ) = (1/n) Σ_{k=1}^n φ(ζ, k/n). If |φ(ζ, t)| ≤ r < R for ζ ∈ S and t ∈ [0, 1], then ζ = ∫₀¹ φ(ζ, t) dt has a unique solution α in S, with lim_{n→∞} G_n(ζ) = α. Consider a time interval, normalized to I = [0, 1]. ICAFs can be constructed to describe continuous motion of a point z over the interval, but in such a way that at each "instant" the motion is virtually zero (see Zeno's Arrow): For the interval divided into n equal subintervals, 1 ≤ k ≤ n, set g_{k,n}(z) = z + φ_{k,n}(z), analytic or simply continuous, in a domain S, such that g_{k,n}(z) ∈ S. Source:[8] The integral is well-defined if dz/dt = φ(z, t) has a closed-form solution z(t). Otherwise, the integrand is poorly defined although the value of the integral is easily computed; in this case one might call the integral a "virtual" integral. Example. φ(z, t) = (2t − cos y)/(1 − sin x cos y) + i(1 − 2t sin x)/(1 − sin x cos y). Example. Next, set T_{1,n}(z) = g_n(z), T_{k,n}(z) = g_n(T_{k−1,n}(z)), and T_n(z) = T_{n,n}(z). Let T(z) = lim_{n→∞} T_n(z), when that limit exists. The sequence {T_n(z)} defines contours γ = γ(c_n, z) that follow the flow of the vector field f(z).
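Theorem FP2 can be sketched numerically. A minimal example, assuming the illustrative integrand φ(ζ, t) = (ζ + t)/4, for which the fixed-point equation ζ = ∫₀¹ φ(ζ, t) dt reduces to ζ = ζ/4 + 1/8, with exact solution ζ = 1/6:

```python
# Theorem FP2 sketch: solve ζ = ∫₀¹ φ(ζ, t) dt by backward compositions
# G_n = f_n ∘ ... ∘ f_1 with f_n(ζ) = (1/n) Σ_{k=1}^n φ(ζ, k/n).
def phi(zeta, t):
    return (zeta + t) / 4          # illustrative choice, not from the source

def f(n, zeta):
    return sum(phi(zeta, k / n) for k in range(1, n + 1)) / n

def G(n, zeta=0.0):
    for m in range(1, n + 1):      # apply f_1 first, then f_2, ..., f_n
        zeta = f(m, zeta)
    return zeta

print(G(1500), 1 / 6)              # G_n(0) approaches 1/6
```

Each f_n is a Riemann-sum approximation of the integral operator, and the backward compositions drive the iterate toward the unique solution α.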
If there exists an attractive fixed point α, meaning |f(z) − α| ≤ ρ|z − α| for 0 ≤ ρ < 1, then Tn(z) → T(z) ≡ α along γ = γ(cn, z), provided (for example) {\displaystyle c_{n}={\sqrt {n}}}. If cn ≡ c > 0, then Tn(z) → T(z), a point on the contour γ = γ(c, z). It is easily seen that and when these limits exist. These concepts are marginally related to active contour theory in image processing, and are simple generalizations of the Euler method. The series defined recursively by fn(z) = z + gn(z) have the property that the nth term is predicated on the sum of the first n − 1 terms. In order to employ theorem (GF3) it is necessary to show boundedness in the following sense: If each fn is defined for |z| < M, then |Gn(z)| < M must follow before |fn(z) − z| = |gn(z)| ≤ Cβn is defined for iterative purposes. This is because {\displaystyle g_{n}(G_{n-1}(z))} occurs throughout the expansion. The restriction serves this purpose. Then Gn(z) → G(z) uniformly on the restricted domain. Example (S1). Set and M = ρ². Then R = ρ² − (π/6) > 0. Then, if {\displaystyle S=\left\{z:|z|<R,\operatorname {Re} (z)>0\right\}}, z in S implies |Gn(z)| < M and theorem (GF3) applies, so that converges absolutely, hence is convergent. Example (S2): {\displaystyle f_{n}(z)=z+{\frac {1}{n^{2}}}\cdot \varphi (z),\quad \varphi (z)=2\cos(x/y)+i\,2\sin(x/y),\quad G_{n}(z)=f_{n}\circ f_{n-1}\circ \cdots \circ f_{1}(z)} (image: domain [−10, 10], n = 50). The product defined recursively by has the appearance In order to apply Theorem GF3 it is required that: Once again, a boundedness condition must support If one knows Cβn in advance, the following will suffice: Then Gn(z) → G(z) uniformly on the restricted domain. Example (P1). Suppose {\displaystyle f_{n}(z)=z(1+g_{n}(z))} with {\displaystyle g_{n}(z)={\tfrac {z^{2}}{n^{3}}},} observing after a few preliminary computations that |z| ≤ 1/4 implies |Gn(z)| < 0.27. Then and converges uniformly. 
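The self-generating series recursion Gn(z) = Gn−1(z) + gn(Gn−1(z)) can be sketched with a hypothetical perturbation gn(z) = sin(z)/n² (my choice, not one of the article's examples), which satisfies |gn(z)| ≤ 1/n² everywhere, so the required Σ Cβn < ∞ bound holds trivially:

```python
# Sketch of the self-generating series G_n(z) = G_{n-1}(z) + g_n(G_{n-1}(z)),
# i.e. f_n(z) = z + g_n(z), with the hypothetical g_n(z) = sin(z)/n**2.
# Since |g_n| <= 1/n^2 and sum(1/n^2) converges, successive G_n stabilize.
import math

def g(n, z):
    return math.sin(z) / n**2

def G(N, z):
    for n in range(1, N + 1):
        z = z + g(n, z)
    return z

z0 = 0.5
print(G(1000, z0), G(2000, z0))
```

The difference between G at N = 1000 and N = 2000 is bounded by the tail Σ 1/n² < 5·10⁻⁴, consistent with the uniform convergence the theorem asserts.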
Example (P2). Example (CF1): A self-generating continued fraction.[8] Example (CF2): Best described as a self-generating reverse Euler continued fraction.[8]
https://en.wikipedia.org/wiki/Infinite_compositions_of_analytic_functions
In computer programming, an infinite loop (or endless loop)[1][2] is a sequence of instructions that, as written, will continue endlessly unless an external intervention occurs, such as turning off power via a switch or pulling a plug. It may be intentional. There is no general algorithm to determine whether a computer program contains an infinite loop or not; this is the halting problem. This differs from "a type of computer program that runs the same instructions continuously until it is either stopped or interrupted".[3] Consider the following pseudocode: The same instructions were run continuously until stopped or interrupted... by the FALSE returned at some point by the function is_there_more_data. By contrast, the following loop will not end by itself: birds will alternate being 1 or 2, while fish will alternate being 2 or 1. The loop will not stop unless an external intervention occurs ("pull the plug"). An infinite loop is a sequence of instructions in a computer program which loops endlessly, either due to the loop having no terminating condition,[4] having one that can never be met, or having one that causes the loop to start over. In older operating systems with cooperative multitasking,[5] infinite loops normally caused the entire system to become unresponsive. With the now-prevalent preemptive multitasking model, infinite loops usually cause the program to consume all available processor time, but can usually be terminated by the user. Busy wait loops are also sometimes called "infinite loops". Infinite loops are one possible cause of a computer hanging or freezing; others include thrashing, deadlock, and access violations. Looping is repeating a set of instructions until a specific condition is met. An infinite loop occurs when the condition will never be met, due to some inherent characteristic of the loop. There are a few situations in which this is desired behavior. 
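The contrast above can be sketched as follows. The names is_there_more_data, birds, and fish follow the text; the Python bodies are hypothetical, and the second loop is capped so the demonstration terminates where the real loop would not:

```python
# Sketch of the two loops contrasted above (names follow the text).
# 1) A loop that runs "the same instructions continuously" but terminates
#    once is_there_more_data() returns False.
data = [3, 1, 4, 1, 5]

def is_there_more_data():
    return len(data) > 0

processed = 0
while is_there_more_data():
    data.pop()
    processed += 1

# 2) The birds/fish loop: the values just swap on every pass, so an exit
#    condition such as "birds == fish" can never become true.  A cap on
#    the iteration count stands in for "pulling the plug".
birds, fish = 1, 2
iterations = 0
while birds != fish and iterations < 100:
    birds, fish = fish, birds       # 1 and 2 swap forever
    iterations += 1
print(processed, iterations)
```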
For example, the games on cartridge-based game consoles typically have no exit condition in their main loop, as there is no operating system for the program to exit to; the loop runs until the console is powered off. Modern interactive computers require that the computer constantly be monitoring for user input or device activity, so at some fundamental level there is an infinite processing idle loop that must continue until the device is turned off or reset. In the Apollo Guidance Computer, for example, this outer loop was contained in the Exec program,[6] and if the computer had absolutely no other work to do it would run a dummy job that would simply turn off the "computer activity" indicator light. Modern computers also typically do not halt the processor or motherboard circuit-driving clocks when they crash. Instead they fall back to an error condition, displaying messages to the operator (such as the blue screen of death), and enter an infinite loop waiting for the user to either respond to a prompt to continue or reset the device. Spinlocks are low-level synchronization mechanisms used in concurrent programming to protect shared resources. Unlike traditional locks that put a thread to sleep when it cannot acquire the lock, spinlocks repeatedly "spin" in an infinite loop until the lock becomes available. This intentional infinite looping is a deliberate design choice aimed at minimizing the time a thread spends waiting for the lock and avoiding the overhead of higher-level synchronisation mechanisms such as mutexes. In multi-threaded programs some threads can be executing inside infinite loops without causing the entire program to be stuck in an infinite loop. If the main thread exits, all threads of the process are forcefully stopped, thus all execution ends and the process/program terminates. 
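The spin-until-available idea can be sketched in Python. This is a minimal illustration built on the standard library's non-blocking `Lock.acquire(blocking=False)`, not a production spinlock; the thread and iteration counts are arbitrary:

```python
# Sketch of a spinlock: threads repeatedly try to acquire the lock in a
# tight loop ("spinning") instead of sleeping until it is free.
import threading

flag = threading.Lock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        while not flag.acquire(blocking=False):
            pass                    # spin: busy-wait until the lock frees
        counter += 1                # critical section
        flag.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

Because the spin loop only exits once the lock is actually held, both workers' increments are serialized and the final count is exact.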
The threads inside the infinite loops can perform "housekeeping" tasks or they can be in a blocked state waiting for input (from a socket or queue) and resume execution every time input is received. Most often, the term is used for those situations when this is not the intended result; that is, when this is a bug.[7] Such errors are most commonly made by novice programmers, but can be made by experienced programmers as well, because their causes can be quite subtle. One common cause, for example, is that a programmer intends to iterate over a sequence of nodes in a data structure such as a linked list or tree, executing the loop code once for each node. Improperly formed links can create a reference loop in the data structure, where one node links to another that occurs earlier in the sequence. This makes part of the data structure into a ring, causing naive code to loop forever. While most infinite loops can be found by close inspection of the code, there is no general method to determine whether a given program will ever halt or will run forever; this is the undecidability of the halting problem.[8] As long as the system is responsive, infinite loops can often be interrupted by sending a signal to the process (such as SIGINT in Unix), or an interrupt to the processor, causing the current process to be aborted. This can be done in a task manager, in a terminal with the Control-C command,[9] or by using the kill command or system call. However, this does not always work, as the process may not be responding to signals or the processor may be in an uninterruptible state, such as in the Cyrix coma bug (caused by overlapping uninterruptible instructions in an instruction pipeline). In some cases other signals such as SIGKILL can work, as they do not require the process to be responsive, while in other cases the loop cannot be terminated short of system shutdown. Infinite loops can be implemented using various control flow constructs. 
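The reference-loop hazard can be sketched concretely. The sketch below builds a small linked list, deliberately malforms it into a ring, and uses Floyd's tortoise-and-hare cycle check (a standard technique, not one the article names) to detect the ring without looping forever:

```python
# Sketch: a malformed linked list whose tail points back into the list,
# so naive traversal to None would never terminate.  Floyd's "tortoise
# and hare" walk detects the ring in bounded time.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next            # advances one node per step
        fast = fast.next.next       # advances two nodes per step
        if slow is fast:            # the pointers can only meet in a ring
            return True
    return False

a, b, c = Node(1), Node(2), Node(3)
a.next, b.next = b, c
print(has_cycle(a))                 # well-formed list: no cycle
c.next = b                          # improper link: creates a reference loop
print(has_cycle(a))                 # ring detected
```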
Most commonly, in unstructured programming this is a jump back up (goto), while in structured programming this is an indefinite loop (while loop) set to never end, either by omitting the condition or explicitly setting it to true, as in while (true) .... Some languages have special constructs for infinite loops, typically by omitting the condition from an indefinite loop. Examples include Ada (loop ... end loop),[10] Fortran (DO ... END DO), Go (for { ... }), Ruby (loop do ... end), and Rust (loop { ... }). A simple example (in C): The form for (;;) for an infinite loop is traditional, appearing in the standard reference The C Programming Language, and is often punningly pronounced "forever".[11] This is a loop that will print "Infinite Loop" without halting. A similar example in 1980s-era BASIC: A similar example in MS-DOS compatible batch files: In Java: The while loop never terminates because its condition is always true. In Bourne Again Shell: In Rust: Here is one example of an infinite loop in Visual Basic: This creates a situation where x will never be greater than 5, since at the start of the loop code x is assigned the value of 1 (regardless of any previous value) before it is changed to x + 1. Thus the loop will always result in x = 2 and will never break. This could be fixed by moving the x = 1 instruction outside the loop so that its initial value is set only once. In some languages, programmer confusion about mathematical symbols may lead to an unintentional infinite loop. For example, here is a snippet in C: The expected output is the numbers 0 through 9, with an interjected "a equals 5!" between 5 and 6. However, in the line "if (a = 5)" above, the = (assignment) operator was confused with the == (equality test) operator. Instead, this assigns the value of 5 to a at this point in the program. Thus, a will never be able to advance to 10, and this loop cannot terminate. Unexpected behavior in evaluating the terminating condition can also cause this problem. 
Here is an example in C: On some systems, this loop will execute ten times as expected, but on other systems it will never terminate. The problem is that the loop terminating condition (x != 1.1) tests for exact equality of two floating-point values, and the way floating-point values are represented in many computers will make this test fail, because they cannot represent the value 0.1 exactly, thus introducing rounding errors on each increment (cf. box). The same can happen in Python: Because of the likelihood of tests for equality or not-equality failing unexpectedly, it is safer to use greater-than or less-than tests when dealing with floating-point values. For example, instead of testing whether x equals 1.1, one might test whether (x <= 1.0) or (x < 1.1), either of which would be certain to exit after a finite number of iterations. Another way to fix this particular example would be to use an integer as a loop index, counting the number of iterations that have been performed. A similar problem occurs frequently in numerical analysis: in order to compute a certain result, an iteration is intended to be carried out until the error is smaller than a chosen tolerance. However, because of rounding errors during the iteration, the specified tolerance can never be reached, resulting in an infinite loop. An infinite loop may be caused by several entities interacting. Consider a server that always replies with an error message if it does not understand a request. Even if there is no possibility for an infinite loop within the server itself, a system comprising two of them (A and B) may loop endlessly: if A receives a message of unknown type from B, then A replies with an error message to B; if B does not understand the error message, it replies to A with its own error message; if A does not understand the error message from B, it sends yet another error message, and so on. One common example of such a situation is an email loop. 
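The floating-point pitfall can be demonstrated directly in Python. The buggy loop is capped rather than allowed to run forever; the safe variant uses the order comparison recommended above:

```python
# Sketch of the floating-point pitfall: counting from 0.1 toward 1.1 in
# steps of 0.1 while testing exact inequality.  Binary rounding means x
# never holds exactly the double nearest 1.1, so "x != 1.1" stays true.
x = 0.1
steps = 0
while x != 1.1 and steps < 50:      # cap stands in for "never terminates"
    x += 0.1
    steps += 1

# The safe version replaces exact equality with an order comparison,
# which is guaranteed to exit after finitely many iterations.
y = 0.1
safe_steps = 0
while y < 1.1:
    y += 0.1
    safe_steps += 1
print(steps, safe_steps)
```

Note the residual subtlety: after ten increments y is 1.0999999999999999, which is still strictly below 1.1, so the safe loop runs one extra pass; it terminates, but an integer loop index is the fully robust fix.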
An example of an email loop is if someone receives mail from a no-reply inbox, but their auto-response is on. They will reply to the no-reply inbox, triggering the "this is a no-reply inbox" response. This will be sent to the user, who then sends an auto-reply to the no-reply inbox, and so on and so forth. A pseudo-infinite loop is a loop that appears infinite but is really just a very long loop. An example in bash: An example for loop in C: It appears that this will go on indefinitely, but in fact the value of i will eventually reach the maximum value storable in an unsigned int, and adding 1 to that number will wrap around to 0, breaking the loop. The actual limit of i depends on the details of the system and compiler used. With arbitrary-precision arithmetic, this loop would continue until the computer's memory could no longer hold i. If i were a signed integer rather than an unsigned integer, overflow would be undefined. In this case, the compiler could optimize the code into an infinite loop. Infinite recursion is a special case of an infinite loop that is caused by recursion. The following example in Visual Basic for Applications (VBA) returns a stack overflow error: A "while (true)" loop looks infinite at first glance, but there may be a way to escape the loop through a break statement or return statement. Example in PHP: An Alderson loop is a rare slang or jargon term for an infinite loop where there is an exit condition available, but inaccessible in an implementation of the code, typically due to a programmer error. These are most common and visible while debugging user interface code. 
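The unsigned wrap-around can be simulated. Python integers never overflow (as the text notes for arbitrary-precision arithmetic), so a bitmask models a fixed-width unsigned counter; an 8-bit width is used here purely to keep the demonstration fast:

```python
# Sketch of the pseudo-infinite loop: a fixed-width unsigned counter is
# modeled with a mask, since Python ints never overflow on their own.
# An 8-bit width is used for speed; a C "unsigned int" wraps at 2**32
# in exactly the same way.
MASK = 0xFF                 # 8-bit unsigned range: 0..255

i = 1
iterations = 0
while i != 0:               # looks infinite, but...
    i = (i + 1) & MASK      # ...255 + 1 wraps around to 0, ending the loop
    iterations += 1
print(iterations)
```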
A C-like pseudocode example of an Alderson loop, where the program is supposed to sum numbers given by the user until zero is given, but where the wrong operator is used: The term allegedly received its name from a programmer (whose last name is Alderson) who in 1996[12] had coded a modal dialog box in Microsoft Access without either an OK or Cancel button, thereby disabling the entire program whenever the box came up.[13]
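A sketch of the described Alderson loop follows. The user input is simulated by a fixed list, the particular wrong operator (`<` where `==` was intended) is illustrative, and the loop is capped so the demonstration terminates:

```python
# Sketch of an Alderson loop: the program should sum inputs until a zero
# arrives, but the exit test uses the wrong operator, so the exit
# condition exists yet can never be reached.
inputs = [3, 7, 0, 5, 0]          # simulated user entries; 0 should stop it
total = 0
reads = 0
while reads < 100:                # cap stands in for "loops forever"
    n = inputs[reads % len(inputs)]
    if n < 0:                     # BUG: should be "n == 0"
        break                     # the exit is present but unreachable
    total += n
    reads += 1
print(reads, total)
```

The zeros in the input stream sail past the broken test, so the loop only stops at the artificial cap.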
https://en.wikipedia.org/wiki/Infinite_loop
Infinite regress is a philosophical concept to describe a series of entities. Each entity in the series depends on its predecessor, following a recursive principle. For example, the epistemic regress is a series of beliefs in which the justification of each belief depends on the justification of the belief that comes before it. An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress. For such an argument to be successful, it must demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious. There are different ways in which a regress can be vicious. The most serious form of viciousness involves a contradiction in the form of metaphysical impossibility. Other forms occur when the infinite regress is responsible for the theory in question being implausible or for its failure to solve the problem it was formulated to solve. Traditionally, it was often assumed without much argument that each infinite regress is vicious, but this assumption has been put into question in contemporary philosophy. While some philosophers have explicitly defended theories with infinite regresses, the more common strategy has been to reformulate the theory in question in a way that avoids the regress. One such strategy is foundationalism, which posits that there is a first element in the series from which all the other elements arise but which is not itself explained this way. Another way is coherentism, which is based on a holistic explanation that usually sees the entities in question not as a linear series but as an interconnected network. Infinite regress arguments have been made in various areas of philosophy. Famous examples include the cosmological argument and Bradley's regress. 
An infinite regress is an infinite series of entities governed by a recursive principle that determines how each entity in the series depends on or is produced by its predecessor.[1] This principle can often be expressed in the following form: X is F because X stands in R to Y and Y is F. X and Y stand for objects, R stands for a relation and F stands for a property in the widest sense.[1][2] In the epistemic regress, for example, a belief is justified because it is based on another belief that is justified. But this other belief is itself in need of one more justified belief for itself to be justified, and so on.[3] Or in the cosmological argument, an event occurred because it was caused by another event that occurred before it, which was itself caused by a previous event, and so on.[1][4] This principle by itself is not sufficient: it does not lead to a regress if there is no X that is F. This is why an additional triggering condition has to be fulfilled: there has to be an X that is F for the regress to get started.[5] So the regress starts with the fact that X is F. According to the recursive principle, this is only possible if there is a distinct Y that is also F. But in order to account for the fact that Y is F, we need to posit a Z that is F, and so on. 
Once the regress has started, there is no way of stopping it, since a new entity has to be introduced at each step in order to make the previous step possible.[1] An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress.[1][5] For such an argument to be successful, it has to demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious.[1][4] The mere existence of an infinite regress by itself is not a proof for anything.[5] So in addition to connecting the theory to a recursive principle paired with a triggering condition, the argument has to show in which way the resulting regress is vicious.[4][5] For example, one form of evidentialism in epistemology holds that a belief is only justified if it is based on another belief that is justified. An opponent of this theory could use an infinite regress argument by demonstrating (1) that this theory leads to an infinite regress (e.g. by pointing out the recursive principle and the triggering condition) and (2) that this infinite regress is vicious (e.g. by showing that it is implausible given the limitations of the human mind).[1][5][3][6] In this example, the argument has a negative form, since it only denies that another theory is true. But it can also be used in a positive form to support a theory by showing that its alternative involves a vicious regress.[3] This is how the cosmological argument for the existence of God works: it claims that positing God's existence is necessary in order to avoid an infinite regress of causes.[1][4][3] For an infinite regress argument to be successful, it has to show that the involved regress is vicious.[3] A non-vicious regress is called virtuous or benign.[5] Traditionally, it was often assumed without much argument that each infinite regress is vicious, but this assumption has been put into question in contemporary philosophy. 
In most cases, it is not self-evident whether an infinite regress is vicious or not.[5] The truth regress constitutes an example of an infinite regress that is not vicious: if the proposition "P" is true, then the proposition that "It is true that P" is also true, and so on.[4] Infinite regresses pose a problem mostly if the regress concerns concrete objects. Abstract objects, on the other hand, are often considered to be unproblematic in this respect. For example, the truth regress leads to an infinite number of true propositions, and the Peano axioms entail the existence of infinitely many natural numbers. But these regresses are usually not held against the theories that entail them.[4] There are different ways in which a regress can be vicious. The most serious type of viciousness involves a contradiction in the form of metaphysical impossibility.[4][1][7] Other types occur when the infinite regress is responsible for the theory in question being implausible or for its failure to solve the problem it was formulated to solve.[4][7] The vice of an infinite regress can be local if it causes problems only for certain theories when combined with other assumptions, or global otherwise. For example, an otherwise virtuous regress is locally vicious for a theory that posits a finite domain.[1] In some cases, an infinite regress is not itself the source of the problem but merely indicates a different underlying problem.[1] Infinite regresses that involve metaphysical impossibility are the most serious cases of viciousness. 
The easiest way to arrive at this result is by accepting the assumption that actual infinities are impossible, thereby directly leading to a contradiction.[5] This anti-infinitist position is opposed to infinity in general, not just specifically to infinite regresses.[1] But it is open to defenders of the theory in question to deny this outright prohibition on actual infinities.[5] For example, it has been argued that only certain types of infinities are problematic in this way, like infinite intensive magnitudes (e.g. infinite energy densities).[4] But other types of infinities, like infinite cardinality (e.g. infinitely many causes) or infinite extensive magnitude (e.g. the duration of the universe's history), are unproblematic from the point of view of metaphysical impossibility.[4] While there may be some instances of viciousness due to metaphysical impossibility, most vicious regresses are problematic because of other reasons.[4] A more common form of viciousness arises from the implausibility of the infinite regress in question. This category often applies to theories about human actions, states or capacities.[4] This argument is weaker than the argument from impossibility, since it allows that the regress in question is possible. It only denies that it is actual.[1] For example, it seems implausible, due to the limitations of the human mind, that there are justified beliefs if this entails that the agent needs to have an infinite number of them. But this is not metaphysically impossible, e.g. 
if it is assumed that the infinite number of beliefs are only non-occurrent or dispositional, while the limitation only applies to the number of beliefs one is actually thinking about at one moment.[4] Another reason for the implausibility of theories involving an infinite regress is the principle known as Ockham's razor, which posits that we should avoid ontological extravagance by not multiplying entities without necessity.[8] Considerations of parsimony are complicated by the distinction between quantitative and qualitative parsimony: concerning how many entities are posited, in contrast to how many kinds of entities are posited.[1] For example, the cosmological argument for the existence of God promises to increase quantitative parsimony by positing that there is one first cause instead of allowing an infinite chain of events. But it does so by decreasing qualitative parsimony: it posits God as a new type of entity.[4] Another form of viciousness applies not to the infinite regress by itself but to the regress in relation to the explanatory goals of a theory.[4][7] Theories are often formulated with the goal of solving a specific problem, e.g. answering the question why a certain type of entity exists. One way such an attempt can fail is if the answer to the question already assumes in disguised form what it was supposed to explain.[4][7] This is akin to the informal fallacy of begging the question.[2] From the perspective of a mythological world view, for example, one way to explain why the earth seems to be at rest instead of falling down is to hold that it rests on the back of a giant turtle. In order to explain why the turtle itself is not in free fall, another, even bigger turtle is posited, and so on, resulting in a world that is turtles all the way down.[4][1] Despite its clash with modern physics and its ontological extravagance, this theory seems to be metaphysically possible, assuming that space is infinite. 
One way to assess the viciousness of this regress is to distinguish between local and global explanations.[1] A local explanation is only interested in explaining why one thing has a certain property through reference to another thing, without trying to explain this other thing as well. A global explanation, on the other hand, tries to explain why there are any things with this property at all.[1] So as a local explanation, the regress in the turtle theory is benign: it succeeds in explaining why the earth is not falling. But as a global explanation it fails, because it has to assume rather than explain at each step that there is another thing that is not falling. It does not explain why nothing at all is falling.[1][4] It has been argued that infinite regresses can be benign under certain circumstances despite aiming at global explanation. This line of thought rests on the idea of the transmission involved in the vicious cases:[9] it is explained that X is F because Y is F, where this F was somehow transmitted from Y to X.[1] The problem is that to transfer something, it must first be possessed, so the possession is presumed rather than explained. For example, in trying to explain why one's neighbor has the property of being the owner of a bag of sugar, it is revealed that this bag was first in someone else's possession before it was transferred to the neighbor, and that the same is true for this and every other previous owner.[1] This explanation is unsatisfying, since ownership is presupposed at every step. In non-transmissive explanations, however, Y is still the reason for X being F and Y is also F, but this is just seen as a contingent fact.[1][9] This line of thought has been used to argue that the epistemic regress is not vicious. 
From a Bayesian point of view, for example, justification or evidence can be defined in terms of one belief raising the probability that another belief is true.[10][11] The former belief may also be justified, but this is not relevant for explaining why the latter belief is justified.[1] Philosophers have responded to infinite regress arguments in various ways. The criticized theory can be defended, for example, by denying that an infinite regress is involved. Infinitists, on the other hand, embrace the regress but deny that it is vicious.[6] Another response is to modify the theory in order to avoid the regress. This can be achieved in the form of foundationalism or of coherentism. Traditionally, the most common response is foundationalism.[1] It posits that there is a first element in the series from which all the other elements arise but which is not itself explained this way.[12] So from any given position, the series can be traced back to elements on the most fundamental level, which the recursive principle fails to explain. This way an infinite regress is avoided.[1][6] This position is well known from its applications in the field of epistemology.[1] Foundationalist theories of epistemic justification state that besides inferentially justified beliefs, which depend for their justification on other beliefs, there are also non-inferentially justified beliefs.[12] The non-inferentially justified beliefs constitute the foundation on which the superstructure consisting of all the inferentially justified beliefs rests.[13] Acquaintance theories, for example, explain the justification of non-inferential beliefs through acquaintance with the objects of the belief. On such a view, an agent is inferentially justified in believing that it will rain tomorrow, based on the belief that the weather forecast said so. 
They are non-inferentially justified in believing that they are in pain because they are directly acquainted with the pain.[12] So a different type of explanation (acquaintance) is used for the foundational elements. Another example comes from the field of metaphysics, concerning the problem of ontological hierarchy. One position in this debate claims that some entities exist on a more fundamental level than other entities and that the latter entities depend on or are grounded in the former entities.[14] Metaphysical foundationalism is the thesis that these dependence relations do not form an infinite regress: that there is a most fundamental level that grounds the existence of the entities from all other levels.[1][15] This is sometimes expressed by stating that the grounding relation responsible for this hierarchy is well-founded.[15] Coherentism, mostly found in the field of epistemology, is another way to avoid infinite regresses.[1] It is based on a holistic explanation that usually sees the entities in question not as a linear series but as an interconnected network. For example, coherentist theories of epistemic justification hold that beliefs are justified because of the way they hang together: they cohere well with each other.[16] This view can be expressed by stating that justification is primarily a property of the system of beliefs as a whole. The justification of a single belief is derivative in the sense that it depends on the fact that this belief belongs to a coherent whole.[1] Laurence BonJour is a well-known contemporary defender of this position.[17][18] Aristotle argued that knowing does not necessitate an infinite regress because some knowledge does not depend on demonstration: Some hold that, owing to the necessity of knowing the primary premises, there is no scientific knowledge. Others think there is, but that all truths are demonstrable. Neither doctrine is either true or a necessary deduction from the premises. 
The first school, assuming that there is no way of knowing other than by demonstration, maintain that an infinite regress is involved, on the ground that if behind the prior stands no primary, we could not know the posterior through the prior (wherein they are right, for one cannot traverse an infinite series): if on the other hand – they say – the series terminates and there are primary premises, yet these are unknowable because incapable of demonstration, which according to them is the only form of knowledge. And since thus one cannot know the primary premises, knowledge of the conclusions which follow from them is not pure scientific knowledge nor properly knowing at all, but rests on the mere supposition that the premises are true. The other party agrees with them as regards knowing, holding that it is only possible by demonstration, but they see no difficulty in holding that all truths are demonstrated, on the ground that demonstration may be circular and reciprocal. Our own doctrine is that not all knowledge is demonstrative: on the contrary, knowledge of the immediate premises is independent of demonstration. (The necessity of this is obvious; for since we must know the prior premises from which the demonstration is drawn, and since the regress must end in immediate truths, those truths must be indemonstrable.) Such, then, is our doctrine, and in addition we maintain that besides scientific knowledge there is its original source which enables us to recognize the definitions.[19][20] Gilbert Ryle argues in the philosophy of mind that mind–body dualism is implausible because it produces an infinite regress of "inner observers" when trying to explain how mental states are able to influence physical states.[citation needed]
https://en.wikipedia.org/wiki/Infinite_regress
Infinitism is the view that knowledge may be justified by an infinite chain of reasons. It belongs to epistemology, the branch of philosophy that considers the possibility, nature, and means of knowledge. Since Gettier, "knowledge" is no longer widely accepted as meaning "justified true belief" only.[1] However, some epistemologists[who?] still consider knowledge to have a justification condition. Traditional theories of justification (foundationalism and coherentism), and indeed some philosophers,[who?] consider an infinite regress not to be a valid justification. In their view, if A is justified by B, B by C, and so forth, then either Infinitism, the view, for example, of Peter D. Klein, challenges this consensus, referring back to the work of Paul Moser (1984) and John Post (1987).[2] In this view, the evidential ancestry of a justified belief must be infinite and non-repeating, which follows from the conjunction of two principles that Klein sees as having straightforward intuitive appeal: "The Principle of Avoiding Circularity" and "The Principle of Avoiding Arbitrariness." The Principle of Avoiding Circularity (PAC) is stated as follows: "For all x, if a person, S, has a justification for x, then for all y, if y is in the evidential ancestry of x for S, then x is not in the evidential ancestry of y for S."[3] PAC says that the proposition to be justified cannot be a member of its own evidential ancestry, which is violated by coherence theories of justification. The Principle of Avoiding Arbitrariness (PAA) is stated as follows: "For all x, if a person, S, has a justification for x, then there is some reason, r1, available to S for x; and there is some reason, r2, available to S for r1; etc."[3] PAA says that in order to avoid arbitrariness, for any proposition x to be justified for an epistemological agent, there must be some reason r available to the agent; this reason will in turn require the same structure of justification, and so on ad infinitum. 
Foundationalist theories can only avoid arbitrariness by claiming that some propositions are self-justified. But if a proposition is its own justification (e.g. coherentism), then it is a member of its own evidential ancestry, and the structure of justification is circular. In this view, the conjunction of both PAC and PAA leaves infinitism as the only alternative to skepticism.[3] The Availability of Reasons: Klein also relies on the notion of "availability". In other words, a reason must be available to the subject in order for it to be a candidate for justification. There are two conditions that need to be satisfied in order for a reason to be available: objectively and subjectively. An objectively available reason is stated as follows: "a belief, r, is objectively available to S as a reason for p if (1) r has some sufficiently high probability and the conditional probability of p given r is sufficiently high; or (2) an impartial, informed observer would accept r as a reason for p; or (3) r would be accepted in the long run by an appropriately defined set of people; or (4) r is evident for S and r makes p evident for S; or (5) r accords with S's deepest epistemic commitments; or (6) r meets the appropriate conversational presuppositions; or (7) an intellectually virtuous person would advance r as a reason for p."[3] Any of these conditions is sufficient to describe objectively available reasons and is compatible with infinitism. Klein concedes that the proper characterization of objective availability may ultimately not be a member of this list but, for the scope of his defense of infinitism, he need not provide a fully developed account of objectively available reasons. Objective availability could be best understood, at least as a working definition, as an existing, truth-apt reason not dependent on the subject. A subjectively available reason is stated as follows: "S must be able to call on r."
(Subjective availability is comparatively straightforward compared to objective availability.) The subject must be able to evoke the reason in their own mind and use the reason in the process of justification. In essence, the reason must be "properly hooked up with S's own beliefs" in order to be subjectively available. A reason that is both objectively and subjectively available to a subject is a candidate for justification according to infinitism (or, at least, for Klein).[3] Objection to Infinitism: Klein addresses an objection to infinitism. The finite mind objection (attributed to John Williams): The human mind is finite and has a limited capacity. "It is impossible to consciously believe an infinite number of propositions (because to believe something takes some time) and it is impossible to 'unconsciously believe'...an infinite number of propositions because the candidate beliefs are such that some of them 'defeat human understanding.'"[3] It is simply an impossibility that a subject has an infinite chain of reasons which justify their beliefs, because the human mind is finite. Klein concedes that the human mind is finite and cannot contain an infinite number of reasons, but the infinitist, according to Klein, is not committed to a subject actually possessing infinite reasons. "The infinitist is not claiming that in any finite period of time...we can consciously entertain an infinite number of thoughts. It is rather that there are an infinite number of propositions such that each one of them would be consciously thought were the appropriate circumstances to arise."[3] So, an infinite chain of reasons need not be present in the mind in order for a belief to be justified; rather, it must merely be possible to provide an infinite chain of reasons. There will always be another reason to justify the preceding reason, if the subject felt compelled to make the inquiry and had subjective access to that reason.
https://en.wikipedia.org/wiki/Infinitism
The infinity mirror (also sometimes called an infinite mirror) is a configuration of two or more parallel or angled mirrors, which are arranged to create a series of farther and farther reflections that appear to recede to infinity.[1][2] Often the front mirror of an infinity mirror is half-silvered (a so-called one-way mirror), but this is not required to produce the effect. A similar appearance in artworks has been called the Droste effect. Infinity mirrors are sometimes used as room accents or in works of art.[3] In a classic self-contained infinity mirror, a set of light bulbs, LEDs, or other point-source lights are placed around the periphery of a fully reflective mirror, and a second, partially reflective "one-way mirror" is placed a short distance in front of it, in a parallel alignment. When an outside observer looks into the surface of the partially reflective mirror, the lights appear to recede into infinity, creating the appearance of a tunnel of great depth that is lined with lights.[2] If the mirrors are not precisely parallel but instead are canted at a slight angle, the "visual tunnel" will be perceived to be curved off to one side as it recedes into infinity. Alternatively, this effect can also be seen when an observer stands between two parallel fully reflective mirrors, as in some dressing rooms, some elevators, or a house of mirrors.[1] A weaker version of this effect can be seen by standing between any two parallel reflective surfaces, such as the glass walls of a small entry lobby into some buildings. The partially reflective glass produces this sensation, diluted by the visual noise of the views through the glass into the surrounding environment.[citation needed] The 3D illusion mirror effect is produced whenever there are two parallel reflective surfaces which can bounce a beam of light back and forth an indefinite (theoretically infinite) number of times.
The reflections appear to recede into the distance because the light actually is traversing the distance it appears to be traveling. The reflections may also appear to dim in the distance because the mirrors absorb some of the light and do not reflect all of it. For example, in a two-centimeter-thick infinity mirror with the light sources halfway between, light from the source initially travels one centimeter. The first reflection travels one centimeter to the rear mirror and then two centimeters to, and through, the front mirror, a total of three centimeters. The second reflection travels two centimeters from front mirror to back mirror, and again two centimeters from the back mirror to, and through, the front mirror, totaling four centimeters; added to the first reflection (three centimeters), this makes the second reflection appear seven centimeters away from the front mirror. Each successive reflection adds four more centimeters to the total (the third reflection appears 11 centimeters deep, the fourth 15 centimeters, and so on).[1][4] Each additional reflection adds length to the path the light must travel before exiting the mirror and reaching the viewer. Often, reflection of the light also reduces the brightness of the image due to impurities in the glass. For example, most mirrors use glass with small amounts of iron oxide impurities, giving the reflection a slightly dim green tinge. Across multiple reflections, the brightness reduces further and further, and is tinted more and more green. However, mirrors used for infinity mirrors are ideally front-silvered, and these suffer from lower losses, as the light does not travel through glass except when it finally escapes.
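The depth arithmetic above follows a simple pattern: for a 2 cm gap with the source halfway between the mirrors, the n-th reflection appears 4n − 1 centimeters deep, since each extra round trip adds 4 cm of path length. A minimal sketch of that arithmetic (the function name is illustrative, not from any standard library):

```python
def reflection_depth_cm(n: int) -> int:
    """Apparent depth (cm) of the n-th reflection in a 2 cm infinity mirror
    with the light source halfway (1 cm) between the mirrors.

    The first reflection's light travels 3 cm; each further round trip
    between the mirrors adds 2 * 2 = 4 cm, giving 4n - 1 for the n-th."""
    return 4 * n - 1

# Matches the worked example: 3, 7, 11, 15 cm for the first four reflections.
print([reflection_depth_cm(n) for n in range(1, 5)])
```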
An early reference to an infinity mirror is found in the history of Chinese Buddhism, where the Huayan patriarch Fazang (643–712) is said to have illustrated the "tenfold" precedent of the Huayan Jing by placing ten mirrors around a statue of the Buddha: eight mirrors in an octagon, with additional mirrors on the floor and ceiling. When he lit a torch, its light and the illuminated Buddha were reflected within reflections around the room.[5] Visual artists, especially contemporary sculptors, have made use of infinity mirrors. Yayoi Kusama, Josiah McElheny, Ivan Navarro, Taylor Davis, Anthony James,[6] and Guillaume Lachapelle[7] have all produced works that use the infinity mirror to expand the sensation of unlimited space in their artworks. The contemporary classical composer Arvo Pärt wrote his 1978 composition Spiegel im Spiegel ("mirror in the mirror") as a musical reflection on the infinity mirror effect.
https://en.wikipedia.org/wiki/Infinity_mirror
In mathematics, an iterated function is a function that is obtained by composing another function with itself two or several times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated. Iterated functions are studied in computer science, fractals, dynamical systems, mathematics and renormalization group physics. The formal definition of an iterated function on a set X follows. Let X be a set and f : X → X be a function. The n-th iterate f^n of f, where n is a non-negative integer, is defined by f^0 = id_X and f^(n+1) = f ∘ f^n, where id_X is the identity function on X and (f ∘ g)(x) = f(g(x)) denotes function composition. This notation has been traced to Hans Heinrich Bürmann and John Frederick William Herschel in 1813.[1][2][3][4] Herschel credited Bürmann for it, but without giving a specific reference to the work of Bürmann, which remains undiscovered.[5] Because the notation f^n may refer to both iteration (composition) of the function f or exponentiation of the function f (the latter is commonly used in trigonometry), some mathematicians[citation needed] choose to use ∘ to denote the compositional meaning, writing f^∘n(x) for the n-th iterate of the function f(x), so that, for example, f^∘3(x) means f(f(f(x))). For the same purpose, f^[n](x) was used by Benjamin Peirce,[6][4][nb 1] whereas Alfred Pringsheim and Jules Molk suggested ^n f(x) instead.[7][4][nb 2] In general, the following identity holds for all non-negative integers m and n: f^m ∘ f^n = f^(m+n). This is structurally identical to the property of exponentiation that a^m a^n = a^(m+n). In general, for arbitrary (negative, non-integer, etc.)
indices m and n, this relation is called the translation functional equation; cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, T_m(T_n(x)) = T_mn(x), since T_n(x) = cos(n arccos(x)). The relation (f^m)^n(x) = (f^n)^m(x) = f^mn(x) also holds, analogous to the property of exponentiation that (a^m)^n = (a^n)^m = a^mn. The sequence of functions f^n is called a Picard sequence,[8][9] named after Charles Émile Picard. For a given x in X, the sequence of values f^n(x) is called the orbit of x. If f^n(x) = f^(n+m)(x) for some integer m > 0, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit. If x = f(x) for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix(f). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem. There are several techniques for convergence acceleration of the sequences produced by fixed point iteration.[10] For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence. Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point.[11] When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set.
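The definitions above translate directly into code. The sketch below (with illustrative helper names) builds the n-th iterate by repeated application and solves the cycle-detection problem with Floyd's tortoise-and-hare algorithm, returning the index of the first periodic point and the period of the orbit:

```python
def iterate(f, n):
    """Return the n-th iterate f^n; f^0 is the identity function."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

def find_cycle(f, x0):
    """Floyd's tortoise-and-hare cycle detection on the orbit of x0 under f.

    Returns (mu, lam): mu is the index of the first periodic point,
    lam is the period of the orbit."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:                      # phase 1: find a meeting point
        tortoise, hare = f(tortoise), f(f(hare))
    tortoise, mu = x0, 0
    while tortoise != hare:                      # phase 2: locate first periodic point
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    lam, hare = 1, f(tortoise)
    while tortoise != hare:                      # phase 3: measure the period
        hare = f(hare)
        lam += 1
    return mu, lam

f = lambda x: x * x % 50      # orbit of 2: 2, 4, 16, 6, 36, 46, 16, 6, ...
print(iterate(f, 3)(2))       # f^3(2) = 6
print(find_cycle(f, 2))       # first periodic point at index 2, period 4
```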
The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behavior of small neighborhoods under iteration. Also see infinite compositions of analytic functions. Other limiting behaviors are possible; for example, wandering points are points that move away, and never come back even close to where they started. If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle–Frobenius–Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states. In general, because repeated iteration corresponds to a shift, the transfer operator and its adjoint, the Koopman operator, can both be interpreted as shift operators acting on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos. The notion f^(1/n) must be used with care when the equation g^n(x) = f(x) has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for n = 2 and f(x) = 4x − 6, both g(x) = 6 − 2x and g(x) = 2x − 2 are solutions; so the expression f^(1/2)(x) does not denote a unique function, just as numbers have multiple algebraic roots. A trivial root of f can always be obtained if f's domain can be extended sufficiently. The roots chosen are normally the ones belonging to the orbit under study. Fractional iteration of a function can be defined: for instance, a half iterate of a function f is a function g such that g(g(x)) = f(x).[12] This function g(x) can be written using the index notation as f^(1/2)(x).
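For the example f(x) = 4x − 6, the non-uniqueness of the functional square root can be checked directly; a quick numeric sketch:

```python
f = lambda x: 4 * x - 6
g1 = lambda x: 6 - 2 * x   # one functional square root of f
g2 = lambda x: 2 * x - 2   # a different one

# Both satisfy g(g(x)) = f(x), so f^(1/2) does not denote a unique function.
for x in range(-10, 11):
    assert g1(g1(x)) == f(x)
    assert g2(g2(x)) == f(x)
print("both g1 and g2 are half iterates of f")
```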
Similarly, f^(1/3)(x) is the function defined such that f^(1/3)(f^(1/3)(f^(1/3)(x))) = f(x), while f^(2/3)(x) may be defined as equal to f^(1/3)(f^(1/3)(x)), and so forth, all based on the principle, mentioned earlier, that f^m ∘ f^n = f^(m+n). This idea can be generalized so that the iteration count n becomes a continuous parameter, a sort of continuous "time" of a continuous orbit.[13][14] In such cases, one refers to the system as a flow (cf. the section on conjugacy below). If a function is bijective (and so possesses an inverse function), then negative iterates correspond to function inverses and their compositions. For example, f^(−1)(x) is the normal inverse of f, while f^(−2)(x) is the inverse composed with itself, i.e. f^(−2)(x) = f^(−1)(f^(−1)(x)). Fractional negative iterates are defined analogously to fractional positive ones; for example, f^(−1/2)(x) is defined such that f^(−1/2)(f^(−1/2)(x)) = f^(−1)(x), or, equivalently, such that f^(−1/2)(f^(1/2)(x)) = f^0(x) = x. One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows.[15] This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on conjugacy. For example, setting f(x) = Cx + D gives the fixed point a = D/(1 − C), so the above formula terminates to just f^n(x) = D/(1 − C) + (x − D/(1 − C)) C^n = C^n x + ((1 − C^n)/(1 − C)) D, which is trivial to check. Find the value of √2^(√2^(√2^⋯)) where this is done n times (and possibly the interpolated values when n is not an integer). We have f(x) = √2^x. A fixed point is a = f(2) = 2.
So set x = 1; then f^n(1), expanded around the fixed point value of 2, is an infinite series, √2^(√2^(√2^⋯)) = f^n(1) = 2 − (ln 2)^n + (ln 2)^(n+1)((ln 2)^n − 1)/(4(ln 2 − 1)) − ⋯, which, taking just the first three terms, is correct to the first decimal place when n is positive. Also see Tetration: f^n(1) = ⁿ√2. Using the other fixed point a = f(4) = 4 causes the series to diverge. For n = −1, the series computes the inverse function 2 ln x/ln 2. With the function f(x) = x^b, expand around the fixed point 1 to get the series f^n(x) = 1 + b^n(x − 1) + (1/2) b^n(b^n − 1)(x − 1)^2 + (1/3!) b^n(b^n − 1)(b^n − 2)(x − 1)^3 + ⋯, which is simply the Taylor series of x^(b^n) expanded around 1. If f and g are two iterated functions, and there exists a homeomorphism h such that g = h^(−1) ∘ f ∘ h, then f and g are said to be topologically conjugate. Clearly, topological conjugacy is preserved under iteration, as g^n = h^(−1) ∘ f^n ∘ h. Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map.
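The two closed forms worked out above are easy to spot-check numerically: the affine formula f^n(x) = C^n x + ((1 − C^n)/(1 − C))D against direct iteration, and the √2 power tower against its attracting fixed point 2. A small verification sketch:

```python
import math

# Affine map f(x) = Cx + D: closed form vs. direct iteration.
C, D = 3.0, 2.0
f = lambda x: C * x + D
x, n = 1.0, 5
direct = x
for _ in range(n):
    direct = f(direct)
closed = C**n * x + (1 - C**n) / (1 - C) * D
assert abs(direct - closed) < 1e-9

# Power tower: iterating f(x) = sqrt(2)**x from x = 1 converges to the
# attracting fixed point a = 2 (|f'(2)| = ln 2 < 1), not to the repelling
# fixed point 4.
t = 1.0
for _ in range(200):
    t = math.sqrt(2) ** t
print(t)  # approximately 2.0
```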
As a special case, taking f(x) = x + 1, one has the iteration of g(x) = h^(−1)(h(x) + 1) as g^n(x) = h^(−1)(h(x) + n). Making the substitution x = h^(−1)(y) = ϕ(y) yields g(ϕ(y)) = ϕ(y + 1), Abel's equation. Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at x = 0, f(0) = 0, one may often solve[16] Schröder's equation for a function Ψ, which makes f(x) locally conjugate to a mere dilation, g(x) = f′(0)x, that is, Ψ(f(x)) = f′(0) Ψ(x). Thus, its iteration orbit, or flow, under suitable provisions (e.g., f′(0) ≠ 1), amounts to the conjugate of the orbit of the monomial, f^n(x) = Ψ^(−1)(f′(0)^n Ψ(x)), where n in this expression serves as a plain exponent: functional iteration has been reduced to multiplication! Here, however, the exponent n no longer needs be integer or positive, and is a continuous "time" of evolution for the full orbit:[17] the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group.[18] This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic. If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain. There are many chaotic maps. Well-known iterated functions include the Mandelbrot set and iterated function systems. Ernst Schröder,[20] in 1870, worked out special cases of the logistic map, such as the chaotic case f(x) = 4x(1 − x), so that Ψ(x) = arcsin^2(√x), hence f^n(x) = sin^2(2^n arcsin(√x)). A nonchaotic case Schröder also illustrated with his method, f(x) = 2x(1 − x), yielded Ψ(x) = −(1/2) ln(1 − 2x), and hence f^n(x) = −(1/2)((1 − 2x)^(2^n) − 1). If f is the action of a group element on a set, then the iterated function corresponds to a free group. Most functions do not have explicit general closed-form expressions for the n-th iterate. The table below lists some[20] that do. Note that all these expressions are valid even for non-integer and negative n, as well as non-negative integer n.
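Schröder's closed forms for the two logistic cases can likewise be verified against direct iteration; a numeric sketch (small n keeps the floating-point error amplified by the chaotic map negligible):

```python
import math

logistic4 = lambda x: 4 * x * (1 - x)            # chaotic case
closed4 = lambda n, x: math.sin(2**n * math.asin(math.sqrt(x)))**2

logistic2 = lambda x: 2 * x * (1 - x)            # nonchaotic case
closed2 = lambda n, x: -0.5 * ((1 - 2 * x)**(2**n) - 1)

x = 0.2
d4, d2 = x, x
for _ in range(5):
    d4, d2 = logistic4(d4), logistic2(d2)
assert abs(d4 - closed4(5, x)) < 1e-6
assert abs(d2 - closed2(5, x)) < 1e-9
print("both closed forms agree with direct iteration")
```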
Note: these two special cases of ax^2 + bx + c are the only cases that have a closed-form solution. Choosing b = 2 = −a and b = 4 = −a, respectively, further reduces them to the nonchaotic and chaotic logistic cases discussed prior to the table. Some of these examples are related among themselves by simple conjugacies. Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators. In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs. Two important functionals can be defined in terms of iterated functions: summation and the equivalent product. The functional derivative of an iterated function is given by a recursive formula. Iterated functions crop up in the series expansion of combined functions, such as g(f(x)). Given the iteration velocity, or beta function (physics), v for the n-th iterate of the function f,[22] one has, for example, for rigid advection, if f(x) = x + t, then v(x) = t. Consequently, g(x + t) = exp(t ∂/∂x) g(x), action by a plain shift operator. Conversely, one may specify f(x) given an arbitrary v(x), through the generic Abel equation discussed above. For continuous iteration index t, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group. The initial flow velocity v suffices to determine the entire flow, given this exponential realization, which automatically provides the general solution to the translation functional equation.[23]
https://en.wikipedia.org/wiki/Iterated_function
Mathematical induction is a method for proving that a statement P(n) is true for every natural number n, that is, that the infinitely many cases P(0), P(1), P(2), P(3), … all hold. This is done by first proving a simple case, then also showing that if we assume the claim is true for a given case, then the next case is also true. Informal metaphors help to explain this technique, such as falling dominoes or climbing a ladder: mathematical induction proves that we can climb as high as we like on a ladder, by proving that we can climb onto the bottom rung (the basis) and that from each rung we can climb up to the next one (the step). A proof by induction consists of two cases. The first, the base case, proves the statement for n = 0 without assuming any knowledge of other cases. The second case, the induction step, proves that if the statement holds for any given case n = k, then it must also hold for the next case n = k + 1. These two steps establish that the statement holds for every natural number n. The base case does not necessarily begin with n = 0, but often with n = 1, and possibly with any fixed natural number n = N, establishing the truth of the statement for all natural numbers n ≥ N. The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction in this extended sense is closely related to recursion. Mathematical induction is an inference rule used in formal proofs, and is the foundation of most correctness proofs for computer programs.[3] Despite its name, mathematical induction differs fundamentally from inductive reasoning as used in philosophy, in which the examination of many cases results in a probable conclusion.
The mathematical method examines infinitely many cases to prove a general statement, but it does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values. The result is a rigorous proof of the statement, not an assertion of its probability.[4] In 370 BC, Plato's Parmenides may have contained traces of an early example of an implicit inductive proof;[5] however, the earliest implicit proof by mathematical induction was written by al-Karaji around 1000 AD, who applied it to arithmetic sequences to prove the binomial theorem and properties of Pascal's triangle. Whilst the original work was lost, it was later referenced by Al-Samawal al-Maghribi in his treatise al-Bahir fi'l-jabr (The Brilliant in Algebra) in around 1150 AD.[6][7][8] Katz says in his history of mathematics: Another important idea introduced by al-Karaji and continued by al-Samaw'al and others was that of an inductive argument for dealing with certain arithmetic sequences. Thus al-Karaji used such an argument to prove the result on the sums of integral cubes already known to Aryabhata [...] Al-Karaji did not, however, state a general result for arbitrary n. He stated his theorem for the particular integer 10 [...] His proof, nevertheless, was clearly designed to be extendable to any other integer. [...] Al-Karaji's argument includes in essence the two basic components of a modern argument by induction, namely the truth of the statement for n = 1 (1 = 1^3) and the deriving of the truth for n = k from that of n = k − 1. Of course, this second component is not explicit since, in some sense, al-Karaji's argument is in reverse; that is, he starts from n = 10 and goes down to 1 rather than proceeding upward.
Nevertheless, his argument in al-Fakhri is the earliest extant proof of the sum formula for integral cubes.[9] In India, early implicit proofs by mathematical induction appear in Bhaskara's "cyclic method".[10] None of these ancient mathematicians, however, explicitly stated the induction hypothesis. Another similar case (contrary to what Vacca has written, as Freudenthal carefully showed)[11] was that of Francesco Maurolico in his Arithmeticorum libri duo (1575), who used the technique to prove that the sum of the first n odd integers is n^2. The earliest rigorous use of induction was by Gersonides (1288–1344).[12][13] The first explicit formulation of the principle of induction was given by Pascal in his Traité du triangle arithmétique (1665). Another Frenchman, Fermat, made ample use of a related principle: indirect proof by infinite descent. The induction hypothesis was also employed by the Swiss Jakob Bernoulli, and from then on it became well known. The modern formal treatment of the principle came only in the 19th century, with George Boole,[14] Augustus De Morgan, Charles Sanders Peirce,[15][16] Giuseppe Peano, and Richard Dedekind.[10] The simplest and most common form of mathematical induction infers that a statement involving a natural number n (that is, an integer n ≥ 0 or 1) holds for all values of n. The proof consists of two steps: The hypothesis in the induction step, that the statement holds for a particular n, is called the induction hypothesis or inductive hypothesis. To prove the induction step, one assumes the induction hypothesis for n and then uses this assumption to prove that the statement holds for n + 1. Authors who prefer to define natural numbers to begin at 0 use that value in the base case; those who define natural numbers to begin at 1 use that value.
Mathematical induction can be used to prove the following statement P(n) for all natural numbers n. P(n): 0 + 1 + 2 + ⋯ + n = n(n + 1)/2. This states a general formula for the sum of the natural numbers less than or equal to a given number; in fact an infinite sequence of statements: 0 = (0)(0 + 1)/2, 0 + 1 = (1)(1 + 1)/2, 0 + 1 + 2 = (2)(2 + 1)/2, etc. Proposition. For every n ∈ ℕ, 0 + 1 + 2 + ⋯ + n = n(n + 1)/2. Proof. Let P(n) be the statement 0 + 1 + 2 + ⋯ + n = n(n + 1)/2. We give a proof by induction on n. Base case: Show that the statement holds for the smallest natural number n = 0. P(0) is clearly true: 0 = 0(0 + 1)/2. Induction step: Show that for every k ≥ 0, if P(k) holds, then P(k + 1) also holds. Assume the induction hypothesis that for a particular k, the single case n = k holds, meaning P(k) is true: 0 + 1 + ⋯ + k = k(k + 1)/2. It follows that: (0 + 1 + 2 + ⋯ + k) + (k + 1) = k(k + 1)/2 + (k + 1). Algebraically, the right hand side simplifies as: k(k + 1)/2 + (k + 1) = (k(k + 1) + 2(k + 1))/2 = (k + 1)(k + 2)/2 = (k + 1)((k + 1) + 1)/2. Equating the extreme left hand and right hand sides, we deduce that: 0 + 1 + 2 + ⋯ + k + (k + 1) = (k + 1)((k + 1) + 1)/2. That is, the statement P(k + 1) also holds true, establishing the induction step. Conclusion: Since both the base case and the induction step have been proved as true, by mathematical induction the statement P(n) holds for every natural number n. Q.E.D. Induction is often used to prove inequalities.
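The sum identity just proved, and the key algebraic step of its induction (rhs(k) + (k + 1) = rhs(k + 1)), can be checked mechanically for many values of n; a small sketch with illustrative helper names:

```python
def lhs(n):
    """Left-hand side of P(n): the sum 0 + 1 + ... + n."""
    return sum(range(n + 1))

def rhs(n):
    """Right-hand side of P(n): n(n + 1)/2."""
    return n * (n + 1) // 2

assert rhs(0) == 0                                  # base case P(0)
for k in range(500):
    assert rhs(k) + (k + 1) == rhs(k + 1)           # the induction step's algebra
assert all(lhs(n) == rhs(n) for n in range(500))    # P(n) itself
print("P(n) checked for all n < 500")
```

Of course, a finite check is no substitute for the proof; it only illustrates the statement being proved.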
As an example, we prove that |sin nx| ≤ n|sin x| for any real number x and natural number n. At first glance, it may appear that a more general version, |sin nx| ≤ n|sin x| for any real numbers n, x, could be proven without induction; but the case n = 1/2, x = π shows it may be false for non-integer values of n. This suggests we examine the statement specifically for natural values of n, and induction is the readiest tool. Proposition. For any x ∈ ℝ and n ∈ ℕ, |sin nx| ≤ n|sin x|. Proof. Fix an arbitrary real number x, and let P(n) be the statement |sin nx| ≤ n|sin x|. We induct on n. Base case: The calculation |sin 0x| = 0 ≤ 0 = 0|sin x| verifies P(0). Induction step: We show the implication P(k) ⟹ P(k + 1) for any natural number k. Assume the induction hypothesis: for a given value n = k ≥ 0, the single case P(k) is true.
Using the angle addition formula and the triangle inequality, we deduce: |sin(k + 1)x| = |sin kx cos x + sin x cos kx| (angle addition) ≤ |sin kx cos x| + |sin x cos kx| (triangle inequality) = |sin kx||cos x| + |sin x||cos kx| ≤ |sin kx| + |sin x| (since |cos t| ≤ 1) ≤ k|sin x| + |sin x| (induction hypothesis) = (k + 1)|sin x|. The inequality between the extreme left-hand and right-hand quantities shows that P(k + 1) is true, which completes the induction step. Conclusion: The proposition P(n) holds for all natural numbers n. Q.E.D. In practice, proofs by induction are often structured differently, depending on the exact nature of the property to be proven. All variants of induction are special cases of transfinite induction; see below. If one wishes to prove a statement, not for all natural numbers, but only for all numbers n greater than or equal to a certain number b, then the proof by induction consists of the following: This can be used, for example, to show that 2^n ≥ n + 5 for n ≥ 3. In this way, one can prove that some statement P(n) holds for all n ≥ 1, or even for all n ≥ −5. This form of mathematical induction is actually a special case of the previous form, because if the statement to be proved is P(n), then proving it with these two rules is equivalent to proving P(n + b) for all natural numbers n with an induction base case 0.[17] Assume an infinite supply of 4- and 5-dollar coins.
Induction can be used to prove that any whole amount of dollars greater than or equal to 12 can be formed by a combination of such coins. Let S(k) denote the statement "k dollars can be formed by a combination of 4- and 5-dollar coins". The proof that S(k) is true for all k ≥ 12 can then be achieved by induction on k as follows: Base case: Showing that S(k) holds for k = 12 is simple: take three 4-dollar coins. Induction step: Given that S(k) holds for some value of k ≥ 12 (induction hypothesis), prove that S(k + 1) holds, too. Assume S(k) is true for some arbitrary k ≥ 12. If there is a solution for k dollars that includes at least one 4-dollar coin, replace it by a 5-dollar coin to make k + 1 dollars. Otherwise, if only 5-dollar coins are used, k must be a multiple of 5 and so at least 15; but then we can replace three 5-dollar coins by four 4-dollar coins to make k + 1 dollars. In each case, S(k + 1) is true. Therefore, by the principle of induction, S(k) holds for all k ≥ 12, and the proof is complete. In this example, although S(k) also holds for k ∈ {4, 5, 8, 9, 10}, the above proof cannot be modified to replace the minimum amount of 12 dollars with any lower value m. For m = 11, the base case is actually false; for m = 10, the second case in the induction step (replacing three 5- by four 4-dollar coins) will not work; let alone for even lower m. It is sometimes desirable to prove a statement involving two natural numbers, n and m, by iterating the induction process. That is, one proves a base case and an induction step for n, and in each of those proves a base case and an induction step for m. See, for example, the proof of commutativity accompanying addition of natural numbers. More complicated arguments involving three or more counters are also possible. The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement Q(n) is false for all natural numbers n.
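The induction step of the coin example above is constructive, so it converts directly into a recursion that actually produces the coins; a sketch (the function name is illustrative):

```python
def coins(k):
    """Return (fours, fives) with 4*fours + 5*fives == k, for k >= 12.

    Mirrors the induction proof: the base case k = 12 uses three 4-dollar
    coins; the step turns a solution for k - 1 into one for k."""
    assert k >= 12
    if k == 12:
        return (3, 0)
    fours, fives = coins(k - 1)
    if fours >= 1:
        return (fours - 1, fives + 1)   # swap one 4 for a 5: adds 1 dollar
    # only 5s were used, so k - 1 is a multiple of 5 >= 15 and fives >= 3:
    return (fours + 4, fives - 3)       # swap three 5s for four 4s: adds 1

for k in range(12, 200):
    fours, fives = coins(k)
    assert fours >= 0 and fives >= 0 and 4 * fours + 5 * fives == k
print(coins(13))  # (2, 1)
```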
Its traditional form consists of showing that if Q(n) is true for some natural number n, it also holds for some strictly smaller natural number m. Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that Q(n) cannot be true for any n.

The validity of this method can be verified from the usual principle of mathematical induction. Using mathematical induction on the statement P(n) defined as "Q(m) is false for all natural numbers m less than or equal to n", it follows that P(n) holds for all n, which means that Q(n) is false for every natural number n.

If one wishes to prove that a property P holds for all natural numbers less than or equal to a fixed N, proving that P satisfies the following conditions suffices:[18] P holds for N, and whenever P holds for some m with 0 < m ≤ N, P also holds for m − 1.

The most common form of proof by mathematical induction requires proving in the induction step that

{\displaystyle \forall k\,(P(k)\to P(k+1))}

whereupon the induction principle "automates" n applications of this step in getting from P(0) to P(n). This could be called "predecessor induction" because each step proves something about a number from something about that number's predecessor.

A variant of interest in computational complexity is "prefix induction", in which one proves the following statement in the induction step:

{\displaystyle \forall k\,(P(k)\to P(2k)\land P(2k+1))}

or equivalently

{\displaystyle \forall k\,\left(P\!\left(\left\lfloor {\frac {k}{2}}\right\rfloor \right)\to P(k)\right)}

The induction principle then "automates" log₂ n applications of this inference in getting from P(0) to P(n). In fact, it is called "prefix induction" because each step proves something about a number from something about the "prefix" of that number, as formed by truncating the low bit of its binary representation. It can also be viewed as an application of traditional induction on the length of that binary representation.
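Prefix induction's k → ⌊k/2⌋ step has the same shape as exponentiation by squaring, which reaches n in about log₂ n halvings rather than n decrements. A short illustrative sketch (our analogy, not from the article):

```c
/* Recursion on n -> n / 2, the shape of prefix induction: roughly
   log2(n) recursive steps instead of the n steps a decrement-by-one
   ("predecessor") recursion would take. */
long long pow_sq(long long base, unsigned n) {
    if (n == 0) return 1;                   /* base case, P(0) */
    long long h = pow_sq(base, n / 2);      /* the "prefix" subproblem */
    return (n % 2 == 0) ? h * h : h * h * base;
}
```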
If traditional predecessor induction is interpreted computationally as an n-step loop, then prefix induction would correspond to a log-n-step loop. Because of that, proofs using prefix induction are "more feasibly constructive" than proofs using predecessor induction. Predecessor induction can trivially simulate prefix induction on the same statement. Prefix induction can simulate predecessor induction, but only at the cost of making the statement more syntactically complex (adding a bounded universal quantifier), so the interesting results relating prefix induction to polynomial-time computation depend on excluding unbounded quantifiers entirely, and limiting the alternation of bounded universal and existential quantifiers allowed in the statement.[19]

One can take the idea a step further: one must prove

{\displaystyle \forall k\,\left(P\!\left(\left\lfloor {\sqrt {k}}\right\rfloor \right)\to P(k)\right)}

whereupon the induction principle "automates" log log n applications of this inference in getting from P(0) to P(n). This form of induction has been used, analogously, to study log-time parallel computation.[citation needed]

Another variant, called complete induction, course of values induction or strong induction (in contrast to which the basic form of induction is sometimes known as weak induction), makes the induction step easier to prove by using a stronger hypothesis: one proves the statement P(m + 1) under the assumption that P(n) holds for all natural numbers n less than m + 1; by contrast, the basic form only assumes P(m). The name "strong induction" does not mean that this method can prove more than "weak induction", but merely refers to the stronger hypothesis used in the induction step. In fact, it can be shown that the two methods are actually equivalent, as explained below.
In this form of complete induction, one still has to prove the base case, P(0), and it may even be necessary to prove extra base cases such as P(1) before the general argument applies, as in the example below of the Fibonacci number F_n.

Although the form just described requires one to prove the base case, this is unnecessary if one can prove P(m) (assuming P(n) for all lower n) for all m ≥ 0. This is a special case of transfinite induction as described below, although it is no longer equivalent to ordinary induction. In this form the base case is subsumed by the case m = 0, where P(0) is proved with no other P(n) assumed; this case may need to be handled separately, but sometimes the same argument applies for m = 0 and m > 0, making the proof simpler and more elegant. In this method, however, it is vital to ensure that the proof of P(m) does not implicitly assume that m > 0, e.g. by saying "choose an arbitrary n < m", or by assuming that a set of m elements has an element.

Complete induction is equivalent to ordinary mathematical induction as described above, in the sense that a proof by one method can be transformed into a proof by the other. Suppose there is a proof of P(n) by complete induction. Then, this proof can be transformed into an ordinary induction proof by assuming a stronger inductive hypothesis. Let Q(n) be the statement "P(m) holds for all m such that 0 ≤ m ≤ n"; this becomes the inductive hypothesis for ordinary induction.
We can then show Q(0) and Q(n + 1) for n ∈ ℕ assuming only Q(n), and show that Q(n) implies P(n).[20]

If, on the other hand, P(n) had been proven by ordinary induction, the proof would already effectively be one by complete induction: P(0) is proved in the base case, using no assumptions, and P(n + 1) is proved in the induction step, in which one may assume all earlier cases but need only use the case P(n).

Complete induction is most useful when several instances of the inductive hypothesis are required for each induction step. For example, complete induction can be used to show that

{\displaystyle F_{n}={\frac {\varphi ^{n}-\psi ^{n}}{\varphi -\psi }}}

where F_n is the n-th Fibonacci number, and φ = (1 + √5)/2 (the golden ratio) and ψ = (1 − √5)/2 are the roots of the polynomial x² − x − 1. By using the fact that F_{n+2} = F_{n+1} + F_n for each n ∈ ℕ, the identity above can be verified by direct calculation for F_{n+2} if one assumes that it already holds for both F_{n+1} and F_n. To complete the proof, the identity must be verified in the two base cases: n = 0 and n = 1.

Another proof by complete induction uses the hypothesis that the statement holds for all smaller n more thoroughly. Consider the statement that "every natural number greater than 1 is a product of (one or more) prime numbers", which is the "existence" part of the fundamental theorem of arithmetic.
For proving the induction step, the induction hypothesis is that for a given m > 1 the statement holds for all smaller n > 1. If m is prime then it is certainly a product of primes, and if not, then by definition it is a product: m = n₁n₂, where neither of the factors is equal to 1; hence neither is equal to m, and so both are greater than 1 and smaller than m. The induction hypothesis now applies to n₁ and n₂, so each one is a product of primes. Thus m is a product of products of primes, and hence by extension a product of primes itself.

We shall look to prove the same example as above, this time with strong induction. The statement remains the same:

{\displaystyle S(n):\,\,n\geq 12\implies \exists \,a,b\in \mathbb {N} .\,\,n=4a+5b}

However, there will be slight differences in the structure and the assumptions of the proof, starting with the extended base case.

Proof.

Base case: Show that S(k) holds for k = 12, 13, 14, 15.

{\displaystyle {\begin{aligned}4\cdot 3+5\cdot 0&=12\\4\cdot 2+5\cdot 1&=13\\4\cdot 1+5\cdot 2&=14\\4\cdot 0+5\cdot 3&=15\end{aligned}}}

The base case holds.

Induction step: Given some j > 15, assume S(m) holds for all m with 12 ≤ m < j. Prove that S(j) holds. Choosing m = j − 4, and observing that 15 < j ⟹ 12 ≤ j − 4 < j, shows that S(j − 4) holds by the inductive hypothesis. That is, the sum j − 4 can be formed by some combination of 4- and 5-dollar coins. Then, simply adding a 4-dollar coin to that combination yields the sum j.
That is, S(j) holds.[21] Q.E.D.

Sometimes, it is more convenient to deduce backwards, proving the statement for n − 1, given its validity for n. However, proving the validity of the statement for no single number suffices to establish the base case; instead, one needs to prove the statement for an infinite subset of the natural numbers. For example, Augustin Louis Cauchy first used forward (regular) induction to prove the inequality of arithmetic and geometric means for all powers of 2, and then used backwards induction to show it for all natural numbers.[22][23]

The induction step must be proved for all values of n. To illustrate this, Joel E. Cohen proposed the following argument, which purports to prove by mathematical induction that all horses are of the same color:[24]

Base case: in a set of only one horse, there is only one color.

Induction step: assume as induction hypothesis that within any set of n horses, there is only one color. Now look at any set of n + 1 horses. Number them: 1, 2, 3, …, n, n + 1. Consider the sets {1, 2, 3, …, n} and {2, 3, 4, …, n + 1}. Each is a set of only n horses, therefore within each there is only one color. But the two sets overlap, so there must be only one color among all n + 1 horses.

The base case n = 1 is trivial, and the induction step is correct in all cases n > 1. However, the argument used in the induction step is incorrect for n + 1 = 2, because the statement that "the two sets overlap" is false for {1} and {2}.
In second-order logic, one can write down the "axiom of induction" as follows:

{\displaystyle \forall P\,{\Bigl (}P(0)\land \forall k{\bigl (}P(k)\to P(k+1){\bigr )}\to \forall n\,{\bigl (}P(n){\bigr )}{\Bigr )},}

where P(·) is a variable for predicates involving one natural number and k and n are variables for natural numbers. In words, the base case P(0) and the induction step (namely, that the induction hypothesis P(k) implies P(k + 1)) together imply that P(n) for any natural number n. The axiom of induction asserts the validity of inferring that P(n) holds for any natural number n from the base case and the induction step.

The first quantifier in the axiom ranges over predicates rather than over individual numbers. This is a second-order quantifier, which means that this axiom is stated in second-order logic. Axiomatizing arithmetic induction in first-order logic requires an axiom schema containing a separate axiom for each possible predicate. The article Peano axioms contains further discussion of this issue.

The axiom of structural induction for the natural numbers was first formulated by Peano, who used it to specify the natural numbers together with the following four other axioms: 0 is a natural number; the successor of every natural number is a natural number; no two distinct natural numbers have the same successor; and 0 is not the successor of any natural number.

In first-order ZFC set theory, quantification over predicates is not allowed, but one can still express induction by quantification over sets:

{\displaystyle \forall A{\Bigl (}0\in A\land \forall k\in \mathbb {N} {\bigl (}k\in A\to (k+1)\in A{\bigr )}\to \mathbb {N} \subseteq A{\Bigr )}}

Here A may be read as a set representing a proposition, and containing natural numbers, for which the proposition holds. This is not an axiom, but a theorem, given that natural numbers are defined in the language of ZFC set theory by axioms, analogous to Peano's. See construction of the natural numbers using the axiom of infinity and axiom schema of specification.
One variation of the principle of complete induction can be generalized for statements about elements of any well-founded set, that is, a set with an irreflexive relation < that contains no infinite descending chains. Every set representing an ordinal number is well-founded; the set of natural numbers is one of them.

Applied to a well-founded set, transfinite induction can be formulated as a single step. To prove that a statement P(n) holds for each ordinal number, it suffices to show, for each ordinal n, that if P(m) holds for all m < n, then P(n) also holds.

This form of induction, when applied to a set of ordinal numbers (which form a well-ordered and hence well-founded class), is called transfinite induction. It is an important proof technique in set theory, topology and other fields. Proofs by transfinite induction typically distinguish three cases: when n is a minimal element, i.e. there is no element smaller than n; when n has a direct predecessor, i.e. the set of elements smaller than n has a largest element; and when n has no direct predecessor, i.e. n is a so-called limit ordinal.

Strictly speaking, it is not necessary in transfinite induction to prove a base case, because it is a vacuous special case of the proposition that if P is true of all n < m, then P is true of m. It is vacuously true precisely because there are no values of n < m that could serve as counterexamples. So the special cases are special cases of the general case.

The principle of mathematical induction is usually stated as an axiom of the natural numbers; see Peano axioms. It is strictly stronger than the well-ordering principle in the context of the other Peano axioms. Suppose the following: It can then be proved that induction, given the above-listed axioms, implies the well-ordering principle. The following proof uses complete induction and the first and fourth axioms.

Proof. Suppose there exists a non-empty set, S, of natural numbers that has no least element. Let P(n) be the assertion that n is not in S. Then P(0) is true, for if it were false then 0 would be the least element of S. Furthermore, let n be a natural number, and suppose P(m) is true for all natural numbers m less than n + 1. Then if P(n + 1) is false, n + 1 is in S, thus being a minimal element in S, a contradiction. Thus P(n + 1) is true.
Therefore, by the complete induction principle, P(n) holds for all natural numbers n; so S is empty, a contradiction. Q.E.D.

On the other hand, the set {(0, n) : n ∈ ℕ} ∪ {(1, n) : n ∈ ℕ}, shown in the picture, is well-ordered[25] by the lexicographic order. Moreover, except for the induction axiom, it satisfies all Peano axioms, where Peano's constant 0 is interpreted as the pair (0, 0), and Peano's successor function is defined on pairs by succ(x, n) = (x, n + 1) for all x ∈ {0, 1} and n ∈ ℕ.

As an example for the violation of the induction axiom, define the predicate P(x, n) as (x, n) = (0, 0) or (x, n) = succ(y, m) for some y ∈ {0, 1} and m ∈ ℕ. Then the base case P(0, 0) is trivially true, and so is the induction step: if P(x, n), then P(succ(x, n)). However, P is not true for all pairs in the set, since P(1, 0) is false.

Peano's axioms with the induction principle uniquely model the natural numbers. Replacing the induction principle with the well-ordering principle allows for more exotic models that fulfill all the axioms.[25]

It is mistakenly printed in several books[25] and sources that the well-ordering principle is equivalent to the induction axiom. In the context of the other Peano axioms, this is not the case, but in the context of other axioms, they are equivalent;[25] specifically, the well-ordering principle implies the induction axiom in the context of the first two above-listed axioms and the assumption that every natural number is either 0 or n + 1 for some natural number n. A common mistake in many erroneous proofs is to assume that n − 1 is a unique and well-defined natural number, a property which is not implied by the other Peano axioms.[25]
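The counterexample model above is small enough to check mechanically. A sketch in C (the type and helper names are ours) encoding the set {0, 1} × ℕ, its successor function, and the predicate P:

```c
#include <stdbool.h>

/* The model {0, 1} x N, with succ(x, n) = (x, n + 1). */
typedef struct { int x; int n; } Pair;

Pair succ(Pair p) {
    return (Pair){ p.x, p.n + 1 };
}

/* P(x, n): (x, n) == (0, 0), or (x, n) is succ(y, m) for some pair.
   The image of succ is exactly the pairs with n >= 1. */
bool P(Pair p) {
    if (p.x == 0 && p.n == 0) return true;
    return p.n >= 1;
}
```

The base case P(0, 0) holds, and P(succ(x, n)) always holds since a successor has n ≥ 1; yet P(1, 0) is false, exhibiting the failure of the induction axiom in this model.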
https://en.wikipedia.org/wiki/Mathematical_induction
In Western art history, mise en abyme (French pronunciation: [mizɑ̃n‿abim]; also mise en abîme) is the technique of placing a copy of an image within itself, often in a way that suggests an infinitely recurring sequence. In film theory and literary theory, it refers to the story within a story technique.

The term is derived from heraldry, and means placed into abyss (exact middle of a shield). It was first appropriated for modern criticism by the French author André Gide.

A common sense of the phrase is the visual experience of standing between two mirrors and seeing an infinite reproduction of one's image.[1] Another is the Droste effect, in which a picture appears within itself, in a place where a similar picture would realistically be expected to appear.[2] The Droste effect is named after the 1904 Droste cocoa package, which depicts a woman holding a tray bearing a Droste cocoa package, which bears a smaller version of her image.[3]

In the terminology of heraldry, the abyme or abisme is the center of a coat of arms. The term mise en abyme (also called inescutcheon) then meant "put/placed in the center". It described a coat of arms that appears as a smaller shield in the center of a larger one (see Droste effect). A complex example of mise en abyme is seen in the coat of arms of the United Kingdom for the period 1801–1837, as used by Kings George III, George IV and William IV. The crown of Charlemagne is placed en abyme within the escutcheon of Hanover, which in turn is en abyme within the arms of England, Scotland, and Ireland.

While art historians working on the early-modern period adopted this phrase and interpreted it as showing artistic "self-awareness", medievalists tended not to use it.[citation needed] Many examples, however, can be found in the pre-modern era, as in a mosaic from the Hagia Sophia dated to the year 944. To the left, Justinian I offers the Virgin Mary the Hagia Sophia, which contains the mosaic itself.
To the right, Constantine I offers the city of Constantinople (now known as Istanbul), which itself contains the Hagia Sophia. More medieval examples can be found in the collection of articles Medieval mise-en-abyme: the object depicted within itself,[4] in which Jersey Ellis conjectures that the self-references sometimes are used to strengthen the symbolism of gift-giving by documenting the act of giving on the object itself. An example of this self-referential gift-giving appears in the Stefaneschi Triptych in the Vatican Museum, which features Cardinal Giacomo Gaetani Stefaneschi as the giver of the altarpiece.[5]

In Western art history, mise en abyme is a formal technique in which an image contains a smaller copy of itself, in a sequence appearing to recur infinitely; "recursion" is another term for this. The modern meaning of the phrase originates with the author André Gide, who used it to describe self-reflexive embeddings in various art forms and to describe what he sought in his work.[4] As examples, Gide cites both paintings such as Las Meninas by Diego Velázquez and literary forms such as William Shakespeare's use of the "play within a play" device in Hamlet, where a theatrical company presents a performance for the characters that illuminates a thematic aspect of the play itself. This use of the phrase mise en abyme was picked up by scholars and popularized in the 1977 book Le récit spéculaire. Essai sur la mise en abyme by Lucien Dällenbach.[6]

Mise en abyme occurs in a text when there is a reduplication of images or concepts referring to the textual whole. Mise en abyme is a play of signifiers within a text, of sub-texts mirroring each other.[7] This mirroring can attain a level where meaning may become unstable and, in this respect, may be seen as part of the process of deconstruction. The film-within-a-film, where a film contains a plot about the making of a film, is an example of mise en abyme.
The film being made within the film refers, through its mise en scène, to the real film being made. The spectator sees film equipment, stars getting ready for the take, and crew sorting out the various directorial needs. The narrative of the film within the film may directly reflect the one in the real film.[8] One example is Björk's video Bachelorette,[9] directed by Michel Gondry; another is La Nuit américaine (1973) by François Truffaut.

In film, the meaning of mise en abyme is similar to the artistic definition, but also includes the idea of a "dream within a dream". For example, a character awakens from a dream and later discovers that they are still dreaming. Activities similar to dreaming, such as unconsciousness and virtual reality, also are described as mise en abyme. This is seen in the film eXistenZ, where the two protagonists never truly know whether or not they are out of the game. It also becomes a prominent element of Charlie Kaufman's Synecdoche, New York (2008). More recent instances can be found in the films Inland Empire (2007) and Inception (2010). Classic film examples include the snow globe in Citizen Kane (1941), which provides a clue to the film's core mystery, and the discussion of Edgar Allan Poe's written works (particularly "The Purloined Letter") in the Jean-Luc Godard film Band of Outsiders (1964).

In literary criticism, mise en abyme is a type of frame story, in which the core narrative may be used to illuminate some aspect of the framing story. The term is used in deconstruction and deconstructive literary criticism as a paradigm of the intertextual nature of language, that is, of the way language never quite reaches the foundation of reality because it refers, in a frame-within-a-frame way, to another language, which refers to another language, and so forth.[10]

In video games, the first chapter of the game There Is No Game: Wrong Dimension (2020) is titled "Mise en abyme".
In comedy, the final act of The Inside Outtakes (2022) by Bo Burnham contains a chapter titled "Mise en abyme". It shows footage being projected onto a monitor that is captured by the camera, slightly delayed at each step. This effect highlights the disconnection between Burnham and the project during the artistic process.[citation needed]
https://en.wikipedia.org/wiki/Mise_en_abyme
Reentrancy is a programming concept where a function or subroutine can be interrupted and then resumed before it finishes executing. This means that the function can be called again before it completes its previous execution. Reentrant code is designed to be safe and predictable when multiple instances of the same function are called simultaneously or in quick succession.

A computer program or subroutine is called reentrant if multiple invocations can safely run concurrently on multiple processors, or if on a single-processor system its execution can be interrupted and a new execution of it can be safely started (it can be "re-entered"). The interruption could be caused by an internal action such as a jump or call (which might be a recursive call; reentering a function is a generalization of recursion), or by an external action such as an interrupt or signal.

This definition originates from multiprogramming environments, where multiple processes may be active concurrently and where the flow of control could be interrupted by an interrupt and transferred to an interrupt service routine (ISR) or "handler" subroutine. Any subroutine used by the handler that could potentially have been executing when the interrupt was triggered should be reentrant. Similarly, code shared by two processors accessing shared data should be reentrant. Often, subroutines accessible via the operating system kernel are not reentrant. Hence, interrupt service routines are limited in the actions they can perform; for instance, they are usually restricted from accessing the file system and sometimes even from allocating memory.

Reentrancy is neither necessary nor sufficient for thread-safety in multi-threaded environments. In other words, a reentrant subroutine can be thread-safe,[1] but is not guaranteed to be.[2] Conversely, thread-safe code need not be reentrant (see below for examples).
Other terms used for reentrant programs include "sharable code".[3] Reentrant subroutines are sometimes marked in reference material as being "signal safe".[4] Reentrant programs are often[a] "pure procedures".

Reentrancy is not the same thing as idempotence, in which the function may be called more than once yet generate exactly the same output as if it had only been called once. Generally speaking, a function produces output data based on some input data (though both are optional, in general). Shared data could be accessed by any function at any time. If data can be changed by any function (and none keeps track of those changes), there is no guarantee to those that share a datum that that datum is the same as at any time before.

Data has a characteristic called scope, which describes where in a program the data may be used. Data scope is either global (outside the scope of any function and with an indefinite extent) or local (created each time a function is called and destroyed upon exit). Local data is not shared by any routines, re-entering or not; therefore, it does not affect re-entrance. Global data is defined outside functions and can be accessed by more than one function, either in the form of global variables (data shared between all functions), or as static variables (data shared by all invocations of the same function). In object-oriented programming, global data is defined in the scope of a class and can be private, making it accessible only to functions of that class. There is also the concept of instance variables, where a class variable is bound to a class instance. For these reasons, in object-oriented programming, this distinction is usually reserved for the data accessible outside of the class (public), and for the data independent of class instances (static).

Reentrancy is distinct from, but closely related to, thread-safety. A function can be thread-safe and still not reentrant.
For example, a function could be wrapped all around with a mutex (which avoids problems in multithreading environments), but, if that function were used in an interrupt service routine, it could starve waiting for the first execution to release the mutex. The key for avoiding confusion is that reentrant refers to only one thread executing. It is a concept from the time when no multitasking operating systems existed.

Reentrant code may not modify itself without synchronization. It may, however, modify itself if it resides in its own unique memory. That is, if each new invocation uses a different physical machine code location where a copy of the original code is made, it will not affect other invocations even if it modifies itself during execution of that particular invocation (thread).

Reentrancy of a subroutine that operates on operating-system resources or non-local data depends on the atomicity of the respective operations. For example, if the subroutine modifies a 64-bit global variable on a 32-bit machine, the operation may be split into two 32-bit operations, and thus, if the subroutine is interrupted while executing, and called again from the interrupt handler, the global variable may be in a state where only 32 bits have been updated.

The programming language might provide atomicity guarantees for interruption caused by an internal action such as a jump or call. Then the function f in an expression like (global:=1) + (f()), where the order of evaluation of the subexpressions might be arbitrary in a programming language, would see the global variable either set to 1 or to its previous value, but not in an intermediate state where only part has been updated. (The latter can happen in C, because the expression has no sequence point.)

The operating system might provide atomicity guarantees for signals, such as a system call interrupted by a signal not having a partial effect. The processor hardware might provide atomicity guarantees for interrupts, such as interrupted processor instructions not having partial effects.
To illustrate reentrancy, this article uses as an example a C utility function, swap(), that takes two pointers and transposes their values, and an interrupt-handling routine that also calls the swap function.

A swap function that keeps its temporary in a global variable fails to be either reentrant or thread-safe. Since the tmp variable is globally shared, without synchronization, among any concurrent instances of the function, one instance may interfere with the data relied upon by another. As such, it should not be used in the interrupt service routine isr().

Such a swap() can be made thread-safe by making tmp thread-local. It still fails to be reentrant, and this will continue to cause problems if isr() is called in the same context as a thread already executing swap().

An implementation of swap() that allocates tmp on the stack instead of globally, and that is called only with unshared variables as parameters,[b] is both thread-safe and reentrant. It is thread-safe because the stack is local to a thread, and a function acting just on local data will always produce the expected result. There is no access to shared data and therefore no data race.

A reentrant interrupt handler is an interrupt handler that re-enables interrupts early in the interrupt handler. This may reduce interrupt latency.[7] In general, while programming interrupt service routines, it is recommended to re-enable interrupts as soon as possible in the interrupt handler. This practice helps to avoid losing interrupts.[8]

As a further example, consider a function f that depends on a non-constant global variable v, and a function g that calls f; neither is reentrant. If f() is interrupted during execution by an ISR which modifies v, then reentry into f() will return the wrong value of v. The value of v and, therefore, the return value of f, cannot be predicted with confidence: they will vary depending on whether an interrupt modified v during f's execution. Hence, f is not reentrant. Neither is g, because it calls f, which is not reentrant.
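The C listings this passage refers to did not survive extraction; the following is a hedged reconstruction consistent with the prose (the names tmp, swap, isr, f, g and v come from the text, but the bodies are our sketch, not the article's verbatim code):

```c
int tmp;        /* global: shared by every invocation of swap() */

/* Neither reentrant nor thread-safe: an interruption between the
   first and last statements can clobber tmp via a nested call. */
void swap(int *x, int *y) {
    tmp = *x;
    *x = *y;
    /* if an ISR runs swap() at this point, tmp is overwritten */
    *y = tmp;
}

void isr(void) {            /* interrupt handler that also swaps */
    int a = 1, b = 2;
    swap(&a, &b);
}

int v = 1;

/* Not reentrant: f() reads and writes the non-constant global v,
   and g() is not reentrant because it calls f(). */
int f(void) {
    v += 2;
    return v;
}

int g(void) {
    int r = f();            /* sequenced before the read of v below */
    return r + v;
}
```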
Slightly altered versions of f and g, in which no global state is touched, are reentrant. A function can also be thread-safe and yet not (necessarily) reentrant: such a function can be called by different threads without any problem, but, if it is used in a reentrant interrupt handler and a second interrupt arises inside the function, the second routine will hang forever. As interrupt servicing can disable other interrupts, the whole system could suffer.
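The listings for these two cases are likewise missing; a sketch of what they might look like (passing data through parameters instead of the global v, and using a pthread mutex for the thread-safe case, are our assumptions, not the article's verbatim code):

```c
#include <pthread.h>

/* Reentrant: all state arrives through parameters and locals, so an
   interrupted execution cannot disturb a nested one. */
int f(int i) {
    return i + 2;
}

int g(int i) {
    return f(i) + 2;
}

/* Thread-safe but not reentrant: the mutex serializes threads, but a
   nested call from an interrupt handler on the same thread would block
   forever, waiting for the lock that thread already holds. */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int counter = 0;

int function(void) {
    pthread_mutex_lock(&lock);
    int result = ++counter;     /* protected shared state */
    pthread_mutex_unlock(&lock);
    return result;
}
```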
https://en.wikipedia.org/wiki/Reentrant_(subroutine)
Self-reference is a concept that involves referring to oneself or one's own attributes, characteristics, or actions. It can occur in language, logic, mathematics, philosophy, and other fields.

In natural or formal languages, self-reference occurs when a sentence, idea or formula refers to itself. The reference may be expressed either directly, through some intermediate sentence or formula, or by means of some encoding. In philosophy, self-reference also refers to the ability of a subject to speak of or refer to itself, that is, to have the kind of thought expressed by the first person nominative singular pronoun "I" in English.

Self-reference is studied and has applications in mathematics, philosophy, computer programming, second-order cybernetics, and linguistics, as well as in humor. Self-referential statements are sometimes paradoxical, and can also be considered recursive.

In classical philosophy, paradoxes were created by self-referential concepts such as the omnipotence paradox of asking if it was possible for a being to exist so powerful that it could create a stone that it could not lift. The Epimenides paradox, "All Cretans are liars", when uttered by an ancient Greek Cretan, was one of the first recorded versions. Contemporary philosophy sometimes employs the same technique to demonstrate that a supposed concept is meaningless or ill-defined.[2]

In mathematics and computability theory, self-reference (also known as impredicativity) is the key concept in proving limitations of many systems. Gödel's theorem uses it to show that no formal consistent system of mathematics can ever contain all possible mathematical truths, because it cannot prove some truths about its own structure. The halting problem equivalent, in computation theory, shows that there is always some task that a computer cannot perform, namely reasoning about itself. These proofs relate to a long tradition of mathematical paradoxes such as Russell's paradox and Berry's paradox, and ultimately to classical philosophical paradoxes.
In game theory, undefined behaviors can occur where two players must model each other's mental states and behaviors, leading to infinite regress. In computer programming, self-reference occurs in reflection, where a program can read or modify its own instructions like any other data.[3] Numerous programming languages support reflection to some extent with varying degrees of expressiveness. Additionally, self-reference is seen in recursion (related to the mathematical recurrence relation) in functional programming, where a code structure refers back to itself during computation.[4] 'Taming' self-reference from potentially paradoxical concepts into well-behaved recursions has been one of the great successes of computer science, and is now used routinely in, for example, writing compilers using the 'meta-language' ML. Using a compiler to compile itself is known as bootstrapping. Self-modifying code is possible to write (programs which operate on themselves), both with assembler and with functional languages such as Lisp, but is generally discouraged in real-world programming. Computing hardware makes fundamental use of self-reference in flip-flops, the basic units of digital memory, which convert potentially paradoxical logical self-relations into memory by expanding their terms over time. Thinking in terms of self-reference is a pervasive part of programmer culture, with many programs and acronyms named self-referentially as a form of humor, such as GNU ('GNU's not Unix') and PINE ('Pine is not Elm'). The GNU Hurd is named for a pair of mutually self-referential acronyms. Tupper's self-referential formula is a mathematical curiosity which plots an image of its own formula. Self-reference occurs in literature and film when an author refers to his or her own work in the context of the work itself.
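A compact, concrete example of benign self-reference in programming is a quine: a program whose output is exactly its own source code. The following is a standard Python quine pattern, offered as an illustrative sketch rather than an example drawn from the works cited above.

```python
# A minimal Python quine: the string s is a template for the whole
# program, and printing s % s reproduces the source exactly.
# %r inserts the quoted form of s; %% collapses to a single %.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the program's own two lines, because the formatted string reconstructs both the assignment to s and the print statement.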
Examples include Miguel de Cervantes' Don Quixote, Shakespeare's A Midsummer Night's Dream, The Tempest and Twelfth Night, Denis Diderot's Jacques le fataliste et son maître, Italo Calvino's If on a winter's night a traveler, many stories by Nikolai Gogol, Lost in the Funhouse by John Barth, Luigi Pirandello's Six Characters in Search of an Author, Federico Fellini's 8½ and Bryan Forbes's The L-Shaped Room. Speculative fiction writer Samuel R. Delany makes use of this in his novels Nova and Dhalgren. In the former, Katin (a space-faring novelist) is wary of a long-standing curse wherein a novelist dies before completing any given work. Nova ends mid-sentence, thus lending credence to the curse and the realization that the novelist is the author of the story; likewise, throughout Dhalgren, Delany has a protagonist simply named The Kid (or Kidd, in some sections), whose life and work are mirror images of themselves and of the novel itself. In the sci-fi spoof film Spaceballs, director Mel Brooks includes a scene wherein the evil characters are viewing a VHS copy of their own story, which shows them watching themselves "watching themselves", ad infinitum. Perhaps the earliest example is in Homer's Iliad, where Helen of Troy laments: "for generations still unborn/we will live in song" (appearing in the song itself).[5] Self-reference in art is closely related to the concepts of breaking the fourth wall and meta-reference, which often involve self-reference. The short stories of Jorge Luis Borges play with self-reference and related paradoxes in many ways. Samuel Beckett's Krapp's Last Tape consists entirely of the protagonist listening to and making recordings of himself, mostly about other recordings. During the 1990s and 2000s filmic self-reference was a popular part of the rubber reality movement, notably in Charlie Kaufman's films Being John Malkovich and Adaptation, the latter pushing the concept arguably to its breaking point as it attempts to portray its own creation, in a dramatized version of the Droste effect.
Various creation myths invoke self-reference to solve the problem of what created the creator. For example, the Egyptian creation myth has a god swallowing his own semen to create himself. The Ouroboros is a mythical dragon which eats itself. The Quran includes numerous instances of self-referentiality.[6][7] The surrealist painter René Magritte is famous for his self-referential works. His painting The Treachery of Images includes the words "this is not a pipe", the truth of which depends entirely on whether the word ceci (in English, "this") refers to the pipe depicted, or to the painting or the word or sentence itself.[8] M. C. Escher's art also contains many self-referential concepts such as hands drawing themselves. A word that describes itself is called an autological word (or autonym). This generally applies to adjectives, for example sesquipedalian (i.e. "sesquipedalian" is a sesquipedalian word), but can also apply to other parts of speech, such as TLA, as a three-letter abbreviation for "three-letter abbreviation". A sentence which inventories its own letters and punctuation marks is called an autogram. There is a special case of meta-sentence in which the content of the sentence in the metalanguage and the content of the sentence in the object language are the same. Such a sentence is referring to itself. However, some meta-sentences of this type can lead to paradoxes. "This is a sentence." can be considered to be a self-referential meta-sentence which is obviously true. However, "This sentence is false" is a meta-sentence which leads to a self-referential paradox. Such sentences can lead to problems, for example, in law, where statements bringing laws into existence can contradict one another or themselves. Kurt Gödel claimed to have found such a loophole in the United States Constitution at his citizenship ceremony. Self-reference occasionally occurs in the media when it is required to write about itself, for example the BBC reporting on job cuts at the BBC.
Notable encyclopedias may be required to feature articles about themselves, such as Wikipedia's article on Wikipedia. Fumblerules are a list of rules of good grammar and writing, demonstrated through sentences that violate those very rules, such as "Avoid cliches like the plague" and "Don't use no double negatives". The term was coined in a published list of such rules by William Safire.[9][10] Circular definition is a type of self-reference in which the definition of a term or concept includes the term or concept itself, either explicitly or implicitly. Circular definitions are considered fallacious because they only define a term in terms of itself.[11] This type of self-reference may be useful in argumentation, but can result in a lack of clarity in communication. The adverb "hereby" is used in a self-referential way, for example in the statement "I hereby declare you husband and wife."[12] Several constitutions contain self-referential clauses defining how the constitution itself may be amended.[15] An example is Article Five of the United States Constitution.
https://en.wikipedia.org/wiki/Self-reference
Spiegel im Spiegel (lit. 'mirror(s) in the mirror') is a composition by Arvo Pärt written in 1978, just before his departure from Estonia. The piece is in the tintinnabular style, wherein a melodic voice, operating over diatonic scales, and a tintinnabular voice, operating within a triad on the tonic, accompany each other. It is about ten minutes long. The piece was originally written for a single piano and violin, though the violin has often been replaced with either a cello or a viola. Versions also exist for saxophone, double bass, clarinet, horn, flugelhorn, flute, oboe, bassoon, trombone, harmonica, and percussion. The piece is an example of minimal music. The piece is in F major in 6/4 time, with the piano playing rising crotchet triads and the second instrument playing slow F major scales, alternately rising and falling, of increasing length, which all end on the note A (the mediant of F). The piano's left hand also plays notes, synchronised with the violin (or other instrument). "Spiegel im Spiegel" in German can literally mean both "mirror in the mirror" and "mirrors in the mirror", referring to an infinity mirror, which produces an infinity of images reflected by parallel plane mirrors: the tonic triads are endlessly repeated with small variations as if reflected back and forth.[1] The structure of the melody is made by a pair of phrases characterized by the alternation between ascending and descending movement with the fulcrum on the note A. This alternation, along with the overturning of the final intervals between adjacent phrases (for example, ascending sixth in the question, descending sixth in the answer), contributes to give the impression of a figure reflected in a mirror, walking back and toward it. In 2011, the piece was the focus of a half-hour BBC Radio 4 programme, Soul Music, which examined pieces of music "with a powerful emotional impact".
Violinist Tasmin Little discussed her relationship to the piece.[2][3] The piece has been used in television, film, and theatre. Spiegel im Spiegel was recorded by Gidon Kremer and Elena Kremer in December 1979 and featured on the 1980 album Konzert nach dem Konzert on the Eurodisc label. Spiegel im Spiegel is featured on the 1999 album Alina on the ECM New Series label. The album, which was recorded with the participation of Pärt, includes three versions of Spiegel im Spiegel, two for violin and piano and one for cello and piano, alternated with two variations of Pärt's piano piece Für Alina.[8] The tempo of the first version of Spiegel im Spiegel is 69 bpm (larghetto or adagio) and has a more somber feel. The tempo of the second version is faster at 85 bpm (andante) and gives the sense of pushing forward. The tempo of the third version is faster than the first and slower than the second at 78 bpm (a slower andante). Spiegel im Spiegel is featured on the 2016 album Sacred by Australian violinist Niki Vasilakis and features Deanna Djuric on piano. Scottish violinist Nicola Benedetti has the track on her 2009 album Fantasie.
https://en.wikipedia.org/wiki/Spiegel_im_Spiegel
A strange loop is a cyclic structure that goes through several levels in a hierarchical system. It arises when, by moving only upwards or downwards through the system, one finds oneself back where one started. Strange loops may involve self-reference and paradox. The concept of a strange loop was proposed and extensively discussed by Douglas Hofstadter in Gödel, Escher, Bach, and is further elaborated in Hofstadter's book I Am a Strange Loop, published in 2007. A tangled hierarchy is a hierarchical consciousness system in which a strange loop appears. A strange loop is a hierarchy of levels, each of which is linked to at least one other by some type of relationship. A strange loop hierarchy is "tangled" (Hofstadter refers to this as a "heterarchy"), in that there is no well-defined highest or lowest level; moving through the levels, one eventually returns to the starting point, i.e., the original level. Examples of strange loops that Hofstadter offers include: many of the works of M. C. Escher, the Canon 5. a 2 from J. S. Bach's Musical Offering, the information flow network between DNA and enzymes through protein synthesis and DNA replication, and self-referential Gödelian statements in formal systems. In I Am a Strange Loop, Hofstadter defines strange loops as follows: And yet when I say "strange loop", I have something else in mind — a less concrete, more elusive notion. What I mean by "strange loop" is — here goes a first stab, anyway — not a physical circuit but an abstract loop in which, in the series of stages that constitute the cycling-around, there is a shift from one level of abstraction (or structure) to another, which feels like an upwards movement in an hierarchy, and yet somehow the successive "upward" shifts turn out to give rise to a closed cycle. That is, despite one's sense of departing ever further from one's origin, one winds up, to one's shock, exactly where one had started out. In short, a strange loop is a paradoxical level-crossing feedback loop. (pp. 101–102) According to Hofstadter, strange loops take form in human consciousness as the complexity of active symbols in the brain inevitably leads to the same kind of self-reference which Gödel proved was inherent in any sufficiently complex logical or arithmetical system (one that allows for arithmetic by means of the Peano axioms) in his incompleteness theorem.[1] Gödel showed that mathematics and logic contain strange loops: propositions that not only refer to mathematical and logical truths, but also to the symbol systems expressing those truths. This leads to the sort of paradoxes seen in statements such as "This statement is false," wherein the sentence's basis of truth is found in referring to itself and its assertion, causing a logical paradox.[2] Hofstadter argues that the psychological self arises out of a similar kind of paradox. The brain is not born with an "I"; the ego emerges only gradually as experience shapes the brain's dense web of active symbols into a tapestry rich and complex enough to begin twisting back upon itself. According to this view, the psychological "I" is a narrative fiction, something created only from intake of symbolic data and the brain's ability to create stories about itself from that data. The consequence is that a self-perspective is a culmination of a unique pattern of symbolic activity in the brain, which suggests that the pattern of symbolic activity that makes identity, that constitutes subjectivity, can be replicated within the brains of others, and likely even in artificial brains.[2] The "strangeness" of a strange loop comes from the brain's perception, because the brain categorizes its input in a small number of "symbols" (by which Hofstadter means groups of neurons standing for something in the outside world).
So the difference between the video-feedback loop and the brain's strange loops is that while the former converts light to the same pattern on a screen, the latter categorizes a pattern and outputs its "essence", so that as the brain gets closer and closer to its "essence", it goes further down its strange loop.[3] Hofstadter thinks that minds appear to determine the world by way of "downward causality", which refers to effects being viewed in terms of their underlying causes. Hofstadter says this happens in the proof of Gödel's incompleteness theorem: Merely from knowing the formula's meaning, one can infer its truth or falsity without any effort to derive it in the old-fashioned way, which requires one to trudge methodically "upwards" from the axioms. This is not just peculiar; it is astonishing. Normally, one cannot merely look at what a mathematical conjecture says and simply appeal to the content of that statement on its own to deduce whether the statement is true or false. (pp. 169–170) Hofstadter claims a similar "flipping around of causality" appears to happen in minds possessing self-consciousness; the mind perceives itself as the cause of certain feelings. The parallels between downward causality in formal systems and downward causality in brains are explored by Theodor Nenu in 2022,[4] together with other aspects of Hofstadter's metaphysics of mind. Nenu also questions the correctness of the above quote by focusing on the sentence which "says about itself" that it is provable (also known as a Henkin sentence, named after logician Leon Henkin). It turns out that under suitable meta-mathematical choices (where the Hilbert–Bernays provability conditions do not obtain), one can construct formally undecidable (or even formally refutable) Henkin sentences for the arithmetical system under investigation.
This system might very well be Hofstadter's Typographical Number Theory used in Gödel, Escher, Bach, or the more familiar Peano Arithmetic, or some other sufficiently rich formal arithmetic. Thus, there are examples of sentences "which say about themselves that they are provable", but they don't exhibit the sort of downward causal powers described in the displayed quote. Hofstadter points to Bach's Canon per Tonos, M. C. Escher's drawings Waterfall, Drawing Hands, Ascending and Descending, and the liar paradox as examples that illustrate the idea of strange loops, which is expressed fully in the proof of Gödel's incompleteness theorem. The "chicken or the egg" paradox is perhaps the best-known strange loop problem. The ouroboros, which depicts a dragon eating its own tail, is perhaps one of the most ancient and universal symbolic representations of the reflexive loop concept. A Shepard tone is another illustrative example of a strange loop. Named after Roger Shepard, it is a sound consisting of a superposition of tones separated by octaves. When played with the base pitch of the tone moving upwards or downwards, it is referred to as the Shepard scale. This creates the auditory illusion of a tone that continually ascends or descends in pitch, yet which ultimately seems to get no higher or lower. In a similar way a sound with seemingly ever-increasing tempo can be constructed, as was demonstrated by Jean-Claude Risset. Visual illusions depicting strange loops include the Penrose stairs and the barberpole illusion. A quine in software programming is a program that produces a new version of itself without any input from the outside. A similar concept is metamorphic code. Efron's dice are four dice that are intransitive under gambler's preference. I.e., the dice are ordered A > B > C > D > A, where x > y means "a gambler prefers x to y".
Individual preferences are always transitive, excluding preferences when given explicit rules such as in Efron's dice or rock-paper-scissors; however, aggregate preferences of a group may be intransitive. This can result in a Condorcet paradox, wherein following a path from one candidate across a series of majority preferences may return to the original candidate, leaving no clear preference by the group. In this case, some candidate beats an opponent, who in turn beats another opponent, and so forth, until a candidate is reached who beats the original candidate. The liar paradox and Russell's paradox also involve strange loops, as does René Magritte's painting The Treachery of Images. The mathematical phenomenon of polysemy has been observed to be a strange loop. At the denotational level, the term refers to situations where a single entity can be seen to mean more than one mathematical object. See Tanenbaum (1999). The Stonecutter is an old Japanese fairy tale with a story that explains social and natural hierarchies as a strange loop. A strange loop can be found by traversing the links in the "See also" sections of the respective English Wikipedia articles. For instance: this article → Mise en abyme → Recursion → this article.[5]
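The intransitive cycle of Efron's dice can be verified by exact enumeration. The face values below are the commonly quoted set (an assumption for illustration, since the text does not list them); each die beats the next in the cycle with probability exactly 2/3.

```python
# Exact win probabilities for one standard set of Efron's dice.
from fractions import Fraction
from itertools import product

DICE = {
    "A": [4, 4, 4, 4, 0, 0],
    "B": [3, 3, 3, 3, 3, 3],
    "C": [6, 6, 2, 2, 2, 2],
    "D": [5, 5, 5, 1, 1, 1],
}

def p_beats(x, y):
    """Probability that die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(DICE[x], DICE[y]) if a > b)
    return Fraction(wins, 36)   # 6 x 6 equally likely outcomes

# The loop A > B > C > D > A, each arrow holding with probability 2/3:
for x, y in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]:
    print(f"P({x} beats {y}) = {p_beats(x, y)}")
```

Because each pairwise comparison is over only 36 equally likely outcomes, exhaustive counting with exact fractions avoids any floating-point ambiguity.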
https://en.wikipedia.org/wiki/Strange_loop
In computer science, a tail call is a subroutine call performed as the final action of a procedure.[1] If the target of a tail call is the same subroutine, the subroutine is said to be tail recursive, which is a special case of direct recursion. Tail recursion (or tail-end recursion) is particularly useful, and is often easy to optimize in implementations. Tail calls can be implemented without adding a new stack frame to the call stack. Most of the frame of the current procedure is no longer needed, and can be replaced by the frame of the tail call, modified as appropriate (similar to overlay for processes, but for function calls). The program can then jump to the called subroutine. Producing such code instead of a standard call sequence is called tail-call elimination or tail-call optimization. Tail-call elimination allows procedure calls in tail position to be implemented as efficiently as goto statements, thus allowing efficient structured programming. In the words of Guy L. Steele, "in general, procedure calls may be usefully thought of as GOTO statements which also pass parameters, and can be uniformly coded as [machine code] JUMP instructions."[2] Not all programming languages require tail-call elimination. However, in functional programming languages, tail-call elimination is often guaranteed by the language standard, allowing tail recursion to use a similar amount of memory as an equivalent loop. The special case of tail-recursive calls, when a function calls itself, may be more amenable to call elimination than general tail calls. When the language semantics do not explicitly support general tail calls, a compiler can often still optimize sibling calls, or tail calls to functions which take and return the same types as the caller.[3] When a function is called, the computer must "remember" the place it was called from, the return address, so that it can return to that location with the result once the call is complete.
Typically, this information is saved on the call stack, a list of return locations in the order that the call locations were reached. For tail calls, there is no need to remember the caller; instead, tail-call elimination makes only the minimum necessary changes to the stack frame before passing it on,[4] and the tail-called function will return directly to the original caller. The tail call doesn't have to appear lexically after all other statements in the source code; it is only important that the calling function return immediately after the tail call, returning the tail call's result if any, since the calling function is bypassed when the optimization is performed. For non-recursive function calls, this is usually an optimization that saves only a little time and space, since there are not that many different functions available to call. When dealing with recursive or mutually recursive functions where recursion happens through tail calls, however, the stack space and the number of returns saved can grow to be very significant, since a function can call itself, directly or indirectly, creating a new call stack frame each time. Tail-call elimination often reduces asymptotic stack space requirements from linear, or O(n), to constant, or O(1). Tail-call elimination is thus required by the standard definitions of some programming languages, such as Scheme,[5][6] and languages in the ML family among others. The Scheme language definition formalizes the intuitive notion of tail position exactly, by specifying which syntactic forms allow having results in tail context.[7] Implementations allowing an unlimited number of tail calls to be active at the same moment, thanks to tail-call elimination, can also be called 'properly tail recursive'.[5] Besides space and execution efficiency, tail-call elimination is important in the functional programming idiom known as continuation-passing style (CPS), which would otherwise quickly run out of stack space.
A tail call can be located just before the syntactical end of a function: if a procedure calls a(data) and then returns b(data), both a(data) and b(data) are calls, but b is the last thing the procedure executes before returning and is thus in tail position. However, not all tail calls are necessarily located at the syntactical end of a subroutine: in a procedure bar whose if-branch returns b(data) and whose other branch returns c(data), both calls to b and c are in tail position, because each lies at the end of its respective branch, even though the first one is not syntactically at the end of bar's body. A call to a(data) is in tail position when its result is returned unchanged, but it is not in tail position when control must return to the caller to allow it to inspect or modify the return value before returning it. The factorial function can be written in Scheme in a style that is not tail-recursive, because the multiplication function ("*"), rather than the recursive call, is in the tail position.[8] This can be compared to a version whose inner procedure fact-iter calls itself last in the control flow (assuming applicative-order evaluation). This allows an interpreter or compiler to reorganize the execution, which would ordinarily build up a growing chain of deferred multiplications,[8] into the more efficient variant, in terms of both space and time, in which the intermediate product is carried along. This reorganization saves space because no state except for the calling function's address needs to be saved, either on the stack or on the heap, and the call stack frame for fact-iter is reused for the intermediate results storage. This also means that the programmer need not worry about running out of stack or heap space for extremely deep recursions. In typical implementations, the tail-recursive variant will be substantially faster than the other variant, but only by a constant factor. Some programmers working in functional languages will rewrite recursive code to be tail recursive so they can take advantage of this feature. This often requires the addition of an "accumulator" argument (an argument carrying the running product, in the factorial case) to the function.
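The tail-position distinctions described above can be sketched in Python (the function names follow the discussion; the bodies of a, b, and c are placeholder assumptions, since only the call structure matters):

```python
def a(data): return data + 1   # placeholder bodies; only the call
def b(data): return data * 2   # structure matters for tail position
def c(data): return data - 3

def foo(data):
    a(data)                    # not a tail call: its result is discarded
    return b(data)             # tail call: last action before returning

def bar(data):
    if data > 0:
        return b(data)         # tail call: end of the if-branch
    return c(data)             # tail call: end of the other branch

def foo1(data):
    return a(data) + 1         # NOT a tail call: the caller still adds 1

def foo2(data):
    ret = a(data)              # tail call in effect: the value is
    return ret                 # returned unchanged

def foo3(data):
    ret = a(data)              # NOT a tail call: the caller inspects
    return 1 if ret == 0 else ret  # the value before returning it
```

CPython itself performs no tail-call elimination, so this sketch only illustrates where a tail call sits syntactically, not how an optimizing implementation would compile it.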
Tail recursion modulo cons is a generalization of tail-recursion optimization introduced by David H. D. Warren[9] in the context of compilation of Prolog, seen as an explicitly set-once language. It was described (though not named) by Daniel P. Friedman and David S. Wise in 1974[10] as a LISP compilation technique. As the name suggests, it applies when the only operation left to perform after a recursive call is to prepend a known value in front of the list returned from it (or to perform a constant number of simple data-constructing operations, in general). This call would thus be a tail call save for ("modulo") the said cons operation. But prefixing a value at the start of a list on exit from a recursive call is the same as appending this value at the end of the growing list on entry into the recursive call, thus building the list as a side effect, as if in an implicit accumulator parameter. In the tail-recursive translation, such a call is transformed into first creating a new list node and setting its first field, and then making the tail call with the pointer to the node's rest field as argument, to be filled recursively. The same effect is achieved when the recursion is guarded under a lazily evaluated data constructor, which is automatically achieved in lazy programming languages like Haskell. Consider a recursive function in C that duplicates a linked list: in its straightforward form the function is not tail recursive, because control returns to the caller after the recursive call duplicates the rest of the input list. Even if it were to allocate the head node before duplicating the rest, it would still need to plug the result of the recursive call into the next field after the call.[a] So the function is almost tail recursive.
Warren's method pushes the responsibility of filling the next field into the recursive call itself, which thus becomes a tail call.[b] Using a sentinel head node to simplify the code, the callee now appends to the end of the growing list, rather than having the caller prepend to the beginning of the returned list. The work is now done on the way forward from the list's start, before the recursive call which then proceeds further, instead of backward from the list's end, after the recursive call has returned its result. It is thus similar to the accumulating parameter technique, turning a recursive computation into an iterative one. Characteristically for this technique, a parent frame is created on the execution call stack, which the tail-recursive callee can reuse as its own call frame if the tail-call optimization is present. The tail-recursive implementation can now be converted into an explicitly iterative implementation, as an accumulating loop. In a paper delivered to the ACM conference in Seattle in 1977, Guy L. Steele summarized the debate over the GOTO and structured programming, and observed that procedure calls in the tail position of a procedure can be best treated as a direct transfer of control to the called procedure, typically eliminating unnecessary stack manipulation operations.[2] Since such "tail calls" are very common in Lisp, a language where procedure calls are ubiquitous, this form of optimization considerably reduces the cost of a procedure call compared to other implementations. Steele argued that poorly implemented procedure calls had led to an artificial perception that the GOTO was cheap compared to the procedure call.
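The list-duplication example can be sketched in Python rather than C (the Node class and function names here are illustrative, not the article's code): the plain recursive version builds each node only after its recursive call returns, while the transformed version allocates each node first and fills its rest field going forward, which makes the accumulating-loop form immediate.

```python
class Node:
    def __init__(self, value, rest=None):
        self.value = value
        self.rest = rest

def copy_list(cell):
    # Not tail recursive: the new Node is constructed only *after*
    # the recursive call has duplicated the rest of the list.
    if cell is None:
        return None
    return Node(cell.value, copy_list(cell.rest))

def copy_list_loop(cell):
    # The tail-recursion-modulo-cons version written directly as the
    # accumulating loop: a sentinel head node is allocated up front,
    # and each step fills in the rest field before moving forward.
    head = Node(None)          # sentinel head node
    tail = head
    while cell is not None:
        tail.rest = Node(cell.value)
        tail = tail.rest
        cell = cell.rest
    return head.rest

def from_list(values):
    out = None
    for v in reversed(values):
        out = Node(v, out)
    return out

def to_list(cell):
    out = []
    while cell is not None:
        out.append(cell.value)
        cell = cell.rest
    return out
```

Both versions produce a fresh list with the same values; only the second does its node-construction work on the way forward, as Warren's transformation requires.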
Steele further argued that "in general procedure calls may be usefully thought of as GOTO statements which also pass parameters, and can be uniformly coded as [machine code] JUMP instructions", with the machine code stack manipulation instructions "considered an optimization (rather than vice versa!)".[2] Steele cited evidence that well-optimized numerical algorithms in Lisp could execute faster than code produced by then-available commercial Fortran compilers because the cost of a procedure call in Lisp was much lower. In Scheme, a Lisp dialect developed by Steele with Gerald Jay Sussman, tail-call elimination is guaranteed to be implemented in any interpreter.[11] Tail recursion is important to some high-level languages, especially functional and logic languages and members of the Lisp family. In these languages, tail recursion is the most commonly used way (and sometimes the only way available) of implementing iteration. The language specification of Scheme requires that tail calls are to be optimized so as not to grow the stack. Tail calls can be made explicitly in Perl, with a variant of the "goto" statement that takes a function name: goto &NAME;[12] However, for language implementations which store function arguments and local variables on a call stack (which is the default implementation for many languages, at least on systems with a hardware stack, such as the x86), implementing generalized tail-call optimization (including mutual tail recursion) presents an issue: if the size of the callee's activation record is different from that of the caller, then additional cleanup or resizing of the stack frame may be required. For these cases, optimizing tail recursion remains trivial, but general tail-call optimization may be harder to implement efficiently.
For example, in the Java virtual machine (JVM), tail-recursive calls can be eliminated (as this reuses the existing call stack), but general tail calls cannot be (as this changes the call stack).[13][14] As a result, functional languages such as Scala that target the JVM can efficiently implement direct tail recursion, but not mutual tail recursion. The GCC, LLVM/Clang, and Intel compiler suites perform tail-call optimization for C and other languages at higher optimization levels or when the -foptimize-sibling-calls option is passed.[15][16][17] Though the given language syntax may not explicitly support it, the compiler can make this optimization whenever it can determine that the return types for the caller and callee are equivalent, and that the argument types passed to both functions are either the same, or require the same amount of total storage space on the call stack.[18] Various implementation methods are available. Tail calls are often optimized by interpreters and compilers of functional programming and logic programming languages to more efficient forms of iteration. For example, Scheme programmers commonly express while loops as calls to procedures in tail position and rely on the Scheme compiler or interpreter to substitute the tail calls with more efficient jump instructions.[19] For compilers generating assembly directly, tail-call elimination is easy: it suffices to replace a call opcode with a jump one, after fixing parameters on the stack. From a compiler's perspective, the first example above is initially translated into pseudo-assembly language (in fact, valid x86 assembly) in which the final call to subroutine A is followed by a ret; tail-call elimination replaces these last two instructions with a single jump instruction. After subroutine A completes, it will then return directly to the return address of foo, omitting the unnecessary ret statement. Typically, the subroutines being called need to be supplied with parameters.
The generated code thus needs to make sure that the call frame for A is properly set up before jumping to the tail-called subroutine. For instance, on platforms where the call stack does not just contain the return address, but also the parameters for the subroutine, the compiler may need to emit instructions to adjust the call stack. On such a platform, for a call whose parameters are data1 and data2, a compiler translates the call into pushes of those parameters followed by a call instruction;[c] a tail-call optimizer can then change the code so that the parameters are adjusted in place on the stack and the call is replaced by a jump. This code is more efficient both in terms of execution speed and use of stack space. Since many Scheme compilers use C as an intermediate target code, the tail recursion must be encoded in C without growing the stack, even if the C compiler does not optimize tail calls. Many implementations achieve this by using a device known as a trampoline, a piece of code that repeatedly calls functions. All functions are entered via the trampoline. When a function has to tail-call another, instead of calling it directly and then returning the result, it returns the address of the function to be called and the call parameters back to the trampoline (from which it was called itself), and the trampoline takes care of calling this function next with the specified parameters. This ensures that the C stack does not grow and iteration can continue indefinitely. It is possible to implement trampolines using higher-order functions in languages that support them, such as Groovy, Visual Basic .NET and C#.[20] Using a trampoline for all function calls is rather more expensive than the normal C function call, so at least one Scheme compiler, Chicken, uses a technique first described by Henry Baker from an unpublished suggestion by Andrew Appel,[21] in which normal C calls are used but the stack size is checked before every call. When the stack reaches its maximum permitted size, objects on the stack are garbage-collected using the Cheney algorithm by moving all live data into a separate heap.
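The trampoline idea can be sketched in Python (an illustrative stand-in for the Scheme-to-C setting described above): instead of making a tail call directly, a function returns a zero-argument thunk, and a driver loop keeps invoking thunks until a plain value appears, so the host-language stack never grows.

```python
def trampoline(result):
    # Repeatedly invoke thunks until a non-callable value is produced.
    # (Sketch caveat: this assumes ordinary return values are never
    # callable themselves.)
    while callable(result):
        result = result()
    return result

# Mutually recursive even/odd written in "return a thunk" style:
def is_even(n):
    return True if n == 0 else (lambda: is_odd(n - 1))

def is_odd(n):
    return False if n == 0 else (lambda: is_even(n - 1))

# Without the trampoline, a direct mutual recursion of depth 100000
# would overflow CPython's recursion limit; with it, the driver loop
# runs in constant stack space.
print(trampoline(is_even(100000)))  # prints True
```

This mirrors the C technique: each "tail call" is reified as data handed back to the driver, which performs the next call from a fixed stack depth.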
Following this, the stack is unwound ("popped") and the program resumes from the state saved just before the garbage collection. Baker says "Appel's method avoids making a large number of small trampoline bounces by occasionally jumping off the Empire State Building."[21] The garbage collection ensures that mutual tail recursion can continue indefinitely. However, this approach requires that no C function call ever returns, since there is no guarantee that its caller's stack frame still exists; therefore, it involves a much more dramatic internal rewriting of the program code: continuation-passing style. Tail recursion can be related to the while statement, an explicit iteration, for instance by transforming into where x may be a tuple involving more than one variable: if so, care must be taken in implementing the assignment statement x ← baz(x) so that dependencies are respected. One may need to introduce auxiliary variables or use a swap construct. More generally, can be transformed into For instance, this Julia program gives a non-tail recursive definition fact of the factorial: Indeed, n * factorial(n - 1) wraps the call to factorial. But it can be transformed into a tail-recursive definition by adding an argument a called an accumulator.[8] This Julia program gives a tail-recursive definition fact_iter of the factorial: This Julia program gives an iterative definition fact_iter of the factorial:
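The Julia listings referred to above did not survive extraction. As a sketch of the same three definitions, here they are in Python (names chosen to echo the text; the accumulator argument plays the role of a):

```python
def fact(n):
    # Non-tail-recursive: the multiplication wraps the recursive call.
    return 1 if n == 0 else n * fact(n - 1)

def fact_iter_rec(n, a=1):
    # Tail-recursive: the accumulator a carries the partial product,
    # so nothing remains to do after the recursive call returns.
    return a if n == 0 else fact_iter_rec(n - 1, n * a)

def fact_iter(n):
    # The equivalent explicit while loop, as in the transformation above.
    a = 1
    while n > 0:
        n, a = n - 1, n * a
    return a
```

Note that CPython does not perform tail-call elimination, so `fact_iter_rec` still consumes stack; the point is the shape of the transformation, which a Scheme or Julia-style optimizer could turn into a jump.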
https://en.wikipedia.org/wiki/Tail_recursion
Tupper's self-referential formula is a formula that visually represents itself when graphed at a specific location in the (x, y) plane. The formula was defined by Jeff Tupper and appears as an example in his 2001 SIGGRAPH paper on reliable two-dimensional computer graphing algorithms.[1] This paper discusses methods related to the GrafEq formula-graphing program developed by Tupper.[2] The formula is an inequality defined as:

{\displaystyle {\frac {1}{2}}<\left\lfloor \mathrm {mod} \left(\left\lfloor {\frac {y}{17}}\right\rfloor 2^{-17\lfloor x\rfloor -\mathrm {mod} \left(\lfloor y\rfloor ,17\right)},2\right)\right\rfloor }

where ⌊…⌋ denotes the floor function, and mod is the modulo operation. Let k equal the following 543-digit integer: Graphing the set of points (x, y) in 0 ≤ x < 106 and k ≤ y < k + 17 which satisfy the formula results in the following plot:[note 1] The formula is a general-purpose method of decoding a bitmap stored in the constant k, and it could be used to draw any other image. When applied to the unbounded positive range 0 ≤ y, the formula tiles a vertical swath of the plane with a pattern that contains all possible 17-pixel-tall bitmaps. One horizontal slice of that infinite bitmap depicts the drawing formula, since other slices depict all other possible formulae that might fit in a 17-pixel-tall bitmap. Tupper has created extended versions of his original formula that rule out all but one slice.[3] The constant k is a simple monochrome bitmap image of the formula treated as a binary number and multiplied by 17.
If k is divided by 17, the least significant bit encodes the upper-right corner (k, 0); the 17 least significant bits encode the rightmost column of pixels; the next 17 least significant bits encode the 2nd-rightmost column, and so on. It fundamentally describes a way to plot points on a two-dimensional surface. The value of k is the number whose binary digits form the plot. The following plot demonstrates the addition of different k values. In the fourth subplot, the k-value of "AFGP" and "Aesthetic Function Graph" is added to get the resultant graph, where both texts can be seen with some distortion due to the effects of binary addition. The information regarding the shape of the plot is stored within k.[4]
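The decoding rule can be checked mechanically. Since k is a multiple of 17 and k ≤ y < k + 17, one has ⌊y/17⌋ = k/17, and the inequality reduces to testing bit (17⌊x⌋ + (y mod 17)) of k/17. The following Python sketch (helper names and the bit/row orientation here are simplifications, not from the paper) round-trips a tiny made-up bitmap rather than Tupper's 543-digit constant, which is omitted above:

```python
def tupper_pixel(k, x, r):
    """Pixel at column x, row r (= y - k, with 0 <= r < 17) of the bitmap in k.

    Equivalent to 1/2 < floor(mod(floor(y/17) * 2^(-17*floor(x) - mod(floor(y),17)), 2))
    when k is a multiple of 17 and k <= y < k + 17.
    """
    return ((k // 17) >> (17 * x + r)) & 1

def encode(columns):
    """Inverse operation: pack columns of 17 bits into a constant k."""
    n = 0
    for x, col in enumerate(columns):
        for r, bit in enumerate(col):
            n |= bit << (17 * x + r)
    return 17 * n  # the final multiplication by 17, as in the article

# Round trip on a small, made-up two-column bitmap.
cols = [[1] + [0] * 16, [0, 1] + [0] * 15]
k = encode(cols)
assert all(tupper_pixel(k, x, r) == cols[x][r]
           for x in range(2) for r in range(17))
```

This illustrates the sense in which the formula is "a general-purpose method of decoding a bitmap": any image yields its own k.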
https://en.wikipedia.org/wiki/Tupper%27s_self-referential_formula
"Turtles all the way down" is an expression of the problem ofinfinite regress. The saying alludes to the mythological idea of aWorld Turtlethat supports aflat Earthon its back. It suggests that this turtle rests on the back of an even larger turtle, which itself is part of a column of increasingly larger turtles that continues indefinitely. The exact origin of the phrase is uncertain. In the form "rocks all the way down", the saying appears as early as 1838.[1]References to the saying's mythological antecedents, the World Turtle and its counterpart the World Elephant, were made by a number of authors in the 17th and 18th centuries.[2][3] The expression has been used to illustrate problems such as theregress argumentinepistemology. Early variants of the saying do not always have explicit references to infinite regression (i.e., the phrase "all the way down"). They often reference stories featuring aWorld Elephant,World Turtle, or other similar creatures that are claimed to come fromHindu mythology. The first known reference to a Hindu source is found in a letter byJesuitEmanuel da Veiga (1549–1605), written at Chandagiri on 18 September 1599, in which the relevant passage reads: Alii dicebant terram novem constare angulis, quibus cœlo innititur. Alius ab his dissentiens volebat terram septem elephantis fulciri, elephantes uero ne subsiderent, super testudine pedes fixos habere. Quærenti quis testudinis corpus firmaret, ne dilaberetur, respondere nesciuit. Others hold that the earth has nine corners by which the heavens are supported. Another disagreeing from these would have the earth supported by seven elephants, and the elephants do not sink down because their feet are fixed on a tortoise. 
When asked who would fix the body of the tortoise, so that it would not collapse, he said that he did not know.[4] Veiga's account seems to have been received by Samuel Purchas, who has a close paraphrase in his Purchas His Pilgrims (1613/1626), "that the Earth had nine corners, whereby it was borne up by the Heaven. Others dissented, and said, that the Earth was borne up by seven Elephants; the Elephants' feet stood on Tortoises, and they were borne by they know not what."[5] Purchas' account is again reflected by John Locke in his 1689 tract An Essay Concerning Human Understanding, where Locke introduces the story as a trope referring to the problem of induction in philosophical debate. Locke compares one who would say that properties inhere in "Substance" to the Indian who said the world was on an elephant which was on a tortoise, "But being again pressed to know what gave support to the broad-back'd Tortoise, replied, something, he knew not what".[2] The story is also referenced by Henry David Thoreau, who writes in his journal entry of 4 May 1852: "Men are making speeches ... all over the country, but each expresses only the thought, or the want of thought, of the multitude. No man stands on truth. They are merely banded together as usual, one leaning on another and all together on nothing; as the Hindoos made the world rest on an elephant, and the elephant on a tortoise, and had nothing to put under the tortoise."[6] In the form of "rocks all the way down", the saying dates to at least 1838, when it was printed in an unsigned anecdote in the New-York Mirror about a schoolboy and an old woman living in the woods: "The world, marm," said I, anxious to display my acquired knowledge, "is not exactly round, but resembles in shape a flattened orange; and it turns on its axis once in twenty-four hours."
"Well, I don't know anything about itsaxes," replied she, "but I know it don't turn round, for if it did we'd be all tumbled off; and as to its being round, any one can see it's a square piece of ground, standing on a rock!" "Standing on a rock! but upon what does that stand?" "Why, on another, to be sure!" "But what supports the last?" "Lud! child, how stupid you are! There's rocks all the way down!"[1] Another version of the saying appeared in an 1854 transcript of remarks by preacher Joseph Frederick Berg addressed toJoseph Barker: My opponent's reasoning reminds me of the heathen, who, being asked on what the world stood, replied, "On a tortoise." But on what does the tortoise stand? "On another tortoise." With Mr. Barker, too, there are tortoises all the way down. (Vehement and vociferous applause.) Many 20th-century attributions claim that philosopher and psychologistWilliam Jamesis the source of the phrase.[8]James referred to the fable of the elephant and tortoise several times, but told the infinite regress story with "rocks all the way down" in his 1882 essay, "Rationality, Activity and Faith": Like the old woman in the story who described the world as resting on a rock, and then explained that rock to be supported by another rock, and finally when pushed with questions said it was "rocks all the way down," he who believes this to be a radically moral universe must hold the moral order to rest either on an absolute and ultimateshouldor on a series ofshoulds"all the way down."[9] The linguistJohn R. Rossalso associates James with the phrase: The following anecdote is told of William James. [...] After a lecture on cosmology and the structure of the solar system, James was accosted by a little old lady. "Your theory that the sun is the centre of the solar system, and the earth is a ball which rotates around it has a very convincing ring to it, Mr. James, but it's wrong. I've got a better theory," said the little old lady. "And what is that, madam?" 
inquired James politely. "That we live on a crust of earth which is on the back of a giant turtle." Not wishing to demolish this absurd little theory by bringing to bear the masses of scientific evidence he had at his command, James decided to gently dissuade his opponent by making her see some of the inadequacies of her position. "If your theory is correct, madam," he asked, "what does this turtle stand on?" "You're a very clever man, Mr. James, and that's a very good question," replied the little old lady, "but I have an answer to it. And it's this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him." "But what does this second turtle stand on?" persisted James patiently. To this, the little old lady crowed triumphantly, "It's no use, Mr. James—it's turtles all the way down." The mythological idea of a turtle world is often used as an illustration of infinite regresses. An infinite regress is an infinite series of entities governed by a recursive principle that determines how each entity in the series depends on or is produced by its predecessor.[11] The main interest in infinite regresses is due to their role in infinite regress arguments. An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress.[11][12] For such an argument to be successful, it has to demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious.[11][13] There are different ways in which a regress can be vicious.[13][14] The idea of a turtle world exemplifies viciousness due to explanatory failure: it does not solve the problem it was formulated to solve.
Instead, it assumes already in disguised form what it was supposed to explain.[13][14] This is akin to the informal fallacy of begging the question.[15] In one interpretation, the goal of positing the existence of a world turtle is to explain why the earth seems to be at rest instead of falling down: because it rests on the back of a giant turtle. In order to explain why the turtle itself is not in free fall, another, even bigger turtle is posited, and so on, resulting in a world that is turtles all the way down.[13][11] Despite its shortcomings in clashing with modern physics, and despite its ontological extravagance, this theory seems to be metaphysically possible, assuming that space is infinite, thereby avoiding an outright contradiction. But it fails because it has to assume rather than explain at each step that there is another thing that is not falling. It does not explain why nothing at all is falling.[11][13] The metaphor is used as an example of the problem of infinite regress in epistemology to show that there is a necessary foundation to knowledge, as written by Johann Gottlieb Fichte in 1794:[16][page needed] If there is not to be any (system of human knowledge dependent upon an absolute first principle) two cases are only possible. Either there is no immediate certainty at all, and then our knowledge forms many series or one infinite series, wherein each theorem is derived from a higher one, and this again from a higher one, etc., etc. We build our houses on the earth, the earth rests on an elephant, the elephant on a tortoise, the tortoise again—who knows on what?—and so on ad infinitum. True, if our knowledge is thus constituted, we can not alter it; but neither have we, then, any firm knowledge. We may have gone back to a certain link of our series, and have found every thing firm up to this link; but who can guarantee us that, if we go further back, we may not find it ungrounded, and shall thus have to abandon it?
Our certainty is only assumed, and we can never be sure of it for a single following day. David Hume references the story in his 1779 work Dialogues Concerning Natural Religion when arguing against God as an unmoved mover:[3] How, therefore, shall we satisfy ourselves concerning the cause of that Being whom you suppose the Author of Nature, or, according to your system of Anthropomorphism, the ideal world, into which you trace the material? Have we not the same reason to trace that ideal world into another ideal world, or new intelligent principle? But if we stop, and go no further; why go so far? why not stop at the material world? How can we satisfy ourselves without going on in infinitum? And, after all, what satisfaction is there in that infinite progression? Let us remember the story of the Indian philosopher and his elephant. It was never more applicable than to the present subject. If the material world rests upon a similar ideal world, this ideal world must rest upon some other; and so on, without end. It were better, therefore, never to look beyond the present material world. By supposing it to contain the principle of its order within itself, we really assert it to be God; and the sooner we arrive at that Divine Being, so much the better. When you go one step beyond the mundane system, you only excite an inquisitive humour which it is impossible ever to satisfy. Bertrand Russell also mentions the story in his 1927 lecture Why I Am Not a Christian while discounting the First Cause argument intended to be a proof of God's existence: If everything must have a cause, then God must have a cause. If there can be anything without a cause, it may just as well be the world as God, so that there cannot be any validity in that argument. It is exactly of the same nature as the Hindu's view, that the world rested upon an elephant and the elephant rested upon a tortoise; and when they said, 'How about the tortoise?' the Indian said, 'Suppose we change the subject.'
References to "turtles all the way down" have been made in a variety of modern contexts. For example, American hardcore band Every Time I Die titled a song "Turtles All the Way Down" on their 2009 album New Junk Aesthetic. The lyrics mention the turtle world theory. "Turtles All the Way Down" is the name of a song by country artist Sturgill Simpson that appears on his 2014 album Metamodern Sounds in Country Music.[17] "Gamma Goblins ('Its Turtles All The Way Down' Mix)" is a remix by Ott for the 2002 Hallucinogen album In Dub.[18] Turtles All the Way Down is also the title of a 2017 novel by John Green about a teenage girl with obsessive–compulsive disorder.[19] Musician Captain Beefheart used the phrase in 1975 to describe playing with Frank Zappa and The Mothers of Invention (captured on the album Bongo Fury) when he told Steve Weitzman of Rolling Stone that he "had an extreme amount of fun on this tour. They move awfully fast. I've never travelled this fast with the Magic Band—turtles all the way down."[20] Stephen Hawking incorporates the saying into the beginning of his 1988 book A Brief History of Time:[21] A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the centre of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: "What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise." The scientist gave a superior smile before replying, "What is the tortoise standing on?" "You're very clever, young man, very clever," said the old lady. "But it's turtles all the way down!" Former U.S. Supreme Court Justice Antonin Scalia discussed his "favored version" of the saying in a footnote to his 2006 plurality opinion in Rapanos v.
United States:[22] In our favored version, an Eastern guru affirms that the earth is supported on the back of a tiger. When asked what supports the tiger, he says it stands upon an elephant; and when asked what supports the elephant he says it is a giant turtle. When asked, finally, what supports the giant turtle, he is briefly taken aback, but quickly replies "Ah, after that it is turtles all the way down." Microsoft Visual Studio had a gamification plug-in that awarded badges for certain programming behaviors and patterns. One of the badges was "Turtles All the Way Down", which was awarded for writing a class with 10 or more levels of inheritance.[23] In a TED-Ed video discussing Gödel's incompleteness theorems, the phrase "Gödels all the way down" is used to describe the way in which one can never get rid of unprovable true statements in an axiomatic system.[24]
https://en.wikipedia.org/wiki/Turtles_all_the_way_down
In database management, an aggregate function or aggregation function is a function where multiple values are processed together to form a single summary statistic. Common aggregate functions include AVERAGE, COUNT, MAX, MIN, and SUM; others include MEDIAN, MODE, and RANGE. Formally, an aggregate function takes as input a set, a multiset (bag), or a list from some input domain I and outputs an element of an output domain O.[1] The input and output domains may be the same, such as for SUM, or may be different, such as for COUNT. Aggregate functions occur commonly in numerous programming languages, in spreadsheets, and in relational algebra. The listagg function, as defined in the SQL:2016 standard,[2] aggregates data from multiple rows into a single concatenated string. In the entity relationship diagram, aggregation is represented as seen in Figure 1 with a rectangle around the relationship and its entities to indicate that it is being treated as an aggregate entity.[3] Aggregate functions present a bottleneck, because they potentially require having all input values at once. In distributed computing, it is desirable to divide such computations into smaller pieces, and distribute the work, usually computing in parallel, via a divide and conquer algorithm. Some aggregate functions can be computed by computing the aggregate for subsets, and then aggregating these aggregates; examples include COUNT, MAX, MIN, and SUM. In other cases the aggregate can be computed by computing auxiliary numbers for subsets, aggregating these auxiliary numbers, and finally computing the overall number at the end; examples include AVERAGE (tracking sum and count, dividing at the end) and RANGE (tracking max and min, subtracting at the end). In other cases the aggregate cannot be computed without analyzing the entire set at once, though in some cases approximations can be distributed; examples include DISTINCT COUNT (count-distinct problem), MEDIAN, and MODE. Such functions are called decomposable aggregation functions[4] or decomposable aggregate functions.
The simplest may be referred to as self-decomposable aggregation functions, which are defined as those functions f such that there is a merge operator ⋄ such that f(X ⊎ Y) = f(X) ⋄ f(Y), where ⊎ is the union of multisets (see monoid homomorphism). For example: SUM(X ⊎ Y) = SUM(X) + SUM(Y); COUNT(X ⊎ Y) = COUNT(X) + COUNT(Y); MAX(X ⊎ Y) = max(MAX(X), MAX(Y)); MIN(X ⊎ Y) = min(MIN(X), MIN(Y)). Note that self-decomposable aggregation functions can be combined (formally, taking the product) by applying them separately, so for instance one can compute both the SUM and COUNT at the same time, by tracking two numbers. More generally, one can define a decomposable aggregation function f as one that can be expressed as the composition of a final function g and a self-decomposable aggregation function h: f = g ∘ h, that is, f(X) = g(h(X)). For example, AVERAGE = SUM / COUNT and RANGE = MAX − MIN. In the MapReduce framework, these steps are known as InitialReduce (value on individual record/singleton set), Combine (binary merge on two aggregations), and FinalReduce (final function on auxiliary values),[5] and moving decomposable aggregation before the Shuffle phase is known as an InitialReduce step.[6] Decomposable aggregation functions are important in online analytical processing (OLAP), as they allow aggregation queries to be computed on the pre-computed results in the OLAP cube, rather than on the base data.[7] For example, it is easy to support COUNT, MAX, MIN, and SUM in OLAP, since these can be computed for each cell of the OLAP cube and then summarized ("rolled up"), but it is difficult to support MEDIAN, as that must be computed for every view separately.
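The divide-and-merge pattern can be sketched in Python (the chunking and helper names are illustrative, standing in for what a distributed system would do):

```python
# Self-decomposable aggregates: f(X ⊎ Y) = f(X) ⋄ f(Y).
merge = {
    "SUM":   lambda a, b: a + b,
    "COUNT": lambda a, b: a + b,
    "MAX":   max,
    "MIN":   min,
}

def aggregate(name, f, chunks):
    """Aggregate each chunk separately, then fold the partials with ⋄."""
    partials = [f(c) for c in chunks]
    out = partials[0]
    for p in partials[1:]:
        out = merge[name](out, p)
    return out

chunks = [[3, 1, 4], [1, 5], [9, 2, 6]]
flat = [v for c in chunks for v in c]
assert aggregate("SUM", sum, chunks) == sum(flat)
assert aggregate("COUNT", len, chunks) == len(flat)
assert aggregate("MAX", max, chunks) == max(flat)

# AVERAGE is decomposable but not self-decomposable: track the product
# (sum, count) as the auxiliary h, and divide only in the final function g.
s = aggregate("SUM", sum, chunks)
n = aggregate("COUNT", len, chunks)
assert s / n == sum(flat) / len(flat)
```

Averaging per-chunk averages directly would be wrong for unequal chunk sizes, which is exactly why the auxiliary (sum, count) pair is carried instead.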
In order to calculate the average and standard deviation from aggregate data, it is necessary to have available for each group the total of values (Σxᵢ = SUM(x)), the number of values (N = COUNT(x)), and the total of squares of the values (Σxᵢ² = SUM(x²)).[8]

AVG:
{\displaystyle \operatorname {AVG} (X\uplus Y)={\bigl (}\operatorname {AVG} (X)\cdot \operatorname {COUNT} (X)+\operatorname {AVG} (Y)\cdot \operatorname {COUNT} (Y){\bigr )}/{\bigl (}\operatorname {COUNT} (X)+\operatorname {COUNT} (Y){\bigr )}}
or
{\displaystyle \operatorname {AVG} (X\uplus Y)={\bigl (}\operatorname {SUM} (X)+\operatorname {SUM} (Y){\bigr )}/{\bigl (}\operatorname {COUNT} (X)+\operatorname {COUNT} (Y){\bigr )}}
or, only if COUNT(X) = COUNT(Y),
{\displaystyle \operatorname {AVG} (X\uplus Y)={\bigl (}\operatorname {AVG} (X)+\operatorname {AVG} (Y){\bigr )}/2}

SUM(x²): the sum of squares of the values is needed in order to calculate the standard deviation of merged groups:
{\displaystyle \operatorname {SUM} (X^{2}\uplus Y^{2})=\operatorname {SUM} (X^{2})+\operatorname {SUM} (Y^{2})}

STDDEV: for a finite population with equal probabilities at all points, we have[9]
{\displaystyle \operatorname {STDDEV} (X)=s(x)={\sqrt {{\frac {1}{N}}\sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}}}={\sqrt {{\frac {1}{N}}\left(\sum _{i=1}^{N}x_{i}^{2}\right)-({\overline {x}})^{2}}}={\sqrt {\operatorname {SUM} (x^{2})/\operatorname {COUNT} (x)-\operatorname {AVG} (x)^{2}}}}
This means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value. For merged groups:
{\displaystyle \operatorname {STDDEV} (X\uplus Y)={\sqrt {\operatorname {SUM} (X^{2}\uplus Y^{2})/\operatorname {COUNT} (X\uplus Y)-\operatorname {AVG} (X\uplus Y)^{2}}}}
{\displaystyle \operatorname {STDDEV} (X\uplus Y)={\sqrt {{\bigl (}\operatorname {SUM} (X^{2})+\operatorname {SUM} (Y^{2}){\bigr )}/{\bigl (}\operatorname {COUNT} (X)+\operatorname {COUNT} (Y){\bigr )}-{\bigl (}(\operatorname {SUM} (X)+\operatorname {SUM} (Y))/(\operatorname {COUNT} (X)+\operatorname {COUNT} (Y)){\bigr )}^{2}}}}
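The merge formulas above can be checked numerically. This Python sketch (helper names are illustrative) carries the three auxiliaries per group and compares the merged result against the standard library's population standard deviation:

```python
import statistics

def group_stats(xs):
    # Per-group auxiliaries: SUM(x), COUNT(x), SUM(x^2).
    return (sum(xs), len(xs), sum(v * v for v in xs))

def merged_stddev(*groups):
    s = sum(g[0] for g in groups)
    n = sum(g[1] for g in groups)
    sq = sum(g[2] for g in groups)
    # Population formula: sqrt(SUM(x^2)/COUNT(x) - AVG(x)^2).
    return (sq / n - (s / n) ** 2) ** 0.5

X, Y = [1.0, 2.0, 3.0], [4.0, 5.0]
pooled = X + Y
assert abs(merged_stddev(group_stats(X), group_stats(Y))
           - statistics.pstdev(pooled)) < 1e-12
```

Note this is the population standard deviation (divide by N); the sample version (divide by N − 1) needs the same three auxiliaries with a different final function.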
https://en.wikipedia.org/wiki/Aggregate_function
In mathematics, an iterated binary operation is an extension of a binary operation on a set S to a function on finite sequences of elements of S through repeated application.[1] Common examples include the extension of the addition operation to the summation operation, and the extension of the multiplication operation to the product operation. Other operations, e.g., the set-theoretic operations union and intersection, are also often iterated, but the iterations are not given separate names. In print, summation and product are represented by special symbols; but other iterated operators often are denoted by larger variants of the symbol for the ordinary binary operator. Thus, the iterations of the four operations mentioned above are denoted ∑, ∏, ⋃, and ⋂, respectively. More generally, iteration of a binary function is generally denoted by a slash: iteration of f over the sequence (a_1, a_2, …, a_n) is denoted by f / (a_1, a_2, …, a_n), following the notation for reduce in Bird–Meertens formalism. In general, there is more than one way to extend a binary operation to operate on finite sequences, depending on whether the operator is associative, and whether the operator has identity elements. Denote by a_{j,k}, with j ≥ 0 and k ≥ j, the finite sequence of length k − j of elements of S, with members (a_i), for j ≤ i < k. Note that if k = j, the sequence is empty.
For f : S × S → S, define a new function F_l on finite nonempty sequences of elements of S, where
{\displaystyle F_{l}(\mathbf {a} _{0,k})={\begin{cases}a_{0},&k=1\\f(F_{l}(\mathbf {a} _{0,k-1}),a_{k-1}),&k>1.\end{cases}}}
Similarly, define
{\displaystyle F_{r}(\mathbf {a} _{0,k})={\begin{cases}a_{0},&k=1\\f(a_{0},F_{r}(\mathbf {a} _{1,k})),&k>1.\end{cases}}}
If f has a unique left identity e, the definition of F_l can be modified to operate on empty sequences by defining the value of F_l on an empty sequence to be e (the previous base case on sequences of length 1 becomes redundant). Similarly, F_r can be modified to operate on empty sequences if f has a unique right identity. If f is associative, then F_l equals F_r, and we can simply write F. Moreover, if an identity element e exists, then it is unique (see Monoid). If f is commutative and associative, then F can operate on any non-empty finite multiset by applying it to an arbitrary enumeration of the multiset. If f moreover has an identity element e, then this is defined to be the value of F on an empty multiset. If f is idempotent, then the above definitions can be extended to finite sets. If S also is equipped with a metric or more generally with a topology that is Hausdorff, so that the concept of a limit of a sequence is defined in S, then an infinite iteration on a countable sequence in S is defined exactly when the corresponding sequence of finite iterations converges. Thus, e.g., if a_0, a_1, a_2, a_3, … is an infinite sequence of real numbers, then the infinite product {\textstyle \prod _{i=0}^{\infty }a_{i}} is defined, and equal to {\textstyle \lim \limits _{n\to \infty }\prod _{i=0}^{n}a_{i},} if and only if that limit exists. The general, non-associative binary operation is given by a magma. The act of iterating on a non-associative binary operation may be represented as a binary tree.
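The definitions of F_l and F_r translate directly into code. A Python sketch (function names are my own) showing that they agree for associative f and differ otherwise:

```python
def iterate_left(f, seq, identity=None):
    """F_l: left iteration, F_l(a_0..a_{k-1}) = f(F_l(a_0..a_{k-2}), a_{k-1})."""
    if not seq:
        if identity is None:
            raise ValueError("empty sequence requires a left identity")
        return identity
    acc = seq[0]
    for a in seq[1:]:
        acc = f(acc, a)
    return acc

def iterate_right(f, seq, identity=None):
    """F_r: right iteration, F_r(a_0..a_{k-1}) = f(a_0, F_r(a_1..a_{k-1}))."""
    if not seq:
        if identity is None:
            raise ValueError("empty sequence requires a right identity")
        return identity
    if len(seq) == 1:
        return seq[0]
    return f(seq[0], iterate_right(f, seq[1:], identity))

xs = [1, 2, 3, 4]
add = lambda a, b: a + b
sub = lambda a, b: a - b

# Associative f: F_l = F_r, so we may simply write F.
assert iterate_left(add, xs) == iterate_right(add, xs) == 10
# Non-associative f: the two extensions genuinely differ.
assert iterate_left(sub, xs) == -8   # ((1-2)-3)-4
assert iterate_right(sub, xs) == -2  # 1-(2-(3-4))
# With an identity, the empty sequence is handled too.
assert iterate_left(add, [], identity=0) == 0
```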
Iterated binary operations are used to represent an operation that will be repeated over a set subject to some constraints. Typically the lower bound of a restriction is written under the symbol, and the upper bound over the symbol, though they may also be written as superscripts and subscripts in compact notation. Interpolation is performed over positive integers from the lower to upper bound, to produce the set which will be substituted into the index (below denoted as i) for the repeated operations. Common notations include the big Sigma (repeated sum) and big Pi (repeated product) notations.
{\displaystyle \sum _{i=0}^{n-1}i=0+1+2+\dots +(n-1)}
{\displaystyle \prod _{i=0}^{n-1}i=0\times 1\times 2\times \dots \times (n-1)}
It is possible to specify set membership or other logical constraints in place of explicit indices, in order to implicitly specify which elements of a set shall be used:
{\displaystyle \sum _{x\in S}x=x_{1}+x_{2}+x_{3}+\dots +x_{n}}
Multiple conditions may be written either joined with a logical and or separately:
{\displaystyle \sum _{(i\in 2\mathbb {N} )\wedge (i\leq n)}i=\sum _{\stackrel {i\in 2\mathbb {N} }{i\leq n}}i=0+2+4+\dots +n}
Less commonly, any binary operator such as exclusive or (⊕) or set union (∪) may also be used.[2] For example, if S is a set of logical propositions:
{\displaystyle \bigwedge _{p\in S}p=p_{1}\wedge p_{2}\wedge \dots \wedge p_{N}}
which is true iff all of the elements of S are true.
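The big-operator notations above correspond directly to folding an operator over a constrained index set; a Python sketch (the concrete sets and values are made up for illustration):

```python
from functools import reduce
import operator

n = 10
# Σ over {i ∈ 2N ∧ i ≤ n}: the membership constraint selects the index set.
assert sum(i for i in range(n + 1) if i % 2 == 0) == 0 + 2 + 4 + 6 + 8 + 10

# Any binary operator can be iterated the same way, e.g. XOR and set union:
assert reduce(operator.xor, [0b101, 0b011, 0b110]) == 0b000
S = [{1, 2}, {2, 3}, {4}]
assert reduce(set.union, S) == {1, 2, 3, 4}

# The big wedge over a set of propositions is true iff all of them are true.
props = [True, True, False]
assert reduce(operator.and_, props) == all(props)
```

`sum` and `reduce` here play the roles of the specialized ∑ symbol and the generic slash notation, respectively.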
https://en.wikipedia.org/wiki/Iterated_binary_operation
In functional programming, the concept of catamorphism (from the Ancient Greek κατά "downwards" and μορφή "form, shape") denotes the unique homomorphism from an initial algebra into some other algebra. Catamorphisms provide generalizations of folds of lists to arbitrary algebraic data types, which can be described as initial algebras. The dual concept is that of anamorphism, which generalizes unfolds. A hylomorphism is the composition of an anamorphism followed by a catamorphism. Consider an initial F-algebra (A, in) for some endofunctor F of some category into itself. Here in is a morphism from FA to A. Since it is initial, we know that whenever (X, f) is another F-algebra, i.e. a morphism f from FX to X, there is a unique homomorphism h from (A, in) to (X, f). By the definition of the category of F-algebras, this h corresponds to a morphism from A to X, conventionally also denoted h, such that h ∘ in = f ∘ Fh. In the context of F-algebras, the uniquely specified morphism from the initial object is denoted by cata f and hence characterized by the following relationship: Another notation found in the literature is (|f|). The open brackets used are known as banana brackets, after which catamorphisms are sometimes referred to as bananas, as mentioned in Erik Meijer et al.[1] One of the first publications to introduce the notion of a catamorphism in the context of programming was the paper "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire", by Erik Meijer et al.,[1] which was in the context of the Squiggol formalism.
The general categorical definition was given by Grant Malcolm.[2][3] We give a series of examples, and then a more global approach to catamorphisms, in the Haskell programming language. Consider the functor Maybe defined in the below Haskell code: The initial object of the Maybe-algebra is the set of all objects of natural number type Nat together with the morphism ini defined below:[4][5] The cata map can be defined as follows:[5] As an example consider the following morphism: Then cata g ((Succ . Succ . Succ) Zero) will evaluate to "wait... wait... wait... go!". For a fixed type a consider the functor MaybeProd a defined by the following: The initial algebra of MaybeProd a is given by the lists of elements with type a together with the morphism ini defined below:[6] The cata map can be defined by: Notice also that cata g (Cons s l) = g (Just (s, cata g l)). As an example consider the following morphism: cata g (Cons 10 EmptyList) evaluates to 30. This can be seen by expanding cata g (Cons 10 EmptyList) = g (Just (10, cata g EmptyList)) = 10*(cata g EmptyList) = 10*(g Nothing) = 10*3. In the same way it can be shown that cata g (Cons 10 (Cons 100 (Cons 1000 EmptyList))) will evaluate to 10*(100*(1000*3)) = 3,000,000. The cata map is closely related to the right fold (see Fold (higher-order function)) of lists, foldrList. The morphism lift defined by relates cata to the right fold foldrList of lists via: The definition of cata implies that foldrList is the right fold and not the left fold. As an example: foldrList (+) 1 (Cons 10 (Cons 100 (Cons 1000 EmptyList))) will evaluate to 1111, and foldrList (*) 3 (Cons 10 (Cons 100 (Cons 1000 EmptyList))) to 3,000,000.
Deeper category-theoretical studies of initial algebras reveal that the F-algebra obtained from applying the functor to its own initial algebra is isomorphic to it. Strong type systems enable us to abstractly specify the initial algebra of a functor f as its fixed point a = f a. The recursively defined catamorphisms can now be coded in a single line, where the case analysis (as in the different examples above) is encapsulated by the fmap. Since the domain of the latter consists of objects in the image of f, the evaluation of the catamorphisms jumps back and forth between a and f a. Now again the first example, but this time by passing the Maybe functor to Fix. Repeated application of the Maybe functor generates a chain of types which, however, can be united by the isomorphism from the fixed-point theorem. We introduce the term zero, which arises from Maybe's Nothing, and identify a successor function with repeated application of Just. This way the natural numbers arise. Again, the following will evaluate to "wait.. wait.. wait.. wait.. go!": cata pleaseWait (successor.successor.successor.successor $ zero) And now again the tree example. For this we must provide the tree container data type so that we can set up the fmap (we didn't have to do it for the Maybe functor, as it's part of the standard prelude). The following will evaluate to 4: cata treeDepth $ meet (end "X") (meet (meet (end "YXX") (end "YXY")) (end "YY"))
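The generic pattern described here, recursion written once with the per-functor case analysis delegated to an fmap, can be sketched in Python as well. The names `cata`, `fmap_tree`, and `tree_depth` are illustrative stand-ins for the Haskell definitions the article alludes to:

```python
def cata(fmap, alg, t):
    """Generic catamorphism: fmap pushes the fold into the children,
    then the algebra alg collapses one layer of structure."""
    return alg(fmap(lambda child: cata(fmap, alg, child), t))

# Tree functor from the article's tree example: a leaf ("end") holds a
# string, an inner node ("meet") holds two subtrees.
def end(s):
    return ("end", s)

def meet(left, right):
    return ("meet", left, right)

def fmap_tree(f, t):
    # apply f to the subtrees only; leaves carry no children
    if t[0] == "end":
        return t
    return ("meet", f(t[1]), f(t[2]))

def tree_depth(t):
    # the algebra: a leaf has depth 1, an inner node 1 + max of its children
    if t[0] == "end":
        return 1
    return 1 + max(t[1], t[2])
```

Mirroring the Haskell evaluation above, `cata(fmap_tree, tree_depth, meet(end("X"), meet(meet(end("YXX"), end("YXY")), end("YY"))))` evaluates to 4.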
https://en.wikipedia.org/wiki/Catamorphism
In computer science, the prefix sum, cumulative sum, inclusive scan, or simply scan of a sequence of numbers x0, x1, x2, ... is a second sequence of numbers y0, y1, y2, ..., the sums of prefixes (running totals) of the input sequence: y0 = x0, y1 = x0 + x1, y2 = x0 + x1 + x2, and so on. For instance, the prefix sums of the natural numbers are the triangular numbers: 1, 3, 6, 10, 15, 21, ... Prefix sums are trivial to compute in sequential models of computation, by using the formula yi = yi−1 + xi to compute each output value in sequence order. However, despite their ease of computation, prefix sums are a useful primitive in certain algorithms such as counting sort,[1][2] and they form the basis of the scan higher-order function in functional programming languages. Prefix sums have also been much studied in parallel algorithms, both as a test problem to be solved and as a useful primitive to be used as a subroutine in other parallel algorithms.[3][4][5] Abstractly, a prefix sum requires only a binary associative operator ⊕, making it useful for many applications from calculating well-separated pair decompositions of points to string processing.[6][7] Mathematically, the operation of taking prefix sums can be generalized from finite to infinite sequences; in that context, a prefix sum is known as a partial sum of a series. Prefix summation and partial summation form linear operators on the vector spaces of finite or infinite sequences; their inverses are finite difference operators. In functional programming terms, the prefix sum may be generalized to any binary operation (not just the addition operation); the higher-order function resulting from this generalization is called a scan, and it is closely related to the fold operation. Both the scan and the fold operations apply the given binary operation to the same sequence of values, but differ in that the scan returns the whole sequence of results from the binary operation, whereas the fold returns only the final result. 
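The sequential computation and its generalization to an arbitrary binary operation can be sketched in a few lines of Python (the names `prefix_sums` and `scan` are illustrative):

```python
def prefix_sums(xs):
    """Sequential prefix sum using y[i] = y[i-1] + x[i]."""
    ys, total = [], 0
    for x in xs:
        total += x
        ys.append(total)
    return ys

def scan(op, xs):
    """Inclusive scan: like prefix_sums, but with an arbitrary binary op."""
    ys = []
    acc = None
    for x in xs:
        acc = x if not ys else op(acc, x)
        ys.append(acc)
    return ys
```

For example, `prefix_sums([1, 2, 3, 4, 5, 6])` yields the triangular numbers `[1, 3, 6, 10, 15, 21]`, while a fold with the same operator would return only the final value 21.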
For instance, the sequence of factorial numbers may be generated by a scan of the natural numbers using multiplication instead of addition: 1, 2, 6, 24, 120, ... Programming language and library implementations of scan may be either inclusive or exclusive. An inclusive scan includes input xi when computing output yi (i.e., yi = ⊕_{j=0}^{i} xj) while an exclusive scan does not (i.e., yi = ⊕_{j=0}^{i−1} xj). In the latter case, implementations either leave y0 undefined or accept a separate "x−1" value with which to seed the scan. Either type of scan can be transformed into the other: an inclusive scan can be transformed into an exclusive scan by shifting the array produced by the scan right by one element and inserting the identity value at the left of the array. Conversely, an exclusive scan can be transformed into an inclusive scan by shifting the array produced by the scan left and inserting the sum of the last element of the scan and the last element of the input array at the right of the array.[8] The following table lists examples of the inclusive and exclusive scan functions provided by a few programming languages and libraries: The directive-based OpenMP parallel programming model supports both inclusive and exclusive scan beginning with Version 5.0. There are two key algorithms for computing a prefix sum in parallel. The first offers a shorter span and more parallelism but is not work-efficient. The second is work-efficient but requires double the span and offers less parallelism. These are presented in turn below. Hillis and Steele present the following parallel prefix sum algorithm:[9] In the above, the notation x_j^i means the value of the jth element of array x in timestep i. 
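The two conversions between inclusive and exclusive scans described above can be sketched directly (helper names are illustrative):

```python
from operator import add

def inclusive_scan(op, xs):
    out, acc = [], None
    for i, x in enumerate(xs):
        acc = x if i == 0 else op(acc, x)
        out.append(acc)
    return out

def exclusive_from_inclusive(inc, identity):
    # shift the scanned array right by one element,
    # insert the identity value on the left
    return [identity] + inc[:-1]

def inclusive_from_exclusive(exc, xs, op):
    # shift left, append op(last scan value, last input) on the right
    return exc[1:] + [op(exc[-1], xs[-1])]
```

For addition the identity value is 0, so the inclusive scan `[1, 3, 6, 10]` of `[1, 2, 3, 4]` corresponds to the exclusive scan `[0, 1, 3, 6]`, and the two conversions are inverses of each other.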
With a single processor this algorithm would run in O(n log n) time.[10] However, if the machine has at least n processors to perform the inner loop in parallel, the algorithm as a whole runs in O(log n) time, the number of iterations of the outer loop. A work-efficient parallel prefix sum can be computed by the following steps.[3][11][12] If the input sequence has n elements, then the recursion continues to a depth of O(log n), which is also the bound on the parallel running time of this algorithm. The number of steps of the algorithm is O(n), and it can be implemented on a parallel random access machine with O(n/log n) processors without any asymptotic slowdown by assigning multiple indices to each processor in rounds of the algorithm for which there are more elements than processors.[3] Each of the preceding algorithms runs in O(log n) time. However, the former takes exactly log2 n steps, while the latter requires 2 log2 n − 2 steps. For the 16-input examples illustrated, Algorithm 1 is 12-way parallel (49 units of work divided by a span of 4) while Algorithm 2 is only 4-way parallel (26 units of work divided by a span of 6). However, Algorithm 2 is work-efficient: it performs only a constant factor (2) of the amount of work required by the sequential algorithm. Algorithm 1 is work-inefficient: it performs asymptotically more work (a logarithmic factor) than is required sequentially. Consequently, Algorithm 1 is likely to perform better when abundant parallelism is available, but Algorithm 2 is likely to perform better when parallelism is more limited. 
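Both algorithms can be simulated sequentially in Python. The first function mimics the Hillis and Steele scan (in round k, every element with index at least 2^k combines with the value 2^k positions to its left); the second is a sketch of the work-efficient two-phase scheme in its common up-sweep/down-sweep form, which produces an exclusive scan. Function names are illustrative, and the round structure here is a serial simulation of what the parallel processors would do simultaneously.

```python
def hillis_steele_scan(xs):
    """Inclusive scan; each while-iteration is one parallel round."""
    x = list(xs)
    n = len(x)
    k = 1
    while k < n:
        prev = list(x)              # reads in a round see the old values
        for j in range(k, n):
            x[j] = prev[j - k] + prev[j]
        k *= 2
    return x

def work_efficient_exclusive_scan(xs):
    """Exclusive scan via up-sweep / down-sweep; len(xs) a power of two."""
    x = list(xs)
    n = len(x)
    d = 1
    while d < n:                    # up-sweep: build a tree of partial sums
        for i in range(2 * d - 1, n, 2 * d):
            x[i] += x[i - d]
        d *= 2
    x[n - 1] = 0                    # clear the root
    d = n // 2
    while d >= 1:                   # down-sweep: push prefixes back down
        for i in range(2 * d - 1, n, 2 * d):
            x[i - d], x[i] = x[i], x[i] + x[i - d]
        d //= 2
    return x
```

On the input `[2, 3, 5, 1, 7, 6, 8, 4]`, the first returns the inclusive scan `[2, 5, 10, 11, 18, 24, 32, 36]` and the second the corresponding exclusive scan `[0, 2, 5, 10, 11, 18, 24, 32]`.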
Parallel algorithms for prefix sums can often be generalized to other scan operations on associative binary operations,[3][4] and they can also be computed efficiently on modern parallel hardware such as a GPU.[13] The idea of building in hardware a functional unit dedicated to computing multi-parameter prefix-sums was patented by Uzi Vishkin.[14] Many parallel implementations follow a two-pass procedure where partial prefix sums are calculated in the first pass on each processing unit; the prefix sum of these partial sums is then calculated and broadcast back to the processing units for a second pass which uses the now known prefix as the initial value. Asymptotically this method takes approximately two read operations and one write operation per item. An implementation of a parallel prefix sum algorithm, like other parallel algorithms, has to take the parallelization architecture of the platform into account. More specifically, multiple algorithms exist which are adapted for platforms working on shared memory as well as algorithms which are well suited for platforms using distributed memory, relying on message passing as the only form of interprocess communication. The following algorithm assumes a shared memory machine model; all processing elements (PEs) have access to the same memory. A version of this algorithm is implemented in the Multi-Core Standard Template Library (MCSTL),[15][16] a parallel implementation of the C++ standard template library which provides adapted versions of various algorithms for parallel computing. In order to concurrently calculate the prefix sum over n data elements with p processing elements, the data is divided into p + 1 blocks, each containing n/(p + 1) elements (for simplicity we assume that p + 1 divides n). Note that although the algorithm divides the data into p + 1 blocks, only p processing elements run in parallel at a time. 
In the first sweep, each PE calculates a local prefix sum for its block. The last block does not need to be calculated, since these prefix sums are only used as offsets to the prefix sums of the succeeding blocks, and the last block is by definition not succeeded. The p offsets which are stored in the last position of each block are accumulated in a prefix sum of their own and stored in their succeeding positions. For p being a small number, it is faster to do this sequentially; for a large p, this step could be done in parallel as well. A second sweep is performed. This time the first block does not have to be processed, since it does not need to account for the offset of a preceding block. However, in this sweep the last block is included instead, and the prefix sums for each block are calculated taking the prefix sum block offsets calculated in the previous sweep into account. Improvement: if the number of blocks is so large that the serial step becomes time-consuming on a single processor, the Hillis and Steele algorithm can be used to accelerate the second phase. The Hypercube Prefix Sum Algorithm[17] is well adapted for distributed memory platforms and works with the exchange of messages between the processing elements. It assumes that the number of processor elements (PEs) participating in the algorithm is p = 2^d, equal to the number of corners in a d-dimensional hypercube. Throughout the algorithm, each PE is seen as a corner in a hypothetical hypercube with knowledge of the total prefix sum σ as well as the prefix sum x of all elements up to itself (according to the ordered indices among the PEs), both within its own hypercube. In a d-dimensional hypercube with 2^d PEs at the corners, the algorithm has to be repeated d times to have the 2^d zero-dimensional hypercubes be unified into one d-dimensional hypercube. 
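The two sweeps of the shared-memory scheme can be simulated sequentially. This sketch (the name `block_prefix_sum` is illustrative) follows the description above: local scans on all blocks except the last, a sequential scan over the p block totals, then a second sweep that folds the offsets in and scans the last block for the first time.

```python
def block_prefix_sum(xs, p):
    """Two-sweep block prefix sum; assumes (p + 1) divides len(xs)."""
    n = len(xs)
    assert n % (p + 1) == 0, "assume p + 1 divides n"
    b = n // (p + 1)
    x = list(xs)
    # first sweep: local prefix sums in every block except the last
    for blk in range(p):
        for i in range(blk * b + 1, (blk + 1) * b):
            x[i] += x[i - 1]
    # prefix sum over the p block totals (done sequentially, as for small p)
    offsets, acc = [], 0
    for blk in range(p):
        acc += x[(blk + 1) * b - 1]
        offsets.append(acc)
    # second sweep: every block but the first adds its offset; the last
    # block is scanned here for the first time, seeded with its offset
    for blk in range(1, p + 1):
        run = offsets[blk - 1]
        for i in range(blk * b, (blk + 1) * b):
            if blk == p:
                run += x[i]
                x[i] = run
            else:
                x[i] += offsets[blk - 1]
    return x
```

With n = 12 and p = 3 the data falls into four blocks of three elements, and the result agrees with the ordinary sequential prefix sum.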
Assuming a duplex communication model where the σ of two adjacent PEs in different hypercubes can be exchanged in both directions in one communication step, this means d = log2 p communication startups. The Pipelined Binary Tree Algorithm[18] is another algorithm for distributed memory platforms which is specifically well suited for large message sizes. Like the hypercube algorithm, it assumes a special communication structure. The processing elements (PEs) are hypothetically arranged in a binary tree (e.g. a Fibonacci tree) with infix numeration according to their index within the PEs. Communication on such a tree always occurs between parent and child nodes. The infix numeration ensures that for any given PE_j, the indices of all nodes reachable by its left subtree [l…j−1] are less than j and the indices [j+1…r] of all nodes in the right subtree are greater than j. The parent's index is greater than any of the indices in PE_j's subtree if PE_j is a left child and smaller if PE_j is a right child. This allows for the following reasoning: Note the distinction between subtree-local and total prefix sums. Points two, three and four might suggest a circular dependency, but this is not the case. Lower-level PEs might require the total prefix sum of higher-level PEs to calculate their total prefix sum, but higher-level PEs only require subtree-local prefix sums to calculate their total prefix sum. The root node, as the highest-level node, only requires the local prefix sum of its left subtree to calculate its own prefix sum. Each PE on the path from PE0 to the root PE only requires the local prefix sum of its left subtree to calculate its own prefix sum, whereas every node on the path from PEp−1 (the last PE) to PEroot requires the total prefix sum of its parent to calculate its own total prefix sum. 
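The hypercube algorithm described above can be simulated sequentially: in round k, each PE exchanges its sub-hypercube total σ with the partner whose index differs in bit k, and the higher-indexed PE of each pair adds the lower sub-cube's total to its own prefix value x. The function name is illustrative, and each inner loop is a serial stand-in for one parallel communication round.

```python
def hypercube_prefix_sum(values):
    """Simulate the hypercube prefix sum for p = 2**d PEs, one value each."""
    p = len(values)
    d = p.bit_length() - 1
    assert p == 1 << d, "p must be a power of two"
    x = list(values)        # prefix sum up to and including this PE
    sigma = list(values)    # total of the PE's current sub-hypercube
    for k in range(d):      # d rounds unify the cubes dimension by dimension
        for i in range(p):
            partner = i ^ (1 << k)
            if partner > i:
                continue            # handle each disjoint pair once
            s = sigma[partner] + sigma[i]
            x[i] += sigma[partner]  # upper PE gains the lower cube's total
            sigma[partner] = sigma[i] = s
    return x
```

After d rounds every σ holds the grand total and every x holds that PE's inclusive prefix sum, using exactly d = log2 p communication rounds.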
This leads to a two-phase algorithm: Note that the algorithm is run in parallel at each PE, and the PEs will block upon receive until their children/parents provide them with packets. If the message m of length n can be divided into k packets and the operator ⊕ can be used on each of the corresponding message packets separately, pipelining is possible.[18] If the algorithm is used without pipelining, there are always only two levels (the sending PEs and the receiving PEs) of the binary tree at work while all other PEs are waiting. If there are p processing elements and a balanced binary tree is used, the tree has log2 p levels; the length of the path from PE0 to PEroot is therefore log2 p − 1, which represents the maximum number of non-parallel communication operations during the upward phase. Likewise, the communication on the downward path is also limited to log2 p − 1 startups. Assuming a communication startup time of Tstart and a bytewise transmission time of Tbyte, the upward and downward phases are limited to (2 log2 p − 2)(Tstart + n·Tbyte) in a non-pipelined scenario. Upon division into k packets, each of size n/k, and sending them separately, the first packet still needs (log2 p − 1)(Tstart + (n/k)·Tbyte) to be propagated to PEroot as part of a local prefix sum, and this will occur again for the last packet if k > log2 p. 
However, in between, all the PEs along the path can work in parallel, and each third communication operation (receive left, receive right, send to parent) sends a packet to the next level, so that one phase can be completed in 2 log2 p − 1 + 3(k − 1) communication operations and both phases together need (4 log2 p − 2 + 6(k − 1))(Tstart + (n/k)·Tbyte), which is favourable for large message sizes n. The algorithm can further be optimised by making use of full-duplex or telephone model communication and overlapping the upward and the downward phase.[18] When a data set may be updated dynamically, it may be stored in a Fenwick tree data structure. This structure allows both the lookup of any individual prefix sum value and the modification of any array value in logarithmic time per operation.[19] However, an earlier 1982 paper[20] presents a data structure called Partial Sums Tree (see Section 5.1) that appears to overlap with Fenwick trees; in 1982 the term prefix-sum was not yet as common as it is today. For higher-dimensional arrays, the summed area table provides a data structure based on prefix sums for computing sums of arbitrary rectangular subarrays. This can be a helpful primitive in image convolution operations.[21] Counting sort is an integer sorting algorithm that uses the prefix sum of a histogram of key frequencies to calculate the position of each key in the sorted output array. It runs in linear time for integer keys that are smaller than the number of items, and is frequently used as part of radix sort, a fast algorithm for sorting integers that are less restricted in magnitude.[1] List ranking, the problem of transforming a linked list into an array that represents the same sequence of items, can be viewed as computing a prefix sum on the sequence 1, 1, 1, ... 
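A minimal Fenwick tree illustrating the two logarithmic-time operations mentioned above, point update and prefix-sum query (the class and method names are illustrative):

```python
class FenwickTree:
    """Dynamic prefix sums: update and query both run in O(log n)."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)   # 1-based internal array

    def add(self, i, delta):
        """Add delta to element i (0-based)."""
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)           # jump to the next responsible node

    def prefix_sum(self, i):
        """Return the sum of elements 0..i inclusive (0-based)."""
        i += 1
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)           # strip the lowest set bit
        return s
```

Unlike a precomputed prefix-sum array, updating one element here costs O(log n) instead of O(n), at the price of O(log n) rather than O(1) queries.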
and then mapping each item to the array position given by its prefix sum value; by combining list ranking, prefix sums, and Euler tours, many important problems on trees may be solved by efficient parallel algorithms.[4] An early application of parallel prefix sum algorithms was in the design of binary adders, Boolean circuits that can add two n-bit binary numbers. In this application, the sequence of carry bits of the addition can be represented as a scan operation on the sequence of pairs of input bits, using the majority function to combine the previous carry with these two bits. Each bit of the output number can then be found as the exclusive or of two input bits with the corresponding carry bit. By using a circuit that performs the operations of the parallel prefix sum algorithm, it is possible to design an adder that uses O(n) logic gates and O(log n) time steps.[3][11][12] In the parallel random access machine model of computing, prefix sums can be used to simulate parallel algorithms that assume the ability for multiple processors to access the same memory cell at the same time, on parallel machines that forbid simultaneous access. By means of a sorting network, a set of parallel memory access requests can be ordered into a sequence such that accesses to the same cell are contiguous within the sequence; scan operations can then be used to determine which of the accesses succeed in writing to their requested cells, and to distribute the results of memory read operations to multiple processors that request the same result.[22] In Guy Blelloch's Ph.D. thesis,[23] parallel prefix operations form part of the formalization of the data parallelism model provided by machines such as the Connection Machine. 
The Connection Machine CM-1 and CM-2 provided a hypercubic network on which Algorithm 1 above could be implemented, whereas the CM-5 provided a dedicated network to implement Algorithm 2.[24] In the construction of Gray codes, sequences of binary values with the property that consecutive sequence values differ from each other in a single bit position, a number n can be converted into the Gray code value at position n of the sequence simply by taking the exclusive or of n and n/2 (the number formed by shifting n right by a single bit position). The reverse operation, decoding a Gray-coded value x into a binary number, is more complicated, but can be expressed as the prefix sum of the bits of x, where each summation operation within the prefix sum is performed modulo two. A prefix sum of this type may be performed efficiently using the bitwise Boolean operations available on modern computers, by computing the exclusive or of x with each of the numbers formed by shifting x by a number of bits that is a power of two.[25] Parallel prefix (using multiplication as the underlying associative operation) can also be used to build fast algorithms for parallel polynomial interpolation. In particular, it can be used to compute the divided difference coefficients of the Newton form of the interpolation polynomial.[26] This prefix-based approach can also be used to obtain the generalized divided differences for (confluent) Hermite interpolation as well as for parallel algorithms for Vandermonde systems.[27] Parallel prefix algorithms can also be used for temporal parallelization of Recursive Bayesian estimation methods, including Bayesian filters, Kalman filters, as well as the corresponding smoothers.[28] The core idea is that, for example, the solutions to the Bayesian/Kalman filtering problems can be written in terms of a suitably defined associative filtering operator such that the prefix "sums" of the filtering operator give the filtering solution. 
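The Gray code encoding and the prefix-XOR decoding described above can be written in a few lines. One common formulation of the decode loop XORs x with right-shifted copies of itself, so that each output bit ends up as the parity (the mod-2 prefix sum) of that bit and all higher bits of the input; the function names are illustrative.

```python
def gray_encode(n):
    """Binary to Gray code: XOR of n with n shifted right by one bit."""
    return n ^ (n >> 1)

def gray_decode(x, bits=32):
    """Gray code to binary via a bitwise prefix XOR with power-of-two shifts."""
    shift = 1
    while shift < bits:
        x ^= x >> shift
        shift *= 2
    return x
```

The decode loop performs only O(log bits) word-level operations, rather than one operation per bit.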
This allows parallel prefix algorithms to be applied to compute the filtering and smoothing solutions. A similar idea also works for the parallelization of a class of probabilistic differential equation solvers[29] in the context of probabilistic numerics. In the context of optimal control, parallel prefix algorithms can be used for parallelization of the Bellman equation and Hamilton–Jacobi–Bellman equations (HJB equations), including their linear–quadratic regulator special cases.[30][31] Here, the idea is that we can define an associative operator for a combination of conditional value functions (conditioned on the end-point), and the prefix sums of this operator give solutions to the Bellman equations or HJB equations. Prefix sums are used for load balancing as a low-cost algorithm to distribute the work between multiple processors, where the overriding goal is achieving an equal amount of work on each processor. The algorithm uses an array of weights representing the amount of work required for each item. After the prefix sum is calculated, the work item i is sent for processing to the processor unit with the number ⌊prefixSumValue_i / (totalWork / numberOfProcessors)⌋.[32] Graphically this corresponds to an operation where the amount of work in each item is represented by the length of a linear segment; all segments are sequentially placed onto a line and the result is cut into a number of pieces corresponding to the number of the processors.[33]
https://en.wikipedia.org/wiki/Prefix_sum
In computer programming languages, a recursive data type (also known as a recursively defined, inductively defined or inductive data type) is a data type for values that may contain other values of the same type. Data of recursive types are usually viewed as directed graphs.[citation needed] An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to an arbitrarily large size in response to runtime requirements; in contrast, a static array's size requirements must be set at compile time. Sometimes the term "inductive data type" is used for algebraic data types which are not necessarily recursive. An example is the list type, in Haskell: This indicates that a list of a's is either an empty list or a cons cell containing an 'a' (the "head" of the list) and another list (the "tail"). Another example is a similar singly linked type in Java: This indicates that a non-empty list of type E contains a data member of type E, and a reference to another List object for the rest of the list (or a null reference to indicate that this is the end of the list). Data types can also be defined by mutual recursion. The most important basic example of this is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically: A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types. This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest: A tree t consists of a pair of a value v and a list of trees (its children). 
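The list type described above (whose Haskell and Java code blocks are not reproduced here) can be mirrored in a Python sketch, with `None` playing the role of Java's null reference and the class annotation referring to the class itself via a forward reference; the names `List` and `length` are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class List:
    head: int
    tail: "Optional[List]"   # the type mentions itself: a recursive type

def length(lst: "Optional[List]") -> int:
    # recursion over the data mirrors the recursion in the type definition
    return 0 if lst is None else 1 + length(lst.tail)
```

For example, `length(List(1, List(2, List(3, None))))` is 3, and `length(None)` (the empty list) is 0.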
This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which require disentangling to prove results about. In Standard ML, the tree and forest data types can be mutually recursively defined as follows, allowing empty trees:[1] In Haskell, the tree and forest data types can be defined similarly: In type theory, a recursive type has the general form μα.T where the type variable α may appear in the type T and stands for the entire type itself. For example, the natural numbers (see Peano arithmetic) may be defined by the Haskell datatype: In type theory, we would say: nat = μα.1 + α where the two arms of the sum type represent the Zero and Succ data constructors. Zero takes no arguments (thus represented by the unit type) and Succ takes another Nat (thus another element of μα.1 + α). There are two forms of recursive types: the so-called isorecursive types, and equirecursive types. The two forms differ in how terms of a recursive type are introduced and eliminated. With isorecursive types, the recursive type μα.T and its expansion (or unrolling) T[μα.T/α] (where the notation X[Y/Z] indicates that all instances of Z are replaced with Y in X) are distinct (and disjoint) types with special term constructs, usually called roll and unroll, that form an isomorphism between them. To be precise: roll : T[μα.T/α] → μα.T and unroll : μα.T → T[μα.T/α], and these two are inverse functions. Under equirecursive rules, a recursive type μα.T and its unrolling T[μα.T/α] are equal; that is, those two type expressions are understood to denote the same type. 
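The inlined (singly recursive) tree/forest definition can likewise be sketched in Python: a tree pairs a value with a list of trees, and a pair of mutually recursive functions mirrors the original mutually recursive definition. Names (`Tree`, `tree_size`, `forest_size`) are illustrative.

```python
from dataclasses import dataclass, field
from typing import List as PyList

@dataclass
class Tree:
    value: str
    children: "PyList[Tree]" = field(default_factory=list)  # the forest

def tree_size(t: Tree) -> int:
    # a tree is a value plus a forest of children
    return 1 + forest_size(t.children)

def forest_size(f: "PyList[Tree]") -> int:
    # a forest is a list of trees
    return sum(tree_size(t) for t in f)
```

For instance, `tree_size(Tree("root", [Tree("a"), Tree("b", [Tree("c")])]))` counts 4 nodes.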
In fact, most theories of equirecursive types go further and essentially specify that any two type expressions with the same "infinite expansion" are equivalent. As a result of these rules, equirecursive types contribute significantly more complexity to a type system than isorecursive types do. Algorithmic problems such as type checking and type inference are more difficult for equirecursive types as well. Since direct comparison does not make sense on an equirecursive type, they can be converted into a canonical form in O(n log n) time, which can easily be compared.[2] Isorecursive types capture the form of self-referential (or mutually referential) type definitions seen in nominal object-oriented programming languages, and also arise in type-theoretic semantics of objects and classes. In functional programming languages, isorecursive types (in the guise of datatypes) are common too.[3] In TypeScript, recursion is allowed in type aliases.[4] Thus, the following example is allowed. However, recursion is not allowed in type synonyms in Miranda, OCaml (unless the -rectypes flag is used or it is a record or variant), or Haskell; so, for example, the following Haskell types are illegal: Instead, they must be wrapped inside an algebraic data type (even if it has only one constructor): This is because type synonyms, like typedefs in C, are replaced with their definition at compile time. (Type synonyms are not "real" types; they are just "aliases" for convenience of the programmer.) But if this is attempted with a recursive type, it will loop infinitely, because no matter how many times the alias is substituted, it still refers to itself; e.g. "Bad" will grow indefinitely: Bad → (Int, Bad) → (Int, (Int, Bad)) → .... Another way to see it is that a level of indirection (the algebraic data type) is required to allow the isorecursive type system to figure out when to roll and unroll.
https://en.wikipedia.org/wiki/Recursive_data_type
In computer science, the reduction operator[1] is a type of operator that is commonly used in parallel programming to reduce the elements of an array into a single result. Reduction operators are associative and often (but not necessarily) commutative.[2][3][4] The reduction of sets of elements is an integral part of programming models such as MapReduce, where a reduction operator is applied (mapped) to all elements before they are reduced. Other parallel algorithms use reduction operators as primary operations to solve more complex problems. Many reduction operators can be used for broadcasting to distribute data to all processors. A reduction operator can help break down a task into various partial tasks by calculating partial results which can be used to obtain a final result. It allows certain serial operations to be performed in parallel and the number of steps required for those operations to be reduced. A reduction operator stores the result of the partial tasks into a private copy of the variable. These private copies are then merged into a shared copy at the end. An operator is a reduction operator if: These two requirements are satisfied for commutative and associative operators that are applied to all array elements. Some operators which satisfy these requirements are addition, multiplication, and some logical operators (and, or, etc.). A reduction operator ⊕ can be applied in constant time on an input set V = {v_0, v_1, …, v_{p−1}} of p vectors with m elements each, where v_i = (e_i^0, …, e_i^{m−1}). 
The result r of the operation is the combination of the elements r = (⊕_{i=0}^{p−1} e_i^0, …, ⊕_{i=0}^{p−1} e_i^{m−1}); that is, the jth component of r is e_0^j ⊕ e_1^j ⊕ ⋯ ⊕ e_{p−1}^j, and r has to be stored at a specified root processor at the end of the execution. If the result r has to be available at every processor after the computation has finished, the operation is often called Allreduce. An optimal sequential linear-time algorithm for reduction can apply the operator successively from front to back, always replacing two vectors with the result of the operation applied to all their elements, thus creating an instance that has one vector less. It needs (p − 1)·m steps until only r is left. Sequential algorithms cannot perform better than linear time, but parallel algorithms leave some room for optimization. Suppose we have an array [2, 3, 5, 1, 7, 6, 8, 4]. The sum of this array can be computed serially by sequentially reducing the array into a single sum using the '+' operator. Starting the summation from the beginning of the array yields: ((((((2 + 3) + 5) + 1) + 7) + 6) + 8) + 4 = 36. Since '+' is both commutative and associative, it is a reduction operator. Therefore this reduction can be performed in parallel using several cores, where each core computes the sum of a subset of the array, and the reduction operator merges the results. Using a binary tree reduction would allow 4 cores to compute (2 + 3), (5 + 1), (7 + 6), and (8 + 4). 
Then two cores can compute (5 + 6) and (13 + 12), and lastly a single core computes (11 + 25) = 36. So a total of 4 cores can be used to compute the sum in log2 8 = 3 steps instead of the 7 steps required for the serial version. This parallel binary tree technique computes ((2 + 3) + (5 + 1)) + ((7 + 6) + (8 + 4)). Of course the result is the same, but only because of the associativity of the reduction operator. The commutativity of the reduction operator would be important if there were a master core distributing work to several processors, since then the results could arrive back at the master processor in any order. The property of commutativity guarantees that the result will be the same. IEEE 754-2019 defines 4 kinds of sum reductions and 3 kinds of scaled-product reductions. Because the operations are reduction operators, the standard specifies that "implementations may associate in any order or evaluate in any wider format."[5] Matrix multiplication is not a reduction operator, since the operation is not commutative. If processes were allowed to return their matrix multiplication results in any order to the master process, the final result that the master computes will likely be incorrect if the results arrived out of order. However, note that matrix multiplication is associative, and therefore the result would be correct as long as the proper ordering were enforced, as in the binary tree reduction technique. Regarding parallel algorithms, there are two main models of parallel computation: the parallel random access machine (PRAM) as an extension of the RAM with shared memory between processing units, and the bulk synchronous parallel computer, which takes communication and synchronization into account. Both models have different implications for the time complexity, therefore two algorithms will be shown. 
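The binary tree schedule worked through above, pairwise combination at each level with each level's pairs available to run on separate cores, can be sketched sequentially (the name `tree_reduce` is illustrative):

```python
def tree_reduce(op, xs):
    """Reduce xs with an associative op using a binary tree schedule;
    each while-iteration corresponds to one parallel level."""
    level = list(xs)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(op(level[i], level[i + 1]))
        if len(level) % 2:          # odd element carried up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

Because only adjacent elements are ever combined, the left-to-right order is preserved, so associativity alone suffices: reducing the list `["a", "b", "c", "d"]` with string concatenation (associative but not commutative) still yields "abcd".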
This algorithm represents a widely spread method to handle inputs where p is a power of two. The reverse procedure is often used for broadcasting elements.[6][7][8] The binary operator for vectors is defined element-wise such that

\begin{pmatrix} e_i^0 \\ \vdots \\ e_i^{m-1} \end{pmatrix} \oplus^\star \begin{pmatrix} e_j^0 \\ \vdots \\ e_j^{m-1} \end{pmatrix} = \begin{pmatrix} e_i^0 \oplus e_j^0 \\ \vdots \\ e_i^{m-1} \oplus e_j^{m-1} \end{pmatrix}.

The algorithm further assumes that in the beginning x_i = v_i for all i, that p is a power of two, and that the processing units are p_0, p_1, \dots, p_{p-1}. In every iteration, half of the processing units become inactive and do not contribute to further computations. The figure shows a visualization of the algorithm using addition as the operator. Vertical lines represent the processing units where the computation of the elements on that line takes place. The eight input elements are located at the bottom, and every animation step corresponds to one parallel step in the execution of the algorithm. An active processor p_i evaluates the given operator on the element x_i it is currently holding and x_j, where j is the minimal index satisfying j > i, so that p_j becomes inactive in the current step. x_i and x_j are not necessarily elements of the input set X, as the fields are overwritten and reused for previously evaluated expressions. To coordinate the roles of the processing units in each step without causing additional communication between them, the algorithm exploits the fact that the processing units are indexed with numbers from 0 to p − 1.
In iteration k, each processor inspects its k-th least significant bit and decides whether to become inactive or to apply the operator to its own element and the element of the processor whose index differs only in that bit. The underlying communication pattern of the algorithm is a binomial tree, hence the name of the algorithm. Only p_0 holds the result in the end, so it is the root processor. For an Allreduce operation the result has to be distributed, which can be done by appending a broadcast from p_0. Furthermore, the number p of processors is restricted to be a power of two. This restriction can be lifted by padding the number of processors to the next power of two. There are also algorithms that are more tailored for this use case.[9] The main loop is executed ⌈log_2 p⌉ times; the time needed for the part done in parallel is in O(m), as a processing unit either combines two vectors or becomes inactive. Thus the parallel time for the PRAM is T(p, m) = O(log(p) · m). The strategy for handling read and write conflicts can be chosen as restrictively as exclusive read and exclusive write (EREW). The speedup of the algorithm is S(p, m) ∈ O(T_seq / T(p, m)) = O(p / log(p)), and therefore the efficiency is E(p, m) ∈ O(S(p, m) / p) = O(1 / log(p)). The efficiency suffers because half of the active processing units become inactive after each step, so p / 2^i units are active in step i.
In contrast to the PRAM algorithm, in the distributed memory model, memory is not shared between processing units, so data has to be exchanged explicitly between them, as can be seen in the following algorithm. The only difference between the distributed algorithm and the PRAM version is the inclusion of explicit communication primitives; the operating principle stays the same. The communication between units leads to some overhead. A simple analysis of the algorithm uses the BSP model and incorporates the time T_start needed to initiate communication and the time T_byte needed to send a byte. Then the resulting runtime is Θ((T_start + n · T_byte) · log(p)), as m elements of a vector are sent in each iteration and have size n in total. For distributed memory models, it can make sense to use pipelined communication. This is especially the case when T_start is small in comparison to T_byte. Usually, linear pipelines split data or a task into smaller pieces and process them in stages. In contrast to the binomial tree algorithms, the pipelined algorithm uses the fact that the vectors are not inseparable: the operator can be evaluated for single elements.[10] It is important to note that the send and receive operations have to be executed concurrently for the algorithm to work. The result vector is stored at p_{p-1} at the end. The associated animation shows an execution of the algorithm on vectors of size four with five processing units. Two steps of the animation visualize one parallel execution step.
The number of steps in the parallel execution is p + m − 2: it takes p − 1 steps until the last processing unit receives its first element and an additional m − 1 steps until all elements are received. Therefore, the runtime in the BSP model is T(n, p, m) = (T_start + (n/m) · T_byte)(p + m − 2), where n is the total byte-size of a vector. Although m has a fixed value, it is possible to logically group elements of a vector together and thereby reduce m. For example, a problem instance with vectors of size four can be handled by splitting the vectors into the first two and last two elements, which are always transmitted and computed together. In this case, double the volume is sent each step, but the number of steps is roughly halved: the parameter m is halved, while the total byte-size n stays the same. The runtime T(p) for this approach depends on the value of m, which can be optimized if T_start and T_byte are known. It is optimal for m = \sqrt{n · (p − 2) · T_byte / T_start}, assuming that this results in a smaller m that divides the original one. Reduction is one of the main collective operations implemented in the Message Passing Interface, where the performance of the algorithm used is important and is evaluated constantly for different use cases.[11] Operators can be used as parameters for MPI_Reduce and MPI_Allreduce, with the difference that the result is available at one (root) processing unit or at all of them.
OpenMPoffers areductionclause for describing how the results from parallel operations are collected together.[12] MapReducerelies heavily on efficient reduction algorithms to process big data sets, even on huge clusters.[13][14] Some parallelsortingalgorithms use reductions to be able to handle very big data sets.[15]
https://en.wikipedia.org/wiki/Reduction_operator
Incomputer science,recursionis a method of solving acomputational problemwhere the solution depends on solutions to smaller instances of the same problem.[1][2]Recursion solves suchrecursive problemsby usingfunctionsthat call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[3] The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions. Most computerprogramming languagessupport recursion by allowing a function to call itself from within its own code. Somefunctional programminglanguages (for instance,Clojure)[5]do not define any looping constructs but rely solely on recursion to repeatedly call code. It is proved incomputability theorythat these recursive-only languages areTuring complete; this means that they are as powerful (they can be used to solve the same problems) asimperative languagesbased on control structures such aswhileandfor. Repeatedly calling a function from within itself may cause thecall stackto have a size equal to the sum of the input sizes of all involved calls. It follows that, for problems that can be solved easily by iteration, recursion is generally lessefficient, and, for certain problems, algorithmic or compiler-optimization techniques such astail calloptimization may improve computational performance over a naive recursive implementation. A commonalgorithm designtactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. 
This is often referred to as thedivide-and-conquer method; when combined with alookup tablethat stores the results of previously solved sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to asdynamic programmingormemoization. A recursive function definition has one or morebase cases, meaning input(s) for which the function produces a resulttrivially(without recurring), and one or morerecursive cases, meaning input(s) for which the program recurs (calls itself). For example, thefactorialfunction can be defined recursively by the equations0! = 1and, for alln> 0,n! =n(n− 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case". The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances—for example, somesystem and server processes—are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause aninfinite loop. For some functions (such as one that computes theseriesfore= 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add aparameter(such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated bycorecursion,[how?]where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say "compute thenth term (nth partial sum)". 
Manycomputer programsmust process or generate an arbitrarily large quantity ofdata. Recursion is a technique for representing data whose exact size is unknown to theprogrammer: the programmer can specify this data with aself-referentialdefinition. There are two types of self-referential definitions: inductive andcoinductivedefinitions. An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example,linked listscan be defined inductively (here, usingHaskellsyntax): The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings. Another example of inductivedefinitionis thenatural numbers(or positiveintegers): Similarly recursivedefinitionsare often used to model the structure ofexpressionsandstatementsin programming languages. Language designers often express grammars in a syntax such asBackus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition: This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complicated arithmetic expressions such as(5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression. A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size. 
A coinductive definition of infinitestreamsof strings, given informally, might look like this: This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via theaccessorfunctionsheadandtail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from. Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context oflazyprogramming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first nprime numbersis one that can be solved with a corecursive program (e.g.here). Recursion that contains only a single self-reference is known assingle recursion, while recursion that contains multiple self-references is known asmultiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion includetree traversal, such as in a depth-first search. Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack. Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). 
For example, while computing the Fibonacci sequence naively entails multiple recursion, since each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values while tracking two successive values at each step – see corecursion: examples. A more sophisticated example involves using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion. Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again. Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, then from the point of view of f alone, f is indirectly recursing, from the point of view of g alone, g is indirectly recursing, and from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions. Recursion is usually done by explicitly calling a function by name. However, recursion can also be done via implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion. Some authors classify recursion as either "structural" or "generative".
The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data: [Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS. Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion. Generative recursionis the alternative: Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it.HtDP(How to Design Programs)refers to this kind as generative recursion. Examples of generative recursion include:gcd,quicksort,binary search,mergesort,Newton's method,fractals, andadaptive integration. This distinction is important inproving terminationof a function. In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include: On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm's-length recursion is a special case of this. 
Awrapper functionis a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion. Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as "level of recursion" or partial computations formemoization, and handle exceptions and errors. In languages that supportnested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by usingpass-by-reference. Short-circuiting the base case, also known asarm's-length recursion, consists of checking the base casebeforemaking a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short circuit, and may miss 0; this can be mitigated by a wrapper function. The box showsCcode to shortcut factorial cases 0 and 1. Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings forO(n)algorithms; this is illustrated below for a depth-first search. 
Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides onlyO(1)savings. Conceptually, short-circuiting can be considered to either have the same base case and recursive step, checking the base case only before the recursion, or it can be considered to have a different base case (one step removed from standard base case) and a more complex recursive step, namely "check valid then recurse", as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.[8] A basic example of short-circuiting is given indepth-first search(DFS) of a binary tree; seebinary treessection for standard recursive discussion. The standard recursive algorithm for a DFS is: In short-circuiting, this is instead: In terms of the standard steps, this moves the base case checkbeforethe recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null). In the case of aperfect binary treeof heighth,there are 2h+1−1 nodes and 2h+1Null pointers as children (2 for each of the 2hleaves), so short-circuiting cuts the number of function calls in half in the worst case. In C, the standard recursive algorithm may be implemented as: The short-circuited algorithm may be implemented as: Note the use ofshort-circuit evaluationof the Boolean && (AND) operators, so that the recursive call is made only if the node is valid (non-Null). 
Note that while the first term in the AND is a pointer to a node, the second term is a Boolean, so the overall expression evaluates to a Boolean. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entirecontrol flowof these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency. Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example ismerge sort, which is often implemented by switching to the non-recursiveinsertion sortwhen the data is sufficiently small, as in thetiled merge sort. Hybrid recursive algorithms can often be further refined, as inTimsort, derived from a hybrid merge sort/insertion sort. Recursion anditerationare equally expressive: recursion can be replaced by iteration with an explicitcall stack, while iteration can be replaced withtail recursion. Which approach is preferable depends on the problem under consideration and the language used. Inimperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, infunctional languagesrecursion is preferred, with tail recursion optimization leading to little overhead. Implementing an algorithm using iteration may not be easily achievable. Compare the templates to compute xndefined by xn= f(n, xn-1) from xbase: For an imperative language the overhead is to define the function, and for a functional language the overhead is to define the accumulator variable x. 
For example, afactorialfunction may be implemented iteratively inCby assigning to a loop index variable and accumulator variable, rather than by passing arguments and returning values by recursion: Mostprogramming languagesin use today allow the direct specification of recursive functions and procedures. When such a function is called, the program'sruntime environmentkeeps track of the variousinstancesof the function (often using acall stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls withiterative control constructsand simulating the call stack with astackexplicitly managed by the program.[9][10] Conversely, all iterative functions and procedures that can be evaluated by a computer (seeTuring completeness) can be expressed in terms of recursive functions; iterative control constructs such aswhile loopsandfor loopsare routinely rewritten in recursive form infunctional languages.[11][12]However, in practice this rewriting depends ontail call elimination, which is not a feature of all languages.C,Java, andPythonare notable mainstream languages in which all function calls, includingtail calls, may cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form mayoverflow the call stack, although tail call elimination may be a feature that is not covered by a language's specification, and different implementations of the same language may differ in tail call elimination capabilities. In languages (such asCandJava) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; infunctional languages, a function call (particularly atail call) is typically a very fast operation, and the difference is usually less noticeable. 
As a concrete example, the difference in performance between recursive and iterative implementations of the "factorial" example above depends highly on thecompilerused. In languages where looping constructs are preferred, the iterative version may be as much as severalorders of magnitudefaster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration. In some programming languages, the maximum size of thecall stackis much less than the space available in theheap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoidstack overflows;Pythonis one such language.[13]Note the caveat below regarding the special case oftail recursion. Because recursive algorithms can be subject to stack overflows, they may be vulnerable topathologicalormaliciousinput.[14]Some malware specifically targets a program's call stack and takes advantage of the stack's inherently recursive nature.[15]Even in the absence of malware, a stack overflow caused by unbounded recursion can be fatal to the program, andexception handlinglogicmay not prevent the correspondingprocessfrom beingterminated.[16] Multiply recursive problems are inherently recursive, because of prior state they need to track. One example istree traversalas indepth-first search; though both recursive and iterative methods are used,[17]they contrast with list traversal and linear search in a list, which is a singly recursive and thus naturally iterative method. Other examples includedivide-and-conquer algorithmssuch asQuicksort, and functions such as theAckermann function. 
All of these algorithms can be implemented iteratively with the help of an explicitstack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution. Recursive algorithms can be replaced with non-recursive counterparts.[18]One method for replacing recursive algorithms is to simulate them usingheap memoryin place ofstack memory.[19]An alternative is to develop a replacement algorithm entirely based on non-recursive methods, which can be challenging.[20]For example, recursive algorithms formatching wildcards, such asRich Salz'wildmatalgorithm,[21]were once typical. Non-recursive algorithms for the same purpose, such as theKrauss matching wildcards algorithm, have been developed to avoid the drawbacks of recursion[22]and have improved only gradually based on techniques such as collectingtestsandprofilingperformance.[23] Tail-recursive functions are functions in which all recursive calls aretail callsand hence do not build up any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) isnottail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With acompilerorinterpreterthat treats tail-recursive calls asjumpsrather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the "for" and "while" loops. The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller's return position need not be saved on thecall stack; when the recursive call returns, it will branch directly on the previously saved return position. 
Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time. Consider these two functions: The output of function 2 is that of function 1 with the lines swapped. In the case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Also note that theorderof the print statements is reversed, which is due to the way the functions and statements are stored on thecall stack. A classic example of a recursive procedure is the function used to calculate thefactorialof anatural number: The function can also be written as arecurrence relation: This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the pseudocode above: This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages: The imperative code above is equivalent to this mathematical definition using an accumulator variablet: The definition above translates straightforwardly tofunctional programming languagessuch asScheme; this is an example of iteration implemented recursively. TheEuclidean algorithm, which computes thegreatest common divisorof two integers, can be written recursively. Function definition: Recurrence relationfor greatest common divisor, wherex%y{\displaystyle x\%y}expresses theremainderofx/y{\displaystyle x/y}: The recursive program above istail-recursive; it is equivalent to an iterative algorithm, and the computation shown above shows the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. 
By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack. The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps. The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[24][25] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller one. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack? Function definition: Recurrence relation for hanoi: Example implementations: Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[26] The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched for, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for. Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.
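The recursive scheme just described might be sketched in Python as follows (an illustration under the usual conventions, not the C listing the article refers to):

```python
def binary_search(data, value, low, high):
    # Base case: an empty index range means the value is absent.
    if low > high:
        return -1
    mid = (low + high) // 2
    if data[mid] == value:
        return mid  # found at the midpoint
    if data[mid] < value:
        # Recurse on the upper half; each call halves the problem domain.
        return binary_search(data, value, mid + 1, high)
    # Recurse on the lower half.
    return binary_search(data, value, low, mid - 1)
```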
Example implementation of binary search in C: An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time. "Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms."[27] The examples in this section illustrate what is known as "structural recursion". This term refers to the fact that the recursive procedures are acting on data that is defined recursively. As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value.[7] Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The "next" element of struct node is a pointer to another struct node, effectively creating a list type. Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure. Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree). Operations on the tree can be implemented using recursion.
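A Python analogue of the recursively defined list node and the list_print walk (a sketch; the article's actual listings are in C):

```python
class Node:
    """A singly linked list node, defined in terms of itself:
    `next` refers to another Node, or None for the empty list."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def list_print(node):
    # Base case: None plays the role of C's NULL pointer.
    if node is None:
        return
    print(node.data)       # print this node's data element
    list_print(node.next)  # recurse on the rest of the list
```

As in the C version described above, the traversal leaves the list unchanged.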
Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls: At most two recursive calls will be made for any given call to tree_contains as defined above. The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order. Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, so the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem. This code combines recursion and iteration: the files and directories are iterated over, and each directory is opened recursively. The "rtraverse" method is an example of direct recursion, whilst the "traverse" method is a wrapper function. The "base case" scenario is that there will always be a fixed number of files and/or directories in a given filesystem. The time efficiency of recursive algorithms can be expressed in a recurrence relation of Big O notation. They can (usually) then be simplified into a single Big O term. If the time complexity of the function is of the form T(n) = a·T(n/b) + f(n), then the Big O of the time complexity is thus: where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and f(n) represents the work that the function does independently of any recursion (e.g. partitioning, recombining) at each level of recursion. In the procedural interpretation of logic programs, clauses (or rules) of the form A :- B are treated as procedures, which reduce goals of the form A to subgoals of the form B.
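The tree_contains operation described earlier in this section, with its two potential recursive calls per node, might be sketched in Python as follows (an illustration, not the article's C listing):

```python
class TreeNode:
    """A binary tree node with two self-referential fields."""
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def tree_contains(node, value):
    # Base case: an empty subtree cannot contain the value.
    if node is None:
        return False
    if node.data == value:
        return True
    # At most two recursive calls per node; `or` short-circuits, so the
    # right subtree is searched only if the left search fails.
    return tree_contains(node.left, value) or tree_contains(node.right, value)
```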
For example, the Prolog clauses: define a procedure which can be used to search for a path from X to Y, either by finding a direct arc from X to Y, or by finding an arc from X to Z and then searching recursively for a path from Z to Y. Prolog executes the procedure by reasoning top-down (or backwards) and searching the space of possible paths depth-first, one branch at a time. If it tries the second clause and finitely fails to find a path from Z to Y, it backtracks and tries to find an arc from X to another node, and then searches for a path from that other node to Y. However, in the logical reading of logic programs, clauses are understood declaratively as universally quantified conditionals. For example, the recursive clause of the path-finding procedure is understood as representing the knowledge that, for every X, Y and Z, if there is an arc from X to Z and a path from Z to Y, then there is a path from X to Y. In symbolic form: The logical reading frees the reader from needing to know how the clause is used to solve problems. The clause can be used top-down, as in Prolog, to reduce problems to subproblems. Or it can be used bottom-up (or forwards), as in Datalog, to derive conclusions from conditions. This separation of concerns is a form of abstraction, which separates declarative knowledge from problem-solving methods (see Algorithm#Algorithm = Logic + Control).[28] A common mistake among programmers is not providing a way to exit a recursive function, often by omitting or incorrectly checking the base case, letting it run (at least theoretically) infinitely by endlessly calling itself recursively. This is called infinite recursion, and the program will never terminate. In practice, this typically exhausts the available stack space. In most programming environments, a program with infinite recursion will not really run forever.
Eventually, something will break and the program will report an error.[29] Below is Java code that would cause infinite recursion: Running this code will result in a stack overflow error.
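The article's example is Java; a Python analogue behaves similarly, except that CPython guards its call stack with a recursion limit and raises RecursionError instead of overflowing the native stack (a sketch, not the original listing):

```python
def count(n):
    # No base case: every call makes another call,
    # so the recursion can never terminate on its own.
    return count(n + 1)

# Calling count(0) eventually exhausts the recursion limit and raises
# RecursionError ("maximum recursion depth exceeded").
```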
https://en.wikipedia.org/wiki/Recursion_(computer_science)#Recursive_data_structures_(structural_recursion)
In functional programming, a functor is a design pattern inspired by the definition from category theory that allows one to apply a function to values inside a generic type without changing the structure of the generic type. In Haskell this idea can be captured in a type class: This declaration says that any instance of Functor must support a method fmap, which maps a function over the elements of the instance. Functors in Haskell should also obey the so-called functor laws,[1] which state that the mapping operation preserves the identity function and composition of functions: where . stands for function composition. In Scala a trait can instead be used: Functors form a base for more complex abstractions like applicative functors, monads, and comonads, all of which build atop a canonical functor structure. Functors are useful in modeling functional effects by values of parameterized data types. Modifiable computations are modeled by allowing a pure function to be applied to values of the "inner" type, thus creating the new overall value which represents the modified computation (which may have yet to run). In Haskell, lists are a simple example of a functor. We may implement fmap as: A binary tree may similarly be described as a functor: If we have a binary tree tr :: Tree a and a function f :: a -> b, the function fmap f tr will apply f to every element of tr. For example, if a is Int, adding 1 to each element of tr can be expressed as fmap (+ 1) tr.[2]
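The Haskell and Scala listings are not reproduced above; the same idea can be sketched in Python, where a mapping function plays the role of fmap for lists and for a binary tree encoded as nested tuples (the names here are illustrative, not part of any standard library):

```python
def fmap_list(f, xs):
    # Apply f to every element while preserving the list structure.
    return [f(x) for x in xs]

# A binary tree encoded as None (empty) or a (value, left, right) tuple.
def fmap_tree(f, t):
    if t is None:
        return None
    value, left, right = t
    return (f(value), fmap_tree(f, left), fmap_tree(f, right))
```

Both satisfy the functor laws informally: mapping the identity function returns an equal structure, and mapping g after f equals mapping their composition.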
https://en.wikipedia.org/wiki/Functor_(functional_programming)
In computer science, zipping is a function which maps a tuple of sequences into a sequence of tuples. The name zip derives from the action of a zipper, in that it interleaves two formerly disjoint sequences. The inverse function is unzip. Given the three words cat, fish and be, where |cat| is 3, |fish| is 4 and |be| is 2, let ℓ denote the length of the longest word, which is fish; ℓ = 4. The zip of cat, fish, be is then 4 tuples of elements: where # is a symbol not in the original alphabet. In Haskell this truncates to the shortest sequence length ℓ̲, where ℓ̲ = 2: Let Σ be an alphabet and # a symbol not in Σ. Let x1x2...x|x|, y1y2...y|y|, z1z2...z|z|, ... be n words (i.e. finite sequences) of elements of Σ. Let ℓ denote the length of the longest word, i.e. the maximum of |x|, |y|, |z|, ... . The zip of these words is a finite sequence of n-tuples of elements of (Σ ∪ {#}), i.e. an element of ((Σ ∪ {#})^n)^*, where for any index i > |w|, the element w_i is #. The zip of x, y, z, ... is denoted zip(x, y, z, ...) or x ⋆ y ⋆ z ⋆ ... . The inverse to zip is sometimes denoted unzip. A variation of the zip operation is defined by: where ℓ̲ is the minimum length of the input words. It avoids the use of an adjoined element #, but destroys information about elements of the input sequences beyond ℓ̲. Zip functions are often available in programming languages, often referred to as zip. In Lisp dialects one can simply map the desired function over the desired lists; map is variadic in Lisp, so it can take an arbitrary number of lists as arguments.
An example from Clojure:[1] In Common Lisp: Languages such as Python provide a zip() function.[2] zip() in conjunction with the * operator unzips a list:[2] Haskell has a method of zipping sequences but requires a specific function for each arity (zip for two sequences, zip3 for three, etc.);[3] similarly, the functions unzip and unzip3 are available for unzipping: List of languages by support of zip:
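A small Python session illustrating both behaviours on the cat/fish/be example (a sketch; zip_longest from the standard itertools module supplies the padded variant, with # as the adjoined symbol):

```python
from itertools import zip_longest

words = ["cat", "fish", "be"]

# Truncating zip, as in Haskell: stops at the shortest word (length 2).
truncated = list(zip(*words))
# [('c', 'f', 'b'), ('a', 'i', 'e')]

# Padded variant: extends to the longest word (length 4) with '#'.
padded = list(zip_longest(*words, fillvalue="#"))
# [('c', 'f', 'b'), ('a', 'i', 'e'), ('t', 's', '#'), ('#', 'h', '#')]

# zip(*...) also unzips: applied to the tuples, it recovers the columns.
unzipped = list(zip(*truncated))
# [('c', 'a'), ('f', 'i'), ('b', 'e')]
```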
https://en.wikipedia.org/wiki/Zipping_(computer_science)
In computer programming, a foreach loop (or for-each loop) is a control flow statement for traversing items in a collection. foreach is usually used in place of a standard for loop statement. Unlike other for loop constructs, however, foreach loops[1] usually maintain no explicit counter: they essentially say "do this to everything in this set", rather than "do this x times". This avoids potential off-by-one errors and makes code simpler to read. In object-oriented languages, an iterator, even if implicit, is often used as the means of traversal. The foreach statement in some languages has some defined order, processing each item in the collection from the first to the last. The foreach statement in many other languages, especially array programming languages, does not have any particular order. This simplifies loop optimization in general and in particular allows vector processing of items in the collection concurrently. Syntax varies among languages. Most use the simple word for, although others use the more logical word foreach, roughly as follows: Programming languages which support foreach loops include ABC, ActionScript, Ada, C++ (since C++11), C#, ColdFusion Markup Language (CFML), Cobra, D, Daplex (query language), Delphi, ECMAScript, Erlang, Java (since 1.5), JavaScript, Lua, Objective-C (since 2.0), ParaSail, Perl, PHP, Prolog,[2] Python, R, REALbasic, Rebol,[3] Red,[4] Ruby, Scala, Smalltalk, Swift, Tcl, tcsh, Unix shells, Visual Basic (.NET), and Windows PowerShell. Notable languages without foreach are C and C++ pre-C++11. ActionScript supports the ECMAScript 4.0 standard[5] for for each .. in,[6] which pulls the value at each index. It also supports for .. in,[7] which pulls the key at each index. Ada supports foreach loops as part of the normal for loop. Say X is an array: This syntax is used mostly on arrays, but will also work with other types when a full iteration is needed. Ada 2012 has generalized loops to foreach loops on any kind of container (array, lists, maps...): The C language does not have collections or a foreach construct.
However, it has several standard data structures that can be used as collections, and foreach can be made easily with a macro. However, two obvious problems occur: C string as a collection of char C int array as a collection of int (array size known at compile-time) Most general: string or array as collection (collection size known at run-time) In C#, assuming that myArray is an array of integers: Language Integrated Query (LINQ) provides the following syntax, accepting a delegate or lambda expression: C++11 provides a foreach loop. The syntax is similar to that of Java: C++11 range-based for statements have been implemented in the GNU Compiler Collection (GCC) (since version 4.6), Clang (since version 3.0) and Visual C++ 2012 (version 11[8]). The range-based for is syntactic sugar equivalent to: The compiler uses argument-dependent lookup to resolve the begin and end functions.[9] The C++ Standard Library also supports for_each,[10] which applies each element to a function, which can be any predefined function or a lambda expression. While range-based for is only from the start to the end, the range or direction can be changed by altering the first two parameters. Qt, a C++ framework, offers a macro providing foreach loops[11] using the STL iterator interface: Boost, a set of free peer-reviewed portable C++ libraries, also provides foreach loops:[12] The C++/CLI language proposes a construct similar to C#. Assuming that myArray is an array of integers: CFML incorrectly identifies the value as "index" in this construct; the index variable does receive the actual value of the array element, not its index. Common Lisp provides foreach ability either with the dolist macro: or the powerful loop macro to iterate on more data types and even with the mapcar function: or Foreach support was added in Delphi 2005, and uses an enumerator variable that must be declared in the var section. The iteration (foreach) form of the Eiffel loop construct is introduced by the keyword across.
In this example, every element of the structure my_list is printed: The local entity ic is an instance of the library class ITERATION_CURSOR. The cursor's feature item provides access to each structure element. Descendants of class ITERATION_CURSOR can be created to handle specialized iteration algorithms. The types of objects that can be iterated across (my_list in the example) are based on classes that inherit from the library class ITERABLE. The iteration form of the Eiffel loop can also be used as a boolean expression when the keyword loop is replaced by either all (effecting universal quantification) or some (effecting existential quantification). This iteration is a boolean expression which is true if all items in my_list have counts greater than three: The following is true if at least one item has a count greater than three: Go's foreach loop can be used to loop over an array, slice, string, map, or channel. Using the two-value form gets the index/key (first element) and the value (second element): Using the one-value form gets the index/key (first element):[13] Groovy supports for loops over collections like arrays, lists and ranges: Groovy also supports a C-style for loop with an array index: Collections in Groovy can also be iterated over using the each keyword and a closure. By default, the loop dummy is named it. Haskell allows looping over lists with monadic actions using mapM_ and forM_ (mapM_ with its arguments flipped) from Control.Monad: It is also possible to generalize those functions to work on applicative functors rather than monads, and on any data structure that is traversable, using traverse (for with its arguments flipped) and mapM (forM with its arguments flipped) from Data.Traversable. In Java, a foreach construct was introduced in Java Development Kit (JDK) 1.5.0.[14] Official sources use several names for the construct.
It is referred to as the "Enhanced for Loop",[14] the "For-Each Loop",[15] and the "foreach statement".[16][17]: 264 Java also provides the Stream API since Java 8:[17]: 294–203 In ECMAScript 5, a callback-based forEach() method was added to the array prototype:[18] The ECMAScript 6 standard introduced a more conventional for..of syntax that works on all iterables rather than operating on only array instances. However, no index variable is available with this syntax. For unordered iteration over the keys in an object, JavaScript features the for..in loop: To limit the iteration to the object's own properties, excluding those inherited through the prototype chain, it is often useful to add a hasOwnProperty() test (or a hasOwn() test if supported).[19] Alternatively, the Object.keys() method combined with the for..of loop can be used for a less verbose way to iterate over the keys of an object.[20] Source:[21] Iterate only through numerical index values: Iterate through all index values: In Mathematica, Do will simply evaluate an expression for each element of a list, without returning any value. It is more common to use Table, which returns the result of each evaluation in a new list. For each loops are supported in Mint, possessing the following syntax: The for (;;) or while (true) infinite loop in Mint can be written using a for each loop and an infinitely long list.[22] Foreach loops, called fast enumeration, are supported starting in Objective-C 2.0. They can be used to iterate over any object that implements the NSFastEnumeration protocol, including NSArray, NSDictionary (iterates over keys), NSSet, etc. NSArrays can also broadcast a message to their members: Where blocks are available, an NSArray can automatically perform a block on every contained item: The type of collection being iterated will dictate the item returned with each iteration. For example: OCaml is a functional programming language. Thus, the equivalent of a foreach loop can be achieved as a library function over lists and arrays.
For lists: or, in short form: For arrays: or, in short form: The ParaSail parallel programming language supports several kinds of iterators, including a general "for each" iterator over a container: ParaSail also supports filters on iterators, and the ability to refer to both the key and the value of a map. Here is a forward iteration over the elements of "My_Map" selecting only elements where the keys are in "My_Set": In Pascal, ISO standard 10206:1990 introduced iteration over set types, thus: In Perl, foreach (which is equivalent to the shorter for) can be used to traverse elements of a list. The expression which denotes the collection to loop over is evaluated in list context and each item of the resulting list is, in turn, aliased to the loop variable. List literal example: Array examples: Hash example: Direct modification of collection members: It is also possible to extract both keys and values using the alternate syntax: Direct modification of collection members: Python's tuple assignment, fully available in its foreach loop, also makes it trivial to iterate on (key, value) pairs in dictionaries: As for ... in is the only kind of for loop in Python, the equivalent to the "counter" loop found in other languages is ... although using the enumerate function is considered more "Pythonic": As for ... in is the only kind of for loop in R, the equivalent to the "counter" loop found in other languages is ... or using the conventional Scheme for-each function: do-something-with is a one-argument function. In Raku, a sister language to Perl, for must be used to traverse elements of a list (foreach is not allowed). The expression which denotes the collection to loop over is evaluated in list context, but not flattened by default, and each item of the resulting list is, in turn, aliased to the loop variable(s).
List literal example: Array examples: The for loop in its statement modifier form: Hash example: or or Direct modification of collection members with a doubly pointy block, <->: or This can also be used with a hash. In Rust, the for loop has the structure for <pattern> in <expression> { /* optional statements */ }. It implicitly calls the IntoIterator::into_iter method on the expression, and uses the resulting value, which must implement the Iterator trait. If the expression is itself an iterator, it is used directly by the for loop through an implementation of IntoIterator for all Iterators that returns the iterator unchanged. The loop calls the Iterator::next method on the iterator before executing the loop body. If Iterator::next returns Some(_), the value inside is assigned to the pattern and the loop body is executed; if it returns None, the loop is terminated. do-something-with is a one-argument function. Swift uses the for…in construct to iterate over members of a collection.[23] The for…in loop is often used with the closed and half-open range constructs to iterate over the loop body a certain number of times. SystemVerilog supports iteration over any vector or array type of any dimensionality using the foreach keyword. A trivial example iterates over an array of integers: A more complex example iterates over an associative array of arrays of integers: Tcl uses foreach to iterate over lists. It is possible to specify more than one iterator variable, in which case they are assigned sequential values from the list. It is also possible to iterate over more than one list simultaneously. In the following, i assumes sequential values of the first list, j sequential values of the second list: or without type inference Invoke a hypothetical frob command three times, giving it a color name each time. From a pipeline:[24]
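The Python idioms mentioned in this survey (tuple assignment over dictionary items, and enumerate in place of an explicit counter) can be sketched as:

```python
ages = {"ada": 36, "grace": 85}

# Tuple assignment in the loop header yields (key, value) pairs.
pairs = []
for name, age in ages.items():
    pairs.append((name, age))

# enumerate() supplies the counter that other languages'
# "counter" for loops provide explicitly.
indexed = []
for i, letter in enumerate("abc"):
    indexed.append((i, letter))
```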
https://en.wikipedia.org/wiki/Foreach_loop
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program. In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner. Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming that treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification.[1][2] Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions.
Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme,[3][4][5][6] Clojure, Wolfram Language,[7][8] Racket,[9] Erlang,[10][11][12] Elixir,[13] OCaml,[14][15] Haskell,[16][17] and F#.[18][19] Lean is a functional programming language commonly used for verifying mathematical theorems.[20] Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web,[21] R in statistics,[22][23] J, K and Q in financial analysis, and XQuery/XSLT for XML.[24][25] Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values.[26] In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#,[27] Kotlin,[28] Perl,[29] PHP,[30] Python,[31] Go,[32] Rust,[33] Raku,[34] Scala,[35] and Java (since Java 8).[36] The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation,[37] showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s.[38] Church later developed a weaker system, the simply typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms.[39] This forms the basis for statically typed functional programming.
The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at the Massachusetts Institute of Technology (MIT).[40] Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions.[41] Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced.[42] Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language.[43] It is an assembly-style language for manipulating lists of symbols. It does have a notion of a generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features. Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language (ISBN 9780471430148). APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q.
In the mid-1960s, Peter Landin invented the SECD machine,[44] the first abstract machine for a functional programming language,[45] described a correspondence between ALGOL 60 and the lambda calculus,[46][47] and proposed the ISWIM programming language.[48] John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs".[49] He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality.[citation needed] Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming. The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL.[50] NPL was based on Kleene recursion equations and was first introduced in their work on program transformation.[51] Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope.[52] ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML. In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming. In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types.
This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages.[citation needed] The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990. More recently it has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept.[53] Functional programming continues to be used in commercial settings.[54][55][56] A number of concepts[57] and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts.[58] Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator d/dx, which returns the derivative of a function f. Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions.
The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values). Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one. Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code: While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure.[59] C++11 added the constexpr keyword with similar semantics. Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space linear in the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops.
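The successor-function example can be written directly in Python with functools.partial (a sketch of partial application; the names successor and curried_add are illustrative):

```python
from functools import partial
from operator import add

# The addition operator partially applied to the natural number one.
successor = partial(add, 1)

# A hand-rolled curried form: each application returns a new function
# that accepts the next argument.
def curried_add(x):
    return lambda y: x + y
```

Both `successor(41)` and `curried_add(1)(41)` evaluate to 42.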
However, a special form of recursion known astail recursioncan be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program intocontinuation passing styleduring compiling, among other approaches. TheSchemelanguage standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls.[60][61]Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop and doing so would be safe-for-space.[62]Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example,Chickenintentionally maintains a stack and lets thestack overflow. However, when this happens, itsgarbage collectorwill claim space back,[63]allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop. Common patterns of recursion can be abstracted away using higher-order functions, withcatamorphismsandanamorphisms(or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such asloopsinimperative languages. Most general purpose functional programming languages allow unrestricted recursion and areTuring complete, which makes thehalting problemundecidable, can cause unsoundness ofequational reasoning, and generally requires the introduction ofinconsistencyinto the logic expressed by the language'stype system. Some special purpose languages such asCoqallow onlywell-foundedrecursion and arestrongly normalizing(nonterminating computations can be expressed only with infinite streams of values calledcodata). 
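The transformation a tail-call-optimizing compiler performs can be illustrated with a factorial sketch (CPython itself performs no such optimization; the functions are hypothetical):

```python
def fact_tail(n, acc=1):
    # Tail-recursive factorial: the recursive call is the last operation,
    # so a tail-call-optimizing compiler could reuse the stack frame.
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

def fact_loop(n):
    # The loop such a compiler would effectively produce:
    # the accumulator replaces the chain of stack frames.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

print(fact_tail(10), fact_loop(10))  # 3628800 3628800
```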
As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming.[64] Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression print(length([2+1, 3*2, 1/0, 5-4])) fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself. The usual implementation strategy for lazy evaluation in functional languages is graph reduction.[65] Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell.
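The strict/lazy distinction can be simulated in Python by representing each list element as a thunk, a zero-argument function (an analogy only; lazy languages typically implement this via graph reduction, as noted above):

```python
# Building the list does not evaluate any element.
lazy_list = [lambda: 2 + 1, lambda: 3 * 2, lambda: 1 / 0, lambda: 5 - 4]

# Taking the length inspects only the spine of the list, never forcing
# the elements, so the division by zero is never evaluated:
print(len(lazy_list))  # 4

# Forcing the third element, as strict evaluation of the list would, fails:
try:
    lazy_list[2]()
except ZeroDivisionError:
    print("strict evaluation fails on the third element")
```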
Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams.[2] Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis.[66] Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them.[67] Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, which rejects all invalid programs at compilation time, at the risk of false positive errors (rejecting some valid programs). This contrasts with the untyped lambda calculus used in Lisp and its variants (such as Scheme), which accepts all syntactically valid programs at compilation time and rejects invalid programs only at runtime, at the risk of false negative errors. The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases. Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with.[68][69][70][71] But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code.
While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the language C that is written in Coq and formally verified.[72] A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience.[73] GADTs are available in the Glasgow Haskell Compiler, in OCaml[74] and in Scala,[75] and have been proposed as additions to other languages including Java and C#.[76] Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chance of side effects, because any variable can be replaced with its actual value at any point of execution. So, functional programs are referentially transparent.[77] Consider the C assignment statement x = x * 10; this changes the value assigned to the variable x. Let us say that the initial value of x was 1; then two consecutive evaluations of the variable x yield 10 and 100 respectively. Clearly, replacing x = x * 10 with either 10 or 100 gives a program a different meaning, so the expression is not referentially transparent. In fact, assignment statements are never referentially transparent. Now consider another function, such as int plusone(int x) { return x + 1; }. It is transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent. Purely functional data structures are often represented in a different way from their imperative counterparts.[78] For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays.
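The contrast between the assignment statement and the transparent function can be restated in Python (`plus_one` is a hypothetical name for the same one-line function):

```python
x = 1
x = x * 10  # x is now 10
x = x * 10  # x is now 100: the "same" expression yields different values,
            # so x = x * 10 is not referentially transparent.

def plus_one(n):
    # Referentially transparent: the result depends only on the argument,
    # so any call can be replaced by its value without changing the program.
    return n + 1

print(plus_one(1), x)  # 2 100
```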
Arrays can be replaced by maps or random access lists, which admit purely functional implementation, but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating. Calling the insert method will result in some but not all nodes being created.[79] Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side effects and provides referential transparency. Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item. The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable result. Traditional imperative loop: Functional programming with higher-order functions: Sometimes the abstractions offered by functional programming might lead to development of more robust code that avoids certain issues that might arise when building upon large amounts of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule). There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state.
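The two JavaScript listings referred to in the text are not reproduced here; a Python sketch of the same computation (the sample array is hypothetical) shows the contrast:

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]  # sample data

# Traditional imperative loop:
result = 0
for n in numbers:
    if n % 2 == 0:
        result += n * 10

# Functional style with the higher-order functions filter, map and reduce:
result_fp = reduce(lambda acc, n: acc + n,
                   map(lambda n: n * 10,
                       filter(lambda n: n % 2 == 0, numbers)),
                   0)

print(result, result_fp)  # 120 120
```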
Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way. The pure functional programming languageHaskellimplements them usingmonads, derived fromcategory theory.[80]Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries).[81] Functional languages also simulate states by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged.[82] Impure functional languages usually include a more direct method of managing mutable state.Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.[citation needed] Alternative methods such asHoare logicanduniquenesshave been developed to track side effects in programs. Some modern research languages useeffect systemsto make the presence of side effects explicit.[83] Functional programming languages are typically less efficient in their use ofCPUand memory than imperative languages such asCandPascal.[84]This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. 
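The state-passing technique described above can be sketched with a bank-balance example (the `deposit` function is hypothetical):

```python
def deposit(balance, amount):
    # Pure state transformer: accepts the current state as a parameter and
    # returns the result together with a new state, leaving the old state
    # unchanged.
    new_balance = balance + amount
    return new_balance, new_balance

balance0 = 100
_, balance1 = deposit(balance0, 50)
_, balance2 = deposit(balance1, 25)

print(balance0, balance2)  # 100 175  (the old state remains unmodified)
```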
Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complexpointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree).[85]However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such asOCamlandCleanare only slightly slower than C according toThe Computer Language Benchmarks Game.[86]For programs that handle largematricesand multidimensionaldatabases,arrayfunctional languages (such asJandK) were designed with speed optimizations. Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities forinline expansion.[87]Even if the involved copying that may seem implicit when dealing with persistent immutable data structures might seem computationally costly, some functional programming languages, likeClojuresolve this issue by implementing mechanisms for safe memory sharing betweenformallyimmutabledata.[88]Rustdistinguishes itself by its approach to data immutability which involves immutablereferences[89]and a concept calledlifetimes.[90] Immutable data with separation of identity and state andshared-nothingschemes can also potentially be more well-suited forconcurrent and parallelprogramming by the virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations are usuallyatomicand this allows eliminating the need for locks. 
This is how, for example, java.util.concurrent classes are implemented, where some of them are immutable variants of the corresponding classes that are not suitable for concurrent use.[91] Functional programming languages often have a concurrency model that, instead of shared state and synchronization, leverages message passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue).[92][93] This approach is common in Erlang/Elixir or Akka. Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993[66] discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008[94] give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation, making extensive use of dereferenced code and data, perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles)[citation needed]. Some functional programming languages might not optimize abstractions such as higher-order functions like "map" or "filter" as efficiently as the underlying imperative operations. Consider, as an example, two ways to check if 5 is an even number in Clojure: the built-in even? predicate, and a direct comparison of (mod 5 2) with 0 via .equals. When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation has a mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of that can be attributed to the type checking and exception handling involved in the implementation of even?.
For instance, the lo library for Go implements various higher-order functions common in functional programming languages using generics. In a benchmark provided by the library's author, calling map is 4% slower than an equivalent for loop and has the same allocation profile,[95] which can be attributed to various compiler optimizations, such as inlining.[96] One distinguishing feature of Rust is its zero-cost abstractions, meaning that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into a standalone assembly instruction, without the overhead of the loop-controlling code. If an iterative operation writes to an array, the resulting array's elements will be stored in specific CPU registers, allowing for constant-time access at runtime.[97] It is possible to use a functional style of programming in languages that are not traditionally considered functional languages.[98] For example, both D[99] and Fortran 95[59] explicitly support pure functions. JavaScript, Lua,[100] Python and Go[101] had first-class functions from their inception.[102] Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2,[103] though Python 3 relegated "reduce" to the functools standard library module.[104] First-class functions have been introduced into other mainstream languages such as Perl 5.0 in 1994, PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin.[28][citation needed] In Perl, lambda, map, reduce, filter, and closures are fully supported and frequently used. The book Higher-Order Perl, released in 2005, was written to provide an expansive guide on using Perl for functional programming. In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style.
InJava, anonymous classes can sometimes be used to simulate closures;[105]however, anonymous classes are not always proper replacements to closures because they have more limited capabilities.[106]Java 8 supports lambda expressions as a replacement for some anonymous classes.[107] InC#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#. Manyobject-orienteddesign patternsare expressible in functional programming terms: for example, thestrategy patternsimply dictates use of a higher-order function, and thevisitorpattern roughly corresponds to acatamorphism, orfold. Similarly, the idea of immutable data from functional programming is often included in imperative programming languages,[108]for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript.[109] Logic programmingcan be viewed as a generalisation of functional programming, in which functions are a special case of relations.[110]For example, the function, mother(X) = Y, (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program: The program can be queried, like a functional program, to generate mothers from children: But it can also be queriedbackwards, to generate children: It can even be used to generate all instances of the mother relation: Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form: The same definition in relational notation needs to be written in the unnested form: Here:-meansifand,meansand. 
However, the difference between the two representations is simply syntactic. InCiaoProlog, relations can be nested, like functions in functional programming:[111] Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy. Emacs, a highly extensible text editor family uses its ownLisp dialectfor writing plugins. The original author of the most popular Emacs implementation,GNU Emacsand Emacs Lisp,Richard Stallmanconsiders Lisp one of his favorite programming languages.[112] Helix, since version 24.03 supports previewingASTasS-expressions, which are also the core feature of the Lisp programming language family.[113] Spreadsheetscan be considered a form of pure,zeroth-order, strict-evaluation functional programming system.[114]However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far remain primarily academic in nature.[115] Due to theircomposability, functional programming paradigms can be suitable formicroservices-based architectures.[116] Functional programming is an active area of research in the field ofprogramming language theory. There are severalpeer-reviewedpublication venues focusing on functional programming, including theInternational Conference on Functional Programming, theJournal of Functional Programming, and theSymposium on Trends in Functional Programming. Functional programming has been employed in a wide range of industrial applications. 
For example,Erlang, which was developed by theSwedishcompanyEricssonin the late 1980s, was originally used to implementfault-toleranttelecommunicationssystems,[11]but has since become popular for building a range of applications at companies such asNortel,Facebook,Électricité de FranceandWhatsApp.[10][12][117][118][119]Scheme, a dialect ofLisp, was used as the basis for several applications on earlyApple Macintoshcomputers[3][4]and has been applied to problems such as training-simulation software[5]andtelescopecontrol.[6]OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis,[14]driververification, industrialrobotprogramming and static analysis ofembedded software.[15]Haskell, though initially intended as a research language,[17]has also been applied in areas such as aerospace systems, hardware design and web programming.[16][17] Other functional programming languages that have seen use in industry includeScala,[120]F#,[18][19]Wolfram Language,[7]Lisp,[121]Standard ML[122][123]and Clojure.[124]Scala has been widely used inData science,[125]whileClojureScript,[126]Elm[127]orPureScript[128]are some of the functional frontend programming languages used in production.Elixir's Phoenix framework is also used by some relatively popular commercial projects, such asFont AwesomeorAllegro(one of the biggest e-commerce platforms in Poland)[129]'s classified ads platformAllegro Lokalnie.[130] Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner toGröbner basisoptimizations but also for regulatory frameworks such asComprehensive Capital Analysis and Review. Given the use of OCaml andCamlvariations in finance, these systems are sometimes considered related to acategorical abstract machine. 
Functional programming is heavily influenced bycategory theory.[citation needed] Manyuniversitiesteach functional programming.[131][132][133][134]Some treat it as an introductory programming concept[134]while others first teach imperative programming methods.[133][135] Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts.[136]It has also been used to teach classical mechanics, as in the bookStructure and Interpretation of Classical Mechanics. In particular,Schemehas been a relatively popular choice for teaching programming for years.[137][138]
https://en.wikipedia.org/wiki/Functional_programming
In mathematics and computer science, a higher-order function (HOF) is a function that does at least one of the following: All other functions are first-order functions. In mathematics, higher-order functions are also termed operators or functionals. The differential operator in calculus is a common example, since it maps a function to its derivative, also a function. Higher-order functions should not be confused with other uses of the word "functor" throughout mathematics; see Functor (disambiguation). In the untyped lambda calculus, all functions are higher-order; in a typed lambda calculus, from which most functional programming languages are derived, higher-order functions that take one function as argument are values with types of the form (τ1→τ2)→τ3{\displaystyle (\tau _{1}\to \tau _{2})\to \tau _{3}}. The examples are not intended to compare and contrast programming languages, but to serve as examples of higher-order function syntax. In the following examples, the higher-order function twice takes a function, and applies the function to some value twice. If twice has to be applied several times for the same f, it preferably should return a function rather than a value. This is in line with the "don't repeat yourself" principle. Or in a tacit manner: Using std::function in C++11: Or, with generic lambdas provided by C++14: Using just delegates: Or equivalently, with static methods: In Elixir, you can mix module definitions and anonymous functions. Alternatively, we can also compose using pure anonymous functions. In this Erlang example, the higher-order function or_else/2 takes a list of functions (Fs) and an argument (X). It evaluates the function F with the argument X. If the function F returns false, then the next function in Fs will be evaluated. If the function F returns {false, Y}, then the next function in Fs will be evaluated with argument Y. If the function F returns R, the higher-order function or_else/2 will return R. Note that X, Y, and R can be functions. The example returns false.
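A Python rendering of the `twice` example discussed above (returning a function rather than a value, so the result can be reused):

```python
def twice(f):
    # Higher-order function: takes a function f and returns
    # the function that applies f twice.
    return lambda x: f(f(x))

plus_three = lambda x: x + 3

print(twice(plus_three)(7))  # 13
```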
Notice a function literal can be defined either with an identifier (twice) or anonymously (assigned to variableplusThree). Explicitly, or tacitly, Using just functional interfaces: Or equivalently, with static methods: With arrow functions: Or with classical syntax: or with all functions in variables: Note that arrow functions implicitly capture any variables that come from the parent scope,[1]whereas anonymous functions require theusekeyword to do the same. or with all functions in variables: Python decorator syntax is often used to replace a function with the result of passing that function through a higher-order function. E.g., the functiongcould be implemented equivalently: In Raku, all code objects are closures and therefore can reference inner "lexical" variables from an outer scope because the lexical variable is "closed" inside of the function. Raku also supports "pointy block" syntax for lambda expressions which can be assigned to a variable or invoked anonymously. Tcl uses apply command to apply an anonymous function (since 8.6). The XACML standard defines higher-order functions in the standard to apply a function to multiple values of attribute bags. The list of higher-order functions in XACML can be foundhere. Function pointersin languages such asC,C++,Fortran, andPascalallow programmers to pass around references to functions. The following C code computes an approximation of the integral of an arbitrary function: Theqsortfunction from the C standard library uses a function pointer to emulate the behavior of a higher-order function. Macroscan also be used to achieve some of the effects of higher-order functions. However, macros cannot easily avoid the problem of variable capture; they may also result in large amounts of duplicated code, which can be more difficult for a compiler to optimize. Macros are generally not strongly typed, although they may produce strongly typed code. 
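The decorator usage mentioned for Python can be sketched as follows (`twice` and `g` are hypothetical names):

```python
def twice(f):
    # Higher-order function used as a decorator below.
    return lambda x: f(f(x))

@twice               # equivalent to writing: g = twice(g)
def g(x):
    return x + 3

print(g(7))  # 13
```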
In otherimperative programminglanguages, it is possible to achieve some of the same algorithmic results as are obtained via higher-order functions by dynamically executing code (sometimes calledEvalorExecuteoperations) in the scope of evaluation. There can be significant drawbacks to this approach: Inobject-oriented programminglanguages that do not support higher-order functions,objectscan be an effective substitute. An object'smethodsact in essence like functions, and a method may accept objects as parameters and produce objects as return values. Objects often carry added run-time overhead compared to pure functions, however, and addedboilerplate codefor defining and instantiating an object and its method(s). Languages that permitstack-based (versusheap-based) objects orstructscan provide more flexibility with this method. An example of using a simple stack based record inFree Pascalwith a function that returns a function: The functiona()takes aTxyrecord as input and returns the integer value of the sum of the record'sxandyfields (3 + 7). Defunctionalizationcan be used to implement higher-order functions in languages that lackfirst-class functions: In this case, different types are used to trigger different functions viafunction overloading. The overloaded function in this example has the signatureauto apply.
https://en.wikipedia.org/wiki/Higher-order_function
A list comprehension is a syntactic construct available in some programming languages for creating a list based on existing lists. It follows the form of the mathematical set-builder notation (set comprehension), as distinct from the use of map and filter functions. Consider the following example in mathematical set-builder notation: S = {2·x | x ∈ ℕ, x² > 3}. This can be read, "S{\displaystyle S} is the set of all numbers "2 times x{\displaystyle x}" SUCH THAT x{\displaystyle x} is an ELEMENT or MEMBER of the set of natural numbers (N{\displaystyle \mathbb {N} }), AND x{\displaystyle x} squared is greater than 3{\displaystyle 3}." The smallest natural number, x = 1, fails to satisfy the condition x² > 3 (the condition 1² > 3 is false), so 2·1 is not included in S. The next natural number, 2, does satisfy the condition (2² > 3), as does every other natural number. Thus the qualifying values of x are 2, 3, 4, 5, ... Since the set S consists of all numbers "2 times x", it is given by S = {4, 6, 8, 10, ...}. S is, in other words, the set of all even numbers greater than 2. In this annotated version of the example: A list comprehension has the same syntactic components to represent generation of a list in order from an input list or iterator: The order of generation of members of the output list is based on the order of items in the input. In Haskell's list comprehension syntax, this set-builder construct would be written similarly, as [2*x | x <- [0..], x^2 > 3]. Here, the list [0..] represents N{\displaystyle \mathbb {N} }, x^2>3 represents the predicate, and 2*x represents the output expression. List comprehensions give results in a defined order (unlike the members of sets); and list comprehensions may generate the members of a list in order, rather than produce the entirety of the list, thus allowing, for example, the previous Haskell definition of the members of an infinite list. The existence of related constructs predates the use of the term "list comprehension". The SETL programming language (1969) has a set formation construct which is similar to list comprehensions.
E.g., this code prints all prime numbers from 2 toN: Thecomputer algebra systemAXIOM(1973) has a similar construct that processesstreams. The first use of the term "comprehension" for such constructs was inRod BurstallandJohn Darlington's description of their functional programming languageNPLfrom 1977. In his retrospective "Some History of Functional Programming Languages",[1]David Turnerrecalls: NPL was implemented in POP2 by Burstall and used for Darlington’s work on program transformation (Burstall & Darlington 1977). The language was first order, strongly (but not polymorphically) typed, purely functional, call-by-value. It also had “set expressions” e.g. In a footnote attached to the term "list comprehension", Turner also notes I initially called theseZF expressions, a reference toZermelo–Fraenkel set theory— it wasPhil Wadlerwho coined the better termlist comprehension. Burstall and Darlington's work with NPL influenced many functional programming languages during the 1980s, but not all included list comprehensions. An exception was Turner's influential, pure, lazy, functional programming languageMiranda, released in 1985. The subsequently developed standard pure lazy functional languageHaskellincludes many of Miranda's features, including list comprehensions. Comprehensions were proposed as a query notation for databases[2]and were implemented in theKleislidatabase query language.[3] In Haskell, amonad comprehensionis a generalization of the list comprehension to othermonads in functional programming. ThePythonlanguage introduces syntax forsetcomprehensions starting in version 2.7. Similar in form to list comprehensions, set comprehensions generate Python sets instead of lists. Racketset comprehensions generate Racket sets instead of lists. ThePythonlanguage introduced a new syntax fordictionarycomprehensions in version 2.7, similar in form to list comprehensions but which generate Pythondictsinstead of lists. 
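The Python set and dictionary comprehensions mentioned above follow the list-comprehension form but produce sets and dicts (sample data hypothetical):

```python
# Set comprehension: generates a set, so duplicate results collapse.
squares_set = {x * x for x in [-2, -1, 0, 1, 2]}
print(squares_set == {0, 1, 4})  # True

# Dictionary comprehension: generates a dict mapping keys to values.
squares_dict = {x: x * x for x in range(4)}
print(squares_dict)  # {0: 0, 1: 1, 2: 4, 3: 9}
```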
Racket hash table comprehensions generate Racket hash tables (one implementation of the Racket dictionary type). The Glasgow Haskell Compiler has an extension called parallel list comprehension (also known as zip-comprehension) that permits multiple independent branches of qualifiers within the list comprehension syntax. Whereas qualifiers separated by commas are dependent ("nested"), qualifier branches separated by pipes are evaluated in parallel (this does not refer to any form of multithreadedness: it merely means that the branches are zipped). Racket's comprehensions standard library contains parallel and nested versions of its comprehensions, distinguished by "for" vs "for*" in the name. For example, the vector comprehensions "for/vector" and "for*/vector" create vectors by parallel versus nested iteration over sequences. The following is Racket code for the Haskell list comprehension examples. In Python, we could do as follows: In Julia, practically the same results can be achieved as follows: with the only difference that instead of lists, in Julia, we have arrays. Like the original NPL use, these are fundamentally database access languages. This makes the comprehension concept more important, because it is computationally infeasible to retrieve the entire list and operate on it (the initial 'entire list' may be an entire XML database).
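The nested-versus-parallel distinction can be shown in Python: chained `for` clauses are dependent (a Cartesian product, like comma-separated qualifiers in Haskell), while `zip` gives the element-wise pairing that GHC's parallel list comprehension provides:

```python
xs = [1, 2, 3]
ys = [10, 20, 30]

# Dependent ("nested") qualifiers: every y for every x, 9 pairs.
nested = [(x, y) for x in xs for y in ys]

# Independent branches zipped together, the effect of GHC's
# parallel list comprehension: element-wise, 3 pairs.
parallel = [(x, y) for x, y in zip(xs, ys)]
```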
In XPath, the expression: is conceptually evaluated as a series of "steps" where each step produces a list and the next step applies a filter function to each element in the previous step's output.[4] In XQuery, full XPath is available, but FLWOR statements are also used, which is a more powerful comprehension construct.[5] Here the XPath //book is evaluated to create a sequence (a.k.a. list); the where clause is a functional "filter", the order by sorts the result, and the <shortBook>...</shortBook> XML snippet is actually an anonymous function that builds/transforms XML for each element in the sequence using the 'map' approach found in other functional languages. So, in another functional language the above FLWOR statement may be implemented like this: C# 3.0 has a group of related features called LINQ, which defines a set of query operators for manipulating object enumerations. It also offers an alternative comprehension syntax, reminiscent of SQL. LINQ provides a capability over typical list comprehension implementations: when the root object of the comprehension implements the IQueryable interface, rather than just executing the chained methods of the comprehension, the entire sequence of commands is converted into an abstract syntax tree (AST) object, which is passed to the IQueryable object to interpret and execute. This allows, among other things, for the IQueryable to translate the query for execution elsewhere, for example by converting it into SQL to run against a remote database. C++ does not have any language features directly supporting list comprehensions, but operator overloading (e.g., overloading |, >>, >>=) has been used successfully to provide expressive syntax for "embedded" query domain-specific languages (DSLs). Alternatively, list comprehensions can be constructed using the erase-remove idiom to select elements in a container and the STL algorithm for_each to transform them. There is some effort in providing C++ with list-comprehension constructs/syntax similar to the set-builder notation. LEESA provides >> for XPath's / separator.
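The filter/sort/map shape of a FLWOR statement can be sketched as a Python comprehension pipeline; the book records and the 300-page threshold below are invented purely for illustration:

```python
# Hypothetical book records standing in for the //book sequence.
books = [
    {"title": "Data on the Web", "pages": 250},
    {"title": "TCP/IP Illustrated", "pages": 400},
    {"title": "Advanced Programming", "pages": 150},
]

short_titles = [
    b["title"]                                   # "return": map each record
    for b in sorted(
        (b for b in books if b["pages"] < 300),  # "where": filter
        key=lambda b: b["title"],                # "order by": sort
    )
]
```

Each clause of the FLWOR statement maps onto one stage of the pipeline: a generator expression filters, `sorted` orders, and the output expression transforms.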
XPath's // separator, which "skips" intermediate nodes in the tree, is implemented in LEESA using what's known as strategic programming. In the example below, catalog_, book_, author_, and name_ are instances of the catalog, book, author, and name classes, respectively.
https://en.wikipedia.org/wiki/List_comprehension
Map is an idiom in parallel computing where a simple operation is applied to all elements of a sequence, potentially in parallel.[1] It is used to solve embarrassingly parallel problems: those problems that can be decomposed into independent subtasks, requiring no communication/synchronization between the subtasks except a join or barrier at the end. When applying the map pattern, one formulates an elemental function that captures the operation to be performed on a data item that represents a part of the problem, then applies this elemental function in one or more threads of execution, hyperthreads, SIMD lanes or on multiple computers. Some parallel programming systems, such as OpenMP and Cilk, have language support for the map pattern in the form of a parallel for loop;[2] languages such as OpenCL and CUDA support elemental functions (as "kernels") at the language level. The map pattern is typically combined with other parallel design patterns. For example, map combined with category reduction gives the MapReduce pattern.[3]: 106–107
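A minimal sketch of the pattern in Python, using the standard-library `concurrent.futures` executor: the elemental function is pure, each application is independent, and leaving the `with` block acts as the final join:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Elemental function: pure, no communication with other subtasks.
    return x * x

data = [1, 2, 3, 4, 5]
with ThreadPoolExecutor(max_workers=4) as pool:
    # pool.map applies the elemental function across the sequence;
    # exiting the "with" block waits for all subtasks (the join).
    results = list(pool.map(square, data))
```

`Executor.map` preserves input order in its output, so the result reads exactly like a sequential `map` even though applications may run concurrently.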
https://en.wikipedia.org/wiki/Map_(parallel_pattern)
In computer programming, a guard is a Boolean expression that must evaluate to true if the execution of the program is to continue in the branch in question. Regardless of which programming language is used, a guard clause, guard code, or guard statement is a check of integrity preconditions used to avoid errors during execution. The term guard clause is a software design pattern attributed to Kent Beck, who codified many often-unnamed coding practices into named software design patterns; the practice of using this technique dates back to at least the early 1960s. The guard clause is most commonly added at the beginning of a procedure and is said to "guard" the rest of the procedure by handling edge cases up front. A typical example is checking that a reference about to be processed is not null, which avoids null-pointer failures. Other uses include using a Boolean field for idempotence (so subsequent calls are nops), as in the dispose pattern. The guard provides an early exit from a subroutine, and is a commonly used deviation from structured programming, removing one level of nesting and resulting in flatter code:[1] replacing if guard { ... } with if not guard: return; .... Using guard clauses can be a refactoring technique to improve code. In general, less nesting is good, as it simplifies the code and reduces cognitive burden. For example, in Python: Another example, written in C: The term is used with specific meaning in the APL, Haskell, Clean, Erlang, occam, Promela, OCaml, Swift,[2] Python (from version 3.10), and Scala programming languages.[citation needed] In Mathematica, guards are called constraints. Guards are the fundamental concept in Guarded Command Language, a language in formal methods. Guards can be used to augment pattern matching with the possibility to skip a pattern even if the structure matches. Boolean expressions in conditional statements usually also fit this definition of a guard, although they are called conditions.
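A minimal guard-clause sketch in Python; the record shape (`{"name": ...}`) is hypothetical, chosen only to show the early-exit structure:

```python
def normalized_name(user):
    # Guard clauses: reject edge cases up front with early returns,
    # keeping the main logic one nesting level flatter.
    if user is None:
        return None
    name = user.get("name")
    if not name:
        return None
    # Main logic runs only once the preconditions hold.
    return name.strip().lower()
```

Without the guards, the same logic would need two nested `if` blocks around the final expression.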
In the following Haskell example, the guards occur between each pair of "|" and "=":

f x | x > 0     = 1
    | otherwise = 0

This is similar to the respective mathematical notation:

f(x) = 1 if x > 0, and 0 otherwise.

In this case the guards are in the "if" and "otherwise" clauses. If there are several parallel guards, they are normally tried in a top-to-bottom order, and the branch of the first to pass is chosen. Guards in a list of cases are typically parallel. However, in Haskell list comprehensions the guards are in series, and if any of them fails, the list element is not produced. This would be the same as combining the separate guards with logical AND, except that there can be other list comprehension clauses among the guards. A simple conditional expression, already present in CPL in 1963, has a guard on the first sub-expression, and another sub-expression to use in case the first one cannot be used. Some common ways to write this: If the second sub-expression can be a further simple conditional expression, we can give more alternatives to try before the last fall-through: In 1966, ISWIM had a form of conditional expression without an obligatory fall-through case, thus separating the guard from the concept of choosing either-or. In the case of ISWIM, if none of the alternatives could be used, the value was to be undefined, which was defined to never compute into a value. KRC, a "miniaturized version"[3] of SASL (1976), was one of the first programming languages to use the term "guard".
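Both behaviors described above (parallel guards tried top to bottom, and series guards in a comprehension) can be mirrored in Python:

```python
def f(x):
    # Parallel guards, tried top to bottom like Haskell's "|" clauses:
    # the first condition to pass selects the branch.
    if x > 0:
        return 1
    return 0

# Guards in series inside a comprehension: every condition must hold
# for an element to be produced, as in Haskell list comprehensions.
small_even = [x for x in range(20) if x % 2 == 0 if x > 4]
```

The two `if` clauses in the comprehension behave like a logical AND: an element failing either one is simply not produced.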
Its function definitions could have several clauses, and the one to apply was chosen based on the guards that followed each clause: Use of guard clauses, and the term "guard clause", dates at least to Smalltalk practice in the 1990s, as codified by Kent Beck.[1] In 1996, Dyalog APL adopted an alternative pure functional style in which the guard is the only control structure.[4] This example, in APL, computes the parity of the input number: In addition to a guard attached to a pattern, pattern guard can refer to the use of pattern matching in the context of a guard. In effect, a match of the pattern is taken to mean pass. This meaning was introduced in a proposal for Haskell by Simon Peyton Jones titled A new view of guards in April 1997 and was used in the implementation of the proposal. The feature provides the ability to use patterns in the guards of a pattern. An example in extended Haskell: This would read: "Clunky, for an environment and two variables, in case the lookups of the variables from the environment produce values, is the sum of the values. ..." As in list comprehensions, the guards are in series, and if any of them fails the branch is not taken.
https://en.wikipedia.org/wiki/Guard_(computing)
In computing, an effect system is a formal system that describes the computational effects of computer programs, such as side effects. An effect system can be used to provide a compile-time check of the possible effects of the program. The effect system extends the notion of type to have an "effect" component, which comprises an effect kind and a region. The effect kind describes what is being done, and the region describes with what (parameters) it is being done. An effect system is typically an extension of a type system. The term "type and effect system" is sometimes used in this case. Often, the type of a value is denoted together with its effect as type ! effect, where both the type component and the effect component mention certain regions (for example, the type of a mutable memory cell is parameterized by the label of the memory region in which the cell resides). The term "algebraic effect" follows from the type system. Effect systems may be used to prove the external purity of certain internally impure definitions: for example, if a function internally allocates and modifies a region of memory, but the function's type does not mention the region, then the corresponding effect may be erased from the function's effect.[1] Some examples of the behaviors that can be described by effect systems include: From a programmer's point of view, effects are useful because they allow the implementation (how) of specific actions to be separated from the specification of what actions to perform. For example, an "ask name" effect can read from the console, pop up a window, or just return a default value. The control flow can be described as a blend of yield (in that execution continues) and throw (in that an unhandled effect propagates down until handled).[2]
https://en.wikipedia.org/wiki/Effect_system
In computing, a unique type guarantees that an object is used in a single-threaded way, with at most a single reference to it. If a value has a unique type, a function applied to it can be optimized to update the value in-place in the object code. Such in-place updates improve the efficiency of functional languages while maintaining referential transparency. Unique types can also be used to integrate functional and imperative programming. Uniqueness typing is best explained using an example. Consider a function readLine that reads the next line of text from a given file: Now doImperativeReadLineSystemCall reads the next line from the file using an OS-level system call which has the side effect of changing the current position in the file. But this violates referential transparency, because calling it multiple times with the same argument will return different results each time as the current position in the file gets moved. This in turn makes readLine violate referential transparency, because it calls doImperativeReadLineSystemCall. However, using uniqueness typing, we can construct a new version of readLine that is referentially transparent even though it's built on top of a function that's not referentially transparent: The unique declaration specifies that the type of f is unique; that is to say, f may never be referred to again by the caller of readLine2 after readLine2 returns, and this restriction is enforced by the type system. And since readLine2 does not return f itself but rather a new, different file object differentF, it is impossible for readLine2 to be called with f as an argument ever again, thus preserving referential transparency while allowing for side effects to occur. Uniqueness types are implemented in functional programming languages such as Clean, Mercury, SAC and Idris. They are sometimes used for doing I/O operations in functional languages in lieu of monads.
A compiler extension has been developed for the Scala programming language which uses annotations to handle uniqueness in the context of message passing between actors.[1] A unique type is very similar to a linear type, to the point that the terms are often used interchangeably, but there is in fact a distinction: actual linear typing allows a non-linear value to be typecast to a linear form, while still retaining multiple references to it. Uniqueness guarantees that a value has no other references to it, while linearity guarantees that no more references can be made to a value.[2] Linearity and uniqueness can be seen as particularly distinct when in relation to non-linearity and non-uniqueness modalities, but can then also be unified in a single type system.[3]
https://en.wikipedia.org/wiki/Uniqueness_type
In computer science, array is a data type that represents a collection of elements (values or variables), each selected by one or more indices (identifying keys) that can be computed at run time during program execution. Such a collection is usually called an array variable or array value.[1] By analogy with the mathematical concepts vector and matrix, array types with one and two indices are often called vector type and matrix type, respectively. More generally, a multidimensional array type can be called a tensor type, by analogy with the mathematical concept, tensor.[2] Language support for array types may include certain built-in array data types, some syntactic constructions (array type constructors) that the programmer may use to define such types and declare array variables, and special notation for indexing array elements.[1] For example, in the Pascal programming language, the declaration type MyTable = array [1..4, 1..2] of integer defines a new array data type called MyTable. The declaration var A: MyTable then defines a variable A of that type, which is an aggregate of eight elements, each being an integer variable identified by two indices. In the Pascal program, those elements are denoted A[1,1], A[1,2], A[2,1], ..., A[4,2].[3] Special array types are often defined by the language's standard libraries. Dynamic lists are also more common and easier to implement than dynamic arrays. Array types are distinguished from record types mainly because they allow the element indices to be computed at run time, as in the Pascal assignment A[I,J] := A[N-I,2*J]. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array variable.
In more theoretical contexts, especially in type theory and in the description of abstract algorithms, the terms "array" and "array type" sometimes refer to an abstract data type (ADT), also called an abstract array, or may refer to an associative array, a mathematical model with the basic operations and behavior of a typical array type in most languages – basically, a collection of elements that are selected by indices computed at run-time. Depending on the language, array types may overlap (or be identified with) other data types that describe aggregates of values, such as lists and strings. Array types are often implemented by array data structures, but sometimes by other means, such as hash tables, linked lists, or search trees. Heinz Rutishauser's programming language Superplan (1949–1951) included multi-dimensional arrays. However, although Rutishauser described how a compiler for his language should be built, he did not implement one. Assembly languages and low-level languages like BCPL[4] generally have no syntactic support for arrays. Because of the importance of array structures for efficient computation, the earliest high-level programming languages, including FORTRAN (1957), COBOL (1960), and Algol 60 (1960), provided support for multi-dimensional arrays. An array data structure can be mathematically modeled as an abstract data structure (an abstract array) with two operations:

get(A, I): obtain the value stored in array state A at index tuple I;
set(A, I, V): yield the array state obtained from A by setting the value at index tuple I to V.

These operations are required to satisfy the axioms[5]

get(set(A, I, V), I) = V
get(set(A, I, V), J) = get(A, J)  whenever I ≠ J

for any array state A, any value V, and any tuples I, J for which the operations are defined. The first axiom means that each element behaves like a variable. The second axiom means that elements with distinct indices behave as disjoint variables, so that storing a value in one element does not affect the value of any other element. These axioms do not place any constraints on the set of valid index tuples I, therefore this abstract model can be used for triangular matrices and other oddly-shaped arrays.
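The abstract-array model above can be sketched directly in Python, with a dictionary as the backing store so that index tuples are unconstrained:

```python
class AbstractArray:
    """A direct model of the abstract-array ADT: just get and set,
    with arbitrary index tuples (so oddly-shaped arrays work too)."""

    def __init__(self):
        self._cells = {}

    def get(self, i):
        # get(A, I): the value stored at index tuple I.
        return self._cells[i]

    def set(self, i, v):
        # set(A, I, V): store V at index tuple I (in-place here,
        # rather than returning a new state as in the pure model).
        self._cells[i] = v
```

The two axioms then hold by construction: reading an index returns the last value written to it, and writing one index leaves every other index untouched.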
In order to effectively implement variables of such types as array structures (with indexing done by pointer arithmetic), many languages restrict the indices to integer data types[6][7] (or other types that can be interpreted as integers, such as bytes and enumerated types), and require that all elements have the same data type and storage size. Most of those languages also restrict each index to a finite interval of integers that remains fixed throughout the lifetime of the array variable. In some compiled languages, in fact, the index ranges may have to be known at compile time. On the other hand, some programming languages provide more liberal array types that allow indexing by arbitrary values, such as floating-point numbers, strings, objects, references, etc. Such index values cannot be restricted to an interval, much less a fixed interval, so these languages usually allow arbitrary new elements to be created at any time. This choice precludes the implementation of array types as array data structures. That is, those languages use array-like syntax to implement a more general associative array semantics, and must therefore be implemented by a hash table or some other search data structure. The number of indices needed to specify an element is called the dimension, dimensionality, or rank of the array type. (This nomenclature conflicts with the concept of dimension in linear algebra, which expresses the shape of a matrix. Thus, an array of numbers with 5 rows and 4 columns, hence 20 elements, is said to have dimension 2 in computing contexts, but represents a matrix that is said to be 4×5-dimensional. Also, the computer science meaning of "rank" conflicts with the notion of tensor rank, which is a generalization of the linear algebra concept of rank of a matrix.) Many languages support only one-dimensional arrays. In those languages, a multi-dimensional array is typically represented by an Iliffe vector, a one-dimensional array of references to arrays of one dimension less.
A two-dimensional array, in particular, would be implemented as a vector of pointers to its rows. Thus an element in row i and column j of an array A would be accessed by double indexing (A[i][j] in typical notation). This way of emulating multi-dimensional arrays allows the creation of jagged arrays, where each row may have a different size – or, in general, where the valid range of each index depends on the values of all preceding indices. This representation for multi-dimensional arrays is quite prevalent in C and C++ software. However, C and C++ will use a linear indexing formula for multi-dimensional arrays that are declared with compile-time constant size, e.g. by int A[10][20] or int A[m][n], instead of the traditional int **A.[8] The C99 standard introduced variable-length array types that let the programmer define array types with dimensions computed at run time. A dynamic 4D array can be constructed using a pointer to a 4D array, e.g. int (*arr)[t][u][v][w] = malloc(sizeof *arr);. The individual elements are accessed by first dereferencing the array pointer, followed by indexing, e.g. (*arr)[i][j][k][l]. Alternatively, n-dimensional arrays can be declared as pointers to their first element, which is an (n−1)-dimensional array, e.g. int (*arr)[u][v][w] = malloc(t * sizeof *arr);, and accessed using more idiomatic syntax, e.g. arr[i][j][k][l]. Most programming languages that support arrays support the store and select operations, and have special syntax for indexing. Early languages used parentheses, e.g. A(i,j), as in FORTRAN; others chose square brackets, e.g. A[i,j] or A[i][j], as in Algol 60 and Pascal (to distinguish from the use of parentheses for function calls). Array data types are most often implemented as array structures: with the indices restricted to integer (or totally ordered) values, index ranges fixed at array creation time, and multilinear element addressing. This was the case in most "third generation" languages, and is still the case of most systems programming languages such as Ada, C, and C++.
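The linear (row-major) indexing formula that C applies to constant-size arrays can be demonstrated in a few lines of Python over simulated contiguous storage:

```python
ROWS, COLS = 3, 4

def flat_index(i, j, cols=COLS):
    # Row-major linear indexing, as used by C for arrays declared
    # with compile-time constant size: element (i, j) lives at
    # offset i * cols + j in contiguous storage.
    return i * cols + j

# Simulated contiguous storage for a 3x4 array holding 0..11,
# so that storage[flat_index(i, j)] equals the row-major position.
storage = list(range(ROWS * COLS))
```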
In some languages, however, array data types have the semantics of associative arrays, with indices of arbitrary type and dynamic element creation. This is the case in some scripting languages such as Awk and Lua, and of some array types provided by standard C++ libraries. Some languages (like Pascal and Modula) perform bounds checking on every access, raising an exception or aborting the program when any index is out of its valid range. Compilers may allow these checks to be turned off to trade safety for speed. Other languages (like FORTRAN and C) trust the programmer and perform no checks. Good compilers may also analyze the program to determine the range of possible values that the index may have, and this analysis may lead to bounds-checking elimination. Some languages, such as C, provide only zero-based array types, for which the minimum valid value for any index is 0.[9] This choice is convenient for array implementation and address computations. With a language such as C, a pointer to the interior of any array can be defined that will symbolically act as a pseudo-array that accommodates negative indices. This works only because C does not check an index against bounds when used. Other languages provide only one-based array types, where each index starts at 1; this is the traditional convention in mathematics for matrices and mathematical sequences. A few languages, such as Pascal and Lua, support n-based array types, whose minimum legal indices are chosen by the programmer. The relative merits of each choice have been the subject of heated debate. Zero-based indexing can avoid off-by-one or fencepost errors.[10] The relation between numbers appearing in an array declaration and the index of that array's last element also varies by language. In many languages (such as C), one should specify the number of elements contained in the array; whereas in others (such as Pascal and Visual Basic .NET) one should specify the numeric value of the index of the last element.
Needless to say, this distinction is immaterial in languages where the indices start at 1, such as Lua. Some programming languages support array programming, where operations and functions defined for certain data types are implicitly extended to arrays of elements of those types. Thus one can write A+B to add corresponding elements of two arrays A and B. Usually these languages provide both the element-by-element multiplication and the standard matrix product of linear algebra, and which of these is represented by the * operator varies by language. Languages providing array programming capabilities have proliferated since the innovations in this area of APL. These are core capabilities of domain-specific languages such as GAUSS, IDL, Matlab, and Mathematica. They are a core facility in newer languages, such as Julia and recent versions of Fortran. These capabilities are also provided via standard extension libraries for other general-purpose programming languages (such as the widely used NumPy library for Python). Many languages provide a built-in string data type, with specialized notation ("string literals") to build values of that type. In some languages (such as C), a string is just an array of characters, or is handled in much the same way. Other languages, like Pascal, may provide vastly different operations for strings and arrays. Some programming languages provide operations that return the size (number of elements) of a vector or, more generally, the range of each index of an array. In C and C++, arrays do not support the size function, so programmers often have to declare a separate variable to hold the size, and pass it to procedures as a separate parameter. Elements of a newly created array may have undefined values (as in C), or may be defined to have a specific "default" value such as 0 or a null pointer (as in Java). In C++, a std::vector object supports the store, select, and append operations with the performance characteristics discussed above.
Vectors can be queried for their size and can be resized. Slower operations like inserting an element in the middle are also supported. An array slicing operation takes a subset of the elements of an array-typed entity (value or variable) and then assembles them as another array-typed entity, possibly with other indices. If array types are implemented as array structures, many useful slicing operations (such as selecting a sub-array, swapping indices, or reversing the direction of the indices) can be performed very efficiently by manipulating the dope vector of the structure. The possible slicings depend on the implementation details: for example, Fortran allows slicing off one column of a matrix variable, but not a row, and treating it as a vector. On the other hand, other slicing operations are possible when array types are implemented in other ways. Some languages allow dynamic arrays (also called resizable, growable, or extensible): array variables whose index ranges may be expanded at any time after creation, without changing the values of their current elements. For one-dimensional arrays, this facility may be provided as an operation append(A, x) that increases the size of the array A by one and then sets the value of the last element to x. Other array types (such as Pascal strings) provide a concatenation operator, which can be used together with slicing to achieve that effect and more. In some languages, assigning a value to an element of an array automatically extends the array, if necessary, to include that element. In other array types, a slice can be replaced by an array of a different size, with subsequent elements being renumbered accordingly – as in Python's list assignment A[5:5] = [10,20,30], which inserts three new elements (10, 20, and 30) before element A[5]. Resizable arrays are conceptually similar to lists, and the two concepts are synonymous in some languages.
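The Python slice-assignment example mentioned above can be run directly; assigning to the empty slice A[5:5] inserts the new elements and renumbers everything after them:

```python
A = [0, 1, 2, 3, 4, 5, 6]

# Assigning to an empty slice inserts the three new elements
# before element A[5]; subsequent elements are renumbered.
A[5:5] = [10, 20, 30]
```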
An extensible array can be implemented as a fixed-size array, with a counter that records how many elements are actually in use. The append operation merely increments the counter, until the whole array is used, at which point the append operation may be defined to fail. This is an implementation of a dynamic array with a fixed capacity, as in the string type of Pascal. Alternatively, the append operation may re-allocate the underlying array with a larger size, and copy the old elements to the new area.
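The fixed-capacity variant can be sketched in Python as a class holding a backing array and an in-use counter; the class name and failure behavior are illustrative choices, not a standard API:

```python
class FixedCapacityArray:
    """Extensible array over fixed-size backing storage: a counter
    records how many slots are in use; append fails when full."""

    def __init__(self, capacity):
        self._data = [None] * capacity
        self._count = 0

    def append(self, x):
        if self._count == len(self._data):
            # The whole backing array is used: append is defined to fail.
            raise OverflowError("capacity exhausted")
        self._data[self._count] = x
        self._count += 1

    def __len__(self):
        return self._count
```

A re-allocating variant would instead replace the backing list with a larger one and copy the old elements before storing the new value.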
https://en.wikipedia.org/wiki/Array_data_type
In computer science, a queue is a collection of entities that are maintained in a sequence and can be modified by the addition of entities at one end of the sequence and the removal of entities from the other end of the sequence. By convention, the end of the sequence at which elements are added is called the back, tail, or rear of the queue, and the end at which elements are removed is called the head or front of the queue, analogously to the words used when people line up to wait for goods or services. The operation of adding an element to the rear of the queue is known as enqueue, and the operation of removing an element from the front is known as dequeue. Other operations may also be allowed, often including a peek or front operation that returns the value of the next element to be dequeued without dequeuing it. The operations of a queue make it a first-in-first-out (FIFO) data structure. In a FIFO data structure, the first element added to the queue will be the first one to be removed. This is equivalent to the requirement that once a new element is added, all elements that were added before have to be removed before the new element can be removed. A queue is an example of a linear data structure, or more abstractly a sequential collection. Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure or in object-oriented languages as classes. A queue has two ends: the rear, which is the only position at which the enqueue operation may occur, and the front, which is the only position at which the dequeue operation may occur. A queue may be implemented as circular buffers and linked lists, or by using both the stack pointer and the base pointer. Queues provide services in computer science, transport, and operations research where various entities such as data, objects, persons, or events are stored and held to be processed later. In these contexts, the queue performs the function of a buffer.
Another usage of queues is in the implementation of breadth-first search. Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many elements are already contained, a new element can always be added. It can also be empty, at which point removing an element will be impossible until a new element has been added again. Fixed-length arrays are limited in capacity, but it is not true that items need to be copied towards the head of the queue. The simple trick of turning the array into a closed circle and letting the head and tail drift around endlessly in that circle makes it unnecessary to ever move items stored in the array. If n is the size of the array, then computing indices modulo n will turn the array into a circle. This is still the conceptually simplest way to construct a queue in a high-level language, though it does slow things down a little, because the array indices must be compared to zero and the array size, which is comparable to the time taken to check whether an array index is out of bounds (a check some languages perform anyway). Even so, this will certainly be the method of choice for a quick and dirty implementation, or for any high-level language that does not have pointer syntax. The array size must be declared ahead of time, but some implementations simply double the declared array size when overflow occurs. Most modern languages with objects or pointers can implement, or come with libraries for, dynamic lists. Such data structures may not have a fixed capacity limit besides memory constraints. Queue overflow results from trying to add an element onto a full queue, and queue underflow happens when trying to remove an element from an empty queue. A bounded queue is a queue limited to a fixed number of items.[1] There are several efficient implementations of FIFO queues. An efficient implementation is one that can perform the operations – en-queuing and de-queuing – in O(1) time.
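The modulo-n circular-array trick can be sketched as a small Python class (the class and method names are illustrative); note that enqueue and dequeue only move indices, never the stored items:

```python
class CircularQueue:
    """Bounded FIFO queue over a fixed array; head and tail indices
    are computed modulo the array size, so items are never moved."""

    def __init__(self, n):
        self._buf = [None] * n
        self._head = 0      # index of the next element to dequeue
        self._size = 0

    def enqueue(self, x):
        if self._size == len(self._buf):
            raise OverflowError("queue overflow")
        tail = (self._head + self._size) % len(self._buf)
        self._buf[tail] = x
        self._size += 1

    def dequeue(self):
        if self._size == 0:
            raise IndexError("queue underflow")
        x = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return x
```

After a dequeue frees a slot, a later enqueue wraps around and reuses it, so the head and tail "drift around the circle" indefinitely.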
Queues may be implemented as a separate data type, or may be considered a special case of a double-ended queue (deque) and not implemented separately. For example, Perl and Ruby allow pushing and popping an array from both ends, so one can use push and shift functions to enqueue and dequeue a list (or, in reverse, one can use unshift and pop),[2] although in some cases these operations are not efficient. C++'s Standard Template Library provides a "queue" templated class which is restricted to only push/pop operations. Since J2SE 5.0, Java's library contains a Queue interface that specifies queue operations; implementing classes include LinkedList and (since J2SE 1.6) ArrayDeque. PHP has an SplQueue class, and there are third-party libraries like beanstalk'd and Gearman. A simple queue implemented in JavaScript: Queues can also be implemented as a purely functional data structure.[3] There are two implementations. The first one only achieves O(1) per operation on average. That is, the amortized time is O(1), but individual operations can take O(n), where n is the number of elements in the queue. The second implementation is called a real-time queue[4] and it allows the queue to be persistent, with operations in O(1) worst-case time. It is a more complex implementation and requires lazy lists with memoization. This queue's data is stored in two singly-linked lists named f and r. The list f holds the front part of the queue. The list r holds the remaining elements (a.k.a., the rear of the queue) in reverse order. It is easy to insert into the front of the queue by adding a node at the head of f. And, if r is not empty, it is easy to remove from the end of the queue by removing the node at the head of r. When r is empty, the list f is reversed and assigned to r and then the head of r is removed.
The insert ("enqueue") always takes O(1) time. The removal ("dequeue") takes O(1) when the list f is not empty. When f is empty, the reverse takes O(n), where n is the number of elements in r. But we can say it is O(1) amortized time, because every element in r had to be inserted, and we can assign a constant cost for each element in the reverse to the moment when it was inserted. The real-time queue achieves O(1) time for all operations, without amortization. This discussion will be technical, so recall that, for l a list, |l| denotes its length, that NIL represents an empty list, and that CONS(h, t) represents the list whose head is h and whose tail is t. The data structure used to implement our queues consists of three singly linked lists (f, r, s), where f is the front of the queue and r is the rear of the queue in reverse order. The invariant of the structure is that s is the rear of f without its |r| first elements, that is, |s| = |f| − |r|. The tail of the queue (CONS(x, f), r, s) is then almost (f, r, s), and inserting an element x into (f, r, s) is almost (f, CONS(x, r), s). It is said "almost" because, in both of those results, |s| = |f| − |r| + 1. An auxiliary function aux must then be called for the invariant to be satisfied. Two cases must be considered, depending on whether s is the empty list, in which case |r| = |f| + 1, or not. 
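The amortized two-list queue, in which f holds the front and new elements are consed onto the reversed rear r, can be sketched in Python (an illustrative translation using nested tuples as cons cells; not from any library):

```python
# A list is either None (NIL) or a pair (head, tail) — a cons cell.
EMPTY = (None, None)   # queue = (f, r): front list, reversed rear list

def enqueue(queue, x):
    f, r = queue
    return (f, (x, r))            # cons onto the head of r: O(1)

def dequeue(queue):
    f, r = queue
    if f is None:                 # front exhausted: reverse r into f, O(n)
        while r is not None:
            head, r = r
            f = (head, f)
    if f is None:
        raise IndexError("queue underflow")
    head, f = f                   # usual case: pop the head of f in O(1)
    return head, (f, r)

q = EMPTY
for x in (1, 2, 3):
    q = enqueue(q, x)
x, q = dequeue(q)                 # forces one O(n) reversal of r
print(x)                          # 1
```

The occasional O(n) reversal is paid for by the n earlier O(1) insertions, which is exactly the amortized argument given above.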
The formal definition is aux(f, r, CONS(_, s)) = (f, r, s) and aux(f, r, NIL) = (f′, NIL, f′), where f′ is f followed by r reversed. Let us call reverse(f, r) the function which returns f followed by r reversed. Let us furthermore assume that |r| = |f| + 1, since this is the case when the function is called. More precisely, we define a lazy function rotate(f, r, a) which takes as input three lists such that |r| = |f| + 1, and returns the concatenation of f, of r reversed, and of a. Then reverse(f, r) = rotate(f, r, NIL). The inductive definition of rotate is rotate(NIL, CONS(y, NIL), a) = CONS(y, a) and rotate(CONS(x, f), CONS(y, r), a) = CONS(x, rotate(f, r, CONS(y, a))). Its running time is O(|r|), but, since lazy evaluation is used, the computation is delayed until the results are forced by the computation. The list s in the data structure serves two purposes. First, it serves as a counter for |f| − |r|: indeed, |f| = |r| if and only if s is the empty list. This counter allows us to ensure that the rear is never longer than the front list. Second, using s, which is a tail of f, forces the computation of a part of the (lazy) list f during each tail and insert operation. Therefore, when |f| = |r|, the list f is totally forced. 
If that were not the case, the internal representation of f could be some append of append of ... of append, and forcing would no longer be a constant-time operation.
https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
In computer science, a set is an abstract data type that can store unique values, without any particular order. It is a computer implementation of the mathematical concept of a finite set. Unlike most other collection types, rather than retrieving a specific element from a set, one typically tests a value for membership in a set. Some set data structures are designed for static or frozen sets that do not change after they are constructed. Static sets allow only query operations on their elements—such as checking whether a given value is in the set, or enumerating the values in some arbitrary order. Other variants, called dynamic or mutable sets, also allow the insertion and deletion of elements. A multiset is a generalization of a set in which an element can appear multiple times. In type theory, sets are generally identified with their indicator function (characteristic function): accordingly, a set of values of type A may be denoted by 2^A or 𝒫(A). (Subtypes and subsets may be modeled by refinement types, and quotient sets may be replaced by setoids.) The characteristic function F of a set S is defined by F(x) = 1 if x ∈ S, and F(x) = 0 otherwise. In theory, many other abstract data structures can be viewed as set structures with additional operations and/or additional axioms imposed on the standard operations. For example, an abstract heap can be viewed as a set structure with a min(S) operation that returns the element of smallest value. One may also define the operations of the algebra of sets (union, intersection, difference, and so on). Typical operations provided by a static set structure S are queries such as membership testing, size, and enumeration; dynamic set structures typically add insertion and deletion. Some set structures may allow only some of these operations. The cost of each operation will depend on the implementation, and possibly also on the particular values stored in the set and the order in which they are inserted. 
There are many other operations that can (in principle) be defined in terms of the above, such as the set-algebraic operations, and further operations can be defined for sets with elements of a special type. Sets can be implemented using various data structures, which provide different time and space trade-offs for various operations. Some implementations are designed to improve the efficiency of very specialized operations, such as nearest or union. Implementations described as "general use" typically strive to optimize the element_of, add, and delete operations. A simple implementation is to use a list, ignoring the order of the elements and taking care to avoid repeated values. This is simple but inefficient, as operations like set membership or element deletion are O(n), since they require scanning the entire list.[b] Sets are often instead implemented using more efficient data structures, particularly various flavors of trees, tries, or hash tables. As sets can be interpreted as a kind of map (by the indicator function), sets are commonly implemented in the same way as (partial) maps (associative arrays)—in this case, the value of each key-value pair has the unit type or is a sentinel value (like 1)—namely, a self-balancing binary search tree for sorted sets (which has O(log n) for most operations), or a hash table for unsorted sets (which has O(1) average-case, but O(n) worst-case, for most operations). A sorted linear hash table[8] may be used to provide deterministically ordered sets. Further, in languages that support maps but not sets, sets can be implemented in terms of maps. For example, a common programming idiom in Perl converts an array to a hash whose values are all the sentinel value 1, for use as a set. Other popular methods include arrays. In particular, a subset of the integers 1..n can be implemented efficiently as an n-bit bit array, which also supports very efficient union and intersection operations. 
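In Python, for example, the map-based representation corresponds to using a dict with a dummy value, although the built-in set type already implements the idea directly:

```python
# Emulating a set with an associative array (dict): the keys are the
# elements, the values are a dummy sentinel that is ignored.
elements = ["red", "green", "blue", "green"]
as_map = {x: 1 for x in elements}    # duplicates collapse onto one key

print("green" in as_map)    # True — membership is an average-case O(1) lookup
print("pink" in as_map)     # False

# The built-in set type provides the same operations without the dummy values:
s = set(elements)
print(sorted(s))            # ['blue', 'green', 'red']
```

CPython's set is itself hash-table based, so both forms give O(1) average-case membership with O(n) worst case, as described above.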
A Bloom map implements a set probabilistically, using a very compact representation but risking a small chance of false positives on queries. The Boolean set operations can be implemented in terms of more elementary operations (pop, clear, and add), but specialized algorithms may yield lower asymptotic time bounds. If sets are implemented as sorted lists, for example, the naive algorithm for union(S, T) will take time proportional to the length m of S times the length n of T, whereas a variant of the list merging algorithm will do the job in time proportional to m + n. Moreover, there are specialized set data structures (such as the union-find data structure) that are optimized for one or more of these operations, at the expense of others. One of the earliest languages to support sets was Pascal; many languages now include them, whether in the core language or in a standard library. As noted in the previous section, in languages which do not directly support sets but do support associative arrays, sets can be emulated using associative arrays, by using the elements as keys and a dummy value as the values, which are ignored. A generalization of the notion of a set is that of a multiset or bag, which is similar to a set but allows repeated ("equal") values (duplicates). This is used in two distinct senses: either equal values are considered identical and are simply counted, or equal values are considered equivalent and are stored as distinct items. For example, given a list of people (by name) and ages (in years), one could construct a multiset of ages, which simply counts the number of people of a given age. Alternatively, one can construct a multiset of people, where two people are considered equivalent if their ages are the same (but may be different people and have different names); in this case each pair (name, age) must be stored, and selecting on a given age gives all the people of that age. 
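The m + n merge-based union of two sorted lists can be sketched as follows (plain Python, for illustration):

```python
def sorted_union(s, t):
    """Union of two sorted, duplicate-free lists in O(m + n) time."""
    out, i, j = [], 0, 0
    while i < len(s) and j < len(t):
        if s[i] < t[j]:
            out.append(s[i]); i += 1
        elif s[i] > t[j]:
            out.append(t[j]); j += 1
        else:                        # element present in both: keep one copy
            out.append(s[i]); i += 1; j += 1
    out.extend(s[i:])                # at most one of these tails is non-empty
    out.extend(t[j:])
    return out

print(sorted_union([1, 3, 5], [2, 3, 6]))   # [1, 2, 3, 5, 6]
```

Each comparison advances at least one index, so the loop runs at most m + n times, in contrast with the naive O(mn) nested scan.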
Formally, it is possible for objects in computer science to be considered "equal" under one equivalence relation but still distinct under another. Some multiset implementations will store distinct equal objects as separate items in the data structure, while others will collapse them down to one version (the first one encountered) and keep a positive integer count of the multiplicity of the element. As with sets, multisets can naturally be implemented using hash tables or trees, which yield different performance characteristics. The set of all bags over type T is given by the expression bag T. If, by multiset, one considers equal items identical and simply counts them, then a multiset can be interpreted as a function from the input domain to the non-negative integers (natural numbers), generalizing the identification of a set with its indicator function. In some cases a multiset in this counting sense may be generalized to allow negative values, as in Python. Where a multiset data structure is not available, a workaround is to use a regular set but override the equality predicate of its items to always return "not equal" on distinct objects (however, this will still not be able to store multiple occurrences of the same object), or to use an associative array mapping the values to their integer multiplicities (this will not be able to distinguish between equal elements at all). Typical operations on bags include adding an element, counting the occurrences of an element, and removing one occurrence of an element. In relational databases, a table can be a (mathematical) set or a multiset, depending on the presence of unicity constraints on some columns (which turns them into a candidate key). SQL allows the selection of rows from a relational table: this operation will in general yield a multiset, unless the keyword DISTINCT is used to force the rows to be all different, or the selection includes the primary (or a candidate) key. 
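Python's collections.Counter is a counting multiset in exactly this sense: it maps elements to integer multiplicities, and those counts may even go negative:

```python
from collections import Counter

ages = Counter([31, 25, 31, 40, 25, 31])   # multiset of ages
print(ages[31])        # 3 — multiplicity of 31
print(ages[99])        # 0 — absent elements have count 0, as with a set

a = Counter(x=2, y=1)
b = Counter(x=1, y=3)
print((a + b)["y"])    # 4 — bag sum adds multiplicities
a.subtract(b)          # subtract() may drive counts negative
print(a["y"])          # -2
```

This is the "equal values are considered identical and simply counted" sense of multiset; the equivalence-class sense (storing each distinct item) requires keeping the items themselves, e.g. in a dict from age to a list of people.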
In ANSI SQL, the MULTISET keyword can be used to transform a subquery into a collection expression: a plain subquery is a general select that can be used as a subquery expression of another more general query, while wrapping it in MULTISET transforms the subquery into a collection expression that can be used in another query, or in an assignment to a column of an appropriate collection type.
https://en.wikipedia.org/wiki/Set_(abstract_data_type)
In computer science, a stack is an abstract data type that serves as a collection of elements with two main operations: push, which adds an element to the collection, and pop, which removes the most recently added element. Additionally, a peek operation can, without modifying the stack, return the value of the last element added. The name stack is an analogy to a set of physical items stacked one atop another, such as a stack of plates. The order in which elements are added to or removed from a stack is described as last in, first out, referred to by the acronym LIFO.[nb 1] As with a stack of physical objects, this structure makes it easy to take an item off the top of the stack, but accessing a datum deeper in the stack may require removing multiple other items first.[1] Considered as a sequential collection, a stack has one end which is the only position at which the push and pop operations may occur, the top of the stack, and is fixed at the other end, the bottom. A stack may be implemented as, for example, a singly linked list with a pointer to the top element. A stack may be implemented to have a bounded capacity. If the stack is full and does not contain enough space to accept another element, the stack is in a state of stack overflow. Stacks entered the computer science literature in 1946, when Alan Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines.[2][3] Subroutines and a two-level stack had already been implemented in Konrad Zuse's Z4 in 1945.[4][5] Klaus Samelson and Friedrich L. 
Bauer of Technical University Munich proposed the idea of a stack called Operationskeller ("operational cellar") in 1955[6][7] and filed a patent in 1957.[8][9][10][11] In March 1988, by which time Samelson was deceased, Bauer received the IEEE Computer Pioneer Award for the invention of the stack principle.[12][7] Similar concepts were independently developed by Charles Leonard Hamblin in the first half of 1954[13][7] and by Wilhelm Kämmerer with his automatisches Gedächtnis ("automatic memory") in 1958.[14][15][7] Stacks are often described using the analogy of a spring-loaded stack of plates in a cafeteria.[16][1][17] Clean plates are placed on top of the stack, pushing down any plates already there. When the top plate is removed from the stack, the one below it is elevated to become the new top plate. In many implementations, a stack has more operations than the essential "push" and "pop" operations. An example of a non-essential operation is "top of stack", or "peek", which observes the top element without removing it from the stack.[18] Since this can be broken down into a "pop" followed by a "push" to return the same data to the stack, it is not considered an essential operation. If the stack is empty, an underflow condition will occur upon execution of either the "stack top" or "pop" operations. Additionally, many implementations provide a check of whether the stack is empty and an operation that returns its size. A stack can be easily implemented either through an array or a linked list, as it is merely a special case of a list.[19] In either case, what identifies the data structure as a stack is not the implementation but the interface: the user is only allowed to pop or push items onto the array or linked list, with few other helper operations. Both implementations are described below. An array can be used to implement a (bounded) stack, as follows. 
The first element, usually at the zero offset, is the bottom, resulting in array[0] being the first element pushed onto the stack and the last element popped off. The program must keep track of the size (length) of the stack, using a variable top that records the number of items pushed so far, therefore pointing to the place in the array where the next element is to be inserted (assuming a zero-based index convention). Thus, the stack itself can be effectively implemented as a three-element structure: the array, its capacity, and the top index. The push operation adds an element and increments the top index, after checking for overflow; similarly, pop decrements the top index after checking for underflow, and returns the item that was previously the top one. Using a dynamic array, it is possible to implement a stack that can grow or shrink as much as needed. The size of the stack is simply the size of the dynamic array, which is a very efficient implementation of a stack since adding items to or removing items from the end of a dynamic array requires amortized O(1) time. Another option for implementing stacks is to use a singly linked list. A stack is then a pointer to the "head" of the list, with perhaps a counter to keep track of the size of the list. Pushing and popping items happens at the head of the list; overflow is not possible in this implementation (unless memory is exhausted). Some languages, such as Perl, LISP, JavaScript and Python, make the stack operations push and pop available on their standard list/array types. Some languages, notably those in the Forth family (including PostScript), are designed around language-defined stacks that are directly visible to and manipulated by the programmer. 
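Both implementations can be sketched in Python (illustrative only; Python lists already provide append/pop with stack semantics):

```python
class ArrayStack:
    """Bounded stack: a fixed array plus a top counter."""

    def __init__(self, capacity):
        self.items = [None] * capacity
        self.top = 0                   # number of items pushed so far

    def push(self, x):
        if self.top == len(self.items):
            raise OverflowError("stack overflow")
        self.items[self.top] = x
        self.top += 1

    def pop(self):
        if self.top == 0:
            raise IndexError("stack underflow")
        self.top -= 1
        return self.items[self.top]

class LinkedStack:
    """Stack as a singly linked list: push and pop happen at the head."""

    def __init__(self):
        self.head = None               # chain of (value, next) pairs

    def push(self, x):
        self.head = (x, self.head)

    def pop(self):
        if self.head is None:
            raise IndexError("stack underflow")
        x, self.head = self.head
        return x

s = ArrayStack(2)
s.push(1); s.push(2)
print(s.pop(), s.pop())   # 2 1

t = LinkedStack()
t.push("a"); t.push("b")
print(t.pop())            # b
```

Note that the interface is identical in both cases, which is exactly the point made above: the interface, not the implementation, makes the structure a stack.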
Common Lisp, for example, provides push and pop macros that treat lists as stacks. Several of the C++ Standard Library container types have push_back and pop_back operations with LIFO semantics; additionally, the stack template class adapts existing containers to provide a restricted API with only push/pop operations. PHP has an SplStack class. Java's library contains a Stack class that is a specialization of Vector. Some processors, such as the PDP-11, VAX, and Motorola 68000 series, have addressing modes useful for stack manipulation; on the PDP-11, for instance, a few assembly instructions suffice to push two numbers on a stack and add them, leaving the result on the stack. A common use of stacks at the architecture level is as a means of allocating and accessing memory. A typical stack is an area of computer memory with a fixed origin and a variable size. Initially the size of the stack is zero. A stack pointer (usually in the form of a processor register) points to the most recently referenced location on the stack; when the stack has a size of zero, the stack pointer points to the origin of the stack. The two operations applicable to all stacks are push and pop. There are many variations on the basic principle of stack operations. Every stack has a fixed location in memory at which it begins. As data items are added to the stack, the stack pointer is displaced to indicate the current extent of the stack, which expands away from the origin. Stack pointers may point to the origin of a stack or to a limited range of addresses above or below the origin (depending on the direction in which the stack grows); however, the stack pointer cannot cross the origin of the stack. 
In other words, if the origin of the stack is at address 1000 and the stack grows downwards (towards addresses 999, 998, and so on), the stack pointer must never be incremented beyond 1000 (to 1001 or beyond). If a pop operation on the stack causes the stack pointer to move past the origin of the stack, a stack underflow occurs. If a push operation causes the stack pointer to increment or decrement beyond the maximum extent of the stack, a stack overflow occurs. Some environments that rely heavily on stacks may provide additional operations, such as duplicating, swapping, or rotating the topmost elements. Stacks are often visualized growing from the bottom up (like real-world stacks). They may also be visualized growing from left to right, where the top is on the far right, or even growing from top to bottom. The important feature is for the bottom of the stack to be in a fixed position. A right rotate will move the first element to the third position, the second to the first, and the third to the second. A stack is usually represented in computers by a block of memory cells, with the "bottom" at a fixed location, and the stack pointer holding the address of the current "top" cell in the stack. The "top" and "bottom" nomenclature is used irrespective of whether the stack actually grows towards higher memory addresses. Pushing an item onto the stack adjusts the stack pointer by the size of the item (either decrementing or incrementing, depending on the direction in which the stack grows in memory), pointing it to the next cell, and copies the new top item to the stack area. Depending again on the exact implementation, at the end of a push operation, the stack pointer may point to the next unused location in the stack, or it may point to the topmost item in the stack. 
If the stack pointer points to the current topmost item, it will be updated before a new item is pushed onto the stack; if it points to the next available location in the stack, it will be updated after the new item is pushed onto the stack. Popping the stack is simply the inverse of pushing: the topmost item in the stack is removed and the stack pointer is updated, in the opposite order of that used in the push operation. Many CISC-type CPU designs, including the x86, Z80 and 6502, have a dedicated register for use as the call stack stack pointer, with dedicated call, return, push, and pop instructions that implicitly update the dedicated register, thus increasing code density. Some CISC processors, like the PDP-11 and the 68000, also have special addressing modes for implementation of stacks, typically with a semi-dedicated stack pointer as well (such as A7 in the 68000). In contrast, most RISC CPU designs do not have dedicated stack instructions, and therefore most, if not all, registers may be used as stack pointers as needed. Some machines use a stack for arithmetic and logical operations; operands are pushed onto the stack, and arithmetic and logical operations act on the top one or more items on the stack, popping them off the stack and pushing the result onto the stack. Machines that function in this fashion are called stack machines. A number of mainframes and minicomputers were stack machines, the most famous being the Burroughs large systems. Other examples include the CISC HP 3000 machines and the CISC machines from Tandem Computers. The x87 floating point architecture is an example of a set of registers organised as a stack where direct access to individual registers (relative to the current top) is also possible. 
Having the top-of-stack as an implicit argument allows for a small machine code footprint with a good usage of bus bandwidth and code caches, but it also prevents some types of optimizations possible on processors permitting random access to the register file for all (two or three) operands. A stack structure also makes superscalar implementations with register renaming (for speculative execution) somewhat more complex to implement, although it is still feasible, as exemplified by modern x87 implementations. Sun SPARC, AMD Am29000, and Intel i960 are all examples of architectures that use register windows within a register-stack as another strategy to avoid the use of slow main memory for function arguments and return values. There are also a number of small microprocessors that implement a stack directly in hardware, and some microcontrollers have a fixed-depth stack that is not directly accessible. Examples are the PIC microcontrollers, the Computer Cowboys MuP21, the Harris RTX line, and the Novix NC4016. At least one microcontroller family, the COP400, implements a stack either directly in hardware or in RAM via a stack pointer, depending on the device. Many stack-based microprocessors were used to implement the programming language Forth at the microcode level. Calculators that employ reverse Polish notation use a stack structure to hold values. Expressions can be represented in prefix, postfix or infix notations, and conversion from one form to another may be accomplished using a stack. Many compilers use a stack to parse syntax before translation into low-level code. Most programming languages are context-free languages, allowing them to be parsed with stack-based machines. Another important application of stacks is backtracking. An illustration of this is the simple example of finding the correct path in a maze that contains a series of points, a starting point, several paths, and a destination. 
If random paths must be chosen, then after following an incorrect path there must be a method by which to return to the beginning of that path. This can be achieved through the use of stacks: a last correct point can be pushed onto the stack, and popped from the stack in case of an incorrect path. The prototypical example of a backtracking algorithm is depth-first search, which finds all vertices of a graph that can be reached from a specified starting vertex. Other applications of backtracking involve searching through spaces that represent potential solutions to an optimization problem. Branch and bound is a technique for performing such backtracking searches without exhaustively searching all of the potential solutions in such a space. A number of programming languages are stack-oriented, meaning they define most basic operations (adding two numbers, printing a character) as taking their arguments from the stack and placing any return values back on the stack. For example, PostScript has a return stack and an operand stack, and also has a graphics state stack and a dictionary stack. Many virtual machines are also stack-oriented, including the p-code machine and the Java Virtual Machine. Almost all calling conventions—the ways in which subroutines receive their parameters and return results—use a special stack (the "call stack") to hold information about procedure/function calling and nesting, in order to switch to the context of the called function and restore to the caller function when the call finishes. The functions follow a runtime protocol between caller and callee to save arguments and the return value on the stack. Stacks are an important way of supporting nested or recursive function calls. This type of stack is used implicitly by the compiler to support CALL and RETURN statements (or their equivalents) and is not manipulated directly by the programmer. Some programming languages use the stack to store data that is local to a procedure. 
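The backtracking idea described above, with depth-first search driven by an explicit stack, can be sketched in Python (the graph is a made-up adjacency map standing in for the maze):

```python
def depth_first_search(graph, start):
    """Return the vertices reachable from start, using an explicit stack."""
    visited, stack, order = set(), [start], []
    while stack:
        v = stack.pop()               # take the most recently pushed vertex
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        for w in graph.get(v, []):    # push neighbours; popping them in LIFO
            if w not in visited:      # order makes the search depth-first
                stack.append(w)
    return order

maze = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(depth_first_search(maze, "A"))   # ['A', 'C', 'B', 'D']
```

Popping the stack is exactly the backtracking step: when a branch is exhausted, the search resumes from the last recorded choice point.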
Space for local data items is allocated from the stack when the procedure is entered and is deallocated when the procedure exits. The C programming language is typically implemented in this way. Using the same stack for both data and procedure calls has important security implications (see below) of which a programmer must be aware in order to avoid introducing serious security bugs into a program. Several algorithms use a stack (separate from the usual function call stack of most programming languages) as the principal data structure with which they organize their information. Some computing environments use stacks in ways that may make them vulnerable to security breaches and attacks. Programmers working in such environments must take special care to avoid the pitfalls of these implementations. As an example, some programming languages use a common stack to store both data local to a called procedure and the linking information that allows the procedure to return to its caller. This means that the program moves data into and out of the same stack that contains critical return addresses for the procedure calls. If data is moved to the wrong location on the stack, or an oversized data item is moved to a stack location that is not large enough to contain it, return information for procedure calls may be corrupted, causing the program to fail. Malicious parties may attempt a stack smashing attack that takes advantage of this type of implementation by providing oversized data input to a program that does not check the length of input. Such a program may copy the data in its entirety to a location on the stack, and in doing so it may change the return addresses for procedures that have called it. 
An attacker can experiment to find a specific type of data that can be provided to such a program such that the return address of the current procedure is reset to point to an area within the stack itself (and within the data provided by the attacker), which in turn contains instructions that carry out unauthorized operations. This type of attack is a variation on the buffer overflow attack and is an extremely frequent source of security breaches in software, mainly because some of the most popular compilers use a shared stack for both data and procedure calls and do not verify the length of data items. Frequently, programmers do not write code to verify the size of data items either, and when an oversized or undersized data item is copied to the stack, a security breach may occur.
https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
In computer science, a stream is a sequence of potentially unlimited data elements made available over time. A stream can be thought of as items on a conveyor belt being processed one at a time rather than in large batches. Streams are processed differently from batch data: normal functions cannot operate on streams as a whole, because they have potentially unlimited data. Formally, streams are codata (potentially unlimited), not data (which is finite). Functions that operate on a stream, producing another stream, are known as filters and can be connected in pipelines, in a manner analogous to function composition. Filters may operate on one item of a stream at a time, or may base an item of output on multiple items of input, such as a moving average. The term "stream" is used in a number of similar ways. Streams can be used as the underlying data type for channels in interprocess communication. The term "stream" is also applied to file system forks, where multiple sets of data are associated with a single filename. Most often, there is one main stream that makes up the normal file data, while additional streams contain metadata. Here "stream" is used to indicate "variable size data", as opposed to fixed-size metadata such as extended attributes, but differs from "stream" as used otherwise, meaning "data available over time, potentially infinite".
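Python generators make the filter-and-pipeline idea concrete: each stage lazily consumes one stream and produces another, and a moving average shows a filter that bases each output on several inputs (a small sketch, not tied to any particular library):

```python
from collections import deque
from itertools import count, islice

def squares(stream):
    """A filter acting on one item of the stream at a time."""
    for x in stream:
        yield x * x

def moving_average(stream, window):
    """A filter basing each output item on the last `window` inputs."""
    buf = deque(maxlen=window)
    for x in stream:
        buf.append(x)
        if len(buf) == window:
            yield sum(buf) / window

# A pipeline over a potentially infinite stream: count(1) never ends,
# so islice forces only the finitely many items we actually ask for.
pipeline = moving_average(squares(count(1)), 3)
print(list(islice(pipeline, 4)))   # first averages: 14/3, 29/3, 50/3, 77/3
```

Because each stage yields items on demand, the pipeline works on unbounded input with constant memory, which is exactly the codata character of streams described above.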
https://en.wikipedia.org/wiki/Stream_(computing)
In functional programming, a result type is a monadic type holding a returned value or an error code. Result types provide an elegant way of handling errors without resorting to exception handling: when a function that may fail returns a result type, the programmer is forced to consider the success and failure paths before getting access to the expected result, which eliminates the possibility of an erroneous programmer assumption. In Rust, for example, the result object has the methods is_ok() and is_err() for inspecting which case it holds, and in some languages the error case is required to implement a common error interface.
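The idea can be sketched in Python (hypothetical Ok/Err classes for illustration; this mirrors the shape of a result type but is not any standard library API):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")
E = TypeVar("E")

@dataclass
class Ok(Generic[T]):
    value: T
    def is_ok(self): return True
    def is_err(self): return False

@dataclass
class Err(Generic[E]):
    error: E
    def is_ok(self): return False
    def is_err(self): return True

Result = Union[Ok[T], Err[E]]   # a result is either a value or an error

def parse_int(s: str) -> "Result":
    """Failure is returned as a value, not raised as an exception."""
    try:
        return Ok(int(s))
    except ValueError:
        return Err(f"not an integer: {s!r}")

# The caller must check which case it got before using the value.
r = parse_int("42")
print(r.is_ok(), r.value if r.is_ok() else r.error)   # True 42
print(parse_int("oops").is_err())                     # True
```

The value lives only inside the Ok case, so there is no way to use it without first deciding what happens on failure.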
https://en.wikipedia.org/wiki/Result_type
In computer science, a tagged union, also called a variant, variant record, choice type, discriminated union, disjoint union, sum type, or coproduct, is a data structure used to hold a value that could take on several different, but fixed, types. Only one of the types can be in use at any one time, and a tag field explicitly indicates which type is in use. It can be thought of as a type that has several "cases", each of which should be handled correctly when that type is manipulated. This is critical in defining recursive datatypes, in which some component of a value may have the same type as that value, for example in defining a type for representing trees, where it is necessary to distinguish multi-node subtrees and leaves. Like ordinary unions, tagged unions can save storage by overlapping storage areas for each type, since only one is in use at a time. Tagged unions are most important in functional programming languages such as ML and Haskell, where they are called datatypes (see algebraic data type) and the compiler can verify that all cases of a tagged union are always handled, avoiding many types of errors. Compile-time checked sum types are also extensively used in Rust, where they are called enums. They can, however, be constructed in nearly any programming language, and are much safer than untagged unions, often simply called unions, which are similar but do not explicitly track which member of a union is currently in use. Tagged unions are often accompanied by the concept of a constructor, which is similar to, but not the same as, a constructor for a class. A constructor is a function or an expression that produces a value of the tagged union type, given a tag and a value of the corresponding type. Mathematically, tagged unions correspond to disjoint or discriminated unions, usually written using +. Given an element of a disjoint union A + B, it is possible to determine whether it came from A or B. 
If an element lies in both, there will be two effectively distinct copies of the value in A + B, one from A and one from B. In type theory, a tagged union is called a sum type. Sum types are the dual of product types. Notations vary, but usually the sum type A + B comes with two introduction forms (injections), inj1 : A → A + B and inj2 : B → A + B. The elimination form is case analysis, known as pattern matching in ML-style languages: if e has type A + B and e1 and e2 have type τ under the assumptions x : A and y : B respectively, then the term case e of x ⇒ e1 | y ⇒ e2 has type τ. The sum type corresponds to intuitionistic logical disjunction under the Curry–Howard correspondence. An enumerated type can be seen as a degenerate case: a tagged union of unit types. It corresponds to a set of nullary constructors and may be implemented as a simple tag variable, since it holds no additional data besides the value of the tag. Many programming techniques and data structures, including rope, lazy evaluation, class hierarchy (see below), arbitrary-precision arithmetic, CDR coding, the indirection bit, and other kinds of tagged pointers, are usually implemented using some sort of tagged union. A tagged union can be seen as the simplest kind of self-describing data format. The tag of the tagged union can be seen as the simplest kind of metadata. The primary advantage of a tagged union over an untagged union is that all accesses are safe, and the compiler can even check that all cases are handled. Untagged unions depend on program logic to correctly identify the currently active field, which may result in strange behavior and hard-to-find bugs if that logic fails. The primary advantage of a tagged union over a simple record containing a field for each type is that it saves storage by overlapping storage for all the types. 
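The introduction and elimination forms described above can be written directly in Rust; a hypothetical generic Sum<A, B> enum plays the role of A + B, its constructors the injections, and match the case analysis:

```rust
// A + B as a generic tagged union: Inj1 and Inj2 are the
// two introduction forms (injections).
enum Sum<A, B> {
    Inj1(A),
    Inj2(B),
}

// The elimination form: case analysis mapping both sides
// into a common result type (the τ of the typing rule).
fn case<A, B, T>(e: Sum<A, B>, f: impl Fn(A) -> T, g: impl Fn(B) -> T) -> T {
    match e {
        Sum::Inj1(x) => f(x),
        Sum::Inj2(y) => g(y),
    }
}

fn main() {
    let left: Sum<i32, &str> = Sum::Inj1(41);
    let right: Sum<i32, &str> = Sum::Inj2("hi");
    // Both branches must produce the same result type.
    let a = case(left, |n| n + 1, |s| s.len() as i32);
    let b = case(right, |n| n + 1, |s| s.len() as i32);
    assert_eq!(a, 42);
    assert_eq!(b, 2);
}
```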
Some implementations reserve enough storage for the largest type, while others dynamically adjust the size of a tagged union value as needed. When the value is immutable, it is simple to allocate just as much storage as is needed. The main disadvantage of tagged unions is that the tag occupies space. Since there are usually a small number of alternatives, the tag can often be squeezed into 2 or 3 bits wherever space can be found, but sometimes even these bits are not available. In this case, a helpful alternative may be folded, computed or encoded tags, where the tag value is dynamically computed from the contents of the union field. Common examples are the use of reserved values, where, for example, a function returning a positive number may return -1 to indicate failure, and sentinel values, most often used in tagged pointers. Sometimes, untagged unions are used to perform bit-level conversions between types, called reinterpret casts in C++. Tagged unions are not intended for this purpose; typically a new value is assigned whenever the tag is changed. Many languages support, to some extent, a universal data type, which is a type that includes every value of every other type, and often a way is provided to test the actual type of a value of the universal type. These are sometimes referred to as variants. While universal data types are comparable to tagged unions in their formal definition, typical tagged unions include a relatively small number of cases, and these cases form different ways of expressing a single coherent concept, such as a data structure node or instruction. Also, there is an expectation that every possible case of a tagged union will be dealt with when it is used. The values of a universal data type are not related and there is no feasible way to deal with them all. Like option types and exception handling, tagged unions are sometimes used to handle the occurrence of exceptional results. 
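The "reserved value" style of folded tag described above, and its explicitly tagged alternative, might be contrasted as follows in Rust (find_sentinel and find are hypothetical names):

```rust
// Encoded tag via a reserved value: -1 means "not found",
// so the failure case is folded into the data itself.
fn find_sentinel(haystack: &[i32], needle: i32) -> i32 {
    for (i, &x) in haystack.iter().enumerate() {
        if x == needle {
            return i as i32;
        }
    }
    -1 // reserved value: callers must remember to check it
}

// Explicit tag: the Option wrapper makes the failure case
// a distinct, compiler-checked variant.
fn find(haystack: &[i32], needle: i32) -> Option<usize> {
    haystack.iter().position(|&x| x == needle)
}

fn main() {
    let data = [3, 1, 4];
    assert_eq!(find_sentinel(&data, 4), 2);
    assert_eq!(find_sentinel(&data, 9), -1);
    assert_eq!(find(&data, 4), Some(2));
    assert_eq!(find(&data, 9), None);
}
```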
Often these tags are folded into the type as reserved values, and their occurrence is not consistently checked: this is a fairly common source of programming errors. This use of tagged unions can be formalized as a monad with the following functions: where "value" and "err" are the constructors of the union type, A and B are valid result types and E is the type of error conditions. Alternately, the same monad may be described by return and two additional functions, fmap and join: Say we wanted to build a binary tree of integers. In ML, we would do this by creating a datatype like this: This is a tagged union with two cases: one, the leaf, is used to terminate a path of the tree, and functions much like a null value would in imperative languages. The other branch holds a node, which contains an integer and a left and right subtree. Leaf and Node are the constructors, which enable us to actually produce a particular tree, such as: which corresponds to this tree: Now we can easily write a typesafe function that, for example, counts the number of nodes in the tree: In ALGOL 68, tagged unions are called united modes, the tag is implicit, and the case construct is used to determine which field is tagged: mode node = union (real, int, compl, string); Usage example for union case of node: In ALGOL 68, a union can be automatically coerced into a wider union; for example, if all its constituents can be handled by the union parameter of print, a union can simply be passed to print as in the out case above. Functional programming languages such as ML (from the 1970s) and Haskell (from the 1990s) give a central role to tagged unions and have the power to check that all cases are handled. Some other languages also support tagged unions. 
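The binary-tree datatype described above (whose ML listing is not reproduced here) can be sketched in Rust; Leaf and Node mirror the two constructors, and count is the node-counting function:

```rust
// A tagged union with two cases: Leaf terminates a path,
// Node holds an integer and two subtrees.
enum Tree {
    Leaf,
    Node(i32, Box<Tree>, Box<Tree>),
}

// Type-safe case analysis: the compiler insists that both
// constructors are handled.
fn count(t: &Tree) -> u32 {
    match t {
        Tree::Leaf => 0,
        Tree::Node(_, l, r) => 1 + count(l) + count(r),
    }
}

fn main() {
    // A small tree with four nodes.
    let t = Tree::Node(
        5,
        Box::new(Tree::Node(1, Box::new(Tree::Leaf), Box::new(Tree::Leaf))),
        Box::new(Tree::Node(
            3,
            Box::new(Tree::Leaf),
            Box::new(Tree::Node(4, Box::new(Tree::Leaf), Box::new(Tree::Leaf))),
        )),
    );
    assert_eq!(count(&t), 4);
}
```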
Pascal, Ada, and Modula-2 call them variant records (formally discriminated type in Ada), and require the tag field to be manually created and the tag values specified, as in this Pascal example: and this Ada equivalent: In C and C++, a tagged union can be created from untagged unions using a strict access discipline where the tag is always checked: As long as the union fields are only accessed through the functions, the accesses will be safe and correct. The same approach can be used for encoded tags; we simply decode the tag and then check it on each access. If the inefficiency of these tag checks is a concern, they may be automatically removed in the final version. C and C++ also have language support for one particular tagged union: the possibly-null pointer. This may be compared to the option type in ML or the Maybe type in Haskell, and can be seen as a tagged pointer: a tagged union (with an encoded tag) of two types: Unfortunately, C compilers do not verify that the null case is always handled. This is a particularly common source of errors in C code, since there is a tendency to ignore exceptional cases. However, using both untagged unions and pointers, another way to accomplish a tagged union in C and C++ is through the use of opaque pointer types with functionality exposed only through an associated interface header file: The benefit of using opaque pointers to access tagged unions is that all data access is restricted to the functionality provided by the opaque pointer's interfacing API. By restricting data access through an API, this approach significantly reduces the likelihood of unsafe or incorrect union data accesses. 
The downside to this approach, however, is that dynamic memory allocation cannot be avoided, since a pointer or other kind of handle must be provided to access the internal data, and allocating implicitly places the burden of memory management of the opaque pointer onto the end-using developer. One advanced dialect of C, called Cyclone, has extensive built-in support for tagged unions.[1] The enum types in the Rust, Haxe, and Swift languages also work as tagged unions. The variant library from the Boost C++ Libraries demonstrated it was possible to implement a safe tagged union as a library in C++, visitable using function objects. Scala has case classes: Because the class hierarchy is sealed, the compiler can check that all cases are handled in a pattern match: Scala's case classes also permit reuse through subtyping: F# has discriminated unions: Because the defined cases are exhaustive, the compiler can check that all cases are handled in a pattern match: Haxe's enums also work as tagged unions:[2] These can be matched using a switch expression: Nim has object variants[3] similar in declaration to those in Pascal and Ada: Macros can be used to emulate pattern matching or to create syntactic sugar for declaring object variants, seen here as implemented by the package patty: Enums are added in Scala 3,[4] allowing us to rewrite the earlier Scala examples more concisely: The Rust language has extensive support for tagged unions, called enums.[5] For example: It also allows matching on unions: Rust's error handling model relies extensively on these tagged unions, especially the Option<T> type, which is either None or Some(T), and the Result<T, E> type, which is either Ok(T) or Err(E).[6] Swift also has substantial support for tagged unions via enumerations.[7] For example: With TypeScript it is also possible to create tagged unions. 
For example: Python 3.9 introduces support for typing annotations that can be used to define a tagged union type (PEP 593[8]). C++17 introduces std::variant and constexpr if. In a typical class hierarchy in object-oriented programming, each subclass can encapsulate data unique to that class. The metadata used to perform virtual method lookup (for example, the object's vtable pointer in most C++ implementations) identifies the subclass and so effectively acts as a tag identifying the data stored by the instance (see RTTI). An object's constructor sets this tag, and it remains constant throughout the object's lifetime. Nevertheless, a class hierarchy involves true subtype polymorphism. It can be extended by creating further subclasses of the same base type, which could not be handled correctly under a tag/dispatch model. Hence, it is usually not possible to do case analysis or dispatch on a subobject's 'tag' as one would for tagged unions. Some languages such as Scala allow base classes to be "sealed", and unify tagged unions with sealed base classes.
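The Rust enum and match examples referred to above might look like the following sketch (the Event type is invented for illustration; Option and Result are the standard library's own tagged unions):

```rust
// A user-defined enum: each variant carries its own payload,
// and the discriminant is the tag.
enum Event {
    Click { x: i64, y: i64 },
    Paste(String),
    Quit,
}

fn describe(e: &Event) -> String {
    // The compiler checks that every variant is handled.
    match e {
        Event::Click { x, y } => format!("click at {},{}", x, y),
        Event::Paste(s) => format!("paste {:?}", s),
        Event::Quit => String::from("quit"),
    }
}

fn main() {
    assert_eq!(describe(&Event::Quit), "quit");
    println!("{}", describe(&Event::Click { x: 1, y: 2 }));
    // The standard library's own tagged unions:
    let some: Option<i32> = Some(7);     // None or Some(T)
    let ok: Result<i32, String> = Ok(7); // Ok(T) or Err(E)
    assert_eq!(some.map(|n| n + 1), Some(8));
    assert_eq!(ok.unwrap_or(0), 7);
}
```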
https://en.wikipedia.org/wiki/Tagged_union
Nullable types are a feature of some programming languages which allow a value to be set to the special value NULL instead of the usual possible values of the data type. In statically typed languages, a nullable type is an option type,[citation needed] while in dynamically typed languages (where values have types, but variables do not), equivalent behavior is provided by having a single null value. NULL is frequently used to represent a missing value or invalid value, such as from a function that failed to return or a missing field in a database, as in NULL in SQL. In other words, NULL is undefined. Primitive types such as integers and Booleans cannot generally be null, but the corresponding nullable types (nullable integer and nullable Boolean, respectively) can also assume the NULL value.[jargon][citation needed] This can be represented in ternary logic as FALSE, NULL, TRUE, as in three-valued logic. An integer variable may represent integers, but 0 (zero) is a special case because 0 in many programming languages can mean "false". Also, this does not provide any notion of saying that the variable is empty, a need that arises in many circumstances. This need can be achieved with a nullable type. In programming languages like C# 2.0, a nullable integer, for example, can be declared by a question mark (int? x).[1][2]: 46 In programming languages like C# 1.0, nullable types can be defined by an external library[3] as new types (e.g. NullableInteger, NullableBoolean).[4] A Boolean variable makes the effect more clear. Its values can be either "true" or "false", while a nullable Boolean may also contain a representation for "undecided". However, the interpretation or treatment of a logical operation involving such a variable depends on the language. In contrast, object pointers can be set to NULL by default in most common languages, meaning that the pointer or reference points to nowhere, that no object is assigned (the variable does not point to any object). 
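The nullable Boolean described above can be modelled in Rust with Option<bool>, where None plays the role of NULL; the and3 helper below is a hypothetical three-valued AND in the style of SQL's logic:

```rust
// Three-valued AND over a nullable Boolean:
// FALSE dominates; otherwise NULL (None) is contagious.
fn and3(a: Option<bool>, b: Option<bool>) -> Option<bool> {
    match (a, b) {
        (Some(false), _) | (_, Some(false)) => Some(false),
        (Some(true), Some(true)) => Some(true),
        _ => None, // undecided
    }
}

fn main() {
    assert_eq!(and3(Some(true), Some(true)), Some(true));
    assert_eq!(and3(Some(false), None), Some(false)); // FALSE AND NULL = FALSE
    assert_eq!(and3(Some(true), None), None);         // TRUE AND NULL = NULL
}
```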
Nullable references were invented by C. A. R. Hoare in 1965 as part of the Algol W language. Hoare later described his invention as a "billion-dollar mistake".[5] This is because object pointers that can be NULL require the user to check the pointer before using it and require specific code to handle the case when the object pointer is NULL. Java has classes that correspond to scalar values, such as Integer, Boolean, and Float. Combined with autoboxing (automatic usage-driven conversion between object and value), this effectively allows nullable variables for scalar values.[citation needed] Nullable type implementations usually adhere to the null object pattern. There is a more general and formal concept that extends the nullable type concept: it comes from option types, which enforce explicit handling of the exceptional case. The following programming languages support nullable types. Statically typed languages with native null support include: Statically typed languages with library null support include: Dynamically-typed languages with null include:
https://en.wikipedia.org/wiki/Nullable_type
In object-oriented computer programming, a null object is an object with no referenced value or with defined neutral (null) behavior. The null object design pattern, which describes the uses of such objects and their behavior (or lack thereof), was first published as "Void Value"[1] and later in the Pattern Languages of Program Design book series as "Null Object".[2] In most object-oriented languages, such as Java or C#, references may be null. These references need to be checked to ensure they are not null before invoking any methods, because methods typically cannot be invoked on null references. The Objective-C language takes another approach to this problem and does nothing when sending a message to nil; if a return value is expected, nil (for objects), 0 (for numeric values), NO (for BOOL values), or a zero-initialised struct (for struct types) is returned.[3] Instead of using a null reference to convey the absence of an object (for instance, a non-existent customer), one uses an object which implements the expected interface, but whose method body is empty. A key purpose of using a null object is to avoid conditionals of different kinds, resulting in code that is more focused and quicker to read and follow, i.e. improved readability. One advantage of this approach over a working default implementation is that a null object is very predictable and has no side effects: it does nothing. For example, a function may retrieve a list of files in a folder and perform some action on each. In the case of an empty folder, one response may be to throw an exception or return a null reference rather than a list. Thus, the code expecting a list must verify that it in fact has one before continuing, which can complicate the design. By returning a null object (i.e., an empty list) instead, there is no need to verify that the return value is in fact a list. The calling function may simply iterate the list as normal, effectively doing nothing. 
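The folder-listing example above might be sketched in Rust as follows (list_files is a hypothetical function): returning an empty vector rather than an optional one lets callers iterate unconditionally.

```rust
// Returning the empty list as a null object: callers never
// need to check for a missing list before iterating.
fn list_files(folder_exists: bool) -> Vec<String> {
    if folder_exists {
        vec![String::from("a.txt"), String::from("b.txt")]
    } else {
        Vec::new() // null object: an empty, perfectly iterable list
    }
}

fn main() {
    let mut visited = 0;
    for _file in list_files(false) {
        visited += 1; // the body simply never runs for the empty folder
    }
    assert_eq!(visited, 0);
    assert_eq!(list_files(true).len(), 2);
}
```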
It is, however, still possible to check whether the return value is a null object (an empty list) and react differently if desired. The null object pattern can also be used to act as a stub for testing, if a certain feature such as a database is not available for testing. Given a binary tree, with this node structure: One may implement a tree size procedure recursively: Since the child nodes may not exist, one must modify the procedure by adding non-existence or null checks: This, however, makes the procedure more complicated by mixing boundary checks with normal logic, and it becomes harder to read. Using the null object pattern, one can create a special version of the procedure but only for null nodes: This separates normal logic from special case handling and makes the code easier to understand. It can be regarded as a special case of the State pattern and the Strategy pattern. It is not a pattern from Design Patterns, but is mentioned in Martin Fowler's Refactoring[4] and Joshua Kerievsky's Refactoring to Patterns[5] as the Introduce Null Object refactoring. Chapter 17 of Robert Cecil Martin's Agile Software Development: Principles, Patterns and Practices[6] is dedicated to the pattern. From C# 6.0 it is possible to use the "?." operator (a.k.a. the null-conditional operator), which will simply evaluate to null if its left operand is null. In some Microsoft .NET languages, extension methods can be used to perform what is called 'null coalescing'. This is because extension methods can be called on null values as if it concerns an 'instance method invocation', while in fact extension methods are static. Extension methods can be made to check for null values, thereby freeing code that uses them from ever having to do so. Note that the example below uses the C# null coalescing operator to guarantee error-free invocation, where it could also have used a more mundane if...then...else. 
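Returning to the tree-size procedures discussed above (whose original listings are not reproduced here), the idea can be sketched in Rust using trait objects, so that the null case is handled by its own type rather than by in-line checks:

```rust
// Null object pattern: the "no node" case is a real object
// with its own trivial behaviour, so size() needs no null checks.
trait Node {
    fn size(&self) -> usize;
}

struct NullNode; // the null object

impl Node for NullNode {
    fn size(&self) -> usize {
        0 // defined neutral behaviour for the boundary case
    }
}

struct TreeNode {
    left: Box<dyn Node>,
    right: Box<dyn Node>,
}

impl Node for TreeNode {
    fn size(&self) -> usize {
        // Normal logic only: no boundary checks mixed in.
        1 + self.left.size() + self.right.size()
    }
}

fn main() {
    let tree = TreeNode {
        left: Box::new(TreeNode {
            left: Box::new(NullNode),
            right: Box::new(NullNode),
        }),
        right: Box::new(NullNode),
    };
    assert_eq!(tree.size(), 2);
}
```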
The following example only works when you do not care about the existence of null, or you treat null and the empty string as the same. The assumption may not hold in other applications. A language with statically typed references to objects illustrates how the null object becomes a more complicated pattern: Here, the idea is that there are situations where a pointer or reference to an Animal object is required, but there is no appropriate object available. A null reference is impossible in standard-conforming C++. A null Animal* pointer is possible, and could be useful as a place-holder, but may not be used for direct dispatch: a->MakeSound() is undefined behavior if a is a null pointer. The null object pattern solves this problem by providing a special NullAnimal class which can be instantiated bound to an Animal pointer or reference. The special null class must be created for each class hierarchy that is to have a null object, since a NullAnimal is of no use when what is needed is a null object with regard to some Widget base class that is not related to the Animal hierarchy. Note that NOT having a null class at all is an important feature, in contrast to languages where "anything is a reference" (e.g., Java and C#). In C++, the design of a function or method may explicitly state whether null is allowed or not. C# is a language in which the null object pattern can be properly implemented. This example shows animal objects that display sounds and a NullAnimal instance used in place of the C# null keyword. The null object provides consistent behaviour and prevents a runtime null reference exception that would occur if the C# null keyword were used instead. Following the Smalltalk principle that everything is an object, the absence of an object is itself modeled by an object, called nil. In GNU Smalltalk, for example, the class of nil is UndefinedObject, a direct descendant of Object. 
Any operation that fails to return a sensible object for its purpose may return nil instead, thus avoiding the special case of returning "no object", which is unsupported by Smalltalk designers. This method has the advantage of simplicity (no need for a special case) over the classical "null" or "no object" or "null reference" approach. Especially useful messages to be used with nil are isNil, ifNil: or ifNotNil:, which make it practical and safe to deal with possible references to nil in Smalltalk programs. In Lisp, functions can gracefully accept the special object nil, which reduces the amount of special case testing in application code. For instance, although nil is an atom and does not have any fields, the functions car and cdr accept nil and just return it, which is very useful and results in shorter code. Since nil is the empty list in Lisp, the situation described in the introduction above doesn't exist. Code which returns nil is returning what is in fact the empty list (and not anything resembling a null reference to a list type), so the caller does not need to test the value to see whether or not it has a list. The null object pattern is also supported in multiple value processing. If the program attempts to extract a value from an expression which returns no values, the behavior is that the null object nil is substituted. Thus (list (values)) returns (nil) (a one-element list containing nil). The (values) expression returns no values at all, but since the function call to list needs to reduce its argument expression to a value, the null object is automatically substituted. In Common Lisp, the object nil is the one and only instance of the special class null. What this means is that a method can be specialized to the null class, thereby implementing the null design pattern; which is to say, it is essentially built into the object system. The class null is a subclass of the symbol class, because nil is a symbol. Since nil also represents the empty list, null is a subclass of the list class, too. 
Method parameters specialized to symbol or list will thus accept a nil argument. Of course, a null specialization can still be defined which is a more specific match for nil. Unlike Common Lisp and many dialects of Lisp, the Scheme dialect does not have a nil value which works this way; the functions car and cdr may not be applied to an empty list, so Scheme application code has to use the empty? or pair? predicate functions to sidestep this situation, even in situations where very similar Lisp code would not need to distinguish the empty and non-empty cases thanks to the behavior of nil. In duck-typed languages like Ruby, language inheritance is not necessary to provide expected behavior; attempts to directly monkey-patch NilClass instead of providing explicit implementations give more unexpected side effects than benefits. The same holds in duck-typed languages like JavaScript. The following code illustrates a variation of the C++ example, above, using the Java language. As with C++, a null class can be instantiated in situations where a reference to an Animal object is required, but there is no appropriate object available. A null Animal object is possible (Animal myAnimal = null;) and could be useful as a place-holder, but may not be used for calling a method. In this example, myAnimal.makeSound(); will throw a NullPointerException. Therefore, additional code may be necessary to test for null objects. The null object pattern solves this problem by providing a special NullAnimal class which can be instantiated as an object of type Animal. As with C++ and related languages, that special null class must be created for each class hierarchy that needs a null object, since a NullAnimal is of no use when what is needed is a null object that does not implement the Animal interface. The following null object pattern implementation demonstrates the concrete class providing its corresponding null object in a static field Empty. 
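The static-Empty-field variant referred to above (whose original listing is not reproduced here) can be sketched in Rust with an associated constant serving as the shared null object; the Customer type is invented for illustration:

```rust
// A concrete type publishing its own null object, in the
// spirit of String.Empty or EventArgs.Empty.
#[derive(PartialEq, Debug)]
struct Customer {
    name: &'static str,
}

impl Customer {
    // The shared, well-known null instance.
    const EMPTY: Customer = Customer { name: "" };

    fn lookup(id: u32) -> Customer {
        if id == 1 {
            Customer { name: "Ada" }
        } else {
            Customer::EMPTY // never a null reference
        }
    }
}

fn main() {
    assert_eq!(Customer::lookup(1).name, "Ada");
    // Callers can compare against the null object explicitly.
    assert_eq!(Customer::lookup(99), Customer::EMPTY);
}
```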
This approach is frequently used in the .NET Framework (String.Empty, EventArgs.Empty, Guid.Empty, etc.). This pattern should be used carefully, as it can make errors/bugs appear as normal program execution.[7] Care should be taken not to implement this pattern just to avoid null checks and make code more readable, since the harder-to-read code may just move to another place and be less standard, such as when different logic must execute in case the object provided is indeed the null object. The common pattern in most languages with reference types is to compare a reference to a single value referred to as null or nil. Also, there is an additional need for testing that no code anywhere ever assigns null instead of the null object, because in most cases and languages with static typing, this is not a compiler error if the null object is of a reference type, although it would certainly lead to errors at run time in parts of the code where the pattern was used to avoid null checks. On top of that, in most languages and assuming there can be many null objects (i.e., the null object is a reference type but doesn't implement the singleton pattern in one or another way), checking for the null object instead of for the null or nil value introduces overhead, as does the singleton pattern likely itself upon obtaining the singleton reference.
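The Animal/NullAnimal hierarchy described earlier in this section might be transliterated into Rust using trait objects; the names follow the article's description, but the rendering is an illustrative sketch:

```rust
// NullAnimal implements the expected interface with an
// intentionally empty (silent) behaviour.
trait Animal {
    fn make_sound(&self) -> String;
}

struct Dog;
impl Animal for Dog {
    fn make_sound(&self) -> String {
        String::from("woof!")
    }
}

struct NullAnimal;
impl Animal for NullAnimal {
    fn make_sound(&self) -> String {
        String::new() // does nothing, predictably and safely
    }
}

// Callers can always dispatch; no null reference can occur.
fn greet(a: &dyn Animal) -> String {
    a.make_sound()
}

fn main() {
    assert_eq!(greet(&Dog), "woof!");
    assert_eq!(greet(&NullAnimal), ""); // place-holder with neutral behaviour
}
```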
https://en.wikipedia.org/wiki/Null_object_pattern
In computing and computer programming, exception handling is the process of responding to the occurrence of exceptions – anomalous or exceptional conditions requiring special processing – during the execution of a program. In general, an exception breaks the normal flow of execution and executes a pre-registered exception handler; the details of how this is done depend on whether it is a hardware or software exception and how the software exception is implemented. Exceptions are defined by different layers of a computer system; the typical layers are CPU-defined interrupts, operating system (OS)-defined signals, and programming language-defined exceptions. Each layer requires different ways of exception handling, although they may be interrelated; e.g. a CPU interrupt could be turned into an OS signal. Some exceptions, especially hardware ones, may be handled so gracefully that execution can resume where it was interrupted. The definition of an exception is based on the observation that each procedure has a precondition, a set of circumstances for which it will terminate "normally".[1] An exception handling mechanism allows the procedure to raise an exception[2] if this precondition is violated,[1] for example if the procedure has been called on an abnormal set of arguments. The exception handling mechanism then handles the exception.[3] The precondition, and the definition of exception, is subjective. The set of "normal" circumstances is defined entirely by the programmer, e.g. the programmer may deem division by zero to be undefined, hence an exception, or devise some behavior such as returning zero or a special "ZERO DIVIDE" value (circumventing the need for exceptions).[4] Common exceptions include an invalid argument (e.g. 
value is outside of the domain of a function),[5] an unavailable resource (like a missing file,[6] a network drive error,[7] or out-of-memory errors[8]), or that the routine has detected a normal condition that requires special handling, e.g., attention, end of file.[9] Social pressure is a major influence on the scope of exceptions and use of exception-handling mechanisms, i.e. "examples of use, typically found in core libraries, and code examples in technical books, magazine articles, and online discussion forums, and in an organization's code standards".[10] Exception handling solves the semipredicate problem, in that the mechanism distinguishes normal return values from erroneous ones. In languages without built-in exception handling such as C, routines would need to signal the error in some other way, such as the common return code and errno pattern.[11] Taking a broad view, errors can be considered to be a proper subset of exceptions,[12] and explicit error mechanisms such as errno can be considered (verbose) forms of exception handling.[11] The term "exception" is preferred to "error" because it does not imply that anything is wrong: a condition viewed as an error by one procedure or programmer may not be viewed that way by another.[13] The term "exception" may be misleading because its connotation of "anomaly" indicates that raising an exception is abnormal or unusual,[14] when in fact raising the exception may be a normal and usual situation in the program.[13] For example, suppose a lookup function for an associative array throws an exception if the key has no value associated. Depending on context, this "key absent" exception may occur much more often than a successful lookup.[15] The first hardware exception handling was found in the UNIVAC I from 1951. Arithmetic overflow executed two instructions at address 0 which could transfer control or fix up the result.[16] Software exception handling developed in the 1960s and 1970s. 
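The contrast drawn above between the return-code-and-errno pattern and a mechanism that keeps normal and erroneous returns apart can be sketched in Rust (both functions are hypothetical):

```rust
// C-style: a reserved return value signals failure, so the
// error lives in the same channel as normal results
// (the semipredicate problem).
fn sqrt_errno(x: f64, errno: &mut i32) -> f64 {
    if x < 0.0 {
        *errno = 1; // caller must remember to check this
        return 0.0; // indistinguishable from sqrt(0.0)
    }
    x.sqrt()
}

// Result-style: erroneous returns have their own variant,
// so they cannot be confused with a normal value.
fn sqrt_checked(x: f64) -> Result<f64, String> {
    if x < 0.0 {
        Err(String::from("negative argument"))
    } else {
        Ok(x.sqrt())
    }
}

fn main() {
    let mut errno = 0;
    let v = sqrt_errno(-1.0, &mut errno);
    assert_eq!(v, 0.0);   // looks like a valid answer...
    assert_eq!(errno, 1); // ...unless errno is consulted
    assert!(sqrt_checked(-1.0).is_err());
    assert_eq!(sqrt_checked(4.0), Ok(2.0));
}
```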
Exception handling was subsequently widely adopted by many programming languages from the 1980s onward. There is no clear consensus as to the exact meaning of an exception with respect to hardware.[17] From the implementation point of view, it is handled identically to an interrupt: the processor halts execution of the current program, looks up the interrupt handler in the interrupt vector table for that exception or interrupt condition, saves state, and switches control. Exception handling in the IEEE 754 floating-point standard refers in general to exceptional conditions and defines an exception as "an event that occurs when an operation on some particular operands has no outcome suitable for every reasonable application. That operation might signal one or more exceptions by invoking the default or, if explicitly requested, a language-defined alternate handling." By default, an IEEE 754 exception is resumable and is handled by substituting a predefined value for different exceptions, e.g. infinity for a divide-by-zero exception, and providing status flags for later checking of whether the exception occurred (see the C99 programming language for a typical example of handling of IEEE 754 exceptions). An exception-handling style enabled by the use of status flags involves: first computing an expression using a fast, direct implementation; checking whether it failed by testing status flags; and then, if necessary, calling a slower, more numerically robust implementation.[18] The IEEE 754 standard uses the term "trapping" to refer to the calling of a user-supplied exception-handling routine on exceptional conditions, and it is an optional feature of the standard. 
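The status-flag style described above — compute fast, test whether it failed, fall back to a robust implementation — can be approximated in Rust by checking for a non-finite result instead of reading hardware flags (hypot_fast and hypot_robust are hypothetical names; standard libraries provide a real hypot):

```rust
// Fast path: may overflow to infinity for large inputs.
fn hypot_fast(x: f64, y: f64) -> f64 {
    (x * x + y * y).sqrt()
}

// Robust path: rescales to avoid intermediate overflow.
fn hypot_robust(x: f64, y: f64) -> f64 {
    let (a, b) = (x.abs().max(y.abs()), x.abs().min(y.abs()));
    if a == 0.0 {
        return 0.0;
    }
    let r = b / a;
    a * (1.0 + r * r).sqrt()
}

fn hypot(x: f64, y: f64) -> f64 {
    let fast = hypot_fast(x, y);
    // In place of IEEE status flags, test the result itself.
    if fast.is_finite() {
        fast
    } else {
        hypot_robust(x, y)
    }
}

fn main() {
    assert_eq!(hypot(3.0, 4.0), 5.0);
    let big = 1.0e200;
    // The fast path overflows here, so the fallback is used.
    assert!(hypot(big, big).is_finite());
}
```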
The standard recommends several usage scenarios for this, including the implementation of non-default pre-substitution of a value followed by resumption, to concisely handle removable singularities.[18][19][20] The default IEEE 754 exception handling behaviour of resumption following pre-substitution of a default value avoids the risks inherent in changing the flow of program control on numerical exceptions. For example, the 1996 Cluster spacecraft launch ended in a catastrophic explosion due in part to the Ada exception handling policy of aborting computation on arithmetic error. William Kahan claims the default IEEE 754 exception handling behavior would have prevented this.[19] Front-end web development frameworks, such as React and Vue, have introduced error handling mechanisms where errors propagate up the user interface (UI) component hierarchy, in a way that is analogous to how errors propagate up the call stack in executing code.[21][22] Here the error boundary mechanism serves as an analogue to the typical try-catch mechanism. Thus a component can ensure that errors from its child components are caught and handled, and not propagated up to parent components. For example, in Vue, a component would catch errors by implementing errorCaptured. When used like this in markup: The error produced by the child component is caught and handled by the parent component.[23]
https://en.wikipedia.org/wiki/Exception_handling
In computer science, pattern matching is the act of checking a given sequence of tokens for the presence of the constituents of some pattern. In contrast to pattern recognition, the match usually must be exact: "either it will or will not be a match." The patterns generally have the form of either sequences or tree structures. Uses of pattern matching include outputting the locations (if any) of a pattern within a token sequence, outputting some component of the matched pattern, and substituting the matching pattern with some other token sequence (i.e., search and replace). Sequence patterns (e.g., a text string) are often described using regular expressions and matched using techniques such as backtracking. Tree patterns are used in some programming languages as a general tool to process data based on its structure; e.g. C#,[1] F#,[2] Haskell,[3] Java,[4] ML, Python,[5] Ruby,[6] Rust,[7] Scala,[8] Swift[9] and the symbolic mathematics language Mathematica have special syntax for expressing tree patterns and a language construct for conditional execution and value retrieval based on it. Often it is possible to give alternative patterns that are tried one by one, which yields a powerful conditional programming construct. Pattern matching sometimes includes support for guards.[citation needed] Early programming languages with pattern matching constructs include COMIT (1957), SNOBOL (1962), Refal (1968) with tree-based pattern matching, Prolog (1972), St Andrews Static Language (SASL) (1976), NPL (1977), and Kent Recursive Calculator (KRC) (1981). The pattern matching feature of function arguments in the language ML (1973) and its dialect Standard ML (1983) has been carried over to some other functional programming languages that were influenced by them, such as Haskell (1990), Scala (2004), and F# (2005). The pattern matching construct with the match keyword that was introduced in the ML dialect Caml (1985) was followed by languages such as OCaml (1996), F# (2005), F* (2011), and Rust (2015). 
Many text editors support pattern matching of various kinds: the QED editor supports regular expression search, and some versions of TECO support the OR operator in searches. Computer algebra systems generally support pattern matching on algebraic expressions.[10] The simplest pattern in pattern matching is an explicit value or a variable. For example, consider a simple function definition in Haskell syntax (function parameters are not in parentheses but are separated by spaces; = is not assignment but definition): Here, 0 is a single value pattern. Now, whenever f is given 0 as argument the pattern matches and the function returns 1. With any other argument, the matching and thus the function fail. As the syntax supports alternative patterns in function definitions, we can continue the definition, extending it to take more generic arguments: Here, the first n is a single variable pattern, which will match absolutely any argument and bind it to the name n to be used in the rest of the definition. In Haskell (unlike at least Hope), patterns are tried in order, so the first definition still applies in the very specific case of the input being 0, while for any other argument the function returns n * f (n-1) with n being the argument. The wildcard pattern (often written as _) is also simple: like a variable name, it matches any value, but does not bind the value to any name. Algorithms for matching wildcards in simple string-matching situations have been developed in a number of recursive and non-recursive varieties.[11] More complex patterns can be built from the primitive ones of the previous section, usually in the same way as values are built by combining other values. The difference then is that with variable and wildcard parts, a pattern does not build into a single value, but matches a group of values that are the combination of the concrete elements and the elements that are allowed to vary within the structure of the pattern. 
A tree pattern describes a part of a tree by starting with a node and specifying some branches and nodes and leaving some unspecified with a variable or wildcard pattern. It may help to think of the abstract syntax tree of a programming language and algebraic data types. In Haskell, the following line defines an algebraic data type Color that has a single data constructor ColorConstructor that wraps an integer and a string. The constructor is a node in a tree and the integer and string are leaves in branches. When we want to write functions to make Color an abstract data type, we wish to write functions to interface with the data type, and thus we want to extract some data from the data type, for example, just the string or just the integer part of Color. If we pass a variable that is of type Color, how can we get the data out of this variable? For example, for a function to get the integer part of Color, we can use a simple tree pattern and write: As well: The creation of these functions can be automated by Haskell's data record syntax. This OCaml example, which defines a red–black tree and a function to re-balance it after element insertion, shows how to match on a more complex structure generated by a recursive data type. The compiler verifies at compile-time that the list of cases is exhaustive and none are redundant. Pattern matching can be used to filter data of a certain structure. For instance, in Haskell a list comprehension could be used for this kind of filtering: evaluates to In Mathematica, the only structure that exists is the tree, which is populated by symbols. In the Haskell syntax used thus far, this could be defined as An example tree could then look like In the traditional, more suitable syntax, the symbols are written as they are and the levels of the tree are represented using [], so that for instance a[b,c] is a tree with a as the parent, and b and c as the children. A pattern in Mathematica involves putting "_" at positions in that tree. 
For instance, the pattern will match elements such as A[1], A[2], or more generally A[x] where x is any entity. In this case, A is the concrete element, while _ denotes the piece of tree that can be varied. A symbol prepended to _ binds the match to that variable name, while a symbol appended to _ restricts the matches to nodes of that symbol. Note that even blanks themselves are internally represented as Blank[] for _ and Blank[x] for _x. The Mathematica function Cases filters elements of the first argument that match the pattern in the second argument:[12] evaluates to Pattern matching applies to the structure of expressions. In the example below, returns because only these elements will match the pattern a[b[_],_] above. In Mathematica, it is also possible to extract structures as they are created in the course of computation, regardless of how or where they appear. The function Trace can be used to monitor a computation, and return the elements that arise which match a pattern. For example, we can define the Fibonacci sequence as Then, we can ask the question: Given fib[3], what is the sequence of recursive Fibonacci calls? returns a structure that represents the occurrences of the pattern fib[_] in the computational structure: In symbolic programming languages, it is easy to have patterns as arguments to functions or as elements of data structures. A consequence of this is the ability to use patterns to declaratively make statements about pieces of data and to flexibly instruct functions how to operate. For instance, the Mathematica function Compile can be used to make more efficient versions of the code. In the following example the details do not particularly matter; what matters is that the subexpression {{com[_], Integer}} instructs Compile that expressions of the form com[_] can be assumed to be integers for the purposes of compilation: Mailboxes in Erlang also work this way. 
The Curry–Howard correspondence between proofs and programs relates ML-style pattern matching to case analysis and proof by exhaustion. By far the most common form of pattern matching involves strings of characters. In many programming languages, a particular syntax of strings is used to represent regular expressions, which are patterns describing string characters. However, it is possible to perform some string pattern matching within the same framework that has been discussed throughout this article. In Mathematica, strings are represented as trees of root StringExpression and all the characters in order as children of the root. Thus, to match "any amount of trailing characters", a new wildcard ___ is needed in contrast to _ that would match only a single character. In Haskell and functional programming languages in general, strings are represented as functional lists of characters. A functional list is defined as an empty list, or an element constructed on an existing list. In Haskell syntax: The structure for a list with some elements is thus element:list. When pattern matching, we assert that a certain piece of data is equal to a certain pattern. For example, in the function: We assert that the first element of head's argument is called element, and the function returns this. We know that this is the first element because of the way lists are defined, a single element constructed onto a list. This single element must be the first. The empty list would not match the pattern at all, as an empty list does not have a head (the first element that is constructed). In the example, we have no use for list, so we can disregard it, and thus write the function: The equivalent Mathematica transformation is expressed as In Mathematica, for instance, will match a string that has two characters and begins with "a". The same pattern in Haskell: Symbolic entities can be introduced to represent many different classes of relevant features of a string. 
For instance, will match a string that consists of a letter first, and then a number. In Haskell, guards could be used to achieve the same matches: The main advantage of symbolic string manipulation is that it can be completely integrated with the rest of the programming language, rather than being a separate, special-purpose subunit. The entire power of the language can be leveraged to build up the patterns themselves or analyze and transform the programs that contain them. SNOBOL (StriNg Oriented and symBOlic Language) is a computer programming language developed between 1962 and 1967 at AT&T Bell Laboratories by David J. Farber, Ralph E. Griswold and Ivan P. Polonsky. SNOBOL4 stands apart from most programming languages by having patterns as a first-class data type (i.e. a data type whose values can be manipulated in all ways permitted to any other data type in the programming language) and by providing operators for pattern concatenation and alternation. Strings generated during execution can be treated as programs and executed. SNOBOL was quite widely taught in larger US universities in the late 1960s and early 1970s and was widely used in the 1970s and 1980s as a text manipulation language in the humanities. Since SNOBOL's creation, newer languages such as AWK and Perl have made string manipulation by means of regular expressions fashionable. SNOBOL4 patterns, however, subsume Backus–Naur form (BNF) grammars, which are equivalent to context-free grammars and more powerful than regular expressions.[13]
https://en.wikipedia.org/wiki/Pattern_matching
In mathematical logic, the lambda calculus (also written as λ-calculus) is a formal system for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus, the topic of this article, is a universal machine, a model of computation that can be used to simulate any Turing machine (and vice versa). It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940. Lambda calculus consists of constructing lambda terms and performing reduction operations on them. A term is defined as any valid lambda calculus expression. In the simplest form of lambda calculus, terms are built using only the following rules:[a] The reduction operations include: If De Bruijn indexing is used, then α-conversion is no longer required as there will be no name collisions. If repeated application of the reduction steps eventually terminates, then by the Church–Rosser theorem it will produce a β-normal form. Variable names are not needed if using a universal lambda function, such as Iota and Jot, which can create any function behavior by calling it on itself in various combinations. Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine.[3] Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denote binding a variable in a function. Lambda calculus may be untyped or typed. In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are strictly weaker than the untyped lambda calculus, which is the primary subject of this article, in the sense that typed lambda calculi can express less than the untyped calculus can. On the other hand, typed lambda calculi allow more things to be proven. 
For example, in simply typed lambda calculus, it is a theorem that every evaluation strategy terminates for every simply typed lambda term, whereas evaluation of untyped lambda terms need not terminate (see below). One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus. Lambda calculus has applications in many different areas in mathematics, philosophy,[4] linguistics,[5][6] and computer science.[7][8] Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement lambda calculus. Lambda calculus is also a current research topic in category theory.[9] Lambda calculus was introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics.[10][c] The original system was shown to be logically inconsistent in 1935 when Stephen Kleene and J. B. Rosser developed the Kleene–Rosser paradox.[11][12] Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus.[13] In 1940, he also introduced a computationally weaker, but logically consistent system, known as the simply typed lambda calculus.[14] Until the 1960s when its relation to programming languages was clarified, the lambda calculus was only a formalism. Thanks to Richard Montague and other linguists' applications in the semantics of natural language, the lambda calculus has begun to enjoy a respectable place in both linguistics[15] and computer science.[16] There is some uncertainty over the reason for Church's use of the Greek letter lambda (λ) as the notation for function-abstraction in the lambda calculus, perhaps in part due to conflicting explanations by Church himself. According to Cardone and Hindley (2006): By the way, why did Church choose the notation "λ"? 
In [an unpublished 1964 letter to Harald Dickson] he stated clearly that it came from the notation "x̂" used for class-abstraction by Whitehead and Russell, by first modifying "x̂" to "∧x" to distinguish function-abstraction from class-abstraction, and then changing "∧" to "λ" for ease of printing. This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and λ just happened to be chosen. Dana Scott has also addressed this question in various public lectures.[17] Scott recounts that he once posed a question about the origin of the lambda symbol to Church's former student and son-in-law John W. Addison Jr., who then wrote his father-in-law a postcard: Dear Professor Church, Russell had the iota operator, Hilbert had the epsilon operator. Why did you choose lambda for your operator? According to Scott, Church's entire response consisted of returning the postcard with the following annotation: "eeny, meeny, miny, moe". Computable functions are a fundamental concept within computer science and mathematics. The lambda calculus provides simple semantics for computation which are useful for formally studying properties of computation. The lambda calculus incorporates two simplifications that make its semantics simple. The first simplification is that the lambda calculus treats functions "anonymously"; it does not give them explicit names. For example, the function can be rewritten in anonymous form as (which is read as "a tuple of x and y is mapped to x² + y²").[d] Similarly, the function can be rewritten in anonymous form as where the input is simply mapped to itself.[d] The second simplification is that the lambda calculus only uses functions of a single input. 
An ordinary function that requires two inputs, for instance the square_sum function, can be reworked into an equivalent function that accepts a single input, and as output returns another function, that in turn accepts a single input. For example, can be reworked into This method, known as currying, transforms a function that takes multiple arguments into a chain of functions each with a single argument. Function application of the square_sum function to the arguments (5, 2) yields at once whereas evaluation of the curried version requires one more step to arrive at the same result. The lambda calculus consists of a language of lambda terms, that are defined by a certain formal syntax, and a set of transformation rules for manipulating the lambda terms. These transformation rules can be viewed as an equational theory or as an operational definition. As described above, having no names, all functions in the lambda calculus are anonymous functions. They only accept one input variable, so currying is used to implement functions of several variables. The syntax of the lambda calculus defines some expressions as valid lambda calculus expressions and some as invalid, just as some strings of characters are valid computer programs and some are not. A valid lambda calculus expression is called a "lambda term". The following three rules give an inductive definition that can be applied to build all syntactically valid lambda terms:[e] Nothing else is a lambda term. That is, a lambda term is valid if and only if it can be obtained by repeated application of these three rules. For convenience, some parentheses can be omitted when writing a lambda term. For example, the outermost parentheses are usually not written. See § Notation, below, for an explicit description of which parentheses are optional. 
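The currying transformation described for square_sum can be sketched in Python; the two-argument function follows the text, while the curried variant is a chain of one-argument lambdas of our own devising:

```python
# Ordinary two-input function, as in the text's square_sum.
def square_sum(x, y):
    return x**2 + y**2

# Curried form: accepts one input and returns another one-input function.
curried_square_sum = lambda x: (lambda y: x**2 + y**2)

# Application to the arguments (5, 2): the curried version needs one
# extra application step to arrive at the same result.
square_sum(5, 2)          # 29
curried_square_sum(5)(2)  # 29, via the intermediate function
```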
It is also common to extend the syntax presented here with additional operations, which allows making sense of terms such as λx.x². The focus of this article is the pure lambda calculus without extensions, but lambda terms extended with arithmetic operations are used for explanatory purposes. An abstraction λx.t denotes an § anonymous function[g] that takes a single input x and returns t. For example, λx.(x²+2) is an abstraction representing the function f defined by f(x) = x² + 2, using the term x² + 2 for t. The name f is superfluous when using abstraction. The syntax (λx.t) binds the variable x in the term t. The definition of a function with an abstraction merely "sets up" the function but does not invoke it. An application ts represents the application of a function t to an input s, that is, it represents the act of calling function t on input s to produce t(s). A lambda term may refer to a variable that has not been bound, such as the term λx.(x+y) (which represents the function definition f(x) = x + y). In this term, the variable y has not been defined and is considered an unknown. The abstraction λx.(x+y) is a syntactically valid term and represents a function that adds its input to the yet-unknown y. Parentheses may be used and might be needed to disambiguate terms. For example, The examples 1 and 2 denote different terms, differing only in where the parentheses are placed. They have different meanings: example 1 is a function definition, while example 2 is a function application. The lambda variable x is a placeholder in both examples. 
Here, example 1 defines a function λx.B, where B is (λx.x)x, an anonymous function (λx.x), with input x; while example 2, M N, is M applied to N, where M is the lambda term (λx.(λx.x)) being applied to the input N, which is x. Both examples 1 and 2 would evaluate to the identity function λx.x. In lambda calculus, functions are taken to be 'first class values', so functions may be used as the inputs, or be returned as outputs from other functions. For example, the lambda term λx.x represents the identity function, x ↦ x. Further, λx.y represents the constant function x ↦ y, the function that always returns y, no matter the input. As an example of a function operating on functions, the function composition can be defined as λf.λg.λx.(f (g x)). There are several notions of "equivalence" and "reduction" that allow lambda terms to be "reduced" to "equivalent" lambda terms. A basic form of equivalence, definable on lambda terms, is alpha equivalence. It captures the intuition that the particular choice of a bound variable, in an abstraction, does not (usually) matter. For instance, λx.x and λy.y are alpha-equivalent lambda terms, and they both represent the same function (the identity function). The terms x and y are not alpha-equivalent, because they are not bound in an abstraction. In many presentations, it is usual to identify alpha-equivalent lambda terms. 
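The identity, constant-function, and composition terms just described map directly onto Python lambdas; this is a sketch of the shapes only, since Python evaluation is not β-reduction:

```python
identity = lambda x: x                            # λx.x
constant = lambda y: (lambda x: y)                # x ↦ y, with y fixed
compose  = lambda f: lambda g: lambda x: f(g(x))  # λf.λg.λx.(f (g x))

identity(7)                                    # 7
constant("y")("anything")                      # 'y', regardless of input
compose(lambda n: n + 1)(lambda n: n * 2)(10)  # 21: doubles, then adds 1
```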
The following definitions are necessary in order to be able to define β-reduction: The free variables[h] of a term are those variables not bound by an abstraction. The set of free variables of an expression is defined inductively: For example, the lambda term representing the identity λx.x has no free variables, but the function λx.yx has a single free variable, y. Suppose t, s and r are lambda terms, and x and y are variables. The notation t[x:=r] indicates substitution of r for x in t in a capture-avoiding manner. This is defined so that: For example, (λx.x)[y:=y] = λx.(x[y:=y]) = λx.x, and ((λx.y)x)[x:=y] = ((λx.y)[x:=y])(x[x:=y]) = (λx.y)y. The freshness condition (requiring that y is not in the free variables of r) is crucial in order to ensure that substitution does not change the meaning of functions. For example, a substitution that ignores the freshness condition could lead to errors: (λx.y)[y:=x] = λx.(y[y:=x]) = λx.x. This erroneous substitution would turn the constant function λx.y into the identity λx.x. In general, failure to meet the freshness condition can be remedied by alpha-renaming first, with a suitable fresh variable. 
For example, switching back to our correct notion of substitution, in (λx.y)[y:=x] the abstraction can be renamed with a fresh variable z, to obtain (λz.y)[y:=x] = λz.(y[y:=x]) = λz.x, and the meaning of the function is preserved by substitution. The β-reduction rule[b] states that an application of the form (λx.t)s reduces to the term t[x:=s]. The notation (λx.t)s → t[x:=s] is used to indicate that (λx.t)s β-reduces to t[x:=s]. For example, for every s, (λx.x)s → x[x:=s] = s. This demonstrates that λx.x really is the identity. Similarly, (λx.y)s → y[x:=s] = y, which demonstrates that λx.y is a constant function. The lambda calculus may be seen as an idealized version of a functional programming language, like Haskell or Standard ML. Under this view, β-reduction corresponds to a computational step. This step can be repeated by additional β-reductions until there are no more applications left to reduce. In the untyped lambda calculus, as presented here, this reduction process may not terminate. For instance, consider the term Ω = (λx.xx)(λx.xx). Here (λx.xx)(λx.xx) → (xx)[x:=λx.xx] = (x[x:=λx.xx])(x[x:=λx.xx]) = (λx.xx)(λx.xx). That is, the term reduces to itself in a single β-reduction, and therefore the reduction process will never terminate. Another aspect of the untyped lambda calculus is that it does not distinguish between different kinds of data. 
For instance, it may be desirable to write a function that only operates on numbers. However, in the untyped lambda calculus, there is no way to prevent a function from being applied to truth values, strings, or other non-number objects. Lambda expressions are composed of: The set of lambda expressions, Λ, can be defined inductively: Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications.[18] See § reducible expression. This set of rules may be written in Backus–Naur form as: To keep the notation of lambda expressions uncluttered, the following conventions are usually applied: The abstraction operator, λ, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to be bound. In an expression λx.M, the part λx is often called the binder, as a hint that the variable x is getting bound by prepending λx to M. All other variables are called free. For example, in the expression λy.x x y, y is a bound variable and x is a free variable. Also, a variable is bound by its nearest abstraction. In the following example the single occurrence of x in the expression is bound by the second lambda: λx.y (λx.z x). The set of free variables of a lambda expression, M, is denoted as FV(M) and is defined by recursion on the structure of the terms, as follows: An expression that contains no free variables is said to be closed. Closed lambda expressions are also known as combinators and are equivalent to terms in combinatory logic. The meaning of lambda expressions is defined by how expressions can be reduced.[22] There are three kinds of reduction: We also speak of the resulting equivalences: two expressions are α-equivalent, if they can be α-converted into the same expression. β-equivalence and η-equivalence are defined similarly. The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules. 
For example, (λx.M)N is a β-redex in expressing the substitution of N for x in M. The expression to which a redex reduces is called its reduct; the reduct of (λx.M)N is M[x:=N].[b] If x is not free in M, λx.M x is also an η-redex, with a reduct of M. α-conversion (alpha-conversion), sometimes known as α-renaming,[23] allows bound variable names to be changed. For example, α-conversion of λx.x might yield λy.y. Terms that differ only by α-conversion are called α-equivalent. Frequently, in uses of lambda calculus, α-equivalent terms are considered to be equivalent. The precise rules for α-conversion are not completely trivial. First, when α-converting an abstraction, the only variable occurrences that are renamed are those that are bound to the same abstraction. For example, an α-conversion of λx.λx.x could result in λy.λx.x, but it could not result in λy.λx.y. The latter has a different meaning from the original. This is analogous to the programming notion of variable shadowing. Second, α-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replace x with y in λx.λy.x, we get λy.λy.y, which is not at all the same. In programming languages with static scope, α-conversion can be used to make name resolution simpler by ensuring that no variable name masks a name in a containing scope (see α-renaming to make name resolution trivial). In the De Bruijn index notation, any two α-equivalent terms are syntactically identical. Substitution, written M[x:=N], is the process of replacing all free occurrences of the variable x in the expression M with expression N. Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any lambda expression): To substitute into an abstraction, it is sometimes necessary to α-convert the expression. 
For example, it is not correct for (λx.y)[y:=x] to result in λx.x, because the substituted x was supposed to be free but ended up being bound. The correct substitution in this case is λz.x, up to α-equivalence. Substitution is defined uniquely up to α-equivalence. See Capture-avoiding substitutions above. β-reduction (beta reduction) captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of (λx.M)N is M[x:=N].[b] For example, assuming some encoding of 2, 7, ×, we have the following β-reduction: (λn.n × 2) 7 → 7 × 2. β-reduction can be seen to be the same as the concept of local reducibility in natural deduction, via the Curry–Howard isomorphism. η-conversion (eta conversion) expresses the idea of extensionality,[24] which in this context is that two functions are the same if and only if they give the same result for all arguments. η-conversion converts between λx.fx and f whenever x does not appear free in f. η-reduction changes λx.fx to f, and η-expansion changes f to λx.fx, under the same requirement that x does not appear free in f. η-conversion can be seen to be the same as the concept of local completeness in natural deduction, via the Curry–Howard isomorphism. For the untyped lambda calculus, β-reduction as a rewriting rule is neither strongly normalising nor weakly normalising. However, it can be shown that β-reduction is confluent when working up to α-conversion (i.e. we consider two normal forms to be equal if it is possible to α-convert one into the other). Therefore, both strongly normalising terms and weakly normalising terms have a unique normal form. For strongly normalising terms, any reduction strategy is guaranteed to yield the normal form, whereas for weakly normalising terms, some reduction strategies may fail to find it. The basic lambda calculus may be used to model arithmetic, Booleans, data structures, and recursion, as illustrated in the following sub-sections i, ii, iii, and § iv. 
There are several possible ways to define the natural numbers in lambda calculus, but by far the most common are the Church numerals, which can be defined as follows: and so on. Or using the alternative syntax presented above in Notation: A Church numeral is a higher-order function: it takes a single-argument function f, and returns another single-argument function. The Church numeral n is a function that takes a function f as argument and returns the n-th composition of f, i.e. the function f composed with itself n times. This is denoted f(n) and is in fact the n-th power of f (considered as an operator); f(0) is defined to be the identity function. Such repeated compositions (of a single function f) obey the laws of exponents, which is why these numerals can be used for arithmetic. (In Church's original lambda calculus, the formal parameter of a lambda expression was required to occur at least once in the function body, which made the above definition of 0 impossible.) One way of thinking about the Church numeral n, which is often useful when analysing programs, is as an instruction 'repeat n times'. For example, using the PAIR and NIL functions defined below, one can define a function that constructs a (linked) list of n elements all equal to x by repeating 'prepend another x element' n times, starting from an empty list. The lambda term is By varying what is being repeated, and varying what argument that function being repeated is applied to, a great many different effects can be achieved. We can define a successor function, which takes a Church numeral n and returns n + 1 by adding another application of f, where '(mf)x' means the function 'f' is applied 'm' times on 'x': Because the m-th composition of f composed with the n-th composition of f gives the m+n-th composition of f, addition can be defined as follows: PLUS can be thought of as a function taking two natural numbers as arguments and returning a natural number; it can be verified that and are β-equivalent lambda expressions. 
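The Church numerals, SUCC, and PLUS described above transcribe directly into Python lambdas; to_int is an auxiliary decoder of our own, not part of the encoding:

```python
ZERO = lambda f: lambda x: x                     # apply f zero times
ONE  = lambda f: lambda x: f(x)                  # apply f once
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))  # one more application of f
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a numeral by counting how many times it applies f.
    return n(lambda k: k + 1)(0)

to_int(SUCC(ONE))              # 2
to_int(PLUS(SUCC(ONE))(ONE))   # 3
```

PLUS composes the m-fold application of f with the n-fold application, exactly the "m-th composition composed with the n-th composition" reasoning from the text.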
Since adding m to a number n can be accomplished by adding 1 m times, an alternative definition is:

PLUS := λm.λn.m SUCC n

Similarly, multiplication can be defined as

MULT := λm.λn.λf.m (n f)

Alternatively,

MULT := λm.λn.m (PLUS n) 0

since multiplying m and n is the same as repeating the 'add n' function m times and then applying it to zero. Exponentiation has a rather simple rendering in Church numerals, namely

POW := λb.λe.e b

The predecessor function, defined by PRED n = n − 1 for a positive integer n and PRED 0 = 0, is considerably more difficult. The formula

PRED := λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u)

can be validated by showing inductively that if T denotes (λg.λh.h (g f)), then T(n) (λu.x) = (λh.h (f(n−1) (x))) for n > 0. Two other definitions of PRED are given below, one using conditionals and the other using pairs. With the predecessor function, subtraction is straightforward. Defining

SUB := λm.λn.n PRED m,

SUB m n yields m − n when m > n and 0 otherwise.

By convention, the following two definitions (known as Church Booleans) are used for the Boolean values TRUE and FALSE:

TRUE := λx.λy.x
FALSE := λx.λy.y

Then, with these two lambda terms, we can define some logic operators (these are just possible formulations; other expressions could be equally correct):

AND := λp.λq.p q p
OR := λp.λq.p p q
NOT := λp.p FALSE TRUE
IFTHENELSE := λp.λa.λb.p a b

We are now able to compute some logic functions, for example AND TRUE FALSE, and we see that AND TRUE FALSE is equivalent to FALSE.

A predicate is a function that returns a Boolean value. The most fundamental predicate is ISZERO, which returns TRUE if its argument is the Church numeral 0, but FALSE if its argument is any other Church numeral:

ISZERO := λn.n (λx.FALSE) TRUE

The following predicate tests whether the first argument is less-than-or-equal-to the second:

LEQ := λm.λn.ISZERO (SUB m n),

and since m = n if LEQ m n and LEQ n m, it is straightforward to build a predicate for numerical equality.

The availability of predicates and the above definitions of TRUE and FALSE make it convenient to write "if-then-else" expressions in lambda calculus. For example, the predecessor function can be defined as:

PRED := λn.n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0) 0

which can be verified by showing inductively that n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0) is the 'add n − 1' function for n > 0.

A pair (2-tuple) can be defined in terms of TRUE and FALSE, by using the Church encoding for pairs.
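The arithmetic and Boolean definitions in this section can be checked mechanically by transcribing them into Python. A sketch (to_int is an inspection helper, not part of the encoding):

```python
# Church numerals, arithmetic, and Booleans as Python lambdas
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MULT = lambda m: lambda n: lambda f: m(n(f))

TRUE = lambda x: lambda y: x   # λx.λy.x
FALSE = lambda x: lambda y: y  # λx.λy.y
AND = lambda p: lambda q: p(q)(p)
ISZERO = lambda n: n(lambda _: FALSE)(TRUE)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(PLUS(TWO)(THREE)))  # 5
print(to_int(MULT(TWO)(THREE)))  # 6
print(ISZERO(ZERO) is TRUE)      # True
print(AND(TRUE)(FALSE) is FALSE) # True
```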
For example, PAIR encapsulates the pair (x, y), FIRST returns the first element of the pair, and SECOND returns the second:

PAIR := λx.λy.λf.f x y
FIRST := λp.p TRUE
SECOND := λp.p FALSE

A linked list can be defined as either NIL for the empty list, or the PAIR of an element and a smaller list. The predicate NULL tests for the value NIL. (Alternatively, with NIL := FALSE, the construct l (λh.λt.λz.deal_with_head_h_and_tail_t) (deal_with_nil) obviates the need for an explicit NULL test.)

As an example of the use of pairs, the shift-and-increment function that maps (m, n) to (n, n + 1) can be defined as

Φ := λx.PAIR (SECOND x) (SUCC (SECOND x))

which allows us to give perhaps the most transparent version of the predecessor function:

PRED := λn.FIRST (n Φ (PAIR 0 0))

There is a considerable body of programming idioms for lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation for programming language semantics, effectively using lambda calculus as a low-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign.

In lambda calculus, a library would take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants. The pure lambda calculus does not have a concept of named constants, since all atomic lambda-terms are variables, but one can emulate having named constants by setting aside a variable as the name of the constant, using abstraction to bind that variable in the main body, and applying that abstraction to the intended definition. Thus to use f to mean N (some explicit lambda-term) in M (another lambda-term, the "main program"), one can say

(λf.M) N

Authors often introduce syntactic sugar, such as let,[k] to permit writing the above in the more intuitive order

let f = N in M

By chaining such definitions, one can write a lambda calculus "program" as zero or more function definitions, followed by one lambda-term using those functions that constitutes the main body of the program.
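The pair encoding and the pair-based predecessor can likewise be transcribed into Python (a sketch; to_int is an inspection helper):

```python
TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
PAIR = lambda x: lambda y: lambda f: f(x)(y)
FIRST = lambda p: p(TRUE)
SECOND = lambda p: p(FALSE)

ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

# shift-and-increment: maps (m, n) to (n, n + 1)
PHI = lambda p: PAIR(SECOND(p))(SUCC(SECOND(p)))
# PRED n = first component after n shift-and-increments, starting from (0, 0)
PRED = lambda n: FIRST(n(PHI)(PAIR(ZERO)(ZERO)))

THREE = SUCC(SUCC(SUCC(ZERO)))
to_int = lambda n: n(lambda k: k + 1)(0)
print(to_int(PRED(THREE)))  # 2
print(to_int(PRED(ZERO)))   # 0
```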
A notable restriction of this let is that the name f may not be referenced in N, for N is outside the scope of the abstraction binding f, which is M; this means a recursive function definition cannot be written with let. The letrec[l] construction would allow writing recursive function definitions, where the scope of the abstraction binding f includes N as well as M. Alternatively, self-application, in the style that leads to the Y combinator, can be used.

Recursion is when a function invokes itself. How could a value represent such a function? It has to refer to itself somehow inside itself, just as the definition refers to itself inside itself. If this value were to contain itself by value, it would have to be of infinite size, which is impossible. Other notations, which support recursion natively, overcome this by referring to the function by name inside its definition. Lambda calculus cannot express this, since in it there simply are no names for terms to begin with, only arguments' names, i.e. parameters in abstractions. However, a lambda expression can receive itself as its argument and refer to (a copy of) itself via the corresponding parameter's name. This will work fine if it is indeed called with itself as an argument. For example, (λx.x x) E = (E E) will express recursion when E is an abstraction which applies its parameter to itself inside its body to express a recursive call. Since this parameter receives E as its value, its self-application will be the same (E E) again.

As a concrete example, consider the factorial function F(n), recursively defined by

F(n) = 1, if n = 0; else n × F(n − 1)

In the lambda expression which is to represent this function, a parameter (typically the first one) will be assumed to receive the lambda expression itself as its value, so that calling it with itself as its first argument will amount to the recursive call.
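The (λf.M) N idiom for 'let f = N in M' can be written out directly in Python. In this hypothetical example, N is an increment function and the "main body" M applies f twice:

```python
# 'let f = (λx. x + 1) in f (f 3)', written as (λf. f (f 3)) (λx. x + 1)
result = (lambda f: f(f(3)))(lambda x: x + 1)
print(result)  # 5
```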
Thus to achieve recursion, the intended-as-self-referencing argument (called s here, reminiscent of "self", or "self-applying") must always be passed to itself within the function body at a recursive call point:

F := E E, where E := λs.λn.(1, if n = 0; else n × (s s (n − 1)))

and we have F n = E E n = n!.

Here s s becomes the same (E E) inside the result of the application (E E), and using the same function for a call is the definition of what recursion is. The self-application achieves replication here, passing the function's lambda expression on to the next invocation as an argument value, making it available to be referenced there by the parameter name s, to be called via the self-application s s, again and again as needed, each time re-creating the lambda term F = E E. The application is an additional step, just as a name lookup would be, and it has the same delaying effect: instead of containing F inside itself as a whole up-front, the term delays F's re-creation until the next call, which is made possible by the two finite lambda terms E inside it re-creating it on the fly as needed.

This self-applicational approach solves the problem, but requires re-writing each recursive call as a self-application. We would like to have a generic solution, without the need for any re-writes: given a lambda term whose first argument represents the recursive call (e.g. G here), the fixed-point combinator FIX will return a self-replicating lambda expression representing the recursive function (here, F). The function does not need to be explicitly passed to itself at any point, for the self-replication is arranged in advance, when it is created, to be done each time it is called. Thus the original lambda expression (FIX G) is re-created inside itself, at call-point, achieving self-reference.

In fact, there are many possible definitions for this FIX operator, the simplest of them being:

Y := λg.(λx.g (x x)) (λx.g (x x))

In the lambda calculus, Y g is a fixed point of g, as it expands to:

Y g = (λx.g (x x)) (λx.g (x x)) = g ((λx.g (x x)) (λx.g (x x))) = g (Y g)

Now, to perform the recursive call to the factorial function for an argument n, we would simply call (Y G) n.
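The F = E E construction works unchanged in Python, since Python lambdas are just untyped functions. Here E applies its parameter s to itself at the recursive call point, exactly as described above (using Python's built-in numbers and conditional in place of Church encodings):

```python
# E = λs.λn. (1, if n = 0; else n × (s s (n − 1)))
E = lambda s: lambda n: 1 if n == 0 else n * s(s)(n - 1)
FACT = E(E)  # F = E E
print(FACT(4))  # 24
```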
Given n = 4, for example, (Y G) 4 expands step by step, through repeated re-creations of (Y G), to 4 × 3 × 2 × 1 = 24.

Every recursively defined function can be seen as a fixed point of some suitably defined higher-order function (also known as a functional) closing over the recursive call with an extra argument. Therefore, using Y, every recursive function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication, and comparison predicates of natural numbers recursively.

When the Y combinator is coded directly in a strict programming language, the applicative order of evaluation used in such languages will cause an attempt to fully expand the internal self-application (x x) prematurely, causing stack overflow or, in the case of tail call optimization, indefinite looping.[27] A delayed variant of Y, the Z combinator, can be used in such languages. It has the internal self-application hidden behind an extra abstraction through eta-expansion, as (λv.x x v), thus preventing its premature expansion:[28]

Z := λg.(λx.g (λv.x x v)) (λx.g (λv.x x v))

Certain terms have commonly accepted names:[29][30][31]

I := λx.x
K := λx.λy.x
S := λx.λy.λz.x z (y z)
B := λx.λy.λz.x (y z)
C := λx.λy.λz.x z y
W := λx.λy.x y y
U := λx.x x

I is the identity function. SK and BCKW form complete combinator calculus systems that can express any lambda term; see the next section. Ω is U U, the smallest term that has no normal form; Y I is another such term. Y is standard and defined above, and can also be defined as Y = B U (C B U), so that Y g = g (Y g). TRUE and FALSE, defined above, are commonly abbreviated as T and F.

If N is a lambda-term without abstraction, but possibly containing named constants (combinators), then there exists a lambda-term T(x, N) which is equivalent to λx.N but lacks abstraction (except as part of the named constants, if these are considered non-atomic). This can also be viewed as anonymising variables, as T(x, N) removes all occurrences of x from N, while still allowing argument values to be substituted into the positions where N contains an x.
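Because Python is strict, the plain Y combinator loops forever (the self-application x x is expanded eagerly), but the η-expanded Z combinator works as described. A sketch:

```python
# Z = λg.(λx.g (λv. x x v)) (λx.g (λv. x x v))
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

# G closes over the recursive call r with an extra argument
G = lambda r: lambda n: 1 if n == 0 else n * r(n - 1)
fact = Z(G)
print(fact(5))  # 120
```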
The conversion function T can be defined by:

T(x, x) := I
T(x, N) := K N, if x does not occur free in N
T(x, M N) := S T(x, M) T(x, N)

In either case, a term of the form T(x, N) P can reduce by having the initial combinator I, K, or S grab the argument P, just like β-reduction of (λx.N) P would do. I returns that argument. K throws the argument away, just like (λx.N) would do if x has no free occurrence in N. S passes the argument on to both subterms of the application, and then applies the result of the first to the result of the second.

The combinators B and C are similar to S, but pass the argument on to only one subterm of an application (B to the "argument" subterm and C to the "function" subterm), thus saving a subsequent K if there is no occurrence of x in one subterm. In comparison to B and C, the S combinator actually conflates two functionalities: rearranging arguments, and duplicating an argument so that it may be used in two places. The W combinator does only the latter, yielding the B, C, K, W system as an alternative to the SKI combinator calculus.

A typed lambda calculus is a typed formalism that uses the lambda symbol (λ) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see Kinds of typed lambda calculi). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus, but from another point of view, they can also be considered the more fundamental theory, with untyped lambda calculus a special case with only one type.[32]

Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here typability usually captures desirable properties of the program, e.g., the program will not cause a memory access violation.
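The named combinators behave as described when written as Python lambdas; for example, S K K acts as the identity, which is why the abstraction-elimination rules above can translate T(x, x) to I or, equivalently, to S K K (a sketch):

```python
I = lambda x: x
K = lambda x: lambda y: x                       # λx.λy.x
S = lambda f: lambda g: lambda x: f(x)(g(x))    # λx.λy.λz.x z (y z)
B = lambda f: lambda g: lambda x: f(g(x))       # λx.λy.λz.x (y z)
C = lambda f: lambda g: lambda x: f(x)(g)       # λx.λy.λz.x z y

# S K K x = K x (K x) = x, so S K K is extensionally the identity
print(S(K)(K)(42))  # 42
```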
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism, and they can be considered as the internal language of classes of categories; e.g., the simply typed lambda calculus is the language of a Cartesian closed category (CCC).

Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used.[33][34][35] Weak reduction strategies, such as call by name and call by value, do not reduce under lambda abstractions. Strategies with sharing, such as call by need, reduce computations that are "the same" in parallel.

There is no algorithm that takes as input any two lambda expressions and outputs TRUE or FALSE depending on whether one expression reduces to the other.[13] More precisely, no computable function can decide the question. This was historically the first problem for which undecidability could be proven. As usual for such a proof, computable means computable by any model of computation that is Turing complete. In fact computability can itself be defined via the lambda calculus: a function F : N → N of natural numbers is a computable function if and only if there exists a lambda expression f such that for every pair of x, y in N, F(x) = y if and only if f x =β y, where x and y are the Church numerals corresponding to x and y, respectively, and =β means equivalence with β-reduction. See the Church–Turing thesis for other approaches to defining computability and their equivalence.

Church's proof of uncomputability first reduces the problem to determining whether a given lambda expression has a normal form. He then assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression e that closely follows the proof of Gödel's first incompleteness theorem. If e is applied to its own Gödel number, a contradiction results.
The notion of computational complexity for the lambda calculus is a bit tricky, because the cost of a β-reduction may vary depending on how it is implemented.[36] To be precise, one must somehow find the location of all of the occurrences of the bound variable V in the expression E, implying a time cost, or one must keep track of the locations of free variables in some way, implying a space cost. A naïve search for the locations of V in E is O(n) in the length n of E. Director strings were an early approach that traded this time cost for quadratic space usage.[37] More generally, this has led to the study of systems that use explicit substitution.

In 2014, it was shown that the number of β-reduction steps taken by normal order reduction to reduce a term is a reasonable time cost model; that is, the reduction can be simulated on a Turing machine in time polynomially proportional to the number of steps.[38] This was a long-standing open problem, due to size explosion: the existence of lambda terms which grow exponentially in size for each β-reduction. The result gets around this by working with a compact shared representation. The result makes clear that the amount of space needed to evaluate a lambda term is not proportional to the size of the term during reduction. It is not currently known what a good measure of space complexity would be.[39]

An unreasonable model does not necessarily mean inefficient. Optimal reduction reduces all computations with the same label in one step, avoiding duplicated work, but the number of parallel β-reduction steps to reduce a given term to normal form is approximately linear in the size of the term. This is far too small to be a reasonable cost measure, as any Turing machine may be encoded in the lambda calculus in size linearly proportional to the size of the Turing machine.
The true cost of reducing lambda terms is not due to β-reduction per se but rather the handling of the duplication of redexes during β-reduction.[40] It is not known whether optimal reduction implementations are reasonable when measured with respect to a reasonable cost model such as the number of leftmost-outermost steps to normal form, but it has been shown for fragments of the lambda calculus that the optimal reduction algorithm is efficient and has at most a quadratic overhead compared to leftmost-outermost.[39] In addition, the BOHM prototype implementation of optimal reduction outperformed both Caml Light and Haskell on pure lambda terms.[40]

As pointed out by Peter Landin's 1965 paper "A Correspondence between ALGOL 60 and Church's Lambda-notation",[41] sequential procedural programming languages can be understood in terms of the lambda calculus, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application. For example, in Python the "square" function can be expressed as the lambda expression lambda x: x**2. The above example is an expression that evaluates to a first-class function. The symbol lambda creates an anonymous function, given a list of parameter names, x (just a single parameter in this case), and an expression that is evaluated as the body of the function, x**2. Anonymous functions are sometimes called lambda expressions.

For comparison, Pascal and many other imperative languages have long supported passing subprograms as arguments to other subprograms through the mechanism of function pointers. However, function pointers are an insufficient condition for functions to be first-class datatypes, because a function is a first-class datatype if and only if new instances of the function can be created at runtime. Such runtime creation of functions is supported in Smalltalk, JavaScript, Wolfram Language, and more recently in Scala, Eiffel (as agents), C# (as delegates) and C++11, among others.
The Church–Rosser property of the lambda calculus means that evaluation (β-reduction) can be carried out in any order, even in parallel. This means that various nondeterministic evaluation strategies are relevant. However, the lambda calculus does not offer any explicit constructs for parallelism. One can add constructs such as futures to the lambda calculus. Other process calculi have been developed for describing communication and concurrency.

The fact that lambda calculus terms act as functions on other lambda calculus terms, and even on themselves, led to questions about the semantics of the lambda calculus. Could a sensible meaning be assigned to lambda calculus terms? The natural semantics was to find a set D isomorphic to the function space D → D of functions on itself. However, no nontrivial such D can exist, by cardinality constraints, because the set of all functions from D to D has greater cardinality than D unless D is a singleton set. In the 1970s, Dana Scott showed that if only continuous functions were considered, a set or domain D with the required property could be found, thus providing a model for the lambda calculus.[42] This work also formed the basis for the denotational semantics of programming languages.

The lambda calculus has been extended in many directions: by the systems of the lambda cube, by formal systems that lie outside the lambda cube, and by various variations of and systems related to the lambda calculus.

Some parts of this article are based on material from FOLDOC, used with permission.
https://en.wikipedia.org/wiki/Lambda_calculus
System F (also polymorphic lambda calculus or second-order lambda calculus) is a typed lambda calculus that introduces, to simply typed lambda calculus, a mechanism of universal quantification over types. System F formalizes parametric polymorphism in programming languages, thus forming a theoretical basis for languages such as Haskell and ML. It was discovered independently by logician Jean-Yves Girard (1972) and computer scientist John C. Reynolds.

Whereas simply typed lambda calculus has variables ranging over terms, and binders for them, System F additionally has variables ranging over types, and binders for them. As an example, the fact that the identity function can have any type of the form A → A would be formalized in System F as the judgement

⊢ Λα.λx^α.x : ∀α.α → α

where α is a type variable. The upper-case Λ is traditionally used to denote type-level functions, as opposed to the lower-case λ which is used for value-level functions. (The superscripted α means that the bound variable x is of type α; the expression after the colon is the type of the lambda expression preceding it.)

As a term rewriting system, System F is strongly normalizing. However, type inference in System F (without explicit type annotations) is undecidable. Under the Curry–Howard isomorphism, System F corresponds to the fragment of second-order intuitionistic logic that uses only universal quantification. System F can be seen as part of the lambda cube, together with even more expressive typed lambda calculi, including those with dependent types.

According to Girard, the "F" in System F was picked by chance.[1]

The typing rules of System F are those of simply typed lambda calculus with the addition of the following two rules:

from Γ ⊢ M : ∀α.σ, conclude Γ ⊢ M τ : σ[α := τ]
from Γ, α type ⊢ M : σ, conclude Γ ⊢ Λα.M : ∀α.σ

where σ, τ are types, α is a type variable, and α type in the context indicates that α is bound.
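The polymorphic identity ⊢ Λα.λx^α.x : ∀α.α → α corresponds to a generic function in languages whose type systems descend from System F. In Python's much weaker type-hint notation, the type abstraction Λα is implicit and erased at runtime, but the shape is recognizable (a sketch; the names are illustrative):

```python
from typing import TypeVar

A = TypeVar("A")  # plays the role of the type variable α

def identity(x: A) -> A:  # λx:α. x, with the Λα binder left implicit
    return x

print(identity(3), identity("abc"))  # 3 abc
```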
The first rule is that of application, and the second is that of abstraction.[2][3]

The Boolean type is defined as ∀α.α → α → α, where α is a type variable. This means: Boolean is the type of all functions which take as input a type α and two expressions of type α, and produce as output an expression of type α (note that we consider → to be right-associative). The following two definitions for the Boolean values T and F are used, extending the definition of Church Booleans:

T := Λα.λx^α.λy^α.x
F := Λα.λx^α.λy^α.y

(Note that the above two functions require three, not two, arguments. The latter two should be lambda expressions, but the first one should be a type. This fact is reflected in the type of these expressions, ∀α.α → α → α; the universal quantifier binding α corresponds to the Λ binding α in the lambda expression itself. Also, note that Boolean is a convenient shorthand for ∀α.α → α → α; it is not a symbol of System F itself, but rather a "meta-symbol". Likewise, T and F are also "meta-symbols": convenient shorthands for System F "assemblies" (in the Bourbaki sense). Otherwise, if such functions could be named within System F, there would be no need for the lambda-expressive apparatus capable of defining functions anonymously, nor for the fixed-point combinator, which works around that restriction.)
Then, with these two λ-terms, we can define some logic operators (which are of type Boolean → Boolean → Boolean):

AND := λx^Boolean.λy^Boolean.x Boolean y F
OR := λx^Boolean.λy^Boolean.x Boolean T y
NOT := λx^Boolean.x Boolean F T

Note that in the definitions above, Boolean is a type argument to x, specifying that the other two parameters that are given to x are of type Boolean. As in Church encodings, there is no need for an IFTHENELSE function, as one can just use raw Boolean-typed terms as decision functions. However, if one is requested:

IFTHENELSE := Λα.λx^Boolean.λy^α.λz^α.x α y z

will do.

A predicate is a function which returns a Boolean-typed value. The most fundamental predicate is ISZERO, which returns T if and only if its argument is the Church numeral 0.

System F allows recursive constructions to be embedded in a natural manner, related to that in Martin-Löf's type theory. Abstract structures (S) are created using constructors. These are functions typed as:

K1 → K2 → … → S

Recursivity is manifested when S itself appears within one of the types Ki. If you have m of these constructors, you can define the type of S by quantifying over a type variable α and replacing S with α throughout the constructor types, as in the example below. For instance, the natural numbers can be defined as an inductive datatype N with constructors

Zero : N
Succ : N → N

The System F type corresponding to this structure is ∀α.α → (α → α) → α. The terms of this type comprise a typed version of the Church numerals, the first few of which are:

0 := Λα.λx^α.λf^(α→α).x
1 := Λα.λx^α.λf^(α→α).f x
2 := Λα.λx^α.λf^(α→α).f (f x)

If we reverse the order of the curried arguments (i.e., ∀α.(α → α) → α → α), then the Church numeral for n is a function that takes a function f as argument and returns the n-th power of f.
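Erasing the type abstractions and type applications from T, F, and AND above yields the untyped Church Booleans; they can be sketched in Python, with type hints standing in for the rank-2 type ∀α.α → α → α that Python's annotation language cannot actually express:

```python
from typing import Callable, TypeVar

A = TypeVar("A")

def TRUE(x: A) -> Callable[[A], A]:   # Λα.λx^α.λy^α.x, types erased
    return lambda y: x

def FALSE(x: A) -> Callable[[A], A]:  # Λα.λx^α.λy^α.y, types erased
    return lambda y: y

# AND = λx.λy. x Boolean y F, with the type application 'x Boolean' erased
AND = lambda x: lambda y: x(y)(FALSE)

print(AND(TRUE)(FALSE) is FALSE)  # True
```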
That is to say, a Church numeral is a higher-order function: it takes a single-argument function f, and returns another single-argument function.

The version of System F used in this article is an explicitly typed, or Church-style, calculus. The typing information contained in λ-terms makes type-checking straightforward. Joe Wells (1994) settled an "embarrassing open problem" by proving that type checking is undecidable for a Curry-style variant of System F, that is, one that lacks explicit typing annotations.[4][5]

Wells's result implies that type inference for System F is impossible. A restriction of System F known as "Hindley–Milner", or simply "HM", does have an easy type inference algorithm and is used for many statically typed functional programming languages such as Haskell 98 and the ML family. Over time, as the restrictions of HM-style type systems have become apparent, languages have steadily moved to more expressive logics for their type systems. GHC, a Haskell compiler, goes beyond HM (as of 2008) and uses System F extended with non-syntactic type equality;[6] non-HM features in OCaml's type system include GADTs.[7][8]

In second-order intuitionistic logic, the second-order polymorphic lambda calculus (F2) was discovered by Girard (1972) and independently by Reynolds (1974).[9] Girard proved the Representation Theorem: that in second-order intuitionistic predicate logic (P2), the functions from the natural numbers to the natural numbers that can be proved total form a projection from P2 into F2.[9] Reynolds proved the Abstraction Theorem: that every term in F2 satisfies a logical relation, which can be embedded into the logical relations of P2.[9] Reynolds also proved that a Girard projection followed by a Reynolds embedding forms the identity, i.e., the Girard–Reynolds isomorphism.[9]

While System F corresponds to the first axis of Barendregt's lambda cube, System Fω or the higher-order polymorphic lambda calculus combines the first axis (polymorphism) with the second axis (type operators); it is
a different, more complex system. System Fω can be defined inductively on a family of systems Fn, where the induction is based on the kinds permitted in each system; in the limit, system Fω is the union of the systems Fn for all n. That is, Fω is the system which allows functions from types to types, where the argument (and result) may be of any order.

Note that although Fω places no restrictions on the order of the arguments in these mappings, it does restrict the universe of the arguments for these mappings: they must be types rather than values. System Fω does not permit mappings from values to types (dependent types), though it does permit mappings from values to values (λ abstraction), mappings from types to values (Λ abstraction), and mappings from types to types (λ abstraction at the level of types).

System F<:, pronounced "F-sub", is an extension of System F with subtyping. System F<: has been of central importance to programming language theory since the 1980s[citation needed] because the core of functional programming languages, like those in the ML family, supports both parametric polymorphism and record subtyping, which can be expressed in System F<:.[10][11]
https://en.wikipedia.org/wiki/System_F