Inmathematics, specificallyalgebraic geometry, aschemeis astructurethat enlarges the notion ofalgebraic varietyin several ways, such as taking account ofmultiplicities(the equationsx= 0andx2= 0define the same algebraic variety but different schemes) and allowing "varieties" defined over anycommutative ring(for example,Fermat curvesare defined over theintegers). Scheme theorywas introduced byAlexander Grothendieckin 1960 in his treatiseÉléments de géométrie algébrique(EGA); one of its aims was developing the formalism needed to solve deep problems ofalgebraic geometry, such as theWeil conjectures(the last of which was proved byPierre Deligne).[1]Strongly based oncommutative algebra, scheme theory allows a systematic use of methods oftopologyandhomological algebra. Scheme theory also unifies algebraic geometry with much ofnumber theory, which eventually led toWiles's proof of Fermat's Last Theorem. Schemes elaborate the fundamental idea that an algebraic variety is best analyzed through thecoordinate ringof regular algebraic functions defined on it (or on its subsets), and each subvariety corresponds to theidealof functions which vanish on the subvariety. Intuitively, a scheme is atopological spaceconsisting of closed points which correspond to geometric points, together with non-closed points which aregeneric pointsof irreducible subvarieties. The space is covered by anatlasof open sets, each endowed with a coordinate ring of regular functions, with specified coordinate changes between the functions over intersecting open sets. Such a structure is called aringed spaceor asheafof rings. The cases of main interest are theNoetherian schemes, in which the coordinate rings areNoetherian rings. Formally, a scheme is a ringed space covered by affine schemes. An affine scheme is thespectrumof a commutative ring; its points are theprime idealsof the ring, and its closed points aremaximal ideals. The coordinate ring of an affine scheme is the ring itself, and the coordinate rings of open subsets arerings of fractions. Therelative point of viewis that much of algebraic geometry should be developed for a morphismX→Yof schemes (called a schemeXover the baseY), rather than for an individual scheme. For example, in studyingalgebraic surfaces, it can be useful to consider families of algebraic surfaces over any schemeY. In many cases, the family of all varieties of a given type can itself be viewed as a variety or scheme, known as amoduli space. For some of the detailed definitions in the theory of schemes, see theglossary of scheme theory. The origins of algebraic geometry mostly lie in the study ofpolynomialequations over thereal numbers. By the 19th century, it became clear (notably in the work ofJean-Victor PonceletandBernhard Riemann) that algebraic geometry over the real numbers is simplified by working over thefieldofcomplex numbers, which has the advantage of beingalgebraically closed.[2]The early 20th century saw analogies between algebraic geometry and number theory, suggesting the question: can algebraic geometry be developed over other fields, such as those with positivecharacteristic, and more generally overnumber ringslike the integers, where the tools of topology andcomplex analysisused to study complex varieties do not seem to apply? Hilbert's Nullstellensatzsuggests an approach to algebraic geometry over any algebraically closed fieldk: themaximal idealsin thepolynomial ringk[x1, ... 
,xn]are in one-to-one correspondence with the setknofn-tuples of elements ofk, and theprime idealscorrespond to the irreducible algebraic sets inkn, known as affine varieties. Motivated by these ideas,Emmy NoetherandWolfgang Krulldeveloped commutative algebra in the 1920s and 1930s.[3]Their work generalizes algebraic geometry in a purely algebraic direction, generalizing the study of points (maximal ideals in a polynomial ring) to the study of prime ideals in any commutative ring. For example, Krull defined thedimensionof a commutative ring in terms of prime ideals and, at least when the ring isNoetherian, he proved that this definition satisfies many of the intuitive properties of geometric dimension. Noether and Krull's commutative algebra can be viewed as an algebraic approach toaffinealgebraic varieties. However, many arguments in algebraic geometry work better forprojective varieties, essentially because they arecompact. From the 1920s to the 1940s,B. L. van der Waerden,André WeilandOscar Zariskiapplied commutative algebra as a new foundation for algebraic geometry in the richer setting of projective (orquasi-projective) varieties.[4]In particular, theZariski topologyis a useful topology on a variety over any algebraically closed field, replacing to some extent the classical topology on a complex variety (based on themetric topologyof the complex numbers). For applications to number theory, van der Waerden and Weil formulated algebraic geometry over any field, not necessarily algebraically closed. Weil was the first to define anabstract variety(not embedded inprojective space), by gluing affine varieties along open subsets, on the model of abstractmanifoldsin topology. He needed this generality for his construction of theJacobian varietyof a curve over any field. (Later, Jacobians were shown to be projective varieties by Weil,ChowandMatsusaka.) The algebraic geometers of theItalian schoolhad often used the somewhat foggy concept of thegeneric pointof an algebraic variety. What is true for the generic point is true for "most" points of the variety. In Weil'sFoundations of Algebraic Geometry(1946), generic points are constructed by taking points in a very large algebraically closed field, called auniversal domain.[4]This worked awkwardly: there were many different generic points for the same variety. (In the later theory of schemes, each algebraic variety has a single generic point.) In the 1950s,Claude Chevalley,Masayoshi NagataandJean-Pierre Serre, motivated in part by theWeil conjecturesrelating number theory and algebraic geometry, further extended the objects of algebraic geometry, for example by generalizing the base rings allowed. 
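Returning to the Nullstellensatz correspondence described above, evaluating a polynomial at a point of kn is the same thing as reducing it modulo the corresponding maximal ideal (x1 − a1, ..., xn − an). A minimal sketch in SymPy (the polynomial r and the point (2, 3) are arbitrary illustrative choices):

```python
from sympy import symbols, reduced

x, y = symbols("x y")
r = x**3 * y + 2 * x * y**2 - 7      # a polynomial "regular function" on the affine plane
a, b = 2, 3                          # an illustrative closed point of affine 2-space
m = [x - a, y - b]                   # generators of the maximal ideal corresponding to (a, b)

# Reducing r modulo the maximal ideal lands in the residue field, and the image is just r(a, b).
_, image = reduced(r, m, x, y)
print(image, r.subs({x: a, y: b}))   # both print 53
```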
The wordschemewas first used in the 1956 Chevalley Seminar, in which Chevalley pursued Zariski's ideas.[5]According toPierre Cartier, it wasAndré Martineauwho suggested to Serre the possibility of using the spectrum of an arbitrary commutative ring as a foundation for algebraic geometry.[6] The theory took its definitive form in Grothendieck'sÉléments de géométrie algébrique(EGA) and the laterSéminaire de géométrie algébrique(SGA), bringing to a conclusion a generation of experimental suggestions and partial developments.[7]Grothendieck defined thespectrumX{\displaystyle X}of acommutative ringR{\displaystyle R}as the space ofprime idealsofR{\displaystyle R}with a natural topology (known as the Zariski topology), but augmented it with asheafof rings: to every open subsetU{\displaystyle U}he assigned a commutative ringOX(U){\displaystyle {\mathcal {O}}_{X}(U)}, which may be thought of as the coordinate ring of regular functions onU{\displaystyle U}. These objectsSpec⁡(R){\displaystyle \operatorname {Spec} (R)}are the affine schemes; a general scheme is then obtained by "gluing together" affine schemes. Much of algebraic geometry focuses on projective or quasi-projective varieties over a fieldk{\displaystyle k}, most often over the complex numbers. Grothendieck developed a large body of theory for arbitrary schemes extending much of the geometric intuition for varieties. For example, it is common to construct a moduli space first as a scheme, and only later study whether it is a more concrete object such as a projective variety. Applying Grothendieck's theory to schemes over the integers and other number fields led to powerful new perspectives in number theory. Anaffine schemeis alocally ringed spaceisomorphic to thespectrumSpec⁡(R){\displaystyle \operatorname {Spec} (R)}of a commutative ringR{\displaystyle R}. Aschemeis a locally ringed spaceX{\displaystyle X}admitting a covering by open setsUi{\displaystyle U_{i}}, such that eachUi{\displaystyle U_{i}}(as a locally ringed space) is an affine scheme.[8]In particular,X{\displaystyle X}comes with a sheafOX{\displaystyle {\mathcal {O}}_{X}}, which assigns to every open subsetU{\displaystyle U}a commutative ringOX(U){\displaystyle {\mathcal {O}}_{X}(U)}called thering of regular functionsonU{\displaystyle U}. One can think of a scheme as being covered by "coordinate charts" that are affine schemes. The definition means exactly that schemes are obtained by gluing together affine schemes using the Zariski topology. In the early days, this was called aprescheme, and a scheme was defined to be aseparatedprescheme. The term prescheme has fallen out of use, but can still be found in older books, such as Grothendieck's "Éléments de géométrie algébrique" andMumford's "Red Book".[9]The sheaf properties ofOX(U){\displaystyle {\mathcal {O}}_{X}(U)}mean that its elements,which are not necessarily functions, can neverthess be patched together from their restrictions in the same way as functions. A basic example of an affine scheme isaffinen{\displaystyle n}-spaceover a fieldk{\displaystyle k}, for anatural numbern{\displaystyle n}. By definition,Akn{\displaystyle A_{k}^{n}}is the spectrum of the polynomial ringk[x1,…,xn]{\displaystyle k[x_{1},\dots ,x_{n}]}. In the spirit of scheme theory, affinen{\displaystyle n}-space can in fact be defined over any commutative ringR{\displaystyle R}, meaningSpec⁡(R[x1,…,xn]){\displaystyle \operatorname {Spec} (R[x_{1},\dots ,x_{n}])}. Schemes form acategory, with morphisms defined as morphisms of locally ringed spaces. 
(See also:morphism of schemes.) For a schemeY, a schemeXoverY(or aY-scheme) means a morphismX→Yof schemes. A schemeXovera commutative ringRmeans a morphismX→ Spec(R). An algebraic variety over a fieldkcan be defined as a scheme overkwith certain properties. There are different conventions about exactly which schemes should be called varieties. One standard choice is that avarietyoverkmeans anintegral separatedscheme offinite typeoverk.[10] A morphismf:X→Yof schemes determines apullback homomorphismon the rings of regular functions,f*:O(Y) →O(X). In the case of affine schemes, this construction gives a one-to-one correspondence between morphisms Spec(A) → Spec(B) of schemes and ring homomorphismsB→A.[11]In this sense, scheme theory completely subsumes the theory of commutative rings. SinceZis aninitial objectin thecategory of commutative rings, the category of schemes has Spec(Z) as aterminal object. For a schemeXover a commutative ringR, anR-pointofXmeans asectionof the morphismX→ Spec(R). One writesX(R) for the set ofR-points ofX. In examples, this definition reconstructs the old notion of the set of solutions of the defining equations ofXwith values inR. WhenRis a fieldk,X(k) is also called the set ofk-rational pointsofX. More generally, for a schemeXover a commutative ringRand any commutativeR-algebraS, anS-pointofXmeans a morphism Spec(S) →XoverR. One writesX(S) for the set ofS-points ofX. (This generalizes the old observation that given some equations over a fieldk, one can consider the set of solutions of the equations in anyfield extensionEofk.) For a schemeXoverR, the assignmentS↦X(S) is afunctorfrom commutativeR-algebras to sets. It is an important observation that a schemeXoverRis determined by thisfunctor of points.[12] Thefiber product of schemesalways exists. That is, for any schemesXandZwith morphisms to a schemeY, thecategorical fiber productX×YZ{\displaystyle X\times _{Y}Z}exists in the category of schemes. IfXandZare schemes over a fieldk, their fiber product over Spec(k) may be called theproductX×Zin the category ofk-schemes. For example, the product of affine spacesAm{\displaystyle \mathbb {A} ^{m}}andAn{\displaystyle \mathbb {A} ^{n}}overkis affine spaceAm+n{\displaystyle \mathbb {A} ^{m+n}}overk. Since the category of schemes has fiber products and also a terminal object Spec(Z), it has all finitelimits. Here and below, all the rings considered are commutative. Letkbe an algebraically closed field. The affine spaceX¯=Akn{\displaystyle {\bar {X}}=\mathbb {A} _{k}^{n}}is the algebraic variety of all pointsa=(a1,…,an){\displaystyle a=(a_{1},\ldots ,a_{n})}with coordinates ink; its coordinate ring is the polynomial ringR=k[x1,…,xn]{\displaystyle R=k[x_{1},\ldots ,x_{n}]}. The corresponding schemeX=Spec(R){\displaystyle X=\mathrm {Spec} (R)}is a topological space with the Zariski topology, whose closed points are the maximal idealsma=(x1−a1,…,xn−an){\displaystyle {\mathfrak {m}}_{a}=(x_{1}-a_{1},\ldots ,x_{n}-a_{n})}, the set of polynomials vanishing ata{\displaystyle a}. 
The scheme also contains a non-closed point for each non-maximal prime idealp⊂R{\displaystyle {\mathfrak {p}}\subset R}, whose vanishing defines an irreducible subvarietyV¯=V¯(p)⊂X¯{\displaystyle {\bar {V}}={\bar {V}}({\mathfrak {p}})\subset {\bar {X}}}; the topological closure of the scheme pointp{\displaystyle {\mathfrak {p}}}is the subschemeV(p)={q∈Xwithp⊂q}{\displaystyle V({\mathfrak {p}})=\{{\mathfrak {q}}\in X\ \ {\text{with}}\ \ {\mathfrak {p}}\subset {\mathfrak {q}}\}}, specially including all the closed points of the subvariety, i.e.ma{\displaystyle {\mathfrak {m}}_{a}}witha∈V¯{\displaystyle a\in {\bar {V}}}, or equivalentlyp⊂ma{\displaystyle {\mathfrak {p}}\subset {\mathfrak {m}}_{a}}. The schemeX{\displaystyle X}has a basis of open subsets given by the complements of hypersurfaces,Uf=X∖V(f)={p∈Xwithf∉p}{\displaystyle U_{f}=X\setminus V(f)=\{{\mathfrak {p}}\in X\ \ {\text{with}}\ \ f\notin {\mathfrak {p}}\}}for irreducible polynomialsf∈R{\displaystyle f\in R}. This set is endowed with its coordinate ring of regular functionsOX(Uf)=R[f−1]={rfmforr∈R,m∈Z≥0}.{\displaystyle {\mathcal {O}}_{X}(U_{f})=R[f^{-1}]=\left\{{\tfrac {r}{f^{m}}}\ \ {\text{for}}\ \ r\in R,\ m\in \mathbb {Z} _{\geq 0}\right\}.}This induces a unique sheafOX{\displaystyle {\mathcal {O}}_{X}}which gives the usual ring of rational functions regular on a given open setU{\displaystyle U}. Each ring elementr=r(x1,…,xn)∈R{\displaystyle r=r(x_{1},\ldots ,x_{n})\in R}, a polynomial function onX¯{\displaystyle {\bar {X}}}, also defines a function on the points of the schemeX{\displaystyle X}whose value atp{\displaystyle {\mathfrak {p}}}lies in the quotient ringR/p{\displaystyle R/{\mathfrak {p}}}, theresidue ring. We definer(p){\displaystyle r({\mathfrak {p}})}as the image ofr{\displaystyle r}under the natural mapR→R/p{\displaystyle R\to R/{\mathfrak {p}}}. A maximal idealma{\displaystyle {\mathfrak {m}}_{a}}gives theresidue fieldk(ma)=R/ma≅k{\displaystyle k({\mathfrak {m}}_{a})=R/{\mathfrak {m}}_{a}\cong k}, with the natural isomorphismxi↦ai{\displaystyle x_{i}\mapsto a_{i}}, so thatr(ma){\displaystyle r({\mathfrak {m}}_{a})}corresponds to the original valuer(a){\displaystyle r(a)}. The vanishing locus of a polynomialf=f(x1,…,xn){\displaystyle f=f(x_{1},\ldots ,x_{n})}is ahypersurfacesubvarietyV¯(f)⊂Akn{\displaystyle {\bar {V}}(f)\subset \mathbb {A} _{k}^{n}}, corresponding to theprincipal ideal(f)⊂R{\displaystyle (f)\subset R}. The corresponding scheme isV(f)=Spec⁡(R/(f)){\textstyle V(f)=\operatorname {Spec} (R/(f))}, a closed subscheme of affine space. For example, takingkto be the complex or real numbers, the equationx2=y2(y+1){\displaystyle x^{2}=y^{2}(y+1)}defines anodal cubic curvein the affine planeAk2{\displaystyle \mathbb {A} _{k}^{2}}, corresponding to the schemeV=Spec⁡k[x,y]/(x2−y2(y+1)){\displaystyle V=\operatorname {Spec} k[x,y]/(x^{2}-y^{2}(y+1))}. The ring of integersZ{\displaystyle \mathbb {Z} }can be considered as the coordinate ring of the schemeZ=Spec⁡(Z){\displaystyle Z=\operatorname {Spec} (\mathbb {Z} )}. The Zariski topology has closed pointsmp=(p){\displaystyle {\mathfrak {m}}_{p}=(p)}, the principal ideals of the prime numbersp∈Z{\displaystyle p\in \mathbb {Z} }; as well as the generic pointp0=(0){\displaystyle {\mathfrak {p}}_{0}=(0)}, the zero ideal, whoseclosure is the whole scheme. Closed sets are finite sets, and open sets are their complements, the cofinite sets; any infinite set of points is dense. 
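To illustrate the last point concretely, the closed subsets of Spec(Z) other than the whole space are the vanishing loci V(n) of integers n, each consisting of the finitely many closed points (p) for the primes p dividing n. A small sketch using SymPy (the choice n = 60 is arbitrary):

```python
from sympy import factorint

n = 60
V_n = sorted(factorint(n))   # the primes dividing n, i.e. the closed points of V(n)
print(V_n)                   # [2, 3, 5]: the vanishing locus of the "function" n on Spec(Z)
# The complement of V(n) is an open set with coordinate ring Z[1/2, 1/3, 1/5] = Z[1/60].
```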
The basis open set corresponding to the irreducible elementp∈Z{\displaystyle p\in \mathbb {Z} }isUp=Z∖{mp}{\displaystyle U_{p}=Z\smallsetminus \{{\mathfrak {m}}_{p}\}}, with coordinate ringOZ(Up)=Z[p−1]={npmforn∈Z,m≥0}{\displaystyle {\mathcal {O}}_{Z}(U_{p})=\mathbb {Z} [p^{-1}]=\{{\tfrac {n}{p^{m}}}\ {\text{for}}\ n\in \mathbb {Z} ,\ m\geq 0\}}. For the open setU=Z∖{mp1,…,mpℓ}{\displaystyle U=Z\smallsetminus \{{\mathfrak {m}}_{p_{1}},\ldots ,{\mathfrak {m}}_{p_{\ell }}\}}, this inducesOZ(U)=Z[p1−1,…,pℓ−1]{\displaystyle {\mathcal {O}}_{Z}(U)=\mathbb {Z} [p_{1}^{-1},\ldots ,p_{\ell }^{-1}]}. A numbern∈Z{\displaystyle n\in \mathbb {Z} }corresponds to a function on the schemeZ{\displaystyle Z}, a function whose value atmp{\displaystyle {\mathfrak {m}}_{p}}lies in the residue fieldk(mp)=Z/(p)=Fp{\displaystyle k({\mathfrak {m}}_{p})=\mathbb {Z} /(p)=\mathbb {F} _{p}}, thefinite fieldof integers modulop{\displaystyle p}:the function is defined byn(mp)=nmodp{\displaystyle n({\mathfrak {m}}_{p})=n\ {\text{mod}}\ p}, and alson(p0)=n{\displaystyle n({\mathfrak {p}}_{0})=n}in the generic residue ringZ/(0)=Z{\displaystyle \mathbb {Z} /(0)=\mathbb {Z} }. The functionn{\displaystyle n}is determined by its values at the pointsmp{\displaystyle {\mathfrak {m}}_{p}}only, so we can think ofn{\displaystyle n}as a kind of "regular function" on the closed points, a very special type among the arbitrary functionsf{\displaystyle f}withf(mp)∈Fp{\displaystyle f({\mathfrak {m}}_{p})\in \mathbb {F} _{p}}. Note that the pointmp{\displaystyle {\mathfrak {m}}_{p}}is the vanishing locus of the functionn=p{\displaystyle n=p}, the point where the value ofp{\displaystyle p}is equal to zero in the residue field. The field of "rational functions" onZ{\displaystyle Z}is the fraction field of the generic residue ring,k(p0)=Frac⁡(Z)=Q{\displaystyle k({\mathfrak {p}}_{0})=\operatorname {Frac} (\mathbb {Z} )=\mathbb {Q} }. A fractiona/b{\displaystyle a/b}has "poles" at the pointsmp{\displaystyle {\mathfrak {m}}_{p}}corresponding to prime divisors of the denominator. This also gives a geometric interpretaton ofBezout's lemmastating that if the integersn1,…,nr{\displaystyle n_{1},\ldots ,n_{r}}have no common prime factor, then there are integersa1,…,ar{\displaystyle a_{1},\ldots ,a_{r}}witha1n1+⋯+arnr=1{\displaystyle a_{1}n_{1}+\cdots +a_{r}n_{r}=1}. Geometrically, this is a version of the weakHilbert Nullstellensatzfor the schemeZ{\displaystyle Z}: if the functionsn1,…,nr{\displaystyle n_{1},\ldots ,n_{r}}have no common vanishing pointsmp{\displaystyle {\mathfrak {m}}_{p}}inZ{\displaystyle Z}, then they generate the unit ideal(n1,…,nr)=(1){\displaystyle (n_{1},\ldots ,n_{r})=(1)}in the coordinate ringZ{\displaystyle \mathbb {Z} }. Indeed, we may consider the termsρi=aini{\displaystyle \rho _{i}=a_{i}n_{i}}as forming a kind ofpartition of unitysubordinate to the covering ofZ{\displaystyle Z}by the open setsUi=Z∖V(ni){\displaystyle U_{i}=Z\smallsetminus V(n_{i})}. The affine spaceAZ1={afora∈Z}{\displaystyle \mathbb {A} _{\mathbb {Z} }^{1}=\{a\ {\text{for}}\ a\in \mathbb {Z} \}}is a variety with coordinate ringZ[x]{\displaystyle \mathbb {Z} [x]}, the polynomials with integer coefficients. The corresponding scheme isY=Spec⁡(Z[x]){\displaystyle Y=\operatorname {Spec} (\mathbb {Z} [x])}, whose points are all of the prime idealsp⊂Z[x]{\displaystyle {\mathfrak {p}}\subset \mathbb {Z} [x]}. 
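The Bézout coefficients in the identity above can be computed with the extended Euclidean algorithm, which produces the partition of unity explicitly. A short sketch using SymPy's gcdex (the integers 6, 10, 15 have no common prime factor and are chosen only for illustration):

```python
from sympy import gcdex

def bezout_coefficients(ns):
    """Find a_i with sum(a_i * n_i) == 1, assuming the n_i have no common prime factor."""
    coeffs, g = [1], ns[0]
    for n in ns[1:]:
        x, y, g = gcdex(g, n)                  # x * g_old + y * n = g_new
        coeffs = [x * c for c in coeffs] + [y]
    assert g == 1, "the n_i have a common vanishing point (p) in Spec(Z)"
    return coeffs

ns = [6, 10, 15]
a = bezout_coefficients(ns)
print(a, sum(ai * ni for ai, ni in zip(a, ns)))   # the second value is 1
```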
The closed points are maximal ideals of the formm=(p,f(x)){\displaystyle {\mathfrak {m}}=(p,f(x))}, wherep{\displaystyle p}is a prime number, andf(x){\displaystyle f(x)}is a non-constant polynomial with no integer factor and which is irreducible modulop{\displaystyle p}. Thus, we may pictureY{\displaystyle Y}as two-dimensional, with a "characteristic direction" measured by the coordinatep{\displaystyle p}, and a "spatial direction" with coordinatex{\displaystyle x}. A given prime numberp{\displaystyle p}defines a "vertical line", the subschemeV(p){\displaystyle V(p)}of the prime idealp=(p){\displaystyle {\mathfrak {p}}=(p)}: this containsm=(p,f(x)){\displaystyle {\mathfrak {m}}=(p,f(x))}for allf(x){\displaystyle f(x)}, the "characteristicp{\displaystyle p}points" of the scheme. Fixing thex{\displaystyle x}-coordinate, we have the "horizontal line"x=a{\displaystyle x=a}, the subschemeV(x−a){\displaystyle V(x-a)}of the prime idealp=(x−a){\displaystyle {\mathfrak {p}}=(x-a)}. We also have the lineV(bx−a){\displaystyle V(bx-a)}corresponding to the rational coordinatex=a/b{\displaystyle x=a/b}, which does not intersectV(p){\displaystyle V(p)}for thosep{\displaystyle p}which divideb{\displaystyle b}. A higher degree "horizontal" subscheme likeV(x2+1){\displaystyle V(x^{2}+1)}corresponds tox{\displaystyle x}-values which are roots ofx2+1{\displaystyle x^{2}+1}, namelyx=±−1{\displaystyle x=\pm {\sqrt {-1}}}. This behaves differently under differentp{\displaystyle p}-coordinates. Atp=5{\displaystyle p=5}, we get two pointsx=±2mod5{\displaystyle x=\pm 2\ {\text{mod}}\ 5}, since(5,x2+1)=(5,x−2)∩(5,x+2){\displaystyle (5,x^{2}+1)=(5,x-2)\cap (5,x+2)}. Atp=2{\displaystyle p=2}, we get oneramifieddouble-pointx=1mod2{\displaystyle x=1\ {\text{mod}}\ 2}, since(2,x2+1)=(2,(x−1)2){\displaystyle (2,x^{2}+1)=(2,(x-1)^{2})}. And atp=3{\displaystyle p=3}, we get thatm=(3,x2+1){\displaystyle {\mathfrak {m}}=(3,x^{2}+1)}is a prime ideal corresponding tox=±−1{\displaystyle x=\pm {\sqrt {-1}}}in an extension field ofF3{\displaystyle \mathbb {F} _{3}}; since we cannot distinguish between these values (they are symmetric under theGalois group), we should pictureV(3,x2+1){\displaystyle V(3,x^{2}+1)}as two fused points. Overall,V(x2+1){\displaystyle V(x^{2}+1)}is a kind of fusion of two Galois-symmetric horizonal lines, a curve of degree 2. The residue field atm=(p,f(x)){\displaystyle {\mathfrak {m}}=(p,f(x))}isk(m)=Z[x]/m=Fp[x]/(f(x))≅Fp(α){\displaystyle k({\mathfrak {m}})=\mathbb {Z} [x]/{\mathfrak {m}}=\mathbb {F} _{p}[x]/(f(x))\cong \mathbb {F} _{p}(\alpha )}, a field extension ofFp{\displaystyle \mathbb {F} _{p}}adjoining a rootx=α{\displaystyle x=\alpha }off(x){\displaystyle f(x)}; this is a finite field withpd{\displaystyle p^{d}}elements,d=deg⁡(f){\displaystyle d=\operatorname {deg} (f)}. A polynomialr(x)∈Z[x]{\displaystyle r(x)\in \mathbb {Z} [x]}corresponds to a function on the schemeY{\displaystyle Y}with valuesr(m)=rmodm{\displaystyle r({\mathfrak {m}})=r\ \mathrm {mod} \ {\mathfrak {m}}}, that isr(m)=r(α)∈Fp(α){\displaystyle r({\mathfrak {m}})=r(\alpha )\in \mathbb {F} _{p}(\alpha )}. 
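The three behaviours of V(x² + 1) just described (split at p = 5, ramified at p = 2, inert at p = 3) can be checked by factoring x² + 1 over each F_p, for instance with SymPy:

```python
from sympy import symbols, factor

x = symbols("x")
for p in (2, 3, 5):
    print(p, factor(x**2 + 1, modulus=p))
# p = 2: (x + 1)**2       -> one ramified double point
# p = 3: x**2 + 1         -> irreducible: a single closed point with residue field F_9
# p = 5: (x - 2)*(x + 2)  -> two distinct closed points, x = ±2 mod 5
```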
Again eachr(x)∈Z[x]{\displaystyle r(x)\in \mathbb {Z} [x]}is determined by its valuesr(m){\displaystyle r({\mathfrak {m}})}at closed points;V(p){\displaystyle V(p)}is the vanishing locus of the constant polynomialr(x)=p{\displaystyle r(x)=p}; andV(f(x)){\displaystyle V(f(x))}contains the points in each characteristicp{\displaystyle p}corresponding to Galois orbits of roots off(x){\displaystyle f(x)}in the algebraic closureF¯p{\displaystyle {\overline {\mathbb {F} }}_{p}}. The schemeY{\displaystyle Y}is notproper, so that pairs of curves may fail tointersect with the expected multiplicity. This is a major obstacle to analyzingDiophantine equationswithgeometric tools.Arakelov theoryovercomes this obstacle by compactifying affine arithmetic schemes, adding points at infinity corresponding tovaluations. If we consider a polynomialf∈Z[x,y]{\displaystyle f\in \mathbb {Z} [x,y]}then the affine schemeX=Spec⁡(Z[x,y]/(f)){\displaystyle X=\operatorname {Spec} (\mathbb {Z} [x,y]/(f))}has a canonical morphism toSpec⁡Z{\displaystyle \operatorname {Spec} \mathbb {Z} }and is called anarithmetic surface. The fibersXp=X×Spec⁡(Z)Spec⁡(Fp){\displaystyle X_{p}=X\times _{\operatorname {Spec} (\mathbb {Z} )}\operatorname {Spec} (\mathbb {F} _{p})}are then algebraic curves over the finite fieldsFp{\displaystyle \mathbb {F} _{p}}. Iff(x,y)=y2−x3+ax2+bx+c{\displaystyle f(x,y)=y^{2}-x^{3}+ax^{2}+bx+c}is anelliptic curve, then the fibers over its discriminant locus, whereΔf=−4a3c+a2b2+18abc−4b3−27c2=0modp,{\displaystyle \Delta _{f}=-4a^{3}c+a^{2}b^{2}+18abc-4b^{3}-27c^{2}=0\ {\text{mod}}\ p,}are all singular schemes.[13]For example, ifp{\displaystyle p}is a prime number andX=Spec⁡Z[x,y](y2−x3−p){\displaystyle X=\operatorname {Spec} {\frac {\mathbb {Z} [x,y]}{(y^{2}-x^{3}-p)}}}then its discriminant is−27p2{\displaystyle -27p^{2}}. This curve is singular over the prime numbers3,p{\displaystyle 3,p}. It is also fruitful to consider examples of morphisms as examples of schemes since they demonstrate their technical effectiveness for encapsulating many objects of study in algebraic and arithmetic geometry. Here are some of the ways in which schemes go beyond older notions of algebraic varieties, and their significance. A central part of scheme theory is the notion ofcoherent sheaves, generalizing the notion of (algebraic)vector bundles. For a schemeX, one starts by considering theabelian categoryofOX-modules, which are sheaves of abelian groups onXthat form amoduleover the sheaf of regular functionsOX. In particular, a moduleMover a commutative ringRdetermines anassociatedOX-module~MonX= Spec(R). Aquasi-coherent sheafon a schemeXmeans anOX-module that is the sheaf associated to a module on each affine open subset ofX. Finally, acoherent sheaf(on a Noetherian schemeX, say) is anOX-module that is the sheaf associated to afinitely generated moduleon each affine open subset ofX. Coherent sheaves include the important class ofvector bundles, which are the sheaves that locally come from finitely generatedfree modules. An example is thetangent bundleof a smooth variety over a field. However, coherent sheaves are richer; for example, a vector bundle on a closed subschemeYofXcan be viewed as a coherent sheaf onXthat is zero outsideY(by thedirect imageconstruction). In this way, coherent sheaves on a schemeXinclude information about all closed subschemes ofX. Moreover,sheaf cohomologyhas good properties for coherent (and quasi-coherent) sheaves. 
The resulting theory ofcoherent sheaf cohomologyis perhaps the main technical tool in algebraic geometry.[18][19] Considered as its functor of points, a scheme is a functor that is a sheaf of sets for the Zariski topology on the category of commutative rings, and that, locally in the Zariski topology, is an affine scheme. This can be generalized in several ways. One is to use theétale topology.Michael Artindefined analgebraic spaceas a functor that is a sheaf in the étale topology and that, locally in the étale topology, is an affine scheme. Equivalently, an algebraic space is the quotient of a scheme by an étale equivalence relation. A powerful result, theArtin representability theorem, gives simple conditions for a functor to be represented by an algebraic space.[20] A further generalization is the idea of astack. Crudely speaking,algebraic stacksgeneralize algebraic spaces by having analgebraic groupattached to each point, which is viewed as the automorphism group of that point. For example, anyactionof an algebraic groupGon an algebraic varietyXdetermines aquotient stack[X/G], which remembers thestabilizer subgroupsfor the action ofG. More generally, moduli spaces in algebraic geometry are often best viewed as stacks, thereby keeping track of the automorphism groups of the objects being classified. Grothendieck originally introduced stacks as a tool for the theory ofdescent. In that formulation, stacks are (informally speaking) sheaves of categories.[21]From this general notion, Artin defined the narrower class of algebraic stacks (or "Artin stacks"), which can be considered geometric objects. These includeDeligne–Mumford stacks(similar toorbifoldsin topology), for which the stabilizer groups are finite, and algebraic spaces, for which the stabilizer groups are trivial. TheKeel–Mori theoremsays that an algebraic stack with finite stabilizer groups has acoarse moduli spacethat is an algebraic space. Another type of generalization is to enrich the structure sheaf, bringing algebraic geometry closer tohomotopy theory. In this setting, known asderived algebraic geometryor "spectral algebraic geometry", the structure sheaf is replaced by a homotopical analog of a sheaf of commutative rings (for example, a sheaf ofE-infinity ring spectra). These sheaves admit algebraic operations that are associative and commutative only up to an equivalence relation. Taking the quotient by this equivalence relation yields the structure sheaf of an ordinary scheme. Not taking the quotient, however, leads to a theory that can remember higher information, in the same way thatderived functorsin homological algebra yield higher information about operations such astensor productand theHom functoron modules.
https://en.wikipedia.org/wiki/Scheme_(mathematics)
Quantile regressionis a type ofregression analysisused in statistics and econometrics. Whereas themethod of least squaresestimates the conditionalmeanof the response variable across values of the predictor variables, quantile regression estimates the conditionalmedian(or otherquantiles) of the response variable. [There is also a method for predicting the conditionalgeometric meanof the response variable,[1].] Quantile regression is an extension of linear regression used when the conditions of linear regression are not met. One advantage of quantile regression relative to ordinary least squares regression is that the quantile regression estimates are more robust against outliers in the response measurements. However, the main attraction of quantile regression goes beyond this and is advantageous when conditional quantile functions are of interest. Different measures ofcentral tendencyandstatistical dispersioncan be used to more comprehensively analyze the relationship between variables.[2] Inecology, quantile regression has been proposed and used as a way to discover more useful predictive relationships between variables in cases where there is no relationship or only a weak relationship between the means of such variables. The need for and success of quantile regression in ecology has been attributed to thecomplexityof interactions between different factors leading todatawith unequal variation of one variable for different ranges of another variable.[3] Another application of quantile regression is in the areas of growth charts, where percentile curves are commonly used to screen for abnormal growth.[4][5] The idea of estimating a median regression slope, a major theorem about minimizing sum of the absolute deviances and a geometrical algorithm for constructing median regression was proposed in 1760 byRuđer Josip Bošković, aJesuit Catholicpriest from Dubrovnik.[2]: 4[6]He was interested in the ellipticity of the earth, building on Isaac Newton's suggestion that its rotation could cause it to bulge at theequatorwith a corresponding flattening at the poles.[7]He finally produced the first geometric procedure for determining theequatorof a rotatingplanetfrom threeobservationsof a surface feature. More importantly for quantile regression, he was able to develop the first evidence of the least absolute criterion and preceded the least squares introduced byLegendrein 1805 by fifty years.[8] Other thinkers began building upon Bošković's idea such asPierre-Simon Laplace, who developed the so-called "methode de situation." This led toFrancis Edgeworth's plural median[9]- a geometric approach to median regression - and is recognized as the precursor of thesimplex method.[8]The works of Bošković, Laplace, and Edgeworth were recognized as a prelude toRoger Koenker's contributions to quantile regression. Median regression computations for larger data sets are quite tedious compared to the least squares method, for which reason it has historically generated a lack of popularity among statisticians, until the widespread adoption of computers in the latter part of the 20th century. Quantile regression expresses the conditional quantiles of a dependent variable as a linear function of the explanatory variables. Crucial to the practicality of quantile regression is that the quantiles can be expressed as the solution of a minimization problem, as we will show in this section before discussing conditional quantiles in the next section. 
LetY{\displaystyle Y}be a real-valued random variable withcumulative distribution functionFY(y)=P(Y≤y){\displaystyle F_{Y}(y)=P(Y\leq y)}. Theτ{\displaystyle \tau }th quantile of Y is given by whereτ∈(0,1).{\displaystyle \tau \in (0,1).} Define theloss functionasρτ(m)=m(τ−I(m<0)){\displaystyle \rho _{\tau }(m)=m(\tau -\mathbb {I} _{(m<0)})}, whereI{\displaystyle \mathbb {I} }is anindicator function. A specific quantile can be found by minimizing the expected loss ofY−u{\displaystyle Y-u}with respect tou{\displaystyle u}:[2](pp. 5–6): This can be shown by computing the derivative of the expected loss with respect tou{\displaystyle u}via an application of theLeibniz integral rule, setting it to 0, and lettingqτ{\displaystyle q_{\tau }}be the solution of This equation reduces to and then to If the solutionqτ{\displaystyle q_{\tau }}is not unique, then we have to take the smallest such solution to obtain theτ{\displaystyle \tau }th quantile of the random variableY. LetY{\displaystyle Y}be a discrete random variable that takes valuesyi=i{\displaystyle y_{i}=i}withi=1,2,…,9{\displaystyle i=1,2,\dots ,9}with equal probabilities. The task is to find the median of Y, and hence the valueτ=0.5{\displaystyle \tau =0.5}is chosen. Then the expected loss ofY−u{\displaystyle Y-u}is Since0.5/9{\displaystyle {0.5/9}}is a constant, it can be taken out of the expected loss function (this is only true ifτ=0.5{\displaystyle \tau =0.5}). Then, atu=3, Suppose thatuis increased by 1 unit. Then the expected loss will be changed by(3)−(6)=−3{\displaystyle (3)-(6)=-3}on changinguto 4. If,u=5, the expected loss is and any change inuwill increase the expected loss. Thusu=5 is the median. The Table below shows the expected loss (divided by0.5/9{\displaystyle {0.5/9}}) for different values ofu. Considerτ=0.5{\displaystyle \tau =0.5}and letqbe an initial guess forqτ{\displaystyle q_{\tau }}. The expected loss evaluated atqis In order to minimize the expected loss, we move the value ofqa little bit to see whether the expected loss will rise or fall. Suppose we increaseqby 1 unit. Then the change of expected loss would be The first term of the equation isFY(q){\displaystyle F_{Y}(q)}and second term of the equation is1−FY(q){\displaystyle 1-F_{Y}(q)}. Therefore, the change of expected loss function is negative if and only ifFY(q)<0.5{\displaystyle F_{Y}(q)<0.5}, that is if and only ifqis smaller than the median. Similarly, if we reduceqby 1 unit, the change of expected loss function is negative if and only ifqis larger than the median. In order to minimize the expected loss function, we would increase (decrease)qifqis smaller (larger) than the median, untilqreaches the median. The idea behind the minimization is to count the number of points (weighted with the density) that are larger or smaller thanqand then moveqto a point whereqis larger than100τ{\displaystyle 100\tau }% of the points. Theτ{\displaystyle \tau }sample quantile can be obtained by using animportance samplingestimate and solving the following minimization problem where the functionρτ{\displaystyle \rho _{\tau }}is the tilted absolute value function. The intuition is the same as for the population quantile. Theτ{\displaystyle \tau }th conditional quantile ofY{\displaystyle Y}givenX{\displaystyle X}is theτ{\displaystyle \tau }th quantile of theConditional probability distributionofY{\displaystyle Y}givenX{\displaystyle X}, We use a capitalQ{\displaystyle Q}to denote the conditional quantile to indicate that it is a random variable. 
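The discrete example above is easy to check numerically. A short NumPy sketch that evaluates the expected loss for Y uniform on {1, ..., 9} at each candidate u and confirms that the median u = 5 is the minimizer:

```python
import numpy as np

def pinball(m, tau):
    """Tilted absolute value loss: rho_tau(m) = m * (tau - 1{m < 0})."""
    return m * (tau - (m < 0))

y = np.arange(1, 10)                              # Y takes the values 1, ..., 9 with equal probability
tau = 0.5
expected_loss = {u: pinball(y - u, tau).mean() for u in range(1, 10)}
print(min(expected_loss, key=expected_loss.get))  # 5, the median
print(expected_loss[5] / (0.5 / 9))               # 20.0, the minimized loss divided by 0.5/9
```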
In quantile regression for theτ{\displaystyle \tau }th quantile we make the assumption that theτ{\displaystyle \tau }th conditional quantile is given as a linear function of the explanatory variables: Given the distribution function ofY{\displaystyle Y},βτ{\displaystyle \beta _{\tau }}can be obtained by solving Solving the sample analog gives the estimator ofβ{\displaystyle \beta }. Note that whenτ=0.5{\displaystyle \tau =0.5}, the loss functionρτ{\displaystyle \rho _{\tau }}is proportional to the absolute value function, and thus median regression is the same as linear regression byleast absolute deviations. The mathematical forms arising from quantile regression are distinct from those arising in themethod of least squares. The method of least squares leads to a consideration of problems in aninner product space, involvingprojectiononto subspaces, and thus the problem of minimizing the squared errors can be reduced to a problem innumerical linear algebra. Quantile regression does not have this structure, and instead the minimization problem can be reformulated as alinear programmingproblem where Simplex methods[2]: 181orinterior point methods[2]: 190can be applied to solve the linear programming problem. Forτ∈(0,1){\displaystyle \tau \in (0,1)}, under some regularity conditions,β^τ{\displaystyle {\hat {\beta }}_{\tau }}isasymptotically normal: where Direct estimation of the asymptotic variance-covariance matrix is not always satisfactory. Inference for quantile regression parameters can be made with the regression rank-score tests or with the bootstrap methods.[10] Seeinvariant estimatorfor background on invariance or seeequivariance. For anya>0{\displaystyle a>0}andτ∈[0,1]{\displaystyle \tau \in [0,1]} For anyγ∈Rk{\displaystyle \gamma \in R^{k}}andτ∈[0,1]{\displaystyle \tau \in [0,1]} LetA{\displaystyle A}be anyp×p{\displaystyle p\times p}nonsingular matrix andτ∈[0,1]{\displaystyle \tau \in [0,1]} Ifh{\displaystyle h}is a nondecreasing function onR{\displaystyle \mathbb {R} }, the followinginvarianceproperty applies: Example (1): IfW=exp⁡(Y){\displaystyle W=\exp(Y)}andQY|X(τ)=Xβτ{\displaystyle Q_{Y|X}(\tau )=X\beta _{\tau }}, thenQW|X(τ)=exp⁡(Xβτ){\displaystyle Q_{W|X}(\tau )=\exp(X\beta _{\tau })}. The mean regression does not have the same property sinceE⁡(ln⁡(Y))≠ln⁡(E⁡(Y)).{\displaystyle \operatorname {E} (\ln(Y))\neq \ln(\operatorname {E} (Y)).} The linear modelQY|X(τ)=Xβτ{\displaystyle Q_{Y|X}(\tau )=X\beta _{\tau }}mis-specifies the true systematic relationQY|X(τ)=f(X,τ){\displaystyle Q_{Y|X}(\tau )=f(X,\tau )}whenf(⋅,τ){\displaystyle f(\cdot ,\tau )}is nonlinear. However,QY|X(τ)=Xβτ{\displaystyle Q_{Y|X}(\tau )=X\beta _{\tau }}minimizes a weighted distanced tof(X,τ){\displaystyle f(X,\tau )}among linear models.[11]Furthermore, the slope parametersβτ{\displaystyle \beta _{\tau }}of the linear model can be interpreted as weighted averages of the derivatives∇f(X,τ){\displaystyle \nabla f(X,\tau )}so thatβτ{\displaystyle \beta _{\tau }}can be used for causal inference.[12]Specifically, the hypothesisH0:∇f(x,τ)=0{\displaystyle H_{0}:\nabla f(x,\tau )=0}for allx{\displaystyle x}implies the hypothesisH0:βτ=0{\displaystyle H_{0}:\beta _{\tau }=0}, which can be tested using the estimatorβτ^{\displaystyle {\hat {\beta _{\tau }}}}and its limit distribution. 
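The linear programming reformulation mentioned above can be written out explicitly: each residual is split into nonnegative positive and negative parts u⁺ and u⁻, and one minimizes τ·Σu⁺ + (1 − τ)·Σu⁻ subject to Xβ + u⁺ − u⁻ = y. A sketch using scipy.optimize.linprog on synthetic data (the data-generating line y = 1 + 2x + noise is purely illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Minimize sum(rho_tau(y - X @ beta)) via the LP:
    min tau * 1'u_pos + (1 - tau) * 1'u_neg  s.t.  X beta + u_pos - u_neg = y,  u_pos, u_neg >= 0."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(0, 10, size=200)])
y = 1.0 + 2.0 * X[:, 1] + rng.standard_normal(200)
print(quantile_regression(X, y, tau=0.5))   # approximately [1, 2]: the conditional median line
```

For τ = 0.5 this reduces to least absolute deviations regression, as noted above.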
Thegoodness of fitfor quantile regression for theτ{\displaystyle \tau }quantile can be defined as:[13]R1(τ)=1−V^τV~τ,{\displaystyle R^{1}(\tau )=1-{\frac {{\hat {V}}_{\tau }}{{\tilde {V}}_{\tau }}},}whereV^τ{\displaystyle {\hat {V}}_{\tau }}is the minimized expected loss function under the full model, whileV~τ{\displaystyle {\tilde {V}}_{\tau }}is the expected loss function under the intercept-only model. Because quantile regression does not normally assume a parametric likelihood for the conditional distributions of Y|X, the Bayesian methods work with a working likelihood. A convenient choice is the asymmetric Laplacian likelihood,[14]because the mode of the resulting posterior under a flat prior is the usual quantile regression estimates. The posterior inference, however, must be interpreted with care. Yang, Wang and He[15]provided a posterior variance adjustment for valid inference. In addition, Yang and He[16]showed that one can have asymptotically valid posterior inference if the working likelihood is chosen to be the empirical likelihood. Beyondsimple linear regression, there are several machine learning methods that can be extended to quantile regression. A switch from the squared error to the tilted absolute value loss function (a.k.a. thepinball loss[17]) allows gradient descent-based learning algorithms to learn a specified quantile instead of the mean. It means that we can apply allneural networkanddeep learningalgorithms to quantile regression,[18][19]which is then referred to asnonparametricquantile regression.[20]Tree-based learning algorithms are also available for quantile regression (see, e.g., Quantile Regression Forests,[21]as a simple generalization ofRandom Forests). If the response variable is subject to censoring, the conditional mean is not identifiable without additional distributional assumptions, but the conditional quantile is often identifiable. For recent work on censored quantile regression, see: Portnoy[22]and Wang and Wang[23] Example (2): LetYc=max(0,Y){\displaystyle Y^{c}=\max(0,Y)}andQY|X=Xβτ{\displaystyle Q_{Y|X}=X\beta _{\tau }}. ThenQYc|X(τ)=max(0,Xβτ){\displaystyle Q_{Y^{c}|X}(\tau )=\max(0,X\beta _{\tau })}. This is the censored quantile regression model: estimated values can be obtained without making any distributional assumptions, but at the cost of computational difficulty,[24]some of which can be avoided by using a simple three step censored quantile regression procedure as an approximation.[25] For random censoring on the response variables, the censored quantile regression of Portnoy (2003)[22]provides consistent estimates of all identifiable quantile functions based on reweighting each censored point appropriately. Censored quantile regression has close links tosurvival analysis. The quantile regression loss needs to be adapted in the presence of heteroscedastic errors in order to beefficient.[26] Numerous statistical software packages include implementations of quantile regression:
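For instance, scikit-learn's gradient boosting implements the pinball loss directly (loss="quantile", with the target quantile passed as alpha); a short sketch on synthetic heteroscedastic data, purely for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + (0.2 + 0.1 * X[:, 0]) * rng.standard_normal(500)   # noise grows with x

# One model per quantile: each is fit by minimizing the pinball loss for its own quantile level.
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y) for q in (0.1, 0.5, 0.9)}
grid = np.array([[2.0], [5.0], [8.0]])
for q, model in models.items():
    print(q, np.round(model.predict(grid), 2))   # the 0.1 and 0.9 curves spread apart as x grows
```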
https://en.wikipedia.org/wiki/Quantile_regression
Principia Cybernetica is an international cooperation of scientists in the field of cybernetics and systems science, especially known for their website, Principia Cybernetica. They have dedicated their organization to what they call "a computer-supported evolutionary-systemic philosophy, in the context of the transdisciplinary academic fields of Systems Science and Cybernetics".[1] Principia Cybernetica was initiated in 1989 in the USA by Cliff Joslyn and Valentin Turchin, and broadened to Europe a year later when Francis Heylighen from Belgium joined the cooperation. The Principia Cybernetica Web,[4] which went online in 1993, is one of the first complex websites in the world. It contains content on cybernetics, systems theory, complexity, and related approaches. Especially during the 1990s, Principia Cybernetica organized a series of workshops and international symposia on cybernetic themes.[5] The 1st Principia Cybernetica Workshop, held in June 1991 in Brussels, was attended by many cyberneticists, including Harry Bronitz, Gordon Pask, J.L. Elohim, Robert Glueck, Ranulph Glanville, Annemie Van Kerkhoven, Don McNeil, Elan Moritz, Cliff Joslyn, A. Comhaire and Valentin Turchin.[5]
https://en.wikipedia.org/wiki/Principia_Cybernetica
Ingeometry, asimplex(plural:simplexesorsimplices) is a generalization of the notion of atriangleortetrahedronto arbitrarydimensions. The simplex is so-named because it represents the simplest possiblepolytopein any given dimension. For example, Specifically, ak-simplexis ak-dimensionalpolytopethat is theconvex hullof itsk+ 1vertices. More formally, suppose thek+ 1pointsu0,…,uk{\displaystyle u_{0},\dots ,u_{k}}areaffinely independent, which means that thekvectorsu1−u0,…,uk−u0{\displaystyle u_{1}-u_{0},\dots ,u_{k}-u_{0}}arelinearly independent. Then, the simplex determined by them is the set of pointsC={θ0u0+⋯+θkuk|∑i=0kθi=1andθi≥0fori=0,…,k}.{\displaystyle C=\left\{\theta _{0}u_{0}+\dots +\theta _{k}u_{k}~{\Bigg |}~\sum _{i=0}^{k}\theta _{i}=1{\mbox{ and }}\theta _{i}\geq 0{\mbox{ for }}i=0,\dots ,k\right\}.} Aregular simplex[1]is a simplex that is also aregular polytope. A regulark-simplex may be constructed from a regular(k− 1)-simplex by connecting a new vertex to all original vertices by the common edge length. Thestandard simplexorprobability simplex[2]is the(k− 1)-dimensional simplex whose vertices are thekstandardunit vectorsinRk{\displaystyle \mathbf {R} ^{k}}, or in other words{x∈Rk:x0+⋯+xk−1=1,xi≥0fori=0,…,k−1}.{\displaystyle \left\{x\in \mathbf {R} ^{k}:x_{0}+\dots +x_{k-1}=1,x_{i}\geq 0{\text{ for }}i=0,\dots ,k-1\right\}.} Intopologyandcombinatorics, it is common to "glue together" simplices to form asimplicial complex. The geometric simplex and simplicial complex should not be confused with theabstract simplicial complex, in which a simplex is simply afinite setand the complex is a family of such sets that is closed under taking subsets. The concept of a simplex was known toWilliam Kingdon Clifford, who wrote about these shapes in 1886 but called them "prime confines".Henri Poincaré, writing aboutalgebraic topologyin 1900, called them "generalized tetrahedra". In 1902Pieter Hendrik Schoutedescribed the concept first with theLatinsuperlativesimplicissimum("simplest") and then with the same Latin adjective in the normal formsimplex("simple").[3] Theregular simplexfamily is the first of threeregular polytopefamilies, labeled byDonald Coxeterasαn, the other two being thecross-polytopefamily, labeled asβn, and thehypercubes, labeled asγn. A fourth family, thetessellation ofn-dimensional space by infinitely many hypercubes, he labeled asδn.[4] Theconvex hullof anynonemptysubsetof then+ 1points that define ann-simplex is called afaceof the simplex. Faces are simplices themselves. In particular, the convex hull of a subset of sizem+ 1(of then+ 1defining points) is anm-simplex, called anm-faceof then-simplex. The 0-faces (i.e., the defining points themselves as sets of size 1) are called thevertices(singular: vertex), the 1-faces are called theedges, the (n− 1)-faces are called thefacets, and the solen-face is the wholen-simplex itself. In general, the number ofm-faces is equal to thebinomial coefficient(n+1m+1){\displaystyle {\tbinom {n+1}{m+1}}}.[5]Consequently, the number ofm-faces of ann-simplex may be found in column (m+ 1) of row (n+ 1) ofPascal's triangle. A simplexAis acofaceof a simplexBifBis a face ofA.Faceandfacetcan have different meanings when describing types of simplices in asimplicial complex. The extendedf-vectorfor ann-simplex can be computed by(1,1)n+1, like the coefficients ofpolynomial products. For example, a7-simplexis (1,1)8= (1,2,1)4= (1,4,6,4,1)2= (1,8,28,56,70,56,28,8,1). 
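The face counts are easily computed from the binomial-coefficient formula above; a short Python sketch reproducing the 7-simplex numbers:

```python
from math import comb

def face_counts(n):
    """Number of m-faces of an n-simplex for m = 0, ..., n: binomial(n + 1, m + 1)."""
    return [comb(n + 1, m + 1) for m in range(n + 1)]

print(face_counts(7))              # [8, 28, 56, 70, 56, 28, 8, 1]: vertices, edges, ..., the simplex itself
print(sum(face_counts(7)) + 1)     # 256 = 2^8, all faces including the empty face
```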
The number of 1-faces (edges) of then-simplex is then-thtriangle number, the number of 2-faces of then-simplex is the(n− 1)thtetrahedron number, the number of 3-faces of then-simplex is the(n− 2)th 5-cell number, and so on. Ann-simplex is thepolytopewith the fewest vertices that requiresndimensions. Consider a line segmentABas a shape in a 1-dimensional space (the 1-dimensional space is the line in which the segment lies). One can place a new pointCsomewhere off the line. The new shape, triangleABC, requires two dimensions; it cannot fit in the original 1-dimensional space. The triangle is the 2-simplex, a simple shape that requires two dimensions. Consider a triangleABC, a shape in a 2-dimensional space (the plane in which the triangle resides). One can place a new pointDsomewhere off the plane. The new shape, tetrahedronABCD, requires three dimensions; it cannot fit in the original 2-dimensional space. The tetrahedron is the 3-simplex, a simple shape that requires three dimensions. Consider tetrahedronABCD, a shape in a 3-dimensional space (the 3-space in which the tetrahedron lies). One can place a new pointEsomewhere outside the 3-space. The new shapeABCDE, called a 5-cell, requires four dimensions and is called the 4-simplex; it cannot fit in the original 3-dimensional space. (It also cannot be visualized easily.) This idea can be generalized, that is, adding a single new point outside the currently occupied space, which requires going to the next higher dimension to hold the new shape. This idea can also be worked backward: the line segment we started with is a simple shape that requires a 1-dimensional space to hold it; the line segment is the 1-simplex. The line segment itself was formed by starting with a single point in 0-dimensional space (this initial point is the 0-simplex) and adding a second point, which required the increase to 1-dimensional space. More formally, an(n+ 1)-simplex can be constructed as a join (∨ operator) of ann-simplex and a point,( ). An(m+n+ 1)-simplex can be constructed as a join of anm-simplex and ann-simplex. The two simplices are oriented to be completely normal from each other, with translation in a direction orthogonal to both of them. A 1-simplex is the join of two points:( ) ∨ ( ) = 2 ⋅ ( ). A general 2-simplex (scalene triangle) is the join of three points:( ) ∨ ( ) ∨ ( ). Anisosceles triangleis the join of a 1-simplex and a point:{ } ∨ ( ). Anequilateral triangleis 3 ⋅ ( ) or {3}. A general 3-simplex is the join of 4 points:( ) ∨ ( ) ∨ ( ) ∨ ( ). A 3-simplex with mirror symmetry can be expressed as the join of an edge and two points:{ } ∨ ( ) ∨ ( ). A 3-simplex with triangular symmetry can be expressed as the join of an equilateral triangle and 1 point:3.( )∨( )or{3}∨( ). Aregular tetrahedronis4 ⋅ ( )or {3,3} and so on. In some conventions,[7]the empty set is defined to be a (−1)-simplex. The definition of the simplex above still makes sense ifn= −1. This convention is more common in applications to algebraic topology (such assimplicial homology) than to the study of polytopes. ThesePetrie polygons(skew orthogonal projections) show all the vertices of the regular simplex on acircle, and all vertex pairs connected by edges. Thestandardn-simplex(orunitn-simplex) is the subset ofRn+1given by The simplexΔnlies in theaffine hyperplaneobtained by removing the restrictionti≥ 0in the above definition. Then+ 1vertices of the standardn-simplex are the pointsei∈Rn+1, where Astandard simplexis an example of a0/1-polytope, with all coordinates as 0 or 1. 
It can also be seen onefacetof a regular(n+ 1)-orthoplex. There is a canonical map from the standardn-simplex to an arbitraryn-simplex with vertices (v0, ...,vn) given by The coefficientstiare called thebarycentric coordinatesof a point in then-simplex. Such a general simplex is often called anaffinen-simplex, to emphasize that the canonical map is anaffine transformation. It is also sometimes called anoriented affinen-simplexto emphasize that the canonical map may beorientation preservingor reversing. More generally, there is a canonical map from the standard(n−1){\displaystyle (n-1)}-simplex (withnvertices) onto anypolytopewithnvertices, given by the same equation (modifying indexing): These are known asgeneralized barycentric coordinates, and express every polytope as theimageof a simplex:Δn−1↠P.{\displaystyle \Delta ^{n-1}\twoheadrightarrow P.} A commonly used function fromRnto the interior of the standard(n−1){\displaystyle (n-1)}-simplex is thesoftmax function, or normalized exponential function; this generalizes thestandard logistic function. An alternative coordinate system is given by taking theindefinite sum: This yields the alternative presentation byorder,namely as nondecreasingn-tuples between 0 and 1: Geometrically, this is ann-dimensional subset ofRn{\displaystyle \mathbf {R} ^{n}}(maximal dimension, codimension 0) rather than ofRn+1{\displaystyle \mathbf {R} ^{n+1}}(codimension 1). The facets, which on the standard simplex correspond to one coordinate vanishing,ti=0,{\displaystyle t_{i}=0,}here correspond to successive coordinates being equal,si=si+1,{\displaystyle s_{i}=s_{i+1},}while theinteriorcorresponds to the inequalities becomingstrict(increasing sequences). A key distinction between these presentations is the behavior under permuting coordinates – the standard simplex is stabilized by permuting coordinates, while permuting elements of the "ordered simplex" do not leave it invariant, as permuting an ordered sequence generally makes it unordered. Indeed, the ordered simplex is a (closed)fundamental domainfor theactionof thesymmetric groupon then-cube, meaning that the orbit of the ordered simplex under then! elements of the symmetric group divides then-cube inton!{\displaystyle n!}mostly disjoint simplices (disjoint except for boundaries), showing that this simplex has volume1/n!. Alternatively, the volume can be computed by an iterated integral, whose successive integrands are 1,x,x2/2,x3/3!, ...,xn/n!. A further property of this presentation is that it uses the order but not addition, and thus can be defined in any dimension over any ordered set, and for example can be used to define an infinite-dimensional simplex without issues of convergence of sums. Especially in numerical applications ofprobability theory, aprojectiononto the standard simplex is of interest. Given⁠p{\displaystyle p}⁠, possibly with coordinates that are negative or in excess of 1, the closest point⁠t{\displaystyle t}⁠on the simplex has coordinates whereΔ{\displaystyle \Delta }is chosen such that∑imax{pi+Δ,0}=1.{\textstyle \sum _{i}\max\{p_{i}+\Delta \,,0\}=1.} Δ{\displaystyle \Delta }can be easily calculated from sorting the coordinates of⁠p{\displaystyle p}⁠.[8]The sorting approach takesO(nlog⁡n){\displaystyle O(n\log n)}complexity, which can be improved toO(n)complexity viamedian-findingalgorithms.[9]Projecting onto the simplex is computationally similar to projecting onto theℓ1{\displaystyle \ell _{1}}ball.Also see Integer programming. 
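The sorting-based projection just described takes only a few lines; a NumPy sketch following the max(p_i + Δ, 0) characterization above (the input vector is an arbitrary example):

```python
import numpy as np

def project_to_simplex(p):
    """Euclidean projection of p onto the standard probability simplex, via sorting (O(n log n))."""
    u = np.sort(p)[::-1]                                 # coordinates in decreasing order
    css = np.cumsum(u)
    ks = np.arange(1, p.size + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]    # largest index still contributing positively
    delta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(p + delta, 0.0)

t = project_to_simplex(np.array([0.7, 1.4, -0.3]))
print(t, t.sum())   # [0.15 0.85 0.  ] 1.0 -- nonnegative coordinates summing to 1
```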
Finally, a simple variant is to replace "summing to 1" with "summing to at most 1"; this raises the dimension by 1, so to simplify notation, the indexing changes: This yields ann-simplex as a corner of then-cube, and is a standard orthogonal simplex. This is the simplex used in thesimplex method, which is based at the origin, and locally models a vertex on a polytope withnfacets. One way to write down a regularn-simplex inRnis to choose two points to be the first two vertices, choose a third point to make an equilateral triangle, choose a fourth point to make a regular tetrahedron, and so on. Each step requires satisfying equations that ensure that each newly chosen vertex, together with the previously chosen vertices, forms a regular simplex. There are several sets of equations that can be written down and used for this purpose. These include the equality of all the distances between vertices; the equality of all the distances from vertices to the center of the simplex; the fact that the angle subtended through the new vertex by any two previously chosen vertices isπ/3{\displaystyle \pi /3}; and the fact that the angle subtended through the center of the simplex by any two vertices isarccos⁡(−1/n){\displaystyle \arccos(-1/n)}. It is also possible to directly write down a particular regularn-simplex inRnwhich can then be translated, rotated, and scaled as desired. One way to do this is as follows. Denote thebasis vectorsofRnbye1throughen. Begin with the standard(n− 1)-simplex which is the convex hull of the basis vectors. By adding an additional vertex, these become a face of a regularn-simplex. The additional vertex must lie on the line perpendicular to the barycenter of the standard simplex, so it has the form(α/n, ...,α/n)for somereal numberα. Since the squared distance between two basis vectors is 2, in order for the additional vertex to form a regularn-simplex, the squared distance between it and any of the basis vectors must also be 2. This yields aquadratic equationforα. Solving this equation shows that there are two choices for the additional vertex: Either of these, together with the standard basis vectors, yields a regularn-simplex. The above regularn-simplex is not centered on the origin. It can be translated to the origin by subtracting the mean of its vertices. By rescaling, it can be given unit side length. This results in the simplex whose vertices are: for1≤i≤n{\displaystyle 1\leq i\leq n}, and Note that there are two sets of vertices described here. One set uses+{\displaystyle +}in each calculation. The other set uses−{\displaystyle -}in each calculation. This simplex is inscribed in a hypersphere of radiusn/(2(n+1)){\displaystyle {\sqrt {n/(2(n+1))}}}. A different rescaling produces a simplex that is inscribed in a unit hypersphere. When this is done, its vertices are where1≤i≤n{\displaystyle 1\leq i\leq n}, and The side length of this simplex is2(n+1)/n{\textstyle {\sqrt {2(n+1)/n}}}. A highly symmetric way to construct a regularn-simplex is to use a representation of thecyclic groupZn+1byorthogonal matrices. This is ann×northogonal matrixQsuch thatQn+1=Iis theidentity matrix, but no lower power ofQis. Applying powers of thismatrixto an appropriate vectorvwill produce the vertices of a regularn-simplex. To carry this out, first observe that for any orthogonal matrixQ, there is a choice of basis in whichQis a block diagonal matrix where eachQiis orthogonal and either2 × 2or1 × 1. In order forQto have ordern+ 1, all of these matrices must have orderdividingn+ 1. 
Therefore eachQiis either a1 × 1matrix whose only entry is1or, ifnisodd,−1; or it is a2 × 2matrix of the form where eachωiis anintegerbetween zero andninclusive. A sufficient condition for the orbit of a point to be a regular simplex is that the matricesQiform a basis for the non-trivial irreducible real representations ofZn+1, and the vector being rotated is not stabilized by any of them. In practical terms, forneventhis means that every matrixQiis2 × 2, there is an equality of sets and, for everyQi, the entries ofvupon whichQiacts are not both zero. For example, whenn= 4, one possible matrix is Applying this to the vector(1, 0, 1, 0)results in the simplex whose vertices are each of which has distance √5 from the others. Whennis odd, the condition means that exactly one of the diagonal blocks is1 × 1, equal to−1, and acts upon a non-zero entry ofv; while the remaining diagonal blocks, sayQ1, ...,Q(n− 1) / 2, are2 × 2, there is an equality of sets and each diagonal block acts upon a pair of entries ofvwhich are not both zero. So, for example, whenn= 3, the matrix can be For the vector(1, 0, 1/√2), the resulting simplex has vertices each of which has distance 2 from the others. Thevolumeof ann-simplex inn-dimensional space with vertices(v0, ...,vn)is where each column of then×ndeterminantis avectorthat points from vertexv0to another vertexvk.[10]This formula is particularly useful whenv0{\displaystyle v_{0}}is the origin. The expression employs aGram determinantand works even when then-simplex's vertices are in a Euclidean space with more thanndimensions, e.g., a triangle inR3{\displaystyle \mathbf {R} ^{3}}. A more symmetric way to compute the volume of ann-simplex inRn{\displaystyle \mathbf {R} ^{n}}is Another common way of computing the volume of the simplex is via theCayley–Menger determinant, which works even when the n-simplex's vertices are in a Euclidean space with more than n dimensions.[11] Without the1/n!it is the formula for the volume of ann-parallelotope. This can be understood as follows: Assume thatPis ann-parallelotope constructed on a basis(v0,e1,…,en){\displaystyle (v_{0},e_{1},\ldots ,e_{n})}ofRn{\displaystyle \mathbf {R} ^{n}}. Given apermutationσ{\displaystyle \sigma }of{1,2,…,n}{\displaystyle \{1,2,\ldots ,n\}}, call a list of verticesv0,v1,…,vn{\displaystyle v_{0},\ v_{1},\ldots ,v_{n}}an-path if (so there aren!n-paths andvn{\displaystyle v_{n}}does not depend on the permutation). The following assertions hold: IfPis the unitn-hypercube, then the union of then-simplexes formed by the convex hull of eachn-path isP, and these simplexes are congruent and pairwise non-overlapping.[12]In particular, the volume of such a simplex is IfPis a general parallelotope, the same assertions hold except that it is no longer true, in dimension > 2, that the simplexes need to be pairwise congruent; yet their volumes remain equal, because then-parallelotope is the image of the unitn-hypercube by thelinear isomorphismthat sends the canonical basis ofRn{\displaystyle \mathbf {R} ^{n}}toe1,…,en{\displaystyle e_{1},\ldots ,e_{n}}. As previously, this implies that the volume of a simplex coming from an-path is: Conversely, given ann-simplex(v0,v1,v2,…vn){\displaystyle (v_{0},\ v_{1},\ v_{2},\ldots v_{n})}ofRn{\displaystyle \mathbf {R} ^{n}}, it can be supposed that the vectorse1=v1−v0,e2=v2−v1,…en=vn−vn−1{\displaystyle e_{1}=v_{1}-v_{0},\ e_{2}=v_{2}-v_{1},\ldots e_{n}=v_{n}-v_{n-1}}form a basis ofRn{\displaystyle \mathbf {R} ^{n}}. 
Considering the parallelotope constructed fromv0{\displaystyle v_{0}}ande1,…,en{\displaystyle e_{1},\ldots ,e_{n}}, one sees that the previous formula is valid for every simplex. Finally, the formula at the beginning of this section is obtained by observing that From this formula, it follows immediately that the volume under a standardn-simplex (i.e. between the origin and the simplex inRn+1) is The volume of a regularn-simplex with unit side length is as can be seen by multiplying the previous formula byxn+1, to get the volume under then-simplex as a function of its vertex distancexfrom the origin, differentiating with respect tox, atx=1/2{\displaystyle x=1/{\sqrt {2}}}(where then-simplex side length is 1), and normalizing by the lengthdx/n+1{\displaystyle dx/{\sqrt {n+1}}}of the increment,(dx/(n+1),…,dx/(n+1)){\displaystyle (dx/(n+1),\ldots ,dx/(n+1))}, along the normal vector. Any two(n− 1)-dimensional faces of a regularn-dimensional simplex are themselves regular(n− 1)-dimensional simplices, and they have the samedihedral angleofcos−1(1/n).[13][14] This can be seen by noting that the center of the standard simplex is(1n+1,…,1n+1){\textstyle \left({\frac {1}{n+1}},\dots ,{\frac {1}{n+1}}\right)}, and the centers of its faces are coordinate permutations of(0,1n,…,1n){\textstyle \left(0,{\frac {1}{n}},\dots ,{\frac {1}{n}}\right)}. Then, by symmetry, the vector pointing from(1n+1,…,1n+1){\textstyle \left({\frac {1}{n+1}},\dots ,{\frac {1}{n+1}}\right)}to(0,1n,…,1n){\textstyle \left(0,{\frac {1}{n}},\dots ,{\frac {1}{n}}\right)}is perpendicular to the faces. So the vectors normal to the faces are permutations of(−n,1,…,1){\displaystyle (-n,1,\dots ,1)}, from which the dihedral angles are calculated. An "orthogonal corner" means here that there is a vertex at which all adjacent edges are pairwise orthogonal. It immediately follows that all adjacentfacesare pairwise orthogonal. Such simplices are generalizations of right triangles and for them there exists ann-dimensional version of thePythagorean theorem: The sum of the squared(n− 1)-dimensional volumes of the facets adjacent to the orthogonal corner equals the squared(n− 1)-dimensional volume of the facet opposite of the orthogonal corner. whereA1…An{\displaystyle A_{1}\ldots A_{n}}are facets being pairwise orthogonal to each other but not orthogonal toA0{\displaystyle A_{0}}, which is the facet opposite the orthogonal corner.[15] For a 2-simplex, the theorem is thePythagorean theoremfor triangles with a right angle and for a 3-simplex it isde Gua's theoremfor a tetrahedron with an orthogonal corner. TheHasse diagramof the face lattice of ann-simplex is isomorphic to the graph of the(n+ 1)-hypercube's edges, with the hypercube's vertices mapping to each of then-simplex's elements, including the entire simplex and the null polytope as the extreme points of the lattice (mapped to two opposite vertices on the hypercube). This fact may be used to efficiently enumerate the simplex's face lattice, since more general face lattice enumeration algorithms are more computationally expensive. Then-simplex is also thevertex figureof the(n+ 1)-hypercube. It is also thefacetof the(n+ 1)-orthoplex. Topologically, ann-simplex isequivalentto ann-ball. Everyn-simplex is ann-dimensionalmanifold with corners. In probability theory, the points of the standardn-simplex in(n+ 1)-space form the space of possible probability distributions on a finite set consisting ofn+ 1possible outcomes. 
The correspondence is as follows: For each distribution described as an ordered (n + 1)-tuple of probabilities whose sum is (necessarily) 1, we associate the point of the simplex whose barycentric coordinates are precisely those probabilities. That is, the kth vertex of the simplex is assigned the kth probability of the (n + 1)-tuple as its barycentric coefficient. This correspondence is an affine homeomorphism.

Aitchison geometry is a natural way to construct an inner product space from the standard simplex Δ^(D−1). It equips the simplex with operations that play the roles of addition, scalar multiplication, and an inner product. Since all simplices are self-dual, they can form a series of compounds.

In algebraic topology, simplices are used as building blocks to construct an interesting class of topological spaces called simplicial complexes. These spaces are built from simplices glued together in a combinatorial fashion. Simplicial complexes are used to define a certain kind of homology called simplicial homology.

A finite set of k-simplexes embedded in an open subset of Rn is called an affine k-chain. The simplexes in a chain need not be unique; they may occur with multiplicity. Rather than using standard set notation to denote an affine chain, it is instead the standard practice to use plus signs to separate each member in the set. If some of the simplexes have the opposite orientation, these are prefixed by a minus sign. If some of the simplexes occur in the set more than once, these are prefixed with an integer count. Thus, an affine chain takes the symbolic form of a sum with integer coefficients.

Note that each facet of an n-simplex is an affine (n − 1)-simplex, and thus the boundary of an n-simplex is an affine (n − 1)-chain. Thus, if we denote one positively oriented affine simplex as σ = (v0, ..., vn), with the vj denoting the vertices, then the boundary ∂σ of σ is the chain {\displaystyle \partial \sigma =\sum _{j=0}^{n}(-1)^{j}(v_{0},\ldots ,v_{j-1},v_{j+1},\ldots ,v_{n}).} It follows from this expression, and the linearity of the boundary operator, that the boundary of the boundary of a simplex is zero: ∂²σ = 0. Likewise, the boundary of the boundary of a chain is zero: ∂²ρ = 0.

More generally, a simplex (and a chain) can be embedded into a manifold by means of a smooth, differentiable map f: Rn → M. In this case, both the summation convention for denoting the set and the boundary operation commute with the embedding. That is, f(∑i ai σi) = ∑i ai f(σi), where the ai are the integers denoting orientation and multiplicity. For the boundary operator ∂, one has ∂f(ρ) = f(∂ρ), where ρ is a chain. The boundary operation commutes with the mapping because, in the end, the chain is defined as a set and little more, and the set operation always commutes with the map operation (by definition of a map).

A continuous map f: σ → X to a topological space X is frequently referred to as a singular n-simplex. (A map is generally called "singular" if it fails to have some desirable property such as continuity and, in this case, the term is meant to reflect the fact that the continuous map need not be an embedding.)[16]

Since classical algebraic geometry allows one to talk about polynomial equations but not inequalities, the algebraic standard n-simplex is commonly defined as the subset of affine (n + 1)-dimensional space where all coordinates sum up to 1 (thus leaving out the inequality part).
The algebraic description of this set is {\displaystyle \Delta ^{n}:=\left\{x\in \mathbb {A} ^{n+1}~{\Bigg |}~\sum _{i=1}^{n+1}x_{i}=1\right\},} which equals the scheme-theoretic description {\displaystyle \Delta _{n}(R)=\operatorname {Spec} (R[\Delta ^{n}])} with {\displaystyle R[\Delta ^{n}]:=R[x_{1},\ldots ,x_{n+1}]\left/\left(1-\sum x_{i}\right)\right.} the ring of regular functions on the algebraic n-simplex (for any ring R). By using the same definitions as for the classical n-simplex, the n-simplices for different dimensions n assemble into one simplicial object, while the rings R[Δn] assemble into one cosimplicial object R[Δ•] (in the category of schemes resp. rings, since the face and degeneracy maps are all polynomial). The algebraic n-simplices are used in higher K-theory and in the definition of higher Chow groups.
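To make the earlier coordinate constructions concrete, the sketch below builds a regular n-simplex in Rn by the basis-vector method described above (the n standard basis vectors plus one extra vertex on the main diagonal), then checks the side lengths, the circumradius, and the Gram-determinant volume formula. The function names are not from the source, and the closed-form volume sqrt(n+1)/(n! 2^(n/2)) for a unit-side regular simplex is quoted as the standard formula.

```python
import numpy as np
from math import factorial, sqrt
from itertools import combinations

def regular_simplex(n):
    """Regular n-simplex in R^n: the n standard basis vectors plus the extra
    vertex (a/n, ..., a/n), where a = 1 + sqrt(n + 1) solves the quadratic that
    makes every squared vertex distance equal to 2 (the other root works too)."""
    a = 1 + sqrt(n + 1)
    verts = np.vstack([np.eye(n), np.full(n, a / n)])
    verts -= verts.mean(axis=0)                      # translate centroid to the origin
    verts /= np.linalg.norm(verts[0] - verts[1])     # rescale to unit side length
    return verts

def simplex_volume(verts):
    """Volume via the Gram-determinant form sqrt(det(D^T D)) / n!, where the
    columns of D are the edge vectors v_k - v_0; this also works when the
    vertices sit in a higher-dimensional Euclidean space."""
    verts = np.asarray(verts, dtype=float)
    D = (verts[1:] - verts[0]).T
    n = D.shape[1]
    return sqrt(abs(np.linalg.det(D.T @ D))) / factorial(n)

n = 4
V = regular_simplex(n)
sides = {round(np.linalg.norm(u - w), 10) for u, w in combinations(V, 2)}
print(sides)                                          # {1.0}: all edges have unit length
print(np.linalg.norm(V[0]), sqrt(n / (2 * (n + 1))))  # circumradius sqrt(n/(2(n+1)))
print(simplex_volume(V), sqrt(n + 1) / (factorial(n) * 2 ** (n / 2)))  # volumes agree
```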
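The boundary operator on affine chains can likewise be sketched symbolically. A chain is modelled as a dict from oriented simplices (tuples of vertex labels) to integer coefficients, using the usual alternating-sign convention for faces; the example confirms that the boundary of a boundary vanishes. The representation is an illustration, not a reference implementation.

```python
from collections import defaultdict

def boundary(chain):
    """Boundary of an affine chain.

    A chain maps an oriented simplex (a tuple of vertex labels) to an integer
    coefficient. The i-th face carries the sign (-1)^i, i.e.
    d(v0,...,vn) = sum_i (-1)^i (v0,...,vi-1,vi+1,...,vn).
    """
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            out[face] += coeff * (-1) ** i
    return {s: c for s, c in out.items() if c != 0}

sigma = {("v0", "v1", "v2"): 1}   # a single positively oriented 2-simplex
d1 = boundary(sigma)
print(d1)                          # its three oriented edges with alternating signs
print(boundary(d1))                # {} : the boundary of the boundary is zero
```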
https://en.wikipedia.org/wiki/Simplex
TheAdvanced Research Projects Agency for Health(ARPA-H) is an agency within theDepartment of Health and Human Services.[1]Its mission is to "make pivotal investments in break-through technologies and broadly applicable platforms, capabilities, resources, and solutions that have the potential to transform important areas of medicine and health for the benefit of all patients and that cannot readily be accomplished through traditional research or commercial activity."[2] ARPA-H was approved by Congress with the passing of H.R. 2471, theConsolidated Appropriations Act, 2022and was signed into Public Law 117-103 by U.S. presidentJoe Bidenon March 15, 2022.[3]15 days later Health and Human Services SecretaryXavier Becerraannounced that the agency will have access to the resources of the National Institutes of Health, but will answer to theU.S. Secretary of Health and Human Services.[4]The agency initially has a $1 billion budget to be used before fiscal year 2025 (October 2024) and the Biden administration has requested much more funding from Congress. In December 2022, theConsolidated Appropriations Act, 2023(Pub.L. 117–328) provided $1.5 billion for ARPA-H for fiscal year 2023. The Biden administration requested and received $2.5 billion for FY2024, and had spent $400 million in research grants by August 13, 2024.[5] In March 2023, ARPA-H announced one of its three headquarters locations would be in theWashington metropolitan area.[6][7]In September 2023, ARPA-H announced that a second hub would be located inCambridge, Massachusetts, following a bid led byU.S. representativeRichard NealfromMassachusetts's 1st congressional districtandUniversity of Massachusetts SystempresidentMarty Meehanto have the agency locate a hub in theGreater Boston area.[8][9]The third patient engagement-focused hub was established in Dallas, Texas.[10] TheDefense Advanced Research Projects Agency(DARPA, formerly ARPA) has been the military's in-house innovator since 1958, a year after the USSR launchedSputnik. DARPA is widely known for creatingARPAnet, the predecessor of theinternet, and has been instrumental in advancing hardened electronics,brain-computer interfacetechnology,drones, andstealth technology. Inspired by the success of DARPA, in 2002 theHomeland Security Advanced Research Projects Agency(HSARPA) was created and in 2006 theIntelligence Advanced Research Projects Activity(IARPA) was created. This was followed by theAdvanced Research Projects Agency–Energy(ARPA-E) in 2009 and theAdvanced Research Projects Agency–Infrastructure(ARPA-I) in 2022. DARPA also inspired theAdvanced Research and Invention Agencyin the UK and in 2021 the Biden administration proposed ARPA-C for climate research.[11] The Suzanne Wright Foundationproposed "HARPA" in 2017 to focus on pancreatic cancer and other challenging diseases.[12]A white paper was published by former Obama White House staffers,Michael StebbinsandGeoffrey Lingthrough the Day One Project thatproposedthe creation of a new federal agency modeled on DARPA, but focused on health. That proposal was adopted by President Biden's campaign and was the model used for establishing ARPA-H.[13]In June 2021 noted biologistsFrancis S. Collins(then head of the NIH),Tara Schwetz,Lawrence Tabak, andEric Landerpenned an article inSciencesupporting the idea.[14]Dr. Collins became an important champion of the idea on Capitol Hill and the legislation garnered numerous sponsors in the117th Congress. 
In September 2022, Renee Wegrzyn was appointed as the agency's inaugural director.[15][16][17] She was dismissed by the Trump administration in February 2025.[18] A White House white paper identifies a number of potential directions for technological development that could occur under the direction of ARPA-H, including cancer vaccines, pandemic preparedness and prevention technologies, less intrusive wearable blood glucose monitors, and patient-specific T-cell therapies.[19] Additionally, the proposal suggests that ARPA-H focus on platforms to reduce health disparities in maternal morbidity and mortality and to improve how provided medications are taken. One of the first grants from the organization was part of its DIGIHEALS initiative, which funds innovative research that aims to protect the United States health care system against hostile online threats. Christian Dameff and Jeff Tully, medical doctors and medical cybersecurity researchers at the University of California San Diego School of Medicine, as well as cybersecurity expert Stefan Savage, were named investigators of the Healthcare Ransomware Resiliency and Response Program (H-R3P) project.[20][21]
https://en.wikipedia.org/wiki/Advanced_Research_Projects_Agency_for_Health
TheDiffie–Hellman problem(DHP) is a mathematical problem first proposed byWhitfield DiffieandMartin Hellman[1]in the context ofcryptographyand serves as the theoretical basis of theDiffie–Hellman key exchangeand its derivatives. The motivation for this problem is that many security systems useone-way functions: mathematical operations that are fast to compute, but hard to reverse. For example, they enable encrypting a message, but reversing the encryption is difficult. If solving the DHP were easy, these systems would be easily broken. The Diffie–Hellman problem is stated informally as follows: Formally,g{\displaystyle g}is ageneratorof somegroup(typically themultiplicative groupof afinite fieldor anelliptic curvegroup) andx{\displaystyle x}andy{\displaystyle y}are randomly chosen integers. For example, in the Diffie–Hellman key exchange, an eavesdropper observesgx{\displaystyle g^{x}}andgy{\displaystyle g^{y}}exchanged as part of the protocol, and the two parties both compute the shared keygxy{\displaystyle g^{xy}}. A fast means of solving the DHP would allow an eavesdropper to violate the privacy of the Diffie–Hellman key exchange and many of its variants, includingElGamal encryption. Incryptography, for certain groups, it isassumedthat the DHP is hard, and this is often called theDiffie–Hellman assumption. The problem has survived scrutiny for a few decades and no "easy" solution has yet been publicized. As of 2006, the most efficient means known to solve the DHP is to solve thediscrete logarithm problem(DLP), which is to findxgivengandgx. In fact, significant progress (by den Boer,Maurer, Wolf,BonehandLipton) has been made towards showing that over many groups the DHP is almost as hard as the DLP. There is no proof to date that either the DHP or the DLP is a hard problem, except in generic groups (by Nechaev and Shoup). A proof that either problem is hard implies thatP≠NP.[clarification needed] Many variants of the Diffie–Hellman problem have been considered. The most significant variant is thedecisional Diffie–Hellman problem(DDHP), which is to distinguishgxyfrom a random group element, giveng,gx, andgy. Sometimes the DHP is called thecomputational Diffie–Hellman problem(CDHP) to more clearly distinguish it from the DDHP. Recently groups withpairingshave become popular, and in these groups the DDHP is easy, yet the CDHP is still assumed to be hard. For less significant variants of the DHP see the references.
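A toy Diffie–Hellman exchange makes the roles of g^x, g^y, and g^(xy) concrete. The parameters below (p = 23, g = 5) are textbook-sized and completely insecure; they are only meant to show what an eavesdropper observes versus what the two parties compute.

```python
import secrets

p, g = 23, 5                        # toy modulus and generator; real systems use huge groups

x = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
y = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

gx = pow(g, x, p)                   # transmitted in the clear
gy = pow(g, y, p)                   # transmitted in the clear

assert pow(gy, x, p) == pow(gx, y, p)   # both parties obtain the shared value g^(xy) mod p

# An eavesdropper holds (p, g, gx, gy). Computing g^(xy) from these is the
# (computational) Diffie-Hellman problem; recovering x or y is the discrete log problem.
```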
https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_problem
In cryptography, a music cipher is an algorithm for the encryption of a plaintext into musical symbols or sounds. Music-based ciphers are related to, but not the same as, musical cryptograms. The latter were systems used by composers to create musical themes or motifs representing names, based on similarities between letters of the alphabet and musical note names, such as the BACH motif, whereas music ciphers were systems typically used by cryptographers to hide or encode messages for reasons of secrecy or espionage.

There are a variety of different types of music ciphers, distinguished by both the method of encryption and the musical symbols used. Regarding the former, most are simple substitution ciphers with a one-to-one correspondence between individual letters of the alphabet and a specific musical note. There are also historical music ciphers that utilize homophonic substitution (one-to-many), polyphonic substitution (many-to-one), compound cipher symbols, and/or cipher keys, all of which can make the enciphered message more difficult to break.[1] Regarding the type of symbol used for substitution, most music ciphers utilize the pitch of a musical note as the primary cipher symbol. Since there are fewer notes in a standard musical scale (e.g., seven for diatonic scales and twelve for chromatic scales) than there are letters of the alphabet, cryptographers would often combine the note name with additional characteristics (such as octave register, rhythmic duration, or clef) to create a complete set of cipher symbols to match every letter. However, there are some music ciphers which rely exclusively on rhythm instead of pitch[2] or on relative scale degree names instead of absolute pitches.[3][4][5]

Music ciphers often have both cryptographic and steganographic elements. Simply put, encryption is scrambling a message so that it is unreadable; steganography is hiding a message so that no one knows it is even there. Most practitioners of music ciphers believed that encrypting text into musical symbols gave it added security because, if intercepted, most people would not even suspect that the sheet music contained a message. However, as Francesco Lana de Terzi notes, this is usually not because the resulting cipher melody appears to be a normal piece of music, but rather because so few people know enough about music to realize it is not ("ma gl'intelligenti di musica sono poci", "but those knowledgeable in music are few").[6] A message can also be visually hidden within a page of music without actually being a music cipher. William F. Friedman embedded a secret message based on Francis Bacon's cipher into a sheet music arrangement of Stephen Foster's "My Old Kentucky Home" by visually altering the appearance of the note stems.[7] Another steganographic strategy is to musically encrypt a plaintext but hide the message-bearing notes within a larger musical score, using some visual marker that distinguishes them from the meaningless null-symbol notes (e.g., the cipher melody is only in the tenor line, or only in the notes with stems pointing down).[8][9]

In the cipher manuscript of Agostino Amadi there is a musical score on folio 41v with a pseudo-letter enciphered in it, an imaginary letter that Venice writes to Charles V. The Italian historian Paolo Preto recounts: "...The emperor sent to prince Gritti, with whom he had been familiar for a long time, a music score that looked like a madrigal....The prince summoned Willaert and the other musicians and asked them to play the melody sent to them by emperor Charles V.
When Willaert and the others carefully studied the score, they were unable to play it and confessed they could not understand it."[10] Diatonic music ciphers utilize only the seven basic note names of the diatonic scale:A, B, C, D, E, F, andG. While some systems reuse the same seven pitches for multiple letters (e.g., the pitchAcan represent the lettersA,H,O, orV),[11]most algorithms combine these pitches with other musical attributes to achieve a one-to-one mapping. Perhaps the earliest documented music cipher is found in a manuscript from 1432 called "The Sermon Booklets of Friar Nicholas Philip." Philip's cipher uses only five pitches, but each note can appear with one of four different rhythmic durations, thus providing twenty distinct symbols.[12]A similar cipher appears in a 15th-century British anonymous manuscript[13]as well as in a much later treatise byGiambattista della Porta.[14] In editions of the same treatise (De Furtivis Literarum Notis), Porta also presents a simpler cipher which is much more well-known.Porta's music ciphermaps the lettersAthroughM(omittingJandK) onto a stepwise, ascending, octave-and-a-half scale ofwhole notes(semibreves); with the remainder of the alphabet (omittingVandW) onto a descending scale ofhalf notes(minims).[15]Since alphabetic and scalar sequences are in such close step with each other, this is not a very strong method of encryption, nor are the melodies it produces very natural. Nevertheless, one finds slight variations of this same method employed throughout the 17th and 18th centuries byDaniel Schwenter(1602),[16]John Wilkins(1641),[17]Athanasius Kircher(1650),[18]Kaspar Schott(1655),[19]Philip Thicknesse(1722),[20]and even the British Foreign Office (ca. 1750).[21] Music ciphers based on thechromatic scaleprovide a larger pool of note names to match with letters of the alphabet. Applyingsharpsandflatsto the seven diatonic pitches yields twenty-one unique cipher symbols. Since this is obviously still less than a standard alphabet, chromatic ciphers also require either a reduced letter set or additional features (e.g., octave register or duration). Most chromatic ciphers were developed by composers in the 20th Century when fully chromatic music itself was more common. A notable exception is a cipher attributed to the composerMichael Haydn(brother of the more famousJoseph Haydn).[22]Haydn's algorithm is one of the most comprehensive with symbols for thirty-one letters of theGerman alphabet, punctuations (usingrest signs), parentheses (usingclefs), and word segmentation (usingbar lines). However, because many of the pitches areenharmonic equivalents, this cipher can only be transmitted as visual steganography, not via musical sound. For example, the notesC-sharpandD-flatare spelled differently, but they sound the same on a piano. As such, if one were listening to an enciphered melody, it would not be possible to hear the difference between the lettersKandL. Furthermore, the purpose of this cipher was clearly not to generate musical themes that could pass for normal music. The use of such an extreme chromatic scale produces wildly dissonant,atonalmelodies that would have been obviously atypical for Haydn's time. 
Although chromatic ciphers did not seem to be favored by cryptographers, there are several 20th-century composers who developed systems for use in their own music: Arthur Honegger,[23] Maurice Duruflé,[24] Norman Cazden,[25] Olivier Messiaen,[26] and Jacques Chailley.[27] Similar to Haydn's cipher, most likewise match the alphabet sequentially onto a chromatic scale and rely on octave register to extend to twenty-six letters. Only Messiaen's appears to have been thoughtfully constructed to meet the composer's aesthetic goals. Although he also utilized different octave registers, the letters of the alphabet are not mapped in scalar order and also have distinct rhythmic values. Messiaen called his musical alphabet the langage communicable, and used it to embed extra-musical text throughout his organ work Méditations sur le Mystère de la Sainte Trinité.

In a compound substitution cipher, each single plaintext letter is replaced by a block of multiple cipher symbols (e.g., 'a' = EN or 'b' = WJU). Similarly, there are compound music ciphers in which each letter is represented by a musical motive with two or more notes. In the case of the former, the compound symbols are intended to make frequency analysis more difficult; in the latter, the goal is to make the output more musical. For example, in 1804, Johann Bücking devised a compound cipher which generates musical compositions in the form of a minuet in the key of G Major.[28] Each letter of the alphabet is replaced by a measure of music consisting of a stylistically typical motive with three to six notes. After the plaintext is enciphered, additional pre-composed measures are appended to the beginning and end to provide a suitable musical framing. A few years earlier, Wolfgang Amadeus Mozart appears to have employed a similar technique (with much more sophisticated musical motives), although it was more likely intended as a parlor game than an actual cipher.[29][30] Since the compound symbols are musically meaningful motives, these ciphers could also be considered similar to codes.

Friedrich von Öttingen-Wallerstein proposed a different type of compound music cipher modeled after a polybius square cipher.[31] Öttingen-Wallerstein used a 5×5 grid containing the letters of the alphabet (hidden within the names of angels). Instead of indexing the rows and columns with coordinate numbers, he used the solfege syllables Ut, Re, Mi, Fa, and Sol (i.e., the first five degrees of a diatonic scale). Each letter, therefore, becomes a two-note melodic motive. This same cipher appears in treatises by Gustavus Selenus (1624)[32] and Johann Balthasar Friderici (1665)[33] (but without credit to the earlier version of Öttingen-Wallerstein). Because Öttingen-Wallerstein's cipher uses relative scale degrees, rather than fixed note names, it is effectively a polyalphabetic cipher. The same enciphered message could be transposed to a different musical key, with different note names, and still retain the same meaning. The musical key literally becomes a cipher key (or cryptovariable), because the recipient needs that additional information to correctly decipher the melody. Öttingen-Wallerstein inserted rests as cipher key markers to indicate when a new musical key was needed to decrypt the message.
Francesco Lana de Terziused a more conventional text-string cryptovariable, to add security to a very straightforward 'Porta-style' music cipher (1670).[34]Similar to aVigenère cipher, a single-letter cipher key shifts the position of the plaintext alphabet in relation to the sequence musical cipher symbols; a multi-letter key word shifts the musical scale for each letter of the text in a repeating cycle. A more elaborate cipherkey algorithm was found in an anonymous manuscript in Port-Lesney, France, most likely from the mid-18th century.[35]The so-called'Port-Lesney' music cipheruses a mechanical device known as anAlberti cipher disk[36]There are two rotating disks: the outer disk contains two concentric rings (one withtime signaturesand the other with letters of the alphabet); the inner disk has a ring of compound musical symbols, and a small inner circle with three differentclefsigns. The disks are rotated to align the letters of the alphabet with compound musical symbols to encrypt the message. When the melody is written out on a music staff, the corresponding clef and time signature are added to the beginning to indicate the cipher key (which the recipient aligns on their disk to decipher the message). This particular music cipher was apparently very popular, with a dozen variations (in French, German, and English) appearing throughout the 18th and 19th centuries.[37][38][39][40] The more recent Solfa Cipher[41]combines some of the above cryptovariable techniques. As the name suggests, Solfa Cipher uses relativesolfegedegrees (like Öttingen-Wallerstein) rather than fixed pitches, which allows the same encrypted message to be transposable to different musical keys. Since there are only seven scale degrees, these are combined with a rhythmic component to create enough unique cipher symbols. However, instead of absolute note lengths (e.g., quarter note, half note, etc.) that are employed in most music ciphers, Solfa Cipher uses relativemetricplacement. This type oftonal-metric[42]cipher makes the encrypted melody both harder to break and more musically natural (i.e. similar to common-practice tonal melodies).[43]To decrypt a cipher melody, the recipient needs to know in which musical key and with what rhythmic unit the original message was encrypted, as well as the clef sign and metric location of the first note. The cipher key could also be transmitted as a date by usingSolfalogy, a method of associating each unique date with a tone and modal scale.[44]To further confound interceptors, the transcribed sheet music could be written with a decoy clef, key signature, and time signature. The musical output, however, is a relatively normal, simple, singable tune in comparison to the disjunct, atonal melodies produced by fixed-pitch substitution ciphers.
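As a rough illustration of these ideas, the sketch below pairs a fixed letter-to-note table in the Porta style with a Vigenère-like repeating key word that shifts the table for each plaintext letter, as in Lana de Terzi's keyed variant. The reduced alphabet, scale, and note labels are illustrative choices, not a reconstruction of any historical table.

```python
ALPHABET = "ABCDEFGHILMNOPQRSTUXYZ"     # reduced 22-letter alphabet (no J, K, V, W)
# An ascending scale spanning several octaves, one note per letter of the alphabet.
NOTES = [f"{name}{octave}" for octave in (3, 4, 5, 6) for name in "CDEFGAB"][:len(ALPHABET)]

def encipher(plaintext, key):
    """Substitute each letter with a note; the repeating key word shifts the table."""
    plaintext = [c for c in plaintext.upper() if c in ALPHABET]
    key = [c for c in key.upper() if c in ALPHABET]
    melody = []
    for i, ch in enumerate(plaintext):
        shift = ALPHABET.index(key[i % len(key)])
        melody.append(NOTES[(ALPHABET.index(ch) + shift) % len(ALPHABET)])
    return melody

def decipher(melody, key):
    key = [c for c in key.upper() if c in ALPHABET]
    letters = []
    for i, note in enumerate(melody):
        shift = ALPHABET.index(key[i % len(key)])
        letters.append(ALPHABET[(NOTES.index(note) - shift) % len(ALPHABET)])
    return "".join(letters)

tune = encipher("flee at once", "SOL")
print(tune)                      # a sequence of note names
print(decipher(tune, "SOL"))     # FLEEATONCE
```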
https://en.wikipedia.org/wiki/Music_cipher#Musical_Steganagraphy
The accuracy paradox is the paradoxical finding that accuracy is not a good metric for predictive models when classifying in predictive analytics. This is because a simple model may have a high level of accuracy but be too crude to be useful. For example, if the incidence of category A is dominant, being found in 99% of cases, then predicting that every case is category A will have an accuracy of 99%. Precision and recall are better measures in such cases.[1][2] The underlying issue is that there is a class imbalance between the positive class and the negative class. Prior probabilities for these classes need to be accounted for in error analysis. Precision and recall help, but precision too can be biased by unbalanced class priors in the test sets.[citation needed] For example, suppose a city of 1 million people has ten terrorists. A profiling system yields a confusion matrix with 10 true positives, 990 false positives, 0 false negatives, and 999,000 true negatives. Even though the accuracy is (10 + 999000)/1000000 ≈ 99.9%, 990 out of the 1000 positive predictions are incorrect. The precision of 10/(10 + 990) = 1% reveals its poor performance. As the classes are so unbalanced, a better metric is the F1 score = (2 × 0.01 × 1)/(0.01 + 1) ≈ 2% (the recall being 10/(10 + 0) = 1).
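The numbers in this example are easy to reproduce; the short sketch below evaluates the four metrics directly from the stated confusion matrix.

```python
# Confusion matrix from the terrorist-profiling example above.
tp, fp = 10, 990            # flagged: 10 real terrorists, 990 innocent people
fn, tn = 0, 999_000         # not flagged

accuracy  = (tp + tn) / (tp + tn + fp + fn)           # ~0.999, yet the model is nearly useless
precision = tp / (tp + fp)                            # 0.01
recall    = tp / (tp + fn)                            # 1.0
f1 = 2 * precision * recall / (precision + recall)    # ~0.02

print(f"accuracy={accuracy:.4f} precision={precision:.4f} recall={recall:.4f} f1={f1:.4f}")
```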
https://en.wikipedia.org/wiki/Accuracy_paradox
Inelectronics, themetal–oxide–semiconductor field-effect transistor(MOSFET,MOS-FET,MOS FET, orMOS transistor) is a type offield-effect transistor(FET), most commonly fabricated by thecontrolled oxidationofsilicon. It has an insulated gate, thevoltageof which determines the conductivity of the device. This ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronicsignals. The termmetal–insulator–semiconductor field-effect transistor(MISFET) is almost synonymous withMOSFET. Another near-synonym isinsulated-gate field-effect transistor(IGFET). The main advantage of a MOSFET is that it requires almost no input current to control the load current under steady-state or low-frequency conditions, especially compared to bipolar junction transistors (BJTs). However, at high frequencies or when switching rapidly, a MOSFET may require significant current to charge and discharge its gate capacitance. In anenhancement modeMOSFET, voltage applied to the gate terminal increases the conductivity of the device. Indepletion modetransistors, voltage applied at the gate reduces the conductivity.[1] The "metal" in the name MOSFET is sometimes amisnomer, because the gate material can be a layer ofpolysilicon(polycrystalline silicon). Similarly, "oxide" in the name can also be a misnomer, as different dielectric materials are used with the aim of obtaining strong channels with smaller applied voltages. The MOSFET is by far the most common transistor indigitalcircuits, as billions may be included in amemory chipormicroprocessor. As MOSFETs can be made with either p-type or n-type semiconductors, complementary pairs of MOS transistors can be used to make switching circuits with very low power consumption, in the form ofCMOS logic. The basic principle of thefield-effect transistorwas first patented byJulius Edgar Lilienfeldin 1925.[2]In 1934, inventorOskar Heilindependently patented a similar device in Europe.[3] In the 1940s,Bell LabsscientistsWilliam Shockley,John BardeenandWalter Houser Brattainattempted to build a field-effect device, which led to their discovery of thetransistoreffect. However, the structure failed to show the anticipated effects, due to the problem ofsurface states: traps on the semiconductor surface that hold electrons immobile. With nosurface passivation, they were only able to build theBJTandthyristortransistors. In 1955,Carl Froschand Lincoln Derick accidentally grew a layer of silicon dioxide over the silicon wafer, for which they observed surface passivation effects.[4][5]By 1957, Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide transistors, in which drain and source were adjacent at the same surface.[6]They showed that silicon dioxide insulated, protected silicon wafers and prevented dopants from diffusing into the wafer.[4][7]At Bell Labs, the importance of Frosch and Derick technique and transistors was immediately realized. Results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. AtShockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, includingJean Hoerni,[8][9][10][11]who would later invent theplanar processin 1959 while atFairchild Semiconductor.[12][13]After this, J.R. Ligenza and W.G. 
Spitzer studied the mechanism of thermally grown oxides, fabricated a high quality Si/SiO2stack and published their results in 1960.[14][15][16] Following this research,Mohamed AtallaandDawon Kahngproposed a silicon MOS transistor in 1959[17]and successfully demonstrated a working MOS device with their Bell Labs team in 1960.[18][19]The first MOS transistor at Bell Labs was about 100 times slower than contemporarybipolar transistorsand was initially seen as inferior. Nevertheless, Kahng pointed out several advantages of the device, notably ease of fabrication and its application inintegrated circuits.[20] Usually thesemiconductorof choice issilicon. Some chip manufacturers, most notablyIBMandIntel, use analloyof silicon and germanium (SiGe) in MOSFET channels.[citation needed]Many semiconductors with better electrical properties than silicon, such asgallium arsenide, do not form good semiconductor-to-insulator interfaces, and thus are not suitable for MOSFETs. Research continues on creating insulators with acceptable electrical characteristics on other semiconductor materials. To overcome the increase in power consumption due to gate current leakage, ahigh-κ dielectricis used instead of silicon dioxide for the gate insulator, while polysilicon is replaced by metal gates (e.g.Intel, 2009).[21] The gate is separated from the channel by a thin insulating layer, traditionally of silicon dioxide and later ofsilicon oxynitride. Some companies use a high-κ dielectric and metal gate combination in the45 nanometernode. When a voltage is applied between the gate and the source, the electric field generated penetrates through the oxide and creates aninversion layerorchannelat the semiconductor-insulator interface. The inversion layer provides a channel through which current can pass between source and drain terminals. Varying the voltage between the gate and body modulates theconductivityof this layer and thereby controls the current flow between drain and source. This is known as enhancement mode. The traditional metal–oxide–semiconductor (MOS) structure is obtained by growing a layer ofsilicon dioxide(SiO2) on top of a silicon substrate, commonly bythermal oxidationand depositing a layer of metal orpolycrystalline silicon(the latter is commonly used). As silicon dioxide is adielectricmaterial, its structure is equivalent to a planarcapacitor, with one of the electrodes replaced by a semiconductor. When a voltage is applied across a MOS structure, it modifies the distribution of charges in the semiconductor. If we consider a p-type semiconductor (withNAthe density ofacceptors,pthe density of holes;p = NAin neutral bulk), a positive voltage,VG, from gate to body (see figure) creates adepletion layerby forcing the positively charged holes away from the gate-insulator/semiconductor interface, leaving exposed a carrier-free region of immobile, negatively charged acceptor ions (seedoping). IfVGis high enough, a high concentration of negative charge carriers forms in aninversion layerlocated in a thin layer next to the interface between the semiconductor and the insulator. Conventionally, the gate voltage at which the volume density of electrons in the inversion layer is the same as the volume density of holes in the body is called thethreshold voltage. When the voltage between transistor gate and source (VG) exceeds the threshold voltage (Vth), the difference is known asoverdrive voltage. 
This structure with p-type body is the basis of the n-type MOSFET, which requires the addition of n-type source and drain regions. The MOS capacitor structure is the heart of the MOSFET. Consider a MOS capacitor where the silicon base is of p-type. If a positive voltage is applied at the gate, holes which are at the surface of the p-type substrate will be repelled by the electric field generated by the voltage applied. At first, the holes will simply be repelled and what will remain on the surface will be immobile (negative) atoms of the acceptor type, which creates a depletion region on the surface. A hole is created by an acceptor atom, e.g., boron, which has one less electron than a silicon atom. Holes are not actually repelled, being non-entities; electrons are attracted by the positive field, and fill these holes. This creates a depletion region where no charge carriers exist because the electron is now fixed onto the atom and immobile. As the voltage at the gate increases, there will be a point at which the surface above the depletion region will be converted from p-type into n-type, as electrons from the bulk area will start to get attracted by the larger electric field. This is known asinversion. The threshold voltage at which this conversion happens is one of the most important parameters in a MOSFET. In the case of a p-type MOSFET, bulk inversion happens when the intrinsic energy level at the surface becomes smaller than theFermi levelat the surface. This can be seen on a band diagram. The Fermi level defines the type of semiconductor in discussion. If the Fermi level is equal to the Intrinsic level, the semiconductor is of intrinsic, or pure type. If the Fermi level lies closer to the conduction band (valence band) then the semiconductor type will be of n-type (p-type). When the gate voltage is increased in a positive sense(for the given example),[clarify]this will shift the intrinsic energy level band so that it will curve downwards towards the valence band. If the Fermi level lies closer to the valence band (for p-type), there will be a point when the Intrinsic level will start to cross the Fermi level and when the voltage reaches the threshold voltage, the intrinsic level does cross the Fermi level, and that is what is known as inversion. At that point, the surface of the semiconductor is inverted from p-type into n-type. If the Fermi level lies above the intrinsic level, the semiconductor is of n-type, therefore at inversion, when the intrinsic level reaches and crosses the Fermi level (which lies closer to the valence band), the semiconductor type changes at the surface as dictated by the relative positions of the Fermi and Intrinsic energy levels. A MOSFET is based on the modulation of charge concentration by a MOS capacitance between abodyelectrode and agateelectrode located above the body and insulated from all other device regions by a gate dielectric layer. If dielectrics other than an oxide are employed, the device may be referred to as a metal-insulator-semiconductor FET (MISFET). Compared to the MOS capacitor, the MOSFET includes two additional terminals (sourceanddrain), each connected to individual highly doped regions that are separated by the body region. These regions can be either p or n type, but they must both be of the same type, and of opposite type to the body region. The source and drain (unlike the body) are highly doped as signified by a "+" sign after the type of doping. 
If the MOSFET is an n-channel or nMOS FET, then the source and drain aren+regions and the body is apregion. If the MOSFET is a p-channel or pMOS FET, then the source and drain arep+regions and the body is anregion. The source is so named because it is the source of the charge carriers (electrons for n-channel, holes for p-channel) that flow through the channel; similarly, the drain is where the charge carriers leave the channel. The occupancy of the energy bands in a semiconductor is set by the position of theFermi levelrelative to the semiconductor energy-band edges. With sufficient gate voltage, the valence band edge is driven far from the Fermi level, and holes from the body are driven away from the gate. At larger gate bias still, near the semiconductor surface the conduction band edge is brought close to the Fermi level, populating the surface with electrons in aninversion layerorn-channelat the interface between the p region and the oxide. This conducting channel extends between the source and the drain, and current is conducted through it when a voltage is applied between the two electrodes. Increasing the voltage on the gate leads to a higher electron density in the inversion layer and therefore increases the current flow between the source and drain. For gate voltages below the threshold value, the channel is lightly populated, and only a very smallsubthreshold leakagecurrent can flow between the source and the drain. When a negative gate-source voltage (positive source-gate) is applied, it creates ap-channelat the surface of the n region, analogous to the n-channel case, but with opposite polarities of charges and voltages. When a voltage less negative than the threshold value (a negative voltage for the p-channel) is applied between gate and source, the channel disappears and only a very small subthreshold current can flow between the source and the drain. The device may comprise asilicon on insulatordevice in which a buried oxide is formed below a thin semiconductor layer. If the channel region between the gate dielectric and the buried oxide region is very thin, the channel is referred to as an ultrathin channel region with the source and drain regions formed on either side in or above the thin semiconductor layer. Other semiconductor materials may be employed. When the source and drain regions are formed above the channel in whole or in part, they are referred to as raised source/drain regions. The operation of a MOSFET can be separated into three different modes, depending on the voltages at the terminals. In the following discussion, a simplified algebraic model is used.[24]Modern MOSFET characteristics are more complex than the algebraic model presented here.[25] For anenhancement-mode, n-channel MOSFET, the three operational modes are: WhenVGS<Vth: whereVGS{\displaystyle V_{\text{GS}}}is gate-to-source bias andVth{\displaystyle V_{\text{th}}}is thethreshold voltageof the device. According to the basic threshold model, the transistor is turned off, and there is no conduction between drain and source. A more accurate model considers the effect of thermal energy on theFermi–Dirac distributionof electron energies which allow some of the more energetic electrons at the source to enter the channel and flow to the drain. This results in a subthreshold current that is an exponential function of gate-source voltage. 
While the current between drain and source should ideally be zero when the transistor is being used as a turned-off switch, there is a weak-inversion current, sometimes called subthreshold leakage. In weak inversion where the source is tied to bulk, the current varies exponentially withVGS{\displaystyle V_{\text{GS}}}as given approximately by:[26][27] ID≈ID0eVGS−VthnVT,{\displaystyle I_{\text{D}}\approx I_{\text{D0}}e^{\frac {V_{\text{GS}}-V_{\text{th}}}{nV_{\text{T}}}},} whereID0{\displaystyle I_{\text{D0}}}= current atVGS=Vth{\displaystyle V_{\text{GS}}=V_{\text{th}}}, the thermal voltageVT=kT/q{\displaystyle V_{\text{T}}=kT/q}and the slope factornis given by: n=1+CdepCox,{\displaystyle n=1+{\frac {C_{\text{dep}}}{C_{\text{ox}}}},} withCdep{\displaystyle C_{\text{dep}}}= capacitance of the depletion layer andCox{\displaystyle C_{\text{ox}}}= capacitance of the oxide layer. This equation is generally used, but is only an adequate approximation for the source tied to the bulk. For the source not tied to the bulk, the subthreshold equation for drain current in saturation is[28][29] ID≈ID0eVG−VthnVTe−VSVT.{\displaystyle I_{\text{D}}\approx I_{\text{D0}}e^{\frac {V_{\text{G}}-V_{\text{th}}}{nV_{\text{T}}}}e^{-{\frac {V_{\text{S}}}{V_{\text{T}}}}}.} In a long-channel device, there is no drain voltage dependence of the current onceVDS≫VT{\displaystyle V_{\text{DS}}\gg V_{\text{T}}}, but as channel length is reduceddrain-induced barrier loweringintroduces drain voltage dependence that depends in a complex way upon the device geometry (for example, the channel doping, the junction doping and so on). Frequently, threshold voltageVthfor this mode is defined as the gate voltage at which a selected value of currentID0occurs, for example,ID0= 1μA, which may not be the sameVth-value used in the equations for the following modes. Some micropower analog circuits are designed to take advantage of subthreshold conduction.[30][31][32]By working in the weak-inversion region, the MOSFETs in these circuits deliver the highest possible transconductance-to-current ratio, namely:gm/ID=1/(nVT){\displaystyle g_{m}/I_{\text{D}}=1/\left(nV_{\text{T}}\right)}, almost that of a bipolar transistor.[33] The subthresholdI–V curvedepends exponentially upon threshold voltage, introducing a strong dependence on any manufacturing variation that affects threshold voltage; for example: variations in oxide thickness, junction depth, or body doping that change the degree of drain-induced barrier lowering. The resulting sensitivity to fabricational variations complicates optimization for leakage and performance.[34][35] WhenVGS>VthandVDS<VGS−Vth: The transistor is turned on, and a channel has been created which allows current between the drain and the source. The MOSFET operates like a resistor, controlled by the gate voltage relative to both the source and drain voltages. The current from drain to source is modeled as: ID=μnCoxWL((VGS−Vth)VDS−VDS22){\displaystyle I_{\text{D}}=\mu _{n}C_{\text{ox}}{\frac {W}{L}}\left(\left(V_{\text{GS}}-V_{\rm {th}}\right)V_{\text{DS}}-{\frac {{V_{\text{DS}}}^{2}}{2}}\right)} whereμn{\displaystyle \mu _{n}}is the charge-carrier effective mobility,W{\displaystyle W}is the gate width,L{\displaystyle L}is the gate length andCox{\displaystyle C_{\text{ox}}}is the gate oxide capacitance per unit area. 
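The simplified expressions above (together with the saturation-region form with channel-length modulation that the following passage introduces) can be collected into one piecewise model. The sketch below is illustrative only: all parameter values are placeholders rather than device data, and the subthreshold term uses the source-tied-to-bulk approximation.

```python
from math import exp

def drain_current(vgs, vds, vth=0.7, k=2e-4, w_over_l=10.0, lam=0.02,
                  i_d0=1e-7, n=1.5, v_t=0.0259):
    """Simplified n-channel enhancement MOSFET model (placeholder parameters).

    k = mu_n * C_ox [A/V^2], lam = channel-length modulation parameter,
    i_d0 = current at VGS = Vth, n = subthreshold slope factor,
    v_t = thermal voltage kT/q at about 300 K.
    """
    vov = vgs - vth                                    # overdrive voltage
    if vov <= 0:                                       # subthreshold (weak inversion)
        return i_d0 * exp(vov / (n * v_t))
    if vds < vov:                                      # triode / linear region
        return k * w_over_l * (vov * vds - vds ** 2 / 2)
    return 0.5 * k * w_over_l * vov ** 2 * (1 + lam * vds)   # saturation

for vgs, vds in [(0.4, 1.0), (1.5, 0.2), (1.5, 2.0)]:
    print(vgs, vds, drain_current(vgs, vds))
```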
The transition from the exponential subthreshold region to the triode region is not as sharp as the equations suggest.[36][37][verification needed] WhenVGS> VthandVDS≥ (VGS– Vth): The switch is turned on, and a channel has been created, which allows current between the drain and source. Since the drain voltage is higher than the source voltage, the electrons spread out, and conduction is not through a narrow channel but through a broader, two- or three-dimensional current distribution extending away from the interface and deeper in the substrate. The onset of this region is also known aspinch-offto indicate the lack of channel region near the drain. Although the channel does not extend the full length of the device, the electric field between the drain and the channel is very high, and conduction continues. The drain current is now weakly dependent upon drain voltage and controlled primarily by the gate-source voltage, and modeled approximately as: ID=μnCox2WL[VGS−Vth]2[1+λVDS].{\displaystyle I_{\text{D}}={\frac {\mu _{n}C_{\text{ox}}}{2}}{\frac {W}{L}}\left[V_{\text{GS}}-V_{\text{th}}\right]^{2}\left[1+\lambda V_{\text{DS}}\right].} The additional factor involving λ, the channel-length modulation parameter, models current dependence on drain voltage due to theEarly effect, orchannel length modulation. According to this equation, a key design parameter, the MOSFET transconductance is: gm=∂ID∂VGS=2IDVGS−Vth=2IDVov,{\displaystyle g_{m}={\frac {\partial I_{D}}{\partial V_{\text{GS}}}}={\frac {2I_{\text{D}}}{V_{\text{GS}}-V_{\text{th}}}}={\frac {2I_{\text{D}}}{V_{\text{ov}}}},} where the combinationVov=VGS−Vthis called theoverdrive voltage,[38]and whereVDSsat=VGS−Vthaccounts for a small discontinuity inID{\displaystyle I_{\text{D}}}which would otherwise appear at the transition between the triode and saturation regions. Another key design parameter is the MOSFET output resistanceroutgiven by: rout=1λID{\displaystyle r_{\text{out}}={\frac {1}{\lambda I_{\text{D}}}}}. routis the inverse ofgDSwheregDS=∂IDS∂VDS{\displaystyle g_{\text{DS}}={\frac {\partial I_{\text{DS}}}{\partial V_{\text{DS}}}}}.IDis the expression in saturation region. If λ is taken as zero, an infinite output resistance of the device results that leads to unrealistic circuit predictions, particularly in analog circuits. As the channel length becomes very short, these equations become quite inaccurate. New physical effects arise. For example, carrier transport in the active mode may become limited byvelocity saturation. When velocity saturation dominates, the saturation drain current is more nearly linear than quadratic inVGS. At even shorter lengths, carriers transport with near zero scattering, known as quasi-ballistic transport. In the ballistic regime, the carriers travel at an injection velocity that may exceed the saturation velocity and approaches theFermi velocityat high inversion charge density. In addition, drain-induced barrier lowering increases off-state (cutoff) current and requires an increase in threshold voltage to compensate, which in turn reduces the saturation current.[39][40][verification needed] The occupancy of the energy bands in a semiconductor is set by the position of theFermi levelrelative to the semiconductor energy-band edges. Application of a source-to-substrate reverse bias of the source-body pn-junction introduces a split between the Fermi levels for electrons and holes, moving the Fermi level for the channel further from the band edge, lowering the occupancy of the channel. 
The effect is to increase the gate voltage necessary to establish the channel, as seen in the figure. This change in channel strength by application of reverse bias is called the "body effect." Using an nMOS example, the gate-to-body biasVGBpositions the conduction-band energy levels, while the source-to-body bias VSBpositions the electron Fermi level near the interface, deciding occupancy of these levels near the interface, and hence the strength of the inversion layer or channel. The body effect upon the channel can be described using a modification of the threshold voltage, approximated by the following equation: whereVTBis the threshold voltage with substrate bias present, andVT0is the zero-VSBvalue of threshold voltage,γ{\displaystyle \gamma }is the body effect parameter, and 2φBis the approximate potential drop between surface and bulk across the depletion layer whenVSB= 0and gate bias is sufficient to ensure that a channel is present.[41]As this equation shows, a reverse biasVSB> 0causes an increase in threshold voltageVTBand therefore demands a larger gate voltage before the channel populates. The body can be operated as a second gate, and is sometimes referred to as the "back gate"; the body effect is sometimes called the "back-gate effect".[42] A variety of symbols are used for the MOSFET. The basic design is generally a line for the channel with the source and drain leaving it at right angles and then bending back at right angles into the same direction as the channel. Sometimes three line segments are used forenhancement modeand a solid line for depletion mode (seedepletion and enhancement modes). Another line is drawn parallel to the channel for the gate. Thebulkorbodyconnection, if shown, is shown connected to the back of the channel with an arrow indicating pMOS or nMOS. Arrows always point from P to N, so an NMOS (N-channel in P-well or P-substrate) has the arrow pointing in (from the bulk to the channel). If the bulk is connected to the source (as is generally the case with discrete devices) it is sometimes angled to meet the source leaving the transistor. If the bulk is not shown (as is often the case in IC design as they are generally common bulk) an inversion symbol is sometimes used to indicate PMOS, alternatively an arrow on the source may be used in the same way as for bipolar transistors (out for nMOS, in for pMOS). Comparison of enhancement-mode and depletion-mode MOSFET symbols, along withJFETsymbols. The orientation of the symbols, (most significantly the position of source relative to drain) is such that more positive voltages appear higher on the page than less positive voltages, implyingconventional currentflowing "down" the page:[43][44][45] In schematics where G, S, D are not labeled, the detailed features of the symbol indicate which terminal is source and which is drain. For enhancement-mode and depletion-mode MOSFET symbols (in columns two and five), the source terminal is the one connected to the triangle. Additionally, in this diagram, the gate is shown as an "L" shape, whose input leg is closer to S than D, also indicating which is which. However, these symbols are often drawn with a T-shaped gate (as elsewhere on this page), so it is the triangle which must be relied upon to indicate the source terminal. For the symbols in which the bulk, or body, terminal is shown, it is here shown internally connected to the source (i.e., the black triangles in the diagrams in columns 2 and 5). 
This is a typical configuration, but by no means the only important one. In general, the MOSFET is a four-terminal device, and in integrated circuits many of the MOSFETs share a body connection, not necessarily connected to the source terminals of all the transistors.

Digital integrated circuits such as microprocessors and memory devices contain thousands to billions of integrated MOSFETs on each device, providing the basic switching functions required to implement logic gates and data storage. Discrete devices are widely used in applications such as switch mode power supplies, variable-frequency drives and other power electronics applications where each device may be switching thousands of watts. Radio-frequency amplifiers up to the UHF spectrum use MOSFET transistors as analog signal and power amplifiers. Radio systems also use MOSFETs as oscillators, or as mixers to convert frequencies. MOSFET devices are also applied in audio-frequency power amplifiers for public address systems, sound reinforcement, and home and automobile sound systems.[citation needed]

Following the development of clean rooms to reduce contamination to levels never before thought necessary, and of photolithography[46] and the planar process to allow circuits to be made in very few steps, the Si–SiO2 system possessed the technical attractions of low cost of production (on a per circuit basis) and ease of integration. Largely because of these two factors, the MOSFET has become the most widely used type of transistor, according to the Institution of Engineering and Technology (IET).[citation needed]

General Microelectronics introduced the first commercial MOS integrated circuit in 1964.[47] Additionally, the method of coupling two complementary MOSFETs (P-channel and N-channel) into one high/low switch, known as CMOS, means that digital circuits dissipate very little power except when actually switched. The earliest microprocessors, starting in 1970, were all MOS microprocessors; i.e., fabricated entirely from PMOS logic or fabricated entirely from NMOS logic. In the 1970s, MOS microprocessors were often contrasted with CMOS microprocessors and bipolar bit-slice processors.[48]

The MOSFET is used in digital complementary metal–oxide–semiconductor (CMOS) logic,[49] which uses p- and n-channel MOSFETs as building blocks. Overheating is a major concern in integrated circuits since ever more transistors are packed into ever smaller chips. CMOS logic reduces power consumption because no current flows (ideally), and thus no power is consumed, except when the inputs to logic gates are being switched. CMOS accomplishes this current reduction by complementing every nMOSFET with a pMOSFET and connecting both gates and both drains together. A high voltage on the gates will cause the nMOSFET to conduct and the pMOSFET not to conduct, and a low voltage on the gates causes the reverse. During the switching time, as the voltage goes from one state to another, both MOSFETs will conduct briefly. This arrangement greatly reduces power consumption and heat generation.

The growth of digital technologies like the microprocessor has provided the motivation to advance MOSFET technology faster than any other type of silicon-based transistor.[50] A big advantage of MOSFETs for digital switching is that the oxide layer between the gate and the channel prevents DC current from flowing through the gate, further reducing power consumption and giving a very large input impedance.
The insulating oxide between the gate and channel effectively isolates a MOSFET in one logic stage from earlier and later stages, which allows a single MOSFET output to drive a considerable number of MOSFET inputs. Bipolar transistor-based logic (such as TTL) does not have such a high fanout capacity. This isolation also makes it easier for designers to ignore, to some extent, loading effects between logic stages. That extent is defined by the operating frequency: as frequencies increase, the input impedance of the MOSFETs decreases. The MOSFET's advantages in digital circuits do not translate into supremacy in all analog circuits. The two types of circuit draw upon different features of transistor behavior. Digital circuits switch, spending most of their time either fully on or fully off. The transition from one to the other is only of concern with regard to speed and the charge required. Analog circuits depend on operation in the transition region, where small changes to Vgs can modulate the output (drain) current. The JFET and bipolar junction transistor (BJT) are preferred for accurate matching (of adjacent devices in integrated circuits), higher transconductance and certain temperature characteristics which simplify keeping performance predictable as circuit temperature varies. Nevertheless, MOSFETs are widely used in many types of analog circuits because of their own advantages (zero gate current, high and adjustable output impedance, and improved robustness versus BJTs, which can be permanently degraded by even lightly breaking down the emitter–base junction).[vague] The characteristics and performance of many analog circuits can be scaled up or down by changing the sizes (length and width) of the MOSFETs used; bipolar transistors, by comparison, follow a different scaling law. MOSFETs' ideal characteristics regarding gate current (zero) and drain–source offset voltage (zero) also make them nearly ideal switch elements, and also make switched-capacitor analog circuits practical. In their linear region, MOSFETs can be used as precision resistors, which can have a much higher controlled resistance than BJTs. In high-power circuits, MOSFETs sometimes have the advantage of not suffering from thermal runaway as BJTs do.[dubious–discuss] These properties mean that complete analog circuits can be made on a silicon chip in a much smaller space and with simpler fabrication techniques. MOSFETs are also well suited to switching inductive loads because of their tolerance to inductive kickback. Some ICs combine analog and digital MOSFET circuitry on a single mixed-signal integrated circuit, making the needed board space even smaller. This creates a need to isolate the analog circuits from the digital circuits on a chip level, leading to the use of isolation rings and silicon on insulator (SOI). Since MOSFETs require more space to handle a given amount of power than a BJT, fabrication processes can incorporate BJTs and MOSFETs into a single device. Mixed-transistor devices are called bi-FETs (bipolar FETs) if they contain just one BJT-FET, and BiCMOS (bipolar-CMOS) if they contain complementary BJT-FETs. Such devices have the advantages of both insulated gates and higher current density. MOSFET analog switches use the MOSFET to pass analog signals when on, and to present a high impedance when off. Signals flow in both directions across a MOSFET switch. In this application, the drain and source of a MOSFET exchange places depending on the relative voltages of the source and drain electrodes.
The source is the more negative side for an N-MOS or the more positive side for a P-MOS. All of these switches are limited on what signals they can pass or stop by their gate-source, gate-drain and source–drain voltages; exceeding the voltage, current, or power limits will potentially damage the switch. SeePower MOSFETsubsection down below. This analog switch uses a four-terminal simple MOSFET of either P or N type. In the case of an n-type switch, the body is connected to the most negative supply (usually GND) and the gate is used as the switch control. Whenever the gate voltage exceeds the source voltage by at least a threshold voltage, the MOSFET conducts. The higher the voltage, the more the MOSFET can conduct. An N-MOS switch passes all voltages less thanVgate−Vtn. When the switch is conducting, it typically operates in the linear (or ohmic) mode of operation, since the source and drain voltages will typically be nearly equal. In the case of a P-MOS, the body is connected to the most positive voltage, and the gate is brought to a lower potential to turn the switch on. The P-MOS switch passes all voltages higher thanVgate−Vtp(threshold voltageVtpis negative in the case of enhancement-mode P-MOS). This "complementary" or CMOS type of switch uses one P-MOS and one N-MOS FET to counteract the limitations of the single-type switch. The FETs have their drains and sources connected in parallel, the body of the P-MOS is connected to the high potential (VDD) and the body of the N-MOS is connected to the low potential (gnd). To turn the switch on, the gate of the P-MOS is driven to the low potential and the gate of the N-MOS is driven to the high potential. For voltages betweenVDD−Vtnandgnd−Vtp, both FETs conduct the signal; for voltages less thangnd−Vtp, the N-MOS conducts alone; and for voltages greater thanVDD−Vtn, the P-MOS conducts alone. The voltage limits for this switch are the gate-source, gate-drain and source-drain voltage limits for both FETs. Also, the P-MOS is typically two to three times wider than the N-MOS, so the switch will be balanced for speed in the two directions. Tri-state circuitrysometimes incorporates a CMOS MOSFET switch on its output to provide for a low-ohmic, full-range output when on, and a high-ohmic, mid-level signal when off. The primary criterion for the gate material is that it is a goodconductor. Highly dopedpolycrystalline siliconis an acceptable but certainly not ideal conductor, and also suffers from some more technical deficiencies in its role as the standard gate material. Nevertheless, there are several reasons favoring use of polysilicon: While polysilicon gates have been the de facto standard for the last twenty years, they do have some disadvantages which have led to their likely future replacement by metal gates. These disadvantages include: Present high performance CPUs use metal gate technology, together withhigh-κ dielectrics, a combination known ashigh-κ, metal gate(HKMG). The disadvantages of metal gates are overcome by a few techniques:[51] As devices are made smaller, insulating layers are made thinner, often through steps ofthermal oxidationor localised oxidation of silicon (LOCOS). For nano-scaled devices, at some pointtunnelingof carriers through the insulator from the channel to the gate electrode takes place. To reduce the resultingleakagecurrent, the insulator can be made thinner by choosing a material with a higher dielectric constant. 
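The pass-voltage ranges described above can be summarised in a few lines of code. The sketch below is illustrative only: `transmission_gate_state` is a hypothetical helper, and the supply and threshold values are example numbers (with Vtp negative for an enhancement-mode P-MOS), not values taken from the text.

```python
def transmission_gate_state(v_signal, vdd=3.3, vtn=0.6, vtp=-0.6):
    """Report which FETs of an on-state CMOS transmission gate pass v_signal.

    Assumes the N-MOS gate is tied to vdd and the P-MOS gate to 0 V (gnd),
    i.e. the switch is turned on. vtp is negative for an enhancement P-MOS.
    """
    nmos_on = v_signal < vdd - vtn   # N-MOS passes voltages below Vgate - Vtn
    pmos_on = v_signal > 0.0 - vtp   # P-MOS passes voltages above Vgate - Vtp
    if nmos_on and pmos_on:
        return "both conduct"
    if nmos_on:
        return "N-MOS only"
    if pmos_on:
        return "P-MOS only"
    return "neither (outside supply range)"

for v in (0.2, 1.65, 3.1):
    print(v, "V ->", transmission_gate_state(v))
```

With these example numbers the mid-range input (1.65 V) is passed by both devices, the low input by the N-MOS alone and the high input by the P-MOS alone, matching the three regions described in the text.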
To see how thickness and dielectric constant are related, note thatGauss's lawconnects field to charge as: withQ= charge density, κ = dielectric constant, ε0= permittivity of empty space andE= electric field. From this law it appears the same charge can be maintained in the channel at a lower field provided κ is increased. The voltage on the gate is given by: withVG= gate voltage,Vch= voltage at channel side of insulator, andtins= insulator thickness. This equation shows the gate voltage will not increase when the insulator thickness increases, provided κ increases to keeptins/ κ = constant (see the article on high-κ dielectrics for more detail, and the section in this article ongate-oxide leakage). The insulator in a MOSFET is a dielectric which can in any event be silicon oxide, formed byLOCOSbut many other dielectric materials are employed. The generic term for the dielectric is gate dielectric since the dielectric lies directly below the gate electrode and above the channel of the MOSFET. The source-to-body and drain-to-bodyjunctionsare the object of much attention because of three major factors: their design affects thecurrent-voltage (I-V) characteristicsof the device, lowering output resistance, and also the speed of the device through the loading effect of the junctioncapacitances, and finally, the component of stand-by power dissipation due to junction leakage. The drain induced barrier lowering of the threshold voltage andchannel length modulationeffects uponI-Vcurves are reduced by using shallow junction extensions. In addition,halodoping can be used, that is, the addition of very thin heavily doped regions of the same doping type as the body tight against the junction walls to limit the extent ofdepletion regions.[52] The capacitive effects are limited by using raised source and drain geometries that make most of the contact area border thick dielectric instead of silicon.[53] These various features of junction design are shown (withartistic license) in the figure. Over the past decades, the MOSFET (as used for digital logic) has continually been scaled down in size; typical MOSFET channel lengths were once severalmicrometres, but modern integrated circuits are incorporating MOSFETs with channel lengths of tens of nanometers.Robert Dennard's work onscaling theorywas pivotal in recognising that this ongoing reduction was possible. Intel began production of a process featuring a 32 nm feature size (with the channel being even shorter) in late 2009. The semiconductor industry maintains a "roadmap", theITRS,[54]which sets the pace for MOSFET development. Historically, the difficulties with decreasing the size of the MOSFET have been associated with the semiconductor device fabrication process, the need to use very low voltages, and with poorer electrical performance necessitating circuit redesign and innovation (small MOSFETs exhibit higher leakage currents and lower output resistance). Smaller MOSFETs are desirable for several reasons. The main reason to make transistors smaller is to pack more and more devices in a given chip area. This results in a chip with the same functionality in a smaller area, or chips with more functionality in the same area. Since fabrication costs for asemiconductor waferare relatively fixed, the cost per integrated circuits is mainly related to the number of chips that can be produced per wafer. Hence, smaller ICs allow more chips per wafer, reducing the price per chip. 
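The two display equations mentioned above (Gauss's law linking field to charge, and the gate-voltage expression) are missing from the extracted text. The standard parallel-plate relations using the variables the text defines, ignoring flat-band and work-function terms, are offered here as a reconstruction rather than a verbatim restoration:

```latex
Q = \kappa \, \varepsilon_0 \, E ,
\qquad
V_G = V_{ch} + E \, t_{ins} \;=\; V_{ch} + \frac{Q \, t_{ins}}{\kappa \, \varepsilon_0}
```

Eliminating E shows that, for a given channel charge Q, the gate voltage depends on the dielectric only through the ratio t_ins / κ, which is why raising κ in proportion to the thickness leaves the required gate voltage unchanged, as the text states.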
In fact, over the past 30 years the number of transistors per chip has been doubled every 2–3 years once a new technology node is introduced. For example, the number of MOSFETs in a microprocessor fabricated in a45 nmtechnology can well be twice as many as in a65 nmchip. This doubling of transistor density was first observed byGordon Moorein 1965 and is commonly referred to asMoore's law.[55]It is also expected that smaller transistors switch faster. For example, one approach to size reduction is a scaling of the MOSFET that requires all device dimensions to reduce proportionally. The main device dimensions are the channel length, channel width, and oxide thickness. When they are scaled down by equal factors, the transistor channel resistance does not change, while gate capacitance is cut by that factor. Hence, theRC delayof the transistor scales with a similar factor. While this has been traditionally the case for the older technologies, for the state-of-the-art MOSFETs reduction of the transistor dimensions does not necessarily translate to higher chip speed because the delay due to interconnections is more significant. Producing MOSFETs with channel lengths much smaller than amicrometreis a challenge, and the difficulties of semiconductor device fabrication are always a limiting factor in advancing integrated circuit technology. Though processes such asALDhave improved fabrication for small components, the small size of the MOSFET (less than a few tens of nanometers) has created operational problems: As MOSFET geometries shrink, the voltage that can be applied to the gate must be reduced to maintain reliability. To maintain performance, the threshold voltage of the MOSFET has to be reduced as well. As threshold voltage is reduced, the transistor cannot be switched from complete turn-off to complete turn-on with the limited voltage swing available; the circuit design is a compromise between strong current in theoncase and low current in theoffcase, and the application determines whether to favor one over the other. Subthreshold leakage (including subthreshold conduction, gate-oxide leakage and reverse-biased junction leakage), which was ignored in the past, now can consume upwards of half of the total power consumption of modern high-performance VLSI chips.[56][57] The gate oxide, which serves as insulator between the gate and channel, should be made as thin as possible to increase the channel conductivity and performance when the transistor is on and to reduce subthreshold leakage when the transistor is off. However, with current gate oxides with a thickness of around 1.2nm(which in silicon is ~5atomsthick) thequantum mechanicalphenomenon ofelectron tunnelingoccurs between the gate and channel, leading to increased power consumption.Silicon dioxidehas traditionally been used as the gate insulator. Silicon dioxide however has a modest dielectric constant. Increasing the dielectric constant of the gate dielectric allows a thicker layer while maintaining a high capacitance (capacitance is proportional to dielectric constant and inversely proportional to dielectric thickness). All else equal, a higher dielectric thickness reduces thequantum tunnelingcurrent through the dielectric between the gate and the channel. Insulators that have a largerdielectric constantthan silicon dioxide (referred to ashigh-κ dielectrics), such as group IVb metal silicates e.g.hafniumandzirconiumsilicates and oxides are being used to reduce the gate leakage from the 45 nanometer technology node onwards. 
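The scaling argument in this passage can be made concrete with a little arithmetic. The sketch below uses purely illustrative quantities (arbitrary units, and 65 nm → 45 nm only as an example node pair) to show why ideal constant-field scaling leaves channel resistance roughly unchanged while cutting gate capacitance, and why one such shrink roughly doubles transistor density.

```python
# Illustrative constant-field (Dennard-style) scaling arithmetic, not a device model.
k = 65 / 45                       # example linear shrink factor between two nodes

L, W, t_ox, V = 1.0, 1.0, 1.0, 1.0                    # un-scaled reference values
Ls, Ws, t_oxs, Vs = L / k, W / k, t_ox / k, V / k     # every dimension and voltage / k

def r_channel(L, W, t_ox, V):
    # R ~ L / (W * C_ox * V_overdrive), with C_ox ~ 1 / t_ox
    return L * t_ox / (W * V)

def c_gate(L, W, t_ox):
    # C ~ C_ox * W * L ~ W * L / t_ox
    return W * L / t_ox

print("R ratio      :", r_channel(Ls, Ws, t_oxs, Vs) / r_channel(L, W, t_ox, V))  # ~1.0
print("C ratio      :", c_gate(Ls, Ws, t_oxs) / c_gate(L, W, t_ox))               # ~1/k
print("density gain :", k ** 2)   # ~2.1x transistors per area, one Moore's-law step
```

The RC delay therefore shrinks by roughly the same factor as the capacitance, while the area per transistor falls as the square of the linear shrink, consistent with the doubling of transistor counts described above.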
On the other hand, the barrier height of the new gate insulator is an important consideration; the difference inconduction bandenergy between the semiconductor and the dielectric (and the corresponding difference invalence bandenergy) also affects leakage current level. For the traditional gate oxide, silicon dioxide, the former barrier is approximately 8eV. For many alternative dielectrics the value is significantly lower, tending to increase the tunneling current, somewhat negating the advantage of higher dielectric constant. The maximum gate-source voltage is determined by the strength of the electric field able to be sustained by the gate dielectric before significant leakage occurs. As the insulating dielectric is made thinner, the electric field strength within it goes up for a fixed voltage. This necessitates using lower voltages with the thinner dielectric. To make devices smaller, junction design has become more complex, leading to higherdopinglevels, shallower junctions, "halo" doping and so forth,[58][59]all to decrease drain-induced barrier lowering (see the section onjunction design). To keep these complex junctions in place, the annealing steps formerly used to remove damage and electrically active defects must be curtailed[60]increasing junction leakage. Heavier doping is also associated with thinner depletion layers and more recombination centers that result in increased leakage current, even without lattice damage. Drain-induced barrier lowering(DIBL) andVTroll off: Because of theshort-channel effect, channel formation is not entirely done by the gate, but now the drain and source also affect the channel formation. As the channel length decreases, the depletion regions of the source and drain come closer together and make the threshold voltage (VT) a function of the length of the channel. This is calledVTroll-off.VTalso becomes function of drain to source voltageVDS. As we increase theVDS, the depletion regions increase in size, and a considerable amount of charge is depleted by theVDS. The gate voltage required to form the channel is then lowered, and thus, theVTdecreases with an increase inVDS. This effect is called drain induced barrier lowering (DIBL). For analog operation, good gain requires a high MOSFET output impedance, which is to say, the MOSFET current should vary only slightly with the applied drain-to-source voltage. As devices are made smaller, the influence of the drain competes more successfully with that of the gate due to the growing proximity of these two electrodes, increasing the sensitivity of the MOSFET current to the drain voltage. To counteract the resulting decrease in output resistance, circuits are made more complex, either by requiring more devices, for example thecascodeandcascade amplifiers, or by feedback circuitry usingoperational amplifiers, for example a circuit like that in the adjacent figure. Thetransconductanceof the MOSFET decides its gain and is proportional to hole orelectron mobility(depending on device type), at least for low drain voltages. As MOSFET size is reduced, the fields in the channel increase and the dopant impurity levels increase. Both changes reduce the carrier mobility, and hence the transconductance. As channel lengths are reduced without proportional reduction in drain voltage, raising the electric field in the channel, the result is velocity saturation of the carriers, limiting the current and the transconductance. Traditionally, switching time was roughly proportional to the gate capacitance of gates. 
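A quick calculation illustrates why thinner dielectrics force lower operating voltages, as the passage notes. The values below are examples only (the ~1.2 nm oxide figure appears earlier in the text; 1.0 V is an arbitrary gate voltage), and the field is a simple parallel-plate estimate.

```python
# Field in the gate dielectric for a fixed voltage: E = V / t (parallel-plate estimate).
v_gate = 1.0                      # example gate voltage in volts
for t_nm in (3.0, 1.2, 0.8):      # example insulator thicknesses in nanometres
    e_v_per_cm = v_gate / (t_nm * 1e-7)          # 1 nm = 1e-7 cm
    print(f"t = {t_nm:4.1f} nm  ->  E = {e_v_per_cm / 1e6:5.2f} MV/cm")
```

At 1.2 nm and 1 V the field is already above 8 MV/cm; a breakdown strength on the order of 10 MV/cm is often quoted for thermally grown SiO2, which is why the gate voltage must be reduced as the dielectric is thinned.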
However, with transistors becoming smaller and more transistors being placed on the chip,interconnect capacitance(the capacitance of the metal-layer connections between different parts of the chip) is becoming a large percentage of capacitance.[61][62]Signals have to travel through the interconnect, which leads to increased delay and lower performance. The ever-increasing density of MOSFETs on an integrated circuit creates problems of substantial localized heat generation that can impair circuit operation. Circuits operate more slowly at high temperatures, and have reduced reliability and shorter lifetimes. Heat sinks and other cooling devices and methods are now required for many integrated circuits including microprocessors.Power MOSFETsare at risk ofthermal runaway. As their on-state resistance rises with temperature, if the load is approximately a constant-current load then the power loss rises correspondingly, generating further heat. When theheatsinkis not able to keep the temperature low enough, the junction temperature may rise quickly and uncontrollably, resulting in destruction of the device. With MOSFETs becoming smaller, the number of atoms in the silicon that produce many of the transistor's properties is becoming fewer, with the result that control of dopant numbers and placement is more erratic. During chip manufacturing, random process variations affect all transistor dimensions: length, width, junction depths, oxide thicknessetc., and become a greater percentage of overall transistor size as the transistor shrinks. The transistor characteristics become less certain, more statistical. The random nature of manufacture means we do not know which particular example MOSFETs actually will end up in a particular instance of the circuit. This uncertainty forces a less optimal design because the design must work for a great variety of possible component MOSFETs. Seeprocess variation,design for manufacturability,reliability engineering, andstatistical process control.[63] Modern ICs are computer-simulated with the goal of obtaining working circuits from the first manufactured lot. As devices are miniaturized, the complexity of the processing makes it difficult to predict exactly what the final devices look like, and modeling of physical processes becomes more challenging as well. In addition, microscopic variations in structure due simply to the probabilistic nature of atomic processes require statistical (not just deterministic) predictions. These factors combine to make adequate simulation and "right the first time" manufacture difficult. The dual-gate MOSFET has atetrodeconfiguration, where both gates control the current in the device. It is commonly used for small-signal devices in radio frequency applications where biasing the drain-side gate at constant potential reduces the gain loss caused byMiller effect, replacing two separate transistors incascodeconfiguration. Other common uses in RF circuits include gain control and mixing (frequency conversion). Thetetrodedescription, though accurate, does not replicate the vacuum-tube tetrode. Vacuum-tube tetrodes, using a screen grid, exhibit much lower grid-plate capacitance and much higher output impedance and voltage gains than triode vacuum tubes. These improvements are commonly an order of magnitude (10 times) or considerably more. Tetrode transistors (whether bipolar junction or field-effect) do not exhibit improvements of such a great degree. 
TheFinFETis a double-gatesilicon-on-insulatordevice, one of a number of geometries being introduced to mitigate the effects of short channels and reduce drain-induced barrier lowering. Thefinrefers to the narrow channel between source and drain. A thin insulating oxide layer on either side of the fin separates it from the gate. SOI FinFETs with a thick oxide on top of the fin are calleddouble-gateand those with a thin oxide on top as well as on the sides are calledtriple-gateFinFETs.[64][65] There aredepletion-modeMOSFET devices, which are less commonly used than the standardenhancement-modedevices already described. These are MOSFET devices that are doped so that a channel exists even with zero voltage from gate to source. To control the channel, a negative voltage is applied to the gate (for an n-channel device), depleting the channel, which reduces the current flow through the device. In essence, the depletion-mode device is equivalent to anormally closed(on) switch, while the enhancement-mode device is equivalent to anormally open(off) switch.[66] Due to their lownoise figurein theRFregion, and bettergain, these devices are often preferred tobipolarsinRF front-endssuch as inTVsets. Depletion-mode MOSFET families include the BF960 bySiemensandTelefunken, and the BF980 in the 1980s byPhilips(later to becomeNXP Semiconductors), whose derivatives are still used inAGCand RFmixerfront-ends. Metal–insulator–semiconductor field-effect-transistor,[67][68][69]orMISFET, is a more general term thanMOSFETand a synonym toinsulated-gate field-effect transistor(IGFET). All MOSFETs are MISFETs, but not all MISFETs are MOSFETs. The gate dielectric insulator in a MISFET is a substrate oxide (hence typicallysilicon dioxide) in a MOSFET, but other materials can also be employed. Thegate dielectriclies directly below thegate electrodeand above thechannelof the MISFET. The termmetalis historically used for the gate material, even though now it is usuallyhighly dopedpolysiliconor some othernon-metal. Insulator types may be: For devices of equal current driving capability, n-channel MOSFETs can be made smaller than p-channel MOSFETs, due to p-channel charge carriers (holes) having lowermobilitythan do n-channel charge carriers (electrons), and producing only one type of MOSFET on a silicon substrate is cheaper and technically simpler. These were the driving principles in the design ofNMOS logicwhich uses n-channel MOSFETs exclusively. However, neglectingleakage current, unlike CMOS logic, NMOS logic consumes power even when no switching is taking place. With advances in technology, CMOS logic displaced NMOS logic in the mid-1980s to become the preferred process for digital chips. Power MOSFETshave a different structure.[71]As with most power devices, the structure is vertical and not planar. Using a vertical structure, it is possible for the transistor to sustain both high blocking voltage and high current. The voltage rating of the transistor is a function of the doping and thickness of the N-epitaxiallayer (see cross section), while the current rating is a function of the channel width (the wider the channel, the higher the current). In a planar structure, the current and breakdown voltage ratings are both a function of the channel dimensions (respectively width and length of the channel), resulting in inefficient use of the "silicon estate". 
With the vertical structure, the component area is roughly proportional to the current it can sustain, and the component thickness (actually the N-epitaxial layer thickness) is proportional to the breakdown voltage.[72] Power MOSFETs with lateral structure are mainly used in high-end audio amplifiers and high-power PA systems. Their advantage is a better behaviour in the saturated region (corresponding to the linear region of a bipolar transistor) than the vertical MOSFETs. Vertical MOSFETs are designed for switching applications.[73] There areLDMOS(lateral double-diffused metal oxide semiconductor) andVDMOS(vertical double-diffused metal oxide semiconductor). Most power MOSFETs are made using this technology. Semiconductor sub-micrometer and nanometer electronic circuits are the primary concern for operating within the normal tolerance in harshradiationenvironments likeouter space. One of the design approaches for making aradiation-hardened-by-design(RHBD) device is enclosed-layout-transistor (ELT). Normally, the gate of the MOSFET surrounds the drain, which is placed in the center of the ELT. The source of the MOSFET surrounds the gate. Another RHBD MOSFET is called H-Gate. Both of these transistors have very low leakage currents with respect to radiation. However, they are large in size and take up more space on silicon than a standard MOSFET. In older STI (shallow trench isolation) designs, radiation strikes near the silicon oxide region cause the channel inversion at the corners of the standard MOSFET due to accumulation of radiation induced trapped charges. If the charges are large enough, the accumulated charges affect STI surface edges along the channel near the channel interface (gate) of the standard MOSFET. This causes a device channel inversion to occur along the channel edges, creating an off-state leakage path. Subsequently, the device turns on; this process severely degrades the reliability of circuits. The ELT offers many advantages, including an improvement ofreliabilityby reducing unwanted surface inversion at the gate edges which occurs in the standard MOSFET. Since the gate edges are enclosed in ELT, there is no gate oxide edge (STI at gate interface), and thus the transistor off-state leakage is reduced very much. Low-power microelectronic circuits including computers, communication devices, and monitoring systems in space shuttles and satellites are very different from what is used on earth. They are radiation (high-speed atomic particles likeprotonandneutron,solar flaremagnetic energy dissipation in Earth's space, energeticcosmic rayslikeX-ray,gamma rayetc.) tolerant circuits. These special electronics are designed by applying different techniques using RHBD MOSFETs to ensure safe space journeys and safe space-walks of astronauts.
https://en.wikipedia.org/wiki/MOSFET#Scaling
Extreme learning machines are feedforward neural networks for classification, regression, clustering, sparse approximation, compression and feature learning with a single layer or multiple layers of hidden nodes, where the parameters of the hidden nodes (not just the weights connecting inputs to hidden nodes) need not be tuned. These hidden nodes can be randomly assigned and never updated (i.e. they are random projections but with nonlinear transforms), or can be inherited from their ancestors without being changed. In most cases, the output weights of hidden nodes are learned in a single step, which essentially amounts to learning a linear model. The name "extreme learning machine" (ELM) was given to such models by Guang-Bin Huang, who originally proposed them for networks with any type of nonlinear piecewise continuous hidden nodes, including biological neurons and different types of mathematical basis functions.[1][2] The idea for artificial neural networks goes back to Frank Rosenblatt, who not only published a single-layer Perceptron in 1958,[3] but also introduced a multilayer perceptron with three layers: an input layer, a hidden layer with randomized weights that did not learn, and a learning output layer.[4] According to some researchers, these models are able to produce good generalization performance and learn thousands of times faster than networks trained using backpropagation.[5] The literature also reports that these models can outperform support vector machines in both classification and regression applications.[6][1][7] From 2001 to 2010, ELM research mainly focused on the unified learning framework for "generalized" single-hidden-layer feedforward neural networks (SLFNs), including but not limited to sigmoid networks, RBF networks, threshold networks,[8] trigonometric networks, fuzzy inference systems, Fourier series,[9][10] Laplacian transform, wavelet networks,[11] etc. One significant achievement made in those years was the proof of the universal approximation and classification capabilities of ELM in theory.[9][12][13] From 2010 to 2015, ELM research extended to the unified learning framework for kernel learning, SVM and a few typical feature learning methods such as Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NMF). It is argued that SVM actually provides suboptimal solutions compared to ELM, and that ELM can provide the whitebox kernel mapping, implemented by ELM random feature mapping, instead of the blackbox kernel used in SVM. PCA and NMF can be considered as special cases where linear hidden nodes are used in ELM.[14][15] From 2015 to 2017, an increased focus was placed on hierarchical implementations[16][17] of ELM.
Additionally since 2011, significant biological studies have been made that support certain ELM theories.[18][19][20] From 2017 onwards, to overcome low-convergence problem during trainingLU decomposition,Hessenberg decompositionandQR decompositionbased approaches withregularizationhave begun to attract attention[21][22][23] In 2017, Google Scholar Blog published a list of "Classic Papers: Articles That Have Stood The Test of Time".[24]Among these are two papers written about ELM which are shown in studies 2 and 7 from the "List of 10 classic AI papers from 2006".[25][26][27] Given a single hidden layer of ELM, suppose that the output function of thei{\displaystyle i}-th hidden node ishi(x)=G(ai,bi,x){\displaystyle h_{i}(\mathbf {x} )=G(\mathbf {a} _{i},b_{i},\mathbf {x} )}, whereai{\displaystyle \mathbf {a} _{i}}andbi{\displaystyle b_{i}}are the parameters of thei{\displaystyle i}-th hidden node. The output function of the ELM for single hidden layer feedforward networks (SLFN) withL{\displaystyle L}hidden nodes is: fL(x)=∑i=1Lβihi(x){\displaystyle f_{L}({\bf {x}})=\sum _{i=1}^{L}{\boldsymbol {\beta }}_{i}h_{i}({\bf {x}})}, whereβi{\displaystyle {\boldsymbol {\beta }}_{i}}is the output weight of thei{\displaystyle i}-th hidden node. h(x)=[hi(x),...,hL(x)]{\displaystyle \mathbf {h} (\mathbf {x} )=[h_{i}(\mathbf {x} ),...,h_{L}(\mathbf {x} )]}is the hidden layer output mapping of ELM. GivenN{\displaystyle N}training samples, the hidden layer output matrixH{\displaystyle \mathbf {H} }of ELM is given as:H=[h(x1)⋮h(xN)]=[G(a1,b1,x1)⋯G(aL,bL,x1)⋮⋮⋮G(a1,b1,xN)⋯G(aL,bL,xN)]{\displaystyle {\bf {H}}=\left[{\begin{matrix}{\bf {h}}({\bf {x}}_{1})\\\vdots \\{\bf {h}}({\bf {x}}_{N})\end{matrix}}\right]=\left[{\begin{matrix}G({\bf {a}}_{1},b_{1},{\bf {x}}_{1})&\cdots &G({\bf {a}}_{L},b_{L},{\bf {x}}_{1})\\\vdots &\vdots &\vdots \\G({\bf {a}}_{1},b_{1},{\bf {x}}_{N})&\cdots &G({\bf {a}}_{L},b_{L},{\bf {x}}_{N})\end{matrix}}\right]} andT{\displaystyle \mathbf {T} }is the training data target matrix:T=[t1⋮tN]{\displaystyle {\bf {T}}=\left[{\begin{matrix}{\bf {t}}_{1}\\\vdots \\{\bf {t}}_{N}\end{matrix}}\right]} Generally speaking, ELM is a kind of regularization neural networks but with non-tuned hidden layer mappings (formed by either random hidden nodes, kernels or other implementations), its objective function is: Minimize:‖β‖pσ1+C‖Hβ−T‖qσ2{\displaystyle {\text{Minimize: }}\|{\boldsymbol {\beta }}\|_{p}^{\sigma _{1}}+C\|{\bf {H}}{\boldsymbol {\beta }}-{\bf {T}}\|_{q}^{\sigma _{2}}} whereσ1>0,σ2>0,p,q=0,12,1,2,⋯,+∞{\displaystyle \sigma _{1}>0,\sigma _{2}>0,p,q=0,{\frac {1}{2}},1,2,\cdots ,+\infty }. Different combinations ofσ1{\displaystyle \sigma _{1}},σ2{\displaystyle \sigma _{2}},p{\displaystyle p}andq{\displaystyle q}can be used and result in different learning algorithms for regression, classification, sparse coding, compression, feature learning and clustering. As a special case, a simplest ELM training algorithm learns a model of the form (for single hidden layer sigmoid neural networks): whereW1is the matrix of input-to-hidden-layer weights,σ{\displaystyle \sigma }is an activation function, andW2is the matrix of hidden-to-output-layer weights. The algorithm proceeds as follows: In most cases, ELM is used as a single hidden layer feedforward network (SLFN) including but not limited to sigmoid networks, RBF networks, threshold networks, fuzzy inference networks, complex neural networks, wavelet networks, Fourier transform, Laplacian transform, etc. 
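The algorithm steps referred to above ("The algorithm proceeds as follows:") were lost in extraction. The sketch below is a minimal NumPy rendering of the usual single-hidden-layer ELM recipe under the notation given in the text: random, untrained input weights and biases, a sigmoid hidden-layer output matrix H, and output weights obtained in one regularized least-squares solve. Function names such as `elm_fit` are hypothetical, and the ridge-style solve is shown as one common way to realise the regularized objective given above (with p = q = σ1 = σ2 = 2), not as the only ELM variant.

```python
import numpy as np

def elm_fit(X, T, n_hidden=100, C=1e3, seed=0):
    """Minimal ELM sketch: f(x) = h(x) @ beta with a random, untrained hidden layer."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights a_i
    b = rng.normal(size=n_hidden)                   # random hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b)))         # sigmoid hidden-layer output matrix H
    # Output weights from  min ||beta||^2 + C ||H beta - T||^2
    #   ->  beta = (H^T H + I / C)^(-1) H^T T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W1, b, beta

def elm_predict(X, W1, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b)))
    return H @ beta

# Toy usage: fit y = sin(x) on random samples.
X = np.random.default_rng(1).uniform(-3, 3, size=(200, 1))
T = np.sin(X)
W1, b, beta = elm_fit(X, T)
print(np.abs(elm_predict(X, W1, b, beta) - T).mean())   # small training error expected
```

Only `beta` is learned; `W1` and `b` are never updated, which is what makes the training a single linear solve rather than an iterative gradient-descent procedure.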
Due to its different learning algorithm implementations for regression, classification, sparse coding, compression, feature learning and clustering, multi ELMs have been used to form multi hidden layer networks,deep learningor hierarchical networks.[16][17][28] A hidden node in ELM is a computational element, which need not be considered as classical neuron. A hidden node in ELM can be classical artificial neurons, basis functions, or a subnetwork formed by some hidden nodes.[12] Both universal approximation and classification capabilities[6][1]have been proved for ELM in literature. Especially,Guang-Bin Huangand his team spent almost seven years (2001-2008) on the rigorous proofs of ELM's universal approximation capability.[9][12][13] In theory, any nonconstant piecewise continuous function can be used as activation function in ELM hidden nodes, such an activation function need not be differential. If tuning the parameters of hidden nodes could make SLFNs approximate any target functionf(x){\displaystyle f(\mathbf {x} )}, then hidden node parameters can be randomly generated according to any continuous distribution probability, andlimL→∞‖∑i=1Lβihi(x)−f(x)‖=0{\displaystyle \lim _{L\rightarrow \infty }\left\|\sum _{i=1}^{L}{\boldsymbol {\beta }}_{i}h_{i}({\bf {x}})-f({\bf {x}})\right\|=0}holds with probability one with appropriate output weightsβ{\displaystyle {\boldsymbol {\beta }}}. Given any nonconstant piecewise continuous function as the activation function in SLFNs, if tuning the parameters of hidden nodes can make SLFNs approximate any target functionf(x){\displaystyle f(\mathbf {x} )}, then SLFNs with random hidden layer mappingh(x){\displaystyle \mathbf {h} (\mathbf {x} )}can separate arbitrary disjoint regions of any shapes. A wide range of nonlinear piecewise continuous functionsG(a,b,x){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )}can be used in hidden neurons of ELM, for example: Sigmoid function:G(a,b,x)=11+exp⁡(−(a⋅x+b)){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )={\frac {1}{1+\exp(-(\mathbf {a} \cdot \mathbf {x} +b))}}} Fourier function:G(a,b,x)=sin⁡(a⋅x+b){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )=\sin(\mathbf {a} \cdot \mathbf {x} +b)} Hardlimit function:G(a,b,x)={1,ifa⋅x−b≥00,otherwise{\displaystyle G(\mathbf {a} ,b,\mathbf {x} )={\begin{cases}1,&{\text{if }}{\bf {a}}\cdot {\bf {x}}-b\geq 0\\0,&{\text{otherwise}}\end{cases}}} Gaussian function:G(a,b,x)=exp⁡(−b‖x−a‖2){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )=\exp(-b\|\mathbf {x} -\mathbf {a} \|^{2})} Multiquadrics function:G(a,b,x)=(‖x−a‖2+b2)1/2{\displaystyle G(\mathbf {a} ,b,\mathbf {x} )=(\|\mathbf {x} -\mathbf {a} \|^{2}+b^{2})^{1/2}} Wavelet:G(a,b,x)=‖a‖−1/2Ψ(x−ab){\displaystyle G(\mathbf {a} ,b,\mathbf {x} )=\|a\|^{-1/2}\Psi \left({\frac {\mathbf {x} -\mathbf {a} }{b}}\right)}whereΨ{\displaystyle \Psi }is a single mother wavelet function. 
Circular functions: tan⁡(z)=eiz−e−izi(eiz+e−iz){\displaystyle \tan(z)={\frac {e^{iz}-e^{-iz}}{i(e^{iz}+e^{-iz})}}} sin⁡(z)=eiz−e−iz2i{\displaystyle \sin(z)={\frac {e^{iz}-e^{-iz}}{2i}}} Inverse circular functions: arctan⁡(z)=∫0zdt1+t2{\displaystyle \arctan(z)=\int _{0}^{z}{\frac {dt}{1+t^{2}}}} arccos⁡(z)=∫0zdt(1−t2)1/2{\displaystyle \arccos(z)=\int _{0}^{z}{\frac {dt}{(1-t^{2})^{1/2}}}} Hyperbolic functions: tanh⁡(z)=ez−e−zez+e−z{\displaystyle \tanh(z)={\frac {e^{z}-e^{-z}}{e^{z}+e^{-z}}}} sinh⁡(z)=ez−e−z2{\displaystyle \sinh(z)={\frac {e^{z}-e^{-z}}{2}}} Inverse hyperbolic functions: arctanh(z)=∫0zdt1−t2{\displaystyle {\text{arctanh}}(z)=\int _{0}^{z}{\frac {dt}{1-t^{2}}}} arcsinh(z)=∫0zdt(1+t2)1/2{\displaystyle {\text{arcsinh}}(z)=\int _{0}^{z}{\frac {dt}{(1+t^{2})^{1/2}}}} Theblack-boxcharacter of neural networks in general and extreme learning machines (ELM) in particular is one of the major concerns that repels engineers from application in unsafe automation tasks. This particular issue was approached by means of several different techniques. One approach is to reduce the dependence on the random input.[29][30]Another approach focuses on the incorporation of continuous constraints into the learning process of ELMs[31][32]which are derived from prior knowledge about the specific task. This is reasonable, because machine learning solutions have to guarantee a safe operation in many application domains. The mentioned studies revealed that the special form of ELMs, with its functional separation and the linear read-out weights, is particularly well suited for the efficient incorporation of continuous constraints in predefined regions of the input space. There are two main complaints from academic community concerning this work, the first one is about "reinventing and ignoring previous ideas", the second one is about "improper naming and popularizing", as shown in some debates in 2008 and 2015.[33]In particular, it was pointed out in a letter[34]to the editor ofIEEE Transactions on Neural Networksthat the idea of using a hidden layer connected to the inputs by random untrained weights was already suggested in the original papers onRBF networksin the late 1980s; Guang-Bin Huang replied by pointing out subtle differences.[35]In a 2015 paper,[1]Huang responded to complaints about his invention of the name ELM for already-existing methods, complaining of "very negative and unhelpful comments on ELM in neither academic nor professional manner due to various reasons and intentions" and an "irresponsible anonymous attack which intends to destroy harmony research environment", arguing that his work "provides a unifying learning platform" for various types of neural nets,[1]including hierarchical structured ELM.[28]In 2015, Huang also gave a formal rebuttal to what he considered as "malign and attack."[36]Recent research replaces the random weights with constrained random weights.[6][37]
https://en.wikipedia.org/wiki/Extreme_learning_machine
TheSecurity of Information Act(French:Loi sur la protection de l’information, R.S.C. 1985, c. O-5),[1]formerly known as theOfficial Secrets Act, is an Act of theParliament of Canadathat addressesnational securityconcerns, including threats ofespionageby foreign powers andterrorist organizations, and the intimidation or coercion of ethnocultural communities in and against Canada. Certain departments ('Scheduled department') and classes of people (past and current employees) are 'permanently bound to secrecy' under the Act. These are individuals who should be held to a higher level of accountability for unauthorized disclosures of information obtained in relation to their work. For example, Military Intelligence, employees ofCanadian Security Intelligence Service(CSIS),Communications Security Establishmentand certain members of theRoyal Canadian Mounted Police(RCMP). This act applies to anyone who has been grantedsecurity clearanceby the Federal Government, including those who have been granted Reliability Status for accessing designated information. Previously, only 'classified' information was protected under theOfficial Secrets Act1981. On 13 January 2012,Jeffrey Delisle, acommissioned officerin theRoyal Canadian Navy, was arrested and charged under theSecurity of Information Act. Delisle was accused of sellingclassifiedintelligence, including that of Canadian and otherFive Eyesintelligence agencies, to theRussian Federation.[2] In October 2012, Delisle pleaded guilty to one count each of communicating safeguarded information and attempting to communicate safeguarded information, and one count of breach of trust under theCriminal Code. Delisle was sentenced on 8 February 2013 to 20 years in prison and fined $111,817, the amount investigators claimed he received from Russia.[3] Delisle was granted day parole in August 2018 and subsequently released on full parole in March 2019 with theParole Board of Canadaconsidering Delisle of low risk to reoffend.[4] On 12 September 2019, Cameron Ortis, a senior intelligence official with theRoyal Canadian Mounted Police, was arrested and subsequently charged with ten offences under theSecurity of Information Actand theCriminal Code.[5]Ortis was accused of leakingclassifiedintelligence to individuals under investigation by Canadian and international authorities.[6] Following an eight-week-long trial, on 22 November 2023, Ortis was found guilty of four counts of intentionally and without authority communicating special operational information to unauthorized individuals. Ortis was also found guilty of one count each of fraudulently obtaining a computer service and breach of trust under theCriminal Code.[7] On 7 February 2024, Ortis was sentenced to 14 years in prison minus time served in custody.[8]
https://en.wikipedia.org/wiki/Security_of_Information_Act
Scientific modellingis an activity that producesmodelsrepresentingempiricalobjects, phenomena, and physical processes, to make a particular part or feature of the world easier tounderstand,define,quantify,visualize, orsimulate. It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a system with those features. Different types of models may be used for different purposes, such asconceptual modelsto better understand, operational models tooperationalize,mathematical modelsto quantify,computational modelsto simulate, andgraphical modelsto visualize the subject. Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling.[1][2]The following was said byJohn von Neumann.[3] ... the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work—that is, correctly to describe phenomena from a reasonably wide area. There is also an increasing attention to scientific modelling[4]in fields such asscience education,[5]philosophy of science,systems theory, andknowledge visualization. There is a growing collection ofmethods, techniques and meta-theoryabout all kinds of specialized scientific modelling. A scientific model seeks to representempiricalobjects, phenomena, and physical processes in alogicalandobjectiveway. All models arein simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful.[6]Building and disputing models is fundamental to the scientific enterprise. Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting.[7] Attempts toformalizetheprinciplesof theempirical sciencesuse aninterpretationto model reality, in the same way logiciansaxiomatizetheprinciplesoflogic. The aim of these attempts is to construct aformal systemthat will not produce theoretical consequences that are contrary to what is found inreality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true.[8][9] For the scientist, a model is also a way in which the human thought processes can be amplified.[10]For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models arein silico. Other types of scientific models arein vivo(living models, such aslaboratory rats) andin vitro(in glassware, such astissue culture).[11] Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes undercontrolled conditions(seeScientific method) will always be more reliable than modeled estimates of outcomes. 
Withinmodeling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints.[12]It is task-driven because a model is captured with a certain question or task in mind. Simplifications leave all the known and observed entities and their relation out that are not important for the task. Abstraction aggregates information that is important but not needed in the same detail as the object of interest. Both activities, simplification, and abstraction, are done purposefully. However, they are done based on a perception of reality. This perception is already amodelin itself, as it comes with a physical constraint. There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive constraints that limit what we are able to explain with our current theories. This model comprises the concepts, their behavior, and their relations informal form and is often referred to as aconceptual model. In order to execute the model, it needs to be implemented as acomputer simulation. This requires more choices, such as numerical approximations or the use of heuristics.[13]Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods: theory building, simulation, and experimentation.[14] Asimulationis a way to implement the model, often employed when the model is too complex for the analytical solution. A steady-state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation shows how a particular object or phenomenon will behave. Such a simulation can be useful fortesting, analysis, or training in those cases where real-world systems or concepts can be represented by models.[15] Structureis a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child's verbal description of a snowflake, to the detailedscientific analysisof the properties ofmagnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art.[16] Asystemis a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone.[17]The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and form relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete in which the variables change instantaneously at separate points in time and, 2) continuous where the state variables change continuously with respect to time.[18] Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different—that is to say, that the differences between them comprise more than just a simple renaming of components. 
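As a small illustration of the distinction drawn above between a dynamic simulation and a steady state, the sketch below integrates a trivial conceptual model (Newton's law of cooling) with an explicit Euler step, an example of the numerical approximations mentioned in the text. All parameter values are arbitrary illustrative choices, not quantities from the source.

```python
# Dynamic simulation of a trivial conceptual model: Newton's law of cooling,
# dT/dt = -k * (T - T_ambient). Its steady state, if reached, is T = T_ambient.
k, t_ambient = 0.3, 20.0          # illustrative model parameters
temp, dt = 90.0, 0.1              # initial state and Euler time step

for step in range(600):           # dynamic simulation: the state evolves over time
    temp += dt * (-k * (temp - t_ambient))

print(round(temp, 2))             # approaches the steady-state value, 20.0
```

The loop is the dynamic simulation (information over time); the printed value approximates the steady-state answer that a static analysis would give directly.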
Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modelers and to contingent decisions made during the modelling process. Considerations that may influence thestructureof a model might be the modeler's preference for a reducedontology, preferences regardingstatistical modelsversusdeterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use. Building a model requiresabstraction. Assumptions are used in modelling in order to specify the domain of application of the model. For example, thespecial theory of relativityassumes aninertial frame of reference. This assumption was contextualized and further explained by thegeneral theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (thegeneral theory of relativityworks in non-inertial reference frames as well). A model is evaluated first and foremost by its consistency to empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Factors important in evaluating a model include:[citation needed] People may attempt to quantify the evaluation of a model using autility function. Visualizationis any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history includecave paintings,Egyptian hieroglyphs, Greekgeometry, andLeonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. Space mappingrefers to a methodology that employs a "quasi-global" modelling formulation to link companion "coarse" (ideal or low-fidelity) with "fine" (practical or high-fidelity) models of different complexities. Inengineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model). One application of scientific modelling is the field ofmodelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement, and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools. The figure shows how modelling and simulation is used as a central part of an integrated program in a defence capability development process.[15] Nowadays there are some 40 magazines about scientific modelling which offer all kinds of international forums. Since the 1960s there is a strongly growing number of books and magazines about specific forms of scientific modelling. 
There is also a lot of discussion about scientific modelling in the philosophy-of-science literature.
https://en.wikipedia.org/wiki/Scientific_modelling
Privacy(UK:/ˈprɪvəsi/,US:/ˈpraɪ-/)[1][2]is the ability of an individual or group to seclude themselves orinformationabout themselves, and thereby express themselves selectively. The domain of privacy partially overlaps withsecurity, which can include the concepts of appropriate use andprotection of information. Privacy may also take the form ofbodily integrity. Throughout history, there have been various conceptions of privacy. Most cultures acknowledge the right of individuals to keep aspects of their personal lives out of the public domain. The right to be free from unauthorized invasions of privacy by governments, corporations, or individuals is enshrined in the privacy laws of many countries and, in some instances, their constitutions. With the rise of technology, the debate regarding privacy has expanded from a bodily sense to include a digital sense. In most countries, the right todigital privacyis considered an extension of the originalright to privacy, and many countries have passed acts that further protect digital privacy from public and private entities. There are multiple techniques to invade privacy, which may be employed by corporations or governments for profit or political reasons. Conversely, in order to protect privacy, people may employencryptionoranonymitymeasures. The word privacy is derived from the Latin word and concept of ‘privatus’, which referred to things set apart from what is public; personal and belonging to oneself, and not to the state.[3]Literally, ‘privatus’ is the past participle of the Latin verb ‘privere’ meaning ‘to be deprived of’.[4] The concept of privacy has been explored and discussed by numerous philosophers throughout history. Privacy has historical roots in ancient Greek philosophical discussions. The most well-known of these wasAristotle's distinction between two spheres of life: the public sphere of thepolis, associated with political life, and the private sphere of theoikos, associated with domestic life.[5]Privacy is valued along with other basic necessities of life in the Jewishdeutero-canonicalBook of Sirach.[6] Islam's holy text, the Qur'an, states the following regarding privacy: ‘Do not spy on one another’ (49:12); ‘Do not enter any houses except your own homes unless you are sure of their occupants' consent’ (24:27).[7] English philosopherJohn Locke’s (1632-1704) writings on natural rights and the social contract laid the groundwork for modern conceptions of individual rights, including the right to privacy. In hisSecond Treatise of Civil Government(1689), Locke argued that a man is entitled to his own self through one’s natural rights of life, liberty, and property.[8]He believed that the government was responsible for protecting these rights so individuals were guaranteed private spaces to practice personal activities.[9] In the political sphere, philosophers hold differing views on the right of private judgment. German philosopherGeorg Wilhelm Friedrich Hegel(1770-1831) makes the distinction betweenmoralität, which refers to an individual’s private judgment, andsittlichkeit, pertaining to one’s rights and obligations as defined by an existing corporate order. On the contrary,Jeremy Bentham(1748-1832), an English philosopher, interpreted law as an invasion of privacy. His theory ofutilitarianismargued that legal actions should be judged by the extent of their contribution to human wellbeing, or necessary utility.[10] Hegel’s notions were modified by prominent 19th century English philosopherJohn Stuart Mill. 
Mill’s essayOn Liberty(1859) argued for the importance of protecting individual liberty against the tyranny of the majority and the interference of the state. His views emphasized the right of privacy as essential for personal development and self-expression.[11] Discussions surrounding surveillance coincided with philosophical ideas on privacy. Jeremy Bentham developed the phenomenon known as the Panoptic effect through his 1791 architectural design of a prison calledPanopticon. The phenomenon explored the possibility of surveillance as a general awareness of being watched that could never be proven at any particular moment.[12]French philosopherMichel Foucault(1926-1984) concluded that the possibility of surveillance in the instance of the Panopticon meant a prisoner had no choice but to conform to the prison's rules.[12] As technology has advanced, the way in which privacy is protected and violated has changed with it. In the case of some technologies, such as theprinting pressor theInternet, the increased ability to share information can lead to new ways in which privacy can be breached. It is generally agreed that the first publication advocating privacy in the United States was the 1890 article bySamuel WarrenandLouis Brandeis, "The Right to Privacy",[13]and that it was written mainly in response to the increase in newspapers and photographs made possible by printing technologies.[14] In 1948,1984,written byGeorge Orwell, was published. A classic dystopian novel,1984describes the life of Winston Smith in 1984, located in Oceania, a totalitarian state. The all-controlling Party, the party in power led by Big Brother, is able to control power through masssurveillanceand limited freedom of speech and thought. George Orwell provides commentary on the negative effects oftotalitarianism, particularly on privacy andcensorship.[15]Parallels have been drawn between1984and modern censorship and privacy, a notable example being that large social media companies, rather than the government, are able to monitor a user's data and decide what is allowed to be said online through their censorship policies, ultimately for monetary purposes.[16] In the 1960s, people began to consider how changes in technology were bringing changes in the concept of privacy.[17]Vance Packard'sThe Naked Societywas a popular book on privacy from that era and led US discourse on privacy at that time.[17]In addition,Alan Westin'sPrivacy and Freedomshifted the debate regarding privacy from a physical sense, how the government controls a person's body (i.e.Roe v. Wade) and other activities such as wiretapping and photography. As important records became digitized, Westin argued that personal data was becoming too accessible and that a person should have complete jurisdiction over their data, laying the foundation for the modern discussion of privacy.[18] New technologies can also create new ways to gather private information. In 2001, the legal caseKyllo v. United States(533 U.S. 27) determined that the use ofthermal imagingdevices that can reveal previously unknown information without a warrant constitutes a violation of privacy. In 2019, after developing a corporate rivalry in competing voice-recognition software,AppleandAmazonrequired employees to listen tointimatemoments and faithfully transcribe the contents.[19] Police and citizens often conflict on what degree the police can intrude a citizen's digital privacy. For instance, in 2012, theSupreme Courtruled unanimously inUnited States v. Jones(565 U.S. 
400), a case in which Antoine Jones had been arrested for drug possession after police placed a GPS tracker on his car without a warrant, that such warrantless tracking infringes the Fourth Amendment. The Supreme Court also reasoned that there is some "reasonable expectation of privacy" in transportation, since a reasonable expectation of privacy had already been established under Griswold v. Connecticut (1965), and it further clarified that the Fourth Amendment pertains not only to physical instances of intrusion but also to digital ones; United States v. Jones thus became a landmark case.[20] In 2014, the Supreme Court ruled unanimously in Riley v. California (573 U.S. 373) that searching a citizen's phone without a warrant is an unreasonable search and a violation of the Fourth Amendment. In that case, David Leon Riley had been pulled over for driving on expired license tags; the police searched his phone and discovered that he was tied to a shooting. The Supreme Court concluded that cell phones contain personal information different from trivial items, and it went further, stating that information stored on the cloud was not necessarily a form of evidence. Riley v. California became a landmark case protecting citizens' digital privacy when confronted with the police.[21] A recent notable occurrence of the conflict between law enforcement and a citizen in terms of digital privacy has been the 2018 case Carpenter v. United States (585 U.S. ____). In this case, the FBI used cell phone records obtained without a warrant to arrest Timothy Ivory Carpenter on multiple charges, and the Supreme Court ruled that the warrantless search of cell phone records violated the Fourth Amendment, citing that the Fourth Amendment protects "reasonable expectations of privacy" and that information sent to third parties can still fall within those expectations.[22] Beyond law enforcement, many interactions between the government and citizens have been revealed either lawfully or unlawfully, specifically through whistleblowers. One notable example is Edward Snowden, who disclosed multiple operations related to the mass surveillance activities of the National Security Agency (NSA). It was discovered that the NSA had breached the privacy of millions of people, mainly through mass surveillance programs, whether by collecting great amounts of data through third-party private companies, hacking into foreign embassies or the networks of other countries, or various other breaches of data. The disclosures prompted a culture shock and stirred international debate related to digital privacy.[23] The Internet and technologies built on it enable new forms of social interaction at increasingly faster speeds and larger scales.
Because the computer networks which underlie the Internet introduce such a wide range of novel security concerns, the discussion ofprivacyon the Internet is often conflated withsecurity.[24]Indeed, many entities such as corporations involved in thesurveillance economyinculcate a security-focused conceptualization of privacy which reduces their obligations to uphold privacy into a matter ofregulatory compliance,[25]while at the same timelobbyingto minimize those regulatory requirements.[26] The Internet's effect on privacy includes all of the ways that computational technology and the entities that control it can subvert the privacy expectations of theirusers.[27][28]In particular, theright to be forgottenis motivated by both thecomputational abilityto store and search through massive amounts of data as well as thesubverted expectationsof users who share information online without expecting it to be stored and retained indefinitely. Phenomena such asrevenge pornanddeepfakesare not merely individual because they require both the ability to obtain images without someone's consent as well as the social and economic infrastructure to disseminate that content widely.[28]Therefore, privacy advocacy groups such as theCyber Civil Rights Initiativeand theElectronic Frontier Foundationargue that addressing the new privacy harms introduced by the Internet requires both technological improvements toencryptionandanonymityas well as societal efforts such aslegal regulationsto restrict corporate and government power.[29][30] While theInternetbegan as a government and academic effort up through the 1980s, private corporations began to enclose the hardware and software of the Internet in the 1990s, and now most Internet infrastructure is owned and managed by for-profit corporations.[31]As a result, the ability of governments to protect their citizens' privacy is largely restricted toindustrial policy, instituting controls on corporations that handle communications orpersonal data.[32][33]Privacy regulations are often further constrained to only protect specific demographics such as children,[34]or specific industries such as credit card bureaus.[35] Several online social network sites (OSNs) are among the top 10 most visited websites globally. Facebook for example, as of August 2015, was the largest social-networking site, with nearly 2.7 billion[36]members, who upload over 4.75 billion pieces of content daily. 
WhileTwitteris significantly smaller with 316 million registered users, the USLibrary of Congressrecently announced that it will be acquiring and permanently storing the entire archive of public Twitter posts since 2006.[27] A review and evaluation of scholarly work regarding the current state of the value of individuals' privacy of online social networking show the following results: "first, adults seem to be more concerned about potential privacy threats than younger users; second, policy makers should be alarmed by a large part of users who underestimate risks of their information privacy on OSNs; third, in the case of using OSNs and its services, traditional one-dimensional privacy approaches fall short".[37]This is exacerbated bydeanonymizationresearch indicating that personal traits such as sexual orientation, race, religious and political views, personality, or intelligence can be inferred based on a wide variety ofdigital footprints, such as samples of text, browsing logs, or Facebook Likes.[38] Intrusions of social media privacy are known to affect employment in the United States.Microsoftreports that 75 percent of U.S. recruiters and human-resource professionals now do online research about candidates, often using information provided by search engines, social-networking sites, photo/video-sharing sites, personal web sites and blogs, andTwitter. They also report that 70 percent of U.S. recruiters have rejected candidates based on internet information. This has created a need by many candidates to control various onlineprivacy settingsin addition to controlling their online reputations, the conjunction of which has led to legal suits against both social media sites and US employers.[27] Selfiesare popular today. A search for photos with the hashtag #selfie retrieves over 23 million results on Instagram and 51 million with the hashtag #me.[39]However, due to modern corporate and governmental surveillance, this may pose a risk to privacy.[40]In a research study which takes a sample size of 3763, researchers found that for users posting selfies on social media, women generally have greater concerns over privacy than men, and that users' privacy concerns inversely predict their selfie behavior and activity.[41] An invasion of someone's privacy may be widely and quickly disseminated over the Internet. When social media sites and other online communities fail to invest incontent moderation, an invasion of privacy can expose people to a much greater volume and degree of harassment than would otherwise be possible.Revenge pornmay lead tomisogynistorhomophobicharassment, such as in thesuicide of Amanda Toddand thesuicide of Tyler Clementi. When someone's physical location or other sensitive information is leaked over the Internet viadoxxing, harassment may escalate to direct physical harm such asstalkingorswatting. Despite the way breaches of privacy can magnify online harassment, online harassment is often used as a justification to curtailfreedom of speech, by removing the expectation of privacy viaanonymity, or by enabling law enforcement to invade privacy without asearch warrant. 
In the wake of Amanda Todd's death, the Canadian parliament proposed a motion purporting to stop bullying, but Todd's mother herself gave testimony to parliament rejecting the bill due to its provisions for warrantless breaches of privacy, stating "I don't want to see our children victimized again by losing privacy rights."[42][43][44] Even where these laws have been passed despite privacy concerns, they have not demonstrated a reduction in online harassment. When theKorea Communications Commissionintroduced a registration system for online commenters in 2007, they reported that malicious comments only decreased by 0.9%, and in 2011 it was repealed.[45]A subsequent analysis found that the set of users who posted the most comments actually increased the number of "aggressive expressions" when forced to use their real name.[46] In the US, while federal law only prohibits online harassment based on protected characteristics such as gender and race,[47]individual states have expanded the definition of harassment to further curtail speech: Florida's definition of online harassment includes "any use of data or computer software" that "Has the effect of substantially disrupting the orderly operation of a school."[48] Increasingly, mobile devices facilitatelocation tracking. This creates user privacy problems. A user's location and preferences constitutepersonal information, and their improper use violates that user's privacy. A recent MIT study by de Montjoye et al. showed that four spatio-temporal points constituting approximate places and times are enough to uniquely identify 95% of 1.5M people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low. Therefore, even coarse or blurred datasets confer little privacy protection.[49] Several methods to protect user privacy in location-based services have been proposed, including the use of anonymizing servers and blurring of information. Methods to quantify privacy have also been proposed, to calculate the equilibrium between the benefit of obtaining accurate location information and the risks of breaching an individual's privacy.[50] There have been scandals regarding location privacy. One instance was the scandal concerningAccuWeather, where it was revealed that AccuWeather was selling locational data. This consisted of a user's locational data, even if they opted out within Accuweather, which tracked users' location. Accuweather sold this data to Reveal Mobile, a company that monetizes data related to a user's location.[51]Other international cases are similar to the Accuweather case. In 2017, a leaky API inside the McDelivery App exposed private data, which consisted of home addresses, of 2.2 million users.[52] In the wake of these types of scandals, many large American technology companies such as Google, Apple, and Facebook have been subjected to hearings and pressure under the U.S. legislative system. In 2011, US SenatorAl Frankenwrote an open letter toSteve Jobs, noting the ability ofiPhonesandiPadsto record and store users' locations in unencrypted files.[53][54]Apple claimed this was an unintentionalsoftware bug, but Justin Brookman of theCenter for Democracy and Technologydirectly challenged that portrayal, stating "I'm glad that they are fixing what they call bugs, but I take exception with their strong denial that they track users."[55]In 2021, the U.S. 
state of Arizona found in a court case that Google misled its users and stored the location of users regardless of their location settings.[56] The Internet has become a significant medium for advertising, with digital marketing making up approximately half of the global ad spending in 2019.[57]While websites are still able to sell advertising space without tracking, including viacontextual advertising, digital ad brokers such asFacebookandGooglehave instead encouraged the practice ofbehavioral advertising, providing code snippets used by website owners to track their users viaHTTP cookies. This tracking data is also sold to other third parties as part of themass surveillance industry. Since the introduction of mobile phones, data brokers have also been planted within apps, resulting in a $350 billion digital industry especially focused on mobile devices.[58] Digital privacy has become the main source of concern for many mobile users, especially with the rise of privacy scandals such as theFacebook–Cambridge Analytica data scandal.[58]Applehas received some reactions for features that prohibit advertisers from tracking a user's data without their consent.[59]Google attempted to introduce an alternative to cookies namedFLoCwhich it claimed reduced the privacy harms, but it later retracted the proposal due to antitrust probes and analyses that contradicted their claims of privacy.[60][61][62] The ability to do online inquiries about individuals has expanded dramatically over the last decade. Importantly, directly observed behavior, such as browsing logs, search queries, or contents of a public Facebook profile, can be automatically processed to infer secondary information about an individual, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality.[63] In Australia, theTelecommunications (Interception and Access) Amendment (Data Retention) Act 2015made a distinction between collecting the contents of messages sent between users and the metadata surrounding those messages. Most countries give citizens rights to privacy in their constitutions.[17]Representative examples of this include theConstitution of Brazil, which says "the privacy, private life, honor and image of people are inviolable"; theConstitution of South Africasays that "everyone has a right to privacy"; and theConstitution of the Republic of Koreasays "the privacy of no citizen shall be infringed."[17]TheItalian Constitutionalso defines the right to privacy.[64]Among most countries whose constitutions do not explicitly describe privacy rights, court decisions have interpreted their constitutions to intend to give privacy rights.[17] Many countries have broad privacy laws outside their constitutions, including Australia'sPrivacy Act 1988, Argentina's Law for the Protection of Personal Data of 2000, Canada's 2000Personal Information Protection and Electronic Documents Act, and Japan's 2003 Personal Information Protection Law.[17] Beyond national privacy laws, there are international privacy agreements.[65]The United NationsUniversal Declaration of Human Rightssays "No one shall be subjected to arbitrary interference with [their] privacy, family, home or correspondence, nor to attacks upon [their] honor and reputation."[17]TheOrganisation for Economic Co-operation and Developmentpublished its Privacy Guidelines in 1980. 
The European Union's 1995 Data Protection Directive guides privacy protection in Europe.[17]The 2004 Privacy Framework by theAsia-Pacific Economic Cooperationis a privacy protection agreement for the members of that organization.[17] Approaches to privacy can, broadly, be divided into two categories: free market orconsumer protection.[66] One example of the free market approach is to be found in the voluntary OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.[67]The principles reflected in the guidelines, free of legislative interference, are analyzed in an article putting them into perspective with concepts of theGDPRput into law later in the European Union.[68] In a consumer protection approach, in contrast, it is claimed that individuals may not have the time or knowledge to make informed choices, or may not have reasonable alternatives available. In support of this view, Jensen and Potts showed that mostprivacy policiesare above the reading level of the average person.[69] ThePrivacy Act 1988is administered by the Office of the Australian Information Commissioner. The initial introduction of privacy law in 1998 extended to the public sector, specifically to Federal government departments, under the Information Privacy Principles. State government agencies can also be subject to state based privacy legislation. This built upon the already existing privacy requirements that applied to telecommunications providers (under Part 13 of theTelecommunications Act 1997), and confidentiality requirements that already applied to banking, legal and patient / doctor relationships.[70] In 2008 the Australian Law Reform Commission (ALRC) conducted a review of Australian privacy law and produced a report titled "For Your Information".[71]Recommendations were taken up and implemented by the Australian Government via the Privacy Amendment (Enhancing Privacy Protection) Bill 2012.[72] In 2015, theTelecommunications (Interception and Access) Amendment (Data Retention) Act 2015was passed, to somecontroversy over its human rights implicationsand the role of media. Canada is a federal state whose provinces and territories abide by thecommon lawsave the province of Quebec whose legal tradition is thecivil law. Privacy in Canada was first addressed through thePrivacy Act,[73]a 1985 piece of legislation applicable to personal information held by government institutions. The provinces and territories would later follow suit with their own legislation. 
Generally, the purposes of said legislation are to provide individuals with rights to access personal information, to have inaccurate personal information corrected, and to prevent the unauthorized collection, use, and disclosure of personal information.[74] In terms of regulating personal information in the private sector, the federal Personal Information Protection and Electronic Documents Act[75] ("PIPEDA") is enforceable in all jurisdictions unless a substantially similar provision has been enacted at the provincial level.[76] However, inter-provincial or international information transfers still engage PIPEDA.[76] PIPEDA has gone through two legislative overhaul efforts, in 2021 and 2023, with the involvement of the Office of the Privacy Commissioner and Canadian academics.[77] In the absence of a statutory private right of action without an OPC investigation, claims for an infringement or violation of privacy may be brought under the common law torts of intrusion upon seclusion and public disclosure of private facts, as well as under the Civil Code of Quebec.[78][79] Privacy is also protected under ss. 7 and 8 of the Canadian Charter of Rights and Freedoms,[80] which is typically applied in the criminal law context.[81] In Quebec, individuals' privacy is safeguarded by articles 3 and 35 to 41 of the Civil Code of Quebec[82] as well as by s. 5 of the Charter of human rights and freedoms.[83] In 2016, the European Union passed the General Data Protection Regulation (GDPR), which was intended to reduce the misuse of personal data and enhance individual privacy by requiring companies to receive consent before acquiring personal information from users.[84] Although there are comprehensive regulations for data protection in the European Union, one study finds that, despite the laws, there is a lack of enforcement in that no institution feels responsible to control the parties involved and enforce their laws.[85] The European Union also champions the Right to be Forgotten concept in support of its adoption by other countries.[86] The Aadhaar project, introduced in 2009, resulted in all 1.2 billion Indians being associated with a 12-digit biometric-secured number. Proponents argue that Aadhaar has uplifted the poor in India by providing them with a form of identity and preventing fraud and the waste of resources, since the government would otherwise often be unable to allocate resources to their intended recipients because of identification problems. With the rise of Aadhaar, India has debated whether Aadhaar violates an individual's privacy and whether any organization should have access to an individual's digital profile, as the Aadhaar card became associated with other economic sectors, allowing for the tracking of individuals by both public and private bodies.[87] Aadhaar databases have suffered from security attacks as well, and the project was also met with mistrust regarding the safety of the social protection infrastructures.[88] In 2017, when Aadhaar was challenged, the Indian Supreme Court declared privacy to be a fundamental right, but postponed the decision regarding the constitutionality of Aadhaar to another bench.[89] In September 2018, the Indian Supreme Court determined that the Aadhaar project did not violate the legal right to privacy.[90] In the United Kingdom, it is not possible to bring an action for invasion of privacy. An action may be brought under another tort (usually breach of confidence) and privacy must then be considered under EC law.
In the UK, it is sometimes a defence that disclosure of private information was in the public interest.[91]There is, however, theInformation Commissioner's Office(ICO), an independent public body set up to promote access to official information and protect personal information. They do this by promoting good practice, ruling on eligible complaints, giving information to individuals and organisations, and taking action when the law is broken. The relevant UK laws include:Data Protection Act 1998;Freedom of Information Act 2000;Environmental Information Regulations 2004;Privacy and Electronic Communications Regulations 2003. The ICO has also provided a "Personal Information Toolkit" online which explains in more detail the various ways of protecting privacy online.[92] In the United States, more systematic treatises of privacy did not appear until the 1890s, with the development ofprivacy law in America.[93]Although theUS Constitutiondoes not explicitly include theright to privacy, individual as well aslocational privacymay be implicitly granted by the Constitution under the4th Amendment.[94]TheSupreme Court of the United Stateshas found that other guarantees havepenumbrasthat implicitly grant a right to privacy against government intrusion, for example inGriswold v. ConnecticutandRoe v. Wade.Dobbs v. Jackson Women's Health Organizationlater overruledRoe v. Wade, with Supreme Court JusticeClarence ThomascharacterizingGriswold's penumbral argument as having a "facial absurdity",[95]casting doubt on the validity of a constitutional right to privacy in the United States and of previous decisions relying on it.[96]In the United States, the right offreedom of speechgranted in theFirst Amendmenthas limited the effects of lawsuits for breach of privacy. Privacy is regulated in the US by thePrivacy Act of 1974, and various state laws. The Privacy Act of 1974 only applies to federal agencies in the executive branch of the federal government.[97]Certain privacy rights have been established in the United States via legislation such as theChildren's Online Privacy Protection Act(COPPA),[98]theGramm–Leach–Bliley Act(GLB), and theHealth Insurance Portability and Accountability Act(HIPAA).[99] Unlike the EU and most EU-member states, the US does not recognize the right to privacy of non-US citizens. The UN's Special Rapporteur on the right to privacy, Joseph A. Cannataci, criticized this distinction.[100] The theory ofcontextual integrity,[101]developed byHelen Nissenbaum, defines privacy as an appropriate information flow, where appropriateness, in turn, is defined as conformance with legitimate, informational norms specific to social contexts. In 1890, the United StatesjuristsSamuel D. Warren and Louis Brandeis wrote "The Right to Privacy", an article in which they argued for the "right to be let alone", using that phrase as a definition of privacy.[102]This concept relies on the theory ofnatural rightsand focuses on protecting individuals. 
The article was a response to recent technological developments, such as photography and sensationalist journalism, also known as yellow journalism.[103] There is extensive commentary over the meaning of being "let alone", and among other ways, it has been interpreted to mean the right of a person to choose seclusion from the attention of others if they wish to do so, and the right to be immune from scrutiny or being observed in private settings, such as one's own home.[102] Although this early vague legal concept did not describe privacy in a way that made it easy to design broad legal protections of privacy, it strengthened the notion of privacy rights for individuals and began a legacy of discussion on those rights in the US.[102] Limited access refers to a person's ability to participate in society without having other individuals and organizations collect information about them.[104] Various theorists have imagined privacy as a system for limiting access to one's personal information.[104] Edwin Lawrence Godkin wrote in the late 19th century that "nothing is better worthy of legal protection than private life, or, in other words, the right of every man to keep his affairs to himself, and to decide for himself to what extent they shall be the subject of public observation and discussion."[104][105] Adopting an approach similar to the one presented by Ruth Gavison[106] nine years earlier,[107] Sissela Bok said that privacy is "the condition of being protected from unwanted access by others—either physical access, personal information, or attention."[104][108] Control over one's personal information is the concept that "privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." Generally, a person who has consensually formed an interpersonal relationship with another person is not considered "protected" by privacy rights with respect to the person they are in the relationship with.[109][110] Charles Fried said that "Privacy is not simply an absence of information about us in the minds of others; rather it is the control we have over information about ourselves." Nevertheless, in the era of big data, control over information is under pressure.[111][112] Alan Westin defined four states, or experiences, of privacy: solitude, intimacy, anonymity, and reserve. Solitude is a physical separation from others.[113] Intimacy is a "close, relaxed, and frank relationship between two or more individuals" that results from the seclusion of a pair or small group of individuals.[113] Anonymity is the "desire of individuals for times of 'public privacy.'"[113] Lastly, reserve is the "creation of a psychological barrier against unwanted intrusion"; this creation of a psychological barrier requires others to respect an individual's need or desire to restrict communication of information concerning themself.[113] In addition to the psychological barrier of reserve, Kirsty Hughes identified three more kinds of privacy barriers: physical, behavioral, and normative.
Physical barriers, such as walls and doors, prevent others from accessing and experiencing the individual.[114](In this sense, "accessing" an individual includes accessing personal information about them.)[114]Behavioral barriers communicate to others—verbally, through language, or non-verbally, through personal space, body language, or clothing—that an individual does not want the other person to access or experience them.[114]Lastly, normative barriers, such as laws and social norms, restrain others from attempting to access or experience an individual.[114] PsychologistCarl A. Johnson has identified the psychological concept of “personal control” as closely tied to privacy. His concept was developed as a process containing four stages and two behavioural outcome relationships, with one’s outcomes depending on situational as well as personal factors.[115]Privacy is described as “behaviors falling at specific locations on these two dimensions”.[116] Johnson examined the following four stages to categorize where people exercise personal control: outcome choice control is the selection between various outcomes. Behaviour selection control is the selection between behavioural strategies to apply to attain selected outcomes. Outcome effectance describes the fulfillment of selected behaviour to achieve chosen outcomes. Outcome realization control is the personal interpretation of one’s achieved outcome. The relationship between two factors– primary and secondary control, is defined as the two-dimensional phenomenon where one reaches personal control: primary control describes behaviour directly causing outcomes, while secondary control is behaviour indirectly causing outcomes.[117]Johnson explores the concept that privacy is a behaviour that has secondary control over outcomes. Lorenzo Magnaniexpands on this concept by highlighting how privacy is essential in maintaining personal control over one's identity and consciousness.[118]He argues that consciousness is partly formed by external representations of ourselves, such as narratives and data, which are stored outside the body. However, much of our consciousness consists of internal representations that remain private and are rarely externalized. This internal privacy, which Magnani refers to as a form of "information property" or "moral capital," is crucial for preserving free choice and personal agency. According to Magnani,[119]when too much of our identity and data is externalized and subjected to scrutiny, it can lead to a loss of personal control, dignity, and responsibility. The protection of privacy, therefore, safeguards our ability to develop and pursue personal projects in our own way, free from intrusive external forces. Acknowledging other conceptions of privacy while arguing that the fundamental concern of privacy is behavior selection control, Johnson converses with other interpretations including those of Maxine Wolfe and Robert S. Laufer, and Irwin Altman. He clarifies the continuous relationship between privacy and personal control, where outlined behaviours not only depend on privacy, but the conception of one’s privacy also depends on his defined behavioural outcome relationships.[120] Privacy is sometimes defined as an option to have secrecy. 
Richard Posner said that privacy is the right of people to "conceal information about themselves that others might use to their disadvantage".[121][122] In various legal contexts, when privacy is described as secrecy, a conclusion is reached: if privacy is secrecy, then rights to privacy do not apply for any information which is already publicly disclosed.[123]When privacy-as-secrecy is discussed, it is usually imagined to be a selective kind of secrecy in which individuals keep some information secret and private while they choose to make other information public and not private.[123] Privacy may be understood as a necessary precondition for the development and preservation of personhood. Jeffrey Reiman defined privacy in terms of a recognition of one's ownership of their physical and mental reality and a moral right toself-determination.[124]Through the "social ritual" of privacy, or the social practice of respecting an individual's privacy barriers, the social group communicates to developing children that they have exclusive moral rights to their bodies—in other words, moral ownership of their body.[124]This entails control over both active (physical) and cognitive appropriation, the former being control over one's movements and actions and the latter being control over who can experience one's physical existence and when.[124] Alternatively, Stanley Benn defined privacy in terms of a recognition of oneself as a subject with agency—as an individual with the capacity to choose.[125]Privacy is required to exercise choice.[125]Overt observation makes the individual aware of himself or herself as an object with a "determinate character" and "limited probabilities."[125]Covert observation, on the other hand, changes the conditions in which the individual is exercising choice without his or her knowledge and consent.[125] In addition, privacy may be viewed as a state that enables autonomy, a concept closely connected to that of personhood. According to Joseph Kufer, an autonomous self-concept entails a conception of oneself as a "purposeful, self-determining, responsible agent" and an awareness of one's capacity to control the boundary between self and other—that is, to control who can access and experience him or her and to what extent.[126]Furthermore, others must acknowledge and respect the self's boundaries—in other words, they must respect the individual's privacy.[126] The studies of psychologists such as Jean Piaget and Victor Tausk show that, as children learn that they can control who can access and experience them and to what extent, they develop an autonomous self-concept.[126]In addition, studies of adults in particular institutions, such as Erving Goffman's study of "total institutions" such as prisons and mental institutions,[127]suggest that systemic and routinized deprivations or violations of privacy deteriorate one's sense of autonomy over time.[126] Privacy may be understood as a prerequisite for the development of a sense of self-identity. Privacy barriers, in particular, are instrumental in this process. 
According to Irwin Altman, such barriers "define and limit the boundaries of the self" and thus "serve to help define [the self]."[128]This control primarily entails the ability to regulate contact with others.[128]Control over the "permeability" of the self's boundaries enables one to control what constitutes the self and thus to define what is the self.[128] In addition, privacy may be seen as a state that fosters personal growth, a process integral to the development of self-identity. Hyman Gross suggested that, without privacy—solitude, anonymity, and temporary releases from social roles—individuals would be unable to freely express themselves and to engage in self-discovery andself-criticism.[126]Such self-discovery and self-criticism contributes to one's understanding of oneself and shapes one's sense of identity.[126] In a way analogous to how the personhood theory imagines privacy as some essential part of being an individual, the intimacy theory imagines privacy to be an essential part of the way that humans have strengthened orintimate relationshipswith other humans.[129]Because part ofhuman relationshipsincludes individuals volunteering to self-disclose most if not all personal information, this is one area in which privacy does not apply.[129] James Rachelsadvanced this notion by writing that privacy matters because "there is a close connection between our ability to control who has access to us and to information about us, and our ability to create and maintain different sorts of social relationships with different people."[129][130]Protecting intimacy is at the core of the concept of sexual privacy, which law professorDanielle Citronargues should be protected as a unique form of privacy.[131] Physical privacy could be defined as preventing "intrusions into one's physical space or solitude."[132]An example of the legal basis for the right to physical privacy is the U.S.Fourth Amendment, which guarantees "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures".[133] Physical privacy may be a matter of cultural sensitivity, personal dignity, and/or shyness. There may also be concerns about safety, if, for example one is wary of becoming the victim of crime orstalking.[134]There are different things that can be prevented to protect one's physical privacy, including people watching (even through recorded images) one'sintimate behavioursorintimate partsand unauthorized access to one's personal possessions or places. Examples of possible efforts used to avoid the former, especially formodestyreasons, areclothes,walls,fences, privacy screens,cathedral glass,window coverings, etc. Government agencies, corporations, groups/societies and other organizations may desire to keep their activities or secrets from being revealed to other organizations or individuals, adopting varioussecuritypractices and controls in order to keep private information confidential. Organizations may seek legal protection for their secrets. For example, a government administration may be able to invokeexecutive privilege[135]or declare certain information to beclassified, or a corporation might attempt to protect valuable proprietary information astrade secrets.[133] Privacy self-synchronization is a hypothesized mode by which the stakeholders of an enterprise privacy program spontaneously contribute collaboratively to the program's maximum success. 
The stakeholders may be customers, employees, managers, executives, suppliers, partners or investors. When self-synchronization is reached, the model states that the personal interests of individuals toward their privacy is in balance with the business interests of enterprises who collect and use the personal information of those individuals.[136] David Flahertybelieves networked computer databases pose threats to privacy. He develops 'data protection' as an aspect of privacy, which involves "the collection, use, and dissemination of personal information". This concept forms the foundation for fair information practices used by governments globally. Flaherty forwards an idea of privacy as information control, "[i]ndividuals want to be left alone and to exercise some control over how information about them is used".[137] Richard Posnerand Lawrence Lessig focus on the economic aspects of personal information control. Posner criticizes privacy for concealing information, which reduces market efficiency. For Posner, employment is selling oneself in the labour market, which he believes is like selling a product. Any 'defect' in the 'product' that is not reported is fraud.[138]For Lessig, privacy breaches online can be regulated through code and law. Lessig claims "the protection of privacy would be stronger if people conceived of the right as a property right",[139]and that "individuals should be able to control information about themselves".[140] There have been attempts to establish privacy as one of the fundamentalhuman rights, whose social value is an essential component in the functioning of democratic societies.[141] Priscilla Regan believes that individual concepts of privacy have failed philosophically and in policy. She supports a social value of privacy with three dimensions: shared perceptions, public values, andcollectivecomponents. Shared ideas about privacy allows freedom of conscience and diversity in thought. Public values guarantee democratic participation, including freedoms of speech and association, and limits government power. Collective elements describe privacy as collective good that cannot be divided. Regan's goal is to strengthen privacy claims in policy making: "if we did recognize the collective or public-good value of privacy, as well as the common and public value of privacy, those advocating privacy protections would have a stronger basis upon which to argue for its protection".[142] Leslie Regan Shade argues that the human right to privacy is necessary for meaningful democratic participation, and ensures human dignity and autonomy. Privacy depends on norms for how information is distributed, and if this is appropriate. Violations of privacy depend on context. The human right to privacy has precedent in theUnited Nations Declaration of Human Rights: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."[143]Shade believes that privacy must be approached from a people-centered perspective, and not through the marketplace.[144] Dr. Eliza Watt, Westminster Law School, University of Westminster in London, UK, proposes application of the International Human Right Law (IHRL) concept of “virtual control” as an approach to deal with extraterritorial mass surveillance by state intelligence agencies. Dr. 
Watt envisions the "virtual control" test, understood as remote control over the individual's right to privacy of communications, where privacy is recognized under Article 17 of the ICCPR. This, she contends, may help to close the normative gap that is being exploited by nation states.[145] The privacy paradox is a phenomenon in which online users state that they are concerned about their privacy but behave as if they were not.[146] While this term was coined as early as 1998,[147] it was not used in its current popular sense until the year 2000.[148][146] Susan B. Barnes similarly used the term privacy paradox to refer to the ambiguous boundary between private and public space on social media.[149] When compared to adults, young people tend to disclose more information on social media. However, this does not mean that they are not concerned about their privacy. Susan B. Barnes gave a case in her article: in a television interview about Facebook, a student addressed her concerns about disclosing personal information online. However, when the reporter asked to see her Facebook page, it turned out that she had put her home address, phone numbers, and pictures of her young son on the page. The privacy paradox has been studied and documented in different research settings. Several studies have shown this inconsistency between privacy attitudes and behavior among online users.[150] However, by now an increasing number of studies have also shown that there are significant and at times large correlations between privacy concerns and information sharing behavior,[151] which speaks against the privacy paradox. A meta-analysis of 166 studies published on the topic reported an overall small but significant relation between privacy concerns and information sharing or the use of privacy protection measures.[152] So although there are several individual instances or anecdotes where behavior appears paradoxical, on average privacy concerns and privacy behaviors seem to be related, and several findings question the general existence of the privacy paradox.[153] However, the relationship between concerns and behavior is likely only small, and there are several arguments that can explain why that is the case. According to the attitude-behavior gap, attitudes and behaviors are in general and in most cases not closely related.[154] A main explanation for the partial mismatch in the context of privacy specifically is that users lack awareness of the risks and the degree of protection.[155] Users may underestimate the harm of disclosing information online.[28] On the other hand, some researchers argue that the mismatch comes from a lack of technology literacy and from the design of sites.[156] For example, users may not know how to change their default settings even though they care about their privacy. Psychologists Sonja Utz and Nicole C. Krämer particularly pointed out that the privacy paradox can occur when users must trade off their privacy concerns against impression management.[157] A study conducted by Susanne Barth and Menno D.T. de Jong demonstrates that decision making takes place on an irrational level, especially when it comes to mobile computing. Mobile applications in particular are often designed in ways that spur fast, automatic decision making without assessment of risk factors. Protection measures against these unconscious mechanisms are often difficult to access while downloading and installing apps.
Even with mechanisms in place to protect user privacy, users may not have the knowledge or experience to enable these mechanisms.[158] Users of mobile applications generally have very little knowledge of how their personal data are used. When they decide which application to download, they typically are not able to effectively interpret the information provided by application vendors regarding the collection and use of personal data.[159]Other research finds that this lack of interpretability means users are much more likely to be swayed by cost, functionality, design, ratings, reviews and number of downloads than requested permissions for usage of their personal data.[160] The willingness to incur a privacy risk is suspected to be driven by a complex array of factors including risk attitudes, personal value for private information, and general attitudes to privacy (which are typically measured using surveys).[161]One experiment aiming to determine the monetary value of several types of personal information indicated relatively low evaluations of personal information.[159]Despite claims that ascertaining the value of data requires a "stock-market for personal information",[162]surveillance capitalismand themass surveillance industryregularly place price tags on this form of data as it is shared between corporations and governments. Users are not always given the tools to live up to their professed privacy concerns, and they are sometimes willing to trade private information for convenience, functionality, or financial gain, even when the gains are very small.[163]One study suggests that people think their browser history is worth the equivalent of a cheap meal.[164]Another finds that attitudes to privacy risk do not appear to depend on whether it is already under threat or not.[161]The methodology ofuser empowermentdescribes how to provide users with sufficient context to make privacy-informed decisions. It is suggested byAndréa Belligerand David J. Krieger that the privacy paradox should not be considered a paradox, but more of aprivacy dilemma, for services that cannot exist without the user sharing private data.[164]However, the general public is typically not given the choice whether to share private data or not,[19][56]making it difficult to verify any claim that a service truly cannot exist without sharing private data. The privacy calculus model posits that two factors determine privacy behavior, namely privacy concerns (or perceived risks) and expected benefits.[165][166]By now, the privacy calculus has been supported by several studies.[167][168] As with otherconceptions of privacy, there are various ways to discuss what kinds of processes or actions remove, challenge, lessen, or attack privacy. In 1960 legal scholarWilliam Prossercreated the following list of activities which can be remedied with privacy protection:[169][170] From 2004 to 2008, building from this and other historical precedents, Daniel J. 
Solove presented another classification of actions which are harmful to privacy, including collection of information which is already somewhat public, processing of information, sharing information, and invading personal space to get private information.[171] In the context of harming privacy, information collection means actively gathering whatever information about a person can be obtained.[171] Examples include surveillance and interrogation.[171] Another example is the collection of consumer information by marketers and businesses through facial recognition, a practice which has recently raised privacy concerns and is the subject of ongoing research.[172] Companies like Google and Meta collect vast amounts of personal data from their users through various services and platforms. This data includes browsing habits, search history, location information, and even personal communications. These companies then analyze and aggregate this data to create detailed user profiles, which are sold to advertisers and other third parties. This practice is often done without explicit user consent, leading to an invasion of privacy as individuals have little control over how their information is used. The sale of personal data can result in targeted advertising, manipulation, and even potential security risks, as sensitive information can be exploited by malicious actors. This commercial exploitation of personal data undermines user trust and raises significant ethical and legal concerns regarding data protection and privacy rights.[173] It can happen that privacy is not harmed when information is available, but that the harm can come when that information is collected as a set, then processed together in such a way that the collective reporting of pieces of information encroaches on privacy.[174] Several kinds of information-processing actions in this category can lessen privacy.[174]
Information dissemination is an attack on privacy when information which was shared in confidence is shared or threatened to be shared in a way that harms the subject of the information.[174] There are various examples of this.[174] Breach of confidentiality is when one entity promises to keep a person's information private, then breaks that promise.[174] Disclosure is making information about a person more accessible in a way that harms the subject of the information, regardless of how the information was collected or the intent of making it available.[174] Exposure is a special type of disclosure in which the information disclosed is emotional to the subject or taboo to share, such as revealing their private life experiences, their nudity, or perhaps private body functions.[174] Increased accessibility means advertising the availability of information without actually distributing it, as in the case of doxing.[174] Blackmail is making a threat to share information, perhaps as part of an effort to coerce someone.[174] Appropriation is an attack on the personhood of someone, and can include using the value of someone's reputation or likeness to advance interests which are not those of the person being appropriated.[174] Distortion is the creation of misleading information or lies about a person.[174] Invasion of privacy, a subset of expectation of privacy, is a different concept from the collection, aggregation, and dissemination of information, because those three are a misuse of available data, whereas invasion is an attack on the right of individuals to keep personal secrets.[174] An invasion is an attack in which information, whether intended to be public or not, is captured in a way that insults the personal dignity and right to private space of the person whose data is taken.[174] An intrusion is any unwanted entry into a person's private personal space and solitude for any reason, regardless of whether data is taken during that breach of space.[174] Decisional interference is when an entity somehow injects itself into the personal decision-making process of another person, perhaps to influence that person's private decisions, but in any case doing so in a way that disrupts that person's private thoughts.[174] Just as there are many actions which reduce privacy, there are multiple angles of privacy and multiple techniques to improve it to varying extents. When actions are taken at an organizational level, they may be referred to as cybersecurity. Individuals can encrypt e-mails by enabling one of two encryption protocols: S/MIME, which is built into mail clients such as Apple Mail and Outlook and is thus most common, or PGP.[175] The Signal messaging app, which encrypts messages so that only the recipient can read them, is notable for being available on many mobile devices and for implementing a form of perfect forward secrecy.[176] Signal has received praise from whistleblower Edward Snowden.[177] Encryption and other privacy-based security measures are also used in some cryptocurrencies such as Monero and ZCash.[178][179] Anonymizing proxies or anonymizing networks like I2P and Tor can be used to prevent Internet service providers (ISPs) from knowing which sites one visits and with whom one communicates, by hiding IP addresses and location, but they do not necessarily protect a user from third-party data mining.
Anonymizing proxies are built into a user's device, whereas for a Virtual Private Network (VPN) users must download software.[180] Using a VPN hides all data and connections exchanged between servers and a user's computer, keeping the user's online data unshared and secure and providing a barrier between the user and their ISP; it is especially important when a user is connected to public Wi-Fi. However, users should understand that all their data then flows through the VPN's servers rather than the ISP's. Users should decide for themselves whether they wish to use an anonymizing proxy or a VPN. In a more non-technical sense, using incognito mode or private browsing mode will prevent a user's computer from saving history, Internet files, and cookies, but the ISP will still have access to the user's search history. Anonymous search engines do not share a user's history or clicks and can obstruct ad trackers.[181] Concrete solutions on how to solve paradoxical behavior still do not exist. Many efforts are focused on processes of decision making, like restricting data access permissions during application installation, but this would not completely bridge the gap between user intention and behavior. Susanne Barth and Menno D.T. de Jong believe that for users to make more conscious decisions on privacy matters, the design needs to be more user-oriented.[158] That being said, delivering on privacy protections is difficult, due for example to the complexity of online consent processes.[182] In a social sense, simply limiting the amount of personal information that users post on social media could increase their security, which in turn makes it harder for criminals to perform identity theft.[181] Moreover, creating a set of complex passwords and using two-factor authentication can make users less susceptible to having their accounts compromised when data leaks occur. Furthermore, users should protect their digital privacy by using anti-virus software, which can block harmful programs such as pop-ups that scan for personal information on a user's computer.[183] Although there are laws that promote the protection of users, in some countries, like the U.S., there is no federal digital privacy law, and privacy settings are essentially limited by the state of currently enacted privacy laws. To further their privacy, users can converse with their representatives, letting them know that privacy is a main concern, which in turn increases the likelihood of further privacy laws being enacted.[184] David Attenborough, a biologist and natural historian, affirmed that gorillas "value their privacy" while discussing a brief escape by a gorilla in London Zoo.[185] Lack of privacy in public spaces, caused by overcrowding, increases health issues in animals, including heart disease and high blood pressure. The stress from overcrowding is also connected to an increase in infant mortality rates and maternal stress. The lack of privacy that comes with overcrowding is connected to other issues in animals, causing their relationships with others to diminish. How they present themselves to others of their species is a necessity in their lives, and overcrowding causes these relationships to become disordered.[186] For example, David Attenborough claims that the gorilla's right to privacy is being violated when they are looked at through glass enclosures.
They are aware that they are being looked at and therefore have no control over how much the onlookers can see of them. Gorillas and other animals may be kept in enclosures for safety reasons; however, Attenborough states that this is not an excuse for them to be constantly watched by unnecessary eyes. Animals will also start hiding in unobserved spaces.[186] Animals in zoos have been found to exhibit harmful or altered behaviours due to the presence of visitors watching them.[187]
https://en.wikipedia.org/wiki/Privacy
In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space. A special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information and as the criterion of good design;[1] the performance of the closed-loop system is monitored through this diagram. CDM offers several considerable advantages over other design techniques.[2] It is usually required that the controller for a given plant be designed under some practical limitations: the controller should be of minimum degree, minimum phase (if possible) and stable, and it must have sufficient bandwidth and satisfy power rating limitations. If the controller is designed without considering these limitations, the robustness will be very poor, even though the stability and time response requirements are met. A CDM controller designed with all these constraints in mind is of the lowest degree, has a convenient bandwidth and yields a unit-step time response without overshoot. These properties guarantee robustness, sufficient damping of disturbance effects, and an economical design.[7] Although the main principles of CDM have been known since the 1950s,[8][9][10] the first systematic method was proposed by Shunji Manabe.[11] He developed a new method that easily builds a target characteristic polynomial to meet the desired time response. CDM is an algebraic approach combining classical and modern control theories and uses a polynomial representation in its mathematical expression. The advantages of classical and modern control techniques are integrated with the basic principles of this method, which is derived by making use of previous experience and knowledge of controller design. Thus, an efficient and fertile control method has appeared as a tool with which control systems can be designed without needing much experience and without confronting many problems. Many control systems have been designed successfully using CDM.[12][13] It is very easy to design a controller under the conditions of stability, time-domain performance and robustness. The close relations between these conditions and the coefficients of the characteristic polynomial can be simply determined, which means that CDM is effective not only for control system design but also for controller parameter tuning.
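On the coefficient diagram, the closed-loop characteristic polynomial P(s) = a_n s^n + ... + a_1 s + a_0 is commonly summarized by two kinds of quantities: the stability indices γ_i = a_i^2 / (a_{i+1} a_{i-1}) and the equivalent time constant τ = a_1 / a_0, from which a target characteristic polynomial can be constructed (Manabe's standard form takes γ_1 = 2.5 and γ_i = 2 for i ≥ 2). The short Python sketch below only illustrates these textbook relations; the function names are invented for this example and it is not code from the cited works.

```python
# A minimal sketch of the quantities shown on a coefficient diagram:
# stability indices, equivalent time constant, and a target characteristic
# polynomial built from Manabe's standard form. Illustrative only.

def stability_indices(a):
    """a = [a_0, a_1, ..., a_n]; returns gamma_1 ... gamma_{n-1},
    where gamma_i = a_i^2 / (a_{i+1} * a_{i-1})."""
    return [a[i] ** 2 / (a[i + 1] * a[i - 1]) for i in range(1, len(a) - 1)]

def equivalent_time_constant(a):
    """tau = a_1 / a_0, a rough measure of closed-loop response speed."""
    return a[1] / a[0]

def target_polynomial(tau, gammas, a0=1.0):
    """Build target coefficients [a_0, ..., a_n] from a chosen tau and
    stability indices, using a_{i+1} = a_i^2 / (gamma_i * a_{i-1})."""
    a = [a0, tau * a0]
    for gamma in gammas:
        a.append(a[-1] ** 2 / (gamma * a[-2]))
    return a

if __name__ == "__main__":
    # 4th-order target with tau = 1 s and Manabe's standard stability indices.
    gammas = [2.5, 2.0, 2.0]
    target = target_polynomial(tau=1.0, gammas=gammas)
    print("target coefficients a_0..a_4:", target)
    print("recovered stability indices:", stability_indices(target))
    print("equivalent time constant:", equivalent_time_constant(target))
```

For a fourth-order target with τ = 1 s, the sketch returns the coefficients 1, 1, 0.4, 0.08, 0.008 and recovers the chosen stability indices, which is exactly the consistency a designer would check visually on the coefficient diagram.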
https://en.wikipedia.org/wiki/Coefficient_diagram_method
Product finders are information systems that help consumers to identify products within a large palette of similar alternative products. Product finders differ in complexity, the more complex among them being a special case of decision support systems. Conventional decision support systems, however, aim at specialized user groups, e.g. marketing managers, whereas product finders focus on consumers. Usually, product finders are part of an e-shop or an online presentation of a product line. Being part of an e-shop, a product finder ideally leads to an online purchase, while conventional distribution channels (e.g. shops, order by phone) are involved when product finders are part of an online presentation. Product finders are best suited for product groups whose individual products are comparable by specific criteria. This is true, in most cases, of technical products such as notebooks: their features (e.g. clock rate, size of hard disk, price, screen size) may influence the consumer's decision. Besides technical products such as notebooks, cars, dishwashers, cell phones or GPS devices, non-technical products such as wine, socks, toothbrushes or nails may be supported by product finders as well, as comparison by features takes place. On the other hand, the application of product finders is limited when it comes to individualized products such as books, jewelry or compact discs, as consumers do not select such products along specific, comparable features. Furthermore, product finders are used not only for products sensu stricto, but for services as well, e.g. account types of a bank, health insurance, or communication providers. In these cases, the term service finder is sometimes used. Product finders are used by manufacturers, by dealers (comprising several manufacturers), and by web portals (comprising several dealers). There is a move to integrate product finders with social networking and group buying, allowing users to add and rate products and locations and to purchase recommended products with others. Technical implementations differ in their benefit for the consumers. The following list displays the main approaches, from simple ones to more complex ones, each with a typical example: Product finders play an important role in e-commerce: items have to be categorized so that consumers are better served in searching for the desired product, and recommender systems can suggest items based on their purchases. As people move from offline to online commerce (e-commerce), it becomes more difficult and cumbersome to deal with the large amount of data about items and people that needs to be kept and analyzed in order to better serve consumers. Such amounts of data cannot be handled by manual effort alone; machines can process them far more efficiently and effectively. Online commerce has gained a lot of popularity over the past decade. Large online consumer-to-consumer marketplaces such as eBay, Amazon, and Alibaba feature millions of items, with more entered into the marketplace every day. Item categorization helps in classifying products and giving them tags and labels, which helps consumers find them. Traditionally, a bag-of-words model approach is used to solve the problem, with either no hierarchy at all or a human-defined hierarchy. A newer method[4] uses a hierarchical approach which decomposes the classification problem into a coarse-level task and a fine-level task, with the hierarchy discovered using a latent class model.
A simple classifier is applied to perform the coarse-level classification (the data set is too large for a more sophisticated approach to be practical at this level), while a more sophisticated model is used to separate classes at the fine level. Recommendation systems are then used to recommend items or products to consumers based on their purchasing or search history.
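The coarse-to-fine idea can be sketched with off-the-shelf text classifiers. The example below is only an illustration of the general two-level scheme, assuming a plain bag-of-words logistic-regression pipeline from scikit-learn rather than the latent-class hierarchy discovery described above; the product titles and categories are invented.

```python
# Illustrative two-level (coarse -> fine) item categorization sketch.
# Requires scikit-learn; the categories and titles below are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (title, coarse category, fine category)
training_data = [
    ("17 inch gaming laptop 32gb ram", "electronics", "notebooks"),
    ("noise cancelling bluetooth headphones", "electronics", "audio"),
    ("stainless steel chef knife 20cm", "kitchen", "cutlery"),
    ("non-stick frying pan 28cm", "kitchen", "cookware"),
]

titles = [t for t, _, _ in training_data]
coarse_labels = [c for _, c, _ in training_data]

# Simple (fast) classifier for the coarse level.
coarse_clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
coarse_clf.fit(titles, coarse_labels)

# One classifier per coarse class for the fine level (could be a richer model in practice).
fine_clfs = {}
for coarse in set(coarse_labels):
    subset = [(t, f) for t, c, f in training_data if c == coarse]
    texts, fine_labels = zip(*subset)
    clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(list(texts), list(fine_labels))
    fine_clfs[coarse] = clf

def categorize(title):
    coarse = coarse_clf.predict([title])[0]          # coarse routing
    fine = fine_clfs[coarse].predict([title])[0]     # fine-level decision
    return coarse, fine

print(categorize("ultralight 13 inch notebook 16gb"))
```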
https://en.wikipedia.org/wiki/Product_finder
Box braidsare a type of hair-braiding style that is predominantly popular among African people and theAfrican diaspora. This type of hairstyle is a "protective style" (a style which can be worn for a long period of time to let natural hair grow and protect the ends of the hair) and is "boxy", consisting of square-shaped hair divisions. Box braids are generally installed by using synthetic hair which helps to add thickness as well as helping the natural hair that is in the braid. Because they are not attached to the scalp like other similar styles such ascornrows, box braids can be styled in a number of different ways. The installation process of box braids can be lengthy, but once installed they can last for six to eight weeks. They are known for being easy to maintain.[2][3] Hair-braiding styles were used to help differentiate tribes, locations, and also possibly a symbol of wealth and power due to the amount of effort that went into styling braids.[4]Box braids were not given a specific name until the 1990s when popularized by R&B musicianJanet Jackson, but have been used for years. This style of braiding comes from the Eembuvi braids ofNamibiaor the chin-length bob braids of the women of theNile Valleyfrom over 3,000 years ago.[4]In the Mbalantu tribe of Namibia, braiding was an important social practice. Older women would gather with their girls and teach them how to braid.[5]Box braids are also commonly worn by theKhoisanpeople of South Africa[6]and theAfar peoplein the horn of Africa.[7][8]InAfrica, braid styles and patterns have been used to distinguish tribal membership, marital status, age, wealth, religion and social ranking.[9]In some countries ofAfrica, the braids were used for communication.[10]In some Caribbean islands, braid patterns were used to map routes to escape slavery.[11][12]Layers of finely chopped tree bark and oils can be used to support the hairstyle. Human hair was at one point wefted into fiber wig caps made of durable materials like wool and felt for reuse in traditional clothing as well as different rituals.[4]Cowry shells, jewels, beads and other material items adorned box braids of older women alluding to their readiness to have daughters, emulation of wealth, high priesthood and any other classifications.[4] Hair was and is a very important and symbolic part of different African communities. Africans believed that hair could help with divine communication as it was the elevated part of one's body. Hair styling was entrusted only to close relatives, as it was explained that if a strand fell into the hands of an enemy, harm could come to the hair's owner.[13]Members of royalty would often wear elaborate hairstyles as a symbol of their stature, and those in mourning, usually women, would pay some attention to their hair during the period of grieving. Hair was seen as a symbol of fertility, as thick, long tresses and neat, clean hair symbolised ability to bear healthy daughters.[13]Elaborate patterns were done for special occasions like weddings, social ceremonies or war preparations. People belonging to a tribe could easily be identified by another tribe member with the help of a braid pattern or style.[14] The U.S. Army has strong regulations and restrictions on hairstyles for both men and women. In 2014, the army updated its policies because the old regulations were too restrictive for African-American women. Army policy originally considered African American women's natural hair "not neat" and deemedprotective hairstyles"unprofessional". 
In the newer regulations, "twists, cornrows and braids can be up to1⁄2inch [13 mm] in diameter. The previous maximum was a diameter of approximately1⁄4inch [6 mm]".[15]This gives more opportunity to wear protective styles. Box braids can be worn by members of the US Army as long as they show no more than3⁄8of the scalp. The parting must be square or rectangular shape. The ends of the braids must be secured. Once the newly grown natural hair outside of the braid, also known as new growth, reaches1⁄2inch [13 mm], the style must be redone. Similar regulations apply for styles like dreadlocks, flat twists, and braids with natural hair. The hairstyles must not interfere with the wear of uniform or covers (uniform hats).[16]Though synthetic hair for box braids exists in multiple colors, the military dictates that enlisted women must have box braids in natural hair colors without any additional jewelry like hairclips or beads. Medium box braids are a popular hairstyle within the African and African American communities. They involve parting the hair into individual square-shaped sections, and then each section is braided from the scalp to the ends. These braids are termed 'medium' due to their thickness, which is typically about the width of a pencil to that of a felt tip marker.[17] The medium size of these braids strikes a balance between the delicate appearance of smaller braids and the more pronounced look of jumbo braids. They are versatile in length, often extending just beyond the wearer's natural hair length, and can be styled in various ways including buns, ponytails, and more. As a protective hairstyle, medium box braids can safeguard the hair from environmental factors and styling stress. They require routine maintenance, including scalp hydration and proper cleansing, to maintain the health of the hair and scalp. These braids can be kept in for several weeks before they need to be redone. Tight or heavy hairstyles, such as long box braids, can also cause anexternal-traction headache, previously called aponytail headache.[18]Overly tight braids may causetraction alopecia.[19]Looser braids have a lower risk than tight braids or other styles, such ascornrowsanddreadlocks.[20]
https://en.wikipedia.org/wiki/Box_braids
Brevity codes are used in amateur radio, maritime, aviation, police, and military communications. They are designed to convey complex information with a few words or codes. Some are classified and withheld from the public.
https://en.wikipedia.org/wiki/Brevity_code
Amicrocellis a cell in amobile phone networkserved by a low powercellularbase station(tower), covering a limited area such as a mall, a hotel, or a transportation hub. A microcell is usually larger than apicocell, though the distinction is not always clear. A microcell uses power control to limit the radius of its coverage area. Typically the range of a microcell is less than two kilometers wide, whereasstandard base stationsmay have ranges of up to 35 kilometres (22 mi). Apicocell, on the other hand, is 200 meters or less, and afemtocellis on the order of 10 meters,[1]although AT&T calls its femtocell that has a range of 40 feet (12 m), a "microcell".[2]AT&T uses "AT&T 3G MicroCell" as a trademark and not necessarily the "microcell" technology, however.[3] Amicrocellular networkis a radio network composed of microcells. Like picocells, microcells are usually used to addnetwork capacityin areas with very dense phone usage, such as train stations. Microcells are often deployed temporarily during sporting events and other occasions in which extra capacity is known to be needed at a specific location in advance. Cell size flexibility is a feature of2G(and later) networks and is a significant part of how such networks have been able to improve capacity. Power controls implemented on digital networks make it easier to prevent interference from nearby cells using the same frequencies.[4]By subdividing cells, and creating more cells to help serve high density areas, a cellular network operator can optimize the use of spectrum and ensure capacity can grow. By comparison, older analog systems have fixed limits, beyond which attempts to subdivide cells simply would result in an unacceptable level of interference. Certain mobile phone systems, notablyPHSandDECT, only provide microcellular (and Pico cellular) coverage. Microcellular systems are typically used to provide low cost mobile phone systems in high-density environments such as large cities. PHS is deployed throughout major cities in Japan as an alternative to ordinary cellular service. DECT is used by many businesses to deploy private license-free microcellular networks within large campuses where wireline phone service is less useful. DECT is also used as a private, non-networked, cordless phone system where its low power profile ensures that nearby DECT systems do not interfere with each other. A forerunner of these types of network was theCT2cordless phone system, which provided access to a looser network (without handover), again with base stations deployed in areas where large numbers of people might need to make calls. CT2's limitations ensured the concept never took off. CT2's successor, DECT, was provided with an interworking profile,GIPso that GSM networks could make use of it for microcellular access, but in practice the success of GSM within Europe, and the ability of GSM to support microcells without using alternative technologies, meant GIP was rarely used, and DECT's use in general was limited to non-GSM private networks, including use as cordless phone systems.
https://en.wikipedia.org/wiki/Microcell
ERIL(Entity-Relationship and Inheritance Language) is avisual languagefor representing the data structure of a computer system. As its name suggests, ERIL is based onentity-relationshipdiagrams andclass diagrams. ERIL combines therelationalandobject-orientedapproaches todata modeling. ERIL can be seen as a set of guidelines aimed at improving the readability of structure diagrams. These guidelines were borrowed fromDRAKON, a variant offlowchartscreated within the Russian space program. ERIL itself was developed by Stepan Mitkin. The ERIL guidelines for drawing diagrams: A class (table) in ERIL can have several indexes. Each index in ERIL can include one or more fields, similar to indexes inrelational databases. ERIL indexes are logical. They can optionally be implemented by real data structures. Links between classes (tables) in ERIL are implemented by the so-called "link" fields. Link fields can be of different types according to the link type: Example: there is a one-to-many link betweenDocumentsandLines. OneDocumentcan have manyLines. Then theDocument.Linesfield is a collection of references to the lines that belong to the document.Line.Documentis a reference to the document that contains the line. Link fields are also logical. They may or may not be implemented physically in the system. ERIL is supposed to model any kind of data regardless of the storage. The same ERIL diagram can represent data stored in arelational database, in aNoSQLdatabase,XMLfile or in the memory. ERIL diagrams serve two purposes. The primary purpose is to explain the data structure of an existing or future system or component. The secondary purpose is to automatically generate source code from the model. Code that can be generated includes specialized collection classes, hash and comparison functions, data retrieval and modification procedures,SQL data-definitioncode, etc. Code generated from ERIL diagrams can ensure referential and uniquenessdata integrity. Serialization code of different kinds can also be automatically generated. In some ways ERIL can be compared toobject-relational mappingframeworks.
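The Document/Lines example above can be made concrete with a small sketch of what generated link-field code might look like. The class and field names follow the example in the text; everything else (the helper method, in-memory storage) is an illustrative assumption, not part of the ERIL specification.

```python
# Sketch of "link fields" for a one-to-many link between Documents and Lines.
# Document.lines is a collection of references; Line.document is a back-reference.
# The add_line helper that keeps both sides consistent is an illustrative assumption.

class Document:
    def __init__(self, number):
        self.number = number
        self.lines = []            # link field: collection of Line references

    def add_line(self, text):
        line = Line(self, text)
        self.lines.append(line)    # keep both ends of the link consistent
        return line

class Line:
    def __init__(self, document, text):
        self.document = document   # link field: reference to the owning Document
        self.text = text

doc = Document("INV-001")
doc.add_line("2 x widget")
doc.add_line("1 x gadget")
print([line.text for line in doc.lines])   # lines that belong to the document
print(doc.lines[0].document.number)        # navigate back to the document
```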
https://en.wikipedia.org/wiki/ERIL
Insoftware engineering, asoftware development processorsoftware development life cycle(SDLC) is a process of planning and managingsoftware development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improvedesignand/orproduct management. The methodology may include the pre-definition of specificdeliverablesand artifacts that are created and completed by a project team to develop or maintain an application.[1] Most modern development processes can be vaguely described asagile. Other methodologies includewaterfall,prototyping,iterative and incremental development,spiral development,rapid application development, andextreme programming. A life-cycle "model" is sometimes considered a more general term for a category of methodologies and a software development "process" is a particular instance as adopted by a specific organization.[citation needed]For example, many specific software development processes fit the spiral life-cycle model. The field is often considered a subset of thesystems development life cycle. The software development methodology framework did not emerge until the 1960s. According to Elliott (2004), thesystems development life cyclecan be considered to be the oldest formalized methodology framework for buildinginformation systems. The main idea of the software development life cycle has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out rigidly and sequentially"[2]within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functionalbusiness systemsin an age of large scale business conglomerates. Information systems activities revolved around heavydata processingandnumber crunchingroutines."[2] Requirements gathering and analysis:The first phase of the custom software development process involves understanding the client's requirements and objectives. This stage typically involves engaging in thorough discussions and conducting interviews with stakeholders to identify the desired features, functionalities, and overall scope of the software. The development team works closely with the client to analyze existing systems and workflows, determine technical feasibility, and define project milestones. Planning and design:Once the requirements are understood, the custom software development team proceeds to create a comprehensive project plan. This plan outlines the development roadmap, including timelines, resource allocation, and deliverables. The software architecture and design are also established during this phase. User interface (UI) and user experience (UX) design elements are considered to ensure the software's usability, intuitiveness, and visual appeal. Development:With the planning and design in place, the development team begins the coding process. This phase involveswriting, testing, and debugging the software code. Agile methodologies, such as scrum or kanban, are often employed to promote flexibility, collaboration, and iterative development. Regular communication between the development team and the client ensures transparency and enables quick feedback and adjustments. Testing and quality assurance:To ensure the software's reliability, performance, and security, rigorous testing and quality assurance (QA) processes are carried out. 
Different testing techniques, including unit testing, integration testing, system testing, and user acceptance testing, are employed to identify and rectify any issues or bugs. QA activities aim to validate the software against the predefined requirements, ensuring that it functions as intended. Deployment and implementation:Once the software passes the testing phase, it is ready for deployment and implementation. The development team assists the client in setting up the software environment, migrating data if necessary, and configuring the system. User training and documentation are also provided to ensure a smooth transition and enable users to maximize the software's potential. Maintenance and support:After the software is deployed, ongoing maintenance and support become crucial to address any issues, enhance performance, and incorporate future enhancements. Regular updates, bug fixes, and security patches are released to keep the software up-to-date and secure. This phase also involves providing technical support to end users and addressing their queries or concerns. Methodologies, processes, and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group. In some cases, a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process. Specific examples include: Since DSDM in 1994, all of the methodologies on the above list except RUP have been agile methodologies - yet many organizations, especially governments, still use pre-agile processes (often waterfall or similar). Software process andsoftware qualityare closely interrelated; some unexpected facets and effects have been observed in practice.[3] Among these, another software development process has been established inopen source. The adoption of these best practices known and established processes within the confines of a company is calledinner source. Software prototypingis about creating prototypes, i.e. incomplete versions of the software program being developed. The basic principles are:[1] A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies. "Agile software development" refers to a group of software development frameworks based on iterative development, where requirements and solutions evolve via collaboration between self-organizing cross-functional teams. The term was coined in the year 2001 when theAgile Manifestowas formulated. Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes fundamentally incorporate iteration and the continuous feedback that it provides to successively refine and deliver a software system. The Agile model also includes the following software development processes: Continuous integrationis the practice of merging all developer working copies to a sharedmainlineseveral times a day.[4]Grady Boochfirst named and proposed CI inhis 1991 method,[5]although he did not advocate integrating several times a day.Extreme programming(XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day. 
Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process. There are three main variants of incremental development:[1] Rapid application development(RAD) is a software development methodology, which favorsiterative developmentand the rapid construction ofprototypesinstead of large amounts of up-front planning. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster and makes it easier to change requirements. The rapid development process starts with the development of preliminarydata modelsandbusiness process modelsusingstructured techniques. In the next stage, requirements are verified using prototyping, eventually to refine the data and process models. These stages are repeated iteratively; further development results in "a combined business requirements and technical design statement to be used for constructing new systems".[6] The term was first used to describe a software development process introduced byJames Martinin 1991. According to Whitten (2003), it is a merger of variousstructured techniques, especially data-driveninformation technology engineering, with prototyping techniques to accelerate software systems development.[6] The basic principles of rapid application development are:[1] The waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through several phases, typically: The first formal description of the method is often cited as an article published byWinston W. Royce[7]in 1970, although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.[8] The basic principles are:[1] The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall approach discourages revisiting and revising any prior phase once it is complete.[according to whom?]This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of other more "flexible" models. It has been widely blamed for several large-scale government projects running over budget, over time and sometimes failing to deliver on requirements due to thebig design up frontapproach.[according to whom?]Except when contractually required, the waterfall model has been largely superseded by more flexible and versatile methodologies developed specifically for software development.[according to whom?]SeeCriticism of waterfall model. In 1988,Barry Boehmpublished a formal software system development "spiral model," which combines some key aspects of thewaterfall modelandrapid prototypingmethodologies, in an effort to combine advantages oftop-down and bottom-upconcepts. It provided emphasis on a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems. The basic principles are:[1] Shape Up is a software development approach introduced byBasecampin 2018. It is a set of principles and techniques that Basecamp developed internally to overcome the problem of projects dragging on with no clear end. Its primary target audience is remote teams. 
Shape Up has no estimation and velocity tracking, backlogs, or sprints, unlikewaterfall,agile, orscrum. Instead, those concepts are replaced with appetite, betting, and cycles. As of 2022, besides Basecamp, notable organizations that have adopted Shape Up include UserVoice and Block.[12][13] Other high-level software project methodologies include: Some "process models" are abstract descriptions for evaluating, comparing, and improving the specific process adopted by an organization.
https://en.wikipedia.org/wiki/Software_development_methodology
"AI slop", often simply "slop", is a derogatory term for low-quality media, including writing and images, made usinggenerative artificial intelligencetechnology, characterized by an inherent lack of effort, logic, or purpose.[1][4][5]Coined in the 2020s, the term has a pejorative connotation akin to "spam".[4] It has been variously defined as "digital clutter", "filler content produced by AI tools that prioritize speed and quantity over substance and quality",[6]and "shoddy or unwanted AI content insocial media, art, books and, increasingly, in search results".[7] Jonathan Gilmore, a philosophy professor at theCity University of New York, describes the "incredibly banal, realistic style" of AI slop as being "very easy to process".[8] As earlylarge language models(LLMs) andimage diffusion modelsaccelerated the creation of high-volume but low-quality written content and images, discussion commenced among journalists and on social platforms for the appropriate term for the influx of material. Terms proposed included "AI garbage", "AI pollution", and "AI-generated dross".[5]Early uses of the term "slop" as a descriptor for low-grade AI material apparently came in reaction to the release of AI image generators in 2022. Its early use has been noted among4chan,Hacker News, andYouTubecommentators as a form of in-groupslang.[7] The British computer programmerSimon Willisonis credited with being an early champion of the term "slop" in the mainstream,[1][7]having used it on his personal blog in May 2024.[9]However, he has said it was in use long before he began pushing for the term.[7] The term gained increased popularity in the second quarter of 2024 in part because ofGoogle's use of itsGeminiAI model to generate responses to search queries,[7]and was widely criticized in media headlines during the fourth quarter of 2024.[1][4] Research found that training LLMs on slop causesmodel collapse: a consistent decrease in the lexical, syntactic, and semantic diversity of the model outputs through successive iterations, notably remarkable for tasks demanding high levels of creativity.[10]AI slop is similarly produced when the same content is continuously refined, paraphrased, or reprocessed through LLMs, with each output becoming the input for the next iteration. Research has shown that this process causes information to gradually distort as it passes through a chain of LLMs, a phenomenon reminiscent of a classic communication exercise known as thetelephone game.[11] AI image and video slop proliferated on social media in part because it was revenue-generating for its creators onFacebookandTikTok, with the issue affecting Facebook most notably. 
This incentivizes individuals fromdeveloping countriesto create images that appeal to audiences in the United States which attract higher advertising rates.[12][13][14] The journalist Jason Koebler speculated that the bizarre nature of some of the content may be due to the creators using Hindi, Urdu, and Vietnamese prompts (languages which are underrepresented in the model'straining data), or using erraticspeech-to-textmethods to translate their intentions into English.[12] Speaking toNew Yorkmagazine, a Kenyan creator of slop images described givingChatGPTprompts such as "WRITE ME 10 PROMPT picture OF JESUS WHICH WILLING BRING HIGH ENGAGEMENT ON FACEBOOK [sic]", and then feeding those created prompts into atext-to-imageAI service such asMidjourney.[4] In August 2024,The Atlanticnoted that AI slop was becoming associated with the political right in the United States, who were using it forshitpostingandengagement farmingon social media, with the technology offering "cheap, fast, on-demand fodder for content".[15] AI slop is frequently used in political campaigns in an attempt at gaining attention throughcontent farming.[16]In August 2024, the American politicianDonald Trumpposted a series of AI-generated images on his social media platform,Truth Social, that portrayed fans of the singerTaylor Swiftin "Swifties for Trump" T-shirts, as well as a photo of the singer herself appearing to endorseTrump's 2024 presidential campaign. The images originated from the conservativeTwitteraccount@amuse, which posted numerous AI slop images leading up to the2024 United States electionsthat were shared by other high-profile figures within the AmericanRepublican Party, such asElon Musk, who has publicly endorsed the utilization of generative AI, furthering this association.[17] In the aftermath ofHurricane Helenein the United States, members of the Republican Party circulated an AI-generated image of a young girl holding a puppy in a flood, and used it as evidence of the failure of PresidentJoe Bidento respond to the disaster.[18][3]Some, likeAmy Kremer, shared the image on social media even while acknowledging that it was not genuine.[19][20] In November 2024,Coca-Colaused artificial intelligence to create three commercials as part of their annualholiday campaign. These videos were immediately met with negative reception from both casual viewers and artists;[21]the animatorAlex Hirsch, the creator ofGravity Falls, criticized the company's decision not to employ human artists to create the commercial.[22]In response to the negative feedback, the company defended their decision to use generative artificial intelligence stating that "Coca-Cola will always remain dedicated to creating the highest level of work at the intersection of human creativity and technology".[23] In March 2025,Paramount Pictureswas criticized for using AI scripting and narration in anInstagramvideo promoting the filmNovocaine.[24]The ad uses a robotic AI voice in a style similar to low-quality AI spam videos produced by content farms.A24received similar backlash for releasing a series of AI-generated posters for the 2024 filmCivil War. 
One poster appears to depict a group of soldiers in a tank-like raft preparing to fire on a large swan, an image which does not resemble the events of the film.[25][26] In the same month,Activisionposted various advertisements and posters for fake video games such as "Guitar HeroMobile", "Crash Bandicoot: Brawl", and "Call of Duty: Zombie Defender" that were all made using generative AI on platforms such asFacebookand Instagram, which many labelled as AI slop.[27]The intention of the posts was later stated to act as a survey for interest in possible titles by the company.[28]TheItalian brainrotAI trend was widely adopted by advertisers to adjust well to younger audiences.[29] Fantastical promotional graphics for the 2024Willy's Chocolate Experienceevent, characterized as "AI-generated slop",[30]misled audiences into attending an event that was held in a lightly decorated warehouse. Tickets were marketed throughFacebookadvertisements showing AI-generated imagery, with no genuine photographs of the venue.[31] In October 2024, thousands of people were reported to have assembled for a non-existent Halloween parade inDublinas a result of a listing on an aggregation listings website, MySpiritHalloween.com, which used AI-generated content.[32][33]The listing went viral on TikTok andInstagram.[34]While a similar parade had been held inGalway, and Dublin had hosted parades in prior years, there was no parade in Dublin in 2024.[33]One analyst characterized the website, which appeared to use AI-generated staff pictures, as likely using artificial intelligence "to create content quickly and cheaply where opportunities are found".[35]The site's owner said that "We asked ChatGPT to write the article for us, but it wasn't ChatGPT by itself." In the past the site had removed non-existent events when contacted by their venues, but in the case of the Dublin parade the site owner said that "no one reported that this one wasn't going to happen". MySpiritHalloween.com updated their page to say that the parade had been "canceled" when they became aware of the issue.[36] Online booksellers and library vendors now have many titles that are written by AI and are not curated into collections by librarians. The digital media providerHoopla, which supplies libraries withebooksand downloadable content, has generative AI books with fictional authors and dubious quality, which cost libraries money when checked out by unsuspecting patrons.[37] The 2024 video gameCall of Duty: Black Ops 6includes assets generated by artificial intelligence. Since the game's initial release, many players had accusedTreyarchandRaven Softwareof using AI to create in-game assets, including loading screens, emblems, and calling cards. A particular example was a loading screen for the zombies game mode that depicted "Necroclaus", a zombifiedSanta Clauswith six fingers on one hand, an image which also had other irregularities.[38]Theprevious entryin theCall of Dutyfranchise was also accused of selling AI-generatedcosmetics.[39] In February 2025, Activision disclosedBlack Ops 6's usage of generative artificial intelligence to comply withValve's policies on AI-generated or assisted products onSteam. Activision states on the game's product page on Steam that "Our team uses generative AI tools to help develop some in game assets."[40] Foamstars, amultiplayerthird-person shooterreleased bySquare Enixin 2024, features in-game music withcover artthat was generated usingMidjourney. 
Square Enix confirmed the use of AI, but defended the decision, saying that they wanted to "experiment" with artificial intelligence technologies and claiming that the generated assets make up "about 0.01% or even less" of game content.[41][42][43]Previously, on January 1, 2024, Square Enix president Takashi Kiryu stated in a new year letter that the company will be "aggressive in applying AI and other cutting-edge technologies to both [their] content development and [their] publishing functions".[44][45] In 2024,Rovio Entertainmentreleased a demo of a mobile game called Angry Birds: Block Quest onAndroid. The game featured AI-generated images for loading screens and backgrounds.[46]It was heavily criticized by players, who called itshovelwareand disapproved of Rovio's use of AI images.[47][48]It was eventually discontinued and removed from thePlay Store. Some films have received backlash for including AI-generated content. The filmLate Night with the Devilwas notable for its use of AI, which some criticized as being AI slop.[49][50]Several low-quality AI-generated images were used as interstitial title cards, with one image featuring a skeleton with inaccurate bone structure and poorly-generated fingers that appear disconnected from its hands.[51] Some streaming services such asAmazon Prime Videohave used AI to generate posters and thumbnail images in a manner that can be described as slop. A low-quality AI poster was used for the 1922 filmNosferatu, depictingCount Orlokin a way that does not resemble his look in the film.[52]A thumbnail image for12 Angry MenonAmazon Freeveeused AI to depict 19 men with smudged faces, none of whom appeared to bear any similarities to the characters in the film.[53][54]Additionally, some viewers have noticed that many plot descriptions appear to be generated by AI, which some people have characterized as slop. One synopsis briefly listed on the site for the filmDog Day Afternoonread: "A man takes hostages at a bank in Brooklyn. Unfortunately I do not have enough information to summarize further within the provided guidelines."[55] In one case Deutsche Telekom removed a series from their media offer after viewers complained about the bad quality and monotonous German voice dubbing (translated from original Polish) and it was found out that it was done via AI.[56]
https://en.wikipedia.org/wiki/AI_slop
In game theory, a game is said to be a potential game if the incentive of all players to change their strategy can be expressed using a single global function called the potential function. The concept originated in a 1996 paper by Dov Monderer and Lloyd Shapley.[1] The properties of several types of potential games have since been studied. Games can be either ordinal or cardinal potential games. In cardinal games, the difference in individual payoffs for each player from individually changing one's strategy, other things equal, has to have the same value as the difference in values for the potential function. In ordinal games, only the signs of the differences have to be the same. The potential function is a useful tool to analyze equilibrium properties of games, since the incentives of all players are mapped into one function, and the set of pure Nash equilibria can be found by locating the local optima of the potential function. Convergence and finite-time convergence of an iterated game towards a Nash equilibrium can also be understood by studying the potential function. Potential games can be studied as repeated games with state, so that every round played has a direct consequence on the game's state in the next round.[2] This approach has applications in distributed control such as distributed resource allocation, where players without a central correlation mechanism can cooperate to achieve a globally optimal resource distribution. Let N be the number of players, A the set of action profiles over the action sets A_i of each player, and u_i : A → R the payoff function for player i, 1 ≤ i ≤ N. Given a game G = (N, A = A_1 × … × A_N, u : A → R^N), we say that G is a potential game with an exact (weighted, ordinal, generalized ordinal, best-response) potential function if Φ : A → R is an exact (weighted, ordinal, generalized ordinal, best-response, respectively) potential function for G. For example, Φ is an exact potential function if, for every player i, every pair of actions a_i, b_i and every profile a_{−i} of the other players' actions, u_i(b_i, a_{−i}) − u_i(a_i, a_{−i}) = Φ(b_i, a_{−i}) − Φ(a_i, a_{−i}); the weaker variants relax this condition (an ordinal potential, for instance, only requires the two differences to have the same sign). Note that while there are N utility functions, one for each player, there is only one potential function. Thus, through the lens of potential functions, the players become interchangeable (in the sense of one of the definitions above). Because of this symmetry of the game, decentralized algorithms based on the shared potential function often lead to convergence (in some sense) to a Nash equilibrium. In a 2-player, 2-action game with externalities, individual players' payoffs are given by the function u_i(a_i, a_j) = b_i a_i + w a_i a_j, where a_i is player i's action, a_j is the opponent's action, and w is a positive externality from choosing the same action. The action choices are +1 and −1, as seen in the payoff matrix in Figure 1. This game has a potential function P(a_1, a_2) = b_1 a_1 + b_2 a_2 + w a_1 a_2. If player 1 moves from −1 to +1, the payoff difference is Δu_1 = u_1(+1, a_2) − u_1(−1, a_2) = 2 b_1 + 2 w a_2. The change in potential is ΔP = P(+1, a_2) − P(−1, a_2) = (b_1 + b_2 a_2 + w a_2) − (−b_1 + b_2 a_2 − w a_2) = 2 b_1 + 2 w a_2 = Δu_1. The solution for player 2 is equivalent. Using the numerical values b_1 = 2, b_2 = −1, w = 3, this example becomes a simple battle of the sexes, as shown in Figure 2. The game has two pure Nash equilibria, (+1, +1) and (−1, −1). These are also the local maxima of the potential function (Figure 3). The only stochastically stable equilibrium is (+1, +1), the global maximum of the potential function.
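The identity Δu_i = ΔP claimed for this example can be checked mechanically. The sketch below enumerates all action profiles and unilateral deviations for the payoffs u_i(a_i, a_j) = b_i a_i + w a_i a_j and the candidate potential P(a_1, a_2) = b_1 a_1 + b_2 a_2 + w a_1 a_2, using the numerical values b_1 = 2, b_2 = −1, w = 3 from the text, and then lists the pure Nash equilibria, which coincide with the local maxima of P.

```python
# Check that P is an exact potential for the 2-player, 2-action externality game.
from itertools import product

b1, b2, w = 2, -1, 3
actions = (-1, +1)

def u1(a1, a2): return b1 * a1 + w * a1 * a2
def u2(a1, a2): return b2 * a2 + w * a1 * a2
def P(a1, a2):  return b1 * a1 + b2 * a2 + w * a1 * a2

# Exact potential condition: a unilateral deviation changes u_i and P by the same amount.
for a1, a2 in product(actions, actions):
    for d in actions:  # player 1 deviates to d
        assert u1(d, a2) - u1(a1, a2) == P(d, a2) - P(a1, a2)
    for d in actions:  # player 2 deviates to d
        assert u2(a1, d) - u2(a1, a2) == P(a1, d) - P(a1, a2)

# Pure Nash equilibria: profiles where no unilateral deviation raises the deviator's payoff.
equilibria = [
    (a1, a2) for a1, a2 in product(actions, actions)
    if all(u1(d, a2) <= u1(a1, a2) for d in actions)
    and all(u2(a1, d) <= u2(a1, a2) for d in actions)
]
print("pure Nash equilibria:", equilibria)   # expect (-1, -1) and (+1, +1)
print("potential values:", {(a1, a2): P(a1, a2) for a1, a2 in product(actions, actions)})
```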
A 2-player, 2-action game cannot be an exact potential game unless the two players' second-order payoff differences coincide, that is, unless u_1(+1, +1) − u_1(−1, +1) − u_1(+1, −1) + u_1(−1, −1) = u_2(+1, +1) − u_2(+1, −1) − u_2(−1, +1) + u_2(−1, −1). Exact potential games are equivalent to congestion games: Rosenthal[3] proved that every congestion game has an exact potential; Monderer and Shapley[1] proved the opposite direction: every game with an exact potential function is a congestion game. An improvement path (also called Nash dynamics) is a sequence of strategy-vectors, in which each vector is attained from the previous vector by a single player switching his strategy to a strategy that strictly increases his utility. If a game has a generalized-ordinal-potential function Φ, then Φ is strictly increasing along every improvement path, so every improvement path is acyclic. If, in addition, the game has finitely many strategies, then every improvement path must be finite. This property is called the finite improvement property (FIP). We have just proved that every finite generalized-ordinal-potential game has the FIP. The opposite is also true: every finite game that has the FIP has a generalized-ordinal-potential function.[4][clarification needed] The terminal state of every finite improvement path is a Nash equilibrium, so the FIP implies the existence of a pure-strategy Nash equilibrium. Moreover, it implies that a Nash equilibrium can be computed by a distributed process, in which each agent only has to improve his own strategy. A best-response path is a special case of an improvement path, in which each vector is attained from the previous vector by a single player switching his strategy to a best-response strategy. The property that every best-response path is finite is called the finite best-response property (FBRP). FBRP is weaker than FIP, and it still implies the existence of a pure-strategy Nash equilibrium. It also implies that a Nash equilibrium can be computed by a distributed process, but the computational burden on the agents is higher than with FIP, since they have to compute a best response. An even weaker property is weak acyclicity (WA).[5] It means that, for any initial strategy-vector, there exists a finite best-response path starting at that vector. Weak acyclicity is not sufficient for the existence of a potential function (since some improvement paths may be cyclic), but it is sufficient for the existence of a pure-strategy Nash equilibrium. It implies that a Nash equilibrium can be computed almost surely by a stochastic distributed process, in which at each point a player is chosen at random, and this player chooses a best response at random.[4]
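Because every improvement step strictly increases the potential, repeatedly letting a player switch to a best response must terminate in a finite game with a generalized ordinal potential. The sketch below runs such best-response dynamics on the 2×2 externality example above (payoffs and parameters repeated for self-containment); the starting profile and sweep order are arbitrary choices made for illustration.

```python
# Best-response dynamics on the 2x2 externality game; terminates at a pure Nash equilibrium.
b1, b2, w = 2, -1, 3
actions = (-1, +1)

def payoff(player, a1, a2):
    return b1 * a1 + w * a1 * a2 if player == 0 else b2 * a2 + w * a1 * a2

def best_response(player, profile):
    """Best action for `player`, holding the other player's action fixed."""
    others = list(profile)
    return max(actions,
               key=lambda a: payoff(player, *(others[:player] + [a] + others[player + 1:])))

profile = [-1, +1]                 # arbitrary starting profile
path = [tuple(profile)]
changed = True
while changed:                     # this best-response path is finite because an exact potential exists
    changed = False
    for player in (0, 1):
        br = best_response(player, profile)
        if br != profile[player]:
            profile[player] = br
            path.append(tuple(profile))
            changed = True

print("best-response path:", path)  # ends at a pure Nash equilibrium, e.g. (1, 1)
```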
https://en.wikipedia.org/wiki/Potential_game#Bounded_Rational_Models
Security Assertion Markup Language(SAML, pronouncedSAM-el,/ˈsæməl/)[1]is anopen standardfor exchangingauthenticationandauthorizationdata between parties, in particular, between anidentity providerand aservice provider. SAML is anXML-basedmarkup languagefor security assertions (statements that service providers use to make access-control decisions). SAML is also: An important use case that SAML addresses isweb-browsersingle sign-on(SSO). Single sign-on is relatively easy to accomplish within asecurity domain(usingcookies, for example) but extending SSO across security domains is more difficult and resulted in the proliferation of non-interoperable proprietary technologies. The SAML Web Browser SSO profile was specified and standardized to promote interoperability.[2]In practice, SAML SSO is most commonly used for authentication into cloud-based business software.[3] The SAML specification defines three roles: the principal (typically a human user), theidentity provider(IdP) and theservice provider(SP). In the primary use case addressed by SAML, the principal requests a service from the service provider. The service provider requests and obtains an authentication assertion from the identity provider. On the basis of this assertion, the service provider can make anaccess controldecision, that is, it can decide whether to perform the service for the connected principal. At the heart of the SAML assertion is a subject (a principal within the context of a particular security domain) about which something is being asserted. The subject is usually (but not necessarily) a human. As in the SAML 2.0 Technical Overview,[4]the terms subject and principal are used interchangeably in this document. Before delivering the subject-based assertion from Identity Provider to the Service Provider, the Identity Provider may request some information from the principal (such as a user name and password) in order to authenticate the principal. SAML specifies the content of the assertion that is passed from the Identity Provider to the Service Provider. In SAML, one Identity Provider may provide SAML assertions to many Service Providers. Similarly, one Service Provider (SP) may rely on and trust assertions from many independent Identity Providers (IdP).[5] SAML does not specify the method of authentication at the identity provider. The IdP may use a username and password, or some other form of authentication, includingmulti-factor authentication. A directory service such asRADIUS,LDAP, orActive Directorythat allows users to log in with a user name and password is a typical source of authentication tokens at an identity provider.[6]The popular Internet social networking services also provide identity services that in theory could be used to support SAML exchanges. 
TheOrganization for the Advancement of Structured Information Standards (OASIS)Security Services Technical Committee (SSTC), which met for the first time in January 2001, was chartered "to define an XML framework for exchanging authentication and authorization information."[7]To this end, the following intellectual property was contributed to the SSTC during the first two months of that year: Building on these initial contributions, in November 2002 OASIS announced the Security Assertion Markup Language (SAML) 1.0 specification as an OASIS Standard.[8] Meanwhile, theLiberty Alliance, a large consortium of companies, non-profit and government organizations, proposed an extension to the SAML standard called the Liberty Identity Federation Framework (ID-FF).[9]Like its SAML predecessor, Liberty ID-FF proposed a standardized, cross-domain, web-based, single sign-on framework. In addition, Liberty described acircle of trustwhere each participating domain is trusted to accurately document the processes used to identify a user, the type of authentication system used, and any policies associated with the resulting authentication credentials. Other members of the circle of trust could then examine these policies to determine whether to trust such information.[10] While Liberty was developing ID-FF, the SSTC began work on a minor upgrade to the SAML standard. The resulting SAML 1.1 specification was ratified by the SSTC in September 2003. Then, in November of that same year,Liberty contributed ID-FF 1.2 to OASIS, thereby sowing the seeds for the next major version of SAML. In March 2005, SAML 2.0 was announced as an OASIS Standard. SAML 2.0 represents the convergence of Liberty ID-FF and proprietary extensions contributed by theShibbolethproject, as well as early versions of SAML itself. Most SAML implementations support v2.0 while many still support v1.1 for backward compatibility. By January 2008, deployments of SAML 2.0 became common in government, higher education, and commercial enterprises worldwide.[10] SAML has undergone one minor and one major revision since 1.0. The Liberty Alliance contributed its Identity Federation Framework (ID-FF) to the OASIS SSTC in September 2003: Versions 1.0 and 1.1 of SAML are similar even though small differences exist.,[11]however, the differences between SAML 2.0 and SAML 1.1 are substantial. Although the two standards address the same use case, SAML 2.0 is incompatible with its predecessor. Although ID-FF 1.2 was contributed to OASIS as the basis of SAML 2.0, there are some important differences between SAML 2.0 and ID-FF 1.2. In particular, the two specifications, despite their common roots, are incompatible.[10] SAML is built upon a number of existing standards: SAML defines XML-based assertions and protocols, bindings, and profiles. The termSAML Corerefers to the general syntax and semantics of SAML assertions as well as the protocol used to request and transmit those assertions from one system entity to another.SAML protocolrefers towhatis transmitted, nothow(the latter is determined by the choice of binding). So SAML Core defines "bare" SAML assertions along with SAML request and response elements. ASAML bindingdetermines how SAML requests and responses map onto standard messaging or communications protocols. An important (synchronous) binding is the SAML SOAP binding. ASAML profileis a concrete manifestation of a defined use case using a particular combination of assertions, protocols and bindings. 
A SAMLassertioncontains a packet of security information: Loosely speaking, a relying party interprets an assertion as follows: AssertionAwas issued at timetby issuerRregarding subjectSprovided conditionsCare valid. SAML assertions are usually transferred from identity providers to service providers. Assertions containstatementsthat service providers use to make access-control decisions. Three types of statements are provided by SAML: Authentication statementsassert to the service provider that the principal did indeed authenticate with the identity provider at a particular time using a particular method of authentication. Other information about the authenticated principal (called theauthentication context) may be disclosed in an authentication statement. Anattribute statementasserts that a principal is associated with certain attributes. Anattributeis simply aname–value pair. Relying parties use attributes to make access-control decisions. Anauthorization decision statementasserts that a principal is permitted to perform actionAon resourceRgiven evidenceE. The expressiveness of authorization decision statements in SAML is intentionally limited. More-advanced use cases are encouraged to useXACMLinstead. A SAMLprotocoldescribes how certain SAML elements (including assertions) are packaged within SAML request and response elements, and gives the processing rules that SAML entities must follow when producing or consuming these elements. For the most part, a SAML protocol is a simple request-response protocol. The most important type of SAML protocol request is called aquery. A service provider makes a query directly to an identity provider over a secure back channel. Thus query messages are typically bound to SOAP. Corresponding to the three types of statements, there are three types of SAML queries: The result of an attribute query is a SAML response containing an assertion, which itself contains an attribute statement. See the SAML 2.0 topic foran example of attribute query/response. Beyond queries, SAML 1.1 specifies no other protocols. SAML 2.0 expands the notion ofprotocolconsiderably. The following protocols are described in detail in SAML 2.0 Core: Most of these protocols are new inSAML 2.0. A SAMLbindingis a mapping of a SAML protocol message onto standard messaging formats and/or communications protocols. For example, the SAML SOAP binding specifies how a SAML message is encapsulated in a SOAP envelope, which itself is bound to an HTTP message. SAML 1.1 specifies just one binding, the SAML SOAP Binding. In addition to SOAP, implicit in SAML 1.1 Web Browser SSO are the precursors of the HTTP POST Binding, the HTTP Redirect Binding, and the HTTP Artifact Binding. These are not defined explicitly, however, and are only used in conjunction with SAML 1.1 Web Browser SSO. The notion of binding is not fully developed until SAML 2.0. SAML 2.0 completely separates the binding concept from the underlying profile. In fact, there is a brandnew binding specification in SAML 2.0that defines the following standalone bindings: This reorganization provides tremendous flexibility: taking just Web Browser SSO alone as an example, a service provider can choose from four bindings (HTTP Redirect, HTTP POST and two flavors of HTTP Artifact), while the identity provider has three binding options (HTTP POST plus two forms of HTTP Artifact), for a total of twelve possible deployments of the SAML 2.0 Web Browser SSO Profile. 
A SAMLprofiledescribes in detail how SAML assertions, protocols, and bindings combine to support a defined use case. The most important SAML profile is the Web Browser SSO Profile. SAML 1.1 specifies two forms of Web Browser SSO, the Browser/Artifact Profile and the Browser/POST Profile. The latter passes assertionsby valuewhereas Browser/Artifact passes assertionsby reference. As a consequence, Browser/Artifact requires a back-channel SAML exchange over SOAP. In SAML 1.1, all flows begin with a request at the identity provider for simplicity. Proprietary extensions to the basic IdP-initiated flow have been proposed (byShibboleth, for example). The Web Browser SSO Profile was completely refactored for SAML 2.0. Conceptually, SAML 1.1 Browser/Artifact and Browser/POST are special cases of SAML 2.0 Web Browser SSO. The latter is considerably more flexible than its SAML 1.1 counterpart due to the new "plug-and-play" binding design of SAML 2.0. Unlike previous versions, SAML 2.0 browser flows begin with a request at the service provider. This provides greater flexibility, but SP-initiated flows naturally give rise to the so-calledIdentity Provider Discoveryproblem, the focus of much research today. In addition to Web Browser SSO, SAML 2.0 introduces numerous new profiles: Aside from the SAML Web Browser SSO Profile, some important third-party profiles of SAML include: The SAML specifications recommend, and in some cases mandate, a variety of security mechanisms: Requirements are often phrased in terms of (mutual) authentication, integrity, and confidentiality, leaving the choice of security mechanism to implementers and deployers. The primary SAML use case is calledWeb Browser Single Sign-On (SSO). A user utilizes auser agent(usually a web browser) to request a web resource protected by a SAMLservice provider. The service provider, wishing to know the identity of the requesting user, issues an authentication request to a SAMLidentity providerthrough the user agent. The resulting protocol flow is depicted in the following diagram. In SAML 1.1, the flow begins with a request to the identity provider's inter-site transfer service at step 3. In the example flow above, all depicted exchanges arefront-channel exchanges, that is, an HTTP user agent (browser) communicates with a SAML entity at each step. In particular, there are noback-channel exchangesor direct communications between the service provider and the identity provider. Front-channel exchanges lead to simple protocol flows where all messages are passedby valueusing a simple HTTP binding (GET or POST). Indeed, the flow outlined in the previous section is sometimes called theLightweight Web Browser SSO Profile. Alternatively, for increased security or privacy, messages may be passedby reference. For example, an identity provider may supply a reference to a SAML assertion (called anartifact) instead of transmitting the assertion directly through the user agent. Subsequently, the service provider requests the actual assertion via a back channel. Such a back-channel exchange is specified as aSOAPmessage exchange (SAML over SOAP over HTTP). In general, any SAML exchange over a secure back channel is conducted as a SOAP message exchange. On the back channel, SAML specifies the use of SOAP 1.1. The use of SOAP as a binding mechanism is optional, however. Any given SAML deployment will choose whatever bindings are appropriate.
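To make the protocol/binding distinction concrete, the sketch below builds a bare-bones SAML 2.0 AuthnRequest with Python's standard library and encodes it for the HTTP Redirect binding (DEFLATE, then base64, then URL-encoding into a SAMLRequest query parameter). The entity IDs and endpoint URLs are placeholders, the request is unsigned, and a real deployment would use a maintained SAML library rather than hand-rolled XML.

```python
# Minimal, unsigned SAML 2.0 AuthnRequest encoded for the HTTP Redirect binding.
# Entity IDs and endpoint URLs below are placeholders for illustration only.
import base64, datetime, urllib.parse, uuid, zlib
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

request = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
    "ID": "_" + uuid.uuid4().hex,
    "Version": "2.0",
    "IssueInstant": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
    "AssertionConsumerServiceURL": "https://sp.example.org/acs",   # placeholder SP endpoint
    "ProtocolBinding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST",
})
issuer = ET.SubElement(request, f"{{{SAML}}}Issuer")
issuer.text = "https://sp.example.org/metadata"                    # placeholder SP entity ID

xml_bytes = ET.tostring(request)

# HTTP Redirect binding: raw DEFLATE, base64, then URL-encode as the SAMLRequest parameter.
deflated = zlib.compress(xml_bytes, 9)[2:-4]       # strip zlib header/checksum for raw deflate
saml_request = base64.b64encode(deflated).decode()
redirect_url = ("https://idp.example.org/sso?" +   # placeholder IdP single sign-on endpoint
                urllib.parse.urlencode({"SAMLRequest": saml_request, "RelayState": "/home"}))
print(redirect_url)
```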
https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language
In situ adaptive tabulation (ISAT) is an algorithm for the approximation of nonlinear relationships. ISAT is based on multiple linear regressions that are dynamically added as additional information is discovered. The technique is adaptive in that it adds new linear regressions dynamically to a store of possible retrieval points. ISAT maintains error control by defining finer granularity in regions of increased nonlinearity. A binary tree search traverses cutting hyperplanes to locate a local linear approximation. ISAT is an alternative to artificial neural networks that is receiving increased attention for its desirable characteristics. ISAT was first proposed by Stephen B. Pope for the computational reduction of turbulent combustion simulation[1] and later extended to model predictive control.[2] It has been generalized to an ISAT framework that operates on any input and output data regardless of the application. An improved version of the algorithm[3] was proposed just over a decade after the original publication, adding new features that improve the efficiency of the search for tabulated data as well as the error control.
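The store-and-retrieve idea can be illustrated with a deliberately simplified one-dimensional sketch: expensive evaluations are tabulated together with a local linear model and a region of accuracy, and later queries inside such a region are answered by the cheap linear retrieve. This is not Pope's published algorithm (no ellipsoid of accuracy, no binary tree of cutting hyperplanes); all function names and tolerances are illustrative.

```python
# A much-simplified, 1-D illustration of the idea behind ISAT: expensive function
# evaluations are tabulated together with a local linear model, and later queries
# that fall inside a record's region of accuracy are answered by the cheap linear
# retrieve instead of a new evaluation.
import math


def expensive_f(x: float) -> float:
    """Stand-in for a costly nonlinear model (e.g., a chemistry source term)."""
    return math.exp(-x) * math.sin(3.0 * x)


def derivative_f(x: float, h: float = 1e-6) -> float:
    return (expensive_f(x + h) - expensive_f(x - h)) / (2.0 * h)


class IsatTable:
    def __init__(self, tol: float):
        self.tol = tol
        self.records = []          # each record: (x0, f0, slope, radius)

    def query(self, x: float) -> float:
        # Retrieve: use a stored linear model if x lies inside its region of accuracy.
        for x0, f0, slope, radius in self.records:
            if abs(x - x0) <= radius:
                return f0 + slope * (x - x0)
        # Add: evaluate the expensive function and tabulate a new local model.
        f0, slope = expensive_f(x), derivative_f(x)
        # Crude region of accuracy from a second-derivative estimate: finer
        # granularity where the function is more strongly nonlinear.
        curv = abs(derivative_f(x + 1e-3) - derivative_f(x - 1e-3)) / 2e-3
        radius = math.sqrt(2.0 * self.tol / max(curv, 1e-12))
        self.records.append((x, f0, slope, radius))
        return f0


if __name__ == "__main__":
    table = IsatTable(tol=1e-4)
    for xq in [0.10, 0.11, 0.12, 0.50, 0.51]:
        print(f"f({xq:.2f}) ~ {table.query(xq):+.6f}")
    print("records stored:", len(table.records))
```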
https://en.wikipedia.org/wiki/In_Situ_Adaptive_Tabulation
A detached signature is a type of digital signature that is kept separate from its signed data, as opposed to being bundled with the data into a single file. This approach offers several advantages, such as leaving the original data object unmodified. However, there is a risk that the detached signature could become separated from its associated data, leaving the data unverifiable.
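As an illustration only, the following sketch writes an RSA-PSS signature to a file separate from the signed data. It assumes the third-party Python cryptography package, and the file names are placeholders; in practice, tools such as GnuPG's --detach-sign option or detached CMS/XMLDSig profiles serve the same purpose.

```python
# Minimal sketch of a detached signature: the RSA-PSS signature is written to a
# separate .sig file rather than being embedded in the signed document.  Assumes
# the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

data = b"contract text that must not be modified"
with open("contract.txt", "wb") as f:
    f.write(data)                              # the original data, left untouched

signature = key.sign(data, pss, hashes.SHA256())
with open("contract.txt.sig", "wb") as f:
    f.write(signature)                         # the detached signature, a separate file

# Verification: anyone holding the public key, the data file and the .sig file can
# check integrity; losing the .sig file leaves the data intact but unverifiable.
with open("contract.txt", "rb") as f, open("contract.txt.sig", "rb") as g:
    key.public_key().verify(g.read(), f.read(), pss, hashes.SHA256())
print("signature verified")
```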
https://en.wikipedia.org/wiki/Detached_signature
In statistics, generalized canonical correlation analysis (gCCA) is a way of making sense of cross-correlation matrices between sets of random variables when there are more than two sets. While conventional CCA generalizes principal component analysis (PCA) to two sets of random variables, gCCA generalizes PCA to more than two sets of random variables. The canonical variables represent those common factors that can be found by a large PCA of all of the transformed random variables after each set has undergone its own PCA. The Helmert–Wolf blocking (HWB) method of estimating linear regression parameters can find an optimal solution only if all cross-correlations between the data blocks are zero. These cross-correlations can always be made to vanish by introducing a new regression parameter for each common factor. The gCCA method can be used to find those harmful common factors that create cross-correlation between the blocks. However, no optimal HWB solution exists if the random variables do not contain enough information on all of the new regression parameters.
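The construction described above, a PCA within each set followed by one large PCA of the transformed variables, can be sketched as follows. This is a didactic toy rather than one of the formal gCCA criteria (such as SUMCOR or MAXVAR); the synthetic data, the whitening of the per-block scores, and the choice of two components per block are illustrative assumptions.

```python
# Illustrative sketch: each set of variables is first reduced by its own PCA, the
# whitened scores are stacked, and a single large PCA of the stacked scores yields
# a candidate "common factor".  Didactic toy only, not a formal gCCA criterion.
import numpy as np


def pca_scores(x: np.ndarray, k: int) -> np.ndarray:
    """Return the first k whitened principal-component scores of x (rows = samples)."""
    xc = x - x.mean(axis=0)
    u, s, _ = np.linalg.svd(xc, full_matrices=False)
    return u[:, :k] * np.sqrt(xc.shape[0] - 1)   # approximately unit-variance scores


rng = np.random.default_rng(0)
n = 500
common = rng.normal(size=(n, 1))                 # a shared latent factor
block1 = common @ rng.normal(size=(1, 4)) + 0.3 * rng.normal(size=(n, 4))
block2 = common @ rng.normal(size=(1, 6)) + 0.3 * rng.normal(size=(n, 6))
block3 = common @ rng.normal(size=(1, 3)) + 0.3 * rng.normal(size=(n, 3))

stacked = np.hstack([pca_scores(b, 2) for b in (block1, block2, block3)])
centered = stacked - stacked.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
canonical = centered @ vt[0]                     # leading common factor

print("correlation with the true shared factor:",
      round(float(abs(np.corrcoef(canonical, common[:, 0])[0, 1])), 3))
```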
https://en.wikipedia.org/wiki/Generalized_canonical_correlation
Computer security(alsocybersecurity,digital security, orinformation technology (IT) security) is a subdiscipline within the field ofinformation security. It consists of the protection ofcomputer software,systemsandnetworksfromthreatsthat can lead to unauthorized information disclosure, theft or damage tohardware,software, ordata, as well as from the disruption or misdirection of theservicesthey provide.[1][2] The significance of the field stems from the expanded reliance oncomputer systems, theInternet,[3]andwireless network standards. Its importance is further amplified by the growth ofsmart devices, includingsmartphones,televisions, and the various devices that constitute theInternet of things(IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity ofinformation systemsand the societies they support. Security is particularly crucial for systems that govern large-scale systems with far-reaching physical effects, such aspower distribution,elections, andfinance.[4][5] Although many aspects of computer security involve digital security, such as electronicpasswordsandencryption,physical securitymeasures such asmetal locksare still used to prevent unauthorized tampering. IT security is not a perfect subset ofinformation security, therefore does not completely align into thesecurity convergenceschema. A vulnerability refers to a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in theCommon Vulnerabilities and Exposures(CVE) database.[6]Anexploitablevulnerability is one for which at least one workingattackorexploitexists.[7]Actors maliciously seeking vulnerabilities are known asthreats. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited usingautomated toolsor customized scripts.[8][9] Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks more than others.[10] In April 2023, theUnited KingdomDepartment for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months.[11]They surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)."[11]Yet, although medium or large businesses are more often the victims, since larger companies have generally improved their security over the last decade,small and midsize businesses(SMBs) have also become increasingly vulnerable as they often "do not have advanced tools to defend the business."[10]SMBs are most likely to be affected by malware, ransomware, phishing,man-in-the-middle attacks, and Denial-of Service (DoS) Attacks.[10] Normal internet users are most likely to be affected by untargeted cyberattacks.[12]These are where attackers indiscriminately target as many devices, services, or users as possible. They do this using techniques that take advantage of the openness of the Internet. 
These strategies mostly include phishing, ransomware, water holing and scanning.[12] To secure a computer system, it is important to understand the attacks that can be made against it, and these threats can typically be classified into one of the following categories: A backdoor in a computer system, a cryptosystem, or an algorithm is any secret method of bypassing normal authentication or security controls. These weaknesses may exist for many reasons, including original design or poor configuration.[13] Due to the nature of backdoors, they are of greater concern to companies and databases as opposed to individuals. Backdoors may be added by an authorized party to allow some legitimate access or by an attacker for malicious reasons. Criminals often use malware to install backdoors, giving them remote administrative access to a system.[14] Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer."[14] Backdoors can be difficult to detect, as they often remain hidden within the source code or system firmware, and discovering them typically requires access to the source code or intimate knowledge of the operating system of the computer. Denial-of-service attacks (DoS) are designed to make a machine or network resource unavailable to its intended users.[15] Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of distributed denial-of-service (DDoS) attacks are possible, where the attack comes from a large number of points. In this case, defending against these attacks is much more difficult. Such attacks can originate from the zombie computers of a botnet or from a range of other possible techniques, including distributed reflective denial-of-service (DRDoS), where innocent systems are fooled into sending traffic to the victim.[15] With such attacks, the amplification factor makes the attack easier for the attacker because they have to use little bandwidth themselves. To understand why attackers may carry out these attacks, see the 'attacker motivation' section. A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information.[16] Attackers may also compromise security by making operating system modifications, installing software worms, keyloggers, covert listening devices or using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module standard are designed to prevent these attacks. Direct service attackers are related in concept to direct memory attacks which allow an attacker to gain direct access to a computer's memory.[17] The attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly."[17] Eavesdropping is the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network.
It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited.[18]Data transmitted across anopen networkallows an attacker to exploit a vulnerability and intercept it via various methods. Unlikemalware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice.[18]In fact, "the attacker does not need to have any ongoing connection to the software at all. The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time."[19] Using avirtual private network(VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the best form of encryption possible for wireless networks is best practice, as well as usingHTTPSinstead of an unencryptedHTTP.[20] Programs such asCarnivoreandNarusInSighthave been used by theFederal Bureau of Investigation(FBI) and NSA to eavesdrop on the systems ofinternet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faintelectromagnetictransmissions generated by the hardware.TEMPESTis a specification by the NSA referring to these attacks. Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users."[21]Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently.[22][23] Man-in-the-middle attacks(MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both party's identities and injecting themselves in-between.[24]Types of MITM attacks include: Surfacing in 2017, a new class of multi-vector,[25]polymorphic[26]cyber threats combine several types of attacks and change form to avoid cybersecurity controls as they spread. Multi-vector polymorphic attacks, as the name describes, are both multi-vectored and polymorphic.[27]Firstly, they are a singular attack that involves multiple methods of attack. In this sense, they are "multi-vectored (i.e. the attack can use multiple means of propagation such as via the Web, email and applications." However, they are also multi-staged, meaning that "they can infiltrate networks and move laterally inside the network."[27]The attacks can be polymorphic, meaning that the cyberattacks used such as viruses, worms or trojans "constantly change ("morph") making it nearly impossible to detect them using signature-based defences."[27] Phishingis the attempt of acquiring sensitive information such as usernames, passwords, and credit card details directly from users by deceiving the users.[28]Phishing is typically carried out byemail spoofing,instant messaging,text message, or on aphonecall. They often direct users to enter details at a fake website whoselook and feelare almost identical to the legitimate one.[29]The fake website often asks for personal information, such as login details and passwords. 
This information can then be used to gain access to the individual's real account on the real website. Preying on a victim's trust, phishing can be classified as a form ofsocial engineering. Attackers can use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices[30]to individuals showing that they recently purchased music, apps, or others, and instructing them to click on a link if the purchases were not authorized. A more strategic type of phishing is spear-phishing which leverages personal or organization-specific details to make the attacker appear like a trusted source. Spear-phishing attacks target specific individuals, rather than the broad net cast by phishing attempts.[31] Privilege escalationdescribes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level.[32]For example, a standard computer user may be able to exploit avulnerabilityin the system to gain access to restricted data; or even becomerootand have full unrestricted access to a system. The severity of attacks can range from attacks simply sending an unsolicited email to aransomware attackon large amounts of data. Privilege escalation usually starts withsocial engineeringtechniques, oftenphishing.[32] Privilege escalation can be separated into two strategies, horizontal and vertical privilege escalation: Any computational system affects its environment in some form. This effect it has on its environment can range from electromagnetic radiation, to residual effect on RAM cells which as a consequence make aCold boot attackpossible, to hardware implementation faults that allow for access or guessing of other values that normally should be inaccessible. In Side-channel attack scenarios, the attacker would gather such information about a system or network to guess its internal state and as a result access the information which is assumed by the victim to be secure. The target information in a side channel can be challenging to detect due to its low amplitude when combined with other signals[33] Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords, card numbers, etc. or grant physical access by, for example, impersonating a senior executive, bank, a contractor, or a customer.[34]This generally involves exploiting people's trust, and relying on theircognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action. One of the main techniques of social engineering arephishingattacks. In early 2016, theFBIreported that suchbusiness email compromise(BEC) scams had cost US businesses more than $2 billion in about two years.[35] In May 2016, theMilwaukee BucksNBAteam was the victim of this type of cyber scam with a perpetrator impersonating the team's presidentPeter Feigin, resulting in the handover of all the team's employees' 2015W-2tax forms.[36] Spoofing is an act of pretending to be a valid entity through the falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. 
Spoofing is closely related tophishing.[37][38]There are several types of spoofing, including: In 2018, the cybersecurity firmTrellixpublished research on the life-threatening risk of spoofing in the healthcare industry.[40] Tamperingdescribes amalicious modificationor alteration of data. It is an intentional but unauthorized act resulting in the modification of a system, components of systems, its intended behavior, or data. So-calledEvil Maid attacksand security services planting ofsurveillancecapability into routers are examples.[41] HTMLsmuggling allows an attacker tosmugglea malicious code inside a particular HTML or web page.[42]HTMLfiles can carry payloads concealed as benign, inert data in order to defeatcontent filters. These payloads can be reconstructed on the other side of the filter.[43] When a target user opens the HTML, the malicious code is activated; the web browser thendecodesthe script, which then unleashes the malware onto the target's device.[42] Employee behavior can have a big impact oninformation securityin organizations. Cultural concepts can help different segments of the organization work effectively or work against effectiveness toward information security within an organization. Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds."[44] Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.[45]Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, discovered 30% of cybersecurity incidents involved internal actors within a company.[46]Research shows information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[47] In computer security, acountermeasureis an action, device, procedure or technique that reduces a threat, a vulnerability, or anattackby eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.[48][49][50] Some common countermeasures are listed in the following sections: Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature. 
The UK government's National Cyber Security Centre separates secure cyber design principles into five sections:[51] These design principles of security by design can include some of the following techniques: Security architecture can be defined as the "practice of designing computer systems to achieve security goals."[52]These goals have overlap with the principles of "security by design" explored above, including to "make initial compromise of the system difficult," and to "limit the impact of any compromise."[52]In practice, the role of a security architect would be to ensure the structure of a system reinforces the security of the system, and that new changes are safe and meet the security requirements of the organization.[53][54] Similarly, Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:[55] Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization. A state of computer security is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following: Today, computer security consists mainly of preventive measures, likefirewallsor anexit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as theInternet. They can be implemented as software running on the machine, hooking into thenetwork stack(or, in the case of mostUNIX-based operating systems such asLinux, built into the operating systemkernel) to provide real-time filtering and blocking.[56]Another implementation is a so-calledphysical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet. Some organizations are turning tobig dataplatforms, such asApache Hadoop, to extend data accessibility andmachine learningto detectadvanced persistent threats.[58] In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected and is considered the foundation to information security.[59]To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known.[60] Vulnerability management is the cycle of identifying, fixing or mitigatingvulnerabilities,[61]especially in software andfirmware. Vulnerability management is integral to computer security andnetwork security. Vulnerabilities can be discovered with avulnerability scanner, which analyzes a computer system in search of known vulnerabilities,[62]such asopen ports, insecure software configuration, and susceptibility tomalware. In order for these tools to be effective, they must be kept up to date with every new update the vendor release. Typically, these updates will scan for the new vulnerabilities that were introduced recently. Beyond vulnerability scanning, many organizations contract outside security auditors to run regularpenetration testsagainst their systems to identify vulnerabilities. 
In some sectors, this is a contractual requirement.[63] The act of assessing and reducing vulnerabilities to cyber attacks is commonly referred to asinformation technology security assessments. They aim to assess systems for risk and to predict and test for their vulnerabilities. Whileformal verificationof the correctness of computer systems is possible,[64][65]it is not yet common. Operating systems formally verified includeseL4,[66]andSYSGO'sPikeOS[67][68]– but these make up a very small percentage of the market. It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates and by hiring people with expertise in security. Large companies with significant threats can hire Security Operations Centre (SOC) Analysts. These are specialists in cyber defences, with their role ranging from "conducting threat analysis to investigating reports of any new issues and preparing and testing disaster recovery plans."[69] Whilst no measures can completely guarantee the prevention of an attack, these measures can help mitigate the damage of possible attacks. The effects of data loss/damage can be also reduced by carefulbacking upandinsurance. Outside of formal assessments, there are various methods of reducing vulnerabilities.Two factor authenticationis a method for mitigating unauthorized access to a system or sensitive information.[70]It requiressomething you know:a password or PIN, andsomething you have: a card, dongle, cellphone, or another piece of hardware. This increases security as an unauthorized person needs both of these to gain access. Protecting against social engineering and direct computer access (physical) attacks can only happen by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Training is often involved to help mitigate this risk by improving people's knowledge of how to protect themselves and by increasing people's awareness of threats.[71]However, even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent. Inoculation, derived frominoculation theory, seeks to prevent social engineering and other fraudulent tricks and traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.[72] Hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such asdongles,trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below. One use of the termcomputer securityrefers to technology that is used to implementsecure operating systems. Using secure operating systems is a good way of ensuring computer security. These are systems that have achieved certification from an external security-auditing organization, the most popular evaluations areCommon Criteria(CC).[86] In software engineering,secure codingaims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems aresecure by design. Beyond this, formal verification aims to prove thecorrectnessof thealgorithmsunderlying a system;[87]important forcryptographic protocolsfor example. 
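The two-factor authentication described above frequently uses a time-based one-time password (TOTP, RFC 6238) as the "something you have" factor. The sketch below shows how the six-digit code is derived from a shared secret and the current 30-second time step; the base32 secret is a demonstration value only, and production systems should rely on a vetted library and protected secret storage.

```python
# Sketch of the "something you have" factor used by many two-factor systems: a
# time-based one-time password (TOTP, RFC 6238).  The authenticator app and the
# server share a secret and each derive the same 6-digit code from the current
# 30-second time step.  The base32 secret below is a hard-coded demo value only.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // step)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"      # demo value only; never reuse a published secret
    print("current code:", totp(demo_secret))
```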
Within computer systems, two of the mainsecurity modelscapable of enforcing privilege separation areaccess control lists(ACLs) androle-based access control(RBAC). Anaccess-control list(ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Role-based access control is an approach to restricting system access to authorized users,[88][89][90]used by the majority of enterprises with more than 500 employees,[91]and can implementmandatory access control(MAC) ordiscretionary access control(DAC). A further approach,capability-based securityhas been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is theE language. The end-user is widely recognized as the weakest link in the security chain[92]and it is estimated that more than 90% of security incidents and breaches involve some kind of human error.[93][94]Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.[95] As the human component of cyber risk is particularly relevant in determining the global cyber risk[96]an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential[97]in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats. The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers[98]to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks. Related to end-user training,digital hygieneorcyber hygieneis a fundamental principle relating to information security and, as the analogy withpersonal hygieneshows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. 
The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks.[99]Cyber hygiene should also not be mistaken forproactive cyber defence, a military term.[100] The most common acts of digital hygiene can include updating malware protection, cloud back-ups, passwords, and ensuring restricted admin rights and network firewalls.[101]As opposed to a purely technology-based defense against threats, cyber hygiene mostly regards routine measures that are technically simple to implement and mostly dependent on discipline[102]or education.[103]It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal or collective digital security. As such, these measures can be performed by laypeople, not just security experts. Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). However, while the termcomputer viruswas coined almost simultaneously with the creation of the first working computer viruses,[104]the termcyber hygieneis a much later invention, perhaps as late as 2000[105]by Internet pioneerVint Cerf. It has since been adopted by theCongress[106]andSenateof the United States,[107]the FBI,[108]EUinstitutions[99]and heads of state.[100] Responding to attemptedsecurity breachesis often very difficult for a variety of reasons, including: Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatorysecurity breach notification laws. The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there are an increasing number of systems at risk. The computer systems of financial regulators and financial institutions like theU.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets forcybercriminalsinterested in manipulating markets and making illicit gains.[109]Websites and apps that accept or storecredit card numbers, brokerage accounts, andbank accountinformation are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on theblack market.[110]In-store payment systems andATMshave also been tampered with in order to gather customer account data andPINs. TheUCLAInternet Report: Surveying the Digital Future (2000) found that the privacy of personal data created barriers to online sales and that more than nine out of 10 internet users were somewhat or very concerned aboutcredit cardsecurity.[111] The most common web technologies for improving security between browsers and websites are named SSL (Secure Sockets Layer), and its successor TLS (Transport Layer Security),identity managementandauthenticationservices, anddomain nameservices allow companies and consumers to engage in secure communications and commerce. Several versions of SSL and TLS are commonly used today in applications such as web browsing, e-mail, internet faxing,instant messaging, andVoIP(voice-over-IP). There are variousinteroperableimplementations of these technologies, including at least one implementation that isopen source. Open source allows anyone to view the application'ssource code, and look for and report vulnerabilities. 
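The effect of TLS on a browser-to-server exchange, as described above, can be illustrated with Python's standard ssl module: the TCP connection is wrapped so that the server certificate is validated and the subsequent HTTP traffic is encrypted. The host name below is a placeholder.

```python
# Minimal illustration of TLS: open a TCP connection, wrap it with certificate
# validation and hostname checking, then speak HTTP over the encrypted channel.
# "example.com" is a placeholder host.
import socket
import ssl

host = "example.com"
context = ssl.create_default_context()          # validates the server certificate chain

with socket.create_connection((host, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("negotiated:", tls_sock.version(), tls_sock.cipher()[0])
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))
```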
The credit card companiesVisaandMasterCardcooperated to develop the secureEMVchip which is embedded in credit cards. Further developments include theChip Authentication Programwhere banks give customers hand-held card readers to perform online secure transactions. Other developments in this arena include the development of technology such as Instant Issuance which has enabled shoppingmall kiosksacting on behalf of banks to issue on-the-spot credit cards to interested customers. Computers control functions at many utilities, including coordination oftelecommunications, thepower grid,nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but theStuxnetworm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, theComputer Emergency Readiness Team, a division of theDepartment of Homeland Security, investigated 79 hacking incidents at energy companies.[112] Theaviationindustry is very reliant on a series of complex systems which could be attacked.[113]A simple power outage at one airport can cause repercussions worldwide,[114]much of the system relies on radio transmissions which could be disrupted,[115]and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore.[116]There is also potential for attack from within an aircraft.[117] Implementing fixes in aerospace systems poses a unique challenge because efficient air transportation is heavily affected by weight and volume. Improving security by adding physical devices to airplanes could increase their unloaded weight, and could potentially reduce cargo or passenger capacity.[118] In Europe, with the (Pan-European Network Service)[119]and NewPENS,[120]and in the US with the NextGen program,[121]air navigation service providersare moving to create their own dedicated networks. Many modern passports are nowbiometric passports, containing an embeddedmicrochipthat stores a digitized photograph and personal information such as name, gender, and date of birth. In addition, more countries[which?]are introducingfacial recognition technologyto reduceidentity-related fraud. The introduction of the ePassport has assisted border officials in verifying the identity of the passport holder, thus allowing for quick passenger processing.[122]Plans are under way in the US, theUK, andAustraliato introduce SmartGate kiosks with both retina andfingerprint recognitiontechnology.[123]The airline industry is moving from the use of traditional paper tickets towards the use ofelectronic tickets(e-tickets). These have been made possible by advances in online credit card transactions in partnership with the airlines. Long-distance bus companies[which?]are also switching over to e-ticketing transactions today. The consequences of a successful attack range from loss of confidentiality to loss of system integrity,air traffic controloutages, loss of aircraft, and even loss of life. Desktop computers and laptops are commonly targeted to gather passwords or financial account information or to construct a botnet to attack another target.Smartphones,tablet computers,smart watches, and othermobile devicessuch asquantified selfdevices likeactivity trackershave sensors such as cameras, microphones, GPS receivers, compasses, andaccelerometerswhich could be exploited, and may collect personal information, including sensitive health information. 
WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.[124] The increasing number ofhome automationdevices such as theNest thermostatare also potential targets.[124] Today many healthcare providers andhealth insurancecompanies use the internet to provide enhanced products and services. Examples are the use oftele-healthto potentially offer better quality and access to healthcare, or fitness trackers to lower insurance premiums.[citation needed]Patient records are increasingly being placed on secure in-house networks, alleviating the need for extra storage space.[125] Large corporations are common targets. In many cases attacks are aimed at financial gain throughidentity theftand involvedata breaches. Examples include the loss of millions of clients' credit card and financial details byHome Depot,[126]Staples,[127]Target Corporation,[128]andEquifax.[129] Medical records have been targeted in general identify theft, health insurance fraud, and impersonating patients to obtain prescription drugs for recreational purposes or resale.[130]Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.[131] Not all attacks are financially motivated, however: security firmHBGary Federalhad a serious series of attacks in 2011 fromhacktivistgroupAnonymousin retaliation for the firm's CEO claiming to have infiltrated their group,[132][133]andSony Pictureswashacked in 2014with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers.[134][135] Vehicles are increasingly computerized, with engine timing,cruise control,anti-lock brakes, seat belt tensioners, door locks,airbagsandadvanced driver-assistance systemson many models. Additionally,connected carsmay use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network.[136]Self-driving carsare expected to be even more complex. All of these systems carry some security risks, and such issues have gained wide attention.[137][138][139] Simple examples of risk include a maliciouscompact discbeing used as an attack vector,[140]and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internalcontroller area network, the danger is much greater[136]– and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.[141][142] Manufacturers are reacting in numerous ways, withTeslain 2016 pushing out some security fixesover the airinto its cars' computer systems.[143]In the area of autonomous vehicles, in September 2016 theUnited States Department of Transportationannounced some initial safety standards, and called for states to come up with uniform policies.[144][145][146] Additionally, e-Drivers' licenses are being developed using the same technology. For example, Mexico's licensing authority (ICV) has used a smart card platform to issue the first e-Drivers' licenses to the city ofMonterrey, in the state ofNuevo León.[147] Shipping companies[148]have adoptedRFID(Radio Frequency Identification) technology as an efficient, digitally secure,tracking device. Unlike abarcode, RFID can be read up to 20 feet away. 
RFID is used byFedEx[149]andUPS.[150] Government andmilitarycomputer systems are commonly attacked by activists[151][152][153]and foreign powers.[154][155][156][157]Local and regional government infrastructure such astraffic lightcontrols, police and intelligence agency communications,personnel records, as well as student records.[158] TheFBI,CIA, andPentagon, all utilize secure controlled access technology for any of their buildings. However, the use of this form of technology is spreading into the entrepreneurial world. More and more companies are taking advantage of the development of digitally secure controlled access technology. GE's ACUVision, for example, offers a single panel platform for access control, alarm monitoring and digital recording.[159] TheInternet of things(IoT) is the network of physical objects such as devices, vehicles, and buildings that areembeddedwithelectronics,software,sensors, andnetwork connectivitythat enables them to collect and exchange data.[160]Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.[161][162] While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,[163][164]it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat.[165]If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.[166] An attack aimed at physical infrastructure or human lives is often called a cyber-kinetic attack. As IoT devices and appliances become more widespread, the prevalence and potential damage of cyber-kinetic attacks can increase substantially. Medical deviceshave either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment[167]and implanted devices includingpacemakers[168]andinsulin pumps.[169]There are many reports of hospitals and hospital organizations getting hacked, includingransomwareattacks,[170][171][172][173]Windows XPexploits,[174][175]viruses,[176][177]and data breaches of sensitive data stored on hospital servers.[178][171][179][180]On 28 December 2016 the USFood and Drug Administrationreleased its recommendations for how medicaldevice manufacturersshould maintain the security of Internet-connected devices – but no structure for enforcement.[181][182] In distributed generation systems, the risk of a cyber attack is real, according toDaily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility,Pepco, the chance to better estimate energy demand. The D.C. 
proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."[183] Perhaps the most widely known digitally secure telecommunication device is the SIM (Subscriber Identity Module) card, a device that is embedded in most of the world's cellular devices before any service can be obtained. The SIM card is just the beginning of this digitally secure environment. The Smart Card Web Servers draft standard (SCWS) defines the interfaces to an HTTP server in a smart card.[184] Tests are being conducted to secure OTA ("over-the-air") payment and credit card information from and to a mobile phone. Combination SIM/DVD devices are being developed through Smart Video Card technology, which embeds a DVD-compliant optical disc into the card body of a regular SIM card. Other telecommunication developments involving digital security include mobile signatures, which use the embedded SIM card to generate a legally binding electronic signature. Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."[185] However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classic Gordon-Loeb Model analyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach).[186] As with physical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers or vandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced but started with amateurs such as Markus Hess, who hacked for the KGB, as recounted by Clifford Stoll in The Cuckoo's Egg. Attackers' motivations can vary for all types of attacks, from pleasure to political goals.[15] For example, hacktivists may target a company or organization that carries out activities they do not agree with, aiming to create bad publicity for the company by having its website crash. High-capability hackers, often with larger backing or state sponsorship, may attack based on the demands of their financial backers. Such attackers are more likely to attempt more serious attacks.
An example of a more serious attack was the 2015 Ukraine power grid hack, which reportedly utilised spear-phishing, destruction of files, and denial-of-service attacks to carry out the full attack.[187][188] Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas.[189] The growth of the internet, mobile technologies, and inexpensive computing devices has led to a rise in capabilities but also to risk to environments that are deemed vital to operations. All critical targeted environments are susceptible to compromise, and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between the hacker motivation and that of nation state actors seeking to attack based on an ideological preference.[190] A key aspect of threat modeling for any system is identifying the motivations behind potential attacks and the individuals or groups likely to carry them out. The level and detail of security measures will differ based on the specific system being protected. For instance, a home personal computer, a bank, and a classified military network each face distinct threats, despite using similar underlying technologies.[191] Computer security incident management is an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as a data breach or system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses.[192] Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise, and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution. There are four key components of a computer security incident response plan: Some illustrative examples of different types of computer security breaches are given below. In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running malicious code that demanded processor time and that spread itself to other computers – the first internet computer worm.[194] The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, who said "he wanted to count how many machines were connected to the Internet".[194] In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities.
The intruders were able to obtain classified files, such as air tasking order systems data and furthermore able to penetrate connected networks ofNational Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[195] In early 2007, American apparel and home goods companyTJXannounced that it was the victim of anunauthorized computer systems intrusion[196]and that the hackers had accessed a system that stored data oncredit card,debit card,check, and merchandise return transactions.[197] In 2010, the computer worm known asStuxnetreportedly ruined almost one-fifth of Iran'snuclear centrifuges.[198]It did so by disrupting industrialprogrammable logic controllers(PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program[199][200][201][202]– although neither has publicly admitted this. In early 2013, documents provided byEdward Snowdenwere published byThe Washington PostandThe Guardian[203][204]exposing the massive scale ofNSAglobal surveillance. There were also indications that the NSA may have inserted a backdoor in aNISTstandard for encryption.[205]This standard was later withdrawn due to widespread criticism.[206]The NSA additionally were revealed to have tapped the links betweenGoogle's data centers.[207] A Ukrainian hacker known asRescatorbroke intoTarget Corporationcomputers in 2013, stealing roughly 40 million credit cards,[208]and thenHome Depotcomputers in 2014, stealing between 53 and 56 million credit card numbers.[209]Warnings were delivered at both corporations, but ignored; physical security breaches usingself checkout machinesare believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existingantivirus softwarehad administrators responded to the warnings. The size of the thefts has resulted in major attention from state and Federal United States authorities and the investigation is ongoing. In April 2015, theOffice of Personnel Managementdiscovered it had been hackedmore than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office.[210]The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States.[211]Data targeted in the breach includedpersonally identifiable informationsuch asSocial Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check.[212][213]It is believed the hack was perpetrated by Chinese hackers.[214] In July 2015, a hacker group is known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well. 
After the breach, The Impact Team dumped emails from the company's CEO to prove their point and threatened to dump customer data unless the website was taken down permanently.[215] When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7GB and the second 20GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained in operation. In May 2021, a cyber attack took down the largest fuel pipeline in the U.S. and led to shortages across the East Coast.[216] International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals, and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute.[217][218] Proving attribution for cybercrimes and cyberattacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[217] The use of techniques such as dynamic DNS, fast flux and bullet proof servers adds to the difficulty of investigation and enforcement. The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure, such as the national power grid.[219] The government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions.[220] Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve efficiently the cybersecurity problem. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through."[221] On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.[222] On 22 May 2020, the UN Security Council held its second ever informal meeting on cybersecurity to focus on cyber challenges to international peace. According to UN Secretary-General António Guterres, new technologies are too often used to violate rights.[223] Many different teams and organizations exist, including: On 14 April 2016, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation (GDPR). The GDPR, which came into force on 25 May 2018, grants individuals within the European Union (EU) and the European Economic Area (EEA) the right to the protection of personal data.
The regulation requires that any entity that processes personal data incorporate data protection by design and by default. It also requires that certain organizations appoint a Data Protection Officer (DPO). The IT Security Association TeleTrusT, an international competence network for IT security, has existed in Germany since June 1986. Most countries have their own computer emergency response team to protect network security. Since 2010, Canada has had a cybersecurity strategy.[229][230] This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure.[231] The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online.[230][231] There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.[232][233] The Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond and recover from targeted cyber attacks, and provides online tools for members of Canada's critical infrastructure sectors.[234] It posts regular cybersecurity bulletins[235] and operates an online reporting tool where individuals and organizations can report a cyber incident.[236] To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations,[237] and launched the Cyber Security Cooperation Program.[238][239] They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.[240] Public Safety Canada aims to begin an evaluation of Canada's cybersecurity strategy in early 2015.[231] The Australian federal government announced an $18.2 million investment to fortify the cybersecurity resilience of small and medium enterprises (SMEs) and enhance their capabilities in responding to cyber threats. This financial backing is an integral component of the soon-to-be-unveiled 2023-2030 Australian Cyber Security Strategy, slated for release within the current week. A substantial allocation of $7.2 million is earmarked for the establishment of a voluntary cyber health check program, enabling businesses to conduct a comprehensive and tailored self-assessment of their cybersecurity. This health assessment serves as a diagnostic tool, enabling enterprises to ascertain the robustness of Australia's cyber security regulations. Furthermore, it affords them access to a repository of educational resources and materials, fostering the acquisition of skills necessary for an elevated cybersecurity posture. This initiative was jointly disclosed by Minister for Cyber Security Clare O'Neil and Minister for Small Business Julie Collins.[241] Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.[242] The National Cyber Security Policy 2013 is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". CERT-In is the nodal agency which monitors the cyber threats in the country.
The post of National Cyber Security Coordinator has also been created in the Prime Minister's Office (PMO). The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000 Update in 2013.[243] Following cyberattacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011,[244] and 2012, but Pyongyang denies the accusations.[245] With the release of this National Cyber plan, the United States has its first fully formed cyber plan in 15 years.[246] In this policy, the US says it will: protect the country by keeping networks, systems, functions, and data safe; promote American wealth by building a strong digital economy and encouraging strong domestic innovation; keep peace and safety by making it easier for the US to stop people from using computer tools for malicious ends, working with friends and partners to do this; and increase the United States' impact around the world to support the main ideas behind an open, safe, reliable, and compatible Internet.[247] The new U.S. cyber strategy[248] seeks to allay some of those concerns by promoting responsible behavior in cyberspace, urging nations to adhere to a set of norms, both through international law and voluntary standards. It also calls for specific measures to harden U.S. government networks from attacks, like the June 2015 intrusion into the U.S. Office of Personnel Management (OPM), which compromised the records of about 4.2 million current and former government employees. And the strategy calls for the U.S. to continue to name and shame bad cyber actors, calling them out publicly for attacks when possible, along with the use of economic sanctions and diplomatic pressure.[249] The Computer Fraud and Abuse Act of 1986, 18 U.S.C. § 1030, is the key legislation. It prohibits unauthorized access to or damage of protected computers as defined in 18 U.S.C. § 1030(e)(2). Although various other measures have been proposed,[250][251] none have succeeded. In 2013, Executive Order 13636, Improving Critical Infrastructure Cybersecurity, was signed, which prompted the creation of the NIST Cybersecurity Framework. In response to the Colonial Pipeline ransomware attack,[252] President Joe Biden signed Executive Order 14028[253] on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response. The General Services Administration (GSA) has[when?] standardized the penetration test service as a pre-vetted support service, to rapidly address potential vulnerabilities, and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS).
TheDepartment of Homeland Securityhas a dedicated division responsible for the response system,risk managementprogram and requirements for cybersecurity in the United States called theNational Cyber Security Division.[254][255]The division is home to US-CERT operations and the National Cyber Alert System.[255]The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.[256] The third priority of the FBI is to: "Protect the United States against cyber-based attacks and high-technology crimes",[257]and they, along with theNational White Collar Crime Center(NW3C), and theBureau of Justice Assistance(BJA) are part of the multi-agency task force, TheInternet Crime Complaint Center, also known as IC3.[258] In addition to its own specific duties, the FBI participates alongside non-profit organizations such asInfraGard.[259][260] TheComputer Crime and Intellectual Property Section(CCIPS) operates in theUnited States Department of Justice Criminal Division. The CCIPS is in charge of investigatingcomputer crimeandintellectual propertycrime and is specialized in the search and seizure ofdigital evidencein computers andnetworks.[261]In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)."[262] TheUnited States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners."[263]It has no role in the protection of civilian networks.[264][265] The U.S.Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.[266] TheFood and Drug Administrationhas issued guidance for medical devices,[267]and theNational Highway Traffic Safety Administration[268]is concerned with automotive cybersecurity. After being criticized by theGovernment Accountability Office,[269]and following successful attacks on airports and claimed attacks on airplanes, theFederal Aviation Administrationhas devoted funding to securing systems on board the planes of private manufacturers, and theAircraft Communications Addressing and Reporting System.[270]Concerns have also been raised about the futureNext Generation Air Transportation System.[271] The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills and abilities (KSA). 
Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through the ICS2.org's CISSP, etc.[272] Computer emergency response teamis a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together. In the context ofU.S. nuclear power plants, theU.S. Nuclear Regulatory Commission (NRC)outlines cybersecurity requirements under10 CFR Part 73, specifically in §73.54.[274] TheNuclear Energy Institute's NEI 08-09 document,Cyber Security Plan for Nuclear Power Reactors,[275]outlines a comprehensive framework forcybersecurityin thenuclear power industry. Drafted with input from theU.S. NRC, this guideline is instrumental in aidinglicenseesto comply with theCode of Federal Regulations (CFR), which mandates robust protection of digital computers and equipment and communications systems at nuclear power plants against cyber threats.[276] There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton fromThe Christian Science Monitorwrote in a 2015 article titled "The New Cyber Arms Race": In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.[277] This has led to new terms such ascyberwarfareandcyberterrorism. TheUnited States Cyber Commandwas created in 2009[278]and many other countrieshave similar forces. There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be.[279][280][281] Cybersecurity is a fast-growing field ofITconcerned with reducing organizations' risk of hack or data breaches.[282]According to research from the Enterprise Strategy Group, 46% of organizations say that they have a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015.[283]Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail.[284]However, the use of the termcybersecurityis more prevalent in government job descriptions.[285] Typical cybersecurity job titles and descriptions include:[286] Student programs are also available for people interested in beginning a career in cybersecurity.[290][291]Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts.[292][293]A wide range of certified courses are also available.[294] In the United Kingdom, a nationwide set of cybersecurity forums, known as theU.K Cyber Security Forum, were established supported by the Government's cybersecurity strategy[295]in order to encourage start-ups and innovation and to address the skills gap[296]identified by theU.K Government. In Singapore, theCyber Security Agencyhas issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. 
The OTCCF was endorsed by the Infocomm Media Development Authority (IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities.[297] The following terms used with regard to computer security are explained below: Since the Internet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject in both our professional and personal lives. Cybersecurity and cyber threats have been consistently present for the last 60 years of technological change. In the 1970s and 1980s, computer security was mainly limited to academia until the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of organized attacks such as distributed denial of service.[301] This led to the formalization of cybersecurity as a professional discipline.[302] The April 1967 session organized by Willis Ware at the Spring Joint Computer Conference, and the later publication of the Ware Report, were foundational moments in the history of the field of computer security.[303] Ware's work straddled the intersection of material, cultural, political, and social concerns.[303] A 1977 NIST publication[304] introduced the CIA triad of confidentiality, integrity, and availability as a clear and simple way to describe key security goals.[305] While still relevant, many more elaborate frameworks have since been proposed.[306][307] However, in the 1970s and 1980s, there were no grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. More often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, attackers did not use them for financial gain. By the second half of the 1970s, established computer firms like IBM started offering commercial access control systems and computer security software products.[308] One of the earliest examples of an attack on a computer network was the computer worm Creeper, written by Bob Thomas at BBN, which propagated through the ARPANET in 1971.[309] The program was purely experimental in nature and carried no malicious payload. A later program, Reaper, was created by Ray Tomlinson in 1972 and used to destroy Creeper.[citation needed] Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage.[310] The group hacked into the networks of American defense contractors, universities, and military bases and sold gathered information to the Soviet KGB. The group was led by Markus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 February 1990. In 1988, one of the first computer worms, called the Morris worm, was distributed via the Internet.
It gained significant mainstream media attention.[311] Netscape started developing the SSL protocol shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, the first web browser, in 1993.[312][313] Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities.[312] However, in 1995, Netscape launched version 2.0.[314] The National Security Agency (NSA) is responsible for the protection of U.S. information systems and also for collecting foreign intelligence.[315] The agency analyzes commonly used software and system configurations to find security flaws, which it can use for offensive purposes against competitors of the United States.[316] NSA contractors created and sold click-and-shoot attack tools to US agencies and close allies, but eventually, the tools made their way to foreign adversaries.[317] In 2016, the NSA's own hacking tools were hacked, and they have been used by Russia and North Korea.[citation needed] NSA employees and contractors have been recruited at high salaries by adversaries anxious to compete in cyberwarfare.[citation needed] In 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. Iran responded by heavily investing in its own cyberwarfare capability, which it began using against the United States.[316]
https://en.wikipedia.org/wiki/Computer_security#Mitigation
Experimental mathematicsis an approach tomathematicsin which computation is used to investigate mathematical objects and identify properties and patterns.[1]It has been defined as "that branch of mathematics that concerns itself ultimately with the codification and transmission of insights within the mathematical community through the use of experimental (in either the Galilean, Baconian, Aristotelian or Kantian sense) exploration ofconjecturesand more informal beliefs and a careful analysis of the data acquired in this pursuit."[2] As expressed byPaul Halmos: "Mathematics is not adeductive science—that's a cliché. When you try to prove a theorem, you don't just list thehypotheses, and then start to reason. What you do istrial and error, experimentation, guesswork. You want to find out what the facts are, and what you do is in that respect similar to what a laboratory technician does."[3] Mathematicians have always practiced experimental mathematics. Existing records of early mathematics, such asBabylonian mathematics, typically consist of lists of numerical examples illustrating algebraic identities. However, modern mathematics, beginning in the 17th century, developed a tradition of publishing results in a final, formal and abstract presentation. The numerical examples that may have led a mathematician to originally formulate a general theorem were not published, and were generally forgotten. Experimental mathematics as a separate area of study re-emerged in the twentieth century, when the invention of the electronic computer vastly increased the range of feasible calculations, with a speed and precision far greater than anything available to previous generations of mathematicians. A significant milestone and achievement of experimental mathematics was the discovery in 1995 of theBailey–Borwein–Plouffe formulafor the binary digits ofπ. This formula was discovered not by formal reasoning, but instead by numerical searches on a computer; only afterwards was a rigorousprooffound.[4] The objectives of experimental mathematics are "to generate understanding and insight; to generate and confirm or confront conjectures; and generally to make mathematics more tangible, lively and fun for both the professional researcher and the novice".[5] The uses of experimental mathematics have been defined as follows:[6] Experimental mathematics makes use ofnumerical methodsto calculate approximate values forintegralsandinfinite series.Arbitrary precision arithmeticis often used to establish these values to a high degree of precision – typically 100 significant figures or more.Integer relation algorithmsare then used to search for relations between these values andmathematical constants. Working with high precision values reduces the possibility of mistaking amathematical coincidencefor a true relation. A formal proof of a conjectured relation will then be sought – it is often easier to find a formal proof once the form of a conjectured relation is known. If acounterexampleis being sought or a large-scaleproof by exhaustionis being attempted,distributed computingtechniques may be used to divide the calculations between multiple computers. Frequent use is made of generalmathematical softwareor domain-specific software written for attacks on problems that require high efficiency. Experimental mathematics software usually includeserror detection and correctionmechanisms, integrity checks and redundant calculations designed to minimise the possibility of results being invalidated by a hardware or software error. 
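The workflow just described (high-precision evaluation followed by an integer relation search) can be made concrete with a short sketch. The following Python fragment uses the mpmath library, and the constants ζ(2) and π² are chosen purely for illustration; neither the library nor the example is prescribed by the text above.

from mpmath import mp, pslq

mp.dps = 60                      # work with 60 significant digits
values = [mp.zeta(2), mp.pi**2]  # two numerically computed constants
relation = pslq(values)          # search for integers a, b with a*zeta(2) + b*pi^2 close to 0
print(relation)                  # [6, -1] (up to sign), suggesting the conjecture zeta(2) = pi^2 / 6

The relation found numerically is only a conjecture; a formal proof (here, Euler's classical evaluation of ζ(2)) must still be supplied, and working at 60 digits rather than machine precision guards against mistaking a numerical coincidence for a true relation.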
Applications and examples of experimental mathematics include: Some plausible relations hold to a high degree of accuracy, but are still not true. One example is the approximation ∫₀^∞ cos(2x) ∏_{n=1}^∞ cos(x/n) dx ≈ π/8: the two sides of this expression actually differ after the 42nd decimal place.[13] Another example is that the maximum height (maximum absolute value of coefficients) of all the factors of x^n − 1 appears to be the same as the height of the nth cyclotomic polynomial. This was shown by computer to be true for n < 10000 and was expected to be true for all n. However, a larger computer search showed that this equality fails to hold for n = 14235, when the height of the nth cyclotomic polynomial is 2, but the maximum height of the factors is 3.[14] The following mathematicians and computer scientists have made significant contributions to the field of experimental mathematics:
https://en.wikipedia.org/wiki/Experimental_mathematics
Inmachine learning, thekernel embedding of distributions(also called thekernel meanormean map) comprises a class ofnonparametricmethods in which aprobability distributionis represented as an element of areproducing kernel Hilbert space(RKHS).[1]A generalization of the individual data-point feature mapping done in classicalkernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such asinner products, distances,projections,linear transformations, andspectral analysis.[2]Thislearningframework is very general and can be applied to distributions over any spaceΩ{\displaystyle \Omega }on which a sensiblekernel function(measuring similarity between elements ofΩ{\displaystyle \Omega }) may be defined. For example, various kernels have been proposed for learning from data which are:vectorsinRd{\displaystyle \mathbb {R} ^{d}}, discrete classes/categories,strings,graphs/networks, images,time series,manifolds,dynamical systems, and other structured objects.[3][4]The theory behind kernel embeddings of distributions has been primarily developed byAlex Smola,Le Song,Arthur Gretton, andBernhard Schölkopf. A review of recent works on kernel embedding of distributions can be found in.[5] The analysis of distributions is fundamental inmachine learningandstatistics, and many algorithms in these fields rely on information theoretic approaches such asentropy,mutual information, orKullback–Leibler divergence. However, to estimate these quantities, one must first either perform density estimation, or employ sophisticated space-partitioning/bias-correction strategies which are typically infeasible for high-dimensional data.[6]Commonly, methods for modeling complex distributions rely on parametric assumptions that may be unfounded or computationally challenging (e.g.Gaussian mixture models), while nonparametric methods likekernel density estimation(Note: the smoothing kernels in this context have a different interpretation than the kernels discussed here) orcharacteristic functionrepresentation (via theFourier transformof the distribution) break down in high-dimensional settings.[2] Methods based on the kernel embedding of distributions sidestep these problems and also possess the following advantages:[6] Thus, learning via the kernel embedding of distributions offers a principled drop-in replacement for information theoretic approaches and is a framework which not only subsumes many popular methods in machine learning and statistics as special cases, but also can lead to entirely new learning algorithms. LetX{\displaystyle X}denote a random variable with domainΩ{\displaystyle \Omega }and distributionP{\displaystyle P}. 
Given a symmetric,positive-definite kernelk:Ω×Ω→R{\displaystyle k:\Omega \times \Omega \rightarrow \mathbb {R} }theMoore–Aronszajn theoremasserts the existence of a unique RKHSH{\displaystyle {\mathcal {H}}}onΩ{\displaystyle \Omega }(aHilbert spaceof functionsf:Ω→R{\displaystyle f:\Omega \to \mathbb {R} }equipped with an inner product⟨⋅,⋅⟩H{\displaystyle \langle \cdot ,\cdot \rangle _{\mathcal {H}}}and a norm‖⋅‖H{\displaystyle \|\cdot \|_{\mathcal {H}}}) for whichk{\displaystyle k}is a reproducing kernel, i.e., in which the elementk(x,⋅){\displaystyle k(x,\cdot )}satisfies the reproducing property One may alternatively considerx↦k(x,⋅){\displaystyle x\mapsto k(x,\cdot )}as an implicit feature mappingφ:Ω→H{\displaystyle \varphi :\Omega \rightarrow {\mathcal {H}}}(which is therefore also called the feature space), so thatk(x,x′)=⟨φ(x),φ(x′)⟩H{\displaystyle k(x,x')=\langle \varphi (x),\varphi (x')\rangle _{\mathcal {H}}}can be viewed as a measure of similarity between pointsx,x′∈Ω.{\displaystyle x,x'\in \Omega .}While thesimilarity measureis linear in the feature space, it may be highly nonlinear in the original space depending on the choice of kernel. The kernel embedding of the distributionP{\displaystyle P}inH{\displaystyle {\mathcal {H}}}(also called thekernel meanormean map) is given by:[1] IfP{\displaystyle P}allows a square integrable densityp{\displaystyle p}, thenμX=Ekp{\displaystyle \mu _{X}={\mathcal {E}}_{k}p}, whereEk{\displaystyle {\mathcal {E}}_{k}}is theHilbert–Schmidt integral operator. A kernel ischaracteristicif the mean embeddingμ:{family of distributions overΩ}→H{\displaystyle \mu :\{{\text{family of distributions over }}\Omega \}\to {\mathcal {H}}}is injective.[7]Each distribution can thus be uniquely represented in the RKHS and all statistical features of distributions are preserved by the kernel embedding if a characteristic kernel is used. Givenn{\displaystyle n}training examples{x1,…,xn}{\displaystyle \{x_{1},\ldots ,x_{n}\}}drawnindependently and identically distributed(i.i.d.) fromP,{\displaystyle P,}the kernel embedding ofP{\displaystyle P}can be empirically estimated as IfY{\displaystyle Y}denotes another random variable (for simplicity, assume the co-domain ofY{\displaystyle Y}is alsoΩ{\displaystyle \Omega }with the same kernelk{\displaystyle k}which satisfies⟨φ(x)⊗φ(y),φ(x′)⊗φ(y′)⟩=k(x,x′)k(y,y′){\displaystyle \langle \varphi (x)\otimes \varphi (y),\varphi (x')\otimes \varphi (y')\rangle =k(x,x')k(y,y')}), then thejoint distributionP(x,y)){\displaystyle P(x,y))}can be mapped into atensor productfeature spaceH⊗H{\displaystyle {\mathcal {H}}\otimes {\mathcal {H}}}via[2] By the equivalence between atensorand alinear map, this joint embedding may be interpreted as an uncenteredcross-covarianceoperatorCXY:H→H{\displaystyle {\mathcal {C}}_{XY}:{\mathcal {H}}\to {\mathcal {H}}}from which the cross-covariance of functionsf,g∈H{\displaystyle f,g\in {\mathcal {H}}}can be computed as[8] Givenn{\displaystyle n}pairs of training examples{(x1,y1),…,(xn,yn)}{\displaystyle \{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}}drawn i.i.d. fromP{\displaystyle P}, we can also empirically estimate the joint distribution kernel embedding via Given aconditional distributionP(y∣x),{\displaystyle P(y\mid x),}one can define the corresponding RKHS embedding as[2] Note that the embedding ofP(y∣x){\displaystyle P(y\mid x)}thus defines a family of points in the RKHS indexed by the valuesx{\displaystyle x}taken by conditioning variableX{\displaystyle X}. 
By fixingX{\displaystyle X}to a particular value, we obtain a single element inH{\displaystyle {\mathcal {H}}}, and thus it is natural to define the operator which given the feature mapping ofx{\displaystyle x}outputs the conditional embedding ofY{\displaystyle Y}givenX=x.{\displaystyle X=x.}Assuming that for allg∈H:E[g(Y)∣X]∈H,{\displaystyle g\in {\mathcal {H}}:\mathbb {E} [g(Y)\mid X]\in {\mathcal {H}},}it can be shown that[8] This assumption is always true for finite domains with characteristic kernels, but may not necessarily hold for continuous domains.[2]Nevertheless, even in cases where the assumption fails,CY∣Xφ(x){\displaystyle {\mathcal {C}}_{Y\mid X}\varphi (x)}may still be used to approximate the conditional kernel embeddingμY∣x,{\displaystyle \mu _{Y\mid x},}and in practice, the inversion operator is replaced with a regularized version of itself(CXX+λI)−1{\displaystyle ({\mathcal {C}}_{XX}+\lambda \mathbf {I} )^{-1}}(whereI{\displaystyle \mathbf {I} }denotes theidentity matrix). Given training examples{(x1,y1),…,(xn,yn)},{\displaystyle \{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\},}the empirical kernel conditional embedding operator may be estimated as[2] whereΦ=(φ(y1),…,φ(yn)),Υ=(φ(x1),…,φ(xn)){\displaystyle {\boldsymbol {\Phi }}=\left(\varphi (y_{1}),\dots ,\varphi (y_{n})\right),{\boldsymbol {\Upsilon }}=\left(\varphi (x_{1}),\dots ,\varphi (x_{n})\right)}are implicitly formed feature matrices,K=ΥTΥ{\displaystyle \mathbf {K} ={\boldsymbol {\Upsilon }}^{T}{\boldsymbol {\Upsilon }}}is the Gram matrix for samples ofX{\displaystyle X}, andλ{\displaystyle \lambda }is aregularizationparameter needed to avoidoverfitting. Thus, the empirical estimate of the kernel conditional embedding is given by a weighted sum of samples ofY{\displaystyle Y}in the feature space: whereβ(x)=(K+λI)−1Kx{\displaystyle {\boldsymbol {\beta }}(x)=(\mathbf {K} +\lambda \mathbf {I} )^{-1}\mathbf {K} _{x}}andKx=(k(x1,x),…,k(xn,x))T{\displaystyle \mathbf {K} _{x}=\left(k(x_{1},x),\dots ,k(x_{n},x)\right)^{T}} This section illustrates how basic probabilistic rules may be reformulated as (multi)linear algebraic operations in the kernel embedding framework and is primarily based on the work of Song et al.[2][8]The following notation is adopted: In practice, all embeddings are empirically estimated from data{(x1,y1),…,(xn,yn)}{\displaystyle \{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}}and it assumed that a set of samples{y~1,…,y~n~}{\displaystyle \{{\widetilde {y}}_{1},\ldots ,{\widetilde {y}}_{\widetilde {n}}\}}may be used to estimate the kernel embedding of the prior distributionπ(Y){\displaystyle \pi (Y)}. 
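Before turning to these rules, a minimal numerical sketch of the empirical conditional embedding estimator above may be helpful. The Python fragment below uses a Gaussian kernel and synthetic data, both of which are assumptions made for the example rather than part of the original exposition; it estimates E[g(Y) | X = x] as the weighted sum Σ_i β_i(x) g(y_i) with β(x) = (K + λI)⁻¹ K_x.

import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    # Gram matrix with entries k(a, b) = exp(-(a - b)^2 / (2 sigma^2)) for scalar inputs
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * sigma**2))

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)                      # training inputs
y = np.sin(x) + 0.1 * rng.standard_normal(200)        # training outputs

lam = 1e-3
K = gaussian_gram(x, x)                               # Gram matrix of the X samples
K_x = gaussian_gram(x, np.array([0.5]))               # kernel evaluations at the query point x* = 0.5
beta = np.linalg.solve(K + lam * np.eye(len(x)), K_x)[:, 0]   # weights beta(x*)

g = lambda t: t**2                                    # a test function g
print(beta @ g(y))                                    # estimate of E[g(Y) | X = 0.5]

With these toy data the estimate should be close to sin(0.5)² + 0.01, the true conditional expectation of Y² given X = 0.5.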
In probability theory, the marginal distribution ofX{\displaystyle X}can be computed by integrating outY{\displaystyle Y}from the joint density (including the prior distribution onY{\displaystyle Y}) The analog of this rule in the kernel embedding framework states thatμXπ,{\displaystyle \mu _{X}^{\pi },}the RKHS embedding ofQ(X){\displaystyle Q(X)}, can be computed via whereμYπ{\displaystyle \mu _{Y}^{\pi }}is the kernel embedding ofπ(Y).{\displaystyle \pi (Y).}In practical implementations, the kernel sum rule takes the following form where is the empirical kernel embedding of the prior distribution,α=(α1,…,αn~)T,{\displaystyle {\boldsymbol {\alpha }}=(\alpha _{1},\ldots ,\alpha _{\widetilde {n}})^{T},}Υ=(φ(x1),…,φ(xn)){\displaystyle {\boldsymbol {\Upsilon }}=\left(\varphi (x_{1}),\ldots ,\varphi (x_{n})\right)}, andG,G~{\displaystyle \mathbf {G} ,{\widetilde {\mathbf {G} }}}are Gram matrices with entriesGij=k(yi,yj),G~ij=k(yi,y~j){\displaystyle \mathbf {G} _{ij}=k(y_{i},y_{j}),{\widetilde {\mathbf {G} }}_{ij}=k(y_{i},{\widetilde {y}}_{j})}respectively. In probability theory, a joint distribution can be factorized into a product between conditional and marginal distributions The analog of this rule in the kernel embedding framework states thatCXYπ,{\displaystyle {\mathcal {C}}_{XY}^{\pi },}the joint embedding ofQ(X,Y),{\displaystyle Q(X,Y),}can be factorized as a composition of conditional embedding operator with the auto-covariance operator associated withπ(Y){\displaystyle \pi (Y)} where In practical implementations, the kernel chain rule takes the following form In probability theory, a posterior distribution can be expressed in terms of a prior distribution and a likelihood function as The analog of this rule in the kernel embedding framework expresses the kernel embedding of the conditional distribution in terms of conditional embedding operators which are modified by the prior distribution where from the chain rule: In practical implementations, the kernel Bayes' rule takes the following form where Two regularization parameters are used in this framework:λ{\displaystyle \lambda }for the estimation ofC^YXπ,C^XXπ=ΥDΥT{\displaystyle {\widehat {\mathcal {C}}}_{YX}^{\pi },{\widehat {\mathcal {C}}}_{XX}^{\pi }={\boldsymbol {\Upsilon }}\mathbf {D} {\boldsymbol {\Upsilon }}^{T}}andλ~{\displaystyle {\widetilde {\lambda }}}for the estimation of the final conditional embedding operator The latter regularization is done on square ofC^XXπ{\displaystyle {\widehat {\mathcal {C}}}_{XX}^{\pi }}becauseD{\displaystyle D}may not bepositive definite. Themaximum mean discrepancy (MMD)is a distance-measure between distributionsP(X){\displaystyle P(X)}andQ(Y){\displaystyle Q(Y)}which is defined as the distance between their embeddings in the RKHS[6] While most distance-measures between distributions such as the widely usedKullback–Leibler divergenceeither require density estimation (either parametrically or nonparametrically) or space partitioning/bias correction strategies,[6]the MMD is easily estimated as an empirical mean which is concentrated around the true value of the MMD. The characterization of this distance as themaximum mean discrepancyrefers to the fact that computing the MMD is equivalent to finding the RKHS function that maximizes the difference in expectations between the two probability distributions a form ofintegral probability metric. 
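A minimal sketch of the (biased) empirical squared-MMD estimator makes the definition concrete before it is used for testing: it compares the empirical kernel mean embeddings of the two samples through three Gram matrices. The Gaussian kernel and the synthetic samples below are assumptions made for the example.

import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # ||mu_X - mu_Y||^2 estimated as mean k(x, x') + mean k(y, y') - 2 mean k(x, y)
    return (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2 * gaussian_gram(X, Y, sigma).mean())

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(300, 2))   # sample from P
Y = rng.normal(0.5, 1.0, size=(300, 2))   # sample from Q (mean shifted by 0.5)
print(mmd2_biased(X, Y))                  # clearly larger than the value between two fresh samples of P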
Givenntraining examples fromP(X){\displaystyle P(X)}andmsamples fromQ(Y){\displaystyle Q(Y)}, one can formulate a test statistic based on the empirical estimate of the MMD to obtain atwo-sample test[15]of the null hypothesis that both samples stem from the same distribution (i.e.P=Q{\displaystyle P=Q}) against the broad alternativeP≠Q{\displaystyle P\neq Q}. Although learning algorithms in the kernel embedding framework circumvent the need for intermediate density estimation, one may nonetheless use the empirical embedding to perform density estimation based onnsamples drawn from an underlying distributionPX∗{\displaystyle P_{X}^{*}}. This can be done by solving the following optimization problem[6][16] where the maximization is done over the entire space of distributions onΩ.{\displaystyle \Omega .}Here,μX[PX]{\displaystyle \mu _{X}[P_{X}]}is the kernel embedding of the proposed densityPX{\displaystyle P_{X}}andH{\displaystyle H}is an entropy-like quantity (e.g.Entropy,KL divergence,Bregman divergence). The distribution which solves this optimization may be interpreted as a compromise between fitting the empirical kernel means of the samples well, while still allocating a substantial portion of the probability mass to all regions of the probability space (much of which may not be represented in the training examples). In practice, a good approximate solution of the difficult optimization may be found by restricting the space of candidate densities to a mixture ofMcandidate distributions with regularized mixing proportions. Connections between the ideas underlyingGaussian processesandconditional random fieldsmay be drawn with the estimation of conditional probability distributions in this fashion, if one views the feature mappings associated with the kernel as sufficient statistics in generalized (possibly infinite-dimensional)exponential families.[6] A measure of the statistical dependence between random variablesX{\displaystyle X}andY{\displaystyle Y}(from any domains on which sensible kernels can be defined) can be formulated based on the Hilbert–Schmidt Independence Criterion[17] and can be used as a principled replacement formutual information,Pearson correlationor any other dependence measure used in learning algorithms. Most notably, HSIC can detect arbitrary dependencies (when a characteristic kernel is used in the embeddings, HSIC is zero if and only if the variables areindependent), and can be used to measure dependence between different types of data (e.g. images and text captions). Givenni.i.d. samples of each random variable, a simple parameter-freeunbiasedestimator of HSIC which exhibitsconcentrationabout the true value can be computed inO(n(df2+dg2)){\displaystyle O(n(d_{f}^{2}+d_{g}^{2}))}time,[6]where the Gram matrices of the two datasets are approximated usingAAT,BBT{\displaystyle \mathbf {A} \mathbf {A} ^{T},\mathbf {B} \mathbf {B} ^{T}}withA∈Rn×df,B∈Rn×dg{\displaystyle \mathbf {A} \in \mathbb {R} ^{n\times d_{f}},\mathbf {B} \in \mathbb {R} ^{n\times d_{g}}}. The desirable properties of HSIC have led to the formulation of numerous algorithms which utilize this dependence measure for a variety of common machine learning tasks such as:feature selection(BAHSIC[18]),clustering(CLUHSIC[19]), anddimensionality reduction(MUHSIC[20]). HSIC can be extended to measure the dependence of multiple random variables. 
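For illustration, the commonly used biased empirical HSIC estimator (not the unbiased estimator with the O(n(d_f² + d_g²)) cost quoted above) can be written, up to the choice of normalization, as tr(K H L H)/n², where K and L are the Gram matrices of the two samples and H is the centering matrix. The sketch below assumes a Gaussian kernel for both variables and synthetic data.

import numpy as np

def gaussian_gram(A, sigma=1.0):
    d2 = np.sum((A[:, None, :] - A[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def hsic_biased(X, Y, sigma=1.0):
    # Biased empirical HSIC: trace(K H L H) / n^2 with H = I - (1/n) * ones
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(gaussian_gram(X, sigma) @ H @ gaussian_gram(Y, sigma) @ H) / n**2

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 1))
Y_dep = X**2 + 0.1 * rng.standard_normal((400, 1))   # nonlinearly dependent on X (but uncorrelated with it)
Y_ind = rng.standard_normal((400, 1))                 # independent of X
print(hsic_biased(X, Y_dep), hsic_biased(X, Y_ind))   # the dependent pair gives the larger value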
The question of when HSIC captures independence in this case has recently been studied:[21]for more than two variables Belief propagationis a fundamental algorithm for inference ingraphical modelsin which nodes repeatedly pass and receive messages corresponding to the evaluation of conditional expectations. In the kernel embedding framework, the messages may be represented as RKHS functions and the conditional distribution embeddings can be applied to efficiently compute message updates. Givennsamples of random variables represented by nodes in aMarkov random field, the incoming message to nodetfrom nodeucan be expressed as if it assumed to lie in the RKHS. Thekernel belief propagation updatemessage fromtto nodesis then given by[2] where⊙{\displaystyle \odot }denotes the element-wise vector product,N(t)∖s{\displaystyle N(t)\backslash s}is the set of nodes connected totexcluding nodes,βut=(βut1,…,βutn){\displaystyle {\boldsymbol {\beta }}_{ut}=\left(\beta _{ut}^{1},\dots ,\beta _{ut}^{n}\right)},Kt,Ks{\displaystyle \mathbf {K} _{t},\mathbf {K} _{s}}are the Gram matrices of the samples from variablesXt,Xs{\displaystyle X_{t},X_{s}}, respectively, andΥs=(φ(xs1),…,φ(xsn)){\displaystyle {\boldsymbol {\Upsilon }}_{s}=\left(\varphi (x_{s}^{1}),\dots ,\varphi (x_{s}^{n})\right)}is the feature matrix for the samples fromXs{\displaystyle X_{s}}. Thus, if the incoming messages to nodetare linear combinations of feature mapped samples fromXt{\displaystyle X_{t}}, then the outgoing message from this node is also a linear combination of feature mapped samples fromXs{\displaystyle X_{s}}. This RKHS function representation of message-passing updates therefore produces an efficient belief propagation algorithm in which thepotentialsare nonparametric functions inferred from the data so that arbitrary statistical relationships may be modeled.[2] In thehidden Markov model(HMM), two key quantities of interest are the transition probabilities between hidden statesP(St∣St−1){\displaystyle P(S^{t}\mid S^{t-1})}and the emission probabilitiesP(Ot∣St){\displaystyle P(O^{t}\mid S^{t})}for observations. Using the kernel conditional distribution embedding framework, these quantities may be expressed in terms of samples from the HMM. A serious limitation of the embedding methods in this domain is the need for training samples containing hidden states, as otherwise inference with arbitrary distributions in the HMM is not possible. One common use of HMMs isfilteringin which the goal is to estimate posterior distribution over the hidden statest{\displaystyle s^{t}}at time steptgiven a history of previous observationsht=(o1,…,ot){\displaystyle h^{t}=(o^{1},\dots ,o^{t})}from the system. In filtering, abelief stateP(St+1∣ht+1){\displaystyle P(S^{t+1}\mid h^{t+1})}is recursively maintained via a prediction step (where updatesP(St+1∣ht)=E[P(St+1∣St)∣ht]{\displaystyle P(S^{t+1}\mid h^{t})=\mathbb {E} [P(S^{t+1}\mid S^{t})\mid h^{t}]}are computed by marginalizing out the previous hidden state) followed by a conditioning step (where updatesP(St+1∣ht,ot+1)∝P(ot+1∣St+1)P(St+1∣ht){\displaystyle P(S^{t+1}\mid h^{t},o^{t+1})\propto P(o^{t+1}\mid S^{t+1})P(S^{t+1}\mid h^{t})}are computed by applying Bayes' rule to condition on a new observation).[2]The RKHS embedding of the belief state at timet+1can be recursively expressed as by computing the embeddings of the prediction step via thekernel sum ruleand the embedding of the conditioning step viakernel Bayes' rule. 
Assuming a training sample(s~1,…,s~T,o~1,…,o~T){\displaystyle ({\widetilde {s}}^{1},\dots ,{\widetilde {s}}^{T},{\widetilde {o}}^{1},\dots ,{\widetilde {o}}^{T})}is given, one can in practice estimate and filtering with kernel embeddings is thus implemented recursively using the following updates for the weightsα=(α1,…,αT){\displaystyle {\boldsymbol {\alpha }}=(\alpha _{1},\dots ,\alpha _{T})}[2] whereG,K{\displaystyle \mathbf {G} ,\mathbf {K} }denote the Gram matrices ofs~1,…,s~T{\displaystyle {\widetilde {s}}^{1},\dots ,{\widetilde {s}}^{T}}ando~1,…,o~T{\displaystyle {\widetilde {o}}^{1},\dots ,{\widetilde {o}}^{T}}respectively,G~{\displaystyle {\widetilde {\mathbf {G} }}}is a transfer Gram matrix defined asG~ij=k(s~i,s~j+1),{\displaystyle {\widetilde {\mathbf {G} }}_{ij}=k({\widetilde {s}}_{i},{\widetilde {s}}_{j+1}),}andKot+1=(k(o~1,ot+1),…,k(o~T,ot+1))T.{\displaystyle \mathbf {K} _{o^{t+1}}=(k({\widetilde {o}}^{1},o^{t+1}),\dots ,k({\widetilde {o}}^{T},o^{t+1}))^{T}.} Thesupport measure machine(SMM) is a generalization of thesupport vector machine(SVM) in which the training examples are probability distributions paired with labels{Pi,yi}i=1n,yi∈{+1,−1}{\displaystyle \{P_{i},y_{i}\}_{i=1}^{n},\ y_{i}\in \{+1,-1\}}.[22]SMMs solve the standard SVMdual optimization problemusing the followingexpected kernel which is computable in closed form for many common specific distributionsPi{\displaystyle P_{i}}(such as the Gaussian distribution) combined with popular embedding kernelsk{\displaystyle k}(e.g. the Gaussian kernel or polynomial kernel), or can be accurately empirically estimated from i.i.d. samples{xi}i=1n∼P(X),{zj}j=1m∼Q(Z){\displaystyle \{x_{i}\}_{i=1}^{n}\sim P(X),\{z_{j}\}_{j=1}^{m}\sim Q(Z)}via Under certain choices of the embedding kernelk{\displaystyle k}, the SMM applied to training examples{Pi,yi}i=1n{\displaystyle \{P_{i},y_{i}\}_{i=1}^{n}}is equivalent to a SVM trained on samples{xi,yi}i=1n{\displaystyle \{x_{i},y_{i}\}_{i=1}^{n}}, and thus the SMM can be viewed as aflexibleSVM in which a different data-dependent kernel (specified by the assumed form of the distributionPi{\displaystyle P_{i}}) may be placed on each training point.[22] The goal ofdomain adaptationis the formulation of learning algorithms which generalize well when the training and test data have different distributions. Given training examples{(xitr,yitr)}i=1n{\displaystyle \{(x_{i}^{\text{tr}},y_{i}^{\text{tr}})\}_{i=1}^{n}}and a test set{(xjte,yjte)}j=1m{\displaystyle \{(x_{j}^{\text{te}},y_{j}^{\text{te}})\}_{j=1}^{m}}where theyjte{\displaystyle y_{j}^{\text{te}}}are unknown, three types of differences are commonly assumed between the distribution of the training examplesPtr(X,Y){\displaystyle P^{\text{tr}}(X,Y)}and the test distributionPte(X,Y){\displaystyle P^{\text{te}}(X,Y)}:[23][24] By utilizing the kernel embedding of marginal and conditional distributions, practical approaches to deal with the presence of these types of differences between training and test domains can be formulated. 
Covariate shift may be accounted for by reweighting examples via estimates of the ratioPte(X)/Ptr(X){\displaystyle P^{\text{te}}(X)/P^{\text{tr}}(X)}obtained directly from the kernel embeddings of the marginal distributions ofX{\displaystyle X}in each domain without any need for explicit estimation of the distributions.[24]Target shift, which cannot be similarly dealt with since no samples fromY{\displaystyle Y}are available in the test domain, is accounted for by weighting training examples using the vectorβ∗(ytr){\displaystyle {\boldsymbol {\beta }}^{*}(\mathbf {y} ^{\text{tr}})}which solves the following optimization problem (where in practice, empirical approximations must be used)[23] To deal with location scale conditional shift, one can perform a LS transformation of the training points to obtain new transformed training dataXnew=Xtr⊙W+B{\displaystyle \mathbf {X} ^{\text{new}}=\mathbf {X} ^{\text{tr}}\odot \mathbf {W} +\mathbf {B} }(where⊙{\displaystyle \odot }denotes the element-wise vector product). To ensure similar distributions between the new transformed training samples and the test data,W,B{\displaystyle \mathbf {W} ,\mathbf {B} }are estimated by minimizing the following empirical kernel embedding distance[23] In general, the kernel embedding methods for dealing with LS conditional shift and target shift may be combined to find a reweighted transformation of the training data which mimics the test distribution, and these methods may perform well even in the presence of conditional shifts other than location-scale changes.[23] GivenNsets of training examples sampled i.i.d. from distributionsP(1)(X,Y),P(2)(X,Y),…,P(N)(X,Y){\displaystyle P^{(1)}(X,Y),P^{(2)}(X,Y),\ldots ,P^{(N)}(X,Y)}, the goal ofdomain generalizationis to formulate learning algorithms which perform well on test examples sampled from a previously unseen domainP∗(X,Y){\displaystyle P^{*}(X,Y)}where no data from the test domain is available at training time. If conditional distributionsP(Y∣X){\displaystyle P(Y\mid X)}are assumed to be relatively similar across all domains, then a learner capable of domain generalization must estimate a functional relationship between the variables which is robust to changes in the marginalsP(X){\displaystyle P(X)}. Based on kernel embeddings of these distributions, Domain Invariant Component Analysis (DICA) is a method which determines the transformation of the training data that minimizes the difference between marginal distributions while preserving a common conditional distribution shared between all training domains.[25]DICA thus extractsinvariants, features that transfer across domains, and may be viewed as a generalization of many popular dimension-reduction methods such askernel principal component analysis, transfer component analysis, and covariance operator inverse regression.[25] Defining a probability distributionP{\displaystyle {\mathcal {P}}}on the RKHSH{\displaystyle {\mathcal {H}}}with DICA measures dissimilarity between domains viadistributional variancewhich is computed as where soG{\displaystyle \mathbf {G} }is aN×N{\displaystyle N\times N}Gram matrix over the distributions from which the training data are sampled. Finding anorthogonal transformonto a low-dimensionalsubspaceB(in the feature space) which minimizes the distributional variance, DICA simultaneously ensures thatBaligns with thebasesof acentral subspaceCfor whichY{\displaystyle Y}becomes independent ofX{\displaystyle X}givenCTX{\displaystyle C^{T}X}across all domains. 
In the absence of target valuesY{\displaystyle Y}, an unsupervised version of DICA may be formulated which finds a low-dimensional subspace that minimizes distributional variance while simultaneously maximizing the variance ofX{\displaystyle X}(in the feature space) across all domains (rather than preserving a central subspace).[25] In distribution regression, the goal is to regress from probability distributions to reals (or vectors). Many importantmachine learningand statistical tasks fit into this framework, includingmulti-instance learning, andpoint estimationproblems without analytical solution (such ashyperparameterorentropy estimation). In practice only samples from sampled distributions are observable, and the estimates have to rely on similarities computed betweensets of points. Distribution regression has been successfully applied for example in supervised entropy learning, and aerosol prediction using multispectral satellite images.[26] Given({Xi,n}n=1Ni,yi)i=1ℓ{\displaystyle {\left(\{X_{i,n}\}_{n=1}^{N_{i}},y_{i}\right)}_{i=1}^{\ell }}training data, where theXi^:={Xi,n}n=1Ni{\displaystyle {\hat {X_{i}}}:=\{X_{i,n}\}_{n=1}^{N_{i}}}bag contains samples from a probability distributionXi{\displaystyle X_{i}}and theith{\displaystyle i^{\text{th}}}output label isyi∈R{\displaystyle y_{i}\in \mathbb {R} }, one can tackle the distribution regression task by taking the embeddings of the distributions, and learning the regressor from the embeddings to the outputs. In other words, one can consider the following kernelridge regressionproblem(λ>0){\displaystyle (\lambda >0)} where with ak{\displaystyle k}kernel on the domain ofXi{\displaystyle X_{i}}-s(k:Ω×Ω→R){\displaystyle (k:\Omega \times \Omega \to \mathbb {R} )},K{\displaystyle K}is a kernel on the embedded distributions, andH(K){\displaystyle {\mathcal {H}}(K)}is the RKHS determined byK{\displaystyle K}. Examples forK{\displaystyle K}include the linear kernel[K(μP,μQ)=⟨μP,μQ⟩H(k)]{\displaystyle \left[K(\mu _{P},\mu _{Q})=\langle \mu _{P},\mu _{Q}\rangle _{{\mathcal {H}}(k)}\right]}, the Gaussian kernel[K(μP,μQ)=e−‖μP−μQ‖H(k)2/(2σ2)]{\displaystyle \left[K(\mu _{P},\mu _{Q})=e^{-\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}^{2}/(2\sigma ^{2})}\right]}, the exponential kernel[K(μP,μQ)=e−‖μP−μQ‖H(k)/(2σ2)]{\displaystyle \left[K(\mu _{P},\mu _{Q})=e^{-\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}/(2\sigma ^{2})}\right]}, the Cauchy kernel[K(μP,μQ)=(1+‖μP−μQ‖H(k)2/σ2)−1]{\displaystyle \left[K(\mu _{P},\mu _{Q})=\left(1+\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}^{2}/\sigma ^{2}\right)^{-1}\right]}, the generalized t-student kernel[K(μP,μQ)=(1+‖μP−μQ‖H(k)σ)−1,(σ≤2)]{\displaystyle \left[K(\mu _{P},\mu _{Q})=\left(1+\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}^{\sigma }\right)^{-1},(\sigma \leq 2)\right]}, or the inverse multiquadrics kernel[K(μP,μQ)=(‖μP−μQ‖H(k)2+σ2)−12]{\displaystyle \left[K(\mu _{P},\mu _{Q})=\left(\left\|\mu _{P}-\mu _{Q}\right\|_{H(k)}^{2}+\sigma ^{2}\right)^{-{\frac {1}{2}}}\right]}. The prediction on a new distribution(X^){\displaystyle ({\hat {X}})}takes the simple, analytical form wherek=[K(μX^i,μX^)]∈R1×ℓ{\displaystyle \mathbf {k} ={\big [}K{\big (}\mu _{{\hat {X}}_{i}},\mu _{\hat {X}}{\big )}{\big ]}\in \mathbb {R} ^{1\times \ell }},G=[Gij]∈Rℓ×ℓ{\displaystyle \mathbf {G} =[G_{ij}]\in \mathbb {R} ^{\ell \times \ell }},Gij=K(μX^i,μX^j)∈R{\displaystyle G_{ij}=K{\big (}\mu _{{\hat {X}}_{i}},\mu _{{\hat {X}}_{j}}{\big )}\in \mathbb {R} },y=[y1;…;yℓ]∈Rℓ{\displaystyle \mathbf {y} =[y_{1};\ldots ;y_{\ell }]\in \mathbb {R} ^{\ell }}. 
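A minimal sketch of this two-stage procedure (embed each bag by its empirical kernel mean, then run kernel ridge regression on the embeddings) is given below. The linear kernel K(μ_P, μ_Q) = ⟨μ_P, μ_Q⟩, the Gaussian inner kernel k, the synthetic bags and the scaling of the regularization term are all assumptions made for the example.

import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * sigma**2))

def mean_embedding_inner(bag_a, bag_b, sigma=1.0):
    # Linear kernel between empirical mean embeddings:
    # <mu_a, mu_b> = average of k(x, z) over x in bag_a and z in bag_b
    return gaussian_gram(bag_a, bag_b, sigma).mean()

rng = np.random.default_rng(3)
means = rng.uniform(-2, 2, size=30)
bags = [rng.normal(m, 1.0, size=50) for m in means]    # bag i: 50 samples from N(m_i, 1)
y = means                                              # regression target: the (unobserved) mean of each bag

ell = len(bags)
G = np.array([[mean_embedding_inner(bags[i], bags[j]) for j in range(ell)] for i in range(ell)])
lam = 1e-2
alpha = np.linalg.solve(G + lam * np.eye(ell), y)      # kernel ridge regression coefficients

test_bag = rng.normal(0.7, 1.0, size=50)               # a bag drawn from a previously unseen distribution
k_vec = np.array([mean_embedding_inner(test_bag, b) for b in bags])
print(k_vec @ alpha)                                   # prediction; should be close to 0.7

The last line is the analytical prediction formula above, written out for the linear choice of K.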
Under mild regularity conditions this estimator can be shown to be consistent and it can achieve the one-stage sampled (as if one had access to the trueXi{\displaystyle X_{i}}-s)minimax optimalrate.[26]In theJ{\displaystyle J}objective functionyi{\displaystyle y_{i}}-s are real numbers; the results can also be extended to the case whenyi{\displaystyle y_{i}}-s ared{\displaystyle d}-dimensional vectors, or more generally elements of aseparableHilbert spaceusing operator-valuedK{\displaystyle K}kernels. In this simple example, which is taken from Song et al.,[2]X,Y{\displaystyle X,Y}are assumed to bediscrete random variableswhich take values in the set{1,…,K}{\displaystyle \{1,\ldots ,K\}}and the kernel is chosen to be theKronecker deltafunction, sok(x,x′)=δ(x,x′){\displaystyle k(x,x')=\delta (x,x')}. The feature map corresponding to this kernel is thestandard basisvectorφ(x)=ex{\displaystyle \varphi (x)=\mathbf {e} _{x}}. The kernel embeddings of such a distributions are thus vectors of marginal probabilities while the embeddings of joint distributions in this setting areK×K{\displaystyle K\times K}matrices specifying joint probability tables, and the explicit form of these embeddings is WhenP(X=s)>0{\displaystyle P(X=s)>0}, for alls∈{1,…,K}{\displaystyle s\in \{1,\ldots ,K\}}, the conditional distribution embedding operator, is in this setting a conditional probability table and Thus, the embeddings of the conditional distribution under a fixed value ofX{\displaystyle X}may be computed as In this discrete-valued setting with the Kronecker delta kernel, thekernel sum rulebecomes Thekernel chain rulein this case is given by
https://en.wikipedia.org/wiki/Kernel_embedding_of_distributions
In machine learning and natural language processing, the pachinko allocation model (PAM) is a topic model. Topic models are a suite of algorithms to uncover the hidden thematic structure of a collection of documents.[1] The algorithm improves upon earlier topic models such as latent Dirichlet allocation (LDA) by modeling correlations between topics in addition to the word correlations which constitute topics. PAM provides more flexibility and greater expressive power than latent Dirichlet allocation.[2] While first described and implemented in the context of natural language processing, the algorithm may have applications in other fields such as bioinformatics. The model is named for pachinko machines—a game popular in Japan, in which metal balls bounce down around a complex collection of pins until they land in various bins at the bottom.[3] Pachinko allocation was first described by Wei Li and Andrew McCallum in 2006.[3] The idea was extended with hierarchical pachinko allocation by Li, McCallum, and David Mimno in 2007.[4] In 2007, McCallum and his colleagues proposed a nonparametric Bayesian prior for PAM based on a variant of the hierarchical Dirichlet process (HDP).[2] The algorithm has been implemented in the MALLET software package published by McCallum's group at the University of Massachusetts Amherst. PAM connects words in V and topics in T with an arbitrary directed acyclic graph (DAG), where topic nodes occupy the interior levels and the leaves are words. The probability of generating a whole corpus is the product of the probabilities for every document:[3] P(D|α) = ∏_d P(d|α)
https://en.wikipedia.org/wiki/Pachinko_allocation
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices. Development testing is performed by the software developer or engineer during the construction phase of the software development lifecycle.[1] Rather than replacing traditional QA focuses, it augments them.[2] Development testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.[3] Development testing is applied for the following main purposes: VDC research reports that the standardized implementation of development testing processes within an overarching standardized process not only improves software quality (by aligning development activities with proven best practices) but also increases project predictability.[4] voke research reports that development testing makes software more predictable, traceable, visible, and transparent throughout the software development lifecycle.[2] In each of the above applications, development testing starts by defining policies that express the organization's expectations for reliability, security, performance, and regulatory compliance. Then, after the team is trained on these policies, development testing practices are implemented to align software development activities with these policies.[5] These development testing practices include: The emphasis on applying a broad spectrum of defect prevention and defect detection practices is based on the premise that different development testing techniques are tuned to expose different types of defects at different points in the software development lifecycle, so applying multiple techniques in concert decreases the risk of defects slipping through the cracks.[3] The importance of applying a broad set of practices is confirmed by Boehm and Basili in the often-referenced "Software Defect Reduction Top 10 List."[7] The term "development testing" has occasionally been used to describe the application of static analysis tools. Numerous industry leaders have taken issue with this conflation because static analysis is not technically testing; even static analysis that "covers" every line of code is incapable of validating that the code does what it is supposed to do—or of exposing certain types of defects or security vulnerabilities that manifest themselves only as software is dynamically executed. Although many warn that static analysis alone should not be considered a silver bullet or panacea, most industry experts agree that static analysis is a proven method for eliminating many security, reliability, and performance defects. In other words, while static analysis is not the same as development testing, it is commonly considered a component of development testing.[8][9] In addition to various implementations of static analysis, such as flow analysis, and unit testing, development testing also includes peer code review as a primary quality activity. Code review is widely considered one of the most effective defect detection and prevention methods in software development.[10]
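As a small illustration of the unit-testing component of development testing, the following Python fragment uses the standard unittest module; the function under test and its requirements are invented for the example and are not taken from the sources above. The point is that such checks run during construction, before the code ever reaches QA.

import unittest

def parse_port(value):
    # Function under test: convert a configuration string to a TCP port number,
    # rejecting anything outside the valid range 1-65535.
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTest(unittest.TestCase):
    def test_accepts_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_out_of_range_port(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_port("http")

if __name__ == "__main__":
    unittest.main()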
https://en.wikipedia.org/wiki/Development_testing
Inintegral geometry(otherwise called geometric probability theory),Hadwiger's theoremcharacterises thevaluationsonconvex bodiesinRn.{\displaystyle \mathbb {R} ^{n}.}It was proved byHugo Hadwiger. LetKn{\displaystyle \mathbb {K} ^{n}}be the collection of all compact convex sets inRn.{\displaystyle \mathbb {R} ^{n}.}Avaluationis a functionv:Kn→R{\displaystyle v:\mathbb {K} ^{n}\to \mathbb {R} }such thatv(∅)=0{\displaystyle v(\varnothing )=0}and for everyS,T∈Kn{\displaystyle S,T\in \mathbb {K} ^{n}}that satisfyS∪T∈Kn,{\displaystyle S\cup T\in \mathbb {K} ^{n},}v(S)+v(T)=v(S∩T)+v(S∪T).{\displaystyle v(S)+v(T)=v(S\cap T)+v(S\cup T)~.} A valuation is called continuous if it is continuous with respect to theHausdorff metric. A valuation is called invariant under rigid motions ifv(φ(S))=v(S){\displaystyle v(\varphi (S))=v(S)}wheneverS∈Kn{\displaystyle S\in \mathbb {K} ^{n}}andφ{\displaystyle \varphi }is either atranslationor arotationofRn.{\displaystyle \mathbb {R} ^{n}.} The quermassintegralsWj:Kn→R{\displaystyle W_{j}:\mathbb {K} ^{n}\to \mathbb {R} }are defined via Steiner's formulaVoln(K+tB)=∑j=0n(nj)Wj(K)tj,{\displaystyle \mathrm {Vol} _{n}(K+tB)=\sum _{j=0}^{n}{\binom {n}{j}}W_{j}(K)t^{j}~,}whereB{\displaystyle B}is the Euclidean ball. For example,W0{\displaystyle W_{0}}is the volume,W1{\displaystyle W_{1}}is proportional to thesurface measure,Wn−1{\displaystyle W_{n-1}}is proportional to themean width, andWn{\displaystyle W_{n}}is the constantVoln⁡(B).{\displaystyle \operatorname {Vol} _{n}(B).} Wj{\displaystyle W_{j}}is a valuation which ishomogeneousof degreen−j,{\displaystyle n-j,}that is,Wj(tK)=tn−jWj(K),t≥0.{\displaystyle W_{j}(tK)=t^{n-j}W_{j}(K)~,\quad t\geq 0~.} Any continuous valuationv{\displaystyle v}onKn{\displaystyle \mathbb {K} ^{n}}that is invariant under rigid motions can be represented asv(S)=∑j=0ncjWj(S).{\displaystyle v(S)=\sum _{j=0}^{n}c_{j}W_{j}(S)~.} Any continuous valuationv{\displaystyle v}onKn{\displaystyle \mathbb {K} ^{n}}that is invariant under rigid motions and homogeneous of degreej{\displaystyle j}is a multiple ofWn−j.{\displaystyle W_{n-j}.} An account and a proof of Hadwiger's theorem may be found in An elementary and self-contained proof was given by Beifang Chen in
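A worked low-dimensional illustration, added here for concreteness and following directly from the definitions above: in the plane (n = 2), write A(K) for the area and P(K) for the perimeter of a nonempty compact convex set K. Steiner's formula becomes

\mathrm{Vol}_2(K + tB) = A(K) + P(K)\,t + \pi t^2 = \binom{2}{0} W_0(K) + \binom{2}{1} W_1(K)\,t + \binom{2}{2} W_2(K)\,t^2,

so that

W_0(K) = A(K), \qquad W_1(K) = \tfrac{1}{2} P(K), \qquad W_2(K) = \pi = \mathrm{Vol}_2(B),

which are homogeneous of degrees 2, 1 and 0, as the general statement requires. Hadwiger's theorem then says that every continuous valuation on compact convex subsets of the plane that is invariant under rigid motions has the form

v(K) = c_0\, A(K) + c_1\, P(K) + c_2 \qquad \text{for nonempty } K,

for constants c_0, c_1, c_2; the constant term is a multiple of the Euler characteristic, which equals 1 on every nonempty compact convex set.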
https://en.wikipedia.org/wiki/Hadwiger%27s_theorem
CmapTools is concept mapping software developed by the Florida Institute for Human and Machine Cognition (IHMC).[1] It allows users to easily create graphical nodes representing concepts, and to connect nodes using lines and linking words to form a network of interrelated propositions that represent knowledge of a topic.[2] The software has been used in classrooms and research labs,[3][4] and in corporate training.[5][6] The various uses of concept maps are supported by CmapTools. Multiple links can be added to each concept to form a dynamic map that opens web pages or local documents. Each link is assigned a category chosen by the user from a provided list of types (such as URLs, documents, and images) to help with organization, and links are grouped under the chosen concept according to their category. Other concept maps can also be linked to concepts, letting the user build a navigation structure across maps. Multiple connected maps can form a knowledge base, for example of a company structure, a repository of standards, personal contacts, or other general reference information.
https://en.wikipedia.org/wiki/CmapTools
In mathematics and computer programming, the order of operations is a collection of rules that reflect conventions about which operations to perform first in order to evaluate a given mathematical expression. These rules are formalized with a ranking of the operations. The rank of an operation is called its precedence, and an operation with a higher precedence is performed before operations with lower precedence. Calculators generally perform operations with the same precedence from left to right,[1] but some programming languages and calculators adopt different conventions. For example, multiplication is granted a higher precedence than addition, and it has been this way since the introduction of modern algebraic notation.[2][3] Thus, in the expression 1 + 2 × 3, the multiplication is performed before addition, and the expression has the value 1 + (2 × 3) = 7, and not (1 + 2) × 3 = 9. When exponents were introduced in the 16th and 17th centuries, they were given precedence over both addition and multiplication and placed as a superscript to the right of their base.[2] Thus 3 + 5² = 28 and 3 × 5² = 75. These conventions exist to avoid notational ambiguity while allowing notation to remain brief.[4] Where it is desired to override the precedence conventions, or even simply to emphasize them, parentheses ( ) can be used. For example, (2 + 3) × 4 = 20 forces addition to precede multiplication, while (3 + 5)² = 64 forces addition to precede exponentiation. If multiple pairs of parentheses are required in a mathematical expression (such as in the case of nested parentheses), the parentheses may be replaced by other types of brackets to avoid confusion, as in [2 × (3 + 4)] − 5 = 9. These rules are meaningful only when the usual notation (called infix notation) is used. When functional or Polish notation is used for all operations, the order of operations results from the notation itself. The order of operations, that is, the order in which the operations in an expression are usually performed, results from a convention adopted throughout mathematics, science, technology and many computer programming languages. It is summarized as:[2][5] (1) parentheses and other grouping symbols; (2) exponentiation and root extraction; (3) multiplication and division; (4) addition and subtraction. This means that to evaluate an expression, one first evaluates any sub-expression inside parentheses, working inside to outside if there is more than one set. Whether inside parentheses or not, the operation that is higher in the above list should be applied first. Operations of the same precedence are conventionally evaluated from left to right. If each division is replaced with multiplication by the reciprocal (multiplicative inverse), then the associative and commutative laws of multiplication allow the factors in each term to be multiplied together in any order. Sometimes multiplication and division are given equal precedence, or sometimes multiplication is given higher precedence than division; see § Mixed division and multiplication below. If each subtraction is replaced with addition of the opposite (additive inverse), then the associative and commutative laws of addition allow terms to be added in any order. The radical symbol √ is traditionally extended by a bar (called a vinculum) over the radicand; this avoids the need for parentheses around the radicand.
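Most programming languages that use infix notation implement the first, third and fourth levels of this ranking directly; the short C sketch below (added here as an illustration) checks the examples given above.

    #include <stdio.h>

    int main(void) {
        printf("%d\n", 1 + 2 * 3);         /* prints 7: parsed as 1 + (2 * 3)               */
        printf("%d\n", (1 + 2) * 3);       /* prints 9: parentheses force addition first    */
        printf("%d\n", (2 * (3 + 4)) - 5); /* prints 9: nested grouping evaluated inside out */
        return 0;
    }

C has no exponentiation operator, so the exponent level of the ranking does not appear in this sketch.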
Other functions use parentheses around the input to avoid ambiguity.[6][7][a] The parentheses can be omitted if the input is a single numerical variable or constant,[2] as in the case of sin x = sin(x) and sin π = sin(π).[a] Traditionally this convention extends to monomials; thus, sin 3x = sin(3x) and even sin ½xy = sin(½xy), but sin x + y = sin(x) + y, because x + y is not a monomial. However, this convention is not universally understood, and some authors prefer explicit parentheses.[b] Some calculators and programming languages require parentheses around function inputs, some do not. Parentheses and alternate symbols of grouping can be used to override the usual order of operations or to make the intended order explicit. Grouped symbols can be treated as a single expression.[2] The grouping conventions include: multiplication before addition; parenthetical subexpressions evaluated first; and exponentiation before multiplication, multiplication before subtraction. When an expression is written as a superscript, the superscript is considered to be grouped by its position above its base. The operand of a root symbol is determined by the overbar. A horizontal fractional line forms two grouped subexpressions, one above divided by the other below. Parentheses can be nested, and should be evaluated from the inside outward. For legibility, outer parentheses can be made larger than inner parentheses. Alternately, other grouping symbols, such as curly braces { } or square brackets [ ], are sometimes used along with parentheses ( ). There are differing conventions concerning the unary operation '−' (usually pronounced "minus"). In written or printed mathematics, the expression −3² is interpreted to mean −(3²) = −9.[2][8] In some applications and programming languages, notably Microsoft Excel, PlanMaker (and other spreadsheet applications) and the programming language bc, unary operations have a higher priority than binary operations, that is, the unary minus has higher precedence than exponentiation, so in those languages −3² will be interpreted as (−3)² = 9.[9] This does not apply to the binary minus operation '−'; for example, in Microsoft Excel, while the formulas =-2^2, =(-2)^2 and =0+-2^2 return 4, the formulas =0-2^2 and =-(2^2) return −4. There is no universal convention for interpreting an expression containing both division denoted by '÷' and multiplication denoted by '×'. Proposed conventions include assigning the operations equal precedence and evaluating them from left to right, or equivalently treating division as multiplication by the reciprocal and then evaluating in any order;[10] evaluating all multiplications first followed by divisions from left to right; or eschewing such expressions and instead always disambiguating them by explicit parentheses.[11] Beyond primary education, the symbol '÷' for division is seldom used, but is replaced by the use of algebraic fractions,[12] typically written vertically with the numerator stacked above the denominator, which makes grouping explicit and unambiguous, but sometimes written inline using the slash or solidus symbol '/'.[13] Multiplication denoted by juxtaposition (also known as implied multiplication) creates a visual unit and is often given higher precedence than most other operations.
In academic literature, when inline fractions are combined with implied multiplication without explicit parentheses, the multiplication is conventionally interpreted as having higher precedence than division, so that e.g. 1 / 2n is interpreted to mean 1 / (2 · n) rather than (1 / 2) · n.[2][10][14][15] For instance, the manuscript submission instructions for the Physical Review journals directly state that multiplication has precedence over division,[16] and this is also the convention observed in physics textbooks such as the Course of Theoretical Physics by Landau and Lifshitz[c] and mathematics textbooks such as Concrete Mathematics by Graham, Knuth, and Patashnik.[17] However, some authors recommend against expressions such as a/bc, preferring the explicit use of parentheses, a / (bc).[3] More complicated cases are more ambiguous. For instance, the notation 1 / 2π(a + b) could plausibly mean either 1 / [2π · (a + b)] or [1 / (2π)] · (a + b).[18] Sometimes interpretation depends on context. The Physical Review submission instructions recommend against expressions of the form a/b/c; more explicit expressions (a/b) / c or a / (b/c) are unambiguous.[16] This ambiguity has been the subject of Internet memes such as "8 ÷ 2(2 + 2)", for which there are two conflicting interpretations: 8 ÷ [2 · (2 + 2)] = 1 and (8 ÷ 2) · (2 + 2) = 16.[15][19] Mathematics education researcher Hung-Hsi Wu points out that "one never gets a computation of this type in real life", and calls such contrived examples "a kind of Gotcha! parlor game designed to trap an unsuspecting person by phrasing it in terms of a set of unreasonably convoluted rules".[12] If exponentiation is indicated by stacked symbols using superscript notation, the usual rule is to work from the top down,[2][7] so that a^b^c means a^(b^c), which typically is not equal to (a^b)^c. This convention is useful because there is a property of exponentiation that (a^b)^c = a^(bc), so it is unnecessary to use serial exponentiation for this. However, when exponentiation is represented by an explicit symbol such as a caret (^) or arrow (↑), there is no common standard. For example, Microsoft Excel and the computation programming language MATLAB evaluate a^b^c as (a^b)^c, but Google Search and Wolfram Alpha as a^(b^c). Thus 4^3^2 is evaluated to 4,096 in the first case and to 262,144 in the second case. Mnemonic acronyms are often taught in primary schools to help students remember the order of operations.[20][21] The acronym PEMDAS, which stands for Parentheses, Exponents, Multiplication/Division, Addition/Subtraction,[22] is common in the United States[23] and France.[24] Sometimes the letters are expanded into words of a mnemonic sentence such as "Please Excuse My Dear Aunt Sally".[25] The United Kingdom and other Commonwealth countries may use BODMAS (or sometimes BOMDAS), standing for Brackets, Of, Division/Multiplication, Addition/Subtraction, with "of" meaning fraction multiplication.[26][27] Sometimes the O is instead expanded as Order, meaning exponent or root,[27][28] or replaced by I for Indices in the alternative mnemonic BIDMAS.[27][29] In Canada and New Zealand BEDMAS is common.[30] In Germany, the convention is simply taught as Punktrechnung vor Strichrechnung, "dot operations before line operations", referring to the graphical shapes of the taught operator signs U+00B7 · MIDDLE DOT (multiplication), U+2236 ∶ RATIO (division), and U+002B + PLUS SIGN (addition), U+2212 − MINUS SIGN (subtraction).
These mnemonics may be misleading when written this way.[25] For example, misinterpreting any of the above rules to mean "addition first, subtraction afterward" would incorrectly evaluate the expression[25] a − b + c as a − (b + c), while the correct evaluation is (a − b) + c. These values are different when c ≠ 0. Mnemonic acronyms have been criticized for not developing a conceptual understanding of the order of operations, and not addressing student questions about its purpose or flexibility.[31][32] Students learning the order of operations via mnemonic acronyms routinely make mistakes,[33] as do some pre-service teachers.[34] Even when students correctly learn the acronym, a disproportionate focus on memorization of trivia crowds out substantive mathematical content.[12] The acronym's procedural application does not match experts' intuitive understanding of mathematical notation: mathematical notation indicates groupings in ways other than parentheses or brackets, and a mathematical expression is a tree-like hierarchy rather than a linearly "ordered" structure; furthermore, there is no single order by which mathematical expressions must be simplified or evaluated and no universal canonical simplification for any particular expression, and experts fluently apply valid transformations and substitutions in whatever order is convenient, so learning a rigid procedure can lead students to a misleading and limiting understanding of mathematical notation.[35] Different calculators follow different orders of operations.[2] Many simple calculators without a stack implement chain input, working in button-press order without any priority given to different operations, and so give a different result from that given by more sophisticated calculators. For example, on a simple calculator, typing 1 + 2 × 3 = yields 9, while a more sophisticated calculator will use a more standard priority, so typing 1 + 2 × 3 = yields 7. Calculators may associate exponents to the left or to the right. For example, the expression a^b^c is interpreted as a^(b^c) on the TI-92 and the TI-30XS MultiView in "Mathprint mode", whereas it is interpreted as (a^b)^c on the TI-30XII and the TI-30XS MultiView in "Classic mode". An expression like 1/2x is interpreted as 1/(2x) by the TI-82,[3] as well as many modern Casio calculators[36] (configurable on some like the fx-9750GIII), but as (1/2)x by the TI-83 and every other TI calculator released since 1996,[37][3] as well as by all Hewlett-Packard calculators with algebraic notation.
While the first interpretation may be expected by some users due to the nature ofimplied multiplication,[38]the latter is more in line with the rule that multiplication and division are of equal precedence.[3] When the user is unsure how a calculator will interpret an expression, parentheses can be used to remove the ambiguity.[3] Order of operations arose due to the adaptation ofinfix notationinstandard mathematical notation, which can be notationally ambiguous without such conventions, as opposed topostfix notationorprefix notation, which do not need orders of operations.[39][40]Hence, calculators utilizingreverse Polish notation(RPN) using astackto enter expressions in the correct order of precedence do not need parentheses or any possibly model-specific order of execution.[25][22] Mostprogramming languagesuse precedence levels that conform to the order commonly used in mathematics,[41]though others, such asAPL,Smalltalk,OccamandMary, have nooperatorprecedence rules (in APL, evaluation is strictly right to left; in Smalltalk, it is strictly left to right). Furthermore, because many operators are not associative, the order within any single level is usually defined by grouping left to right so that16/4/4is interpreted as(16/4)/4 = 1rather than16/(4/4) = 16; such operators are referred to as "left associative". Exceptions exist; for example, languages with operators corresponding to theconsoperation on lists usually make them group right to left ("right associative"), e.g. inHaskell,1:2:3:4:[] == 1:(2:(3:(4:[]))) == [1,2,3,4]. Dennis Ritchie, creator of theC language, said of the precedence in C (shared by programming languages that borrow those rules from C, for example,C++,PerlandPHP) that it would have been preferable to move thebitwise operatorsabove thecomparison operators.[42]Many programmers have become accustomed to this order, but more recent popular languages likePython[43]andRuby[44]do have this order reversed. The relative precedence levels of operators found in many C-style languages are as follows: Examples: (InPython,Ruby,PARI/GPand other popular languages,A & B == Cis interpreted as(A & B) == C.) Source-to-source compilersthat compile to multiple languages need to explicitly deal with the issue of different order of operations across languages.Haxefor example standardizes the order and enforces it by inserting brackets where it is appropriate. The accuracy of software developer knowledge about binary operator precedence has been found to closely follow their frequency of occurrence in source code.[46] The order of operations emerged progressively over centuries. The rule that multiplication has precedence over addition was incorporated into the development ofalgebraic notationin the 1600s, since the distributive property implies this as a natural hierarchy. As recently as the 1920s, the historian of mathematicsFlorian Cajoriidentifies disagreement about whether multiplication should have precedence over division, or whether they should be treated equally. The term "order of operations" and the "PEMDAS/BEDMAS" mnemonics were formalized only in the late 19th or early 20th century, as demand for standardized textbooks grew. 
Ambiguity about issues such as whether implicit multiplication takes precedence over explicit multiplication and division in expressions such as a/2b, which could be interpreted as a/(2b) or as (a/2) × b, implies that the conventions are not yet completely stable.[47][48] Chrystal's book was the canonical source in English about secondary school algebra at the turn of the 20th century, and plausibly the source for many later descriptions of the order of operations. However, while Chrystal's book initially establishes a rigid rule for evaluating expressions involving '÷' and '×' symbols, it later consistently gives implicit multiplication higher precedence than division when writing inline fractions, without ever explicitly discussing the discrepancy between the formal rule and common practice.
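Several of the language-level behaviours discussed earlier in this article can be checked directly. The C sketch below (an added illustration, not drawn from any cited source) shows the left-to-right grouping of division, the C precedence of '==' over '&' that Ritchie said he would rather have reversed, and the two possible groupings of serial exponentiation, written out with pow() since C has no exponentiation operator.

    #include <stdio.h>
    #include <math.h>   /* link with -lm on most platforms */

    int main(void) {
        /* Division is left-associative: 16/4/4 groups as (16/4)/4. */
        printf("%d\n", 16 / 4 / 4);           /* prints 1 */

        /* In C, '==' binds tighter than '&', so a & b == c means a & (b == c). */
        int a = 1, b = 2, c = 2;
        printf("%d\n", a & b == c);           /* prints 1: 1 & (2 == 2) */
        printf("%d\n", (a & b) == c);         /* prints 0: (1 & 2) == 2 is false */

        /* Serial exponentiation must be grouped explicitly. */
        printf("%.0f\n", pow(pow(4, 3), 2));  /* (4^3)^2 = 4096   */
        printf("%.0f\n", pow(4, pow(3, 2)));  /* 4^(3^2) = 262144 */
        return 0;
    }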
https://en.wikipedia.org/wiki/Order_of_operations
Incomputer science, aunionis avaluethat may have any of multiple representations or formats within the same area ofmemory; that consists of avariablethat may hold such adata structure. Someprogramming languagessupport aunion typefor such adata type. In other words, a union type specifies the permitted types that may be stored in its instances, e.g.,floatandinteger. In contrast with arecord, which could be defined to contain both a floatandan integer; a union would hold only one at a time. A union can be pictured as a chunk of memory that is used to store variables of different data types. Once a new value is assigned to a field, the existing data is overwritten with the new data. The memory area storing the value has no intrinsic type (other than justbytesorwordsof memory), but the value can be treated as one of severalabstract data types, having the type of the value that was last written to the memory area. Intype theory, a union has asum type; this corresponds todisjoint unionin mathematics. Depending on the language and type, a union value may be used in some operations, such asassignmentand comparison for equality, without knowing its specific type. Other operations may require that knowledge, either by some external information, or by the use of atagged union. Because of the limitations of their use, untagged unions are generally only provided in untyped languages or in a type-unsafe way (as inC). They have the advantage over simple tagged unions of not requiring space to store a data type tag. The name "union" stems from the type's formal definition. If a type is considered as thesetof all values that that type can take on, a union type is simply the mathematicalunionof its constituting types, since it can take on any value any of its fields can. Also, because a mathematical union discards duplicates, if more than one field of the union can take on a single common value, it is impossible to tell from the value alone which field was last written. However, one useful programming function of unions is to map smaller data elements to larger ones for easier manipulation. A data structure consisting, for example, of 4 bytes and a 32-bit integer, can form a union with an unsigned 64-bit integer, and thus be more readily accessed for purposes of comparison etc. ALGOL 68has tagged unions, and uses a case clause to distinguish and extract the constituent type at runtime. A union containing another union is treated as the set of all its constituent possibilities, and if the context requires it a union is automatically coerced into the wider union. A union can explicitly contain no value, which can be distinguished at runtime. An example is: The syntax of the C/C++ union type and the notion of casts was derived from ALGOL 68, though in an untagged form.[1] InCandC++, untagged unions are expressed nearly exactly like structures (structs), except that each data member is located at the same memory address. The data members, as in structures, need not be primitive values, and in fact may be structures or even other unions. C++ (sinceC++11) also allows for a data member to be any type that has a full-fledged constructor/destructor and/or copy constructor, or a non-trivial copy assignment operator. For example, it is possible to have the standard C++stringas a member of a union. The primary use of a union is allowing access to a common location by different data types, for example hardware input/output access, bitfield and word sharing, ortype punning. Unions can also provide low-levelpolymorphism. 
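As a minimal sketch of the idea (the names are illustrative only), the following C union gives its members one shared storage location, so writing one member overwrites the bytes of the others.

    #include <stdio.h>

    /* All members start at the same address; the union is as large as
       its largest member (here the double). */
    union value {
        int    i;
        float  f;
        double d;
    };

    int main(void) {
        union value v;
        v.i = 42;                  /* store an int...                     */
        printf("%d\n", v.i);       /* ...and read it back: 42             */
        v.f = 3.14f;               /* writing the float overwrites the    */
                                   /* bytes that previously held i        */
        printf("%zu\n", sizeof v); /* size of the largest member          */
        return 0;
    }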
However, there is no checking of types, so it is up to the programmer to be sure that the proper fields are accessed in different contexts. The relevant field of a union variable is typically determined by the state of other variables, possibly in an enclosing struct. One common C programming idiom uses unions to perform what C++ calls a reinterpret_cast, by assigning to one field of a union and reading from another, as is done in code which depends on the raw representation of the values. A practical example is the method of computing square roots using the IEEE representation. This is not, however, a safe use of unions in general. The C standard specifies: "Structure and union specifiers have the same form. [...] The size of a union is sufficient to contain the largest of its members. The value of at most one of the members can be stored in a union object at any time. A pointer to a union object, suitably converted, points to each of its members (or if a member is a bit-field, then to the unit in which it resides), and vice versa." In C++, C11, and as a non-standard extension in many compilers, unions can also be anonymous. Their data members do not need to be referenced through a named union variable, but are instead accessed directly. Anonymous unions have some restrictions as opposed to traditional unions: in C11, they must be a member of another structure or union,[2] and in C++, they can not have methods or access specifiers. Simply omitting the class-name portion of the syntax does not make a union an anonymous union; for a union to qualify as an anonymous union, the declaration must not declare an object. Anonymous unions are also useful in C struct definitions to provide a sense of namespacing.[3] In compilers such as GCC, Clang, and IBM XL C for AIX, a transparent_union attribute is available for union types. Types contained in the union can be converted transparently to the union type itself in a function call, provided that all types have the same size. It is mainly intended for functions with multiple parameter interfaces, a use necessitated by early Unix extensions and later re-standardisation.[4] In COBOL, union data items are defined in two ways. The first uses the RENAMES (66 level) keyword, which effectively maps a second alphanumeric data item on top of the same memory location as a preceding data item. For example, a data item PERSON-REC can be defined as a group containing another group and a numeric data item, with PERSON-DATA defined as an alphanumeric data item that renames PERSON-REC, treating the data bytes contained within it as character data. The second way to define a union type is by using the REDEFINES keyword. For example, a data item VERS-NUM can be defined as a 2-byte binary integer containing a version number, and a second data item VERS-BYTES defined as a two-character alphanumeric variable. Since the second item is redefined over the first item, the two items share the same address in memory, and therefore share the same underlying data bytes. The first item interprets the two data bytes as a binary value, while the second item interprets the bytes as character values. In Pascal, there are two ways to create unions. One is the standard way through a variant record. The second is a nonstandard means of declaring a variable as absolute, meaning it is placed at the same memory location as another variable or at an absolute address. While all Pascal compilers support variant records, only some support absolute variables.
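The bit-inspection idiom described above can be sketched as follows (an illustrative example, not the exact square-root code referred to): a float is stored and the same bytes are then read back as an unsigned integer.

    #include <stdio.h>
    #include <stdint.h>

    /* Type punning through a union: both members occupy the same bytes,
       so reading u after writing f exposes the IEEE 754 bit pattern. */
    union pun {
        float    f;
        uint32_t u;
    };

    int main(void) {
        union pun p;
        p.f = 1.0f;
        printf("0x%08X\n", p.u);   /* prints 0x3F800000 on IEEE 754 platforms */
        return 0;
    }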
For the purposes of these examples, the following are all integer types: a byte consists of 8 bits, a word is 16 bits, and an integer is 32 bits. With the non-standard absolute form, each of the elements of an array B can be mapped to one of the specific bytes of a variable A, or a variable C can be assigned to the exact machine address 0. In a variant record, some fields share the same location as others. In PL/I the original term for a union was cell,[5] which is still accepted as a synonym for union by several compilers. The union declaration is similar to the structure definition, where elements at the same level within the union declaration occupy the same storage. Elements of the union can be any data type, including structures and arrays.[6]: pp192–193 In such a declaration, for example, data items vers_num and vers_bytes can occupy the same storage locations. An alternative to a union declaration is the DEFINED attribute, which allows alternative declarations of storage; however, the data types of the base and defined variables must match.[6]: pp.289–293 Rust implements both tagged and untagged unions. In Rust, tagged unions are implemented using the enum keyword. Unlike enumerated types in most other languages, enum variants in Rust can contain additional data in the form of a tuple or struct, making them tagged unions rather than simple enumerated types.[7] Rust also supports untagged unions using the union keyword. The memory layout of unions in Rust is undefined by default,[8] but a union with the #[repr(C)] attribute will be laid out in memory exactly like the equivalent union in C.[9] Reading the fields of a union can only be done within an unsafe function or block, as the compiler cannot guarantee that the data in the union will be valid for the type of the field; if this is not the case, it will result in undefined behavior.[10] In C and C++, a structure can also be a member of a union. Such a declaration can define a variable uvar as a union (tagged as name1) which contains two members: a structure (tagged as name2) named svar (which in turn contains three members), and an integer variable named d; a sketch is given below. Unions may occur within structures and arrays, and vice versa: in a symbol-table array symtab whose elements contain a union member u, the number ival is referred to as symtab[i].u.ival and the first character of the string sval by either of *symtab[i].u.sval or symtab[i].u.sval[0]. Union types were introduced in PHP 8.0.[11] The values are implicitly "tagged" with a type by the language, and may be retrieved by gettype().[11] Support for typing was introduced in Python 3.5.[12] The new syntax for union types was introduced in Python 3.10.[13] Union types are supported in TypeScript.[14] The values are implicitly "tagged" with a type by the language, and may be retrieved using a typeof call for primitive values and an instanceof comparison for complex data types. Types with overlapping usage (e.g. a slice method exists on both strings and arrays, the plus operator works on both strings and numbers) don't need additional narrowing to use these features. Tagged unions in Rust use the enum keyword, and can contain tuple and struct variants. Untagged unions in Rust use the union keyword. Reading from the fields of an untagged union results in undefined behavior if the data in the union is not valid as the type of the field, and thus requires an unsafe block.
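A sketch of the C declarations just described (the three members of svar and the surrounding symbol-table struct are illustrative assumptions consistent with the text, not reproductions of the original listings):

    #include <stdio.h>

    /* A union (tag name1) with a struct member and an int member,
       matching the description of uvar above. */
    union name1 {
        struct name2 {
            int   x;      /* the three members of svar are assumed */
            float y;
            char  z;
        } svar;
        int d;
    } uvar;

    /* Unions may occur within structures and arrays, and vice versa. */
    struct entry {
        char *name;
        union {
            int   ival;
            char *sval;
        } u;
    } symtab[10];

    int main(void) {
        int i = 0;
        symtab[i].u.ival = 7;               /* store the number ival        */
        printf("%d\n", symtab[i].u.ival);
        symtab[i].u.sval = "union";         /* now the string pointer sval  */
        printf("%c\n", *symtab[i].u.sval);  /* same as symtab[i].u.sval[0]  */
        return 0;
    }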
https://en.wikipedia.org/wiki/Union_type
Electromagnetic compatibility(EMC) is the ability of electrical equipment and systems to function acceptably in theirelectromagnetic environment, by limiting the unintentional generation, propagation and reception of electromagnetic energy which may cause unwanted effects such aselectromagnetic interference(EMI) or even physical damage to operational equipment.[1][2]The goal of EMC is the correct operation of different equipment in a common electromagnetic environment. It is also the name given to the associated branch ofelectrical engineering. EMC pursues three main classes of issue.Emissionis the generation of electromagnetic energy, whether deliberate or accidental, by some source and its release into the environment. EMC studies the unwanted emissions and the countermeasures which may be taken in order to reduce unwanted emissions. The second class,susceptibility, is the tendency of electrical equipment, referred to as the victim, to malfunction or break down in the presence of unwanted emissions, which are known asRadio frequency interference(RFI).Immunityis the opposite of susceptibility, being the ability of equipment to function correctly in the presence of RFI, with the discipline of "hardening" equipment being known equally as susceptibility or immunity. A third class studied iscoupling, which is the mechanism by which emitted interference reaches the victim. Interference mitigation and hence electromagnetic compatibility may be achieved by addressing any or all of these issues, i.e., quieting the sources of interference, inhibiting coupling paths and/or hardening the potential victims. In practice, many of the engineering techniques used, such as grounding and shielding, apply to all three issues. The earliest EMC issue waslightningstrike (lightningelectromagnetic pulse, or LEMP) on ships and buildings.Lightning rodsor lightning conductors began to appear in the mid-18th century. With the advent of widespreadelectricity generationand power supply lines from the late 19th century on, problems also arose with equipmentshort-circuitfailure affecting the power supply, and with local fire and shock hazard when the power line was struck by lightning. Power stations were provided with outputcircuit breakers. Buildings and appliances would soon be provided with inputfuses, and later in the 20th century miniature circuit breakers (MCB) would come into use. It may be said that radio interference and its correction arose with the first spark-gap experiment ofMarconiin the late 1800s.[3]Asradio communicationsdeveloped in the first half of the 20th century, interference betweenbroadcastradio signals began to occur and an international regulatory framework was set up to ensure interference-free communications. Switching devices became commonplace through the middle of the 20th century, typically in petrol powered cars and motorcycles but also in domestic appliances such as thermostats and refrigerators. This caused transient interference with domestic radio and (after World War II) TV reception, and in due course laws were passed requiring the suppression of such interference sources. ESD problems first arose with accidentalelectric sparkdischarges in hazardous environments such as coal mines and when refuelling aircraft or motor cars. Safe working practices had to be developed. 
After World War II the military became increasingly concerned with the effects of nuclear electromagnetic pulse (NEMP), lightning strike, and even high-poweredradarbeams, on vehicle and mobile equipment of all kinds, and especially aircraft electrical systems. When high RF emission levels from other sources became a potential problem (such as with the advent ofmicrowave ovens), certain frequency bands were designated for Industrial, Scientific and Medical (ISM) use, allowing emission levels limited only by thermal safety standards. Later, the International Telecommunication Union adopted a Recommendation providing limits of radiation from ISM devices in order to protect radiocommunications. A variety of issues such as sideband and harmonic emissions, broadband sources, and the ever-increasing popularity of electrical switching devices and their victims, resulted in a steady development of standards and laws. From the late 1970s, the popularity of modern digital circuitry rapidly grew. As the technology developed, with ever-faster switching speeds (increasing emissions) and lower circuit voltages (increasing susceptibility), EMC increasingly became a source of concern. Many more nations became aware of EMC as a growing problem and issued directives to the manufacturers of digital electronic equipment, which set out the essential manufacturer requirements before their equipment could be marketed or sold. Organizations in individual nations, across Europe and worldwide, were set up to maintain these directives and associated standards. In 1979, the AmericanFCCpublished a regulation that required the electromagnetic emissions of all "digital devices" to be below certain limits.[3]This regulatory environment led to a sharp growth in the EMC industry supplying specialist devices and equipment, analysis and design software, and testing and certification services. Low-voltage digital circuits, especially CMOS transistors, became more susceptible to ESD damage as they were miniaturised and, despite the development of on-chip hardening techniques, a new ESD regulatory regime had to be developed. From the 1980s on the explosive growth inmobile communicationsand broadcast media channels put huge pressure on the available airspace. Regulatory authorities began squeezing band allocations closer and closer together, relying on increasingly sophisticated EMC control methods, especially in the digital communications domain, to keep cross-channel interference to acceptable levels. Digital systems are inherently less susceptible than analogue systems, and also offer far easier ways (such as software) to implement highly sophisticated protection anderror-correctionmeasures. In 1985, the USA released the ISM bands for low-power mobile digital communications, leading to the development ofWi-Fiand remotely-operated car door keys. This approach relies on the intermittent nature of ISM interference and use of sophisticated error-correction methods to ensure lossless reception during the quiet gaps between any bursts of interference. "Electromagnetic interference" (EMI) is defined as the "degradation in the performance of equipment or transmission channel or a system caused by an electromagnetic disturbance" (IEV161-01-06) while "electromagnetic disturbance" is defined as "an electromagnetic phenomenon that can degrade the performance of a device, equipment or system, or adversely affect living or inert matter(IEV 161-01-05). 
The terms "electromagnetic disturbance" and "electromagnetic interference" designate respectively the cause and the effect,[1] Electromagnetic compatibility (EMC) is an equipmentcharacteristicorpropertyand is defined as "the ability of equipment or a system to function satisfactorily in its electromagnetic environment without introducing intolerable electromagnetic disturbances to anything in that environment" (IEV 161-01-07).[1] EMC ensures the correct operation, in the same electromagnetic environment, of different equipment items which use or respond to electromagnetic phenomena, and the avoidance of any interference. Another way of saying this is that EMC is the control of EMI so that unwanted effects are prevented. Besides understanding the phenomena in themselves, EMC also addresses the countermeasures, such as control regimes, design and measurement, which should be taken in order to prevent emissions from causing any adverse effect. EMC is often understood as the control of electromagnetic interference (EMI). Electromagnetic interference divides into several categories according to the source and signal characteristics. The origin of interference, often called "noise" in this context, can be man-made (artificial) or natural. Continuous, or continuous wave (CW), interference comprises a given range of frequencies. This type is naturally divided into sub-categories according to frequency range, and as a whole is sometimes referred to as "DC to daylight". One common classification is intonarrowbandandbroadband, according to the spread of thefrequency range. Anelectromagnetic pulse(EMP), sometimes called atransientdisturbance, is a short-duration pulse of energy. This energy is usually broadband by nature, although it often excites a relatively narrow-banddamped sine waveresponse in the victim. Pulse signals divide broadly into isolated and repetitive events. When a source emits interference, it follows a route to the victim known as the coupling path. There are four basic coupling mechanisms:conductive,capacitive,magneticor inductive, andradiative. Any coupling path can be broken down into one or more of these coupling mechanisms working together. Conductive couplingoccurs when the coupling path between the source and victim is formed by direct electrical contact with a conducting body. Capacitive couplingoccurs when a varyingelectrical fieldexists between two adjacent conductors, inducing a change involtageon the receiving conductor. Inductive couplingor magnetic coupling occurs when a varyingmagnetic fieldexists between two parallel conductors, inducing a change involtagealong the receiving conductor. Radiative coupling or electromagnetic coupling occurs when source and victim are separated by a large distance. Source and victim act as radio antennas: the source emits or radiates anelectromagnetic wavewhich propagates across the space in between and is picked up or received by the victim. The damaging effects of electromagnetic interference pose unacceptable risks in many areas of technology, and it is necessary to control such interference and reduce the risks to acceptable levels. The control of electromagnetic interference (EMI) and assurance of EMC comprises a series of related disciplines: The risk posed by the threat is usually statistical in nature, so much of the work in threat characterisation and standards setting is based on reducing the probability of disruptive EMI to an acceptable level, rather than its assured elimination. 
For a complex or novel piece of equipment, this may require the production of a dedicatedEMC control plansummarizing the application of the above and specifying additional documents required. Characterisation of the problem requires understanding of: Breaking a coupling path is equally effective at either the start or the end of the path, therefore many aspects of good EMC design practice apply equally to potential sources and to potential victims. A design which easily couples energy to the outside world will equally easily couple energy in and will be susceptible. A single improvement will often reduce both emissions and susceptibility. Grounding and shielding aim to reduce emissions or divert EMI away from the victim by providing an alternative, low-impedance path. Techniques include: Other general measures include: Additional measures to reduce emissions include: Additional measures to reduce susceptibility include: Testing is required to confirm that a particular device meets the required standards. It is divided broadly into emissions testing and susceptibility testing. Open-area test sites, or OATS,[4]are the reference sites in most standards. They are especially useful for emissions testing of large equipment systems. However, RF testing of a physical prototype is most often carried out indoors, in a specialized EMC test chamber. Types of the chamber includeanechoic,reverberationand thegigahertz transverse electromagnetic cell(GTEM cell). Sometimescomputational electromagneticssimulations are used to test virtual models. Like all compliance testing, it is important that the test equipment, including the test chamber or site and any software used, be properly calibrated and maintained. Typically, a given run of tests for a particular piece of equipment will require anEMC test planand a follow-uptest report. The full test program may require the production of several such documents. Emissions are typically measured for radiated field strength and where appropriate for conducted emissions along cables and wiring. Inductive (magnetic) and capacitive (electric) field strengths are near-field effects and are only important if the device under test (DUT) is designed for a location close to other electrical equipment. For conducted emissions, typical transducers include theLISN(line impedance stabilization network) or AMN (artificial mains network) and the RFcurrent clamp. For radiated emission measurement, antennas are used as transducers. Typical antennas specified includedipole,biconical,log-periodic, double ridged guide and conical log-spiral designs. Radiated emissions must be measured in all directions around the DUT. Specialized EMI test receivers or EMI analyzers are used for EMC compliance testing. These incorporate bandwidths and detectors as specified by international EMC standards. An EMI receiver may be based on aspectrum analyserto measure the emission levels of the DUT across a wide band of frequencies (frequency domain), or on a tunable narrower-band device which is swept through the desired frequency range. EMI receivers along with specified transducers can often be used for both conducted and radiated emissions. Pre-selector filters may also be used to reduce the effect of strong out-of-band signals on the front-end of the receiver. Some pulse emissions are more usefully characterized using anoscilloscopeto capture the pulse waveform in the time domain. 
Radiated field susceptibility testing typically involves a high-powered source of RF or EM energy and a radiating antenna to direct the energy at the potential victim or device under test (DUT). Conducted voltage and current susceptibility testing typically involves a high-powered signal generator, and acurrent clampor other type oftransformerto inject the test signal. Transient or EMP signals are used to test the immunity of the DUT against powerline disturbances including surges, lightning strikes and switching noise.[5]In motor vehicles, similar tests are performed on battery and signal lines.[6][7]The transient pulse may be generated digitally and passed through a broadband pulse amplifier, or applied directly to the transducer from a specialized pulse generator.Electrostatic dischargetesting is typically performed with apiezo spark generatorcalled an "ESD pistol". Higher energy pulses, such as lightning or nuclear EMP simulations, can require a largecurrent clampor a large antenna which completely surrounds the DUT. Some antennas are so large that they are located outdoors, and care must be taken not to cause an EMP hazard to the surrounding environment. Several organizations, both national and international, work to promote international co-operation on standardization (harmonization), including publishing various EMC standards. Where possible, a standard developed by one organization may be adopted with little or no change by others. This helps for example to harmonize national standards across Europe. International standards organizations include: Among the main national organizations are: Compliance with national or international standards is usually laid down by laws passed by individual nations. Different nations can require compliance with different standards. InEuropean law, EU directive 2014/30/EU (previously 2004/108/EC) on EMC defines the rules for the placing on the market/putting into service of electric/electronic equipment within theEuropean Union. The Directive applies to a vast range of equipment including electrical and electronic appliances, systems and installations. Manufacturers ofelectric and electronic devicesare advised to run EMC tests in order to comply with compulsoryCE-labeling. More are given in thelist of EMC directives. Compliance with the applicable harmonised standards whose reference is listed in the OJEU under the EMC Directive gives presumption of conformity with the corresponding essential requirements of the EMC Directive. In 2019, the USA adopted a program for the protection of critical infrastructure against an electromagnetic pulse, whether caused by ageomagnetic stormor a high-altitude nuclear weapon.[8]
https://en.wikipedia.org/wiki/Electromagnetic_compatibility
Demonstratives(abbreviatedDEM) arewords, such asthisandthat, used to indicate which entities are being referred to and to distinguish those entities from others. They are typicallydeictic, their meaning depending on a particularframe of reference, and cannot be understood without context. Demonstratives are often used in spatial deixis (where the speaker or sometimes the listener is to provide context), but also in intra-discourse reference (includingabstract concepts) oranaphora, where the meaning is dependent on something other than the relative physical location of the speaker. An example is whether something is currently being said or was said earlier. Demonstrative constructions include demonstrativeadjectivesor demonstrativedeterminers, which specifynouns(as inPutthatcoat on), and demonstrativepronouns, which stand independently (as inPutthaton). The demonstratives inEnglisharethis,that,these,those, and the archaicyonandyonder, along withthis one, these ones,that oneandthose onesas substitutes for the pronouns. Many languages, such asEnglishandStandard Chinese, make a two-way distinction between demonstratives. Typically, one set of demonstratives isproximal, indicating objects close to the speaker (Englishthis), and the other series isdistal, indicating objects further removed from the speaker (Englishthat). Other languages, likeFinnish,Nandi,Hawaiian,Latin,Spanish,Portuguese,Italian(in some formal writing),Armenian,Serbo-Croatian,Macedonian,Georgian,Basque,Korean,Japanese,Ukrainian,Bengali, andSri Lankan Tamilmake a three-way distinction.[1]Typically there is a distinction betweenproximalor first person (objects near to the speaker),medialor second person (objects near to theaddressee), anddistalor third person[2](objects far from both). So for example, in Portuguese: Further oppositions are created with place adverbs. in Italian (medial pronouns, in most of Italy, only survive in historical texts and bureaucratic texts. However, they're of wide and very common usage in some Regions, like Tuscany): in Hawaiian: in Armenian (based on the proximal "s", medial "d/t", and distal "n"): այս ays խնձորը khndzorë այս խնձորը ays khndzorë "this apple" այդ ayd խնձորը khndzorë այդ խնձորը ayd khndzorë "that apple (near you)" այն ayn խնձորը khndzorë այն խնձորը ayn khndzorë "yon apple (over there, away from both of us)" and, in Georgian: ამისი amisi მამა mama ამისი მამა amisi mama "this one's father" იმისი imisi ცოლი coli იმისი ცოლი imisi coli "that one's wife" მაგისი magisi სახლი saxli მაგისი სახლი magisi saxli "that (by you) one's house" and, in Ukrainian (note that Ukrainian has not only number, but also threegrammatical gendersin singular): and, in Japanese: この kono リンゴ ringo この リンゴ kono ringo "this apple" その sono リンゴ ringo その リンゴ sono ringo "that apple" あの ano リンゴ ringo あの リンゴ ano ringo "that apple (over there)" In Nandi (Kalenjin of Kenya, Uganda and Eastern Congo): Chego chu, Chego choo, Chego chuun "this milk", "that milk" (near the second person) and "that milk" (away from the first and second person, near a third person or even further away). Ancient Greekhas a three-way distinction betweenὅδε(hóde"this here"),οὗτος(hoûtos"this"), andἐκεῖνος(ekeînos"that"). Spanish,TamilandSerialso make this distinction.Frenchhas a two-way distinction, with the use of postpositions "-ci" (proximal) and "-là" (distal) as incet homme-ciandcet homme-là, as well as the pronounsceandcela/ça. English has an archaic but occasionally used three-way distinction ofthis,that, andyonder. 
Arabichas also a three-way distinction in its formalClassicalandModern Standardvarieties. Very rich, with more than 70 variants, the demonstrative pronouns in Arabic principally change depending on the gender and the number. They mark a distinction in number for singular, dual, and plural. For example: InModern German(and theScandinavian languages), the non-selective deicticdasKind,derKleine,dieKleineand the selective onedasKind,derKleine,dieKleineare homographs, but they are spoken differently. The non-selective deictics are unstressed whereas the selective ones (demonstratives) are stressed. There is a second selective deictic, namelydiesesKind,dieserKleine,dieseKleine. Distance either from the speaker or from the addressee is either marked by the opposition between these two deictics or by the addition of a place deictic. Distance-marking Thing Demonstrative Thing Demonstrative plus Distance-marking Place Demonstrative A distal demonstrative exists inGerman, cognate to the Englishyonder, but it is used only in formal registers.[3] Cognates of "yonder" still exist in some Northern English and Scots dialects; There are languages which make a four-way distinction, such asNorthern Sami: These four-way distinctions are often termed proximal,mesioproximal,mesiodistal, and distal. Many non-European languages make further distinctions; for example, whether the object referred to is uphill or downhill from the speaker, whether the object is visible or not (as inMalagasy), and whether the object can be pointed to as a whole or only in part. TheEskimo–Aleut languages,[4]and theKirantibranch[5]of theSino-Tibetan language familyare particularly well known for their many contrasts. The demonstratives inSeriare compound forms based on the definite articles (themselves derived from verbs) and therefore incorporate the positional information of the articles (standing, sitting, lying, coming, going) in addition to the three-wayspatialdistinction. This results in a quite elaborated set of demonstratives. Latinhad several sets of demonstratives, includinghic,haec,hoc("this near me");iste,ista,istud("that near you"); andille,illa,illud("that over there") – note that Latin has not only number, but also threegrammatical genders. The third set of Latin demonstratives (ille, etc.), developed into thedefinite articlesin mostRomance languages, such asel,la,los,lasinSpanish, andle,la,lesinFrench. With the exception ofRomanian, and some varieties of Spanish and Portuguese, the neuter gender has been lost in the Romance languages. Spanish and Portuguese have kept neuter demonstratives: Some forms of Spanish (Caribbean Spanish,Andalusian Spanish, etc.) also occasionally employello, which is an archaic survival of the neuter pronoun from Latinillud.[citation needed] Neuter demonstratives refer to ideas of indeterminate gender, such as abstractions and groups of heterogeneous objects, and has a limited agreement in Portuguese, for example, "all of that" can be translated as "todo aquele" (m), "toda aquela" (f) or "tudo aquilo" (n) in Portuguese, although the neuter forms require a masculine adjective agreement: "Tudo (n) aquilo (n) está quebrado (m)" (All of that is broken). 
Classical Chinesehad three main demonstrative pronouns: proximal此(this), distal彼(that), and distance-neutral是(this or that).[6]The frequent use of是as aresumptivedemonstrative pronoun that reasserted thesubjectbefore a nounpredicatecaused it to develop into its colloquial use as acopulaby theHan periodand subsequently its standard use as a copula inModern Standard Chinese.[6]Modern Mandarin has two main demonstratives, proximal這/这and distal那; its use of the three Classical demonstratives has become mostlyidiomatic,[7]although此continues to be used with some frequency inmodern written Chinese.Cantoneseuses proximal呢and distal嗰instead of這and那, respectively. Similarly,Northern Wulanguages tend to also have a distance-neutral demonstrative搿, which is etymologically a checked-tone derivation of個. In lects such asShanghainese, distance-based demonstratives exist, but are only used constrastively.Suzhounese, on the other hand, has several demonstratives that form a two-way contrast, but also have搿, which is neutral.[8][9] Hungarianhas two spatial demonstratives:ez(this) andaz(that). These inflect for number and case even in attributive position (attributes usually remain uninflected in Hungarian) with possible orthographic changes; e.g.,ezzel(with this),abban(in that). A third degree of deixis is also possible in Hungarian, with the help of theam-prefix:amaz(that there). The use of this, however, is emphatic (when the speaker wishes to emphasize the distance) and not mandatory. TheCree languagehas a special demonstrative for "things just gone out of sight," andIlocano, a language of thePhilippines, has three words forthisreferring to a visible object, a fourth for things not in view and a fifth for things that no longer exist."[10]TheTiriyó languagehas a demonstrative for "things audible but non-visible"[11] While most languages andlanguage familieshave demonstrative systems, some have systems highly divergent from or more complex than the relatively simple systems employed inIndo-European languages. InYupik languages, notably in theChevak Cup’iklanguage, there exists a 29-way distinction in demonstratives, with demonstrative indicators distinguished according to placement in a three-dimensional field around the interlocutor(s), as well as by visibility and whether or not the object is in motion.[12][failed verification] It is relatively common for a language to distinguish betweendemonstrative determinersordemonstrative adjectives(sometimes also calleddeterminative demonstratives,adjectival demonstrativesoradjectival demonstrative pronouns) anddemonstrative pronouns(sometimes calledindependent demonstratives,substantival demonstratives,independent demonstrative pronounsorsubstantival demonstrative pronouns). A demonstrativedeterminerspecifies a noun asdefinite, singular or plural, and proximal or distal: A demonstrativepronounstands on its own, replacing rather than modifying a noun: There are four common demonstrative pronouns in English:this,that,these,those.[13]Some dialects, such asSouthern American English, also useyonandyonder, where the latter is usually employed as a demonstrative determiner.[14]AuthorBill Brysonlaments the "losses along the way" ofyonandyonder:[14] Today we have two demonstrative pronouns,thisandthat, but inShakespeare's day there was a third,yon(as in theMiltonline "Him that yon soars on golden wing"), which suggested a further distance thanthat. You could talk about this hat, that hat, and yon hat. 
Today the word survives as a colloquialadjective,yonder, but our speech is fractionally impoverished for its loss. Many languages have sets ofdemonstrative adverbsthat are closely related to the demonstrative pronouns in a language. For example, corresponding to the demonstrative pronounthatare the adverbs such asthen(= "at that time"),there(= "at that place"),thither(= "to that place"),thence(= "from that place"); equivalent adverbs corresponding to the demonstrative pronounthisarenow,here,hither,hence. A similar relationship exists between theinterrogative pronounwhatand theinterrogative adverbswhen,where,whither,whence. Seepro-formfor a full table. As mentioned above, while the primary function of demonstratives is to provide spatial references of concrete objects (that (building),this (table)), there is a secondary function: referring to items of discourse.[15]For example: In the above,this sentencerefers to the sentence being spoken, and the pronounthisrefers to what is about to be spoken;that wayrefers to "the previously mentioned way", and the pronounthatrefers to the content of the previous statement. These are abstract entities of discourse, not concrete objects. Each language may have subtly different rules on how to use demonstratives to refer to things previously spoken, currently being spoken, or about to be spoken. In English,that(or occasionallythose) refers to something previously spoken, whilethis(or occasionallythese) refers to something about to be spoken (or, occasionally, something being simultaneously spoken).[citation needed]
https://en.wikipedia.org/wiki/Demonstrative
ISO/IEC 9126Software engineering — Product qualitywas aninternational standardfor theevaluationofsoftware quality. It has been replaced byISO/IEC 25010:2011.[1] The fundamental objective of the ISO/IEC 9126 standard is to address some of the well-known human biases that can adversely affect the delivery and perception of a software development project. These biases include changing priorities after the start of a project or not having any clear definitions of "success". By clarifying, then agreeing on the project priorities and subsequently converting abstract priorities (compliance) to measurable values (output data can be validated against schema X with zero intervention), ISO/IEC 9126 tries to develop a common understanding of the project's objectives and goals. The standard is divided into four parts: The quality model presented in the first part of the standard, ISO/IEC 9126-1,[2]classifiessoftware qualityin a structured set of characteristics and sub-characteristics as follows: Each quality sub-characteristic (e.g. adaptability) is further divided into attributes. An attribute is an entity which can be verified or measured in the software product. Attributes are not defined in the standard, as they vary between different software products. Software product is defined in a broad sense: it encompasses executables, source code, architecture descriptions, and so on. As a result, the notion of user extends to operators as well as to programmers, which are users of components such as software libraries. The standard provides a framework for organizations to define a quality model for a software product. On doing so, however, it leaves up to each organization the task of specifying precisely its own model. This may be done, for example, by specifying target values for quality metrics which evaluates the degree of presence of quality attributes. Internal metrics are those which do not rely on software execution (static measure). External metrics are applicable to running software. Quality-in-use metrics are only available when the final product is used in real conditions. Ideally, the internal quality determines the external quality and external quality determines quality in use. This standard stems from the GE model for describing software quality, presented in 1977 by McCall et al., which is organized around three types of quality characteristic: ISO/IEC 9126 distinguishes between a defect and a nonconformity, adefectbeing "The nonfulfilment of intended usage requirements", whereas anonconformityis "The nonfulfilment of specified requirements". A similar distinction is made between validation and verification, known as V&V in the testing trade. ISO/IEC 9126 was issued on December 19, 1991. On June 15, 2001, ISO/IEC 9126:1991 was replaced by ISO/IEC 9126:2001 (four parts 9126–1 to 9126–4). On March 1, 2011, ISO/IEC 9126 was replaced by ISO/IEC25010:2011 Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models. Compared to 9126, "security" and "compatibility" were added as main characteristics. ISO/IEC then started work onSQuaRE(Software product Quality Requirements and Evaluation), a more extensive series of standards to replace ISO/IEC 9126, with numbers of the form ISO/IEC 250mn. For instance, ISO/IEC 25000 was issued in 2005, andISO/IEC 25010, which supersedes ISO/IEC 9126-1, was issued in March 2011. 
ISO 25010 has eight product quality characteristics (in contrast to ISO 9126's six), and 31 subcharacteristics.[3]
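To make the idea of an organization-specific quality model concrete, the sketch below encodes a couple of characteristics, sub-characteristics and measurable attributes with target values. The characteristic and sub-characteristic names follow ISO/IEC 9126-1, but the attributes, measured values and targets are hypothetical examples; the standard deliberately leaves their definition to each organization.

```python
# Hypothetical, organization-specific quality model in the spirit of ISO/IEC 9126-1.
quality_model = {
    "Functionality": {
        "Interoperability": {
            # attribute: (measured value, target, which direction is better)
            "schema_validation_failures_per_1000_msgs": (0.0, 0.0, "lower"),
        },
    },
    "Portability": {
        "Adaptability": {
            "supported_target_platforms": (3, 2, "higher"),
        },
    },
}

def evaluate(model):
    """Report whether each measured attribute meets its target value."""
    for characteristic, subs in model.items():
        for sub, attrs in subs.items():
            for attr, (value, target, direction) in attrs.items():
                ok = value >= target if direction == "higher" else value <= target
                print(f"{characteristic} / {sub} / {attr}: "
                      f"{'meets target' if ok else 'misses target'}")

evaluate(quality_model)
```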
https://en.wikipedia.org/wiki/ISO/IEC_9126
TheStream Control Transmission Protocol(SCTP) is acomputer networkingcommunications protocolin thetransport layerof theInternet protocol suite. Originally intended forSignaling System 7(SS7) message transport in telecommunication, the protocol provides the message-oriented feature of theUser Datagram Protocol(UDP), while ensuring reliable, in-sequence transport of messages withcongestion controllike theTransmission Control Protocol(TCP). Unlike UDP and TCP, the protocol supportsmultihomingand redundant paths to increase resilience and reliability. SCTP is standardized by theInternet Engineering Task Force(IETF) inRFC9260. The SCTP reference implementation was released as part ofFreeBSDversion 7, and has since been widely ported to other platforms. TheIETFSignaling Transport (SIGTRAN) working group defined the protocol (number 132[1]) in October 2000,[2]and the IETF Transport Area (TSVWG) working group maintains it.RFC9260defines the protocol.RFC3286provides an introduction. SCTP applications submit data for transmission in messages (groups of bytes) to the SCTP transport layer. SCTP places messages and control information into separatechunks(data chunks and control chunks), each identified by achunk header. The protocol can fragment a message into multiple data chunks, but each data chunk contains data from only one user message. SCTP bundles the chunks into SCTP packets. The SCTP packet, which is submitted to theInternet Protocol, consists of a packet header, SCTP control chunks (when necessary), followed by SCTP data chunks (when available). SCTP may be characterized as message-oriented, meaning it transports a sequence of messages (each being a group of bytes), rather than transporting an unbroken stream of bytes as in TCP. As in UDP, in SCTP a sender sends a message in one operation, and that exact message is passed to the receiving application process in one operation. In contrast, TCP is a stream-oriented protocol, transportingstreams of bytesreliably and in order. However TCP does not allow the receiver to know how many times the sender application called on the TCP transport passing it groups of bytes to be sent out. At the sender, TCP simply appends more bytes to a queue of bytes waiting to go out over the network, rather than having to keep a queue of individual separate outbound messages which must be preserved as such. The termmulti-streamingrefers to the capability of SCTP to transmit several independent streams of chunks in parallel, for example transmittingweb pageimages simultaneously with the web page text. In essence, it involves bundling several connections into a single SCTP association, operating on messages (or chunks) rather than bytes. TCP preserves byte order in the stream by including a byte sequence number with eachsegment. SCTP, on the other hand, assigns a sequence number or a message-id[note 1]to eachmessagesent in a stream. This allows independent ordering of messages in different streams. However, message ordering is optional in SCTP; a receiving application may choose to process messages in the order of receipt instead of in the order of sending. Features of SCTP include: The designers of SCTP originally intended it for the transport of telephony (i.e. Signaling System 7) over Internet Protocol, with the goal of duplicating some of the reliability attributes of the SS7 signaling network in IP. This IETF effort is known asSIGTRAN. 
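The message-oriented behaviour described above can be made concrete with a short sketch using Python's standard socket module. This is a minimal illustration only: it assumes a Linux kernel with SCTP support and a Python build that exposes socket.IPPROTO_SCTP, and it omits all error handling.

```python
import socket

# One-to-one (TCP-style) SCTP association over the loopback interface.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
srv.bind(("127.0.0.1", 5000))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
cli.connect(("127.0.0.1", 5000))
conn, _ = srv.accept()

# Unlike TCP, each send() hands SCTP one user message; the receiver should get
# the messages back with their boundaries preserved rather than as an
# undifferentiated byte stream.
cli.send(b"first message")
cli.send(b"second message")
print(conn.recv(4096))   # expected: b"first message", not the concatenation

for s in (cli, conn, srv):
    s.close()
```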
In the meantime, other uses have been proposed, for example, theDiameterprotocol[3]andReliable Server Pooling(RSerPool).[4] TCP has provided the primary means to transfer data reliably across the Internet. However, TCP has imposed limitations on several applications. FromRFC4960: Adoption has been slowed by lack of awareness, lack of implementations (particularly in Microsoft Windows), lack of application support and lack of network support.[6] SCTP has seen adoption in themobile telephonyspace as the transport protocol for severalcore network interfaces.[7] SCTP provides redundant paths to increase reliability. Each SCTP end point needs to check reachability of the primary and redundant addresses of the remote end point using aheartbeat. Each SCTP end point needs to acknowledge the heartbeats it receives from the remote end point. When SCTP sends a message to a remote address, the source interface will only be decided by the routing table of the host (and not by SCTP). In asymmetric multihoming, one of the two endpoints does not support multihoming. In local multihoming and remote single homing, if the remote primary address is not reachable, the SCTP association fails even if an alternate path is possible. An SCTP packet consists of two basic sections: Each chunk starts with a one-byte type identifier, with 15 chunk types defined byRFC9260, and at least 5 more defined by additional RFCs.[note 2]Eight flag bits, a two-byte length field, and the data compose the remainder of the chunk. If the chunk does not form a multiple of 4 bytes (i.e., the length is not a multiple of 4), then it is padded with zeros, which are not included in the chunk length. The two-byte length field limits each chunk to a 65,535-byte length (including the type, flags and length fields). Although encryption was not part of the original SCTP design, SCTP was designed with features for improved security, such as 4-wayhandshake(compared toTCP 3-way handshake) to protect againstSYN floodingattacks, and large "cookies" for association verification and authenticity. Reliability was also a key part of the security design of SCTP. Multihoming enables an association to stay open even when some routes and interfaces are down. This is of particular importance forSIGTRANas it carriesSS7over an IP network using SCTP, and requires strong resilience during link outages to maintain telecommunication service even when enduring network anomalies. The SCTP reference implementation runs on FreeBSD, Mac OS X, Microsoft Windows, and Linux.[8] The followingoperating systemsimplement SCTP: Third-party drivers: Userspacelibrary: The following applications implement SCTP: In the absence of native SCTP support in operating systems, it is possible totunnelSCTP over UDP,[22]as well as to map TCP API calls to SCTP calls so existing applications can use SCTP without modification.[23]
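The chunk layout described above (a one-byte type, one byte of flags, a two-byte length covering the four-byte chunk header plus the value but not the padding, and zero padding to the next 4-byte boundary) can be walked with a few lines of standard-library Python. The 12-byte common header length follows RFC 9260; the function itself is purely illustrative.

```python
import struct

SCTP_COMMON_HEADER_LEN = 12  # source port, destination port, verification tag, checksum

def parse_chunks(packet: bytes):
    """Yield (chunk_type, flags, value) for each chunk in an SCTP packet."""
    offset = SCTP_COMMON_HEADER_LEN
    while offset + 4 <= len(packet):
        chunk_type, flags, length = struct.unpack_from("!BBH", packet, offset)
        if length < 4 or offset + length > len(packet):
            raise ValueError("malformed chunk")
        yield chunk_type, flags, packet[offset + 4 : offset + length]
        offset += (length + 3) & ~3  # skip the chunk and its zero padding
```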
https://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol
Aweb hosting serviceis a type ofInternet hosting servicethat hostswebsitesfor clients, i.e. it offers the facilities required for them to create and maintain a site and makes it accessible on theWorld Wide Web. Companies providing web hosting services are sometimes calledweb hosts. Typically, web hosting requires the following: Until 1991, theInternetwas restricted to use only "... for research and education in the sciences and engineering..."[1][2]and was used foremail,telnet,FTPandUSENETtraffic—but only a tiny number of web pages. The World Wide Web protocols had only just been written[3]and not until the end of 1993 would there be a graphical web browser for Mac or Windows computers.[4]Even after there was some opening up of Internet access, thesituation was confused[clarification needed]until 1995.[5] To host awebsiteon theinternet, an individual or company would need their owncomputerorserver.[2]As not all companies had the budget or expertise to do this, web hosting services began to offer to host users'websiteson their own servers, without the client needing to own the necessary infrastructure required to operate the website. The owners of the websites, also calledwebmasters, would be able to create a website that would be hosted on the web hosting service's server and published to the web by the web hosting service. As the number of users on the World Wide Web grew, the pressure for companies, both large and small, to have an online presence grew. By 1995, companies such asGeoCities,AngelfireandTripodwere offering free hosting.[6] Static web pagefiles can beuploadedviaFile Transfer Protocol(FTP) or a web interface. The files are usually delivered to the Web "as is" or with minimal processing. ManyInternet service providers(ISPs) offer this service free to subscribers. Individuals and organizations may also obtain web page hosting from alternative service providers. Free web hosting service is offered by different companies with limited services, sometimes supported by advertisements,[needs update?]and often limited when compared to paid hosting. Single page hosting is generally sufficient forpersonal web pages. Personal website hosting is typically free, advertisement-sponsored, or inexpensive. Business website hosting often has a higher expense depending upon the size and type of the site. Commercial services that provide static page hosting includeGitHub Pages, where the website version control is tracked usingGit. A complex site calls for a more comprehensive package that providesdatabasesupportand application development platforms (e.g.ASP.NET,ColdFusion,Java EE,Perl/Plack,PHPorRuby on Rails). These facilities allow customers to write or install scripts for applications likeforumsandcontent management. Web hosting packages often include aweb content management system, so the end-user does not have to worry about the more technical aspects.Secure Sockets Layer(SSL) is used for websites that wish to encrypt the transmitted data. Internet hosting services can runweb servers. The scope of web hosting services varies greatly. Some specific types of hosting provided by web host service providers: The host may also provide an interface orcontrol panelfor managing theweb serverand installing scripts, as well as other modules and service applications like e-mail. A web server that does not use acontrol panelfor managing the hosting account, is often referred to as a "headless" server. Some hosts specialize in certain software or services (e.g. e-commerce, blogs, etc.). 
Theavailabilityof a website is measured by the percentage of a year in which the website is publicly accessible and reachable via the Internet. This is different from measuring theuptimeof a system. Uptime refers to the system itself being online. Uptime does not take into account being able to reach it as in the event of a network outage.[citation needed]A hosting provider'sService Level Agreement(SLA) may include a certain amount of scheduleddowntimeper year in order to perform maintenance on the systems. This scheduled downtime is often excluded from the SLA timeframe, and needs to be subtracted from the Total Time when availability is calculated. Depending on the wording of an SLA, if the availability of a system drops below that in the signed SLA, a hosting provider often will provide a partial refund for time lost. How downtime is determined changes from provider to provider, therefore reading the SLA is imperative.[10]Not all providers release uptime statistics. Because web hosting services host websites belonging to their customers,online securityis an important concern. When a customer agrees to use a web hosting service, they are relinquishing control of the security of their site to the company that is hosting the site. The level of security that a web hosting service offers is extremely important to a prospective customer and can be a major factor when considering which provider a customer may choose.[11] Web hosting servers can be attacked by malicious users in different ways, including uploadingmalwareor maliciouscodeonto a hostedwebsite. These attacks may be done for different reasons, including stealing credit card data, launching aDistributed Denial of Service Attack(DDoS) orspamming.[12]
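As a worked example of the availability calculation described above (all figures are hypothetical), scheduled maintenance excluded by the SLA is removed from the measured period before the percentage is computed:

```python
# Illustrative SLA availability calculation; hypothetical figures in hours per year.
total_time = 365 * 24            # 8760 h in the measurement period
scheduled_downtime = 6           # maintenance window excluded by the SLA
unscheduled_downtime = 4         # outages that count against the SLA

measured_period = total_time - scheduled_downtime
availability = (measured_period - unscheduled_downtime) / measured_period
print(f"availability = {availability:.4%}")   # about 99.95%
```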
https://en.wikipedia.org/wiki/Web_hosting_service
Meno(/ˈmiːnoʊ/;Ancient Greek:Μένων,Ménōn) is aSocratic dialoguewritten byPlatoaround 385 BC., but set at an earlier date around 402 BC.[1]Menobegins the dialogue by asking Socrates whethervirtue(inAncient Greek:ἀρετή,aretē) can be taught, acquired by practice, or comes bynature.[2]In order to determine whether virtue is teachable or not,Socratestells Meno that they first need to determine what virtue is.[3]When the characters speak of virtue, oraretē, they refer to virtue in general, rather than particular virtues, such as justice or temperance. The first part of the work showcasesSocratic dialectical style; Meno, unable to adequately define virtue, is reduced to confusion oraporia.[4]Socrates suggests that they seek an adequate definition for virtue together. In response, Meno suggests that it is impossible to seek what one does not know, because one will be unable to determine whether one has found it.[5] Socrates challenges Meno's argument, often called "Meno's Paradox", "Learner's Paradox", or the "Arabic Paradox",[citation needed]by introducing the theory of knowledge as recollection (anamnesis). As presented in the dialogue, the theory proposes that souls are immortal and know all things in a disembodied state; learning in the embodied is actually a process of recollecting that which the soul knew before it came into a body.[6]Socrates demonstrates recollection in action by posing a mathematical puzzle to one of Meno's slaves.[7]Subsequently, Socrates and Meno return to the question of whether virtue is teachable, employing the method of hypothesis. Near the end of the dialogue, Meno poses another famous puzzle, called "the Meno problem" or "the Value Problem for Knowledge", which questions why knowledge is valued more highly than true belief.[8]In response, Socrates provides a famous and somewhat enigmatic distinction between knowledge and true belief.[9] Plato'sMenois a Socratic dialogue in which the two main speakers,SocratesandMeno(alsotransliteratedas "Menon"), discuss human virtue: what it is, and whether or not it can be taught. Meno is visiting Athens fromThessalywith a large entourage of slaves attending him. Young, good-looking and well-born, he is a student ofGorgias, a prominentsophistwhose views on virtue clearly influence that of Meno's. Early in the dialogue, Meno claims that he has held forth many times on the subject of virtue, and in front of large audiences.[10] One of Meno'sslavesalso has a speaking role, as one of the features of the dialogue is Socrates' engagement with the slave to demonstrate his idea ofanamnesis: certain knowledge is innate and "recollected" by the soul through proper inquiry. Another participant later in the dialogue is Athenian politicianAnytus,[11]later one of theprosecutors of Socrates,[12]with whom Meno is friendly. The dialogue begins with Meno asking Socrates to tell him whether virtue can be taught. Socrates says that he does not know what virtue is, and neither does anyone else he knows.[3]Meno responds that, according to Gorgias, his Sophist mentor, virtue means different for different people, that what is virtuous for a man is to conduct himself in the city so that he helps his friends, injures his enemies, and takes care all the while that he personally comes to no harm. Virtue is different for a woman, he says. Her domain is the management of the household, and she is supposed to obey her husband. 
He says that children (male and female) have their own proper virtue, and so do old men—free orslaves.[13]Socrates objects: there must be some virtue common to all human beings. Socrates rejects the idea that human virtue depends on a person's sex or age. He leads Meno towards the idea that virtues are common to all people, thatsophrosunê('temperance', i.e. exercise ofself-control) anddikê(akadikaiosunê; 'justice', i.e. refrain from harming others) are virtues even in children and old men.[14]Meno proposes to Socrates that the "capacity to govern men" may be a virtue common to all people. Socrates points out to the slaveholder that "governing well" cannot be a virtue of a slave, because then he would not be a slave.[15] One of the errors that Socrates points out is that Meno lists many particular virtues without defining a common feature inherent to virtues which makes them thus. Socrates remarks that Meno makes many out of one, like somebody who breaks a plate.[16] Meno proposes that virtue is the desire for good things and the power to get them. Socrates points out that this raises a second problem—many people do not recognize evil.[17]The discussion then turns to the question of accounting for the fact that so many people are mistaken about good and evil and take one for the other. Socrates asks Meno to consider whether good things must be acquired virtuously in order to be really good.[18]Socrates leads onto the question of whether virtue is one thing or many. No satisfactory definition of virtue emerges in theMeno. Socrates' comments, however, show that he considers a successful definition to be unitary, rather than a list of varieties of virtue, that it must contain all and only those terms which are genuine instances of virtue, and must not becircular.[19] Meno asks Socrates:[20][21] And how will you enquire, Socrates, into that which you do not know? What will you put forth as the subject of enquiry? And if you find what you want, how will you ever know that this is the thing which you did not know? Socrates rephrases the question, which has come to be the canonical statement of Meno's paradox or the paradox of inquiry:[20][22] [A] man cannot enquire either about that which he knows, or about that which he does not know; for if he knows, he has no need to enquire; and if not, he cannot; for he does not know the very subject about which he is to enquire. Socrates responds to thissophisticalparadox with amythos('narrative' or 'fiction') according to which souls are immortal and have learned everything prior totransmigratinginto the human body. Since the soul has had contact with real things prior to birth, we have only to 'recollect' them when alive. Such recollection requiresSocratic questioning, which according to Socrates is not teaching. Socrates demonstrates his method of questioning and recollection by interrogating a slave who is ignorant of geometry. Socrates begins one of the most influential dialogues of Western philosophy regarding the argument forinborn knowledge. By drawing geometric figures in the ground Socrates demonstrates that the slave is initially unaware of the length that a side must be in order to double the area of a square with 2-foot sides. The slave guesses first that the original side must be doubled in length (4 feet), and when this proves too much, that it must be 3 feet. This is still too much, and the slave is at a loss. 
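The arithmetic behind the slave's guesses, and behind the diagonal construction Socrates goes on to draw in the next passage, can be checked directly (a worked restatement of the figures already given in the text):

\[
2^2 = 4,\qquad 4^2 = 16 > 8,\qquad 3^2 = 9 > 8,\qquad d^2 = 2^2 + 2^2 = 8 = 2\cdot 4 \;\Rightarrow\; d = \sqrt{8} = 2\sqrt{2} \approx 2.83\ \text{feet},
\]

so the side of the doubled square is the diagonal of the original two-foot square, not 4 or 3 feet.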
Socrates claims that before he got hold of him the slave (who has been picked at random from Meno's entourage) might have thought he could speak "well and fluently" on the subject of a square double the size of a given square.[23]Socrates comments that this "numbing" he caused in the slave has done him no harm and has even benefited him.[24] Socrates then adds three more squares to the original square, to form a larger square four times the size. He draws four diagonal lines which bisect each of the smaller squares. Through questioning, Socrates leads the slave to the discovery that the square formed by these diagonals has an area of eight square feet, double that of the original. He says that the slave has "spontaneously recovered" knowledge which he knew from before he was born,[25]without having been taught. Socrates is satisfied that new beliefs were "newly aroused" in the slave. After witnessing the example with the slave boy, Meno tells Socrates that he thinks that Socrates is correct in his theory of recollection, to which Socrates agrees:[20][26] Some things I have said of which I am not altogether confident. But that we shall be better and braver and less helpless if we think that we ought to enquire, than we should have been if we indulged in the idle fancy that there was no knowing and no use in seeking to know what we do not know; that is a theme upon which I am ready to fight, in word and deed, to the utmost of my power. Meno now beseeches Socrates to return to the original question, how virtue is acquired, and in particular, whether or not it is acquired by teaching or through life experience. Socrates proceeds on the hypothesis that virtue is knowledge, and it is quickly agreed that, if this is true, virtue is teachable. They turn to the question of whether virtue is indeed knowledge. Socrates is hesitant, because, if virtue were knowledge, there should be teachers and learners of it, but there are none. CoincidentallyAnytusappears, whom Socrates praises as the son ofAnthemion, who earned his fortune with intelligence and hard work. He says that Anthemion had his son well-educated and so Anytus is well-suited to join the investigation. Socrates suggests that the sophists are teachers of virtue. Anytus is horrified, saying that he neither knows any, nor cares to know any. Socrates then questions why it is that men do not always produce sons of the same virtue as themselves. He alludes to other notable male figures, such asThemistocles,Aristides,PericlesandThucydides, and casts doubt on whether these men produced sons as capable of virtue as themselves. Anytus becomes offended and accuses Socrates ofslander, warning him to be careful expressing such opinions. (The historical Anytus was one of Socrates' accusers in histrial.) Socrates suggests that Anytus does not realize what slander is, and continues his dialogue with Meno as to the definition of virtue. After the discussion with Anytus, Socrates returns to quizzing Meno for his own thoughts on whether the sophists are teachers of virtue and whether virtue can be taught. Meno is again at a loss, and Socrates suggests that they have made a mistake in agreeing that knowledge is required for virtue. He points out the similarities and differences between "true belief" and "knowledge". 
True beliefs are as useful to us as knowledge, but they often fail to "stay in their place" and must be "tethered" by what he calls aitias logismos ('calculation of reason' or 'reasoned explanation'), immediately adding that this is anamnesis, or recollection.[28] Whether Plato intends that the tethering of true beliefs with reasoned explanations must always involve anamnesis is explored in later interpretations of the text.[29][30] Socrates' distinction between "true belief" and "knowledge" forms the basis of the philosophical definition of knowledge as "justified true belief". Myles Burnyeat and others, however, have argued that the phrase aitias logismos refers to a practical working out of a solution, rather than a justification.[31] Socrates concludes that, in the virtuous people of the present and the past, at least, virtue has been the result of divine inspiration, akin to the inspiration of the poets, whereas knowledge of it would require answering the basic question, what is virtue? In most modern readings these closing remarks are "evidently ironic",[32] but Socrates' invocation of the gods may be sincere, albeit "highly tentative".[33] This passage in the Meno is often seen as the first statement of the problem of the value of knowledge, or "the Meno problem": how is knowledge more valuable than mere true belief?[34] The nature of knowledge and belief is also discussed in the Theaetetus. Meno's theme is also dealt with in the dialogue Protagoras, where Plato ultimately has Socrates arrive at the opposite conclusion: virtue can be taught. Likewise, while in Protagoras knowledge is uncompromisingly this-worldly, in Meno the theory of recollection points to a link between knowledge and eternal truths.[19] "[A] man cannot search either for what he knows or for what he does not know; He cannot search for what he knows--since he knows it, there is no need to search--nor for what he does not know, for he does not know what to look for."
https://en.wikipedia.org/wiki/Meno
Thetheory of belief functions, also referred to asevidence theoryorDempster–Shafer theory(DST), is a general framework for reasoning with uncertainty, with understood connections to other frameworks such asprobability,possibilityandimprecise probability theories. First introduced byArthur P. Dempster[1]in the context ofstatistical inference, the theory was later developed byGlenn Shaferinto a general framework for modeling epistemic uncertainty—a mathematical theory ofevidence.[2][3]The theory allows one to combine evidence from different sources and arrive at a degree of belief (represented by a mathematical object calledbelief function) that takes into account all the available evidence. In a narrow sense, the term Dempster–Shafer theory refers to the original conception of the theory by Dempster and Shafer. However, it is more common to use the term in the wider sense of the same general approach, as adapted to specific kinds of situations. In particular, many authors have proposed different rules for combining evidence, often with a view to handling conflicts in evidence better.[4]The early contributions have also been the starting points of many important developments, including thetransferable belief modeland the theory of hints.[5] Dempster–Shafer theory is a generalization of theBayesian theory of subjective probability. Belief functions base degrees of belief (or confidence, or trust) for one question on the subjective probabilities for a related question. The degrees of belief themselves may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the two questions are related.[6]Put another way, it is a way of representingepistemicplausibilities, but it can yield answers that contradict those arrived at usingprobability theory. Often used as a method ofsensor fusion, Dempster–Shafer theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule[7]for combining such degrees of belief when they are based on independent items of evidence. In essence, the degree of belief in a proposition depends primarily upon the number of answers (to the related questions) containing the proposition, and the subjective probability of each answer. Also contributing are the rules of combination that reflect general assumptions about the data. In this formalism adegree of belief(also referred to as amass) is represented as abelief functionrather than aBayesianprobability distribution. Probability values are assigned tosetsof possibilities rather than single events: their appeal rests on the fact they naturally encode evidence in favor of propositions. Dempster–Shafer theory assigns its masses to all of the subsets of the set of states of a system—inset-theoreticterms, thepower setof the states. For instance, assume a situation where there are two possible states of a system. For this system, any belief function assigns mass to the first state, the second, to both, and to neither. Shafer's formalism starts from a set ofpossibilitiesunder consideration, for instance numerical values of a variable, or pairs of linguistic variables like "date and place of origin of a relic" (asking whether it is antique or a recent fake). A hypothesis is represented by a subset of thisframe of discernment, like "(Ming dynasty, China)", or "(19th century, Germany)".[2]: p.35f. 
Shafer's framework allows for belief about such propositions to be represented as intervals, bounded by two values,belief(orsupport) andplausibility: In a first step, subjective probabilities (masses) are assigned to all subsets of the frame; usually, only a restricted number of sets will have non-zero mass (focal elements).[2]: 39f.Beliefin a hypothesis is constituted by the sum of the masses of all subsets of the hypothesis-set. It is the amount of belief that directly supports either the given hypothesis or a more specific one, thus forming a lower bound on its probability. Belief (usually denotedBel) measures the strength of the evidence in favor of a propositionp. It ranges from 0 (indicating no evidence) to 1 (denoting certainty).Plausibilityis 1 minus the sum of the masses of all sets whose intersection with the hypothesis is empty. Or, it can be obtained as the sum of the masses of all sets whose intersection with the hypothesis is not empty. It is an upper bound on the possibility that the hypothesis could be true, because there is only so much evidence that contradicts that hypothesis. Plausibility (denoted by Pl) is thus related to Bel by Pl(p) = 1 − Bel(~p). It also ranges from 0 to 1 and measures the extent to which evidence in favor of ~pleaves room for belief inp. For example, suppose we have a belief of 0.5 for a proposition, say "the cat in the box is dead." This means that we have evidence that allows us to state strongly that the proposition is true with a confidence of 0.5. However, the evidence contrary to that hypothesis (i.e. "the cat is alive") only has a confidence of 0.2. The remaining mass of 0.3 (the gap between the 0.5 supporting evidence on the one hand, and the 0.2 contrary evidence on the other) is "indeterminate," meaning that the cat could either be dead or alive. This interval represents the level of uncertainty based on the evidence in the system. The "neither" hypothesis is set to zero by definition (it corresponds to "no solution"). The orthogonal hypotheses "Alive" and "Dead" have probabilities of 0.2 and 0.5, respectively. This could correspond to "Live/Dead Cat Detector" signals, which have respective reliabilities of 0.2 and 0.5. Finally, the all-encompassing "Either" hypothesis (which simply acknowledges there is a cat in the box) picks up the slack so that the sum of the masses is 1. The belief for the "Alive" and "Dead" hypotheses matches their corresponding masses because they have no subsets; belief for "Either" consists of the sum of all three masses (Either, Alive, and Dead) because "Alive" and "Dead" are each subsets of "Either". The "Alive" plausibility is 1 −m(Dead): 0.5 and the "Dead" plausibility is 1 −m(Alive): 0.8. In other way, the "Alive" plausibility ism(Alive) +m(Either) and the "Dead" plausibility ism(Dead) +m(Either). Finally, the "Either" plausibility sumsm(Alive) +m(Dead) +m(Either). The universal hypothesis ("Either") will always have 100% belief and plausibility—it acts as achecksumof sorts. Here is a somewhat more elaborate example where the behavior of belief and plausibility begins to emerge. We're looking through a variety of detector systems at a single faraway signal light, which can only be coloured in one of three colours (red, yellow, or green): Events of this kind would not be modeled as distinct entities in probability space as they are here in mass assignment space. 
Rather, the event "Red or Yellow" would be considered as the union of the events "Red" and "Yellow", and (see probability axioms) P(Red or Yellow) ≥ P(Yellow), and P(Any) = 1, where Any refers to Red or Yellow or Green. In DST the mass assigned to Any refers to the proportion of evidence that cannot be assigned to any of the other states, which here means evidence that says there is a light but does not say anything about what color it is. In this example, the proportion of evidence that shows the light is either Red or Green is given a mass of 0.05. Such evidence might, for example, be obtained from an R/G color-blind person. DST lets us extract the value of this sensor's evidence. Also, in DST the empty set is considered to have zero mass, meaning here that the signal light system exists and we are examining its possible states, not speculating as to whether it exists at all. Beliefs from different sources can be combined with various fusion operators to model specific situations of belief fusion, e.g. with Dempster's rule of combination, which combines belief constraints[8] that are dictated by independent belief sources, such as in the case of combining hints[5] or combining preferences.[9] Note that the probability masses from propositions that contradict each other can be used to obtain a measure of conflict between the independent belief sources. Other situations can be modeled with different fusion operators, such as cumulative fusion of beliefs from independent sources, which can be modeled with the cumulative fusion operator.[10] Dempster's rule of combination is sometimes interpreted as an approximate generalisation of Bayes' rule. In this interpretation the priors and conditionals need not be specified, unlike traditional Bayesian methods, which often use a symmetry (minimax error) argument to assign prior probabilities to random variables (e.g. assigning 0.5 to binary values for which no information is available about which is more likely). However, any information contained in the missing priors and conditionals is not used in Dempster's rule of combination unless it can be obtained indirectly—and arguably is then available for calculation using Bayes equations. Dempster–Shafer theory allows one to specify a degree of ignorance in this situation instead of being forced to supply prior probabilities that add to unity. This sort of situation, and whether there is a real distinction between risk and ignorance, has been extensively discussed by statisticians and economists. See, for example, the contrasting views of Daniel Ellsberg, Howard Raiffa, Kenneth Arrow and Frank Knight.[citation needed] Let X be the universe: the set representing all possible states of a system under consideration. The power set 2^X is the set of all subsets of X, including the empty set ∅. For example, if X = {a, b}, then 2^X = {∅, {a}, {b}, {a, b}}. The elements of the power set can be taken to represent propositions concerning the actual state of the system, by containing all and only the states in which the proposition is true. The theory of evidence assigns a belief mass to each element of the power set. Formally, a function m: 2^X → [0, 1] is called a basic belief assignment (BBA) when it has two properties. First, the mass of the empty set is zero: m(∅) = 0. Second, the masses of all the members of the power set add up to a total of 1: ∑_{A ∈ 2^X} m(A) = 1. The mass m(A) of A, a given member of the power set, expresses the proportion of all relevant and available evidence that supports the claim that the actual state belongs to A but to no particular subset of A.
The value of m(A) pertains only to the set A and makes no additional claims about any subsets of A, each of which has, by definition, its own mass. From the mass assignments, the upper and lower bounds of a probability interval can be defined. This interval contains the precise probability of a set of interest (in the classical sense), and is bounded by two non-additive continuous measures called belief (or support) and plausibility. The belief bel(A) for a set A is defined as the sum of all the masses of subsets of the set of interest: bel(A) = ∑_{B ⊆ A} m(B). The plausibility pl(A) is the sum of all the masses of the sets B that intersect the set of interest A: pl(A) = ∑_{B : B ∩ A ≠ ∅} m(B). The two measures are related to each other as follows: pl(A) = 1 − bel(X \ A). And conversely, for finite A, given the belief measure bel(B) for all subsets B of A, we can find the masses m(A) with the following inverse function: m(A) = ∑_{B ⊆ A} (−1)^{|A − B|} bel(B), where |A − B| is the difference of the cardinalities of the two sets.[4] It follows from the last two equations that, for a finite set X, one needs to know only one of the three (mass, belief, or plausibility) to deduce the other two; though one may need to know the values for many sets in order to calculate one of the other values for a particular set. In the case of an infinite X, there can be well-defined belief and plausibility functions but no well-defined mass function.[11] The problem we now face is how to combine two independent sets of probability mass assignments in specific situations. When different sources express their beliefs over the frame in terms of belief constraints, such as when giving hints or expressing preferences, Dempster's rule of combination is the appropriate fusion operator. This rule derives common shared belief between multiple sources and ignores all the conflicting (non-shared) belief through a normalization factor. Use of that rule in situations other than combining belief constraints has come under serious criticism, such as in the case of fusing separate belief estimates from multiple sources that are to be integrated in a cumulative manner, and not as constraints. Cumulative fusion means that all probability masses from the different sources are reflected in the derived belief, so no probability mass is ignored. Specifically, the combination (called the joint mass) is calculated from the two sets of masses m1 and m2 in the following manner: m_{1,2}(∅) = 0 and, for A ≠ ∅, m_{1,2}(A) = (m1 ⊕ m2)(A) = (1 / (1 − K)) ∑_{B ∩ C = A} m1(B) m2(C), where K = ∑_{B ∩ C = ∅} m1(B) m2(C) is a measure of the amount of conflict between the two mass sets. The normalization factor above, 1 − K, has the effect of completely ignoring conflict and attributing any mass associated with conflict to the empty set. This combination rule for evidence can therefore produce counterintuitive results, as we show next. The following example shows how Dempster's rule produces intuitive results when applied in a preference-fusion situation, even when there is high conflict. An example with exactly the same numerical values was introduced by Lotfi Zadeh in 1979,[12][13][14] to point out counter-intuitive results generated by Dempster's rule when there is a high degree of conflict. The example goes as follows: Such a result goes against common sense, since both doctors agree that there is little chance that the patient has meningitis.
This example has been the starting point of many research works trying to find a solid justification for Dempster's rule and for the foundations of Dempster–Shafer theory,[15][16] or to show the inconsistencies of this theory.[17][18][19] The following example shows where Dempster's rule produces a counter-intuitive result, even when there is low conflict. This result implies complete support for the diagnosis of a brain tumor, which both doctors believed very likely. The agreement arises from the low degree of conflict between the two sets of evidence comprised by the two doctors' opinions. In either case, it would be reasonable to expect that bel(brain tumour) < 1, since the existence of non-zero belief probabilities for other diagnoses implies less than complete support for the brain tumour diagnosis. As in Dempster–Shafer theory, a Bayesian belief function bel: 2^X → [0, 1] has the properties bel(∅) = 0 and bel(X) = 1. The third condition, however, is subsumed by, but relaxed in, DS theory:[2]: p. 19 Either of the following conditions implies the Bayesian special case of the DS theory:[2]: p. 37, 45 As an example of how the two approaches differ, a Bayesian could model the color of a car as a probability distribution over (red, green, blue), assigning one number to each color. Dempster–Shafer would assign numbers to each of (red, green, blue, (red or green), (red or blue), (green or blue), (red or green or blue)). These numbers do not have to be coherent; for example, Bel(red) + Bel(green) does not have to equal Bel(red or green). Thus, Bayes' conditional probability can be considered as a special case of Dempster's rule of combination.[2]: p. 19f. However, it lacks many (if not most) of the properties that make Bayes' rule intuitively desirable, leading some to argue that it cannot be considered a generalization in any meaningful sense.[20] For example, DS theory violates the requirements for Cox's theorem, which implies that it cannot be considered a coherent (contradiction-free) generalization of classical logic—specifically, DS theory violates the requirement that a statement be either true or false (but not both). As a result, DS theory is subject to the Dutch Book argument, implying that any agent using DS theory would agree to a series of bets that result in a guaranteed loss. The Bayesian approximation[21][22] reduces a given bpa m to a (discrete) probability distribution; that is, only singleton subsets of the frame of discernment are allowed to be focal elements of the approximated version of m. It is useful for those who are only interested in single-state hypotheses, and it can be carried out for the 'light' example above. Judea Pearl (1988a, chapter 9;[23] 1988b[24] and 1990)[25] has argued that it is misleading to interpret belief functions as representing either "probabilities of an event," or "the confidence one has in the probabilities assigned to various outcomes," or "degrees of belief (or confidence, or trust) in a proposition," or "degree of ignorance in a situation." Instead, belief functions represent the probability that a given proposition is provable from a set of other propositions, to which probabilities are assigned.
Confusing probabilities of truth with probabilities of provability may lead to counterintuitive results in reasoning tasks such as (1) representing incomplete knowledge, (2) belief-updating and (3) evidence pooling. He further demonstrated that, if partial knowledge is encoded and updated by belief function methods, the resulting beliefs cannot serve as a basis for rational decisions. Kłopotek and Wierzchoń[26] proposed to interpret Dempster–Shafer theory in terms of statistics of decision tables (of rough set theory), whereby the operator of combining evidence should be seen as a relational join of decision tables. In another interpretation, M. A. Kłopotek and S. T. Wierzchoń[27] propose to view this theory as describing destructive material processing (under loss of properties), e.g. as in some semiconductor production processes. Under both interpretations reasoning in DST gives correct results, contrary to the earlier probabilistic interpretations criticized by Pearl in the cited papers and by other researchers. Jøsang proved that Dempster's rule of combination actually is a method for fusing belief constraints.[8] It only represents an approximate fusion operator in other situations, such as cumulative fusion of beliefs, and generally produces incorrect results in such situations. The confusion around the validity of Dempster's rule therefore originates in the failure to correctly interpret the nature of the situations to be modeled. Dempster's rule of combination always produces correct and intuitive results when fusing belief constraints from different sources. In considering preferences one might use the partial order of a lattice instead of the total order of the real line as found in Dempster–Shafer theory. Indeed, Gunther Schmidt has proposed this modification and outlined the method.[28] Given a set of criteria C and a bounded lattice L with ordering ≤, Schmidt defines a relational measure to be a function μ from the power set P(C) into L that respects the order ⊆ on P(C), i.e. A ⊆ B implies μ(A) ≤ μ(B), and such that μ takes the empty subset of P(C) to the least element of L, and takes C to the greatest element of L. Schmidt compares μ with the belief function of Shafer, and he also considers a method of combining measures generalizing the approach of Dempster (when new evidence is combined with previously held evidence). He also introduces a relational integral and compares it to the Choquet integral and Sugeno integral. Any relation m between C and L may be introduced as a "direct valuation", then processed with the calculus of relations to obtain a possibility measure μ.
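Pulling the formal definitions above together, here is a compact computational sketch. The cat-in-the-box masses are the ones used earlier (Alive 0.2, Dead 0.5, Either 0.3); the second mass function fed to Dempster's rule is a hypothetical extra sensor reading, added only to exercise the combination step.

```python
def belief(m, A):
    """bel(A): total mass of the non-empty subsets of A."""
    return sum(mass for B, mass in m.items() if B and B <= A)

def plausibility(m, A):
    """pl(A): total mass of the sets that intersect A."""
    return sum(mass for B, mass in m.items() if B & A)

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize away the conflict K."""
    joint, K = {}, 0.0
    for B, mb in m1.items():
        for C, mc in m2.items():
            inter = B & C
            if inter:
                joint[inter] = joint.get(inter, 0.0) + mb * mc
            else:
                K += mb * mc               # mass that would fall on the empty set
    return {A: v / (1.0 - K) for A, v in joint.items()}, K

ALIVE, DEAD = frozenset({"alive"}), frozenset({"dead"})
EITHER = ALIVE | DEAD

m_cat = {ALIVE: 0.2, DEAD: 0.5, EITHER: 0.3}
print(round(belief(m_cat, ALIVE), 3), round(plausibility(m_cat, ALIVE), 3))    # 0.2 0.5
print(round(belief(m_cat, DEAD), 3), round(plausibility(m_cat, DEAD), 3))      # 0.5 0.8
print(round(belief(m_cat, EITHER), 3), round(plausibility(m_cat, EITHER), 3))  # 1.0 1.0

m_sensor = {ALIVE: 0.6, EITHER: 0.4}       # hypothetical second detector
joint, K = combine(m_cat, m_sensor)
print(round(K, 2))                          # 0.3, the conflict between Dead and Alive
# normalized joint mass: alive ~0.543, dead ~0.286, either ~0.171
print({tuple(sorted(A)): round(v, 3) for A, v in joint.items()})
```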
https://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory
Instatisticsandmachine learning,ensemble methodsuse multiple learning algorithms to obtain betterpredictive performancethan could be obtained from any of the constituent learning algorithms alone.[1][2][3]Unlike astatistical ensemblein statistical mechanics, which is usually infinite, a machine learning ensemble consists of only a concrete finite set of alternative models, but typically allows for much more flexible structure to exist among those alternatives. Supervised learningalgorithms search through ahypothesisspace to find a suitable hypothesis that will make good predictions with a particular problem.[4]Even if this space contains hypotheses that are very well-suited for a particular problem, it may be very difficult to find a good one. Ensembles combine multiple hypotheses to form one which should be theoretically better. Ensemble learningtrains two or more machine learning algorithms on a specificclassificationorregressiontask. The algorithms within the ensemble model are generally referred as "base models", "base learners", or "weak learners" in literature. These base models can be constructed using a single modelling algorithm, or several different algorithms. The idea is to train a diverse set of weak models on the same modelling task, such that the outputs of each weak learner have poor predictive ability (i.e., highbias), and among all weak learners, the outcome and error values exhibit highvariance. Fundamentally, an ensemble learning model trains at least two high-bias (weak) and high-variance (diverse) models to be combined into a better-performing model. The set of weak models — which would not produce satisfactory predictive results individually — are combined or averaged to produce a single, high performing, accurate, and low-variance model to fit the task as required. Ensemble learning typically refers to bagging (bootstrap aggregating),boostingor stacking/blending techniques to induce high variance among the base models. Bagging creates diversity by generating random samples from the training observations and fitting the same model to each different sample — also known ashomogeneous parallel ensembles. Boosting follows an iterative process by sequentially training each base model on the up-weighted errors of the previous base model, producing an additive model to reduce the final model errors — also known assequential ensemble learning. Stacking or blending consists of different base models, each trained independently (i.e. diverse/high variance) to be combined into the ensemble model — producing aheterogeneous parallel ensemble. Common applications of ensemble learning includerandom forests(an extension of bagging), Boosted Tree models, andGradient BoostedTree Models. Models in applications of stacking are generally more task-specific — such as combining clustering techniques with other parametric and/or non-parametric techniques.[5] Evaluating the prediction of an ensemble typically requires more computation than evaluating the prediction of a single model. In one sense, ensemble learning may be thought of as a way to compensate for poor learning algorithms by performing a lot of extra computation. On the other hand, the alternative is to do a lot more learning with one non-ensemble model. An ensemble may be more efficient at improving overall accuracy for the same increase in compute, storage, or communication resources by using that increase on two or more methods, than would have been improved by increasing resource use for a single method. 
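A brief sketch of the three families just described, using scikit-learn (assumed to be installed); the toy dataset and hyperparameters are arbitrary placeholders chosen only to make the example self-contained.

```python
# Bagging, boosting and stacking side by side on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    # Bagging: the same tree learner fit to many bootstrap samples, then voted.
    "bagging (random forest)": RandomForestClassifier(n_estimators=100, random_state=0),
    # Boosting: base learners trained sequentially on re-weighted errors.
    "boosting (AdaBoost)": AdaBoostClassifier(n_estimators=100, random_state=0),
    # Stacking: heterogeneous base models combined by a final estimator.
    "stacking": StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svc", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression()),
}

for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```

The point is only to show where each family plugs in; which one actually performs best depends on the data and on how the base learners are tuned.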
Fast algorithms such asdecision treesare commonly used in ensemble methods (e.g., random forests), although slower algorithms can benefit from ensemble techniques as well. By analogy, ensemble techniques have been used also inunsupervised learningscenarios, for example inconsensus clusteringor inanomaly detection. Empirically, ensembles tend to yield better results when there is a significant diversity among the models.[6][7]Many ensemble methods, therefore, seek to promote diversity among the models they combine.[8][9]Although perhaps non-intuitive, more random algorithms (like random decision trees) can be used to produce a stronger ensemble than very deliberate algorithms (like entropy-reducing decision trees).[10]Using a variety of strong learning algorithms, however, has been shown to be more effective than using techniques that attempt todumb-downthe models in order to promote diversity.[11]It is possible to increase diversity in the training stage of the model using correlation for regression tasks[12]or using information measures such as cross entropy for classification tasks.[13] Theoretically, one can justify the diversity concept because the lower bound of the error rate of an ensemble system can be decomposed into accuracy, diversity, and the other term.[14] Ensemble learning, including both regression and classification tasks, can be explained using a geometric framework.[15]Within this framework, the output of each individual classifier or regressor for the entire dataset can be viewed as a point in a multi-dimensional space. Additionally, the target result is also represented as a point in this space, referred to as the "ideal point." The Euclidean distance is used as the metric to measure both the performance of a single classifier or regressor (the distance between its point and the ideal point) and the dissimilarity between two classifiers or regressors (the distance between their respective points). This perspective transforms ensemble learning into a deterministic problem. For example, within this geometric framework, it can be proved that the averaging of the outputs (scores) of all base classifiers or regressors can lead to equal or better results than the average of all the individual models. It can also be proved that if the optimal weighting scheme is used, then a weighted averaging approach can outperform any of the individual classifiers or regressors that make up the ensemble or as good as the best performer at least. While the number of component classifiers of an ensemble has a great impact on the accuracy of prediction, there is a limited number of studies addressing this problem.A prioridetermining of ensemble size and the volume and velocity of big data streams make this even more crucial for online ensemble classifiers. Mostly statistical tests were used for determining the proper number of components. More recently, a theoretical framework suggested that there is an ideal number of component classifiers for an ensemble such that having more or less than this number of classifiers would deteriorate the accuracy. It is called "the law of diminishing returns in ensemble construction." Their theoretical framework shows that using the same number of independent component classifiers as class labels gives the highest accuracy.[16][17] The Bayes optimal classifier is a classification technique. It is an ensemble of all the hypotheses in the hypothesis space. 
On average, no other ensemble can outperform it.[18]TheNaive Bayes classifieris a version of this that assumes that the data is conditionally independent on the class and makes the computation more feasible. Each hypothesis is given a vote proportional to the likelihood that the training dataset would be sampled from a system if that hypothesis were true. To facilitate training data of finite size, the vote of each hypothesis is also multiplied by the prior probability of that hypothesis. The Bayes optimal classifier can be expressed with the following equation: wherey{\displaystyle y}is the predicted class,C{\displaystyle C}is the set of all possible classes,H{\displaystyle H}is the hypothesis space,P{\displaystyle P}refers to aprobability, andT{\displaystyle T}is the training data. As an ensemble, the Bayes optimal classifier represents a hypothesis that is not necessarily inH{\displaystyle H}. The hypothesis represented by the Bayes optimal classifier, however, is the optimal hypothesis inensemble space(the space of all possible ensembles consisting only of hypotheses inH{\displaystyle H}). This formula can be restated usingBayes' theorem, which says that the posterior is proportional to the likelihood times the prior: hence, Bootstrap aggregation (bagging) involves training an ensemble onbootstrappeddata sets. A bootstrapped set is created by selecting from original training data set with replacement. Thus, a bootstrap set may contain a given example zero, one, or multiple times. Ensemble members can also have limits on the features (e.g., nodes of a decision tree), to encourage exploring of diverse features.[19]The variance of local information in the bootstrap sets and feature considerations promote diversity in the ensemble, and can strengthen the ensemble.[20]To reduce overfitting, a member can be validated using the out-of-bag set (the examples that are not in its bootstrap set).[21] Inference is done byvotingof predictions of ensemble members, calledaggregation. It is illustrated below with an ensemble of four decision trees. The query example is classified by each tree. Because three of the four predict thepositiveclass, the ensemble's overall classification ispositive.Random forestslike the one shown are a common application of bagging. Boosting involves training successive models by emphasizing training data mis-classified by previously learned models. Initially, all data (D1) has equal weight and is used to learn a base model M1. The examples mis-classified by M1 are assigned a weight greater than correctly classified examples. This boosted data (D2) is used to train a second base model M2, and so on. Inference is done by voting. In some cases, boosting has yielded better accuracy than bagging, but tends to over-fit more. The most common implementation of boosting isAdaboost, but some newer algorithms are reported to achieve better results.[citation needed] Bayesian model averaging (BMA) makes predictions by averaging the predictions of models weighted by their posterior probabilities given the data.[22]BMA is known to generally give better answers than a single model, obtained, e.g., viastepwise regression, especially where very different models have nearly identical performance in the training set but may otherwise perform quite differently. The question with any use ofBayes' theoremis the prior, i.e., the probability (perhaps subjective) that each model is the best to use for a given purpose. 
Conceptually, BMA can be used with any prior.Rpackages ensembleBMA[23]and BMA[24]use the prior implied by theBayesian information criterion, (BIC), following Raftery (1995).[25]Rpackage BAS supports the use of the priors implied byAkaike information criterion(AIC) and other criteria over the alternative models as well as priors over the coefficients.[26] The difference between BIC and AIC is the strength of preference for parsimony. BIC's penalty for model complexity isln⁡(n)k{\displaystyle \ln(n)k}, while AIC's is2k{\displaystyle 2k}. Large-sample asymptotic theory establishes that if there is a best model, then with increasing sample sizes, BIC is strongly consistent, i.e., will almost certainly find it, while AIC may not, because AIC may continue to place excessive posterior probability on models that are more complicated than they need to be. On the other hand, AIC and AICc are asymptotically "efficient" (i.e., minimum mean square prediction error), while BIC is not .[27] Haussler et al. (1994) showed that when BMA is used for classification, its expected error is at most twice the expected error of the Bayes optimal classifier.[28]Burnham and Anderson (1998, 2002) contributed greatly to introducing a wider audience to the basic ideas of Bayesian model averaging and popularizing the methodology.[29]The availability of software, including other free open-source packages forRbeyond those mentioned above, helped make the methods accessible to a wider audience.[30] Bayesian model combination (BMC) is an algorithmic correction to Bayesian model averaging (BMA). Instead of sampling each model in the ensemble individually, it samples from the space of possible ensembles (with model weights drawn randomly from a Dirichlet distribution having uniform parameters). This modification overcomes the tendency of BMA to converge toward giving all the weight to a single model. Although BMC is somewhat more computationally expensive than BMA, it tends to yield dramatically better results. BMC has been shown to be better on average (with statistical significance) than BMA and bagging.[31] Use of Bayes' law to compute model weights requires computing the probability of the data given each model. Typically, none of the models in the ensemble are exactly the distribution from which the training data were generated, so all of them correctly receive a value close to zero for this term. This would work well if the ensemble were big enough to sample the entire model-space, but this is rarely possible. Consequently, each pattern in the training data will cause the ensemble weight to shift toward the model in the ensemble that is closest to the distribution of the training data. It essentially reduces to an unnecessarily complex method for doing model selection. The possible weightings for an ensemble can be visualized as lying on a simplex. At each vertex of the simplex, all of the weight is given to a single model in the ensemble. BMA converges toward the vertex that is closest to the distribution of the training data. By contrast, BMC converges toward the point where this distribution projects onto the simplex. In other words, instead of selecting the one model that is closest to the generating distribution, it seeks the combination of models that is closest to the generating distribution. The results from BMA can often be approximated by using cross-validation to select the best model from a bucket of models. 
Likewise, the results from BMC may be approximated by using cross-validation to select the best ensemble combination from a random sampling of possible weightings.

A "bucket of models" is an ensemble technique in which a model selection algorithm is used to choose the best model for each problem. When tested with only one problem, a bucket of models can produce no better results than the best model in the set, but when evaluated across many problems, it will typically produce much better results, on average, than any model in the set. The most common approach used for model-selection is cross-validation selection (sometimes called a "bake-off contest"), which can be summed up as: "try them all with the training set, and pick the one that works best";[32] a sketch of this selection procedure is given at the end of this section.

Gating is a generalization of cross-validation selection. It involves training another learning model to decide which of the models in the bucket is best suited to solve the problem. Often, a perceptron is used for the gating model. It can be used to pick the "best" model, or it can be used to give a linear weight to the predictions from each model in the bucket.

When a bucket of models is used with a large set of problems, it may be desirable to avoid training some of the models that take a long time to train. Landmark learning is a meta-learning approach that seeks to solve this problem. It involves training only the fast (but imprecise) algorithms in the bucket, and then using the performance of these algorithms to help determine which slow (but accurate) algorithm is most likely to do best.[33]

The most common approach for training a classifier is to use the cross-entropy cost function. However, one would like to train an ensemble of models that are diverse, so that combining them provides better results.[34][35] Assume we use a simple ensemble that averages K classifiers. The amended cross-entropy cost is then defined in terms of e^k, the cost function of the kth classifier, q^k, the probability output of the kth classifier, p, the true probability that we need to estimate, and a parameter λ between 0 and 1 that defines the degree of diversity that we would like to establish. When λ = 0 we want each classifier to do its best regardless of the ensemble, and when λ = 1 we would like the classifiers to be as diverse as possible.

Stacking (sometimes called stacked generalization) involves training a model to combine the predictions of several other learning algorithms. First, all of the other algorithms (base estimators) are trained using the available data; then a combiner algorithm (final estimator) is trained to make a final prediction using all the predictions of the base estimators as additional inputs, or using cross-validated predictions from the base estimators, which can prevent overfitting.[36] If an arbitrary combiner algorithm is used, then stacking can theoretically represent any of the ensemble techniques described in this article, although, in practice, a logistic regression model is often used as the combiner.
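As referenced above, a minimal sketch of cross-validation selection over a "bucket of models", assuming scikit-learn estimators as the bucket and a synthetic dataset; the candidate models and scoring setup are illustrative assumptions.

```python
# "Try them all with the training set, and pick the one that works best":
# score each candidate model by cross-validation and keep the best one.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=12, random_state=0)

bucket = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "k nearest neighbours": KNeighborsClassifier(n_neighbors=7),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in bucket.items()}
best_name = max(scores, key=scores.get)
best_model = bucket[best_name].fit(X, y)   # retrain the winner on all training data
print(best_name, scores)
```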
Stacking typically yields performance better than any single one of the trained models.[37] It has been successfully used on both supervised learning tasks (regression,[38] classification and distance learning[39]) and unsupervised learning (density estimation).[40] It has also been used to estimate bagging's error rate.[3][41] It has been reported to out-perform Bayesian model averaging.[42] The two top performers in the Netflix competition utilized blending, which may be considered a form of stacking.[43]

Voting is another form of ensembling. See e.g. Weighted majority algorithm (machine learning).

In recent years, growing computational power, which allows large ensembles to be trained in a reasonable time frame, has led to a steadily increasing number of ensemble learning applications.[49] Some of the applications of ensemble classifiers include:

Land cover mapping is one of the major applications of Earth observation satellite sensors, using remote sensing and geospatial data to identify the materials and objects located on the surface of target areas. Generally, the classes of target materials include roads, buildings, rivers, lakes, and vegetation.[50] Various ensemble learning approaches based on artificial neural networks,[51] kernel principal component analysis (KPCA),[52] decision trees with boosting,[53] random forests[50][54] and automatic design of multiple classifier systems[55] have been proposed to efficiently identify land cover objects.

Change detection is an image analysis problem, consisting of the identification of places where the land cover has changed over time. Change detection is widely used in fields such as urban growth, forest and vegetation dynamics, land use and disaster monitoring.[56] The earliest applications of ensemble classifiers in change detection were designed with majority voting,[57] Bayesian model averaging,[58] and the maximum posterior probability.[59] Given the growth of satellite data over time, the past decade has seen increasing use of time series methods for continuous change detection from image stacks.[60] One example is a Bayesian ensemble changepoint detection method called BEAST, with the software available as a package Rbeast in R, Python, and Matlab.[61]

Distributed denial of service is one of the most threatening cyber-attacks that may happen to an internet service provider.[49] By combining the output of single classifiers, ensemble classifiers reduce the total error of detecting and discriminating such attacks from legitimate flash crowds.[62]

Classification of malware codes such as computer viruses, computer worms, trojans, ransomware and spyware with the usage of machine learning techniques is inspired by the document categorization problem.[63] Ensemble learning systems have shown strong efficacy in this area.[64][65]

An intrusion detection system monitors computer networks or computer systems to identify intruder codes, in a manner similar to an anomaly detection process.
Ensemble learning successfully aids such monitoring systems to reduce their total error.[66][67]

Face recognition, which has recently become one of the most popular research areas of pattern recognition, copes with identification or verification of a person by their digital images.[68] Hierarchical ensembles based on the Gabor Fisher classifier and independent component analysis preprocessing techniques are some of the earliest ensembles employed in this field.[69][70][71]

While speech recognition is mainly based on deep learning, because most of the industry players in this field, like Google, Microsoft and IBM, reveal that the core technology of their speech recognition is based on this approach, speech-based emotion recognition can also achieve satisfactory performance with ensemble learning.[72][73] It is also being successfully used in facial emotion recognition.[74][75][76]

Fraud detection deals with the identification of bank fraud, such as money laundering, credit card fraud and telecommunication fraud, which are broad domains of research and application for machine learning. Because ensemble learning improves the robustness of normal behavior modelling, it has been proposed as an efficient technique to detect such fraudulent cases and activities in banking and credit card systems.[77][78]

The accuracy of prediction of business failure is a very crucial issue in financial decision-making. Therefore, different ensemble classifiers have been proposed to predict financial crises and financial distress.[79] Also, in the trade-based manipulation problem, where traders attempt to manipulate stock prices by buying and selling activities, ensemble classifiers are required to analyze the changes in stock market data and detect suspicious symptoms of stock price manipulation.[79]

Ensemble classifiers have been successfully applied in neuroscience, proteomics and medical diagnosis, for example in the detection of neuro-cognitive disorders (such as Alzheimer's disease or myotonic dystrophy) based on MRI datasets,[80][81][82] and in cervical cytology classification.[83][84] Besides, ensembles have been successfully applied in medical segmentation tasks, for example brain tumor[85][86] and hyperintensities segmentation.[87]
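Returning to the boosting scheme described earlier, in which successive models emphasize training data mis-classified by previous models, AdaBoost is the most common implementation; the sketch below uses scikit-learn with a synthetic dataset and illustrative parameters.

```python
# AdaBoost sketch: base models are trained in sequence on re-weighted data so
# that previously mis-classified examples receive more emphasis; the default
# base learner in scikit-learn is a depth-1 decision tree ("stump").
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

booster = AdaBoostClassifier(n_estimators=50, random_state=0)
booster.fit(X, y)
print("training accuracy:", booster.score(X, y))
```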
https://en.wikipedia.org/wiki/Ensemble_learning
In integrated circuits, optical interconnects refers to any system of transmitting signals from one part of an integrated circuit to another using light. Optical interconnects have been a topic of study due to the high latency and power consumption incurred by conventional metal interconnects in transmitting electrical signals over long distances, such as in interconnects classed as global interconnects. The International Technology Roadmap for Semiconductors (ITRS) has highlighted interconnect scaling as a problem for the semiconductor industry.

In electrical interconnects, nonlinear signals (e.g. digital signals) are conventionally transmitted by copper wires, and these electrical wires all have resistance and capacitance, which severely limit the rise time of signals when the dimensions of the wires are scaled down. Optical solutions can be used to transmit signals over long distances, substituting for the interconnection between dies within the integrated circuit (IC) package. In order to control the optical signals inside the small IC package properly, microelectromechanical systems (MEMS) technology can be used to integrate the optical components (i.e. optical waveguides, optical fibers, lenses, mirrors, optical actuators, optical sensors, etc.) and the electronic parts together effectively.

Conventional physical metal wires possess both resistance and capacitance, limiting the rise time of signals, and bits of information will overlap with each other when the signal frequency is increased beyond a certain level.[1] A rough numerical illustration of this limit is given below. Optical interconnections can provide a number of benefits over conventional metal wires.[1] However, there are still many technical challenges in implementing dense optical interconnects to silicon CMOS chips.[2]
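As referenced above, a rough illustration of how resistance and capacitance bound the electrical rise time. The per-length resistance and capacitance values are arbitrary assumptions rather than process data, and the lumped ln(2)·R·C estimate of the 50% step-response delay is a simplification of a real distributed wire.

```python
# Lumped-RC estimate of interconnect delay: a wire of length L has R and C both
# proportional to L, so the 50% rise delay (~ln(2)*R*C) grows with the square of length.
import math

r_per_mm = 200.0      # ohms per mm (assumed)
c_per_mm = 0.2e-12    # farads per mm (assumed)

for length_mm in (1, 5, 10, 20):
    R = r_per_mm * length_mm
    C = c_per_mm * length_mm
    t50 = math.log(2) * R * C
    print(f"{length_mm:>3} mm wire: ~{t50 * 1e12:.1f} ps to reach 50% of a step input")
```

Since both R and C scale with length, this delay grows quadratically, which is one reason long global interconnects are the first candidates for optical replacement.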
https://en.wikipedia.org/wiki/Optical_interconnect
In software quality assurance, performance testing is in general a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload.[1] It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing, a subset of performance engineering, is a computer science practice which strives to build performance standards into the implementation, design and architecture of a system.

Load testing is the simplest form of performance testing. A load test is usually conducted to understand the behavior of the system under a specific expected load. This load can be the expected concurrent number of users on the application performing a specific number of transactions within the set duration. This test will give the response times of all the important business-critical transactions. The database, application server, etc. are also monitored during the test; this will assist in identifying bottlenecks in the application software and in the hardware that the software is installed on. A minimal scripted sketch of a simple load test is given at the end of this section.

Stress testing is normally used to understand the upper limits of capacity within the system. This kind of test is done to determine the system's robustness in terms of extreme load and helps application administrators to determine if the system will perform sufficiently if the current load goes well above the expected maximum.

Soak testing, also known as endurance testing, is usually done to determine if the system can sustain the continuous expected load. During soak tests, memory utilization is monitored to detect potential leaks. Also important, but often overlooked, is performance degradation, i.e. ensuring that the throughput and/or response times after some long period of sustained activity are as good as or better than at the beginning of the test. It essentially involves applying a significant load to a system for an extended, significant period of time. The goal is to discover how the system behaves under sustained use.

Spike testing is done by suddenly increasing or decreasing the load generated by a very large number of users, and observing the behavior of the system. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load.

Breakpoint testing is similar to stress testing. An incremental load is applied over time while the system is monitored for predetermined failure conditions. Breakpoint testing is sometimes referred to as Capacity Testing because it can be said to determine the maximum capacity below which the system will perform to its required specifications or Service Level Agreements. The results of breakpoint analysis applied to a fixed environment can be used to determine the optimal scaling strategy in terms of required hardware or conditions that should trigger scaling-out events in a cloud environment.

Rather than testing for performance from a load perspective, configuration tests are created to determine the effects of configuration changes to the system's components on the system's performance and behavior. A common example would be experimenting with different methods of load-balancing.

Isolation testing is not unique to performance testing but involves repeating a test execution that resulted in a system problem. Such testing can often isolate and confirm the fault domain.
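As noted above, a minimal sketch of a scripted load test: a pool of virtual users issues HTTP requests concurrently and per-request response times are collected. The target URL, concurrency, request counts and the use of the requests library are assumptions, and error handling is omitted.

```python
# Minimal load-test driver: CONCURRENT_USERS virtual users each issue
# REQUESTS_PER_USER requests and individual response times are recorded.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET = "http://localhost:8080/health"   # hypothetical endpoint under test
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 50

def one_user(_):
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(TARGET, timeout=10)
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    per_user = list(pool.map(one_user, range(CONCURRENT_USERS)))

all_times = sorted(t for user in per_user for t in user)
p95 = all_times[int(0.95 * len(all_times)) - 1]   # simple 95th-percentile estimate
print(f"requests: {len(all_times)}, mean: {sum(all_times) / len(all_times):.3f} s, p95: {p95:.3f} s")
```

A percentile summary such as the p95 value above is the kind of figure that is later compared against a response-time goal.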
A relatively new form of performance testing is the testing of global applications such as Facebook, Google and Wikipedia from load generators placed on the actual target continent, whether physical machines or cloud VMs. These tests usually require an immense amount of preparation and monitoring to be executed successfully.

Performance testing can serve different purposes: Many performance tests are undertaken without setting sufficiently realistic, goal-oriented performance goals. The first question from a business perspective should always be, "why are we performance-testing?". These considerations are part of the business case of the testing. Performance goals will differ depending on the system's technology and purpose, but should always include some of the following:

If a system identifies end-users by some form of log-in procedure then a concurrency goal is highly desirable. By definition this is the largest number of concurrent system users that the system is expected to support at any given moment. The work-flow of a scripted transaction may impact true concurrency, especially if the iterative part contains the log-in and log-out activity. If the system has no concept of end-users, then the performance goal is likely to be based on a maximum throughput or transaction rate.

Server response time refers to the time taken for one system node to respond to the request of another. A simple example would be an HTTP 'GET' request from a browser client to a web server. In terms of response time, this is what all load testing tools actually measure. It may be relevant to set server response time goals between all nodes of the system.

Load-testing tools have difficulty measuring render-response time, since they generally have no concept of what happens within a node apart from recognizing a period of time where there is no activity 'on the wire'. To measure render response time, it is generally necessary to include functional test scripts as part of the performance test scenario. Many load testing tools do not offer this feature.

It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort. See Performance Engineering for more details. However, performance testing is frequently not performed against a specification; e.g., no one will have expressed what the maximum acceptable response time for a given population of users should be.

Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the "weakest link" – there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools include (or can have add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server monitors, which can be analyzed together with the raw performance statistics. Without such instrumentation one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating (assuming a Windows system is under test).
Performance testing can be performed across the web, and even done in different parts of the country, since it is known that the response times of the internet itself vary regionally. It can also be done in-house, althoughrouterswould then need to be configured to introduce the lag that would typically occur on public networks. Loads should be introduced to the system from realistic points. For example, if 50% of a system's user base will be accessing the system via a 56K modem connection and the other half over aT1, then the load injectors (computers that simulate real users) should either inject load over the same mix of connections (ideal) or simulate the network latency of such connections, following the same user profile. It is always helpful to have a statement of the likely peak number of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95 percentile response time, then an injector configuration could be used to test whether the proposed system met that specification. Performance specifications should ask the following questions, at a minimum: A stable build of the system which must resemble the production environment as closely as is possible. To ensure consistent results, the performance testing environment should be isolated from other environments, such asuser acceptance testing(UAT) or development. As a best practice it is always advisable to have a separate performance testing environment resembling the production environment as much as possible. In performance testing, it is often crucial for the test conditions to be similar to the expected actual use. However, in practice this is hard to arrange and not wholly possible, since production systems are subjected to unpredictable workloads. Test workloads may mimic occurrences in the production environment as far as possible, but only in the simplest systems can one exactly replicate this workload variability. Loosely-coupled architectural implementations (e.g.:SOA) have created additional complexities with performance testing. To truly replicate production-like states, enterprise services or assets that share a commoninfrastructureor platform require coordinated performance testing, with all consumers creating production-like transaction volumes and load on shared infrastructures or platforms. Because this activity is so complex and costly in money and time, some organizations now use tools to monitor and simulate production-like conditions (also referred as "noise") in their performance testing environments (PTE) to understand capacity and resource requirements and verify / validate quality attributes. It is critical to the cost performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true in the case of functional testing, but even more so with performance testing, due to the end-to-end nature of its scope. It is crucial for a performance test team to be involved as early as possible, because it is time-consuming to acquire and prepare the testing environment and other key performance requisites. Performance testing is mainly divided into two main categories: This part of performance testing mainly deals with creating/scripting the work flows of key identified business processes. This can be done using a wide variety of tools. 
Each of the tools mentioned in the above list (which is neither exhaustive nor complete) either employs a scripting language (C, Java, JS) or some form of visual representation (drag and drop) to create and simulate end-user work flows. Most of the tools allow for something called "record and replay", wherein the performance tester launches the testing tool, hooks it onto a browser or thick client, and captures all the network transactions which happen between the client and server. In doing so, a script is developed which can be enhanced or modified to emulate various business scenarios.

Performance monitoring forms the other face of performance testing. With performance monitoring, the behavior and response characteristics of the application under test are observed. The parameters below, together with server hardware parameters, are usually monitored during a performance test execution. As a first step, the patterns generated by these four parameters provide a good indication of where the bottleneck lies. To determine the exact root cause of the issue, software engineers use tools such as profilers to measure what parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintained acceptable response time.

Performance testing technology employs one or more PCs or Unix servers to act as injectors, each emulating the presence of numbers of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load: to start with a few virtual users and increase the number over time to a predetermined maximum. The test result shows how the performance varies with the load, given as number of users vs. response time. Various tools are available to perform such tests. Tools in this category usually execute a suite of tests which emulate real users against the system. Sometimes the results can reveal oddities, e.g., that while the average response time might be acceptable, there are outliers of a few key transactions that take considerably longer to complete – something that might be caused by inefficient database queries, pictures, etc.

Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded. Does the system crash? How long does it take to recover if a large load is reduced? Does its failure cause collateral damage?

Analytical performance modeling is a method to model the behavior of a system in a spreadsheet. The model is fed with measurements of transaction resource demands (CPU, disk I/O, LAN, WAN), weighted by the transaction mix (business transactions per hour). The weighted transaction resource demands are added up to obtain the hourly resource demands and divided by the hourly resource capacity to obtain the resource loads. Using the response-time formula R = S/(1 − U), where R is the response time, S is the service time and U is the load, response times can be calculated and calibrated with the results of the performance tests. Analytical performance modeling allows evaluation of design options and system sizing based on actual or anticipated business use. It is therefore much faster and cheaper than performance testing, though it requires thorough understanding of the hardware platforms.
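A worked sketch of the spreadsheet-style analytical model just described, using the formula R = S/(1 − U); the transaction mix, service demands and capacity figures are illustrative assumptions.

```python
# Analytical performance model: weighted resource demands give the hourly CPU
# load U, and per-transaction response times follow from R = S / (1 - U).
transactions_per_hour = {"search": 30000, "checkout": 3000}    # assumed transaction mix
cpu_seconds_per_txn = {"search": 0.05, "checkout": 0.40}       # assumed service times S

cpu_capacity_seconds_per_hour = 4 * 3600                       # e.g. 4 CPU cores (assumed)

demand = sum(transactions_per_hour[t] * cpu_seconds_per_txn[t] for t in transactions_per_hour)
U = demand / cpu_capacity_seconds_per_hour                     # resource load

for name, S in cpu_seconds_per_txn.items():
    R = S / (1 - U)                                            # response time rises sharply as U -> 1
    print(f"{name}: S = {S:.2f} s, U = {U:.0%}, modelled R = {R:.3f} s")
```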
Tasks to perform such a test would include:

According to the Microsoft Developer Network, the Performance Testing Methodology consists of the following activities:
https://en.wikipedia.org/wiki/Software_performance_testing
Inmathematics(in particular,functional analysis),convolutionis amathematical operationon twofunctionsf{\displaystyle f}andg{\displaystyle g}that produces a third functionf∗g{\displaystyle f*g}, as theintegralof the product of the two functions after one is reflected about the y-axis and shifted. The termconvolutionrefers to both the resulting function and to the process of computing it. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result (seecommutativity). Graphically, it expresses how the 'shape' of one function is modified by the other. Some features of convolution are similar tocross-correlation: for real-valued functions, of a continuous or discrete variable, convolutionf∗g{\displaystyle f*g}differs from cross-correlationf⋆g{\displaystyle f\star g}only in that eitherf(x){\displaystyle f(x)}org(x){\displaystyle g(x)}is reflected about the y-axis in convolution; thus it is a cross-correlation ofg(−x){\displaystyle g(-x)}andf(x){\displaystyle f(x)}, orf(−x){\displaystyle f(-x)}andg(x){\displaystyle g(x)}.[A]For complex-valued functions, the cross-correlation operator is theadjointof the convolution operator. Convolution has applications that includeprobability,statistics,acoustics,spectroscopy,signal processingandimage processing,geophysics,engineering,physics,computer visionanddifferential equations.[1] The convolution can be defined for functions onEuclidean spaceand othergroups(asalgebraic structures).[citation needed]For example,periodic functions, such as thediscrete-time Fourier transform, can be defined on acircleand convolved byperiodic convolution. (See row 18 atDTFT § Properties.) Adiscrete convolutioncan be defined for functions on the set ofintegers. Generalizations of convolution have applications in the field ofnumerical analysisandnumerical linear algebra, and in the design and implementation offinite impulse responsefilters in signal processing.[citation needed] Computing theinverseof the convolution operation is known asdeconvolution. The convolution off{\displaystyle f}andg{\displaystyle g}is writtenf∗g{\displaystyle f*g}, denoting the operator with the symbol∗{\displaystyle *}.[B]It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind ofintegral transform: An equivalent definition is (seecommutativity): While the symbolt{\displaystyle t}is used above, it need not represent the time domain. At eacht{\displaystyle t}, the convolution formula can be described as the area under the functionf(τ){\displaystyle f(\tau )}weighted by the functiong(−τ){\displaystyle g(-\tau )}shifted by the amountt{\displaystyle t}. Ast{\displaystyle t}changes, the weighting functiong(t−τ){\displaystyle g(t-\tau )}emphasizes different parts of the input functionf(τ){\displaystyle f(\tau )}; Ift{\displaystyle t}is a positive value, theng(t−τ){\displaystyle g(t-\tau )}is equal tog(−τ){\displaystyle g(-\tau )}that slides or is shifted along theτ{\displaystyle \tau }-axis toward the right (toward+∞{\displaystyle +\infty }) by the amount oft{\displaystyle t}, while ift{\displaystyle t}is a negative value, theng(t−τ){\displaystyle g(t-\tau )}is equal tog(−τ){\displaystyle g(-\tau )}that slides or is shifted toward the left (toward−∞{\displaystyle -\infty }) by the amount of|t|{\displaystyle |t|}. 
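A small numerical illustration of the reflect-and-shift definition above: sampling two rectangular pulses on a grid and forming a Riemann-sum approximation of their convolution produces the expected triangular pulse. The grid spacing and pulse width are arbitrary choices.

```python
# Convolving a rectangular pulse with itself on a discrete grid: the result is a
# triangular pulse whose peak equals the pulse width (here 1.0).
import numpy as np

dt = 0.01
t = np.arange(-2, 2, dt)
f = np.where(np.abs(t) <= 0.5, 1.0, 0.0)     # rectangular pulse of width 1
g = f.copy()

conv = np.convolve(f, g, mode="same") * dt   # Riemann-sum approximation of (f*g)(t)
print("peak of f*g (expect about 1.0):", conv.max())
```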
For functionsf{\displaystyle f},g{\displaystyle g}supportedon only[0,∞){\displaystyle [0,\infty )}(i.e., zero for negative arguments), the integration limits can be truncated, resulting in: For the multi-dimensional formulation of convolution, seedomain of definition(below). A common engineering notational convention is:[2] which has to be interpreted carefully to avoid confusion. For instance,f(t)∗g(t−t0){\displaystyle f(t)*g(t-t_{0})}is equivalent to(f∗g)(t−t0){\displaystyle (f*g)(t-t_{0})}, butf(t−t0)∗g(t−t0){\displaystyle f(t-t_{0})*g(t-t_{0})}is in fact equivalent to(f∗g)(t−2t0){\displaystyle (f*g)(t-2t_{0})}.[3] Given two functionsf(t){\displaystyle f(t)}andg(t){\displaystyle g(t)}withbilateral Laplace transforms(two-sided Laplace transform) and respectively, the convolution operation(f∗g)(t){\displaystyle (f*g)(t)}can be defined as theinverse Laplace transformof the product ofF(s){\displaystyle F(s)}andG(s){\displaystyle G(s)}.[4][5]More precisely, Lett=u+v{\displaystyle t=u+v}, then Note thatF(s)⋅G(s){\displaystyle F(s)\cdot G(s)}is the bilateral Laplace transform of(f∗g)(t){\displaystyle (f*g)(t)}. A similar derivation can be done using theunilateral Laplace transform(one-sided Laplace transform). The convolution operation also describes the output (in terms of the input) of an important class of operations known aslinear time-invariant(LTI). SeeLTI system theoryfor a derivation of convolution as the result of LTI constraints. In terms of theFourier transformsof the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as atransfer function). SeeConvolution theoremfor a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms. The resultingwaveform(not shown here) is the convolution of functionsf{\displaystyle f}andg{\displaystyle g}. Iff(t){\displaystyle f(t)}is aunit impulse, the result of this process is simplyg(t){\displaystyle g(t)}. Formally: One of the earliest uses of the convolution integral appeared inD'Alembert's derivation ofTaylor's theoreminRecherches sur différents points importants du système du monde,published in 1754.[6] Also, an expression of the type: is used bySylvestre François Lacroixon page 505 of his book entitledTreatise on differences and series, which is the last of 3 volumes of the encyclopedic series:Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800.[7]Soon thereafter, convolution operations appear in the works ofPierre Simon Laplace,Jean-Baptiste Joseph Fourier,Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known asFaltung(which meansfoldinginGerman),composition product,superposition integral, andCarson's integral.[8]Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses.[9][10] The operation: is a particular case of composition products considered by the Italian mathematicianVito Volterrain 1913.[11] When a functiongT{\displaystyle g_{T}}is periodic, with periodT{\displaystyle T}, then for functions,f{\displaystyle f}, such thatf∗gT{\displaystyle f*g_{T}}exists, the convolution is also periodic and identical to: wheret0{\displaystyle t_{0}}is an arbitrary choice. 
The summation is called aperiodic summationof the functionf{\displaystyle f}. WhengT{\displaystyle g_{T}}is a periodic summation of another function,g{\displaystyle g}, thenf∗gT{\displaystyle f*g_{T}}is known as acircularorcyclicconvolution off{\displaystyle f}andg{\displaystyle g}. And if the periodic summation above is replaced byfT{\displaystyle f_{T}}, the operation is called aperiodicconvolution offT{\displaystyle f_{T}}andgT{\displaystyle g_{T}}. For complex-valued functionsf{\displaystyle f}andg{\displaystyle g}defined on the setZ{\displaystyle \mathbb {Z} }of integers, thediscrete convolutionoff{\displaystyle f}andg{\displaystyle g}is given by:[12] or equivalently (seecommutativity) by: The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of twopolynomials, then the coefficients of theordinary product of the two polynomialsare the convolution of the original two sequences. This is known as theCauchy productof the coefficients of the sequences. Thus whenghas finite support in the set{−M,−M+1,…,M−1,M}{\displaystyle \{-M,-M+1,\dots ,M-1,M\}}(representing, for instance, afinite impulse response), a finite summation may be used:[13] When a functiongN{\displaystyle g_{_{N}}}is periodic, with periodN,{\displaystyle N,}then for functions,f,{\displaystyle f,}such thatf∗gN{\displaystyle f*g_{_{N}}}exists, the convolution is also periodic and identical to: The summation onk{\displaystyle k}is called aperiodic summationof the functionf.{\displaystyle f.} IfgN{\displaystyle g_{_{N}}}is a periodic summation of another function,g,{\displaystyle g,}thenf∗gN{\displaystyle f*g_{_{N}}}is known as acircular convolutionoff{\displaystyle f}andg.{\displaystyle g.} When the non-zero durations of bothf{\displaystyle f}andg{\displaystyle g}are limited to the interval[0,N−1],{\displaystyle [0,N-1],}f∗gN{\displaystyle f*g_{_{N}}}reduces to these common forms: The notationf∗Ng{\displaystyle f*_{N}g}forcyclic convolutiondenotes convolution over thecyclic groupofintegers moduloN. Circular convolution arises most often in the context of fast convolution with afast Fourier transform(FFT) algorithm. In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation inmultiplicationof multi-digit numbers, which can therefore be efficiently implemented with transform techniques (Knuth 1997, §4.3.3.C;von zur Gathen & Gerhard 2003, §8.2). Eq.1requiresNarithmetic operations per output value andN2operations forNoutputs. That can be significantly reduced with any of several fast algorithms.Digital signal processingand other applications typically use fast convolution algorithms to reduce the cost of the convolution to O(NlogN) complexity. The most common fast convolution algorithms usefast Fourier transform(FFT) algorithms via thecircular convolution theorem. Specifically, thecircular convolutionof two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. 
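A short check of the circular convolution theorem just mentioned: for two length-N sequences, the circular convolution computed directly from the definition matches the inverse FFT of the pointwise product of the FFTs. The example sequences are arbitrary.

```python
# Circular convolution two ways: directly from the definition, and via the FFT.
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 0.0, 2.0])
N = len(f)

direct = np.array([sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)])
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

print(direct)
print(via_fft)    # agrees with `direct` up to floating-point error
```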
Other fast convolution algorithms, such as theSchönhage–Strassen algorithmor the Mersenne transform,[14]use fast Fourier transforms in otherrings. The Winograd method is used as an alternative to the FFT.[15]It significantly speeds up 1D,[16]2D,[17]and 3D[18]convolution. If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available.[19]Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as theoverlap–save methodandoverlap–add method.[20]A hybrid convolution method that combines block andFIRalgorithms allows for a zero input-output latency that is useful for real-time convolution computations.[21] The convolution of two complex-valued functions onRdis itself a complex-valued function onRd, defined by: and is well-defined only iffandgdecay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up ingat infinity can be easily offset by sufficiently rapid decay inf. The question of existence thus may involve different conditions onfandg: Iffandgarecompactly supportedcontinuous functions, then their convolution exists, and is also compactly supported and continuous (Hörmander 1983, Chapter 1). More generally, if either function (sayf) is compactly supported and the other islocally integrable, then the convolutionf∗gis well-defined and continuous. Convolution offandgis also well defined when both functions are locally square integrable onRand supported on an interval of the form[a, +∞)(or both supported on[−∞,a]). The convolution offandgexists iffandgare bothLebesgue integrable functionsinL1(Rd), and in this casef∗gis also integrable (Stein & Weiss 1971, Theorem 1.3). This is a consequence ofTonelli's theorem. This is also true for functions inL1, under the discrete convolution, or more generally for theconvolution on any group. Likewise, iff∈L1(Rd)  andg∈Lp(Rd)  where1 ≤p≤ ∞,  thenf*g∈Lp(Rd),  and In the particular casep= 1, this shows thatL1is aBanach algebraunder the convolution (and equality of the two sides holds iffandgare non-negative almost everywhere). More generally,Young's inequalityimplies that the convolution is a continuous bilinear map between suitableLpspaces. Specifically, if1 ≤p,q,r≤ ∞satisfy: then so that the convolution is a continuous bilinear mapping fromLp×LqtoLr. The Young inequality for convolution is also true in other contexts (circle group, convolution onZ). The preceding inequality is not sharp on the real line: when1 <p,q,r< ∞, there exists a constantBp,q< 1such that: The optimal value ofBp,qwas discovered in 1975[22]and independently in 1976,[23]seeBrascamp–Lieb inequality. A stronger estimate is true provided1 <p,q,r< ∞: where‖g‖q,w{\displaystyle \|g\|_{q,w}}is theweakLqnorm. Convolution also defines a bilinear continuous mapLp,w×Lq,w→Lr,w{\displaystyle L^{p,w}\times L^{q,w}\to L^{r,w}}for1<p,q,r<∞{\displaystyle 1<p,q,r<\infty }, owing to the weak Young inequality:[24] In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that iffandgboth decay rapidly, thenf∗galso decays rapidly. In particular, iffandgarerapidly decreasing functions, then so is the convolutionf∗g. 
Combined with the fact that convolution commutes with differentiation (see#Properties), it follows that the class ofSchwartz functionsis closed under convolution (Stein & Weiss 1971, Theorem 3.3). Iffis a smooth function that iscompactly supportedandgis a distribution, thenf∗gis a smooth function defined by More generally, it is possible to extend the definition of the convolution in a unique way withφ{\displaystyle \varphi }the same asfabove, so that the associative law remains valid in the case wherefis a distribution, andga compactly supported distribution (Hörmander 1983, §4.2). The convolution of any twoBorel measuresμandνofbounded variationis the measureμ∗ν{\displaystyle \mu *\nu }defined by (Rudin 1962) In particular, whereA⊂Rd{\displaystyle A\subset \mathbf {R} ^{d}}is a measurable set and1A{\displaystyle 1_{A}}is theindicator functionofA{\displaystyle A}. This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as the convolution of L1functions when μ and ν are absolutely continuous with respect to the Lebesgue measure. The convolution of measures also satisfies the following version of Young's inequality where the norm is thetotal variationof a measure. Because the space of measures of bounded variation is aBanach space, convolution of measures can be treated with standard methods offunctional analysisthat may not apply for the convolution of distributions. The convolution defines a product on thelinear spaceof integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutativeassociative algebrawithoutidentity(Strichartz 1994, §3.3). Other linear spaces of functions, such as the space of continuous functions of compact support, areclosedunder the convolution, and so also form commutative associative algebras. Proof (usingconvolution theorem): q(t)⟺FQ(f)=R(f)S(f){\displaystyle q(t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ Q(f)=R(f)S(f)} q(−t)⟺FQ(−f)=R(−f)S(−f){\displaystyle q(-t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ Q(-f)=R(-f)S(-f)} q(−t)=F−1{R(−f)S(−f)}=F−1{R(−f)}∗F−1{S(−f)}=r(−t)∗s(−t){\displaystyle {\begin{aligned}q(-t)&={\mathcal {F}}^{-1}{\bigg \{}R(-f)S(-f){\bigg \}}\\&={\mathcal {F}}^{-1}{\bigg \{}R(-f){\bigg \}}*{\mathcal {F}}^{-1}{\bigg \{}S(-f){\bigg \}}\\&=r(-t)*s(-t)\end{aligned}}} Iffandgare integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals:[25] This follows fromFubini's theorem. The same result holds iffandgare only assumed to be nonnegative measurable functions, byTonelli's theorem. In the one-variable case, whereddx{\displaystyle {\frac {d}{dx}}}is thederivative. More generally, in the case of functions of several variables, an analogous formula holds with thepartial derivative: A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution offandgis differentiable as many times asfandgare in total. These identities hold for example under the condition thatfandgare absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence ofYoung's convolution inequality. 
For instance, whenfis continuously differentiable with compact support, andgis an arbitrary locally integrable function, These identities also hold much more broadly in the sense of tempered distributions if one offorgis arapidly decreasing tempered distribution, a compactly supported tempered distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution. In the discrete case, thedifference operatorDf(n) =f(n+ 1) −f(n) satisfies an analogous relationship: Theconvolution theoremstates that[26] whereF{f}{\displaystyle {\mathcal {F}}\{f\}}denotes theFourier transformoff{\displaystyle f}. Versions of this theorem also hold for theLaplace transform,two-sided Laplace transform,Z-transformandMellin transform. IfW{\displaystyle {\mathcal {W}}}is theFourier transform matrix, then where∙{\displaystyle \bullet }isface-splitting product,[27][28][29][30][31]⊗{\displaystyle \otimes }denotesKronecker product,∘{\displaystyle \circ }denotesHadamard product(this result is an evolving ofcount sketchproperties[32]). This can be generalized for appropriate matricesA,B{\displaystyle \mathbf {A} ,\mathbf {B} }: from the properties of theface-splitting product. The convolution commutes with translations, meaning that where τxf is the translation of the functionfbyxdefined by Iffis aSchwartz function, thenτxfis the convolution with a translated Dirac delta functionτxf=f∗τxδ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution. Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study oftime-invariant systems, and especiallyLTI system theory. The representing functiongSis theimpulse responseof the transformationS. A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition thatSmust be acontinuous linear operatorwith respect to the appropriatetopology. It is known, for instance, that every continuous translation invariant continuous linear operator onL1is the convolution with a finiteBorel measure. More generally, every continuous translation invariant continuous linear operator onLpfor 1 ≤p< ∞ is the convolution with atempered distributionwhoseFourier transformis bounded. To wit, they are all given by boundedFourier multipliers. IfGis a suitablegroupendowed with ameasureλ, and iffandgare real or complex valuedintegrablefunctions onG, then we can define their convolution by It is not commutative in general. In typical cases of interestGis alocally compactHausdorfftopological groupand λ is a (left-)Haar measure. In that case, unlessGisunimodular, the convolution defined in this way is not the same as∫f(xy−1)g(y)dλ(y){\textstyle \int f\left(xy^{-1}\right)g(y)\,d\lambda (y)}. The preference of one over the other is made so that convolution with a fixed functiongcommutes with left translation in the group: Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former. 
Onlocally compact abelian groups, a version of theconvolution theoremholds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. Thecircle groupTwith the Lebesgue measure is an immediate example. For a fixedginL1(T), we have the following familiar operator acting on theHilbert spaceL2(T): The operatorTiscompact. A direct calculation shows that its adjointT*is convolution with By the commutativity property cited above,Tisnormal:T*T=TT* . Also,Tcommutes with the translation operators. Consider the familySof operators consisting of all such convolutions and the translation operators. ThenSis a commuting family of normal operators. According tospectral theory, there exists an orthonormal basis {hk} that simultaneously diagonalizesS. This characterizes convolutions on the circle. Specifically, we have which are precisely thecharactersofT. Each convolution is a compactmultiplication operatorin this basis. This can be viewed as a version of the convolution theorem discussed above. A discrete example is a finitecyclic groupof ordern. Convolution operators are here represented bycirculant matrices, and can be diagonalized by thediscrete Fourier transform. A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensionalunitary representationsform an orthonormal basis inL2by thePeter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects ofharmonic analysisthat depend on the Fourier transform. LetGbe a (multiplicatively written) topological group. If μ and ν areRadon measuresonG, then their convolutionμ∗νis defined as thepushforward measureof thegroup actionand can be written as[33] for each measurable subsetEofG. The convolution is also a Radon measure, whosetotal variationsatisfies In the case whenGislocally compactwith (left-)Haar measureλ, and μ and ν areabsolutely continuouswith respect to a λ,so that each has a density function, then the convolution μ∗ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions. In fact, ifeithermeasure is absolutely continuous with respect to the Haar measure, then so is their convolution.[34] If μ and ν areprobability measureson the topological group(R,+),then the convolutionμ∗νis theprobability distributionof the sumX+Yof twoindependentrandom variablesXandYwhose respective distributions are μ and ν. Inconvex analysis, theinfimal convolutionof proper (not identically+∞{\displaystyle +\infty })convex functionsf1,…,fm{\displaystyle f_{1},\dots ,f_{m}}onRn{\displaystyle \mathbb {R} ^{n}}is defined by:[35](f1∗⋯∗fm)(x)=infx{f1(x1)+⋯+fm(xm)|x1+⋯+xm=x}.{\displaystyle (f_{1}*\cdots *f_{m})(x)=\inf _{x}\{f_{1}(x_{1})+\cdots +f_{m}(x_{m})|x_{1}+\cdots +x_{m}=x\}.}It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform is played instead by theLegendre transform:φ∗(x)=supy(x⋅y−φ(y)).{\displaystyle \varphi ^{*}(x)=\sup _{y}(x\cdot y-\varphi (y)).}We have:(f1∗⋯∗fm)∗(x)=f1∗(x)+⋯+fm∗(x).{\displaystyle (f_{1}*\cdots *f_{m})^{*}(x)=f_{1}^{*}(x)+\cdots +f_{m}^{*}(x).} Let (X, Δ, ∇,ε,η) be abialgebrawith comultiplication Δ, multiplication ∇, unit η, and counitε. The convolution is a product defined on theendomorphism algebraEnd(X) as follows. 
Letφ,ψ∈ End(X), that is,φ,ψ:X→Xare functions that respect all algebraic structure ofX, then the convolutionφ∗ψis defined as the composition The convolution appears notably in the definition ofHopf algebras(Kassel 1995, §III.3). A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphismSsuch that Convolution and related operations are found in many applications in science, engineering and mathematics.
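As a concrete instance of the finite cyclic group case discussed earlier, the sketch below builds the circulant matrix of a vector g and checks that applying it to x agrees with pointwise multiplication of DFT spectra, i.e. that the discrete Fourier transform diagonalizes the convolution operator. The vectors are arbitrary examples.

```python
# Convolution on Z/nZ as a circulant matrix: applying the matrix to x equals
# the inverse FFT of the pointwise product of the spectra of g and x.
import numpy as np

g = np.array([2.0, -1.0, 0.5, 3.0, 1.0])
n = len(g)
C = np.column_stack([np.roll(g, k) for k in range(n)])   # circulant matrix of g

x = np.array([1.0, 0.0, -2.0, 4.0, 0.5])
matrix_result = C @ x
fourier_result = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)))
print(np.allclose(matrix_result, fourier_result))        # True
```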
https://en.wikipedia.org/wiki/Convolution
Instatistics,explained variationmeasures the proportion to which a mathematical model accounts for the variation (dispersion) of a given data set. Often, variation is quantified asvariance; then, the more specific termexplained variancecan be used. The complementary part of the total variation is calledunexplainedorresidualvariation; likewise, when discussing variance as such, this is referred to asunexplainedorresidual variance. Following Kent (1983),[1]we use the Fraser information (Fraser 1965)[2] whereg(r){\displaystyle g(r)}is the probability density of a random variableR{\displaystyle R\,}, andf(r;θ){\displaystyle f(r;\theta )\,}withθ∈Θi{\displaystyle \theta \in \Theta _{i}}(i=0,1{\displaystyle i=0,1\,}) are two families of parametric models. Model family 0 is the simpler one, with a restricted parameter spaceΘ0⊂Θ1{\displaystyle \Theta _{0}\subset \Theta _{1}}. Parameters are determined bymaximum likelihood estimation, The information gain of model 1 over model 0 is written as where a factor of 2 is included for convenience. Γ is always nonnegative; it measures the extent to which the best model of family 1 is better than the best model of family 0 in explainingg(r). Assume a two-dimensional random variableR=(X,Y){\displaystyle R=(X,Y)}whereXshall be considered as an explanatory variable, andYas a dependent variable. Models of family 1 "explain"Yin terms ofX, whereas in family 0,XandYare assumed to be independent. We define the randomness ofYbyD(Y)=exp⁡[−2F(θ0)]{\displaystyle D(Y)=\exp[-2F(\theta _{0})]}, and the randomness ofY, givenX, byD(Y∣X)=exp⁡[−2F(θ1)]{\displaystyle D(Y\mid X)=\exp[-2F(\theta _{1})]}. Then, can be interpreted as proportion of the data dispersion which is "explained" byX. The fraction of variance unexplained is an established concept in the context oflinear regression. The usual definition of thecoefficient of determinationis based on the fundamental concept of explained variance. LetXbe a random vector, andYa random variable that is modeled by a normal distribution with centreμ=ΨTX{\displaystyle \mu =\Psi ^{\textrm {T}}X}. In this case, the above-derived proportion of explained variationρC2{\displaystyle \rho _{C}^{2}}equals the squaredcorrelation coefficientR2{\displaystyle R^{2}}. Note the strong model assumptions: the centre of theYdistribution must be a linear function ofX, and for any givenx, theYdistribution must be normal. In other situations, it is generally not justified to interpretR2{\displaystyle R^{2}}as proportion of explained variance. Explained variance is routinely used inprincipal component analysis. The relation to the Fraser–Kent information gain remains to be clarified. As the fraction of "explained variance" equals the squared correlation coefficientR2{\displaystyle R^{2}}, it shares all the disadvantages of the latter: it reflects not only the quality of the regression, but also the distribution of the independent (conditioning) variables. In the words of one critic: "ThusR2{\displaystyle R^{2}}gives the 'percentage of variance explained' by the regression, an expression that, for most social scientists, is of doubtful meaning but great rhetorical value. If this number is large, the regression gives a good fit, and there is little point in searching for additional variables. Other regression equations on different data sets are said to be less satisfactory or less powerful if theirR2{\displaystyle R^{2}}is lower. 
Nothing aboutR2{\displaystyle R^{2}}supports these claims".[3]: 58 And, after constructing an example whereR2{\displaystyle R^{2}}is enhanced just by jointly considering data from two different populations: "'Explained variance' explains nothing."[3][page needed][4]: 183
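A small sketch of the linear-regression special case discussed above, in which the proportion of explained variance equals the squared correlation coefficient R²; the synthetic data are an illustrative assumption.

```python
# Explained variance in simple linear regression: 1 - SS_res/SS_tot equals the
# squared correlation coefficient between x and y.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = 2.0 * x + rng.normal(scale=1.5, size=300)

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)        # unexplained (residual) variation
ss_tot = np.sum((y - y.mean()) ** 2)     # total variation
explained = 1 - ss_res / ss_tot

r = np.corrcoef(x, y)[0, 1]
print(explained, r ** 2)                 # the two values agree
```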
https://en.wikipedia.org/wiki/Explained_variance
Time delay neural network(TDNN)[1]is a multilayerartificial neural networkarchitecture whose purpose is to 1) classify patterns with shift-invariance, and 2) model context at each layer of the network. Shift-invariant classification means that the classifier does not require explicit segmentation prior to classification. For the classification of a temporal pattern (such as speech), the TDNN thus avoids having to determine the beginning and end points of sounds before classifying them. For contextual modelling in a TDNN, each neural unit at each layer receives input not only from activations/features at the layer below, but from a pattern of unit output and its context. For time signals each unit receives as input the activation patterns over time from units below. Applied to two-dimensional classification (images, time-frequency patterns), the TDNN can be trained with shift-invariance in the coordinate space and avoids precise segmentation in the coordinate space. The TDNN was introduced in the late 1980s and applied to a task ofphonemeclassification for automaticspeech recognitionin speech signals where the automatic determination of precise segments or feature boundaries was difficult or impossible. Because the TDNN recognizes phonemes and their underlying acoustic/phonetic features, independent of position in time, it improved performance over static classification.[1][2]It was also applied to two-dimensional signals (time-frequency patterns in speech,[3]and coordinate space pattern in OCR[4]). Kunihiko Fukushimapublished theneocognitronin 1980.[5]Max poolingappears in a 1982 publication on the neocognitron[6]and was in the 1989 publication inLeNet-5.[7] In 1990, Yamaguchi et al. used max pooling in TDNNs in order to realize a speaker independent isolated word recognition system.[8] The Time Delay Neural Network, like other neural networks, operates with multiple interconnected layers ofperceptrons, and is implemented as afeedforward neural network. All neurons (at each layer) of a TDNN receive inputs from the outputs of neurons at the layer below but with two differences: In the case of a speech signal, inputs are spectral coefficients over time. In order to learn critical acoustic-phonetic features (for example formant transitions, bursts, frication, etc.) without first requiring precise localization, the TDNN is trained time-shift-invariantly. Time-shift invariance is achieved through weight sharing across time during training: Time shifted copies of the TDNN are made over the input range (from left to right in Fig.1). Backpropagation is then performed from an overall classification target vector (see TDNN diagram, three phoneme class targets (/b/, /d/, /g/) are shown in the output layer), resulting in gradients that will generally vary for each of the time-shifted network copies. Since such time-shifted networks are only copies, however, the position dependence is removed by weight sharing. In this example, this is done by averaging the gradients from each time-shifted copy before performing the weight update. In speech, time-shift invariant training was shown to learn weight matrices that are independent of precise positioning of the input. 
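The weight sharing just described can be sketched as a one-dimensional convolution over time: the same weight tensor is applied to every time-shifted window of the input features. The feature dimension, context width and random weights below are illustrative assumptions, not a published TDNN configuration.

```python
# One TDNN layer in numpy: shared weights W are applied to every window of
# `context` consecutive frames, producing one hidden vector per time shift.
import numpy as np

rng = np.random.default_rng(0)

n_frames, n_features = 50, 16                 # e.g. spectral coefficients over time
context, n_hidden = 3, 8                      # delays 0..2: each unit sees 3 frames

x = rng.normal(size=(n_frames, n_features))
W = rng.normal(size=(context, n_features, n_hidden)) * 0.1   # shared across all time shifts
b = np.zeros(n_hidden)

def tdnn_layer(x, W, b):
    T = x.shape[0] - W.shape[0] + 1
    out = np.empty((T, W.shape[2]))
    for t in range(T):                                        # slide the shared weights over time
        window = x[t:t + W.shape[0]]                          # (context, n_features)
        out[t] = np.tanh(np.einsum("cf,cfh->h", window, W) + b)
    return out

print(tdnn_layer(x, W, b).shape)   # (48, 8): one activation vector per valid time shift
```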
The weight matrices could also be shown to detect important acoustic-phonetic features that are known to be important for human speech perception, such as formant transitions, bursts, etc.[1] TDNNs could also be combined or grown by way of pre-training.[9] The precise architecture of TDNNs (time-delays, number of layers) is mostly determined by the designer depending on the classification problem and the most useful context sizes. The delays or context windows are chosen specific to each application. Work has also been done to create adaptable time-delay TDNNs[10] where this manual tuning is eliminated.

TDNN-based phoneme recognizers compared favourably in early comparisons with HMM-based phone models.[1][9] Modern deep TDNN architectures include many more hidden layers and sub-sample or pool connections over broader contexts at higher layers. They achieve up to 50% word error reduction over GMM-based acoustic models.[11][12] While the different layers of TDNNs are intended to learn features of increasing context width, they do model local contexts. When longer-distance relationships and pattern sequences have to be processed, learning states and state sequences is important, and TDNNs can be combined with other modelling techniques.[13][3][4]

TDNNs were introduced in 1989 to solve problems in speech recognition[2] and initially focused on shift-invariant phoneme recognition. Speech lends itself nicely to TDNNs as spoken sounds are rarely of uniform length and precise segmentation is difficult or impossible. By scanning a sound over past and future, the TDNN is able to construct a model for the key elements of that sound in a time-shift-invariant manner. This is particularly useful as sounds are smeared out through reverberation.[11][12] Large phonetic TDNNs can be constructed modularly through pre-training and combining smaller networks.[9]

Large vocabulary speech recognition requires recognizing sequences of phonemes that make up words, subject to the constraints of a large pronunciation vocabulary. Integration of TDNNs into large vocabulary speech recognizers is possible by introducing state transitions and search between phonemes that make up a word. The resulting Multi-State Time-Delay Neural Network (MS-TDNN) can be trained discriminatively from the word level, thereby optimizing the entire arrangement toward word recognition instead of phoneme classification.[13][14][4]

Two-dimensional variants of the TDNNs were proposed for speaker independence.[3] Here, shift-invariance is applied to the time as well as to the frequency axis in order to learn hidden features that are independent of precise location in time and in frequency (the latter being due to speaker variability).

One of the persistent problems in speech recognition is recognizing speech when it is corrupted by echo and reverberation (as is the case in large rooms and distant microphones). Reverberation can be viewed as corrupting speech with delayed versions of itself. In general, it is difficult, however, to de-reverberate a signal as the impulse response function (and thus the convolutional noise experienced by the signal) is not known for any arbitrary space.
The TDNN was shown to recognize speech robustly despite different levels of reverberation.[11][12] TDNNs were also used successfully in early demonstrations of audio-visual speech recognition, where the sounds of speech are complemented by visually reading lip movement.[14] Here, TDNN-based recognizers used visual and acoustic features jointly to achieve improved recognition accuracy, particularly in the presence of noise, where complementary information from an alternate modality could be fused in a neural network. TDNNs have been used effectively in compact and high-performance handwriting recognition systems. Shift-invariance was also adapted to spatial patterns (x/y-axes) in offline image-based handwriting recognition.[4] Video has a temporal dimension that makes a TDNN well suited to analysing motion patterns; examples of such analysis include vehicle detection and pedestrian recognition.[15] When examining video, successive frames are fed into the TDNN as input, each image being the next frame in the sequence. The strength of the TDNN here comes from its ability to examine an object across earlier and later frames, so that it can be detected regardless of where it appears in time. If an object can be recognized in this manner, an application can anticipate where that object will be found in subsequent frames and act accordingly. Two-dimensional TDNNs were later applied to other image-recognition tasks under the name of "Convolutional Neural Networks", where shift-invariant training is applied to the x/y axes of an image.
https://en.wikipedia.org/wiki/Time_delay_neural_network
TheRule Interchange Format(RIF) is aW3C Recommendation. RIF is part of the infrastructure for thesemantic web, along with (principally)SPARQL,RDFandOWL. Although originally envisioned by many as a "rules layer" for the semantic web, in reality the design of RIF is based on the observation that there are many "rules languages" in existence, and what is needed is to exchange rules between them.[1] RIF includes three dialects, a Core dialect which is extended into a Basic Logic Dialect (BLD) and Production Rule Dialect (PRD).[2] The RIF working group was chartered in late 2005. Among its goals was drawing in members of the commercial rules marketplace. The working group started with more than 50 members and two chairs drawn from industry, Christian de Sainte Marie ofILOG, andChris WeltyofIBM. The charter, to develop an interchange formatbetween existing rule systemswas influenced by a workshop in the spring of 2005 in which it was clear that one rule language would not serve the needs of all interested parties (Dr. Welty described the outcome of the workshop asNash Equilibrium[3]). RIF became aW3C Recommendationon June 22, 2010.[4] Aruleis perhaps one of the simplest notions in computer science: it is an IF - THEN construct. If some condition (the IF part) that is checkable in some dataset holds, then the conclusion (the THEN part) is processed. Deriving somewhat from its roots inlogic, rule systems use a notion of predicates that hold or not of some data object or objects. For example, the fact that two people are married might be represented with predicates as: MARRIEDis a predicate that can be said toholdbetweenLISAandJOHN. Adding the notion of variables, a rule could be something like: We would expect that for every pair of ?x and ?y (e.g.LISAandJOHN) for which theMARRIEDpredicate holds, some computer system that could understand this rule would conclude that theLOVESpredicate holds for that pair as well. Rules are a simple way of encoding knowledge, and are a drastic simplification offirst order logic, for which it is relatively easy to implement inference engines that can process the conditions and draw the right conclusions. Arule systemis an implementation of a particularsyntaxandsemanticsof rules, which may extend the simple notion described above to includeexistential quantification,disjunction,logical conjunction,negation,functions,non monotonicity, and many other features. Rule systems have been implemented and studied since the mid-1970s and saw significant uptake in the 1980s during the height of so-calledExpert Systems. The standard RIF dialects are Core, BLD and PRD. These dialects depend on an extensive list of datatypes with builtin functions and predicates on those datatypes. Relations of various RIF dialects are shown in the following Venn diagram.[5] Datatypes and Built-Ins (DTB) specifies a list of datatypes, built-in functions and built-in predicates expected to be supported by RIF dialects. Some of the datatypes are adapted fromXML SchemaDatatypes,[6]XPathfunctions[7]and rdf:PlainLiteral functions.[8] The Core dialect comprises a common subset of most rule dialect. RIF-Core is a subset of both RIF-BLD and RIF-PRD. Framework for Logic Dialects (FLD) describes mechanisms for specifying the syntax and semantics of logic RIF dialects, including the RIF-BLD and RIF-Core, but not RIF-PRD which is not a logic-based RIF dialect. 
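As an informal illustration of the MARRIED/LOVES rule discussed above (and not of RIF syntax itself, which each dialect defines precisely), a single forward-chaining step over a small fact base might be sketched in Python as follows; the fact names are purely illustrative.

```python
# Toy forward-chaining step for the rule: IF MARRIED(?x, ?y) THEN LOVES(?x, ?y).
facts = {("MARRIED", "LISA", "JOHN"), ("MARRIED", "ANNA", "MARK")}

def apply_rule(facts):
    """Derive LOVES(x, y) from every MARRIED(x, y) fact and return the enlarged fact set."""
    derived = {("LOVES", x, y) for (p, x, y) in facts if p == "MARRIED"}
    return facts | derived

for fact in sorted(apply_rule(facts)):
    print(fact)
# LOVES facts are derived for every pair for which MARRIED holds.
```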
The Basic Logic Dialect (BLD) adds features to the Core dialect that are not directly available such as: logic functions, equality in the then-part andnamed arguments. RIF BLD corresponds to positive datalogs, that is, logic programs without functions or negations. RIF-BLD has amodel-theoreticsemantics. Theframesyntax of RIF BLD is based onF-logic, but RIF BLD doesn't have thenon-monotonic reasoningfeatures of F-logic.[9] The Production Rules Dialect (PRD) can be used to modelproduction rules. Features that are notably in PRD but not BLD include negation and retraction of facts (thus, PRD is not monotonic). PRD rules are order dependent, hence conflict resolution strategies are needed when multiple rules can be fired. The PRD specification defines one such resolution strategy based onforward chainingreasoning. RIF-PRD has anoperational semantics, whereas the condition formulas also have a model-theoretic semantics. Example (Example 1.2 in[10]) Several other RIF dialects exist. None of them are officially endorsed by W3C and they are not part of the RIF specification. The Core Answer Set Programming Dialect (CASPD)[11]is based onanswer set programming, that is, declarative logic programming based on the answer set semantics (stable model semantics). Example: The Uncertainty Rule Dialect (URD)[12]supports a direct representation of uncertain knowledge. Example: RIF-SILK[13]can be used to modeldefault logic. It is based on declarative logic programming with thewell-founded semantics. RIF-SILK also includes a number of other features present in more sophisticated declarative logic programming languages such as SILK.[14] Example
https://en.wikipedia.org/wiki/Rule_Interchange_Format
Named-entity recognition(NER) (also known as(named)entity identification,entity chunking, andentity extraction) is a subtask ofinformation extractionthat seeks to locate and classifynamed entitiesmentioned inunstructured textinto pre-defined categories such as person names, organizations, locations,medical codes, time expressions, quantities, monetary values, percentages, etc. Most research on NER/NEE systems has been structured as taking an unannotated block of text, such as this one: Jim bought 300 shares of Acme Corp. in 2006. And producing an annotated block of text that highlights the names of entities: [Jim]Personbought 300 shares of [Acme Corp.]Organizationin [2006]Time. In this example, a person name consisting of one token, a two-token company name and a temporal expression have been detected and classified. State-of-the-art NER systems for English produce near-human performance. For example, the best system enteringMUC-7scored 93.39% ofF-measurewhile human annotators scored 97.60% and 96.95%.[1][2] Notable NER platforms include: In the expressionnamed entity, the wordnamedrestricts the task to those entities for which one or many strings, such as words or phrases, stand (fairly) consistently for some referent. This is closely related torigid designators, as defined byKripke,[5][6]although in practice NER deals with many names and referents that are not philosophically "rigid". For instance, theautomotive company created by Henry Ford in 1903can be referred to asFordorFord Motor Company, although "Ford" can refer to many other entities as well (seeFord). Rigid designators include proper names as well as terms for certain biological species and substances,[7]but exclude pronouns (such as "it"; seecoreference resolution), descriptions that pick out a referent by its properties (see alsoDe dicto and de re), and names for kinds of things as opposed to individuals (for example "Bank"). Full named-entity recognition is often broken down, conceptually and possibly also in implementations,[8]as two distinct problems: detection of names, andclassificationof the names by the type of entity they refer to (e.g. person, organization, or location).[9]The first phase is typically simplified to a segmentation problem: names are defined to be contiguous spans of tokens, with no nesting, so that "Bank of America" is a single name, disregarding the fact that inside this name, the substring "America" is itself a name. This segmentation problem is formally similar tochunking. The second phase requires choosing anontologyby which to organize categories of things. Temporal expressionsand some numerical expressions (e.g., money, percentages, etc.) may also be considered as named entities in the context of the NER task. While some instances of these types are good examples of rigid designators (e.g., the year 2001) there are also many invalid ones (e.g., I take my vacations in “June”). In the first case, the year2001refers to the2001st year of the Gregorian calendar. In the second case, the monthJunemay refer to the month of an undefined year (past June,next June,every June, etc.). It is arguable that the definition ofnamed entityis loosened in such cases for practical reasons. 
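The annotated example at the start of this article can be reproduced with an off-the-shelf NER tagger. The sketch below uses spaCy, one of many possible platforms; it assumes the small English model has been installed separately, and the label names it prints (such as PERSON, ORG, DATE) differ from the Person/Organization/Time labels used above.

```python
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Jim bought 300 shares of Acme Corp. in 2006.")

for ent in doc.ents:
    print(ent.text, ent.label_)
# Typical labels: Jim -> PERSON, Acme Corp. -> ORG, 2006 -> DATE
# (the exact output depends on the model version).
```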
The definition of the termnamed entityis therefore not strict and often has to be explained in the context in which it is used.[10] Certainhierarchiesof named entity types have been proposed in the literature.BBNcategories, proposed in 2002, are used forquestion answeringand consists of 29 types and 64 subtypes.[11]Sekine's extended hierarchy, proposed in 2002, is made of 200 subtypes.[12]More recently, in 2011 Ritter used a hierarchy based on commonFreebaseentity types in ground-breaking experiments on NER oversocial mediatext.[13] To evaluate the quality of an NER system's output, several measures have been defined. The usual measures are calledprecision, recall, andF1 score. However, several issues remain in just how to calculate those values. These statistical measures work reasonably well for the obvious cases of finding or missing a real entity exactly; and for finding a non-entity. However, NER can fail in many other ways, many of which are arguably "partially correct", and should not be counted as complete success or failures. For example, identifying a real entity, but: One overly simple method of measuring accuracy is merely to count what fraction of all tokens in the text were correctly or incorrectly identified as part of entity references (or as being entities of the correct type). This suffers from at least two problems: first, the vast majority of tokens in real-world text are not part of entity names, so the baseline accuracy (always predict "not an entity") is extravagantly high, typically >90%; and second, mispredicting the full span of an entity name is not properly penalized (finding only a person's first name when his last name follows might be scored as ½ accuracy). In academic conferences such as CoNLL, a variant of theF1 scorehas been defined as follows:[9] It follows from the above definition that any prediction that misses a single token, includes a spurious token, or has the wrong class, is a hard error and does not contribute positively to either precision or recall. Thus, this measure may be said to be pessimistic: it can be the case that many "errors" are close to correct, and might be adequate for a given purpose. For example, one system might always omit titles such as "Ms." or "Ph.D.", but be compared to a system or ground-truth data that expects titles to be included. In that case, every such name is treated as an error. Because of such issues, it is important actually to examine the kinds of errors, and decide how important they are given one's goals and requirements. Evaluation models based on a token-by-token matching have been proposed.[14]Such models may be given partial credit for overlapping matches (such as using theIntersection over Unioncriterion). They allow a finer grained evaluation and comparison of extraction systems. NER systems have been created that use linguisticgrammar-based techniques as well asstatistical modelssuch asmachine learning. 
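A minimal sketch of the exact-match scoring just described, in which an entity counts as correct only if both its span and its class match, might look as follows; the spans and labels are made up for illustration.

```python
def f1_exact(gold, pred):
    """gold, pred: sets of (start, end, label) entity spans; exact-match F1."""
    tp = len(gold & pred)                       # correct only if span AND class match
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {(0, 1, "PER"), (5, 7, "ORG"), (8, 9, "TIME")}
pred = {(0, 1, "PER"), (5, 6, "ORG"), (8, 9, "TIME")}   # ORG span is one token short
print(round(f1_exact(gold, pred), 3))                    # 0.667 -- the near-miss counts as a hard error
```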
Hand-crafted grammar-based systems typically obtain better precision, but at the cost of lower recall and months of work by experienced computational linguists.[15] Statistical NER systems typically require a large amount of manually annotated training data. Semi-supervised approaches have been suggested to avoid part of the annotation effort.[16][17] Many different classifier types have been used to perform machine-learned NER, with conditional random fields being a typical choice.[18] In 2001, research indicated that even state-of-the-art NER systems were brittle, meaning that NER systems developed for one domain did not typically perform well on other domains.[19] Considerable effort is involved in tuning NER systems to perform well in a new domain; this is true for both rule-based and trainable statistical systems. Early work in NER systems in the 1990s was aimed primarily at extraction from journalistic articles. Attention then turned to processing of military dispatches and reports. Later stages of the automatic content extraction (ACE) evaluation also included several types of informal text styles, such as weblogs and text transcripts from conversational telephone speech. Since about 1998, there has been a great deal of interest in entity identification in the molecular biology, bioinformatics, and medical natural language processing communities. The most common entity of interest in that domain has been names of genes and gene products. There has also been considerable interest in the recognition of chemical entities and drugs in the context of the CHEMDNER competition, with 27 teams participating in this task.[20] Despite high F1 numbers reported on the MUC-7 dataset, the problem of named-entity recognition is far from being solved. The main efforts are directed at reducing the annotation labor by employing semi-supervised learning,[16][21] at robust performance across domains,[22][23] and at scaling up to fine-grained entity types.[12][24] In recent years, many projects have turned to crowdsourcing, which is a promising solution to obtain high-quality aggregate human judgments for supervised and semi-supervised machine learning approaches to NER.[25] Another challenging task is devising models to deal with linguistically complex contexts such as Twitter and search queries.[26] Several researchers have compared NER performance across different statistical models, such as HMM (hidden Markov model), ME (maximum entropy), and CRF (conditional random fields), and across feature sets.[27] Others have proposed graph-based semi-supervised learning models for language-specific NER tasks.[28] A recently emerging task of identifying "important expressions" in text and cross-linking them to Wikipedia[29][30][31] can be seen as an instance of extremely fine-grained named-entity recognition, where the types are the actual Wikipedia pages describing the (potentially ambiguous) concepts. Another field that has seen progress but remains challenging is the application of NER to Twitter and other microblogs, considered "noisy" due to non-standard orthography, shortness and informality of texts.[32][33] NER challenges in English Tweets have been organized by research communities to compare performances of various approaches, such as bidirectional LSTMs, Learning-to-Search, or CRFs.[34][35][36]
https://en.wikipedia.org/wiki/Named-entity_recognition
Inmathematics, asetiscountableif either it isfiniteor it can be made inone to one correspondencewith the set ofnatural numbers.[a]Equivalently, a set iscountableif there exists aninjective functionfrom it into the natural numbers; this means that each element in the set may be associated to a unique natural number, or that the elements of the set can be counted one at a time, although the counting may never finish due to an infinite number of elements. In more technical terms, assuming theaxiom of countable choice, a set iscountableif itscardinality(the number of elements of the set) is not greater than that of the natural numbers. A countable set that is not finite is said to becountably infinite. The concept is attributed toGeorg Cantor, who proved the existence ofuncountable sets, that is, sets that are not countable; for example the set of thereal numbers. Although the terms "countable" and "countably infinite" as defined here are quite common, the terminology is not universal.[1]An alternative style usescountableto mean what is here called countably infinite, andat most countableto mean what is here called countable.[2][3] The termsenumerable[4]anddenumerable[5][6]may also be used, e.g. referring to countable and countably infinite respectively,[7]definitions vary and care is needed respecting the difference withrecursively enumerable.[8] A setS{\displaystyle S}iscountableif: All of these definitions are equivalent. A setS{\displaystyle S}iscountablyinfiniteif: A set isuncountableif it is not countable, i.e. its cardinality is greater thanℵ0{\displaystyle \aleph _{0}}.[9] In 1874, inhis first set theory article, Cantor proved that the set ofreal numbersis uncountable, thus showing that not all infinite sets are countable.[16]In 1878, he used one-to-one correspondences to define and compare cardinalities.[17]In 1883, he extended the natural numbers with his infiniteordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities.[18] Asetis a collection ofelements, and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted{3,4,5}{\displaystyle \{3,4,5\}}, called roster form.[19]This is only effective for small sets, however; for larger sets, this would be time-consuming and error-prone. Instead of listing every single element, sometimes an ellipsis ("...") is used to represent many elements between the starting element and the end element in a set, if the writer believes that the reader can easily guess what ... represents; for example,{1,2,3,…,100}{\displaystyle \{1,2,3,\dots ,100\}}presumably denotes the set ofintegersfrom 1 to 100. Even in this case, however, it is stillpossibleto list all the elements, because the number of elements in the set is finite. If we number the elements of the set 1, 2, and so on, up ton{\displaystyle n}, this gives us the usual definition of "sets of sizen{\displaystyle n}". Some sets areinfinite; these sets have more thann{\displaystyle n}elements wheren{\displaystyle n}is any integer that can be specified. (No matter how large the specified integern{\displaystyle n}is, such asn=101000{\displaystyle n=10^{1000}}, infinite sets have more thann{\displaystyle n}elements.) For example, the set of natural numbers, denotable by{0,1,2,3,4,5,…}{\displaystyle \{0,1,2,3,4,5,\dots \}},[a]has infinitely many elements, and we cannot use any natural number to give its size. 
It might seem natural to divide the sets into different classes: put all the sets containing one element together; all the sets containing two elements together; ...; finally, put together all infinite sets and consider them as having the same size. This view works well for countably infinite sets and was the prevailing assumption before Georg Cantor's work. For example, there are infinitely many odd integers, infinitely many even integers, and also infinitely many integers overall. We can consider all these sets to have the same "size" because we can arrange things such that, for every integer, there is a distinct even integer:…−2→−4,−1→−2,0→0,1→2,2→4⋯{\displaystyle \ldots \,-\!2\!\rightarrow \!-\!4,\,-\!1\!\rightarrow \!-\!2,\,0\!\rightarrow \!0,\,1\!\rightarrow \!2,\,2\!\rightarrow \!4\,\cdots }or, more generally,n→2n{\displaystyle n\rightarrow 2n}(see picture). What we have done here is arrange the integers and the even integers into aone-to-one correspondence(orbijection), which is afunctionthat maps between two sets such that each element of each set corresponds to a single element in the other set. This mathematical notion of "size", cardinality, is that two sets are of the same size if and only if there is a bijection between them. We call all sets that are in one-to-one correspondence with the integerscountably infiniteand say they have cardinalityℵ0{\displaystyle \aleph _{0}}. Georg Cantorshowed that not all infinite sets are countably infinite. For example, the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers). The set of real numbers has a greater cardinality than the set of natural numbers and is said to be uncountable. By definition, a setS{\displaystyle S}iscountableif there exists abijectionbetweenS{\displaystyle S}and a subset of thenatural numbersN={0,1,2,…}{\displaystyle \mathbb {N} =\{0,1,2,\dots \}}. For example, define the correspondencea↔1,b↔2,c↔3{\displaystyle a\leftrightarrow 1,\ b\leftrightarrow 2,\ c\leftrightarrow 3}Since every element ofS={a,b,c}{\displaystyle S=\{a,b,c\}}is paired withprecisely oneelement of{1,2,3}{\displaystyle \{1,2,3\}},andvice versa, this defines a bijection, and shows thatS{\displaystyle S}is countable. Similarly we can show all finite sets are countable. As for the case of infinite sets, a setS{\displaystyle S}is countably infinite if there is abijectionbetweenS{\displaystyle S}and all ofN{\displaystyle \mathbb {N} }. As examples, consider the setsA={1,2,3,…}{\displaystyle A=\{1,2,3,\dots \}}, the set of positiveintegers, andB={0,2,4,6,…}{\displaystyle B=\{0,2,4,6,\dots \}}, the set of even integers. We can show these sets are countably infinite by exhibiting a bijection to the natural numbers. This can be achieved using the assignmentsn↔n+1{\displaystyle n\leftrightarrow n+1}andn↔2n{\displaystyle n\leftrightarrow 2n}, so that0↔1,1↔2,2↔3,3↔4,4↔5,…0↔0,1↔2,2↔4,3↔6,4↔8,…{\displaystyle {\begin{matrix}0\leftrightarrow 1,&1\leftrightarrow 2,&2\leftrightarrow 3,&3\leftrightarrow 4,&4\leftrightarrow 5,&\ldots \\[6pt]0\leftrightarrow 0,&1\leftrightarrow 2,&2\leftrightarrow 4,&3\leftrightarrow 6,&4\leftrightarrow 8,&\ldots \end{matrix}}}Every countably infinite set is countable, and every infinite countable set is countably infinite. 
Furthermore, any subset of the natural numbers is countable, and more generally: Theorem—A subset of a countable set is countable.[20] The set of allordered pairsof natural numbers (theCartesian productof two sets of natural numbers,N×N{\displaystyle \mathbb {N} \times \mathbb {N} }is countably infinite, as can be seen by following a path like the one in the picture: The resultingmappingproceeds as follows: 0↔(0,0),1↔(1,0),2↔(0,1),3↔(2,0),4↔(1,1),5↔(0,2),6↔(3,0),…{\displaystyle 0\leftrightarrow (0,0),1\leftrightarrow (1,0),2\leftrightarrow (0,1),3\leftrightarrow (2,0),4\leftrightarrow (1,1),5\leftrightarrow (0,2),6\leftrightarrow (3,0),\ldots }This mapping covers all such ordered pairs. This form of triangular mappingrecursivelygeneralizes ton{\displaystyle n}-tuplesof natural numbers, i.e.,(a1,a2,a3,…,an){\displaystyle (a_{1},a_{2},a_{3},\dots ,a_{n})}whereai{\displaystyle a_{i}}andn{\displaystyle n}are natural numbers, by repeatedly mapping the first two elements of ann{\displaystyle n}-tuple to a natural number. For example,(0,2,3){\displaystyle (0,2,3)}can be written as((0,2),3){\displaystyle ((0,2),3)}. Then(0,2){\displaystyle (0,2)}maps to 5 so((0,2),3){\displaystyle ((0,2),3)}maps to(5,3){\displaystyle (5,3)}, then(5,3){\displaystyle (5,3)}maps to 39. Since a different 2-tuple, that is a pair such as(a,b){\displaystyle (a,b)}, maps to a different natural number, a difference between two n-tuples by a single element is enough to ensure the n-tuples being mapped to different natural numbers. So, an injection from the set ofn{\displaystyle n}-tuples to the set of natural numbersN{\displaystyle \mathbb {N} }is proved. For the set ofn{\displaystyle n}-tuples made by the Cartesian product of finitely many different sets, each element in each tuple has the correspondence to a natural number, so every tuple can be written in natural numbers then the same logic is applied to prove the theorem. Theorem—TheCartesian productof finitely many countable sets is countable.[21][b] The set of allintegersZ{\displaystyle \mathbb {Z} }and the set of allrational numbersQ{\displaystyle \mathbb {Q} }may intuitively seem much bigger thanN{\displaystyle \mathbb {N} }. But looks can be deceiving. If a pair is treated as thenumeratoranddenominatorof avulgar fraction(a fraction in the form ofa/b{\displaystyle a/b}wherea{\displaystyle a}andb≠0{\displaystyle b\neq 0}are integers), then for every positive fraction, we can come up with a distinct natural number corresponding to it. This representation also includes the natural numbers, since every natural numbern{\displaystyle n}is also a fractionn/1{\displaystyle n/1}. So we can conclude that there are exactly as many positive rational numbers as there are positive integers. This is also true for all rational numbers, as can be seen below. Theorem—Z{\displaystyle \mathbb {Z} }(the set of all integers) andQ{\displaystyle \mathbb {Q} }(the set of all rational numbers) are countable.[c] In a similar manner, the set ofalgebraic numbersis countable.[23][d] Sometimes more than one mapping is useful: a setA{\displaystyle A}to be shown as countable is one-to-one mapped (injection) to another setB{\displaystyle B}, thenA{\displaystyle A}is proved as countable ifB{\displaystyle B}is one-to-one mapped to the set of natural numbers. For example, the set of positiverational numberscan easily be one-to-one mapped to the set of natural number pairs (2-tuples) becausep/q{\displaystyle p/q}maps to(p,q){\displaystyle (p,q)}. 
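The triangular pairing used above can be checked numerically. The small sketch below implements the standard formula for the path 0 ↔ (0,0), 1 ↔ (1,0), 2 ↔ (0,1), 3 ↔ (2,0), ... and reproduces the (0, 2, 3) ↦ 39 example.

```python
def pair(a, b):
    """Cantor-style triangular pairing: index of (a, b) along the anti-diagonal path."""
    s = a + b
    return s * (s + 1) // 2 + b

print(pair(1, 1))            # 4
print(pair(0, 2))            # 5
print(pair(pair(0, 2), 3))   # 39, as in the (0, 2, 3) example above
```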
Since the set of natural number pairs is one-to-one mapped (actually one-to-one correspondence or bijection) to the set of natural numbers as shown above, the positive rational number set is proved as countable. Theorem—Any finiteunionof countable sets is countable.[24][25][e] With the foresight of knowing that there are uncountable sets, we can wonder whether or not this last result can be pushed any further. The answer is "yes" and "no", we can extend it, but we need to assume a new axiom to do so. Theorem—(Assuming theaxiom of countable choice) The union of countably many countable sets is countable.[f] For example, given countable setsa,b,c,…{\displaystyle {\textbf {a}},{\textbf {b}},{\textbf {c}},\dots }, we first assign each element of each set a tuple, then we assign each tuple an index using a variant of the triangular enumeration we saw above:IndexTupleElement0(0,0)a01(0,1)a12(1,0)b03(0,2)a24(1,1)b15(2,0)c06(0,3)a37(1,2)b28(2,1)c19(3,0)d010(0,4)a4⋮{\displaystyle {\begin{array}{c|c|c }{\text{Index}}&{\text{Tuple}}&{\text{Element}}\\\hline 0&(0,0)&{\textbf {a}}_{0}\\1&(0,1)&{\textbf {a}}_{1}\\2&(1,0)&{\textbf {b}}_{0}\\3&(0,2)&{\textbf {a}}_{2}\\4&(1,1)&{\textbf {b}}_{1}\\5&(2,0)&{\textbf {c}}_{0}\\6&(0,3)&{\textbf {a}}_{3}\\7&(1,2)&{\textbf {b}}_{2}\\8&(2,1)&{\textbf {c}}_{1}\\9&(3,0)&{\textbf {d}}_{0}\\10&(0,4)&{\textbf {a}}_{4}\\\vdots &&\end{array}}} We need theaxiom of countable choiceto indexallthe setsa,b,c,…{\displaystyle {\textbf {a}},{\textbf {b}},{\textbf {c}},\dots }simultaneously. Theorem—The set of all finite-lengthsequencesof natural numbers is countable. This set is the union of the length-1 sequences, the length-2 sequences, the length-3 sequences, and so on, each of which is a countable set (finite Cartesian product). Thus the set is a countable union of countable sets, which is countable by the previous theorem. Theorem—The set of all finitesubsetsof the natural numbers is countable. The elements of any finite subset can be ordered into a finite sequence. There are only countably many finite sequences, so also there are only countably many finite subsets. Theorem—LetS{\displaystyle S}andT{\displaystyle T}be sets. These follow from the definitions of countable set as injective / surjective functions.[g] Cantor's theoremasserts that ifA{\displaystyle A}is a set andP(A){\displaystyle {\mathcal {P}}(A)}is itspower set, i.e. the set of all subsets ofA{\displaystyle A}, then there is no surjective function fromA{\displaystyle A}toP(A){\displaystyle {\mathcal {P}}(A)}. A proof is given in the articleCantor's theorem. As an immediate consequence of this and the Basic Theorem above we have: Proposition—The setP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}is not countable; i.e. it isuncountable. For an elaboration of this result seeCantor's diagonal argument. The set ofreal numbersis uncountable,[h]and so is the set of all infinitesequencesof natural numbers. If there is a set that is a standard model (seeinner model) of ZFC set theory, then there is a minimal standard model (seeConstructible universe). TheLöwenheim–Skolem theoremcan be used to show that this minimal model is countable. The fact that the notion of "uncountability" makes sense even in this model, and in particular that this modelMcontains elements that are: was seen as paradoxical in the early days of set theory; seeSkolem's paradoxfor more. The minimal standard model includes all thealgebraic numbersand all effectively computabletranscendental numbers, as well as many other kinds of numbers. 
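The index/tuple/element table above can be turned into a small program: given a way of producing the k-th sequence, the anti-diagonal walk below emits a0, a1, b0, a2, b1, c0, ... in exactly that order. This is only a sketch and assumes each sequence is infinite.

```python
from itertools import count, islice

def enumerate_union(make_seq):
    """Triangular enumeration of countably many sequences.

    make_seq(k) must return an iterator over the k-th sequence; elements are
    emitted in the order (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), ... where
    the tuple is (sequence index, element index), as in the table above."""
    its = []                           # its[k] is the live iterator for sequence k
    for d in count(0):                 # d = sequence index + element index
        its.append(iter(make_seq(d)))  # sequence d first contributes on diagonal d
        for k in range(d + 1):         # sequence k contributes its (d - k)-th element
            yield next(its[k])

# Example: sequence k is {k, k+10, k+20, ...}.
print(list(islice(enumerate_union(lambda k: count(k, 10)), 10)))
# [0, 10, 1, 20, 11, 2, 30, 21, 12, 3]  -- the a0, a1, b0, a2, b1, c0, ... order
```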
Countable sets can betotally orderedin various ways, for example: In both examples of well orders here, any subset has aleast element; and in both examples of non-well orders,somesubsets do not have aleast element. This is the key definition that determines whether a total order is also a well order.
https://en.wikipedia.org/wiki/Countability
JSON-LD(JavaScript Object Notation for Linked Data) is a method of encodinglinked datausingJSON. One goal for JSON-LD was to require as little effort as possible from developers to transform their existing JSON to JSON-LD.[1]JSON-LD allows data to be serialized in a way that is similar to traditional JSON.[2]It was initially developed by the JSON for Linking Data Community Group[3]before being transferred to the RDF Working Group[4]for review, improvement, and standardization,[5]and is currently maintained by the JSON-LD Working Group.[6]JSON-LD is aWorld Wide Web Consortium Recommendation. JSON-LD is designed around the concept of a "context" to provide additional mappings from JSON to anRDFmodel. The context links object properties in a JSON document to concepts in anontology. In order to map the JSON-LD syntax to RDF, JSON-LD allows values to be coerced to a specified type or to be tagged with a language. A context can be embedded directly in a JSON-LD document or put into a separate file and referenced from different documents (from traditional JSON documents via anHTTPLinkheader). The example above describes a person, based on theFOAF (friend of a friend) ontology. First, the two JSON propertiesnameandhomepageand the typePersonare mapped to concepts in the FOAF vocabulary and the value of thehomepageproperty is specified to be of the type@id. In other words, the homepage id is specified to be anIRIin the context definition. Based on the RDF model, this allows the person described in the document to be unambiguously identified by anIRI. The use of resolvable IRIs allows RDF documents containing more information to betranscludedwhich enables clients to discover new data by simply following those links; this principle is known as 'Follow Your Nose'.[7] By having all data semantically annotated as in the example, an RDF processor can identify that the document contains information about a person (@type) and if the processor understands the FOAF vocabulary it can determine which properties specify the person's name and homepage. The encoding is used bySchema.org,[8]Google Knowledge Graph,[9][10]and used mostly forsearch engine optimizationactivities. It has also been used for applications such asbiomedical informatics,[11]and representingprovenanceinformation.[12]It is also the basis ofActivity Streams, a format for "the exchange of information about potential and completed activities",[13]and is used inActivityPub, the federated social networking protocol.[14]Additionally, it is used in the context ofInternet of Things (IoT), where a Thing Description,[15]which is a JSON-LD document, describes the network facing interfaces of IoT devices.
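A minimal JSON-LD document along the lines of the person/FOAF example discussed above might look like the following sketch (built here as a Python dict and serialized with the standard json module; the name and homepage values are illustrative, while the IRIs are the usual FOAF terms).

```python
import json

doc = {
    "@context": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "homepage": {
            "@id": "http://xmlns.com/foaf/0.1/homepage",
            "@type": "@id",            # coerce the value to an IRI
        },
        "Person": "http://xmlns.com/foaf/0.1/Person",
    },
    "@type": "Person",
    "name": "Jane Example",            # illustrative value
    "homepage": "http://example.org/jane/",
}
print(json.dumps(doc, indent=2))
```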
https://en.wikipedia.org/wiki/JSON-LD
Most real databases contain data whose correctness is uncertain. In order to work with such data, there is a need to quantify the integrity of the data. This is achieved by using probabilistic databases. Aprobabilistic databaseis anuncertain databasein which thepossible worldshave associatedprobabilities. Probabilisticdatabase management systemsare currently an active area of research. "While there are currently no commercial probabilistic database systems, several research prototypes exist..."[1] Probabilistic databases distinguish between thelogical data modeland the physical representation of the data much likerelational databasesdo in theANSI-SPARC Architecture. In probabilistic databases this is even more crucial since such databases have to represent very large numbers of possible worlds, often exponential in the size of one world (a classicaldatabase),succinctly.[2][3] In a probabilistic database, each tuple is associated with a probability between 0 and 1, with 0 representing that the data is certainly incorrect, and 1 representing that it is certainly correct. A probabilistic database could exist in multiple states. For example, if there is uncertainty about the existence of a tuple in the database, then the database could be in two different states with respect to that tuple—the first state contains the tuple, while the second one does not. Similarly, if an attribute can take one of the valuesx,yorz, then the database can be in three different states with respect to that attribute. Each of thesestatesis called a possible world. Consider the following database: (Here{b3, b3′, b3′′}denotes that the attribute can take any of the valuesb3,b3′orb3′′) Then the actual state of the database may or may not contain the first tuple (depending on whether it is correct or not). Similarly, the value of the attributeBmay beb3,b3′orb3′′. Consequently, the possible worlds corresponding to the database are as follows: There are essentially two kinds of uncertainties that could exist in a probabilistic database, as described in the table below: By assigning values to random variables associated with the data items, different possible worlds can be represented. The first published use of the term "probabilistic database" was probably in the 1987 VLDB conference paper "The theory of probabilistic databases", by Cavallo and Pittarelli.[4]The title (of the 11 page paper) was intended as a bit of a joke, since David Maier's 600 page monograph, The Theory of Relational Databases, would have been familiar at that time to many of the conference participants and readers of the conference proceedings.
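The possible-worlds reading described above can be made concrete with a small sketch: one tuple whose presence is uncertain and one attribute that takes one of three values yield 2 × 3 = 6 possible worlds, whose probabilities multiply under an independence assumption. All names and numbers below are illustrative.

```python
from itertools import product

first_tuple = [(True, 0.6), (False, 0.4)]               # tuple-level uncertainty
b_values = [("b3", 0.5), ("b3'", 0.3), ("b3''", 0.2)]   # attribute-level uncertainty

total = 0.0
for (present, p1), (b, p2) in product(first_tuple, b_values):
    world = ({"t1"} if present else set()) | {f"t2(B={b})"}
    total += p1 * p2
    print(f"P = {p1 * p2:.2f}  world = {sorted(world)}")
print(round(total, 6))   # 1.0 -- the six worlds' probabilities sum to one
```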
https://en.wikipedia.org/wiki/Probabilistic_database
Cascading is a particular case of ensemble learning based on the concatenation of several classifiers, using all information collected from the output of a given classifier as additional information for the next classifier in the cascade. Unlike voting or stacking ensembles, which are multiexpert systems, cascading is a multistage one. Cascading classifiers are trained with several hundred "positive" sample views of a particular object and arbitrary "negative" images of the same size. After the classifier is trained it can be applied to a region of an image and detect the object in question. To search for the object in the entire frame, the search window can be moved across the image, checking every location with the classifier. This process is most commonly used in image processing for object detection and tracking, primarily facial detection and recognition. The first cascading classifier was the face detector of Viola and Jones (2001). The requirement for this classifier was to be fast in order to be implemented on low-power CPUs, such as cameras and phones. It can be seen from this description that the classifier will not accept faces that are upside down (the eyebrows are not in a correct position) or the side of the face (the nose is no longer in the center, and shadows on the side of the nose might be missing). Separate cascade classifiers have to be trained for every rotation that is not in the image plane (side of face) and will have to be retrained or run on rotated features for every rotation that is in the image plane (face upside down or tilted to the side). Scaling is not a problem, since the features can be scaled (center pixel, left pixels and right pixels have a dimension only relative to the rectangle examined). In more recent cascades, comparisons of pixel values between parts of a rectangle have been replaced with Haar wavelets. To have good overall performance, the following criteria must be met: The training procedure for one stage is therefore to have many weak learners (simple pixel difference operators), train them as a group (raising their weight if they give the correct result), but be mindful of having only a few active weak learners so the computation time remains low. The first detector of Viola & Jones had 38 stages, with 1 feature in the first stage, then 10, 25, 25, 50 in the next four stages, for a total of 6000 features. The first stages remove unwanted rectangles rapidly to avoid paying the computational costs of the next stages, so that computational time is spent analyzing in depth the parts of the image that have a high probability of containing the object. Cascades are usually trained through cost-aware AdaBoost. The sensitivity threshold (0.8 in our example) can be adjusted so that close to 100% of true positives are retained, at the cost of some false positives. The procedure can then be started again for stage 2, until the desired accuracy/computation time is reached. After the initial algorithm, it was understood that training the cascade as a whole can be optimized, to achieve a desired true detection rate with minimal complexity. Examples of such algorithms are RCBoost, ECBoost or RCECBoost. In their most basic versions, they can be understood as choosing, at each step, between adding a stage or adding a weak learner to a previous stage, whichever is less costly, until the desired accuracy has been reached. No stage of the classifier can have a detection rate (sensitivity) below the desired rate, so this is a constrained optimization problem.
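The stage-by-stage early rejection described above can be sketched as follows; the two "stages" and their thresholds are invented for illustration and bear no relation to the actual Viola–Jones features.

```python
def cascade_detect(window, stages):
    """stages: list of (score_function, threshold) pairs, cheapest first.
    A window is accepted only if it passes every stage; a failure at any
    stage rejects it immediately, so later (more expensive) stages never run."""
    for score, threshold in stages:
        if score(window) < threshold:
            return False
    return True

stages = [
    (lambda w: w["eye_strip_darker"], 0.8),      # one crude feature
    (lambda w: w["nose_bridge_lighter"], 0.7),   # a slightly larger set of features
]
print(cascade_detect({"eye_strip_darker": 0.9, "nose_bridge_lighter": 0.4}, stages))  # False
```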
To be precise, the total sensitivity will be the product of stage sensitivities. Cascade classifiers are available inOpenCV, with pre-trained cascades for frontal faces and upper body. Training a new cascade in OpenCV is also possible with either haar_training or train_cascades methods. This can be used for rapid object detection of more specific targets, including non-human objects withHaar-like features. The process requires two sets of samples: negative and positive, where the negative samples correspond to arbitrary non-object images. The time constraint in training a cascade classifier can be circumvented usingcloud-computingmethods. The term is also used in statistics to describe a model that is staged. For example, a classifier (for examplek-means), takes a vector of features (decision variables) and outputs for each possible classification result the probability that the vector belongs to the class. This is usually used to take a decision (classify into the class with highest probability), but cascading classifiers use this output as the input to another model (another stage). This is particularly useful for models that have highly combinatorial or counting rules (for example, class1 if exactly two features are negative, class2 otherwise), which cannot be fitted without looking at all the interaction terms. Having cascading classifiers enables the successive stage to gradually approximate the combinatorial nature of the classification, or to add interaction terms in classification algorithms that cannot express them in one stage. As a simple example, if we try to match the rule (class1 if exactly 2 features out of 3 are negative, class2 otherwise), a decision tree would be: The tree has all the combinations of possible leaves to express the full ruleset, whereas (feature1 positive, feature2 negative) and (feature1 negative, feature2 positive) should actually join to the same rule. This leads to a tree with too few samples on the leaves. A two-stage algorithm can effectively merge these two cases by giving a medium-high probability to class1 if feature1 or (exclusive) feature2 is negative. The second classifier can pick up this higher probability and make a decision on the sign of feature3. In abias-variancedecomposition, cascaded models are usually seen as lowering bias while raising variance.
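Returning to the pre-trained OpenCV cascades mentioned earlier, a typical detection call looks roughly like the sketch below; it assumes the opencv-python package is installed and that "photo.jpg" exists, and the scaleFactor and minNeighbors values are common starting points rather than tuned settings.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    print(f"face at x={x}, y={y}, size={w}x{h}")
```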
https://en.wikipedia.org/wiki/Cascading_classifiers
Multimodal learningis a type ofdeep learningthat integrates and processes multiple types of data, referred to asmodalities, such as text, audio, images, or video. This integration allows for a more holistic understanding of complex data, improving model performance in tasks like visual question answering, cross-modal retrieval,[1]text-to-image generation,[2]aesthetic ranking,[3]and image captioning.[4] Large multimodal models, such asGoogle GeminiandGPT-4o, have become increasingly popular since 2023, enabling increased versatility and a broader understanding of real-world phenomena.[5] Data usually comes with different modalities which carry different information. For example, it is very common to caption an image to convey the information not presented in the image itself. Similarly, sometimes it is more straightforward to use an image to describe information which may not be obvious from text. As a result, if different words appear in similar images, then these words likely describe the same thing. Conversely, if a word is used to describe seemingly dissimilar images, then these images may represent the same object. Thus, in cases dealing with multi-modal data, it is important to use a model which is able to jointly represent the information such that the model can capture the combined information from different modalities. Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality. Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstratingtransfer learning.[6]The LLaVA was a vision-language model composed of a language model (Vicuna-13B)[7]and a vision model (ViT-L/14), connected by a linear layer. Only the linear layer is finetuned.[8] Vision transformers[9]adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer. Conformer[10]and laterWhisper[11]follow the same pattern forspeech recognition, first turning the speech signal into aspectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer. Perceivers[12][13]are a variant of Transformers designed for multimodality. Multimodality means "having several modalities", and a"modality"refers to a type of input or output, such as video, image, audio, text,proprioception, etc.[19]There have been many AI models trained specifically to ingest one modality and output another modality, such asAlexNetfor image to label,[20]visual question answeringfor image-text to text,[21]andspeech recognitionfor speech to text. A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoderE{\displaystyle E}. Make a small multilayered perceptronf{\displaystyle f}, so that for any imagey{\displaystyle y}, the post-processed vectorf(E(y)){\displaystyle f(E(y))}has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. 
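A minimal sketch of this construction follows; every size here is an assumption (a 512-dimensional image feature, a 768-dimensional token embedding, a flattened 224×224 RGB image), and the linear layer E is only a stand-in for a trained image encoder.

```python
import torch
import torch.nn as nn

d_image, d_model = 512, 768

E = nn.Linear(3 * 224 * 224, d_image)         # placeholder for a trained image encoder
f = nn.Sequential(                            # small MLP projecting to token-embedding width
    nn.Linear(d_image, d_model), nn.GELU(), nn.Linear(d_model, d_model))

image = torch.randn(1, 3 * 224 * 224)
image_token = f(E(image))                     # shape (1, d_model): an "image token"
text_tokens = torch.randn(1, 5, d_model)      # embeddings of 5 text tokens

# Interleave the image token with the text tokens and feed the result to the LLM.
sequence = torch.cat(
    [text_tokens[:, :3], image_token[:, None, :], text_tokens[:, 3:]], dim=1)
print(sequence.shape)                         # torch.Size([1, 6, 768])
```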
This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability.[22] Flamingo demonstrated the effectiveness of the tokenization method, finetuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch.[23]Google PaLMmodel was fine-tuned into a multimodal model PaLM-E using the tokenization method, and applied to robotic control.[24]LLaMAmodels have also been turned multimodal using the tokenization method, to allow image inputs,[25]and video inputs.[26] ABoltzmann machineis a type ofstochastic neural networkinvented byGeoffrey HintonandTerry Sejnowskiin 1985. Boltzmann machines can be seen as thestochastic,generativecounterpart ofHopfield nets. They are named after theBoltzmann distributionin statistical mechanics. The units in Boltzmann machines are divided into two groups: visible units and hidden units. Each unit is like a neuron with a binary output that represents whether it is activated or not.[31]General Boltzmann machines allow connection between any units. However, learning is impractical using general Boltzmann Machines because the computational time is exponential to the size of the machine[citation needed]. A more efficient architecture is calledrestricted Boltzmann machinewhere connection is only allowed between hidden unit and visible unit, which is described in the next section. Multimodal deep Boltzmann machines can process and learn from different types of information, such as images and text, simultaneously. This can notably be done by having a separate deep Boltzmann machine for each modality, for example one for images and one for text, joined at an additional top hidden layer.[32] Multimodal machine learning has numerous applications across various domains: Cross-modal retrieval allows users to search for data across different modalities (e.g., retrieving images based on text descriptions), improving multimedia search engines and content recommendation systems. Models likeCLIPfacilitate efficient, accurate retrieval by embedding data in a shared space, demonstrating strong performance even in zero-shot settings.[33] Multimodal Deep Boltzmann Machines outperform traditional models likesupport vector machinesandlatent Dirichlet allocationin classification tasks and can predict missing data in multimodal datasets, such as images and text. Multimodal models integrate medical imaging, genomic data, and patient records to improve diagnostic accuracy and early disease detection, especially in cancer screening.[34][35][36] Models like DALL·E generate images from textual descriptions, benefiting creative industries, while cross-modal retrieval enables dynamic multimedia searches.[37] Multimodal learning improves interaction in robotics and AI by integrating sensory inputs like speech, vision, and touch, aiding autonomous systems and human-computer interaction. Combining visual, audio, and text data, multimodal systems enhance sentiment analysis and emotion recognition, applied in customer service, social media, and marketing.
https://en.wikipedia.org/wiki/Multimodal_learning
Inmathematics, particularlylinear algebra, anorthonormal basisfor aninner product spaceV{\displaystyle V}with finitedimensionis abasisforV{\displaystyle V}whose vectors areorthonormal, that is, they are allunit vectorsandorthogonalto each other.[1][2][3]For example, thestandard basisfor aEuclidean spaceRn{\displaystyle \mathbb {R} ^{n}}is an orthonormal basis, where the relevant inner product is thedot productof vectors. Theimageof the standard basis under arotationorreflection(or anyorthogonal transformation) is also orthonormal, and every orthonormal basis forRn{\displaystyle \mathbb {R} ^{n}}arises in this fashion. An orthonormal basis can be derived from anorthogonal basisvianormalization.The choice of anoriginand an orthonormal basis forms acoordinate frameknown as anorthonormal frame. For a general inner product spaceV,{\displaystyle V,}an orthonormal basis can be used to define normalizedorthogonal coordinatesonV.{\displaystyle V.}Under these coordinates, the inner product becomes a dot product of vectors. Thus the presence of an orthonormal basis reduces the study of afinite-dimensionalinner product space to the study ofRn{\displaystyle \mathbb {R} ^{n}}under the dot product. Every finite-dimensional inner product space has an orthonormal basis, which may be obtained from an arbitrary basis using theGram–Schmidt process. Infunctional analysis, the concept of an orthonormal basis can be generalized to arbitrary (infinite-dimensional)inner product spaces.[4]Given a pre-Hilbert spaceH,{\displaystyle H,}anorthonormal basisforH{\displaystyle H}is an orthonormal set of vectors with the property that every vector inH{\displaystyle H}can be written as aninfinite linear combinationof the vectors in the basis. In this case, the orthonormal basis is sometimes called aHilbert basisforH.{\displaystyle H.}Note that an orthonormal basis in this sense is not generally aHamel basis, since infinite linear combinations are required.[5]Specifically, thelinear spanof the basis must bedenseinH,{\displaystyle H,}although not necessarily the entire space. If we go on toHilbert spaces, a non-orthonormal set of vectors having the same linear span as an orthonormal basis may not be a basis at all. For instance, anysquare-integrable functionon the interval[−1,1]{\displaystyle [-1,1]}can be expressed (almost everywhere) as an infinite sum ofLegendre polynomials(an orthonormal basis), but not necessarily as an infinite sum of themonomialsxn.{\displaystyle x^{n}.} A different generalisation is to pseudo-inner product spaces, finite-dimensional vector spacesM{\displaystyle M}equipped with a non-degeneratesymmetric bilinear formknown as themetric tensor. In such a basis, the metric takes the formdiag(+1,⋯,+1,−1,⋯,−1){\displaystyle {\text{diag}}(+1,\cdots ,+1,-1,\cdots ,-1)}withp{\displaystyle p}positive ones andq{\displaystyle q}negative ones. IfB{\displaystyle B}is an orthogonal basis ofH,{\displaystyle H,}then every elementx∈H{\displaystyle x\in H}may be written asx=∑b∈B⟨x,b⟩‖b‖2b.{\displaystyle x=\sum _{b\in B}{\frac {\langle x,b\rangle }{\lVert b\rVert ^{2}}}b.} WhenB{\displaystyle B}is orthonormal, this simplifies tox=∑b∈B⟨x,b⟩b{\displaystyle x=\sum _{b\in B}\langle x,b\rangle b}and the square of thenormofx{\displaystyle x}can be given by‖x‖2=∑b∈B|⟨x,b⟩|2.{\displaystyle \|x\|^{2}=\sum _{b\in B}|\langle x,b\rangle |^{2}.} Even ifB{\displaystyle B}isuncountable, only countably many terms in this sum will be non-zero, and the expression is therefore well-defined. 
This sum is also called theFourier expansionofx,{\displaystyle x,}and the formula is usually known asParseval's identity. IfB{\displaystyle B}is an orthonormal basis ofH,{\displaystyle H,}thenH{\displaystyle H}isisomorphictoℓ2(B){\displaystyle \ell ^{2}(B)}in the following sense: there exists abijectivelinearmapΦ:H→ℓ2(B){\displaystyle \Phi :H\to \ell ^{2}(B)}such that⟨Φ(x),Φ(y)⟩=⟨x,y⟩∀x,y∈H.{\displaystyle \langle \Phi (x),\Phi (y)\rangle =\langle x,y\rangle \ \ \forall \ x,y\in H.} A setS{\displaystyle S}of mutually orthonormal vectors in a Hilbert spaceH{\displaystyle H}is called an orthonormal system. An orthonormal basis is an orthonormal system with the additional property that the linear span ofS{\displaystyle S}is dense inH{\displaystyle H}.[6]Alternatively, the setS{\displaystyle S}can be regarded as eithercompleteorincompletewith respect toH{\displaystyle H}. That is, we can take the smallest closed linear subspaceV⊆H{\displaystyle V\subseteq H}containingS.{\displaystyle S.}ThenS{\displaystyle S}will be an orthonormal basis ofV;{\displaystyle V;}which may of course be smaller thanH{\displaystyle H}itself, being anincompleteorthonormal set, or beH,{\displaystyle H,}when it is acompleteorthonormal set. UsingZorn's lemmaand theGram–Schmidt process(or more simply well-ordering and transfinite recursion), one can show thateveryHilbert space admits an orthonormal basis;[7]furthermore, any two orthonormal bases of the same space have the samecardinality(this can be proven in a manner akin to that of the proof of the usualdimension theorem for vector spaces, with separate cases depending on whether the larger basis candidate is countable or not). A Hilbert space isseparableif and only if it admits acountableorthonormal basis. (One can prove this last statement without using theaxiom of choice. However, one would have to use theaxiom of countable choice.) For concreteness we discuss orthonormal bases for a real,n{\displaystyle n}-dimensional vector spaceV{\displaystyle V}with a positive definite symmetric bilinear formϕ=⟨⋅,⋅⟩{\displaystyle \phi =\langle \cdot ,\cdot \rangle }. One way to view an orthonormal basis with respect toϕ{\displaystyle \phi }is as a set of vectorsB={ei}{\displaystyle {\mathcal {B}}=\{e_{i}\}}, which allow us to writev=viei∀v∈V{\displaystyle v=v^{i}e_{i}\ \ \forall \ v\in V}, andvi∈R{\displaystyle v^{i}\in \mathbb {R} }or(vi)∈Rn{\displaystyle (v^{i})\in \mathbb {R} ^{n}}. With respect to this basis, the components ofϕ{\displaystyle \phi }are particularly simple:ϕ(ei,ej)=δij{\displaystyle \phi (e_{i},e_{j})=\delta _{ij}}(whereδij{\displaystyle \delta _{ij}}is theKronecker delta). We can now view the basis as a mapψB:V→Rn{\displaystyle \psi _{\mathcal {B}}:V\rightarrow \mathbb {R} ^{n}}which is an isomorphism of inner product spaces: to make this more explicit we can write Explicitly we can write(ψB(v))i=ei(v)=ϕ(ei,v){\displaystyle (\psi _{\mathcal {B}}(v))^{i}=e^{i}(v)=\phi (e_{i},v)}whereei{\displaystyle e^{i}}is the dual basis element toei{\displaystyle e_{i}}. The inverse is a component map These definitions make it manifest that there is a bijection The space of isomorphisms admits actions of orthogonal groups at either theV{\displaystyle V}side or theRn{\displaystyle \mathbb {R} ^{n}}side. For concreteness we fix the isomorphisms to point in the directionRn→V{\displaystyle \mathbb {R} ^{n}\rightarrow V}, and consider the space of such maps,Iso(Rn→V){\displaystyle {\text{Iso}}(\mathbb {R} ^{n}\rightarrow V)}. 
This space admits a left action by the group of isometries ofV{\displaystyle V}, that is,R∈GL(V){\displaystyle R\in {\text{GL}}(V)}such thatϕ(⋅,⋅)=ϕ(R⋅,R⋅){\displaystyle \phi (\cdot ,\cdot )=\phi (R\cdot ,R\cdot )}, with the action given by composition:R∗C=R∘C.{\displaystyle R*C=R\circ C.} This space also admits a right action by the group of isometries ofRn{\displaystyle \mathbb {R} ^{n}}, that is,Rij∈O(n)⊂Matn×n(R){\displaystyle R_{ij}\in {\text{O}}(n)\subset {\text{Mat}}_{n\times n}(\mathbb {R} )}, with the action again given by composition:C∗Rij=C∘Rij{\displaystyle C*R_{ij}=C\circ R_{ij}}. The set of orthonormal bases forRn{\displaystyle \mathbb {R} ^{n}}with the standard inner product is aprincipal homogeneous spaceor G-torsor for theorthogonal groupG=O(n),{\displaystyle G={\text{O}}(n),}and is called theStiefel manifoldVn(Rn){\displaystyle V_{n}(\mathbb {R} ^{n})}of orthonormaln{\displaystyle n}-frames.[8] In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given the space of orthonormal bases, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a given basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take anyorthogonalbasis to any otherorthogonalbasis. The other Stiefel manifoldsVk(Rn){\displaystyle V_{k}(\mathbb {R} ^{n})}fork<n{\displaystyle k<n}ofincompleteorthonormal bases (orthonormalk{\displaystyle k}-frames) are still homogeneous spaces for the orthogonal group, but notprincipalhomogeneous spaces: anyk{\displaystyle k}-frame can be taken to any otherk{\displaystyle k}-frame by an orthogonal map, but this map is not uniquely determined.
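The expansion x = Σ_b ⟨x, b⟩ b and Parseval's identity stated earlier can be checked numerically in the finite-dimensional case. The sketch below builds an orthonormal basis of R^4 from a random basis via QR (one way to carry out Gram–Schmidt); the random data is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)              # columns of Q form an orthonormal basis of R^4

x = rng.standard_normal(4)
coeffs = Q.T @ x                    # <x, b_i> for each basis vector b_i
x_rebuilt = Q @ coeffs              # sum_i <x, b_i> b_i

print(np.allclose(x, x_rebuilt))                        # True: the expansion recovers x
print(np.isclose(np.dot(x, x), np.sum(coeffs ** 2)))    # True: Parseval's identity
```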
https://en.wikipedia.org/wiki/Orthonormal_basis
In mathematics, the indicator vector, characteristic vector, or incidence vector of a subset T of a set S is the vector x_T := (x_s)_{s∈S} such that x_s = 1 if s ∈ T and x_s = 0 if s ∉ T.

If S is countable and its elements are numbered so that S = {s_1, s_2, …, s_n}, then x_T = (x_1, x_2, …, x_n) where x_i = 1 if s_i ∈ T and x_i = 0 if s_i ∉ T.

To put it more simply, the indicator vector of T is a vector with one element for each element in S, with that element being one if the corresponding element of S is in T, and zero if it is not.[1][2][3]

An indicator vector is a special (countable) case of an indicator function.

If S is the set of natural numbers ℕ, and T is some subset of the natural numbers, then the indicator vector is naturally a single point in the Cantor space: that is, an infinite sequence of 1's and 0's, indicating membership, or lack thereof, in T. Such vectors commonly occur in the study of the arithmetical hierarchy.
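A minimal sketch in Python (our own illustration; the set S and subset T below are arbitrary) of building the indicator vector of a subset of a finite, ordered set:

def indicator_vector(S, T):
    """Return the indicator (characteristic) vector of the subset T of the ordered set S."""
    T = set(T)
    return [1 if s in T else 0 for s in S]

S = ['a', 'b', 'c', 'd', 'e']
T = {'b', 'd'}
print(indicator_vector(S, T))   # [0, 1, 0, 1, 0]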
https://en.wikipedia.org/wiki/Indicator_vector
A cryptographic key is called ephemeral if it is generated for each execution of a key establishment process.[1] In some cases ephemeral keys are used more than once within a single session (e.g., in broadcast applications), where the sender generates only one ephemeral key pair per message and the private key is combined separately with each recipient's public key. Contrast with a static key.

Private (respectively public) ephemeral key agreement keys are the private (respectively public) keys of asymmetric key pairs that are used in a single key establishment transaction to establish one or more keys (e.g., key wrapping keys, data encryption keys, or MAC keys) and, optionally, other keying material (e.g., initialization vectors).
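The following hedged sketch shows the ephemeral–static pattern described above, assuming the third-party Python cryptography package is available; the key-derivation step and the info label are illustrative choices of ours, not part of any particular standard. Each call generates a fresh ephemeral key pair whose private part is combined with the recipient's static public key.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Recipient's long-term (static) key pair.
recipient_static = X25519PrivateKey.generate()
recipient_public = recipient_static.public_key()

def send_message_keys(recipient_public):
    """Sender side: a fresh ephemeral key pair per key-establishment run."""
    ephemeral = X25519PrivateKey.generate()          # discarded after use
    shared = ephemeral.exchange(recipient_public)    # ephemeral private combined with static public
    data_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"example data-encryption key").derive(shared)
    return ephemeral.public_key(), data_key          # public part travels with the message

eph_pub, key1 = send_message_keys(recipient_public)
_, key2 = send_message_keys(recipient_public)
print(key1 != key2)   # each execution yields fresh keying material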
https://en.wikipedia.org/wiki/Ephemeral_key
Arandomness test(ortest for randomness), in data evaluation, is a test used to analyze the distribution of a set of data to see whether it can be described asrandom(patternless). Instochastic modeling, as in somecomputer simulations, the hoped-for randomness of potential input data can be verified, by a formal test for randomness, to show that the data are valid for use in simulation runs. In some cases, data reveals an obvious non-random pattern, as with so-called "runs in the data" (such as expecting random 0–9 but finding "4 3 2 1 0 4 3 2 1..." and rarely going above 4). If a selected set of data fails the tests, then parameters can be changed or other randomized data can be used which does pass the tests for randomness. The issue of randomness is an important philosophical and theoretical question. Tests for randomness can be used to determine whether a data set has a recognisable pattern, which would indicate that the process that generated it is significantly non-random. For the most part, statistical analysis has, in practice, been much more concerned with finding regularities in data as opposed to testing for randomness. Many "random number generators" in use today are defined by algorithms, and so are actuallypseudo-randomnumber generators. The sequences they produce are called pseudo-random sequences. These generators do not always generate sequences which are sufficiently random, but instead can produce sequences which contain patterns. For example, the infamousRANDUroutine fails many randomness tests dramatically, including thespectral test. Stephen Wolframused randomness tests on the output ofRule 30to examine its potential for generating random numbers,[1]though it was shown to have an effective key size far smaller than its actual size[2]and to perform poorly on achi-squared test.[3]The use of an ill-conceived random number generator can put the validity of an experiment in doubt by violating statistical assumptions. Though there are commonly used statistical testing techniques such as NIST standards, Yongge Wang showed that NIST standards are not sufficient. Furthermore, Yongge Wang[4]designed statistical–distance–based and law–of–the–iterated–logarithm–based testing techniques. Using this technique, Yongge Wang and Tony Nicol[5]detected the weakness in commonly used pseudorandom generators such as the well knownDebian version of OpenSSL pseudorandom generatorwhich was fixed in 2008. There have been a fairly small number of different types of (pseudo-)random number generators used in practice. They can be found in thelist of random number generators, and have included: These different generators have varying degrees of success in passing the accepted test suites. Several widely used generators fail the tests more or less badly, while other 'better' and prior generators (in the sense that they passed all current batteries of tests and they already existed) have been largely ignored. There are many practical measures ofrandomnessfor abinary sequence. These include measures based onstatistical tests,transforms, andcomplexityor a mixture of these. A well-known and widely used collection of tests wasthe Diehard Battery of Tests, introduced by Marsaglia; this was extended to theTestU01suite by L'Ecuyer and Simard. The use ofHadamard transformto measure randomness was proposed byS. Kakand developed further by Phillips, Yuen, Hopkins, Beth and Dai, Mund, andMarsagliaand Zaman.[6] Several of these tests, which are of linear complexity, provide spectral measures of randomness. T. 
Beth and Z-D. Dai purported to show that Kolmogorov complexity and linear complexity are practically the same,[7] although Y. Wang later showed their claims are incorrect.[8] Nevertheless, Wang also demonstrated that for Martin-Löf random sequences, the Kolmogorov complexity is essentially the same as linear complexity.

These practical tests make it possible to compare the randomness of strings. On probabilistic grounds, all strings of a given length have the same randomness. However, different strings have a different Kolmogorov complexity. For example, consider the following two strings.

String 1 admits a short linguistic description: "32 repetitions of '01'". This description has 22 characters, and it can be efficiently constructed out of some basis sequences. String 2 has no obvious simple description other than writing down the string itself, which has 64 characters, and it has no comparably efficient basis function representation.

Using linear Hadamard spectral tests (see Hadamard transform), the first of these sequences will be found to be of much less randomness than the second one, which agrees with intuition.
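As a rough, self-contained illustration of the point (not one of the cited spectral or Hadamard-based tests), a simple Wald–Wolfowitz runs test already separates a perfectly periodic string from an irregular one of the same length and the same 0/1 balance. String 2 below is an arbitrary stand-in of our own, since the original strings are not reproduced above.

import math

def runs_test_z(bits):
    """Wald-Wolfowitz runs test: z-score of the observed number of runs."""
    n0, n1 = bits.count('0'), bits.count('1')
    n = n0 + n1
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    expected = 2 * n0 * n1 / n + 1
    variance = 2 * n0 * n1 * (2 * n0 * n1 - n) / (n * n * (n - 1))
    return (runs - expected) / math.sqrt(variance)

string1 = '01' * 32                                  # "32 repetitions of '01'"
string2 = bin(0x9af3c1e47b2d6058)[2:].zfill(64)      # an arbitrary, irregular 64-bit pattern

print(runs_test_z(string1))   # very large |z|: far too many runs for a random sequence
print(runs_test_z(string2))   # modest |z|: consistent with randomness at this crude level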
https://en.wikipedia.org/wiki/Randomness_test
Incomputer graphics,mipmaps(alsoMIP maps) orpyramids[1][2][3]are pre-calculated,optimizedsequences ofimages, each of which is a progressively lowerresolutionrepresentation of the previous. The height and width of each image, or level, in the mipmap is a factor of two smaller than the previous level. Mipmaps do not have to be square. They are intended to increaserenderingspeed and reducealiasingartifacts. A high-resolution mipmap image is used for high-density samples, such as for objects close to the camera; lower-resolution images are used as the object appears farther away. This is a more efficient way ofdownscalingatexturethan sampling alltexelsin the original texture that would contribute to a screenpixel; it is faster to take a constant number of samples from the appropriately downfiltered textures. Mipmaps are widely used in 3Dcomputer games,flight simulators, other 3D imaging systems fortexture filtering, and 2D and 3DGIS software. Their use is known asmipmapping. The lettersMIPin the name are an acronym of theLatinphrasemultum in parvo, meaning "much in little".[4] Since mipmaps, by definition, arepre-allocated, additionalstorage spaceis required to take advantage of them. They are also related towavelet compression. Mipmap textures are used in 3D scenes to decrease the time required to render a scene. They also improveimage qualityby reducing aliasing andMoiré patternsthat occur at large viewing distances,[5]at the cost of33%more memory per texture. Mipmaps are used for: Mipmapping was invented byLance Williamsin 1983 and is described in his paperPyramidal parametrics.[4]From the abstract: "This paper advances a 'pyramidal parametric' prefiltering and sampling geometry which minimizes aliasing effects and assures continuity within and between target images." The referenced pyramid can be imagined as the set of mipmaps stacked in front of each other. The first patent issued on Mipmap and texture generation was in 1983 by Johnson Yan, Nicholas Szabo, and Lish-Yann Chen of Link Flight Simulation (Singer). Using their approach, texture could be generated and superimposed on surfaces (curvilinear and planar) of any orientation and could be done in real-time. Texture patterns could be modeled suggestive of the real world material they were intended to represent in a continuous way and free of aliasing, ultimately providing level of detail and gradual (imperceptible) detail level transitions. Texture generating became repeatable and coherent from frame to frame and remained in correct perspective and appropriate occultation. Because the application of real time texturing was applied to early three dimensional flight simulator CGI systems, and texture being a prerequsite for realistic graphics, this patent became widely cited and many of these techniques were later applied in graphics computing and gaming as applications expanded over the years.[9] The origin of the term mipmap is an initialism of the Latin phrasemultum in parvo("much in little"), and map, modeled on bitmap.[4]The termpyramidsis still commonly used in aGIScontext. In GIS software, pyramids are primarily used for speeding up rendering times. Each bitmap image of the mipmap set is a downsized duplicate of the maintexture, but at a certain reduced level of detail. 
Although the main texture would still be used when the view is sufficient to render it in full detail, the renderer will switch to a suitable mipmap image (or in fact,interpolatebetween the two nearest, iftrilinear filteringis activated) when the texture is viewed from a distance or at a small size. Rendering speed increases since the number of texture pixels (texels) being processed per display pixel can be much lower for similar results with the simpler mipmap textures. If using a limited number of texture samples per display pixel (as is the case withbilinear filtering) then artifacts are reduced since the mipmap images are effectively alreadyanti-aliased. Scaling down and up is made more efficient with mipmaps as well. If the texture has a basic size of 256 by 256 pixels, then the associated mipmap set may contain a series of 8 images, each one-fourth the total area of the previous one: 128×128 pixels, 64×64, 32×32, 16×16, 8×8, 4×4, 2×2, 1×1 (a single pixel). If, for example, a scene is rendering this texture in a space of 40×40 pixels, then either a scaled-up version of the 32×32 (withouttrilinear interpolation) or an interpolation of the 64×64 and the 32×32 mipmaps (with trilinear interpolation) would be used. The simplest way to generate these textures is by successive averaging; however, more sophisticated algorithms (perhaps based onsignal processingandFourier transforms) can also be used. The increase in storage space required for all of these mipmaps is a third of the original texture, because the sum of the areas1/4 + 1/16 + 1/64 + 1/256 + ⋯converges to 1/3. In the case of an RGB image with three channels stored as separate planes, the total mipmap can be visualized as fitting neatly into a square area twice as large as the dimensions of the original image on each side (twice as large on each side is four times the original area - one plane of the original size for each of red, green and blue makes three times the original area, and then since the smaller textures take 1/3 of the original, 1/3 of three is one, so they will take the same total space as just one of the original red, green, or blue planes). This is the inspiration for the tagmultum in parvo. When a texture is viewed at a steep angle, the filtering should not be uniform in each direction (it should beanisotropicrather thanisotropic), and a compromise resolution is required. If a higher resolution is used, thecache coherencegoes down, and the aliasing is increased in one direction, but the image tends to be clearer. If a lower resolution is used, the cache coherence is improved, but the image is overly blurry. This would be a tradeoff of MIP level of detail (LOD) for aliasing vs blurriness. However anisotropic filtering attempts to resolve this trade-off by sampling a non isotropic texture footprint for each pixel rather than merely adjusting the MIP LOD. This non isotropic texture sampling requires either a more sophisticated storage scheme or a summation of more texture fetches at higher frequencies.[10] Summed-area tablescan conserve memory and provide more resolutions. However, they again hurt cache coherence, and need wider types to store the partial sums, which are larger than the base texture's word size. Thus, modern graphics hardware does not support them.
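A minimal sketch of mipmap generation by successive 2×2 box-filter averaging (the "successive averaging" mentioned above), assuming NumPy and a square power-of-two RGB texture; it also checks the roughly one-third storage overhead.

import numpy as np

def build_mipmaps(texture):
    """Build a mipmap chain by repeated 2x2 box-filter averaging, down to 1x1."""
    levels = [texture]
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        # Average each 2x2 block of texels into one texel of the next level.
        t = t.reshape(t.shape[0] // 2, 2, t.shape[1] // 2, 2, t.shape[2]).mean(axis=(1, 3))
        levels.append(t)
    return levels

base = np.random.default_rng(0).random((256, 256, 3))   # a 256x256 RGB texture
chain = build_mipmaps(base)

print([lvl.shape[:2] for lvl in chain])                  # (256, 256), (128, 128), ..., (1, 1)
extra = sum(lvl.size for lvl in chain[1:]) / base.size
print(extra)                                             # ~0.333: the roughly 1/3 storage overhead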
https://en.wikipedia.org/wiki/Mipmap
Incomputing, theExecutable and Linkable Format[2](ELF, formerly namedExtensible Linking Format) is a common standardfile formatforexecutablefiles,object code,shared libraries, andcore dumps. First published in the specification for theapplication binary interface(ABI) of theUnixoperating system version namedSystem V Release 4(SVR4),[3]and later in the Tool Interface Standard,[1]it was quickly accepted among different vendors ofUnixsystems. In 1999, it was chosen as the standard binary file format for Unix andUnix-likesystems onx86processors by the86openproject. By design, the ELF format is flexible, extensible, andcross-platform. For instance, it supports differentendiannessesand address sizes so it does not exclude any particularCPUorinstruction set architecture. This has allowed it to be adopted by many differentoperating systemson many different hardwareplatforms. Each ELF file is made up of one ELF header, followed by file data. The data can include: The segments contain information that is needed forrun timeexecution of the file, while sections contain important data for linking and relocation. Anybytein the entire file can be owned by one section at most, and orphan bytes can occur which are unowned by any section. The ELF header defines whether to use32-bitor64-bitaddresses. The header contains three fields that are affected by this setting and offset other fields that follow them. The ELF header is 52 or 64 bytes long for 32-bit and 64-bit binaries, respectively. glibc 2.12+ in casee_ident[EI_OSABI] == 3treats this field as ABI version of thedynamic linker:[6]it defines a list of dynamic linker's features,[7]treatse_ident[EI_ABIVERSION]as a feature level requested by the shared object (executable or dynamic library) and refuses to load it if an unknown feature is requested, i.e.e_ident[EI_ABIVERSION]is greater than the largest known feature.[8] [9] The program header table tells the system how to create a process image. It is found at file offsete_phoff, and consists ofe_phnumentries, each with sizee_phentsize. The layout is slightly different in32-bitELF vs64-bitELF, because thep_flagsare in a different structure location for alignment reasons. Each entry is structured as: The ELF format has replaced older executable formats in various environments. It has replaceda.outandCOFFformats inUnix-likeoperating systems: ELF has also seen some adoption in non-Unix operating systems, such as: Microsoft Windowsalso uses the ELF format, but only for itsWindows Subsystem for Linuxcompatibility system.[17] Some game consoles also use ELF: Other (operating) systems running on PowerPC that use ELF: Some operating systems for mobile phones and mobile devices use ELF: Some phones can run ELF files through the use of a patch that adds assembly code to the main firmware, which is a feature known asELFPackin the underground modding culture. The ELF file format is also used with theAtmel AVR(8-bit), AVR32[22]and with Texas Instruments MSP430 microcontroller architectures. Some implementations of Open Firmware can also load ELF files, most notably Apple's implementation used in almost all PowerPC machines the company produced. 
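A hedged sketch of reading the header fields discussed above with Python's struct module; the field widths follow the 32-/64-bit layout described in the text, and the file path at the end is purely illustrative.

import struct

def read_elf_header(path):
    """Parse the ELF identification bytes and a few header fields."""
    with open(path, 'rb') as f:
        ident = f.read(16)                       # e_ident
        if ident[:4] != b'\x7fELF':
            raise ValueError('not an ELF file')
        is_64bit = ident[4] == 2                 # EI_CLASS: 1 = 32-bit, 2 = 64-bit
        little = ident[5] == 1                   # EI_DATA: 1 = little-endian, 2 = big-endian
        endian = '<' if little else '>'
        e_type, e_machine, e_version = struct.unpack(endian + 'HHI', f.read(8))
        # Entry point, program header offset and section header offset are
        # 4-byte fields in 32-bit ELF and 8-byte fields in 64-bit ELF.
        fmt = endian + ('QQQ' if is_64bit else 'III')
        e_entry, e_phoff, e_shoff = struct.unpack(fmt, f.read(struct.calcsize(fmt)))
    return {'64bit': is_64bit, 'little_endian': little, 'type': e_type,
            'machine': e_machine, 'entry': hex(e_entry), 'phoff': e_phoff, 'shoff': e_shoff}

print(read_elf_header('/bin/ls'))   # illustrative path; any ELF binary will do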
86openwas a project to form consensus on a common binary file format for Unix and Unix-like operating systems on the common PC compatible x86 architecture, to encourage software developers to port to the architecture.[24]The initial idea was to standardize on a small subset of Spec 1170, a predecessor of the Single UNIX Specification, and the GNU C Library (glibc) to enable unmodified binaries to run on the x86 Unix-like operating systems. The project was originally designated "Spec 150". The format eventually chosen was ELF, specifically the Linux implementation of ELF, after it had turned out to be ade factostandard supported by all involved vendors and operating systems. The group began email discussions in 1997 and first met together at the Santa Cruz Operation offices on August 22, 1997. The steering committee was Marc Ewing, Dion Johnson, Evan Leibovitch,Bruce Perens, Andrew Roach, Bryan Wayne Sparks and Linus Torvalds. Other people on the project were Keith Bostic, Chuck Cranor, Michael Davidson, Chris G. Demetriou, Ulrich Drepper, Don Dugger, Steve Ginzburg, Jon "maddog" Hall, Ron Holt, Jordan Hubbard, Dave Jensen, Kean Johnston, Andrew Josey, Robert Lipe, Bela Lubkin, Tim Marsland, Greg Page, Ronald Joe Record, Tim Ruckle, Joel Silverstein, Chia-pi Tien, and Erik Troan. Operating systems and companies represented were BeOS, BSDI, FreeBSD,Intel, Linux, NetBSD, SCO and SunSoft. The project progressed and in mid-1998, SCO began developing lxrun, an open-source compatibility layer able to run Linux binaries on OpenServer, UnixWare, and Solaris. SCO announced official support of lxrun at LinuxWorld in March 1999. Sun Microsystems began officially supporting lxrun for Solaris in early 1999,[25]and later moved to integrated support of the Linux binary format via Solaris Containers for Linux Applications. With the BSDs having long supported Linux binaries (through a compatibility layer) and the main x86 Unix vendors having added support for the format, the project decided that Linux ELF was the format chosen by the industry and "declare[d] itself dissolved" on July 25, 1999.[26] FatELF is an ELF binary-format extension that adds fat binary capabilities.[27]It is aimed for Linux and other Unix-like operating systems. Additionally to the CPU architecture abstraction (byte order, word size,CPUinstruction set etc.), there is the potential advantage of software-platform abstraction e.g., binaries which support multiple kernel ABI versions. As of 2021[update], FatELF has not been integrated into the mainline Linux kernel.[28][29][30] [1]
https://en.wikipedia.org/wiki/Executable_and_linkable_format
In probability theory, a Cauchy process is a type of stochastic process. There are symmetric and asymmetric forms of the Cauchy process.[1] The unspecified term "Cauchy process" is often used to refer to the symmetric Cauchy process.[2]

The Cauchy process has a number of properties.

The symmetric Cauchy process can be described by a Brownian motion or Wiener process subject to a Lévy subordinator.[7] The Lévy subordinator is a process associated with a Lévy distribution having location parameter 0 and scale parameter t²/2.[7] The Lévy distribution is a special case of the inverse-gamma distribution. So, using C to represent the Cauchy process and L to represent the Lévy subordinator, the symmetric Cauchy process can be described as C(t) = B(L(t)), where B is the underlying Brownian motion. The Lévy distribution is the probability of the first hitting time for a Brownian motion, and thus the Cauchy process is essentially the result of two independent Brownian motion processes.[7]

The Lévy–Khintchine representation for the symmetric Cauchy process is a triplet with zero drift and zero diffusion, giving a Lévy–Khintchine triplet of (0, 0, W), where W(dx) = dx/(πx²).[8]

The marginal characteristic function of the symmetric Cauchy process has the form[1][8]

E[exp(iθX_t)] = exp(−t|θ|).

The marginal probability distribution of the symmetric Cauchy process is the Cauchy distribution, whose density is[8][9]

f(x; t) = t / (π(t² + x²)).

The asymmetric Cauchy process is defined in terms of a parameter β. Here β is the skewness parameter, and its absolute value must be less than or equal to 1.[1] In the case where |β| = 1 the process is considered a completely asymmetric Cauchy process.[1]

The Lévy–Khintchine triplet has the form (0, 0, W), where W(dx) = Ax⁻² dx if x > 0 and W(dx) = Bx⁻² dx if x < 0, with A ≠ B, A > 0 and B > 0.[1]

Given this, β is a function of A and B. The characteristic function of the asymmetric Cauchy distribution involves an additional logarithmic term depending on β.[1]

The marginal probability distribution of the asymmetric Cauchy process is a stable distribution with index of stability (i.e., α parameter) equal to 1.
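A minimal simulation sketch (our own, assuming NumPy): since the increments of the standard symmetric Cauchy process over a step of length dt are Cauchy distributed with scale dt, a sample path can be generated by summing independent Cauchy increments; subordinating a Brownian motion as described above would be an equivalent construction.

import numpy as np

rng = np.random.default_rng(0)

def cauchy_path(T=1.0, n_steps=1000, rng=rng):
    """One sample path of the standard symmetric Cauchy process on [0, T]."""
    dt = T / n_steps
    # Increments over a step of length dt are Cauchy distributed with scale dt.
    increments = dt * rng.standard_cauchy(n_steps)
    return np.concatenate([[0.0], np.cumsum(increments)])

path = cauchy_path()
print(path[-1])   # value at time T; marginally Cauchy(0, T) distributed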
https://en.wikipedia.org/wiki/Cauchy_process
Inmachine learning, a common task is the study and construction ofalgorithmsthat can learn from and make predictions ondata.[1]Such algorithms function by making data-driven predictions or decisions,[2]through building amathematical modelfrom input data. These input data used to build the model are usually divided into multipledata sets. In particular, three data sets are commonly used in different stages of the creation of the model: training, validation, and test sets. The model is initially fit on atraining data set,[3]which is a set of examples used to fit the parameters (e.g. weights of connections between neurons inartificial neural networks) of the model.[4]The model (e.g. anaive Bayes classifier) is trained on the training data set using asupervised learningmethod, for example using optimization methods such asgradient descentorstochastic gradient descent. In practice, the training data set often consists of pairs of an inputvector(or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as thetarget(orlabel). The current model is run with the training data set and produces a result, which is then compared with thetarget, for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include bothvariable selectionand parameterestimation. Successively, the fitted model is used to predict the responses for the observations in a second data set called thevalidation data set.[3]The validation data set provides an unbiased evaluation of a model fit on the training data set while tuning the model'shyperparameters[5](e.g. the number of hidden units—layers and layer widths—in a neural network[4]). Validation data sets can be used forregularizationbyearly stopping(stopping training when the error on the validation data set increases, as this is a sign ofover-fittingto the training data set).[6]This simple procedure is complicated in practice by the fact that the validation data set's error may fluctuate during training, producing multiple local minima. This complication has led to the creation of many ad-hoc rules for deciding when over-fitting has truly begun.[6] Finally, thetest data setis a data set used to provide an unbiased evaluation of afinalmodel fit on the training data set.[5]If the data in the test data set has never been used in training (for example incross-validation), the test data set is also called aholdout data set. 
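A minimal sketch of the three-way split described above, using only NumPy on synthetic data; the 60/20/20 proportions and variable names are illustrative choices, not a recommendation from the article.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                               # inputs
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=1000)     # targets

idx = rng.permutation(len(X))                  # shuffle before splitting
n_train, n_val = int(0.6 * len(X)), int(0.2 * len(X))
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]

X_train, y_train = X[train_idx], y[train_idx]  # used to fit parameters
X_val, y_val = X[val_idx], y[val_idx]          # used to tune hyperparameters / early stopping
X_test, y_test = X[test_idx], y[test_idx]      # used once, for the final unbiased evaluation
print(len(X_train), len(X_val), len(X_test))   # 600 200 200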
The term "validation set" is sometimes used instead of "test set" in some literature (e.g., if the original data set was partitioned into only two subsets, the test set might be referred to as the validation set).[5] Deciding the sizes and strategies for data set division in training, test and validation sets is very dependent on the problem and data available.[7] A training data set is adata setof examples used during the learning process and is used to fit the parameters (e.g., weights) of, for example, aclassifier.[9][10] For classification tasks, a supervised learning algorithm looks at the training data set to determine, or learn, the optimal combinations of variables that will generate a goodpredictive model.[11]The goal is to produce a trained (fitted) model that generalizes well to new, unknown data.[12]The fitted model is evaluated using “new” examples from the held-out data sets (validation and test data sets) to estimate the model’s accuracy in classifying new data.[5]To reduce the risk of issues such as over-fitting, the examples in the validation and test data sets should not be used to train the model.[5] Most approaches that search through training data for empirical relationships tend tooverfitthe data, meaning that they can identify and exploit apparent relationships in the training data that do not hold in general. When a training set is continuously expanded with new data, then this isincremental learning. A validation data set is adata setof examples used to tune thehyperparameters(i.e. the architecture) of a model. It is sometimes also called the development set or the "dev set".[13]An example of a hyperparameter forartificial neural networksincludes the number of hidden units in each layer.[9][10]It, as well as the testing set (as mentioned below), should follow the same probability distribution as the training data set. In order to avoid overfitting, when anyclassificationparameter needs to be adjusted, it is necessary to have a validation data set in addition to the training and test data sets. For example, if the most suitable classifier for the problem is sought, the training data set is used to train the different candidate classifiers, the validation data set is used to compare their performances and decide which one to take and, finally, the test data set is used to obtain the performance characteristics such asaccuracy,sensitivity,specificity,F-measure, and so on. The validation data set functions as a hybrid: it is training data used for testing, but neither as part of the low-level training nor as part of the final testing. The basic process of using a validation data set formodel selection(as part of training data set, validation data set, and test data set) is:[10][14] Since our goal is to find the network having the best performance on new data, the simplest approach to the comparison of different networks is to evaluate the error function using data which is independent of that used for training. Various networks are trained by minimization of an appropriate error function defined with respect to a training data set. The performance of the networks is then compared by evaluating the error function using an independent validation set, and the network having the smallest error with respect to the validation set is selected. This approach is called thehold outmethod. 
Since this procedure can itself lead to some overfitting to the validation set, the performance of the selected network should be confirmed by measuring its performance on a third independent set of data called a test set. An application of this process is inearly stopping, where the candidate models are successive iterations of the same network, and training stops when the error on the validation set grows, choosing the previous model (the one with minimum error). A test data set is adata setthat isindependentof the training data set, but that follows the sameprobability distributionas the training data set. If a model fit to the training data set also fits the test data set well, minimaloverfittinghas taken place (see figure below). A better fitting of the training data set as opposed to the test data set usually points to over-fitting. A test set is therefore a set of examples used only to assess the performance (i.e. generalization) of a fully specified classifier.[9][10]To do this, the final model is used to predict classifications of examples in the test set. Those predictions are compared to the examples' true classifications to assess the model's accuracy.[11] In a scenario where both validation and test data sets are used, the test data set is typically used to assess the final model that is selected during the validation process. In the case where the original data set is partitioned into two subsets (training and test data sets), the test data set might assess the model only once (e.g., in theholdout method).[15]Note that some sources advise against such a method.[12]However, when using a method such ascross-validation, two partitions can be sufficient and effective since results are averaged after repeated rounds of model training and testing to help reduce bias and variability.[5][12] Testing is trying something to find out about it ("To put to the proof; to prove the truth, genuineness, or quality of by experiment" according to the Collaborative International Dictionary of English) and to validate is to prove that something is valid ("To confirm; to render valid" Collaborative International Dictionary of English). With this perspective, the most common use of the termstest setandvalidation setis the one here described. However, in both industry and academia, they are sometimes used interchanged, by considering that the internal process is testing different models to improve (test set as a development set) and the final model is the one that needs to be validated before real use with an unseen data (validation set). "The literature on machine learning often reverses the meaning of 'validation' and 'test' sets. This is the most blatant example of the terminological confusion that pervades artificial intelligence research."[16]Nevertheless, the important concept that must be kept is that the final set, whether called test or validation, should only be used in the final experiment. In order to get more stable results and use all valuable data for training, a data set can be repeatedly split into several training and a validation data sets. This is known ascross-validation. To confirm the model's performance, an additional test data set held out from cross-validation is normally used. It is possible to use cross-validation on training and validation sets, andwithineach training set have further cross-validation for a test set for hyperparameter tuning. This is known asnested cross-validation. 
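The hold-out procedure described above can be sketched end to end as follows (a self-contained toy example with synthetic data; the ridge-regression candidates stand in for the "different networks", and all names are ours): candidates are fitted on the training set, compared on the validation set, and only the selected model is scored, once, on the test set.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 10))
y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=600)

X_train, y_train = X[:400], y[:400]
X_val, y_val = X[400:500], y[400:500]
X_test, y_test = X[500:], y[500:]

def fit_ridge(X, y, lam):
    """Closed-form ridge regression: solve (X^T X + lam I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

candidates = [0.01, 0.1, 1.0, 10.0, 100.0]                             # hyperparameter grid
fits = {lam: fit_ridge(X_train, y_train, lam) for lam in candidates}
best = min(candidates, key=lambda lam: mse(fits[lam], X_val, y_val))   # hold-out selection

print('selected lambda:', best)
print('test MSE of selected model:', mse(fits[best], X_test, y_test))  # reported once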
Omissions in the training of algorithms are a major cause of erroneous outputs.[17]Types of such omissions include:[17] An example of an omission of particular circumstances is a case where a boy was able to unlock the phone because his mother registered her face under indoor, nighttime lighting, a condition which was not appropriately included in the training of the system.[17][18] Usage of relatively irrelevant input can include situations where algorithms use the background rather than the object of interest forobject detection, such as being trained by pictures of sheep on grasslands, leading to a risk that a different object will be interpreted as a sheep if located on a grassland.[17]
https://en.wikipedia.org/wiki/Training_set
"Somebody else's problem"or"someone else's problem"is an issue which is dismissed by a person on the grounds that they consider somebody else to be responsible for it. A 1976 edition of the journalEkisticsused the phrase in the context of bureaucratic inaction onlow-income housing, describing "the principle ofsomebody else's problem" as something that prevented progress. Where responsibility for a complex problem falls across many different departments of government, even those agencies who wish to tackle the issue are unable to do so.[1] Referring to a team working on acomputer programmingproject,Alan F. Blackwellwrote in 1997 that: "Many sub-goals can be deferred to the degree that they become what is known amongst professional programmers as an 'S.E.P.' – somebody else's problem."[2] Douglas Adams' 1982 novelLife, the Universe and Everything(inThe Hitchhiker's Guide to the Galaxycomedy science fictionseries) introduces the idea of an "SEPfield" as a kind ofcloaking device. The characterFord Prefectsays, An SEP is something we can't see, or don't see, or our brain doesn't let us see, because we think that it's somebody else's problem. That’s what SEP means. Somebody Else’s Problem. The brain just edits it out, it's like a blind spot. The narration then explains: The Somebody Else's Problem field... relies on people's natural predisposition not to see anything they don't want to, weren't expecting, or can't explain. If Effrafax had painted the mountain pink and erected a cheap and simple Somebody Else’s Problem field on it, then people would have walked past the mountain, round it, even over it, and simply never have noticed that the thing was there. Adams' description of an SEP field is quoted in an article of "psychological invisibility", where it is compared to other fictional effects such as the perception filter inDoctor Who, as well as cognitive biases such asinattentional blindnessandchange blindness.[3]
https://en.wikipedia.org/wiki/Somebody_Else%27s_Problem
Cladistics(/kləˈdɪstɪks/klə-DIST-iks; fromAncient Greekκλάδοςkládos'branch')[1]is an approach tobiological classificationin whichorganismsare categorized in groups ("clades") based on hypotheses of most recentcommon ancestry. The evidence for hypothesized relationships is typically sharedderivedcharacteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whosecharacter statescan be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the termswormsorfisheswere used within astrictcladistic framework, these terms would include humans. Many of these terms are normally usedparaphyletically, outside of cladistics, e.g. as a 'grade', which are fruitless to precisely delineate, especially when including extinct species.Radiationresults in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.[2][3][4][5] As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group). To keep only valid clades, upon finding that the group is paraphyletic this way, either such excluded groups should be granted to the clade, or the group should be abolished.[6] Branches down to the divergence to the next significant (e.g. extant) sister are considered stem-groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. Specifically, also extinct groups are always put on a side-branch, not distinguishing whether an actual ancestor of other groupings was found. The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (Seephylogenetic nomenclature.) Cladistics findings are posing a difficulty fortaxonomy, where the rank and (genus-)naming of established groupings may turn out to be inconsistent. Cladistics is now the most commonly used method to classify organisms.[7] The original methods used in cladistic analysis and the school of taxonomy derived from the work of the GermanentomologistWilli Hennig, who referred to it asphylogenetic systematics(also the title of his 1966 book); but the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used inphylogeneticanalysis, although it is now sometimes used to refer to the whole field.[8] What is now called the cladistic method appeared as early as 1901 with a work byPeter Chalmers Mitchellfor birds[9][10]and subsequently byRobert John Tillyard(for insects) in 1921,[11]andW. Zimmermann(for plants) in 1943.[12]The term "clade" was introduced in 1958 byJulian Huxleyafter having been coined byLucien Cuénotin 1940,[13]"cladogenesis" in 1958,[14]"cladistic" byArthur Cainand Harrison in 1960,[15]"cladist" (for an adherent of Hennig's school) byErnst Mayrin 1965,[16]and "cladistics" in 1966.[14]Hennig referred to his own approach as "phylogenetic systematics". 
From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics withpheneticsand so-calledevolutionary taxonomy. Phenetics was championed at this time by thenumerical taxonomistsPeter SneathandRobert Sokal, and evolutionary taxonomy byErnst Mayr.[17] Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for inferring phylogenetic trees from morphological data. In the 1990s, the development of effectivepolymerase chain reactiontechniques allowed the application of cladistic methods tobiochemicalandmolecular genetictraits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, becausecomputersmade it possible to process large quantities of data about organisms and their characteristics. The cladistic method interprets each shared character state transformation as a potential piece of evidence for grouping.Synapomorphies(shared, derived character states) are viewed as evidence of grouping, whilesymplesiomorphies(shared ancestral character states) are not. The outcome of a cladistic analysis is acladogram– atree-shaped diagram (dendrogram)[18]that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand,genetic sequencingdata andcomputational phylogeneticsare now commonly used in phylogenetic analyses, and theparsimonycriterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified because there is no evidence that they recover more "true" or "correct" results from actual empirical data sets[19] Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting ofmolecular, morphological,ethological[20]and/or other characters and a list ofoperational taxonomic units(OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct. Until recently, for example, cladograms like the following have generally been accepted as accurate representations of the ancestral relations among turtles, lizards, crocodilians, and birds:[21] turtles lizards crocodilians birds If this phylogenetic hypothesis is correct, then the last common ancestor of turtles and birds, at the branch near the▼lived earlier than the last common ancestor of lizards and birds, near the♦. Mostmolecular evidence, however, produces cladograms more like this:[22] lizards turtles crocodilians birds If this is accurate, then the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since the cladograms show two mutually exclusive hypotheses to describe the evolutionary history, at most one of them is correct. 
The cladogram to the right represents the current universally accepted hypothesis that allprimates, includingstrepsirrhineslike thelemursandlorises, had a common ancestor all of whose descendants are or were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes, and humans) are hypothesized to have had a common ancestor all of whose descendants are or were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used inphylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the cladesStrepsirhiniandHaplorhini, where the latter contains Tarsiiformes and Anthropoidea. Lemurs and tarsiers may have looked closely related to humans, in the sense of being close on the evolutionary tree to humans. However, from the perspective of a tarsier, humans and lemurs would have looked close, in the exact same sense. Cladistics forces a neutral perspective, treating all branches (extant or extinct) in the same manner. It also forces one to try to make statements, and honestly take into account findings, about the exact historic relationships between the groups. The following terms, coined by Hennig, are used to identify shared or distinct character states among groups:[23][24][25] The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features. It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree.[28]Phylogeneticsuses various forms ofparsimonyto decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence.[29] Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states.[24][25][30]These are compared in the table below. Cladistics, either generally or in specific applications, has been criticized from its beginnings. 
Decisions as to whether particular character states arehomologous, a precondition of their being synapomorphies, have been challenged as involvingcircular reasoningand subjective judgements.[34]Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all.[35][36] Transformed cladisticsarose in the late 1970s[37]in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular.[38] The cladistic method does not identify fossil species as actual ancestors of a clade.[39]Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features.[40] An otherwise extinct group with any extant descendants, is not considered (literally) extinct,[41]and for instance does not have a date of extinction. Anything having to do with biology and sex is complicated and messy, and cladistics is no exception.[42]Many species reproduce sexually, and are capable of interbreeding for millions of years. Worse, during such a period, many branches may have radiated, and it may take hundreds of millions of years for them to have whittled down to just two.[43]Only then one can theoretically assign proper last common ancestors of groupings which do not inadvertently include earlier branches.[44]The process of true cladistic bifurcation can thus take a much more extended time than one is usually aware of.[45]In practice, for recent radiations, cladistically guided findings only give a coarse impression of the complexity. A more detailed account will give details about fractions of introgressions between groupings, and even geographic variations thereof. This has been used as an argument for the use of paraphyletic groupings,[44]but typically other reasons are quoted. Horizontal gene transfer is the mobility of genetic info between different organisms that can have immediate or delayed effects for the reciprocal host.[46]There are several processes in nature which can causehorizontal gene transfer. This does typically not directly interfere with ancestry of the organism, but can complicate the determination of that ancestry. On another level, one can map the horizontal gene transfer processes, by determining the phylogeny of the individual genes using cladistics. If there is unclarity in mutual relationships, there are a lot of possible trees. Assigning names to each possible clade may not be prudent. Furthermore, established names are discarded in cladistics, or alternatively carry connotations which may no longer hold, such as when additional groups are found to have emerged in them.[47]Naming changes are the direct result of changes in the recognition of mutual relationships, which often is still in flux, especially for extinct species. Hanging on to older naming and/or connotations is counter-productive, as they typically do not reflect actual mutual relationships precisely at all. E.g. Archaea, Asgard archaea, protists, slime molds, worms, invertebrata, fishes, reptilia, monkeys,Ardipithecus,Australopithecus,Homo erectusall containHomo sapienscladistically, in theirsensu latomeaning. 
For originally extinct stem groups,sensu latogenerally means generously keeping previously included groups, which then may come to include even living species. A prunedsensu strictomeaning is often adopted instead, but the group would need to be restricted to a single branch on the stem. Other branches then get their own name and level. This is commensurate to the fact that more senior stem branches are in fact closer related to the resulting group than the more basal stem branches; that those stem branches only may have lived for a short time does not affect that assessment in cladistics. The comparisons used to acquire data on whichcladogramscan be based are not limited to the field of biology.[48]Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured. Anthropologyandarchaeology:[49]Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features. Comparative mythologyandfolktaleuse cladistic methods to reconstruct the protoversion of many myths. Mythological phylogenies constructed with mythemes clearly support low horizontal transmissions (borrowings), historical (sometimes Palaeolithic) diffusions and punctuated evolution.[50]They also are a powerful way to test hypotheses about cross-cultural relationships among folktales.[51][52] Literature: Cladistic methods have been used in the classification of the surviving manuscripts of theCanterbury Tales,[53]and the manuscripts of the SanskritCharaka Samhita.[54] Historical linguistics:[55]Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditionalcomparative methodof historical linguistics, but is more explicit in its use ofparsimonyand allows much faster analysis of large datasets (computational phylogenetics). Textual criticismorstemmatics:[54][56]Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enablesparsimonyanalysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time. Astrophysics[57]infers the history of relationships between galaxies to create branching diagram hypotheses of galaxy diversification. Biology portalEvolutionary biology portal
https://en.wikipedia.org/wiki/Cladistics
Inmathematics, anasymptotic expansion,asymptotic seriesorPoincaré expansion(afterHenri Poincaré) is aformal seriesof functions which has the property thattruncatingthe series after a finite number of terms provides an approximation to a given function as the argument of the function tends towards a particular, often infinite, point. Investigations byDingle (1973)revealed that the divergent part of an asymptotic expansion is latently meaningful, i.e. contains information about the exact value of the expanded function. The theory of asymptotic series was created by Poincaré (and independently byStieltjes) in 1886.[1] The most common type of asymptotic expansion is apower seriesin either positive or negative powers. Methods of generating such expansions include theEuler–Maclaurin summation formulaand integral transforms such as theLaplaceandMellintransforms. Repeatedintegration by partswill often lead to an asymptotic expansion. Since aconvergentTaylor seriesfits the definition of asymptotic expansion as well, the phrase "asymptotic series" usually implies anon-convergentseries. Despite non-convergence, the asymptotic expansion is useful when truncated to a finite number of terms. The approximation may provide benefits by being more mathematically tractable than the function being expanded, or by an increase in the speed of computation of the expanded function. Typically, the best approximation is given when the series is truncated at the smallest term. This way of optimally truncating an asymptotic expansion is known assuperasymptotics.[2]The error is then typically of the form~ exp(−c/ε)whereεis the expansion parameter. The error is thus beyond all orders in the expansion parameter. It is possible to improve on the superasymptotic error, e.g. by employing resummation methods such asBorel resummationto the divergent tail. Such methods are often referred to ashyperasymptotic approximations. Seeasymptotic analysisandbig O notationfor the notation used in this article. First we define an asymptotic scale, and then give the formal definition of an asymptotic expansion. Ifφn{\displaystyle \ \varphi _{n}\ }is a sequence ofcontinuous functionson some domain, and ifL{\displaystyle \ L\ }is alimit pointof the domain, then the sequence constitutes anasymptotic scaleif for everyn, (L{\displaystyle \ L\ }may be taken to be infinity.) In other words, a sequence of functions is an asymptotic scale if each function in the sequence grows strictly slower (in the limitx→L{\displaystyle \ x\to L\ }) than the preceding function. Iff{\displaystyle \ f\ }is a continuous function on the domain of the asymptotic scale, thenfhas an asymptotic expansion of orderN{\displaystyle \ N\ }with respect to the scale as a formal series if or the weaker condition is satisfied. Here,o{\displaystyle o}is thelittle onotation. If one or the other holds for allN{\displaystyle \ N\ }, then we write[citation needed] In contrast to a convergent series forf{\displaystyle \ f\ }, wherein the series converges for anyfixedx{\displaystyle \ x\ }in the limitN→∞{\displaystyle N\to \infty }, one can think of the asymptotic series as converging forfixedN{\displaystyle \ N\ }in the limitx→L{\displaystyle \ x\to L\ }(withL{\displaystyle \ L\ }possibly infinite). Asymptotic expansions often occur when an ordinary series is used in a formal expression that forces the taking of values outside of itsdomain of convergence. 
Thus, for example, one may start with the ordinary series

1/(1 − w) = Σ_{n=0}^∞ wⁿ.

The expression on the left is valid on the entire complex plane w ≠ 1, while the right hand side converges only for |w| < 1. Multiplying by e^{−w/t} and integrating both sides yields

∫₀^∞ e^{−w/t}/(1 − w) dw = Σ_{n=0}^∞ t^{n+1} ∫₀^∞ e^{−u} uⁿ du,

after the substitution u = w/t on the right hand side. The integral on the left hand side, understood as a Cauchy principal value, can be expressed in terms of the exponential integral. The integral on the right hand side may be recognized as the gamma function. Evaluating both, one obtains the asymptotic expansion

e^{−1/t} Ei(1/t) ≃ Σ_{n=0}^∞ n! t^{n+1}.

Here, the right hand side is clearly not convergent for any non-zero value of t. However, by truncating the series on the right to a finite number of terms, one may obtain a fairly good approximation to the value of Ei(1/t) for sufficiently small t. Substituting x = −1/t and noting that Ei(x) = −E₁(−x) results in the asymptotic expansion given earlier in this article.

Using integration by parts, we can obtain an explicit formula[3]

Ei(z) = (e^z/z) (Σ_{k=0}^n k!/z^k + e_n(z)),   where   e_n(z) ≡ (n + 1)! z e^{−z} ∫_{−∞}^z e^t/t^{n+2} dt.

For any fixed z, the absolute value of the error term |e_n(z)| decreases, then increases. The minimum occurs at n ∼ |z|, at which point |e_n(z)| ≤ √(2π/|z|) e^{−|z|}. This bound is said to be "asymptotics beyond all orders".

For a given asymptotic scale {φ_n(x)} the asymptotic expansion of the function f(x) is unique.[4] That is, the coefficients {a_n} are uniquely determined in the following way:

a_0 = lim_{x→L} f(x)/φ_0(x),
a_1 = lim_{x→L} (f(x) − a_0 φ_0(x))/φ_1(x),
⋮
a_N = lim_{x→L} (f(x) − Σ_{n=0}^{N−1} a_n φ_n(x))/φ_N(x),

where L is the limit point of this asymptotic expansion (which may be ±∞).

A given function f(x) may have many asymptotic expansions (each with a different asymptotic scale).[4]

An asymptotic expansion may be an asymptotic expansion to more than one function.[4]
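The optimal-truncation behaviour described above can be checked numerically with a short script (our own illustration, assuming SciPy is available for a reference value of the exponential integral): the error of the truncated expansion Ei(z) ≈ (e^z/z) Σ_{k≤n} k!/z^k shrinks until n is of order z and grows afterwards.

import math
from scipy.special import expi      # reference value of Ei

def ei_asymptotic(z, n):
    """Truncated asymptotic expansion Ei(z) ~ (e^z / z) * sum_{k=0}^{n} k!/z^k."""
    return math.exp(z) / z * sum(math.factorial(k) / z**k for k in range(n + 1))

z = 10.0
exact = expi(z)
for n in range(0, 21, 2):
    approx = ei_asymptotic(z, n)
    print(n, abs(approx - exact))   # error decreases until n is about z, then grows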
https://en.wikipedia.org/wiki/Asymptotic_expansion
Agent mining is a research field that combines two areas of computer science: multiagent systems and data mining. It explores how intelligent computer agents can work together to discover, analyze, and learn from large amounts of data more effectively than traditional methods.[1][2] The interaction and integration between multiagent systems and data mining have a long history.[3][4] The earliest work on agent mining focused on agent-based knowledge discovery,[5] agent-based distributed data mining,[6][7] agent-based distributed machine learning,[8] and on using data mining to enhance agent intelligence.[9] The International Workshop on Agents and Data Mining Interaction[10] has been held more than 10 times, co-located with the International Conference on Autonomous Agents and Multi-Agent Systems. Several proceedings are available in Springer's Lecture Notes in Computer Science.
https://en.wikipedia.org/wiki/Agent_mining
The Universal Mobile Telecommunications System (UMTS) is a 3G mobile cellular system for networks based on the GSM standard.[1] UMTS uses wideband code-division multiple access (W-CDMA) radio access technology to offer greater spectral efficiency and bandwidth to mobile network operators compared to previous 2G systems like GPRS and CSD.[2] UMTS on its own provides a peak theoretical data rate of 2 Mbit/s.[3] Developed and maintained by the 3GPP (3rd Generation Partnership Project), UMTS is a component of the International Telecommunication Union IMT-2000 standard set and compares with the CDMA2000 standard set for networks based on the competing cdmaOne technology. The technology described in UMTS is sometimes also referred to as Freedom of Mobile Multimedia Access (FOMA)[4] or 3GSM. UMTS specifies a complete network system, which includes the radio access network (UMTS Terrestrial Radio Access Network, or UTRAN), the core network (Mobile Application Part, or MAP) and the authentication of users via SIM (subscriber identity module) cards. Unlike EDGE (IMT Single-Carrier, based on GSM) and CDMA2000 (IMT Multi-Carrier), UMTS requires new base stations and new frequency allocations. UMTS has since been enhanced as High Speed Packet Access (HSPA).[5] UMTS supports theoretical maximum data transfer rates of 42 Mbit/s when Evolved HSPA (HSPA+) is implemented in the network.[6] Users in deployed networks can expect a transfer rate of up to 384 kbit/s for Release '99 (R99) handsets (the original UMTS release), and 7.2 Mbit/s for High-Speed Downlink Packet Access (HSDPA) handsets in the downlink connection. These speeds are significantly faster than the 9.6 kbit/s of a single GSM error-corrected circuit-switched data channel, multiple 9.6 kbit/s channels in High-Speed Circuit-Switched Data (HSCSD) and 14.4 kbit/s for cdmaOne channels. Since 2006, UMTS networks in many countries have been or are in the process of being upgraded with High-Speed Downlink Packet Access (HSDPA), sometimes known as 3.5G. Currently, HSDPA enables downlink transfer speeds of up to 21 Mbit/s. Work is also progressing on improving the uplink transfer speed with High-Speed Uplink Packet Access (HSUPA). The 3GPP LTE standard succeeds UMTS and initially provided 4G speeds of 100 Mbit/s down and 50 Mbit/s up, with scalability up to 3 Gbit/s, using a next-generation air interface technology based upon orthogonal frequency-division multiplexing. The first national consumer UMTS networks launched in 2002 with a heavy emphasis on telco-provided mobile applications such as mobile TV and video calling. The high data speeds of UMTS are now most often utilised for Internet access: experience in Japan and elsewhere has shown that user demand for video calls is not high, and telco-provided audio/video content has declined in popularity in favour of high-speed access to the World Wide Web – either directly on a handset or connected to a computer via Wi-Fi, Bluetooth or USB.[citation needed] UMTS combines three different terrestrial air interfaces, GSM's Mobile Application Part (MAP) core, and the GSM family of speech codecs. The air interfaces are called UMTS Terrestrial Radio Access (UTRA).[7] All air interface options are part of ITU's IMT-2000. In the currently most popular variant for cellular mobile telephones, W-CDMA (IMT Direct Spread) is used. It is also called the "Uu interface", as it links User Equipment to the UMTS Terrestrial Radio Access Network. Note that the terms W-CDMA, TD-CDMA and TD-SCDMA are misleading.
While they suggest covering just achannel access method(namely a variant ofCDMA), they are actually the common names for the whole air interface standards.[8] W-CDMA (WCDMA; WidebandCode-Division Multiple Access), along with UMTS-FDD, UTRA-FDD, or IMT-2000 CDMA Direct Spread is an air interface standard found in3Gmobile telecommunicationsnetworks. It supports conventional cellular voice, text andMMSservices, but can also carry data at high speeds, allowing mobile operators to deliver higher bandwidth applications including streaming and broadband Internet access.[9] W-CDMA uses theDS-CDMAchannel access method with a pair of 5 MHz wide channels. In contrast, the competingCDMA2000system uses one or more available 1.25 MHz channels for each direction of communication. W-CDMA systems are widely criticized for their large spectrum usage, which delayed deployment in countries that acted relatively slowly in allocating new frequencies specifically for 3G services (such as the United States). The specificfrequency bandsoriginally defined by the UMTS standard are 1885–2025 MHz for the mobile-to-base (uplink) and 2110–2200 MHz for the base-to-mobile (downlink). In the US, 1710–1755 MHz and 2110–2155 MHz are used instead, as the 1900 MHz band was already used.[10]While UMTS2100 is the most widely deployed UMTS band, some countries' UMTS operators use the 850 MHz (900 MHz in Europe) and/or 1900 MHz bands (independently, meaning uplink and downlink are within the same band), notably in the US byAT&T Mobility, New Zealand byTelecom New Zealandon theXT Mobile Networkand in Australia byTelstraon theNext Gnetwork. Some carriers such asT-Mobileuse band numbers to identify the UMTS frequencies. For example, Band I (2100 MHz), Band IV (1700/2100 MHz), and Band V (850 MHz). UMTS-FDD is an acronym for Universal Mobile Telecommunications System (UMTS) –frequency-division duplexing(FDD) and a3GPPstandardizedversion of UMTS networks that makes use of frequency-division duplexing forduplexingover an UMTS Terrestrial Radio Access (UTRA) air interface.[11] W-CDMA is the basis of Japan'sNTT DoCoMo'sFOMAservice and the most-commonly used member of the Universal Mobile Telecommunications System (UMTS) family and sometimes used as a synonym for UMTS.[12]It uses the DS-CDMA channel access method and the FDD duplexing method to achieve higher speeds and support more users compared to most previously usedtime-division multiple access(TDMA) andtime-division duplex(TDD) schemes. While not an evolutionary upgrade on the airside, it uses the samecore networkas the2GGSM networks deployed worldwide, allowingdual-mode mobileoperation along with GSM/EDGE; a feature it shares with other members of the UMTS family. In the late 1990s, W-CDMA was developed by NTT DoCoMo as the air interface for their 3G networkFOMA. Later NTT DoCoMo submitted the specification to theInternational Telecommunication Union(ITU) as a candidate for the international 3G standard known as IMT-2000. The ITU eventually accepted W-CDMA as part of the IMT-2000 family of 3G standards, as an alternative to CDMA2000, EDGE, and the short rangeDECTsystem. Later, W-CDMA was selected as an air interface forUMTS. As NTT DoCoMo did not wait for the finalisation of the 3G Release 99 specification, their network was initially incompatible with UMTS.[13]However, this has been resolved by NTT DoCoMo updating their network. 
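As a convenience for keeping the band numbers mentioned earlier in this section straight, the sketch below builds a small lookup table. It is illustrative only: it covers just the bands named above (Band I, Band IV and Band V) rather than the full 3GPP band table, and the edge frequencies are the commonly cited paired allocations for those bands.

```python
# Illustrative lookup of the UMTS band numbers mentioned in this section.
# Only Bands I, IV and V are listed; the full 3GPP table is much longer.
from dataclasses import dataclass

@dataclass(frozen=True)
class UmtsBand:
    number: str
    uplink_mhz: tuple    # (low, high) in MHz
    downlink_mhz: tuple  # (low, high) in MHz

BANDS = {
    "I":  UmtsBand("I",  (1920, 1980), (2110, 2170)),  # the "2100 MHz" band
    "IV": UmtsBand("IV", (1710, 1755), (2110, 2155)),  # "1700/2100 MHz" (US AWS)
    "V":  UmtsBand("V",  (824, 849),   (869, 894)),    # the "850 MHz" band
}

for b in BANDS.values():
    print(f"Band {b.number}: uplink {b.uplink_mhz[0]}-{b.uplink_mhz[1]} MHz, "
          f"downlink {b.downlink_mhz[0]}-{b.downlink_mhz[1]} MHz")
```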
Code-Division Multiple Access communication networks have been developed by a number of companies over the years, but development of cell-phone networks based on CDMA (prior to W-CDMA) was dominated byQualcomm, the first company to succeed in developing a practical and cost-effective CDMA implementation for consumer cell phones and its earlyIS-95air interface standard has evolved into the current CDMA2000 (IS-856/IS-2000) standard. Qualcomm created an experimental wideband CDMA system called CDMA2000 3x which unified the W-CDMA (3GPP) and CDMA2000 (3GPP2) network technologies into a single design for a worldwide standard air interface. Compatibility with CDMA2000 would have beneficially enabled roaming on existing networks beyond Japan, since Qualcomm CDMA2000 networks are widely deployed, especially in the Americas, with coverage in 58 countries as of 2006[update]. However, divergent requirements resulted in the W-CDMA standard being retained and deployed globally. W-CDMA has then become the dominant technology with 457 commercial networks in 178 countries as of April 2012.[14]Several CDMA2000 operators have even converted their networks to W-CDMA for international roaming compatibility and smooth upgrade path toLTE. Despite incompatibility with existing air-interface standards, late introduction and the high upgrade cost of deploying an all-new transmitter technology, W-CDMA has become the dominant standard. W-CDMA transmits on a pair of 5 MHz-wide radio channels, while CDMA2000 transmits on one or several pairs of 1.25 MHz radio channels. Though W-CDMA does use adirect-sequenceCDMA transmission technique like CDMA2000, W-CDMA is not simply a wideband version of CDMA2000 and differs in many aspects from CDMA2000. From an engineering point of view, W-CDMA provides a different balance of trade-offs between cost, capacity, performance, and density[citation needed]; it also promises to achieve a benefit of reduced cost for video phone handsets. W-CDMA may also be better suited for deployment in the very dense cities of Europe and Asia. However, hurdles remain, andcross-licensingofpatentsbetween Qualcomm and W-CDMA vendors has not eliminated possible patent issues due to the features of W-CDMA which remain covered by Qualcomm patents.[15] W-CDMA has been developed into a complete set of specifications, a detailed protocol that defines how a mobile phone communicates with the tower, how signals are modulated, how datagrams are structured, and system interfaces are specified allowing free competition on technology elements. The world's first commercial W-CDMA service, FOMA, was launched by NTT DoCoMo inJapanin 2001. Elsewhere, W-CDMA deployments are usually marketed under the UMTS brand. W-CDMA has also been adapted for use in satellite communications on the U.S.Mobile User Objective Systemusing geosynchronous satellites in place of cell towers. J-PhoneJapan (onceVodafoneand nowSoftBank Mobile) soon followed by launching their own W-CDMA based service, originally branded "Vodafone Global Standard" and claiming UMTS compatibility. The name of the service was changed to "Vodafone 3G" (now "SoftBank 3G") in December 2004. Beginning in 2003,Hutchison Whampoagradually launched their upstart UMTS networks. Most countries have, since the ITU approved of the 3G mobile service, either "auctioned" the radio frequencies to the company willing to pay the most, or conducted a "beauty contest" – asking the various companies to present what they intend to commit to if awarded the licences. 
This strategy has been criticised for aiming to drain operators' cash to the brink of bankruptcy in order to honour their bids or proposals. Most licences carry a time constraint for the rollout of the service – a certain "coverage" must be achieved by a given date or the licence will be revoked. Vodafone launched several UMTS networks in Europe in February 2004. MobileOne of Singapore commercially launched its 3G (W-CDMA) services in February 2005; launches followed in New Zealand in August 2005 and in Australia in October 2005. AT&T Mobility utilized a UMTS network, with HSPA+, from 2005 until its shutdown in February 2022. In Canada, Rogers launched HSDPA in the Toronto Golden Horseshoe district in March 2007, on W-CDMA at 850/1900 MHz, and planned to launch the service commercially in the top 25 cities by October 2007. TeliaSonera opened W-CDMA service in Finland on October 13, 2004, with speeds up to 384 kbit/s, available only in main cities; pricing was approximately €2/MB.[citation needed] SK Telecom and KTF, the two largest mobile phone service providers in South Korea, each started offering W-CDMA service in December 2003. Due to poor coverage and lack of choice in handsets, the W-CDMA service barely made a dent in the Korean market, which was dominated by CDMA2000. By October 2006 both companies were covering more than 90 cities, while SK Telecom had announced that it would provide nationwide coverage for its W-CDMA network in order to offer SBSM (Single Band Single Mode) handsets by the first half of 2007. KT Freetel would thus cut funding for its CDMA2000 network development to the minimum. In Norway, Telenor introduced W-CDMA in major cities by the end of 2004, while their competitor, NetCom, followed suit a few months later. Both operators have 98% national coverage on EDGE, but Telenor also has parallel WLAN roaming networks on GSM, with which the UMTS service competes; for this reason Telenor dropped support for its WLAN service in Austria in 2006. Maxis Communications and Celcom, two mobile phone service providers in Malaysia, started offering W-CDMA services in 2005. In Sweden, Telia introduced W-CDMA in March 2004. UMTS-TDD, an acronym for Universal Mobile Telecommunications System (UMTS) – time-division duplexing (TDD), is a 3GPP standardized version of UMTS networks that use UTRA-TDD.[11] UTRA-TDD is a UTRA that uses time-division duplexing for duplexing.[11] While a full implementation of UMTS, it is mainly used to provide Internet access in circumstances similar to those where WiMAX might be used.[citation needed] UMTS-TDD is not directly compatible with UMTS-FDD: a device designed to use one standard cannot, unless specifically designed to, work on the other, because of the difference in air interface technologies and frequencies used.[citation needed] It is more formally known as IMT-2000 CDMA-TDD or IMT-2000 Time-Division (IMT-TD).[16][17] The two UMTS air interfaces (UTRAs) for UMTS-TDD are TD-CDMA and TD-SCDMA. Both air interfaces use a combination of two channel access methods, code-division multiple access (CDMA) and time-division multiple access (TDMA): the frequency band is divided into time slots (TDMA), which are further divided into channels using CDMA spreading codes. These air interfaces are classified as TDD, because time slots can be allocated to either uplink or downlink traffic. TD-CDMA, an acronym for Time-Division-Code-Division Multiple Access, is a channel-access method based on using spread-spectrum multiple access (CDMA) across multiple time slots (TDMA).
TD-CDMA is the channel access method for UTRA-TDD HCR, which is an acronym for UMTS Terrestrial Radio Access – Time Division Duplex High Chip Rate.[16] UMTS-TDD's air interfaces that use the TD-CDMA channel access technique are standardized as UTRA-TDD HCR, which uses increments of 5 MHz of spectrum, each slice divided into 10 ms frames containing fifteen time slots (1500 per second).[16] The time slots (TS) are allocated in fixed percentage for downlink and uplink. TD-CDMA is used to multiplex streams from or to multiple transceivers. Unlike W-CDMA, it does not need separate frequency bands for up- and downstream, allowing deployment in tight frequency bands.[18] TD-CDMA is a part of IMT-2000, defined as IMT-TD Time-Division (IMT CDMA TDD), and is one of the three UMTS air interfaces (UTRAs), as standardized by the 3GPP in UTRA-TDD HCR. UTRA-TDD HCR is closely related to W-CDMA, and provides the same types of channels where possible. UMTS's HSDPA/HSUPA enhancements are also implemented under TD-CDMA.[19] In the United States, the technology has been used for public safety and government use in New York City and a few other areas.[needs update][20] In Japan, IPMobile planned to provide TD-CDMA service in 2006, but it was delayed, changed to TD-SCDMA, and went bankrupt before the service officially started. Time-Division Synchronous Code-Division Multiple Access (TD-SCDMA) or UTRA TDD 1.28 Mcps low chip rate (UTRA-TDD LCR)[17][8] is an air interface[17] found in UMTS mobile telecommunications networks in China as an alternative to W-CDMA. TD-SCDMA uses the TDMA channel access method combined with an adaptive synchronous CDMA component[17] on 1.6 MHz slices of spectrum, allowing deployment in even tighter frequency bands than TD-CDMA. It is standardized by the 3GPP and also referred to as "UTRA-TDD LCR". However, the main incentive for development of this Chinese-developed standard was avoiding or reducing the license fees that have to be paid to non-Chinese patent owners. Unlike the other air interfaces, TD-SCDMA was not part of UMTS from the beginning but has been added in Release 4 of the specification. Like TD-CDMA, TD-SCDMA is known as IMT CDMA TDD within IMT-2000. The term "TD-SCDMA" is misleading. While it suggests covering only a channel access method, it is actually the common name for the whole air interface specification.[8] TD-SCDMA / UMTS-TDD (LCR) networks are incompatible with W-CDMA / UMTS-FDD and TD-CDMA / UMTS-TDD (HCR) networks. TD-SCDMA was developed in the People's Republic of China by the Chinese Academy of Telecommunications Technology (CATT), Datang Telecom and Siemens in an attempt to avoid dependence on Western technology. This is likely primarily for practical reasons, since other 3G formats require the payment of patent fees to a large number of Western patent holders. TD-SCDMA proponents also claim it is better suited for densely populated areas.[17] Further, it is supposed to cover all usage scenarios, whereas W-CDMA is optimised for symmetric traffic and macro cells, while TD-CDMA is best used in low mobility scenarios within micro or pico cells.[17] TD-SCDMA is based on spread-spectrum technology, which makes it unlikely that it will be able to completely escape the payment of license fees to Western patent holders.
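The UTRA-TDD HCR frame structure quoted earlier in this section (5 MHz carriers, 10 ms frames, fifteen time slots per frame, slots assigned to either direction) lends itself to a quick sanity check. The sketch below is illustrative only; the 9/6 downlink–uplink split is an invented example, not a figure from the standard.

```python
# Illustrative arithmetic for the TD-CDMA (UTRA-TDD HCR) frame structure above:
# 10 ms frames, 15 time slots per frame, each slot usable for downlink or uplink.
FRAME_MS = 10.0
SLOTS_PER_FRAME = 15

slot_ms = FRAME_MS / SLOTS_PER_FRAME                      # ~0.667 ms per slot
slots_per_second = (1000 / FRAME_MS) * SLOTS_PER_FRAME    # 1500, as quoted

# Hypothetical asymmetric allocation (example values only):
downlink_slots, uplink_slots = 9, 6
print(f"slot duration: {slot_ms:.3f} ms, slots per second: {slots_per_second:.0f}")
print(f"downlink share: {downlink_slots / SLOTS_PER_FRAME:.0%}, "
      f"uplink share: {uplink_slots / SLOTS_PER_FRAME:.0%}")
```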
The launch of a national TD-SCDMA network was initially projected by 2005[21]but only reached large scale commercial trials with 60,000 users across eight cities in 2008.[22] On January 7, 2009, China granted a TD-SCDMA 3G licence toChina Mobile.[23] On September 21, 2009, China Mobile officially announced that it had 1,327,000 TD-SCDMA subscribers as of the end of August, 2009. TD-SCDMA is not commonly used outside of China.[24] TD-SCDMA uses TDD, in contrast to the FDD scheme used byW-CDMA. By dynamically adjusting the number of timeslots used for downlink anduplink, the system can more easily accommodate asymmetric traffic with different data rate requirements on downlink and uplink than FDD schemes. Since it does not require paired spectrum for downlink and uplink, spectrum allocation flexibility is also increased. Using the same carrier frequency for uplink and downlink also means that the channel condition is the same on both directions, and thebase stationcan deduce the downlink channel information from uplink channel estimates, which is helpful to the application ofbeamformingtechniques. TD-SCDMA also uses TDMA in addition to the CDMA used in WCDMA. This reduces the number of users in each timeslot, which reduces the implementation complexity ofmultiuser detectionand beamforming schemes, but the non-continuous transmission also reducescoverage(because of the higher peak power needed), mobility (because of lowerpower controlfrequency) and complicatesradio resource managementalgorithms. The "S" in TD-SCDMA stands for "synchronous", which means that uplink signals are synchronized at the base station receiver, achieved by continuous timing adjustments. This reduces theinterferencebetween users of the same timeslot using different codes by improving theorthogonalitybetween the codes, therefore increasing system capacity, at the cost of some hardware complexity in achieving uplink synchronization. On January 20, 2006,Ministry of Information Industryof the People's Republic of China formally announced that TD-SCDMA is the country's standard of 3G mobile telecommunication. On February 15, 2006, a timeline for deployment of the network in China was announced, stating pre-commercial trials would take place starting after completion of a number of test networks in select cities. These trials ran from March to October, 2006, but the results were apparently unsatisfactory. In early 2007, the Chinese government instructed the dominant cellular carrier, China Mobile, to build commercial trial networks in eight cities, and the two fixed-line carriers,China TelecomandChina Netcom, to build one each in two other cities. Construction of these trial networks was scheduled to finish during the fourth quarter of 2007, but delays meant that construction was not complete until early 2008. The standard has been adopted by 3GPP since Rel-4, known as "UTRA TDD 1.28 Mcps Option".[17] On March 28, 2008, China Mobile Group announced TD-SCDMA "commercial trials" for 60,000 test users in eight cities from April 1, 2008. Networks using other 3G standards (WCDMA and CDMA2000 EV/DO) had still not been launched in China, as these were delayed until TD-SCDMA was ready for commercial launch. In January 2009, theMinistry of Industry and Information Technology(MIIT) in China took the unusual step of assigning licences for 3 different third-generation mobile phone standards to three carriers in a long-awaited step that is expected to prompt $41 billion in spending on new equipment. 
The Chinese-developed standard, TD-SCDMA, was assigned to China Mobile, the world's biggest phone carrier by subscribers. That appeared to be an effort to make sure the new system had the financial and technical backing to succeed. Licences for two existing 3G standards, W-CDMA and CDMA2000 1xEV-DO, were assigned to China Unicom and China Telecom, respectively. Third-generation, or 3G, technology supports Web surfing, wireless video and other services, and the start of service was expected to spur new revenue growth. The technical split by MIIT has hampered the performance of China Mobile in the 3G market, with users and China Mobile engineers alike pointing to the lack of suitable handsets to use on the network.[25] Deployment of base stations has also been slow, resulting in a lack of improvement of service for users.[26] The network connection itself has consistently been slower than that of the other two carriers, leading to a sharp decline in market share. By 2011 China Mobile had already moved its focus onto TD-LTE.[27][28] Gradual closures of TD-SCDMA stations started in 2016.[29][30] The following is a list of mobile telecommunications networks using third-generation TD-SCDMA / UMTS-TDD (LCR) technology. In Europe, CEPT allocated the 2010–2020 MHz range for a variant of UMTS-TDD designed for unlicensed, self-provided use.[33] Some telecom groups and jurisdictions have proposed withdrawing this service in favour of licensed UMTS-TDD,[34] due to lack of demand and lack of development of a UMTS-TDD air interface technology suitable for deployment in this band. Ordinary UMTS uses UTRA-FDD as an air interface and is known as UMTS-FDD. UMTS-FDD uses W-CDMA for multiple access and frequency-division duplex for duplexing, meaning that the uplink and downlink transmit on different frequencies. UMTS is usually transmitted on frequencies assigned for 1G, 2G, or 3G mobile telephone service in the countries of operation. UMTS-TDD uses time-division duplexing, allowing the uplink and downlink to share the same spectrum. This allows the operator to more flexibly divide the usage of available spectrum according to traffic patterns. For ordinary phone service, one would expect the uplink and downlink to carry approximately equal amounts of data (because every phone call needs a voice transmission in either direction), but Internet-oriented traffic is more frequently one-way. For example, when browsing a website, the user will send commands, which are short, to the server, but the server will send whole files, generally larger than those commands, in response. UMTS-TDD tends to be allocated frequency intended for mobile/wireless Internet services rather than used on existing cellular frequencies. This is, in part, because TDD duplexing is not normally allowed on cellular, PCS/PCN, and 3G frequencies. TDD technologies open up the usage of left-over unpaired spectrum. Europe-wide, several bands are provided either specifically for UMTS-TDD or for similar technologies. These are between 1900 MHz and 1920 MHz and between 2010 MHz and 2025 MHz. In several countries the 2500–2690 MHz band (also known as MMDS in the USA) has been used for UMTS-TDD deployments. Additionally, spectrum around the 3.5 GHz range has been allocated in some countries, notably Britain, in a technology-neutral environment.
In the Czech Republic UMTS-TDD is also used in a frequency range around 872 MHz.[35] UMTS-TDD has been deployed for public and/or private networks in at least nineteen countries around the world, with live systems in, amongst other countries, Australia, the Czech Republic, France, Germany, Japan, New Zealand, Botswana, South Africa, the UK, and the USA. Deployments in the US thus far have been limited. It has been selected for a public safety support network used by emergency responders in New York,[36] but outside of some experimental systems, notably one from Nextel, thus far the WiMAX standard appears to have gained greater traction as a general mobile Internet access system. A variety of Internet-access systems exist which provide broadband-speed access to the net. These include WiMAX and HIPERMAN. UMTS-TDD has the advantages of being able to use an operator's existing UMTS/GSM infrastructure, should it have one, and of including UMTS modes optimized for circuit switching should, for example, the operator want to offer telephone service. UMTS-TDD's performance is also more consistent. However, UMTS-TDD deployers often have regulatory problems with taking advantage of some of the services UMTS compatibility provides. For example, the UMTS-TDD spectrum in the UK cannot be used to provide telephone service, though the regulator OFCOM is discussing the possibility of allowing it at some point in the future. Few operators considering UMTS-TDD have existing UMTS/GSM infrastructure. Additionally, the WiMAX and HIPERMAN systems provide significantly larger bandwidths when the mobile station is near the tower. Like most mobile Internet access systems, many users who might otherwise choose UMTS-TDD will find their needs covered by the ad hoc collection of unconnected Wi-Fi access points at many restaurants and transportation hubs, and/or by Internet access already provided by their mobile phone operator. By comparison, UMTS-TDD (and systems like WiMAX) offers mobile, and more consistent, access than the former, and generally faster access than the latter. UMTS also specifies the Universal Terrestrial Radio Access Network (UTRAN), which is composed of multiple base stations, possibly using different terrestrial air interface standards and frequency bands. UMTS and GSM/EDGE can share a Core Network (CN), making UTRAN an alternative radio access network to GERAN (GSM/EDGE RAN), and allowing (mostly) transparent switching between the RANs according to available coverage and service needs. Because of that, UMTS's and GSM/EDGE's radio access networks are sometimes collectively referred to as UTRAN/GERAN. UMTS networks are often combined with GSM/EDGE, the latter of which is also a part of IMT-2000. The UE (User Equipment) interface of the RAN (Radio Access Network) primarily consists of the RRC (Radio Resource Control), PDCP (Packet Data Convergence Protocol), RLC (Radio Link Control) and MAC (Media Access Control) protocols. The RRC protocol handles connection establishment, measurements, radio bearer services, security and handover decisions. The RLC protocol operates in three modes – Transparent Mode (TM), Unacknowledged Mode (UM) and Acknowledged Mode (AM). The functionality of the AM entity resembles TCP operation, whereas UM operation resembles UDP operation. In TM mode, data is sent to lower layers without adding any header to the SDU of higher layers. MAC handles the scheduling of data on the air interface depending on parameters configured by the higher layer (RRC).
The set of properties related to data transmission is called a Radio Bearer (RB). This set of properties decides the maximum allowed data in a TTI (Transmission Time Interval). An RB includes RLC information and RB mapping. RB mapping decides the mapping between RB <-> logical channel <-> transport channel. Signaling messages are sent on Signaling Radio Bearers (SRBs) and data packets (either CS or PS) are sent on data RBs. RRC and NAS messages go on SRBs. Security includes two procedures: integrity and ciphering. Integrity validates the source of messages and also makes sure that no one (a third/unknown party) on the radio interface has modified the messages. Ciphering ensures that no one can listen to user data on the air interface. Both integrity and ciphering are applied for SRBs, whereas only ciphering is applied for data RBs. With Mobile Application Part, UMTS uses the same core network standard as GSM/EDGE. This allows a simple migration for existing GSM operators. However, the migration path to UMTS is still costly: while much of the core infrastructure is shared with GSM, the cost of obtaining new spectrum licenses and overlaying UMTS at existing towers is high. The CN can be connected to various backbone networks, such as the Internet or an Integrated Services Digital Network (ISDN) telephone network. UMTS (and GERAN) include the three lowest layers of the OSI model. The network layer (OSI 3) includes the Radio Resource Management protocol (RRM) that manages the bearer channels between the mobile terminals and the fixed network, including the handovers. A UARFCN (abbreviation for UTRA Absolute Radio Frequency Channel Number, where UTRA stands for UMTS Terrestrial Radio Access) is used to identify a frequency in the UMTS frequency bands. Typically, the channel number is derived from the frequency in MHz through the formula Channel Number = Frequency × 5. However, this is only able to represent channels that are centered on a multiple of 200 kHz, which do not align with licensing in North America. 3GPP added several special values for the common North American channels. Over 130 licenses had been awarded to operators worldwide, as of December 2004, specifying W-CDMA radio access technology that builds on GSM. In Europe, the license process occurred at the tail end of the technology bubble, and the auction mechanisms for allocation set up in some countries resulted in some extremely high prices being paid for the original 2100 MHz licenses, notably in the UK and Germany. In Germany, bidders paid a total of €50.8 billion for six licenses, two of which were subsequently abandoned and written off by their purchasers (Mobilcom and the Sonera/Telefónica consortium). It has been suggested that these huge license fees have the character of a very large tax paid on future income expected many years down the road. In any event, the high prices paid put some European telecom operators close to bankruptcy (most notably KPN). Over the last few years some operators have written off some or all of the license costs. Between 2007 and 2009, all three Finnish carriers began to use 900 MHz UMTS in a shared arrangement with their surrounding 2G GSM base stations for rural area coverage, a trend that is expected to expand over Europe in the next 1–3 years.[needs update] The 2100 MHz band (downlink around 2100 MHz and uplink around 1900 MHz) allocated for UMTS in Europe and most of Asia is already used in North America. The 1900 MHz range is used for 2G (PCS) services, and the 2100 MHz range is used for satellite communications.
Regulators have, however, freed up some of the 2100 MHz range for 3G services, together with a different range around 1700 MHz for the uplink.[needs update] AT&T Wireless launched UMTS services in the United States by the end of 2004 strictly using the existing 1900 MHz spectrum allocated for 2G PCS services. Cingular acquired AT&T Wireless in 2004 and has since then launched UMTS in select US cities. Cingular renamed itself AT&T Mobility and rolled out[37]some cities with a UMTS network at 850 MHz to enhance its existing UMTS network at 1900 MHz and now offers subscribers a number of dual-band UMTS 850/1900 phones. T-Mobile's rollout of UMTS in the US was originally focused on the 1700 MHz band. However, T-Mobile has been moving users from 1700 MHz to 1900 MHz (PCS) in order to reallocate the spectrum to 4GLTEservices.[38] In Canada, UMTS coverage is being provided on the 850 MHz and 1900 MHz bands on the Rogers and Bell-Telus networks. Bell and Telus share the network. Recently, new providersWind Mobile,MobilicityandVideotronhave begun operations in the 1700 MHz band. In 2008, Australian telco Telstra replaced its existing CDMA network with a national UMTS-based 3G network, branded asNextG, operating in the 850 MHz band. Telstra currently provides UMTS service on this network, and also on the 2100 MHz UMTS network, through a co-ownership of the owning and administrating company 3GIS. This company is also co-owned byHutchison 3G Australia, and this is the primary network used by their customers.Optusis currently rolling out a 3G network operating on the 2100 MHz band in cities and most large towns, and the 900 MHz band in regional areas.Vodafoneis also building a 3G network using the 900 MHz band. In India,BSNLhas started its 3G services since October 2009, beginning with the larger cities and then expanding over to smaller cities. The 850 MHz and 900 MHz bands provide greater coverage compared to equivalent 1700/1900/2100 MHz networks, and are best suited to regional areas where greater distances separate base station and subscriber. Carriers in South America are now also rolling out 850 MHz networks. UMTS phones (and data cards) are highly portable – they have been designed to roam easily onto other UMTS networks (if the providers have roaming agreements in place). In addition, almost all UMTS phones are UMTS/GSM dual-mode devices, so if a UMTS phone travels outside of UMTS coverage during a call the call may be transparently handed off to available GSM coverage. Roaming charges are usually significantly higher than regular usage charges. Most UMTS licensees consider ubiquitous, transparent globalroamingan important issue. To enable a high degree of interoperability, UMTS phones usually support several different frequencies in addition to their GSM fallback. Different countries support different UMTS frequency bands – Europe initially used 2100 MHz while the most carriers in the USA use 850 MHz and 1900 MHz. T-Mobile has launched a network in the US operating at 1700 MHz (uplink) /2100 MHz (downlink), and these bands also have been adopted elsewhere in the US and in Canada and Latin America. A UMTS phone and network must support a common frequency to work together. Because of the frequencies used, early models of UMTS phones designated for the United States will likely not be operable elsewhere and vice versa. There are now 11 different frequency combinations used around the world – including frequencies formerly used solely for 2G services. 
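The UARFCN rule quoted earlier in this section (Channel Number = Frequency × 5, i.e. the carrier frequency counted in 200 kHz steps) can be sketched as below. This is illustrative only; the special channel numbers 3GPP defined for North American carriers that do not sit on the 200 kHz raster are not modelled.

```python
# Illustrative sketch of the general UARFCN rule quoted earlier:
# channel number = carrier frequency in MHz * 5 (200 kHz raster).
def uarfcn_from_mhz(freq_mhz: float) -> int:
    channel = freq_mhz * 5
    if abs(channel - round(channel)) > 1e-9:
        raise ValueError("carrier is not on the 200 kHz raster; 3GPP assigns "
                         "special channel numbers for such carriers")
    return round(channel)

def mhz_from_uarfcn(uarfcn: int) -> float:
    return uarfcn / 5

print(uarfcn_from_mhz(2112.8))  # 10564, a downlink carrier in the 2100 MHz band
print(mhz_from_uarfcn(10564))   # 2112.8
```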
UMTS phones can use aUniversal Subscriber Identity Module, USIM (based on GSM'sSIM card) and also work (including UMTS services) with GSM SIM cards. This is a global standard of identification, and enables a network to identify and authenticate the (U)SIM in the phone. Roaming agreements between networks allow for calls to a customer to be redirected to them while roaming and determine the services (and prices) available to the user. In addition to user subscriber information and authentication information, the (U)SIM provides storage space for phone book contact. Handsets can store their data on their own memory or on the (U)SIM card (which is usually more limited in its phone book contact information). A (U)SIM can be moved to another UMTS or GSM phone, and the phone will take on the user details of the (U)SIM, meaning it is the (U)SIM (not the phone) which determines the phone number of the phone and the billing for calls made from the phone. Japan was the first country to adopt 3G technologies, and since they had not used GSM previously they had no need to build GSM compatibility into their handsets and their 3G handsets were smaller than those available elsewhere. In 2002, NTT DoCoMo's FOMA 3G network was the first commercial UMTS network – using a pre-release specification,[39]it was initially incompatible with the UMTS standard at the radio level but used standard USIM cards, meaning USIM card based roaming was possible (transferring the USIM card into a UMTS or GSM phone when travelling). Both NTT DoCoMo and SoftBank Mobile (which launched 3G in December 2002) now use standard UMTS. All of the major 2G phone manufacturers (that are still in business) are now manufacturers of 3G phones. The early 3G handsets and modems were specific to the frequencies required in their country, which meant they could only roam to other countries on the same 3G frequency (though they can fall back to the older GSM standard). Canada and USA have a common share of frequencies, as do most European countries. The article UMTS frequency bands is an overview of UMTS network frequencies around the world. Using acellular router, PCMCIA or USB card, customers are able to access 3G broadband services, regardless of their choice of computer (such as atablet PCor aPDA). Some softwareinstalls itselffrom the modem, so that in some cases absolutely no knowledge of technology is required to getonlinein moments. Using a phone that supports 3G and Bluetooth 2.0, multiple Bluetooth-capable laptops can be connected to the Internet. Some smartphones can also act as a mobileWLAN access point. There are very few 3G phones or modems available supporting all 3G frequencies (UMTS850/900/1700/1900/2100 MHz). In 2010, Nokia released a range of phones withPentaband3G coverage, including theN8andE7. Many other phones are offering more than one band which still enables extensive roaming. For example, Apple'siPhone 4contains a quadband chipset operating on 850/900/1900/2100 MHz, allowing usage in the majority of countries where UMTS-FDD is deployed. The main competitor to UMTS is CDMA2000 (IMT-MC), which is developed by the3GPP2. Unlike UMTS, CDMA2000 is an evolutionary upgrade to an existing 2G standard, cdmaOne, and is able to operate within the same frequency allocations. This and CDMA2000's narrower bandwidth requirements make it easier to deploy in existing spectra. In some, but not all, cases, existing GSM operators only have enough spectrum to implement either UMTS or GSM, not both. 
For example, in the US D, E, and F PCS spectrum blocks, the amount of spectrum available is 5 MHz in each direction. A standard UMTS system would saturate that spectrum. Where CDMA2000 is deployed, it usually co-exists with UMTS. In many markets however, the co-existence issue is of little relevance, as legislative hurdles exist to co-deploying two standards in the same licensed slice of spectrum. Another competitor to UMTS isEDGE(IMT-SC), which is an evolutionary upgrade to the 2G GSM system, leveraging existing GSM spectrums. It is also much easier, quicker, and considerably cheaper for wireless carriers to "bolt-on" EDGE functionality by upgrading their existing GSM transmission hardware to support EDGE rather than having to install almost all brand-new equipment to deliver UMTS. However, being developed by 3GPP just as UMTS, EDGE is not a true competitor. Instead, it is used as a temporary solution preceding UMTS roll-out or as a complement for rural areas. This is facilitated by the fact that GSM/EDGE and UMTS specifications are jointly developed and rely on the same core network, allowing dual-mode operation includingvertical handovers. China'sTD-SCDMAstandard is often seen as a competitor, too. TD-SCDMA has been added to UMTS' Release 4 as UTRA-TDD 1.28 Mcps Low Chip Rate (UTRA-TDD LCR). UnlikeTD-CDMA(UTRA-TDD 3.84 Mcps High Chip Rate, UTRA-TDD HCR) which complements W-CDMA (UTRA-FDD), it is suitable for both micro and macrocells. However, the lack of vendors' support is preventing it from being a real competitor. While DECT is technically capable of competing with UMTS and other cellular networks in densely populated, urban areas, it has only been deployed for domestic cordless phones and private in-house networks. All of these competitors have been accepted by ITU as part of the IMT-2000 family of 3G standards, along with UMTS-FDD. On the Internet access side, competing systems include WiMAX andFlash-OFDM. From a GSM/GPRS network, the following network elements can be reused: From a GSM/GPRS communication radio network, the following elements cannot be reused: They can remain in the network and be used in dual network operation where 2G and 3G networks co-exist while network migration and new 3G terminals become available for use in the network. The UMTS network introduces new network elements that function as specified by 3GPP: The functionality of MSC changes when going to UMTS. In a GSM system the MSC handles all the circuit switched operations like connecting A- and B-subscriber through the network. In UMTS the Media gateway (MGW) takes care of data transfer in circuit switched networks. MSC controls MGW operations. Some countries, including the United States, have allocated spectrum differently from theITUrecommendations, so that the standard bands most commonly used for UMTS (UMTS-2100) have not been available.[citation needed]In those countries, alternative bands are used, preventing the interoperability of existing UMTS-2100 equipment, and requiring the design and manufacture of different equipment for the use in these markets. As is the case with GSM900 today[when?], standard UMTS 2100 MHz equipment will not work in those markets. However, it appears as though UMTS is not suffering as much from handset band compatibility issues as GSM did, as many UMTS handsets are multi-band in both UMTS and GSM modes. 
Penta-band (850, 900, 1700, 2100, and 1900 MHz bands), quad-band GSM (850, 900, 1800, and 1900 MHz bands) and tri-band UMTS (850, 1900, and 2100 MHz bands) handsets are becoming more commonplace.[40] In its early days[when?], UMTS had problems in many countries: Overweight handsets with poor battery life were first to arrive on a market highly sensitive to weight and form factor.[citation needed]The Motorola A830, a debut handset on Hutchison's 3 network, weighed more than 200 grams and even featured a detachable camera to reduce handset weight. Another significant issue involved call reliability, related to problems with handover from UMTS to GSM. Customers found their connections being dropped as handovers were possible only in one direction (UMTS → GSM), with the handset only changing back to UMTS after hanging up. In most networks around the world this is no longer an issue.[citation needed] Compared to GSM, UMTS networks initially required a higherbase stationdensity. For fully-fledged UMTS incorporatingvideo on demandfeatures, one base station needed to be set up every 1–1.5 km (0.62–0.93 mi). This was the case when only the 2100 MHz band was being used, however with the growing use of lower-frequency bands (such as 850 and 900 MHz) this is no longer so. This has led to increasing rollout of the lower-band networks by operators since 2006.[citation needed] Even with current technologies and low-band UMTS, telephony and data over UMTS requires more power than on comparable GSM networks.Apple Inc.cited[41]UMTS power consumption as the reason that the first generationiPhoneonly supported EDGE. Their release of the iPhone 3G quotes talk time on UMTS as half that available when the handset is set to use GSM. Other manufacturers indicate different battery lifetime for UMTS mode compared to GSM mode as well. As battery and network technology improve, this issue is diminishing. As early as 2008, it was known that carrier networks can be used to surreptitiously gather user location information.[42]In August 2014, theWashington Postreported on widespread marketing of surveillance systems usingSignalling System No. 7(SS7) protocols to locate callers anywhere in the world.[42] In December 2014, news broke that SS7's very own functions can be repurposed for surveillance, because of its relaxed security, in order to listen to calls in real time or to record encrypted calls and texts for later decryption, or to defraud users and cellular carriers.[43] Deutsche Telekomand Vodafone declared the same day that they had fixed gaps in their networks, but that the problem is global and can only be fixed with a telecommunication system-wide solution.[44] The evolution of UMTS progresses according to planned releases. Each release is designed to introduce new features and improve upon existing ones.
https://en.wikipedia.org/wiki/UMTS-TDD
Radio jammingis the deliberate blocking of or interference withwireless communications.[1][2]In some cases, jammers work by the transmission ofradio signalsthat disrupttelecommunicationsby decreasing thesignal-to-noise ratio.[3] The concept can be used inwireless data networksto disrupt information flow.[4]It is a common form of censorship in totalitarian countries, in order to prevent foreign radio stations in border areas from reaching the country.[3] Jamming is usually distinguished from interference that can occur due to device malfunctions or other accidental circumstances. Devices that simply cause interference are regulated differently. Unintentional "jamming" occurs when an operator transmits on a busyfrequencywithout first checking whether it is in use, or without being able to hear stations using the frequency. Another form of unintentional jamming occurs when equipment accidentallyradiatesa signal, such as acable televisionplant that accidentally emits on an aircraft emergency frequency. Originally the terms were used interchangeably but nowadays most radio users use the term "jamming" to describe thedeliberateuse of radio noise or signals in an attempt to disrupt communications (or prevent listening to broadcasts) whereas the term "interference" is used to describeunintentionalforms of disruption (which are far more common). However, the distinction is still not universally applied. For inadvertent disruptions, seeelectromagnetic compatibility. Intentional communications jamming is usually aimed at radio signals to disrupt control of a battle. Atransmitter, tuned to the same frequency as the opponents' receiving equipment and with the same type ofmodulation, can, with enough power, override any signal at thereceiver. Digital wireless jamming for signals such asBluetoothandWiFiis possible with very low power. The most common types of this form of signal jamming arerandom noise, random pulse, stepped tones, warbler, random keyed modulatedCW, tone, rotary, pulse, spark, recorded sounds, gulls, and sweep-through. These can be divided into two groups: obvious and subtle. Obvious jamming is easy to detect because it can be heard on the receiving equipment. It is usually some type of noise, such as stepped tones (bagpipes), random-keyed code, pulses, music (often distorted), erratically warbling tones, highly distorted speech, random noise (hiss), and recorded sounds. Various combinations of these methods may be used, often accompanied by regularMorseidentification signals to enable individual transmitters to be identified in order to assess their effectiveness. For example, China, which did and does use jamming extensively, plays a loop oftraditional Chinese musicwhile it is jamming channels (cf.Attempted jamming of numbers stations). The purpose of this type of jamming is to block reception of transmitted signals and to cause a nuisance to the receiving operator. One early Soviet attempt at jamming Western broadcasters used the noise from thediesel generatorthat was powering the jamming transmitter. Subtle jamming is jamming during which no sound is heard on the receiving equipment. The radio does not receive incoming signals; yet everything seems superficially normal to the operator. These are often technical attacks on modern equipment, such as "squelch capture". Thanks to the FMcapture effect,frequency modulatedbroadcasts may be jammed, unnoticed, by a simple unmodulated carrier. 
The receiver locks on to the larger carrier signal, and hence will ignore the FM signal that carries the information. Digital signals use complex modulation techniques, such as QPSK. These signals are very robust in the presence of interfering signals. However, the signal relies on handshaking between the transmitter and receiver to identify and determine security settings and the method of high-level transmission. If the jamming device sends initiation data packets, the receiver will begin its state machine to establish two-way data transmission. A jammer will loop back to the beginning instead of completing the handshake. This method jams the receiver in an infinite loop where it keeps trying to initiate a connection but never completes it, which effectively blocks all legitimate communication. Bluetooth and other consumer radio protocols such as WiFi have built-in detectors, so that they transmit only when the channel is free. Simple continuous transmission on a given channel will continuously prevent a transmitter from transmitting, hence jamming the receiver from ever hearing from its intended transmitter. Other jammers work by analysing the packet headers and, depending on the source or destination, selectively transmitting over the end of the message, corrupting the packet. During World War II, ground radio operators would attempt to mislead pilots by false instructions in their own language, in what was more precisely a spoofing attack than jamming. Radar jamming is also important to disrupt the use of radar used to guide an enemy's missiles or aircraft. Modern secure communication techniques use such methods as spread-spectrum modulation to resist the deleterious effects of jamming. Jamming of foreign radio broadcast stations has often been used in wartime (and during periods of tense international relations) to prevent or deter citizens from listening to broadcasts from enemy countries. However, such jamming is usually of limited effectiveness because the affected stations usually change frequencies, put on additional frequencies and/or increase transmission power. Jamming has also occasionally been used by the governments of Germany (during World War II),[6] Israel,[7] Cuba, Iraq, Iran (during the Iran–Iraq War), China, North and South Korea and several Latin American countries, as well as by Ireland against pirate radio stations such as Radio Nova. The United Kingdom government used two coordinated, separately located transmitters to jam the offshore radio ship Radio North Sea International off the coast of Britain in 1970.[8] In occupied Europe the Nazis attempted to jam broadcasts to the continent from the BBC and other allied stations. Along with increasing transmitter power and adding extra frequencies, attempts were made to counteract the jamming by dropping leaflets over cities instructing listeners to construct a directional loop aerial that would enable them to hear the stations through the jamming. In the Netherlands such aerials were nicknamed "moffenzeef" (English: "kraut sieve").[9][10] During the Continuation War, after discovering that the mines the retreating Soviet forces had scattered throughout the city of Viipuri were radio-triggered rather than timer- or pressure-triggered, the Finnish forces played Vesterinen's recording of Säkkijärven Polkka without any pauses from September 4, 1941 to February 2, 1942; to demine the city, they needed to block the Soviets from activating the mines with the correct radio signal.
The Soviets tried to trigger the mines by changing frequency; the mines had been set up to be able to be triggered by three different frequencies. The Finns countered this by playing Säkkijärven Polkka on all frequencies. During the Battle of the Beams, Britain jammed navigation signals used by German aircraft, while the Soviets attempted to do likewise to American aircraft during the Berlin Airlift. Since the Soviet Union started jamming Western radio broadcasts to the Soviet Union in 1948, the primary targets have been the BBC External Broadcasting Services, Voice of America (VOA) and especially RFE/RL. Western nations had allowed jamming prior to World War II,[dubious–discuss] but in the post-war era the Western view has been that jamming violates the freedom of information, while the Soviet view has been that under the international law principle of national sovereignty jamming is an acceptable response to foreign radio broadcasts.[11] During much of the Cold War, Soviet (and Eastern Bloc) jamming of some Western broadcasters led to a "power race" in which broadcasters and jammers alike repeatedly increased their transmission power, utilised highly directional antennas and added extra frequencies (known as "barrage" or "frequency diversity" broadcasting) to the already heavily overcrowded shortwave bands, to such an extent that many broadcasters not directly targeted by the jammers (including pro-Soviet stations) suffered from the rising levels of noise and interference.[12][13] There were also periods when China and the Soviet Union jammed each other's programmes. The Soviet Union also jammed Albanian programmes at times. Some parts of the world were more affected by these broadcasting practices than others. Meanwhile, some listeners in the Soviet Union and Eastern Bloc devised ingenious methods (such as homemade directional loop antennas) to hear the Western stations through the noise. Because radio propagation on shortwave can be difficult to predict reliably, listeners sometimes found that there were days/times when the jamming was particularly ineffective because radio fading (due to atmospheric conditions) was affecting the jamming signals but favouring the broadcasts (a phenomenon sometimes dubbed "twilight immunity"). On other days, of course, the reverse was the case. There were also times when jamming transmitters were (temporarily) off air due to breakdowns or maintenance. The Soviets (and most of their Eastern Bloc allies) used two types of jamming transmitter. Skywave jamming covered a large area but, for the reasons described, was of limited effectiveness. Groundwave jamming was more effective, but only over a small area, and was thus used only in or near major cities throughout the Eastern Bloc. Both types of jamming were less effective on higher shortwave frequencies (above 15 MHz); however, many radios sold on the domestic market in the Soviet Union did not tune these higher bands.[14] Skywave jamming was usually accompanied by Morse signals enabling (coded) identification of the jamming station, so that Soviet monitoring posts could assess the effectiveness of each station. In 1987, after decades of generally refusing to acknowledge that such jamming was even taking place, the Soviets finally stopped jamming Western broadcasts, with the exception of RFE/RL, which continued to be jammed for several months into 1988. Previously there had been periods when some individual Eastern Bloc countries refrained from jamming Western broadcasts, but this varied widely by time and country.
In general outside of the Soviet Union itselfBulgariawas one of the most prolific operators of jamming transmitters in the Eastern bloc withEast GermanyandYugoslaviathe least. Whilewestern governmentsmay have occasionally considered jamming broadcasts from Eastern Bloc stations, it was generally accepted that doing so would be a pointless exercise. Ownership of shortwave radios was less common in western countries than in the Soviet Union where, due to the vast physical size of the country, manydomestic stationswere relayed on shortwave as it was the only practical way to cover remote areas. Additionally, western governments were generally less afraid of intellectual competition from the Eastern Bloc. InFrancoist Spainthe dictatorship jammed for decadesRadio España Independiente, the radio station of theCommunist Party of Spainwhich broadcast fromMoscow(1941–1955),Bucharest(1955–1977) and East Berlin. It was the most important clandestine broadcaster in Spain and the regime considered it a threat, since it allowed its citizens to skip the censorship of the local media.[15]Broadcasts from East Germany to South Africa were also jammed. In Latin America there were instances of communist radio stations such asRadio Venceremosbeing jammed, allegedly by theCIA, while there were short lived instances where Britain jammed some Egyptian (during theSuez Crisis),Greek(prior toCyprusgaining independence) andRhodesianstations.[16]During the early years of the Northern Ireland troubles the British army regularly jammed broadcasts from both Republican and Loyalist paramilitary groups. In 2002, China acquired standard short-wave radio-broadcasting equipment designed for general public radio-broadcasting and technical support from Thales Broadcast Multimedia, a former subsidiary of the French state-owned companyThales Group. Debates have been raised in Iran regarding the possible health hazards of satellite jamming. Iranian officials including the health minister have claimed that jamming has no health risk for humans. However, the minister of communication has recently admitted that satellite jamming has 'serious effects' and has called for identification of jamming stations so they can put a stop to this practice.[18][19][20]The government has generally denied any involvement in jamming and claimed they are sent from unknown sources.[18]According to some sources,IRGCis the organization behind satellite jamming in Iran.[21] TheRussian Armed Forceshave, since the summer of 2015, begun using a multi-functionalEWweapon system inUkraine, known asBorisoglebsk 2.[22]It is postulated that this system has defeated communications in parts of that country, including mobile telephony andGPSsystems.[22][23][24] Radio jamming (or "comm jamming") is a common plot element in theStar Warsfranchise. InStar Wars: Episode VI -Return of the Jedi, when the Rebel fleet approaches the Galactic Empire's force, believing themselves to be launching a surprise attack, GeneralLando Calrissianrealizes the Empire is jamming their signals, and therefore know they are approaching. In the filmStar Trek II, after receiving a distress call from the space stationRegula I, Captain Kirk attempts to establish communications, but theEnterprise'scomm officer Lt. Uhura reports that further transmissions are "jammed at the source".
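The opening of this article notes that jammers work by decreasing the signal-to-noise ratio at the receiver. The numbers below are invented for illustration; the sketch simply shows how adding a jammer's power to the noise floor collapses the receiver's signal-to-interference-plus-noise ratio (SINR).

```python
# Illustrative sketch: a jammer degrades the receiver's SINR by adding its own
# power to the noise floor.  All power levels are made-up example values (dBm).
import math

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

def sinr_db(signal_dbm, noise_dbm, jammer_dbm=None):
    interference_mw = dbm_to_mw(noise_dbm)
    if jammer_dbm is not None:
        interference_mw += dbm_to_mw(jammer_dbm)
    return 10 * math.log10(dbm_to_mw(signal_dbm) / interference_mw)

signal, noise = -80.0, -100.0  # hypothetical wanted signal and receiver noise floor
print(f"no jammer:         SINR = {sinr_db(signal, noise):6.1f} dB")
print(f"jammer at -85 dBm: SINR = {sinr_db(signal, noise, -85.0):6.1f} dB")
print(f"jammer at -78 dBm: SINR = {sinr_db(signal, noise, -78.0):6.1f} dB")
# A jammer only a few dB stronger than the wanted signal drives the SINR negative.
```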
https://en.wikipedia.org/wiki/Radio_jamming
The ampere-turn (symbol A⋅t) is the MKS (metre–kilogram–second) unit of magnetomotive force (MMF), represented by a direct current of one ampere flowing in a single-turn loop.[1] "Turns" refers to the winding number of an electrical conductor composing an electromagnetic coil. For example, a current of 2 A flowing through a coil of 10 turns produces an MMF of 20 A⋅t. The corresponding physical quantity is NI, the product of the number of turns, N, and the current, I; it has been used in industry, specifically in US-based coil-making industries.[citation needed] By maintaining the same current and increasing the number of loops or turns of the coil, the strength of the magnetic field increases, because each turn of the coil sets up its own magnetic field; these fields combine to produce the field around the entire coil, making the total magnetic field stronger. The strength of the magnetic field is not linearly related to the ampere-turns when a magnetic material is used as part of the system: the material carrying the magnetic flux "saturates" at some point, after which adding more ampere-turns has little effect. The ampere-turn is equivalent to 4π/10 gilberts, the corresponding CGS unit. In Thomas Edison's laboratory, Francis Upton was the lead mathematician. Trained with Helmholtz in Germany, he used weber as the name of the unit of current, which was later modified to ampere.
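A quick numeric check of the ampere-turn relation above, as a minimal Python sketch (the coil values simply repeat the 2 A, 10-turn example from the text; the gilbert conversion uses the 4π/10 factor stated above):

```python
from math import pi

def mmf_ampere_turns(turns: int, current_amperes: float) -> float:
    """Magnetomotive force NI in ampere-turns."""
    return turns * current_amperes

nI = mmf_ampere_turns(10, 2.0)   # 10 turns carrying 2 A
print(nI)                        # 20.0 A·t
print(nI * 4 * pi / 10)          # equivalent MMF in gilberts (CGS), about 25.13
```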
https://en.wikipedia.org/wiki/Ampere-turn
The concept of anobligatory passage point(OPP) was developed by sociologistMichel Callonin a seminal contribution toactor–network theory: Callon, Michel (1986), "Elements of a sociology of translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay".InJohn Law(Ed.),Power, Action and Belief: A New Sociology of Knowledge?London, Routledge: 196–233. Obligatory passage points are a feature of actor-networks, usually associated with the initial (problematization) phase of a translation process. An OPP can be thought of as the narrow end of a funnel, that forces the actors to converge on a certain topic, purpose or question. The OPP thereby becomes a necessary element for the formation of a network and anaction program. The OPP thereby mediates all interactions between actors in a network and defines the action program. Obligatory passage points allow for local networks to set up negotiation spaces that allow them a degree of autonomy from the global network of involved actors. If a project is unable to impose itself as a strong OPP between the global and local networks, it has no control over global resources such as financial and political support, which can be misused or withdrawn. Additionally, a weak OPP is unable to take credit for the successes achieved within the local network, as outside actors are able to bypass its control and influence the local network directly.[1] An action program can comprise a number of different OPPs. An OPP can also be redefined as the problematization phase is revisited. In Callon andLaw's'"Engineering and Sociology in a Military Aircraft Project"[2]the project management of a project to design a new strategic jet fighter for the British Military became an obligatory passage point between representatives of government and aerospace engineers. In recent years, the notion of the obligatory passage point has taken hold in information systems security andinformation privacydisciplines and journals. Backhouse et al. (2006)[3]illustrated how practices and policies are standardized and institutionalized through OPP. Thissociology-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/Obligatory_passage_point
This list includes SQL reserved words – also known as SQL reserved keywords[1][2] – as specified by SQL:2023 and as added by some RDBMSs. A dash (-) means that the keyword is not reserved.
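As a small illustration of why such a list matters, the sketch below uses Python's built-in sqlite3 module (an assumed example; SQLite's reserved-word list differs in detail from SQL:2023): a reserved word generally cannot be used as a bare identifier, but most RDBMSs accept it when quoted.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Unquoted use of a reserved word as a column name is rejected...
try:
    con.execute("CREATE TABLE t (select INTEGER)")
except sqlite3.OperationalError as exc:
    print("rejected:", exc)

# ...but quoting the identifier makes it acceptable.
con.execute('CREATE TABLE t ("select" INTEGER)')
con.execute('INSERT INTO t ("select") VALUES (1)')
print(con.execute('SELECT "select" FROM t').fetchall())   # [(1,)]
```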
https://en.wikipedia.org/wiki/List_of_SQL_reserved_words
A layer in a deep learning model is a structure or network topology in the model's architecture which takes information from the previous layers and passes it to the next layer. The first type of layer is the dense layer, also called the fully-connected layer,[1][2][3] which is used for abstract representations of input data. In this layer, neurons connect to every neuron in the preceding layer. In multilayer perceptron networks, these layers are stacked together. The convolutional layer[4] is typically used for image analysis tasks. In this layer, the network detects edges, textures, and patterns. The outputs from this layer are then fed into a fully-connected layer for further processing. See also: CNN model. The pooling layer[5] is used to reduce the size of the data input. The recurrent layer is used for text processing with a memory function. As with the convolutional layer, the outputs of recurrent layers are usually fed into a fully-connected layer for further processing. See also: RNN model.[6][7][8] The normalization layer adjusts the output data from previous layers to achieve a regular distribution, which improves scalability and model training. A hidden layer is any layer in a neural network that is not the input or output layer. There is an intrinsic difference between deep learning layering and neocortical layering: deep learning layering depends on network topology, while neocortical layering depends on intra-layer homogeneity.
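The layer types described above are easiest to see stacked in code. The following is a minimal sketch assuming PyTorch (the article does not prescribe a framework); the layer sizes and input shape are illustrative only.

```python
import torch
import torch.nn as nn

# A small illustrative network combining the layer types described above.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer: detects edges/textures
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer: reduces the spatial size
    nn.BatchNorm2d(16),                          # normalization layer: regularizes activations
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 64),                 # dense (fully-connected) hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),                           # dense output layer
)

x = torch.randn(8, 1, 28, 28)   # a batch of 8 single-channel 28x28 images
print(model(x).shape)           # torch.Size([8, 10])
```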
https://en.wikipedia.org/wiki/Layer_(deep_learning)
In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system such as: A pole–zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O. A pole–zero plot is plotted in the plane of a complex frequency domain, which can represent either a continuous-time or a discrete-time system: In general, a rational transfer function for a continuous-time LTI system has the form

$$H(s)=\frac{B(s)}{A(s)}=\frac{\sum_{m=0}^{M}b_{m}s^{m}}{s^{N}+\sum_{n=0}^{N-1}a_{n}s^{n}}=\frac{b_{0}+b_{1}s+b_{2}s^{2}+\cdots +b_{M}s^{M}}{a_{0}+a_{1}s+a_{2}s^{2}+\cdots +a_{N-1}s^{N-1}+s^{N}},$$

where either $M$ or $N$ or both may be zero, but in real systems it should be the case that $M\leq N$; otherwise the gain would be unbounded at high frequencies. The region of convergence (ROC) for a given continuous-time transfer function is a half-plane or vertical strip, either of which contains no poles. In general, the ROC is not unique, and the particular ROC in any given case depends on whether the system is causal or anti-causal. The ROC is usually chosen to include the imaginary axis since it is important for most practical systems to have BIBO stability. For example,

$$H(s)=\frac{25}{s^{2}+6s+25}.$$

This system has no (finite) zeros and two poles: $s=\alpha_{1}=-3+4j$ and $s=\alpha_{2}=-3-4j$. The pole–zero plot would be: Notice that these two poles are complex conjugates, which is the necessary and sufficient condition to have real-valued coefficients in the differential equation representing the system. In general, a rational transfer function for a discrete-time LTI system has the form

$$H(z)=\frac{P(z)}{Q(z)}=\frac{\sum_{m=0}^{M}b_{m}z^{-m}}{1+\sum_{n=1}^{N}a_{n}z^{-n}}=\frac{b_{0}+b_{1}z^{-1}+b_{2}z^{-2}+\cdots +b_{M}z^{-M}}{1+a_{1}z^{-1}+a_{2}z^{-2}+\cdots +a_{N}z^{-N}},$$

where either $M$ or $N$ or both may be zero. The region of convergence (ROC) for a given discrete-time transfer function is a disk or annulus which contains no uncancelled poles. In general, the ROC is not unique, and the particular ROC in any given case depends on whether the system is causal or anti-causal. The ROC is usually chosen to include the unit circle since it is important for most practical systems to have BIBO stability. If $P(z)$ and $Q(z)$ are completely factored, their solution can be easily plotted in the z-plane. For example, given the following transfer function

$$H(z)=\frac{z+2}{z^{2}+\frac{1}{4}},$$

the only (finite) zero is located at $z=-2$, and the two poles are located at $z=\pm\frac{j}{2}$, where $j$ is the imaginary unit. The pole–zero plot would be:
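As a quick check of the continuous-time example above, the poles and zeros of a rational transfer function can be found numerically as the roots of its numerator and denominator polynomials. A minimal sketch with NumPy (the library choice is an assumption of this example):

```python
import numpy as np

# H(s) = 25 / (s^2 + 6s + 25): coefficients are given in descending powers of s.
num = [25.0]
den = [1.0, 6.0, 25.0]

zeros = np.roots(num)   # no finite zeros: empty array
poles = np.roots(den)   # expected: -3 + 4j and -3 - 4j

print("zeros:", zeros)
print("poles:", poles)

# In a pole-zero plot, poles are marked with 'x' and zeros with 'o'
# in the complex s-plane (real part vs. imaginary part).
```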
https://en.wikipedia.org/wiki/Pole%E2%80%93zero_plot
Instatisticsandoptimization,errorsandresidualsare two closely related and easily confused measures of thedeviationof anobserved valueof anelementof astatistical samplefrom its "true value" (not necessarily observable). Theerrorof anobservationis the deviation of the observed value from the true value of a quantity of interest (for example, apopulation mean). Theresidualis the difference between the observed value and theestimatedvalue of the quantity of interest (for example, asample mean). The distinction is most important inregression analysis, where the concepts are sometimes called theregression errorsandregression residualsand where they lead to the concept ofstudentized residuals. Ineconometrics, "errors" are also calleddisturbances.[1][2][3] Suppose there is a series of observations from aunivariate distributionand we want to estimate themeanof that distribution (the so-calledlocation model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean. Astatistical error(ordisturbance) is the amount by which an observation differs from itsexpected value, the latter being based on the wholepopulationfrom which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being themeanof the entire population, is typically unobservable, and hence the statistical error cannot be observed either. Aresidual(or fitting deviation), on the other hand, is an observableestimateof the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample ofnpeople. Thesample meancould serve as a good estimator of thepopulationmean. Then we have: Note that, because of the definition of the sample mean, the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarilynotindependent. The statistical errors, on the other hand, are independent, and their sum within the random sample isalmost surelynot zero. One can standardize statistical errors (especially of anormal distribution) in az-score(or "standard score"), and standardize residuals in at-statistic, or more generallystudentized residuals. If we assume a normally distributed population with mean μ andstandard deviationσ, and choose individuals independently, then we have and thesample mean is a random variable distributed such that: Thestatistical errorsare then withexpectedvalues of zero,[4]whereas theresidualsare The sum of squares of thestatistical errors, divided byσ2, has achi-squared distributionwithndegrees of freedom: However, this quantity is not observable as the population mean is unknown. The sum of squares of theresiduals, on the other hand, is observable. The quotient of that sum by σ2has a chi-squared distribution with onlyn− 1 degrees of freedom: This difference betweennandn− 1 degrees of freedom results inBessel's correctionfor the estimation ofsample varianceof a population with unknown mean and unknown variance. No correction is necessary if the population mean is known. It is remarkable that thesum of squares of the residualsand the sample mean can be shown to be independent of each other, using, e.g.Basu's theorem. 
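A small numerical sketch of the distinction just described (Python with NumPy assumed; the true mean of 1.75 m mirrors the height example above): errors are deviations from the unobservable population mean, residuals are deviations from the sample mean, and the residuals sum to zero by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.75                                          # "true" population mean (normally unobservable)
sample = rng.normal(loc=mu, scale=0.07, size=10)   # heights of 10 randomly chosen men

errors = sample - mu                 # statistical errors: require the population mean
residuals = sample - sample.mean()   # residuals: use the sample mean instead

print(errors.sum())      # almost surely not zero
print(residuals.sum())   # zero up to floating-point rounding
```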
That fact, and the normal and chi-squared distributions given above, form the basis of calculations involving the t-statistic

$$T=\frac{\overline{X}_{n}-\mu_{0}}{S_{n}/\sqrt{n}},$$

where $\overline{X}_{n}-\mu_{0}$ represents the errors, $S_{n}$ represents the sample standard deviation for a sample of size $n$ and unknown $\sigma$, and the denominator term $S_{n}/\sqrt{n}$ accounts for the standard deviation of the errors according to[5]

$$\operatorname{Var}\left(\overline{X}_{n}\right)=\frac{\sigma^{2}}{n}.$$

The probability distributions of the numerator and the denominator separately depend on the value of the unobservable population standard deviation σ, but σ appears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not know σ, we know the probability distribution of this quotient: it has a Student's t-distribution with n − 1 degrees of freedom. We can therefore use this quotient to find a confidence interval for μ. This t-statistic can be interpreted as "the number of standard errors away from the regression line."[6]

In regression analysis, the distinction between errors and residuals is subtle and important, and leads to the concept of studentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the fitted function are the residuals. If the linear model is applicable, a scatterplot of the residuals against the independent variable should be random about zero with no trend.[5] If the data exhibit a trend, the regression model is likely incorrect; for example, the true function may be a quadratic or higher-order polynomial. If the residuals show no trend but "fan out", they exhibit a phenomenon called heteroscedasticity; if their spread is roughly constant and they do not fan out, they exhibit homoscedasticity.

However, a terminological difference arises in the expression mean squared error (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed residuals, and not of the unobservable errors. If that sum of squares is divided by n, the number of observations, the result is the mean of the squared residuals. Since this is a biased estimate of the variance of the unobserved errors, the bias is removed by dividing the sum of the squared residuals by df = n − p − 1 instead of n, where df is the number of degrees of freedom (n minus the number of parameters p being estimated, excluding the intercept, minus 1). This forms an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.[7] Another way to calculate the mean square of error arises when analyzing the variance of linear regression using a technique like that used in ANOVA (they are the same because ANOVA is a type of regression): the sum of squares of the residuals (also called the sum of squares of the error) is divided by the degrees of freedom, where the degrees of freedom equal n − p − 1 and p is the number of parameters estimated in the model (one for each variable in the regression equation, not including the intercept). One can then also calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is just the number of parameters.
Then the F value can be calculated by dividing the mean square of the model by the mean square of the error, and we can then determine significance (which is why the mean squares are needed in the first place).[8] However, because of the behavior of the process of regression, the distributions of residuals at different data points (of the input variable) may vary even if the errors themselves are identically distributed. Concretely, in a linear regression where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be higher than the variability of residuals at the ends of the domain:[9] linear regressions fit endpoints better than the middle. This is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence. Thus to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of residuals, which is called studentizing. This is particularly important in the case of detecting outliers, where the case in question is somehow different from the others in a dataset. For example, a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain. The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observable prediction errors: The mean squared error (MSE) refers to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated). The root mean square error (RMSE) is the square root of the MSE. The sum of squares of errors (SSE) is the MSE multiplied by the sample size. The sum of squares of residuals (SSR) is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. This is the basis for the least squares estimate, where the regression coefficients are chosen such that the SSR is minimal (i.e. its derivative is zero). Likewise, the sum of absolute errors (SAE) is the sum of the absolute values of the residuals, which is minimized in the least absolute deviations approach to regression. The mean error (ME) is the bias. The mean residual (MR) is always zero for least-squares estimators.
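A short sketch of the MSE bookkeeping described above (NumPy assumed; the data and noise level are illustrative): fitting a straight line leaves df = n − p − 1 = n − 2 degrees of freedom for the residuals, since p = 1 slope parameter is estimated in addition to the intercept.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = np.linspace(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)   # true line plus i.i.d. errors

slope, intercept = np.polyfit(x, y, 1)              # least-squares fit
residuals = y - (intercept + slope * x)

ssr = np.sum(residuals**2)        # sum of squares of residuals
mse_biased = ssr / n              # biased estimate of the error variance
mse_unbiased = ssr / (n - 2)      # divide by df = n - p - 1 instead

print(mse_biased, mse_unbiased)   # the unbiased value is slightly larger
```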
https://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics
Transport Layer Security pre-shared key ciphersuites(TLS-PSK) is a set ofcryptographic protocolsthat providesecurecommunication based onpre-shared keys(PSKs). These pre-shared keys aresymmetric keysshared in advance among the communicating parties. There are several cipher suites: The first set of ciphersuites use onlysymmetric keyoperations forauthentication. The second set use aDiffie–Hellmankey exchangeauthenticated with a pre-shared key. The third set combinepublic keyauthentication of the server with pre-shared key authentication of the client. Usually,Transport Layer Security(TLS) usespublic key certificatesorKerberosfor authentication. TLS-PSK uses symmetric keys, shared in advance among the communicating parties, to establish a TLS connection. There are several reasons to use PSKs:
https://en.wikipedia.org/wiki/TLS-PSK
Agraphics processing unit(GPU) is a specializedelectronic circuitdesigned fordigital image processingand to acceleratecomputer graphics, being present either as a discretevideo cardor embedded onmotherboards,mobile phones,personal computers,workstations, andgame consoles. GPUs were later found to be useful for non-graphic calculations involvingembarrassingly parallelproblems due to theirparallel structure. The ability of GPUs to rapidly perform vast numbers of calculations has led to their adoption in diverse fields includingartificial intelligence(AI) where they excel at handling data-intensive and computationally demanding tasks. Other non-graphical uses include the training ofneural networksandcryptocurrency mining. Arcade system boardshave used specialized graphics circuits since the 1970s. In early video game hardware,RAMfor frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor.[1] A specializedbarrel shiftercircuit helped the CPU animate theframebuffergraphics for various 1970sarcade video gamesfromMidwayandTaito, such asGun Fight(1975),Sea Wolf(1976), andSpace Invaders(1978).[2]TheNamco Galaxianarcade system in 1979 used specializedgraphics hardwarethat supportedRGB color, multi-colored sprites, andtilemapbackgrounds.[3]The Galaxian hardware was widely used during thegolden age of arcade video games, by game companies such asNamco,Centuri,Gremlin,Irem,Konami, Midway,Nichibutsu,Sega, and Taito.[4] TheAtari 2600in 1977 used a video shifter called theTelevision Interface Adaptor.[5]Atari 8-bit computers(1979) hadANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specificbitmappedor character modes and where the memory is stored (so there did not need to be a contiguous frame buffer).[clarification needed][6]6502machine codesubroutinescould be triggered onscan linesby setting a bit on a display list instruction.[clarification needed][7]ANTIC also supported smoothverticalandhorizontal scrollingindependent of the CPU.[8] TheNEC μPD7220was the first implementation of apersonal computergraphics display processor as a singlelarge-scale integration(LSI)integrated circuitchip. This enabled the design of low-cost, high-performance video graphics cards such as those fromNumber Nine Visual Technology. It became the best-known GPU until the mid-1980s.[9]It was the first fully integratedVLSI(very large-scale integration)metal–oxide–semiconductor(NMOS) graphics display processor for PCs, supported up to1024×1024 resolution, and laid the foundations for the PC graphics market. It was used in a number of graphics cards and was licensed for clones such as the Intel 82720, the first ofIntel's graphics processing units.[10]The Williams Electronics arcade gamesRobotron 2084,Joust,Sinistar, andBubbles, all released in 1982, contain customblitterchips for operating on 16-color bitmaps.[11][12] In 1984,Hitachireleased the ARTC HD63484, the first majorCMOSgraphics processor for personal computers. The ARTC could display up to4K resolutionwhen inmonochromemode. It was used in a number of graphics cards and terminals during the late 1980s.[13]In 1985, theAmigawas released with a custom graphics chip including ablitterfor bitmap manipulation, line drawing, and area fill. It also included acoprocessorwith its own simple instruction set, that was capable of manipulating graphics hardware registers in sync with the video beam (e.g. 
for per-scanline palette switches, sprite multiplexing, and hardware windowing), or driving the blitter. In 1986,Texas Instrumentsreleased theTMS34010, the first fully programmable graphics processor.[14]It could run general-purpose code but also had a graphics-oriented instruction set. During 1990–1992, this chip became the basis of theTexas Instruments Graphics Architecture("TIGA")Windows acceleratorcards. In 1987, theIBM 8514graphics system was released. It was one of the first video cards forIBM PC compatiblesthat implementedfixed-function2D primitives inelectronic hardware.Sharp'sX68000, released in 1987, used a custom graphics chipset[15]with a 65,536 color palette and hardware support for sprites, scrolling, and multiple playfields.[16]It served as a development machine forCapcom'sCP Systemarcade board. Fujitsu'sFM Townscomputer, released in 1989, had support for a 16,777,216 color palette.[17]In 1988, the first dedicatedpolygonal 3Dgraphics boards were introduced in arcades with theNamco System 21[18]andTaitoAir System.[19] IBMintroduced itsproprietaryVideo Graphics Array(VGA) display standard in 1987, with a maximum resolution of 640×480 pixels. In November 1988,NEC Home Electronicsannounced its creation of theVideo Electronics Standards Association(VESA) to develop and promote aSuper VGA(SVGA)computer display standardas a successor to VGA. Super VGA enabledgraphics display resolutionsup to 800×600pixels, a 56% increase.[20] In 1991,S3 Graphicsintroduced theS3 86C911, which its designers named after thePorsche 911as an indication of the performance increase it promised.[21]The 86C911 spawned a variety of imitators: by 1995, all major PC graphics chip makers had added2Dacceleration support to their chips.[22]Fixed-functionWindows acceleratorssurpassed expensive general-purpose graphics coprocessors in Windows performance, and such coprocessors faded from the PC market. Throughout the 1990s, 2DGUIacceleration evolved. As manufacturing capabilities improved, so did the level of integration of graphics chips. Additionalapplication programming interfaces(APIs) arrived for a variety of tasks, such as Microsoft'sWinGgraphics libraryforWindows 3.x, and their laterDirectDrawinterface forhardware accelerationof 2D games inWindows 95and later. In the early- and mid-1990s,real-time3D graphics became increasingly common in arcade, computer, and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as theSega Model 1,Namco System 22, andSega Model 2, and thefifth-generation video game consolessuch as theSaturn,PlayStation, andNintendo 64. Arcade systems such as the Sega Model 2 andSGIOnyx-based Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L (transform, clipping, and lighting) years before appearing in consumer graphics cards.[23][24]Another early example is theSuper FXchip, aRISC-basedon-cartridge graphics chipused in someSNESgames, notablyDoomandStar Fox. 
Some systems usedDSPsto accelerate transformations.Fujitsu, which worked on the Sega Model 2 arcade system,[25]began working on integrating T&L into a singleLSIsolution for use in home computers in 1995;[26]the Fujitsu Pinolite, the first 3D geometry processor for personal computers, released in 1997.[27]The first hardware T&L GPU onhomevideo game consoleswas theNintendo 64'sReality Coprocessor, released in 1996.[28]In 1997,Mitsubishireleased the3Dpro/2MP, a GPU capable of transformation and lighting, forworkstationsandWindows NTdesktops;[29]ATiused it for itsFireGL 4000graphics card, released in 1997.[30] The term "GPU" was coined bySonyin reference to the 32-bitSony GPU(designed byToshiba) in thePlayStationvideo game console, released in 1994.[31] In the PC world, notable failed attempts for low-cost 3D graphics chips included theS3ViRGE,ATI Rage, andMatroxMystique. These chips were essentially previous-generation 2D accelerators with 3D features bolted on. Many werepin-compatiblewith the earlier-generation chips for ease of implementation and minimal cost. Initially, 3D graphics were possible only with discrete boards dedicated to accelerating 3D functions (and lacking 2D graphical user interface (GUI) acceleration entirely) such as thePowerVRand the3dfxVoodoo. However, as manufacturing technology continued to progress, video, 2D GUI acceleration, and 3D functionality were all integrated into one chip.Rendition'sVeritechipsets were among the first to do this well. In 1997, Rendition collaborated withHerculesand Fujitsu on a "Thriller Conspiracy" project which combined a Fujitsu FXG-1 Pinolite geometry processor with a Vérité V2200 core to create a graphics card with a full T&L engine years before Nvidia'sGeForce 256; This card, designed to reduce the load placed upon the system's CPU, never made it to market.[citation needed]NVIDIARIVA 128was one of the first consumer-facing GPU integrated 3D processing unit and 2D processing unit on a chip. OpenGLwas introduced in the early 1990s by Silicon Graphics as a professional graphics API, with proprietary hardware support for 3D rasterization. In 1994, Microsoft acquiredSoftimage, the dominant CGI movie production tool used for early CGI movie hits likeJurassic Park,Terminator 2andTitanic. With that deal came a strategic relationship with SGI and a commercial license of their OpenGL libraries, enabling Microsoft to port the API to the Windows NT OS but not to the upcoming release of Windows 95. Although it was little known at the time, SGI had contracted with Microsoft totransition from Unix to the forthcoming Windows NT OS; the deal which was signed in 1995 was not announced publicly until 1998. In the intervening period, Microsoft worked closely with SGI to port OpenGL to Windows NT. In that era, OpenGL had no standard driver model for competing hardware accelerators to compete on the basis of support for higher level 3D texturing and lighting functionality. In 1994 Microsoft announced DirectX 1.0 and support for gaming in the forthcoming Windows 95 consumer OS. In 1995Microsoft announced the acquisition of UK based Rendermorphics Ltdand the Direct3D driver model for the acceleration of consumer 3D graphics. The Direct3D driver model shipped with DirectX 2.0 in 1996. It included standards and specifications for 3D chip makers to compete to support 3D texture, lighting and Z-buffering. ATI, which was later to be acquired by AMD, began development on the first Direct3D GPUs. 
Nvidia quickly pivoted from afailed deal with Segain 1996 to aggressively embracing support for Direct3D. In this era Microsoft merged their internal Direct3D and OpenGL teams and worked closely with SGI to unify driver standards for both industrial and consumer 3D graphics hardware accelerators. Microsoft ran annual events for 3D chip makers called "Meltdowns" to test their 3D hardware and drivers to work both with Direct3D and OpenGL. It was during this period of strong Microsoft influence over 3D standards that 3D accelerator cards moved beyond being simplerasterizersto become more powerful general purpose processors as support for hardware accelerated texture mapping, lighting, Z-buffering and compute created the modern GPU. During this period the same Microsoft team responsible for Direct3D and OpenGL driver standardization introduced their own Microsoft 3D chip design calledTalisman. Details of this era are documented extensively in the books "Game of X" v.1 and v.2 by Russel Demaria, "Renegades of the Empire" by Mike Drummond, "Opening the Xbox" by Dean Takahashi and "Masters of Doom" by David Kushner. TheNvidiaGeForce 256(also known as NV10) was the first consumer-level card with hardware-accelerated T&L. While the OpenGL API provided software support for texture mapping and lighting, the first 3D hardware acceleration for these features arrived with the firstDirect3D accelerated consumer GPU's. NVIDIA released the GeForce 256, marketed as the world's first GPU, integrating transform and lighting engines for advanced 3D graphics rendering. Nvidia was first to produce a chip capable of programmableshading: theGeForce 3. Each pixel could now be processed by a short program that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the screen. Used in theXboxconsole, this chip competed with the one in thePlayStation 2, which used a custom vector unit for hardware-accelerated vertex processing (commonly referred to as VU0/VU1). The earliest incarnations of shader execution engines used in Xbox were not general-purpose and could not execute arbitrary pixel code. Vertices and pixels were processed by different units, which had their resources, with pixel shaders having tighter constraints (because they execute at higher frequencies than vertices). Pixel shading engines were more akin to a highly customizable function block and did not "run" a program. Many of these disparities between vertex and pixel shading were not addressed until theUnified Shader Model. In October 2002, with the introduction of theATIRadeon 9700(also known as R300), the world's firstDirect3D9.0 accelerator, pixel and vertex shaders could implementloopingand lengthyfloating pointmath, and were quickly becoming as flexible as CPUs, yet orders of magnitude faster for image-array operations. 
Pixel shading is often used forbump mapping, which adds texture to make an object look shiny, dull, rough, or even round or extruded.[32] With the introduction of the NvidiaGeForce 8 seriesand new generic stream processing units, GPUs became more generalized computing devices.ParallelGPUs are making computational inroads against the CPU, and a subfield of research, dubbed GPU computing orGPGPUforgeneral purpose computing on GPU, has found applications in fields as diverse asmachine learning,[33]oil exploration, scientificimage processing,linear algebra,[34]statistics,[35]3D reconstruction, andstock optionspricing.GPGPUwas the precursor to what is now called a compute shader (e.g. CUDA, OpenCL, DirectCompute) and actually abused the hardware to a degree by treating the data passed to algorithms as texture maps and executing algorithms by drawing a triangle or quad with an appropriate pixel shader.[clarification needed]This entails some overheads since units like thescan converterare involved where they are not needed (nor are triangle manipulations even a concern—except to invoke the pixel shader).[clarification needed] Nvidia'sCUDAplatform, first introduced in 2007,[36]was the earliest widely adopted programming model for GPU computing.OpenCLis an open standard defined by theKhronos Groupthat allows for the development of code for both GPUs and CPUs with an emphasis on portability.[37]OpenCL solutions are supported by Intel, AMD, Nvidia, and ARM, and according to a report in 2011 by Evans Data, OpenCL had become the second most popular HPC tool.[38] In 2010, Nvidia partnered withAudito power their cars' dashboards, using theTegraGPU to provide increased functionality to cars' navigation and entertainment systems.[39]Advances in GPU technology in cars helped advanceself-driving technology.[40]AMD'sRadeon HD 6000 seriescards were released in 2010, and in 2011 AMD released its 6000M Series discrete GPUs for mobile devices.[41]The Kepler line of graphics cards by Nvidia were released in 2012 and were used in the Nvidia's 600 and 700 series cards. A feature in this GPU microarchitecture included GPU boost, a technology that adjusts the clock-speed of a video card to increase or decrease it according to its power draw.[42]TheKepler microarchitecturewas manufactured. ThePS4andXbox Onewere released in 2013; they both use GPUs based onAMD's Radeon HD 7850 and 7790.[43]Nvidia's Kepler line of GPUs was followed by theMaxwellline, manufactured on the same process. Nvidia's 28 nm chips were manufactured byTSMCin Taiwan using the 28 nm process. Compared to the 40 nm technology from the past, this manufacturing process allowed a 20 percent boost in performance while drawing less power.[44][45]Virtual reality headsetshave high system requirements; manufacturers recommended the GTX 970 and the R9 290X or better at the time of their release.[46][47]Cards based on thePascalmicroarchitecture were released in 2016. TheGeForce 10 seriesof cards are of this generation of graphics cards. They are made using the 16 nm manufacturing process which improves upon previous microarchitectures.[48]Nvidia released one non-consumer card under the newVoltaarchitecture, the Titan V. Changes from the Titan XP, Pascal's high-end card, include an increase in the number of CUDA cores, the addition of tensor cores, andHBM2. Tensor cores are designed for deep learning, while high-bandwidth memory is on-die, stacked, lower-clocked memory that offers an extremely wide memory bus. 
To emphasize that the Titan V is not a gaming card, Nvidia removed the "GeForce GTX" suffix it adds to consumer gaming cards. In 2018, Nvidia launched the RTX 20 series GPUs that added ray-tracing cores to GPUs, improving their performance on lighting effects.[49]Polaris 11andPolaris 10GPUs from AMD are fabricated by a 14 nm process. Their release resulted in a substantial increase in the performance per watt of AMD video cards.[50]AMD also released the Vega GPU series for the high end market as a competitor to Nvidia's high end Pascal cards, also featuring HBM2 like the Titan V. In 2019, AMD released the successor to theirGraphics Core Next(GCN) microarchitecture/instruction set. Dubbed RDNA, the first product featuring it was theRadeon RX 5000 seriesof video cards.[51]The company announced that the successor to the RDNA microarchitecture would be incremental (a "refresh"). AMD unveiled theRadeon RX 6000 series, its RDNA 2 graphics cards with support for hardware-accelerated ray tracing.[52]The product series, launched in late 2020, consisted of the RX 6800, RX 6800 XT, and RX 6900 XT.[53][54]The RX 6700 XT, which is based on Navi 22, was launched in early 2021.[55] ThePlayStation 5andXbox Series X and Series Swere released in 2020; they both use GPUs based on theRDNA 2microarchitecture with incremental improvements and different GPU configurations in each system's implementation.[56][57][58] Intelfirstentered the GPU marketin the late 1990s, but produced lackluster 3D accelerators compared to the competition at the time. Rather than attempting to compete with the high-end manufacturers Nvidia and ATI/AMD, they began integratingIntel Graphics TechnologyGPUs into motherboard chipsets, beginning with theIntel 810for the Pentium III, and later into CPUs. They began with theIntel Atom 'Pineview'laptop processor in 2009, continuing in 2010 with desktop processors in the first generation of theIntel Coreline and with contemporary Pentiums and Celerons. This resulted in a large nominal market share, as the majority of computers with an Intel CPU also featured this embedded graphics processor. These generally lagged behind discrete processors in performance. Intel re-entered the discrete GPU market in 2022 with itsArcseries, which competed with the then-current GeForce 30 series and Radeon 6000 series cards at competitive prices.[citation needed] In the 2020s, GPUs have been increasingly used for calculations involvingembarrassingly parallelproblems, such as training ofneural networkson enormous datasets that are needed forlarge language models. Specialized processing cores on some modern workstation's GPUs are dedicated fordeep learningsince they have significant FLOPS performance increases, using 4×4 matrix multiplication and division, resulting in hardware performance up to 128 TFLOPS in some applications.[59]These tensor cores are expected to appear in consumer cards, as well.[needs update][60] Many companies have produced GPUs under a number of brand names. In 2009,[needs update]Intel,Nvidia, andAMD/ATIwere the market share leaders, with 49.4%, 27.8%, and 20.6% market share respectively. In addition,Matrox[61]produces GPUs. Chinese companies such asJingjia Microhave also produced GPUs for the domestic market although in terms of worldwide sales, they still lag behind market leaders.[62] Modern smartphones use mostlyAdrenoGPUs fromQualcomm,PowerVRGPUs fromImagination Technologies, andMali GPUsfromARM. 
Modern GPUs have traditionally used most of theirtransistorsto do calculations related to3D computer graphics. In addition to the 3D hardware, today's GPUs include basic 2D acceleration andframebuffercapabilities (usually with a VGA compatibility mode). Newer cards such as AMD/ATI HD5000–HD7000 lack dedicated 2D acceleration; it is emulated by 3D hardware. GPUs were initially used to accelerate the memory-intensive work oftexture mappingandrenderingpolygons. Later, dedicated hardware was added to accelerategeometriccalculations such as therotationandtranslationofverticesinto differentcoordinate systems. Recent developments in GPUs include support forprogrammable shaderswhich can manipulate vertices and textures with many of the same operations that are supported byCPUs,oversamplingandinterpolationtechniques to reducealiasing, and very high-precisioncolor spaces. Several factors of GPU construction affect the performance of the card for real-time rendering, such as the size of the connector pathways in thesemiconductor device fabrication, theclock signalfrequency, and the number and size of various on-chip memorycaches. Performance is also affected by the number of streaming multiprocessors (SM) for NVidia GPUs, or compute units (CU) for AMD GPUs, or Xe cores for Intel discrete GPUs, which describe the number of on-silicon processor core units within the GPU chip that perform the core calculations, typically working in parallel with other SM/CUs on the GPU. GPU performance is typically measured in floating point operations per second (FLOPS); GPUs in the 2010s and 2020s typically deliver performance measured in teraflops (TFLOPS). This is an estimated performance measure, as other factors can affect the actual display rate.[63] Most GPUs made since 1995 support theYUVcolor spaceandhardware overlays, important fordigital videoplayback, and many GPUs made since 2000 also supportMPEGprimitives such asmotion compensationandiDCT. This hardware-accelerated video decoding, in which portions of thevideo decodingprocess andvideo post-processingare offloaded to the GPU hardware, is commonly referred to as "GPU accelerated video decoding", "GPU assisted video decoding", "GPU hardware accelerated video decoding", or "GPU hardware assisted video decoding". Recent graphics cards decodehigh-definition videoon the card, offloading the central processing unit. The most commonAPIsfor GPU accelerated video decoding areDxVAforMicrosoft Windowsoperating systems andVDPAU,VAAPI,XvMC, andXvBAfor Linux-based and UNIX-like operating systems. All except XvMC are capable of decoding videos encoded withMPEG-1,MPEG-2,MPEG-4 ASP (MPEG-4 Part 2),MPEG-4 AVC(H.264 / DivX 6),VC-1,WMV3/WMV9,Xvid/ OpenDivX (DivX 4), andDivX5codecs, while XvMC is only capable of decoding MPEG-1 and MPEG-2. There are severaldedicated hardware video decoding and encoding solutions. Video decoding processes that can be accelerated by modern GPU hardware are: These operations also have applications in video editing, encoding, and transcoding. An earlier GPU may support one or more 2D graphics API for 2D acceleration, such asGDIandDirectDraw.[64] A GPU can support one or more 3D graphics API, such asDirectX,Metal,OpenGL,OpenGL ES,Vulkan. 
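As a rough illustration of the FLOPS figures mentioned above, a GPU's theoretical peak throughput is commonly estimated from its core count, clock frequency, and the two floating-point operations of a fused multiply–add; the numbers in this sketch are hypothetical, not the specification of any particular product.

```python
# Rough theoretical peak, assuming one fused multiply-add (2 FLOPs) per core per cycle.
cores = 3584            # hypothetical shader core count (SM/CU count x cores per unit)
clock_hz = 1.6e9        # hypothetical boost clock in Hz
flops_per_cycle = 2     # multiply + add from an FMA

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.1f} TFLOPS")   # ~11.5 TFLOPS for these assumed figures
```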
In the 1970s, the term "GPU" originally stood for graphics processor unit and described a programmable processing unit working independently from the CPU that was responsible for graphics manipulation and output.[65][66] In 1994, Sony used the term (now standing for graphics processing unit) in reference to the PlayStation console's Toshiba-designed Sony GPU.[31] The term was popularized by Nvidia in 1999, who marketed the GeForce 256 as "the world's first GPU".[67] It was presented as a "single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines".[68] Rival ATI Technologies coined the term "visual processing unit" or VPU with the release of the Radeon 9700 in 2002.[69] The AMD Alveo MA35D, released in 2023, features dual VPUs, each fabricated on the 5 nm process.[70] In personal computers, there are two main forms of GPUs. Each has many synonyms:[71] Most GPUs are designed for a specific use, real-time 3D graphics, or other mass calculations: Dedicated graphics processing units use RAM that is dedicated to the GPU rather than relying on the computer's main system memory. This RAM is usually specially selected for the expected serial workload of the graphics card (see GDDR). Sometimes systems with dedicated discrete GPUs were called "DIS" systems, as opposed to "UMA" systems (see next section).[72] Dedicated GPUs are not necessarily removable, nor do they necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that graphics cards have RAM that is dedicated to the card's use, not to the fact that most dedicated GPUs are removable. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts. Graphics cards with dedicated GPUs typically interface with the motherboard by means of an expansion slot such as PCI Express (PCIe) or Accelerated Graphics Port (AGP). They can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few graphics cards still use Peripheral Component Interconnect (PCI) slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is not available. Technologies such as Scan-Line Interleave by 3dfx, SLI and NVLink by Nvidia and CrossFire by AMD allow multiple GPUs to draw images simultaneously for a single screen, increasing the processing power available for graphics. These technologies, however, are increasingly uncommon; most games do not fully use multiple GPUs, as most users cannot afford them.[73][74][75] Multiple GPUs are still used on supercomputers (like in Summit), on workstations to accelerate video (processing multiple videos at once)[76][77][78] and 3D rendering,[79] for VFX,[80] GPGPU workloads and for simulations,[81] and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers, Tesla GPUs, and Intel's Ponte Vecchio GPUs. Integrated graphics processing units (IGPU), integrated graphics, shared graphics solutions, integrated graphics processors (IGP), or unified memory architectures (UMA) use a portion of a computer's system RAM rather than dedicated graphics memory. IGPs can be integrated onto a motherboard as part of its northbridge chipset,[82] or on the same die (integrated circuit) with the CPU (like AMD APU or Intel HD Graphics).
On certain motherboards,[83]AMD's IGPs can use dedicated sideport memory: a separate fixed block of high performance memory that is dedicated for use by the GPU. As of early 2007[update]computers with integrated graphics account for about 90% of all PC shipments.[84][needs update]They are less costly to implement than dedicated graphics processing, but tend to be less capable. Historically, integrated processing was considered unfit for 3D games or graphically intensive programs but could run less intensive programs such as Adobe Flash. Examples of such IGPs would be offerings from SiS and VIA circa 2004.[85]However, modern integrated graphics processors such asAMD Accelerated Processing UnitandIntel Graphics Technology(HD, UHD, Iris, Iris Pro, Iris Plus, andXe-LP) can handle 2D graphics or low-stress 3D graphics. Since GPU computations are memory-intensive, integrated processing may compete with the CPU for relatively slow system RAM, as it has minimal or no dedicated video memory. IGPs use system memory with bandwidth up to a current maximum of 128 GB/s, whereas a discrete graphics card may have a bandwidth of more than 1000 GB/s between itsVRAMand GPU core. Thismemory busbandwidth can limit the performance of the GPU, thoughmulti-channel memorycan mitigate this deficiency.[86]Older integrated graphics chipsets lacked hardwaretransform and lighting, but newer ones include it.[87][88] On systems with "Unified Memory Architecture" (UMA), including modern AMD processors with integrated graphics,[89]modern Intel processors with integrated graphics,[90]Apple processors, the PS5 and Xbox Series (among others), the CPU cores and the GPU block share the same pool of RAM and memory address space. This allows the system to dynamically allocate memory between the CPU cores and the GPU block based on memory needs (without needing a large static split of the RAM) and thanks to zero copy transfers, removes the need for either copying data over abus (computing)between physically separate RAM pools or copying between separate address spaces on a single physical pool of RAM, allowing more efficient transfer of data. Hybrid GPUs compete with integrated graphics in the low-end desktop and notebook markets. The most common implementations of this are ATI'sHyperMemoryand Nvidia'sTurboCache. Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. They share memory with the system and have a small dedicated memory cache, to make up for the highlatencyof the system RAM. Technologies within PCI Express make this possible. While these solutions are sometimes advertised as having as much as 768 MB of RAM, this refers to how much can be shared with the system memory. It is common to use ageneral purpose graphics processing unit (GPGPU)as a modified form ofstream processor(or avector processor), runningcompute kernels. This turns the massive computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete (see "Dedicated graphics processing unit" above) GPU designers,AMDandNvidia, are pursuing this approach with an array of applications. Both Nvidia and AMD teamed withStanford Universityto create a GPU-based client for theFolding@homedistributed computing project for protein folding calculations. 
In certain circumstances, the GPU calculates forty times faster than the CPUs traditionally used by such applications.[91][92] GPGPUs can be used for many types ofembarrassingly paralleltasks includingray tracing. They are generally suited to high-throughput computations that exhibitdata-parallelismto exploit the wide vector widthSIMDarchitecture of the GPU. GPU-based high performance computers play a significant role in large-scale modelling. Three of the ten most powerful supercomputers in the world take advantage of GPU acceleration.[93] GPUs support API extensions to theCprogramming language such asOpenCLandOpenMP. Furthermore, each GPU vendor introduced its own API which only works with their cards:AMD APP SDKfrom AMD, andCUDAfrom Nvidia. These allow functions calledcompute kernelsto run on the GPU's stream processors. This makes it possible for C programs to take advantage of a GPU's ability to operate on large buffers in parallel, while still using the CPU when appropriate. CUDA was the first API to allow CPU-based applications to directly access the resources of a GPU for more general purpose computing without the limitations of using a graphics API.[citation needed] Since 2005 there has been interest in using the performance offered by GPUs forevolutionary computationin general, and for accelerating thefitnessevaluation ingenetic programmingin particular. Most approaches compilelinearortree programson the host PC and transfer the executable to the GPU to be run. Typically a performance advantage is only obtained by running the single active program simultaneously on many example problems in parallel, using the GPU'sSIMDarchitecture.[94]However, substantial acceleration can also be obtained by not compiling the programs, and instead transferring them to the GPU, to be interpreted there.[95]Acceleration can then be obtained by either interpreting multiple programs simultaneously, simultaneously running multiple example problems, or combinations of both. A modern GPU can simultaneously interpret hundreds of thousands of very small programs. An external GPU is a graphics processor located outside of the housing of the computer, similar to a large external hard drive. External graphics processors are sometimes used with laptop computers. Laptops might have a substantial amount of RAM and a sufficiently powerful central processing unit (CPU), but often lack a powerful graphics processor, and instead have a less powerful but more energy-efficient on-board graphics chip. On-board graphics chips are often not powerful enough for playing video games, or for other graphically intensive tasks, such as editing video or 3D animation/rendering. Therefore, it is desirable to attach a GPU to some external bus of a notebook.PCI Expressis the only bus used for this purpose. The port may be, for example, anExpressCardormPCIeport (PCIe ×1, up to 5 or 2.5 Gbit/s respectively), aThunderbolt1, 2, or 3 port (PCIe ×4, up to 10, 20, or 40 Gbit/s respectively), aUSB4 port with Thunderbolt compatibility, or anOCuLinkport. Those ports are only available on certain notebook systems.[96]eGPU enclosures include their own power supply (PSU), because powerful GPUs can consume hundreds of watts.[97] Graphics processing units (GPU) have continued to increase in energy usage, while CPUs designers have recently[when?]focused on improving performance per watt. High performance GPUs may draw large amount of power, therefore intelligent techniques are required to manage GPU power consumption. 
Measures like3DMark2006 scoreper watt can help identify more efficient GPUs.[98]However that may not adequately incorporate efficiency in typical use, where much time is spent doing less demanding tasks.[99] With modern GPUs, energy usage is an important constraint on the maximum computational capabilities that can be achieved. GPU designs are usually highly scalable, allowing the manufacturer to put multiple chips on the same video card, or to use multiple video cards that work in parallel. Peak performance of any system is essentially limited by the amount of power it can draw and the amount of heat it can dissipate. Consequently, performance per watt of a GPU design translates directly into peak performance of a system that uses that design. In 2013, 438.3 million GPUs were shipped globally and the forecast for 2014 was 414.2 million. However, by the third quarter of 2022, shipments of PC GPUs totaled around 75.5 million units, down 19% year-over-year.[100][needs update][101]
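Returning to the GPGPU compute-kernel model described earlier (CUDA, OpenCL and similar APIs), the following is a minimal sketch using Python with Numba's CUDA support (an assumed toolchain; it requires an Nvidia GPU and the numba package). The kernel adds two vectors element-wise, with one GPU thread per element.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)        # global thread index
    if i < out.size:        # guard against threads past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # arrays are copied to/from the GPU

assert np.allclose(out, a + b)
```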
https://en.wikipedia.org/wiki/Graphics_processing_unit
In mathematics, the Moyal product (after José Enrique Moyal; also called the star product or Weyl–Groenewold product, after Hermann Weyl and Hilbrand J. Groenewold) is an example of a phase-space star product. It is an associative, non-commutative product, ★, on the functions on $\mathbb{R}^{2n}$, equipped with its Poisson bracket (with a generalization to symplectic manifolds, described below). It is a special case of the ★-product of the "algebra of symbols" of a universal enveloping algebra. The Moyal product is named after José Enrique Moyal, but is also sometimes called the Weyl–Groenewold product as it was introduced by H. J. Groenewold in his 1946 doctoral dissertation, in a trenchant appreciation[1] of the Weyl correspondence. Moyal actually appears not to know about the product in his celebrated article[2] and was crucially lacking it in his legendary correspondence with Dirac, as illustrated in his biography.[3] The popular naming after Moyal appears to have emerged only in the 1970s, in homage to his flat phase-space quantization picture.[4]

The product for smooth functions $f$ and $g$ on $\mathbb{R}^{2n}$ takes the form

$$f\star g=fg+\sum_{n=1}^{\infty}\hbar^{n}C_{n}(f,g),$$

where each $C_{n}$ is a certain bidifferential operator of order $n$, characterized by the following properties (see below for an explicit formula): Note that, if one wishes to take functions valued in the real numbers, then an alternative version eliminates the $i$ in the second condition and eliminates the fourth condition. If one restricts to polynomial functions, the above algebra is isomorphic to the Weyl algebra $A_{n}$, and the two offer alternative realizations of the Weyl map of the space of polynomials in $n$ variables (or the symmetric algebra of a vector space of dimension $2n$).

To provide an explicit formula, consider a constant Poisson bivector $\Pi$ on $\mathbb{R}^{2n}$:

$$\Pi=\sum_{i,j}\Pi^{ij}\partial_{i}\wedge\partial_{j},$$

where $\Pi^{ij}$ is a real number for each $i,j$. The star product of two functions $f$ and $g$ can then be defined as the pseudo-differential operator acting on both of them,

$$f\star g=fg+\frac{i\hbar}{2}\sum_{i,j}\Pi^{ij}(\partial_{i}f)(\partial_{j}g)-\frac{\hbar^{2}}{8}\sum_{i,j,k,m}\Pi^{ij}\Pi^{km}(\partial_{i}\partial_{k}f)(\partial_{j}\partial_{m}g)+\ldots,$$

where $\hbar$ is the reduced Planck constant, treated as a formal parameter here. This is a special case of what is known as the Berezin formula[5] on the algebra of symbols and can be given a closed form[6] (which follows from the Baker–Campbell–Hausdorff formula). The closed form can be obtained by using the exponential:

$$f\star g=m\circ e^{\frac{i\hbar}{2}\Pi}(f\otimes g),$$

where $m$ is the multiplication map, $m(a\otimes b)=ab$, and the exponential is treated as a power series,

$$e^{A}=\sum_{n=0}^{\infty}\frac{1}{n!}A^{n}.$$

That is, the formula for $C_{n}$ is

$$C_{n}=\frac{i^{n}}{2^{n}n!}\,m\circ\Pi^{n}.$$

As indicated, often one eliminates all occurrences of $i$ above, and the formulas then restrict naturally to real numbers. Note that if the functions $f$ and $g$ are polynomials, the above infinite sums become finite (reducing to the ordinary Weyl-algebra case).
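To make the lowest-order terms concrete, the following sketch (SymPy assumed) truncates the star product at first order in ħ for the canonical pair (q, p), taking Π^{qp} = 1 = −Π^{pq}; for these degree-one polynomials the higher-order terms vanish, and the star commutator of q and p already reproduces iħ.

```python
import sympy as sp

q, p, hbar = sp.symbols('q p hbar', real=True)

def star_first_order(f, g):
    """Moyal product truncated at O(hbar): f*g + (i*hbar/2) {f, g}_Poisson."""
    poisson = sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
    return sp.expand(f * g + sp.I * hbar / 2 * poisson)

print(star_first_order(q, p))                            # q*p + I*hbar/2
print(star_first_order(q, p) - star_first_order(p, q))   # I*hbar
```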
The relationship of the Moyal product to the generalized ★-product used in the definition of the "algebra of symbols" of a universal enveloping algebra follows from the fact that the Weyl algebra is the universal enveloping algebra of the Heisenberg algebra (modulo that the center equals the unit). On any symplectic manifold, one can, at least locally, choose coordinates so as to make the symplectic structure constant, by Darboux's theorem; and, using the associated Poisson bivector, one may consider the above formula. For it to work globally, as a function on the whole manifold (and not just a local formula), one must equip the symplectic manifold with a torsion-free symplectic connection. This makes it a Fedosov manifold. More general results for arbitrary Poisson manifolds (where the Darboux theorem does not apply) are given by the Kontsevich quantization formula. A simple explicit example of the construction and utility of the ★-product (for the simplest case of a two-dimensional Euclidean phase space) is given in the article on the Wigner–Weyl transform: two Gaussians compose with this ★-product according to a hyperbolic tangent law:[7] $\exp\left[-a\left(q^{2}+p^{2}\right)\right]\star\exp\left[-b\left(q^{2}+p^{2}\right)\right]=\frac{1}{1+\hbar^{2}ab}\exp\left[-\frac{a+b}{1+\hbar^{2}ab}\left(q^{2}+p^{2}\right)\right]$. Equivalently, $e^{-\tanh(a)\frac{q^{2}+p^{2}}{\hbar}}\star e^{-\tanh(b)\frac{q^{2}+p^{2}}{\hbar}}=\frac{\tanh(a+b)}{\tanh(a)+\tanh(b)}\,e^{-\tanh(a+b)\frac{q^{2}+p^{2}}{\hbar}}$. The classical limit at $\hbar\to 0$, $a/\hbar\to\alpha$, $b/\hbar\to\beta$ is $e^{-\alpha(q^{2}+p^{2})}\,e^{-\beta(q^{2}+p^{2})}=e^{-(\alpha+\beta)(q^{2}+p^{2})}$, as expected. Every correspondence prescription between phase space and Hilbert space, however, induces its own proper ★-product.[8][9] Similar results are seen in the Segal–Bargmann space and in the theta representation of the Heisenberg group, where the creation and annihilation operators $a^{*}=z$ and $a=\partial/\partial z$ are understood to act on the complex plane (respectively, the upper half-plane for the Heisenberg group), so that the position and momenta operators are given by $q=\frac{a+a^{*}}{2}$ and $p=\frac{a-a^{*}}{2i}$. This situation is clearly different from the case where the positions are taken to be real-valued, but does offer insights into the overall algebraic structure of the Heisenberg algebra and its envelope, the Weyl algebra. Inside a phase-space integral, just one star product of the Moyal type may be dropped,[10] resulting in plain multiplication, as evident by integration by parts, $\int dx\,dp\;f\star g=\int dx\,dp\;f\,g$, making the cyclicity of the phase-space trace manifest. This is a unique property of the above specific Moyal product, and does not hold for other correspondence rules' star products, such as Husimi's, etc.
https://en.wikipedia.org/wiki/Moyal_product
Web testing is software testing that focuses on web applications. Complete testing of a web-based system before going live can help address issues before the system is revealed to the public. Issues may include the security of the web application, the basic functionality of the site, its accessibility to disabled and fully able users, its ability to adapt to the multitude of desktops, devices, and operating systems, as well as readiness for expected traffic and number of users and the ability to survive a massive spike in user traffic, both of which are related to load testing. A web application performance tool (WAPT) is used to test web applications and web-related interfaces. These tools are used for performance, load and stress testing of web applications, web sites, web APIs, web servers and other web interfaces. A WAPT typically simulates virtual users that repeat either recorded URLs or a specified URL, and lets the tester specify the number of times or iterations the virtual users repeat those URLs. By doing so, the tool is useful for finding bottlenecks and performance leakage in the website or web application being tested. A WAPT faces various challenges during testing and should be able to conduct tests under a range of conditions. A WAPT also allows a user to specify how virtual users are involved in the testing environment, i.e. an increasing, constant, or periodic user load. Increasing the user load step by step is called a RAMP, where virtual users are increased from 0 to hundreds. A constant user load maintains the specified user load at all times. A periodic user load increases and decreases the user load from time to time. Web security testing tells us whether web-based application requirements are met when they are subjected to malicious input data.[1] There is a web application security testing plug-in collection for Firefox.[2] An application programming interface (API) exposes services to other software components, which can query the API. The API implementation is in charge of computing the service and returning the result to the component that sent the query. A part of web testing focuses on testing these web API implementations. GraphQL is a specific query and API language. It is the focus of tailored testing techniques. Search-based test generation yields good results for generating test cases for GraphQL APIs.[3]
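A ramp-style load test of the kind described above can be sketched in a few lines of Python; the following is a hypothetical, minimal illustration (the target URL, user counts, and iteration count are placeholders), not a replacement for a dedicated WAPT.

```python
# A minimal ramp-style load test sketch (illustrative only): the number of
# concurrent virtual users is stepped up, and each user repeatedly requests
# the same URL while response times are recorded.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder target
ITERATIONS_PER_USER = 5

def virtual_user(user_id):
    timings = []
    for _ in range(ITERATIONS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

# RAMP: step the user load up from 1 to 50 virtual users.
for users in (1, 5, 10, 25, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(virtual_user, range(users)))
    all_times = [t for per_user in results for t in per_user]
    print(f"{users:3d} users: mean response {sum(all_times)/len(all_times):.3f}s")
```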
https://en.wikipedia.org/wiki/Web_testing
Decision tablesare a concise visual representation for specifying which actions to perform depending on given conditions. Decision table is the term used for aControl tableorState-transition tablein the field ofBusiness process modeling; they are usually formatted as the transpose of the way they are formatted inSoftware engineering. Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to. To make them more concise, many decision tables include in their condition alternatives adon't caresymbol. This can be a hyphen[1][2][3]or blank,[4]although using a blank is discouraged as it may merely indicate that the decision table has not been finished.[citation needed]One of the uses of decision tables is to reveal conditions under which certain input factors are irrelevant on the actions to be taken, allowing these input tests to be skipped and thereby streamlining decision-making procedures.[5] Aside from the basic four quadrant structure, decision tables vary widely in the way the condition alternatives and action entries are represented.[6][7]Some decision tables use simple true/false values to represent the alternatives to a condition (similar to if-then-else), other tables may use numbered alternatives (similar to switch-case), and some tables even use fuzzy logic or probabilistic representations for condition alternatives.[8]In a similar way, action entries can simply represent whether an action is to be performed (check the actions to perform), or in more advanced decision tables, the sequencing of actions to perform (number the actions to perform). A decision table is consideredbalanced[4]orcomplete[3]if it includes every possible combination of input variables. In other words, balanced decision tables prescribe an action in every situation where the input variables are provided.[4] The limited-entry decision table is the simplest to describe. The condition alternatives are simple Boolean values, and the action entries are check-marks, representing which of the actions in a given column are to be performed. The followingbalanced decision tableis an example in which a technical support company writes a decision table to enable technical support employees to efficiently diagnose printer problems based upon symptoms described to them over the phone from their clients. This is just a simple example, and it does not necessarily correspond to the reality of printer troubleshooting. Even so, it demonstrates how decision tables can scale to several conditions with many possibilities. Decision tables, especially when coupled with the use of adomain-specific language, allow developers and policy experts to work from the same information, the decision tables themselves. Tools to render nested if statements from traditional programming languages into decision tables can also be used as a debugging tool.[9][10] Decision tables have proven to be easier to understand and review than code, and have been used extensively and successfully to produce specifications for complex systems.[11] In the 1960s and 1970s a range of "decision table based" languages such asFiletabwere popular for business programming. Decision tables can be, and often are, embedded within computer programs and used to "drive" the logic of the program. 
A simple example might be a lookup table containing a range of possible input values and a function pointer to the section of code to process that input. Multiple conditions can be coded for in a similar manner to encapsulate the entire program logic in the form of an "executable" decision table or control table. There may be several such tables in practice, operating at different levels and often linked to each other (either by pointers or an index value).
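As an illustration of such an executable decision table, the sketch below maps each combination of Boolean conditions to a list of actions, loosely echoing the printer-troubleshooting example mentioned earlier; the specific conditions and actions are hypothetical.

```python
# A minimal sketch of an "executable" decision table (illustrative only).
# Conditions: (printer does not print, red light flashing, printer unrecognised).
DECISION_TABLE = {
    # (does_not_print, red_light, unrecognised): actions to suggest
    (True,  True,  True):  ["Check the power cable", "Check the printer-computer cable", "Check/replace ink"],
    (True,  True,  False): ["Check/replace ink"],
    (True,  False, True):  ["Check the printer-computer cable", "Ensure printer software is installed"],
    (True,  False, False): ["Check for a paper jam"],
    (False, True,  True):  ["Ensure printer software is installed", "Check/replace ink"],
    (False, True,  False): ["Check/replace ink"],
    (False, False, True):  ["Ensure printer software is installed"],
    (False, False, False): [],   # no action needed
}

def diagnose(does_not_print, red_light, unrecognised):
    """Look up the actions for the observed combination of conditions."""
    return DECISION_TABLE[(does_not_print, red_light, unrecognised)]

print(diagnose(does_not_print=True, red_light=False, unrecognised=True))
```

Because the table is balanced (every combination of condition values appears exactly once), the lookup never falls through, which is the property the article describes for complete decision tables.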
https://en.wikipedia.org/wiki/Decision_table
Incomputing, thedesktop metaphoris aninterface metaphorwhich is a set of unifying concepts used bygraphical user interfacesto help users interact more easily with the computer.[1]The desktop metaphor treats thecomputer monitoras if it is the top of the user'sdesk, upon whichobjectssuch asdocumentsandfoldersof documents can be placed. A document can be opened into awindow, which represents a paper copy of the document placed on the desktop. Small applications calleddesk accessoriesare also available, such as a desk calculator or notepad, etc. The desktop metaphor itself has been extended and stretched with various implementations ofdesktop environments, since access to features andusabilityof the computer are usually more important than maintaining the 'purity' of themetaphor. Hence one can find trash cans on the desktop, as well as disks and network volumes (which can be thought of asfiling cabinets—not something normally foundona desktop). Other features such asmenu barsortaskbarshave no direct counterpart on a real-world desktop, though this may vary by environment and the function provided; for instance, a familiarwall calendarcan sometimes be displayed or otherwise accessed via a taskbar or menu bar belonging to the desktop. The desktop metaphor was first introduced byAlan Kay, David C. Smith, and others atXerox PARCin 1970 and elaborated in a series of innovative software applications developed by PARC scientists throughout the ensuing decade. The first computer to use an early version of the desktop metaphor was the experimentalXerox Alto,[2][3]and the first commercial computer that adopted this kind of interface was theXerox Star. The use ofwindow controlsto contain related information predates the desktop metaphor, with a primitive version appearing inDouglas Engelbart's "Mother of All Demos",[4]though it was incorporated by PARC in the environment of theSmalltalklanguage.[5] One of the first desktop-like interfaces on the market was a program calledMagic DeskI. Built as a cartridge for theCommodore 64home computerin 1983, a very primitive GUI presented alow resolutionsketch of a desktop, complete with telephone, drawers, calculator, etc. The user made their choices by moving aspritedepicting a hand pointing by using the samejoystickthe user may have used forvideo gaming. Onscreen options were chosen by pushing the fire button on the joystick. The Magic Desk I program featured atypewritergraphically emulated complete with audio effects. Other applications included a calculator,rolodexorganiser, and aterminal emulator. Files could be archived into the drawers of the desktop. Atrashcanwas also present. The first computer to popularise the desktop metaphor, using it as a standard feature over the earliercommand-line interfacewas theApple Macintoshin 1984. The desktop metaphor is ubiquitous in modern-day personal computing; it is found in mostdesktop environmentsof modern operating systems:Windowsas well asmacOS,Linux, and otherUnix-likesystems. BeOSobserved the desktop metaphor more strictly than many other systems. For example, external hard drives appeared on the 'desktop', while internal ones were accessed clicking on aniconrepresenting the computer itself. By comparison, the Mac OS places all drives on the desktop itself by default, while in Windows the user can access the drives through an icon labelled "Computer". Amigaterminology for its desktop metaphor was taken directly from workshop jargon. 
The desktop was calledWorkbench, programs were calledtools, small applications (applets) were utilities, directories were drawers, etc. Icons of objects were animated and the directories are shown as drawers which were represented as either open or closed. As in theclassic Mac OSandmacOSdesktop, an icon for afloppy diskorCD-ROMwould appear on the desktop when the disk was inserted into the drive, as it was a virtual counterpart of a physical floppy disk or CD-ROM on the surface of a workbench. Thepaper paradigmrefers to theparadigmused by most modern computers and operating systems. The paper paradigm consists of, usually, black text on a white background, files within folders, and a "desktop". The paper paradigm was created by many individuals and organisations, such asDouglas Engelbart,Xerox PARC, andApple Computer, and was an attempt to make computers more user-friendly by making them resemble the common workplace of the time (with papers, folders, and a desktop).[6]It was first presented to the public by Engelbart in 1968, in what is now referred to as "The Mother of All Demos". From John Siracusa:[7] Back in 1984, explanations of the originalMacinterface to users who had never seen aGUIbefore inevitably included an explanation oficonsthat went something like this: "This icon represents your file on disk." But to the surprise of many, users very quickly discarded any semblance of indirection. This iconismy file. My fileisthis icon. One is not a "representation of" or an "interface to" the other. Such relationships were foreign to most people, and constituted unnecessary mental baggage when there was a much more simple and direct connection to what they knew of reality. Since then, many aspects of computers have wandered away from the paper paradigm by implementing features such as "shortcuts" to files,hypertext, and non-spatial file browsing. A shortcut (a link to a file that acts as a redirecting proxy, not the actual file) and hypertext have no real-world equivalent. Non-spatial file browsing, as well, may confuse novice users, as they can often have more than one window representing the same folder open at the same time, something that is impossible in reality. These and other departures from real-world equivalents are violations of the pure paper paradigm.
https://en.wikipedia.org/wiki/Desktop_metaphor
Approaches tosupercomputer architecturehave taken dramatic turns since the earliest systems were introduced in the 1960s. Earlysupercomputerarchitectures pioneered bySeymour Crayrelied on compact innovative designs and localparallelismto achieve superior computational peak performance.[1]However, in time the demand for increased computational power ushered in the age ofmassively parallelsystems. While the supercomputers of the 1970s used only a fewprocessors, in the 1990s, machines with thousands of processors began to appear and by the end of the 20th century, massively parallel supercomputers with tens of thousands ofcommercial off-the-shelfprocessors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some beinggraphic units) connected by fast connections.[2][3] Throughout the decades, the management ofheat densityhas remained a key issue for most centralized supercomputers.[4][5][6]The large amount of heat generated by a system may also have other effects, such as reducing the lifetime of other system components.[7]There have been diverse approaches to heat management, from pumpingFluorinertthrough the system, to a hybrid liquid-air cooling system or air cooling with normalair conditioningtemperatures.[8][9] Systems with a massive number of processors generally take one of two paths: in one approach, e.g., ingrid computingthe processing power of a large number of computers in distributed, diverse administrative domains, is opportunistically used whenever a computer is available.[10]In another approach, a large number of processors are used in close proximity to each other, e.g., in acomputer cluster. In such a centralizedmassively parallelsystem the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhancedInfinibandsystems to three-dimensionaltorus interconnects.[11][12] Since the late 1960s the growth in the power and proliferation of supercomputers has been dramatic, and the underlying architectural directions of these systems have taken significant turns. 
While the early supercomputers relied on a small number of closely connected processors that accessedshared memory, the supercomputers of the 21st century use over 100,000 processors connected by fast networks.[2][3] Throughout the decades, the management ofheat densityhas remained a key issue for most centralized supercomputers.[4]Seymour Cray's "get the heat out" motto was central to his design philosophy and has continued to be a key issue in supercomputer architectures, e.g., in large-scale experiments such asBlue Waters.[4][5][6]The large amount of heat generated by a system may also have other effects, such as reducing the lifetime of other system components.[7] There have been diverse approaches to heat management,e.g., theCray 2pumpedFluorinertthrough the system, whileSystem Xused a hybrid liquid-air cooling system and theBlue Gene/Pis air-cooled with normalair conditioningtemperatures.[8][13][14]The heat from theAquasarsupercomputer is used to warm a university campus.[15][16] The heat density generated by a supercomputer has a direct dependence on the processor type used in the system, with more powerful processors typically generating more heat, given similar underlyingsemiconductor technologies.[7]While early supercomputers used a few fast, closely packed processors that took advantage of local parallelism (e.g.,pipeliningandvector processing), in time the number of processors grew, and computing nodes could be placed further away, e.g., in acomputer cluster, or could be geographically dispersed ingrid computing.[2][17]As the number of processors in a supercomputer grows, "component failure rate" begins to become a serious issue. If a supercomputer uses thousands of nodes, each of which may fail once per year on the average, then the system will experience severalnode failureseach day.[9] As the price/performance ofgeneral purpose graphic processors(GPGPUs) has improved, a number ofpetaflopsupercomputers such asTianhe-IandNebulaehave started to rely on them.[18]However, other systems such as theK computercontinue to use conventional processors such asSPARC-based designs and the overall applicability of GPGPUs in general purpose high performance computing applications has been the subject of debate, in that while a GPGPU may be tuned to score well on specific benchmarks its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application towards it.[19]However, GPUs are gaining ground and in 2012 theJaguar supercomputerwas transformed intoTitanby replacing CPUs with GPUs.[20][21][22] As the number of independent processors in a supercomputer increases, the way they access data in thefile systemand how they share and accesssecondary storageresources becomes prominent. Over the years a number of systems fordistributed file managementwere developed,e.g., theIBM General Parallel File System,BeeGFS, theParallel Virtual File System,Hadoop, etc.[23][24]A number of supercomputers on theTOP100list such as the Tianhe-I useLinux'sLustre file system.[4] TheCDC 6600series of computers were very early attempts at supercomputing and gained their advantage over the existing systems by relegating work toperipheral devices, freeing thecentral processing unit(CPU) to process actual data. 
With the MinnesotaFORTRANcompiler the 6600 could sustain 500 kiloflops on standard mathematical operations.[25] Other early supercomputers such as theCray 1andCray 2that appeared afterwards used a small number of fast processors that worked in harmony and were uniformly connected to the largest amount ofshared memorythat could be managed at the time.[3] These early architectures introducedparallel processingat the processor level, with innovations such asvector processing, in which the processor can perform several operations during oneclock cycle, rather than having to wait for successive cycles. In time, as the number of processors increased, different architectural issues emerged. Two issues that need to be addressed as the number of processors increases are the distribution of memory and processing. In the distributed memory approach, each processor is physically packaged close with some local memory. The memory associated with other processors is then "further away" based onbandwidthandlatencyparameters innon-uniform memory access. In the 1960spipeliningwas viewed as an innovation, and by the 1970s the use ofvector processorshad been well established. By the 1980s, many supercomputers used parallel vector processors.[2] The relatively small number of processors in early systems, allowed them to easily use ashared memory architecture, which allows processors to access a common pool of memory. In the early days a common approach was the use ofuniform memory access(UMA), in which access time to a memory location was similar between processors. The use ofnon-uniform memory access(NUMA) allowed a processor to access its own local memory faster than other memory locations, whilecache-only memory architectures(COMA) allowed for the local memory of each processor to be used as cache, thus requiring coordination as memory values changed.[26] As the number of processors increases, efficientinterprocessor communicationand synchronization on a supercomputer becomes a challenge. A number of approaches may be used to achieve this goal. For instance, in the early 1980s, in theCray X-MPsystem,shared registerswere used. In this approach, all processors had access toshared registersthat did not move data back and forth but were only used for interprocessor communication and synchronization. However, inherent challenges in managing a large amount of shared memory among many processors resulted in a move to moredistributed architectures.[27] During the 1980s, as the demand for computing power increased, the trend to a much larger number of processors began, ushering in the age ofmassively parallelsystems, withdistributed memoryanddistributed file systems,[2]given thatshared memory architecturescould not scale to a large number of processors.[28]Hybrid approaches such asdistributed shared memoryalso appeared after the early systems.[29] The computer clustering approach connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast, privatelocal area network.[30]The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via asingle system imageconcept.[30] Computer clustering relies on a centralized management approach which makes the nodes available as orchestratedshared servers. 
It is distinct from other approaches such aspeer-to-peerorgrid computingwhich also use many nodes, but with a far moredistributed nature.[30]By the 21st century, theTOP500organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world's fastest in 2011, theK computerwith adistributed memory, cluster architecture.[31][32] When a large number of local semi-independent computing nodes are used (e.g. in a cluster architecture) the speed and flexibility of the interconnect becomes very important. Modern supercomputers have taken different approaches to address this issue, e.g.Tianhe-1uses a proprietary high-speed network based on theInfinibandQDR, enhanced withFeiTeng-1000CPUs.[4]On the other hand, theBlue Gene/L system uses a three-dimensionaltorusinterconnect with auxiliary networks for global communications.[11]In this approach each node is connected to its six nearest neighbors. A similar torus was used by theCray T3E.[12] Massive centralized systems at times use special-purpose processors designed for a specific application, and may usefield-programmable gate arrays(FPGA) chips to gain performance by sacrificing generality. Examples of special-purpose supercomputers includeBelle,[33]Deep Blue,[34]andHydra,[35]for playingchess,Gravity Pipefor astrophysics,[36]MDGRAPE-3for protein structure computation molecular dynamics[37]andDeep Crack,[38]for breaking theDEScipher. Grid computinguses a large number of computers in distributed, diverse administrative domains. It is an opportunistic approach which uses resources whenever they are available.[10]An example isBOINCavolunteer-based, opportunistic grid system.[39]SomeBOINCapplications have reached multi-petaflop levels by using close to half a million computers connected on the internet, whenever volunteer resources become available.[40]However, these types of results often do not appear in theTOP500ratings because they do not run the general purposeLinpackbenchmark. 
Although grid computing has had success in parallel task execution, demanding supercomputer applications such asweather simulationsorcomputational fluid dynamicshave remained out of reach, partly due to the barriers in reliable sub-assignment of a large number of tasks as well as the reliable availability of resources at a given time.[39][41][42] Inquasi-opportunistic supercomputinga large number of geographicallydisperse computersare orchestrated withbuilt-in safeguards.[43]The quasi-opportunistic approach goes beyondvolunteer computingon a highly distributed systems such asBOINC, or generalgrid computingon a system such as Globus by allowing themiddlewareto provide almost seamless access to many computing clusters so that existing programs in languages such asFortranorCcan be distributed among multiple computing resources.[43] Quasi-opportunistic supercomputing aims to provide a higher quality of service thanopportunistic resource sharing.[44]The quasi-opportunistic approach enables the execution of demanding applications within computer grids by establishing grid-wise resource allocation agreements; andfault tolerantmessage passing to abstractly shield against the failures of the underlying resources, thus maintaining some opportunism, while allowing a higher level of control.[10][43][45] The air-cooled IBMBlue Genesupercomputer architecture trades processor speed for low power consumption so that a larger number of processors can be used at room temperature, by using normal air-conditioning.[14][46]The second-generation Blue Gene/P system has processors with integrated node-to-node communication logic.[47]It is energy-efficient, achieving 371MFLOPS/W.[48] TheK computeris awater-cooled, homogeneous processor,distributed memorysystem with acluster architecture.[32][49]It uses more than 80,000SPARC64 VIIIfxprocessors, each with eightcores, for a total of over 700,000 cores—almost twice as many as any other system. It comprises more than 800 cabinets, each with 96 computing nodes (each with 16 GB of memory), and 6 I/O nodes. 
Although it is more powerful than the next five systems on the TOP500 list combined, at 824.56 MFLOPS/W it has the lowest power to performance ratio of any current major supercomputer system.[50][51]The follow-up system for the K computer, called thePRIMEHPC FX10uses the same six-dimensional torus interconnect, but still only one processor per node.[52] Unlike the K computer, theTianhe-1Asystem uses a hybrid architecture and integrates CPUs and GPUs.[4]It uses more than 14,000Xeongeneral-purpose processors and more than 7,000Nvidia Teslageneral-purpose graphics processing units(GPGPUs) on about 3,500blades.[53]It has 112 computer cabinets and 262 terabytes of distributed memory; 2 petabytes of disk storage is implemented viaLustreclustered files.[54][55][56][4]Tianhe-1 uses a proprietary high-speed communication network to connect the processors.[4]The proprietary interconnect network was based on theInfinibandQDR, enhanced with Chinese-madeFeiTeng-1000CPUs.[4]In the case of the interconnect the system is twice as fast as the Infiniband, but slower than some interconnects on other supercomputers.[57] The limits of specific approaches continue to be tested, as boundaries are reached through large-scale experiments, e.g., in 2011 IBM ended its participation in theBlue Waterspetaflops project at the University of Illinois.[58][59]The Blue Waters architecture was based on the IBMPOWER7processor and intended to have 200,000 cores with a petabyte of "globally addressable memory" and 10 petabytes of disk space.[6]The goal of a sustained petaflop led to design choices that optimized single-core performance, and hence a lower number of cores. The lower number of cores was then expected to help performance on programs that did not scale well to a large number of processors.[6]The large globally addressable memory architecture aimed to solve memory address problems in an efficient manner, for the same type of programs.[6]Blue Waters had been expected to run at sustained speeds of at least one petaflop, and relied on the specific water-cooling approach to manage heat. In the first four years of operation, the National Science Foundation spent about $200 million on the project. IBM released thePower 775computing node derived from that project's technology soon thereafter, but effectively abandoned the Blue Waters approach.[58][59] Architectural experiments are continuing in a number of directions, e.g. theCyclops64system uses a "supercomputer on a chip" approach, in a direction away from the use of massive distributed processors.[60][61]Each 64-bit Cyclops64 chip contains 80 processors, and the entire system uses aglobally addressablememory architecture.[62]The processors are connected with non-internally blocking crossbar switch and communicate with each other via global interleaved memory. There is nodata cachein the architecture, but half of eachSRAMbank can be used as a scratchpad memory.[62]Although this type of architecture allows unstructured parallelism in a dynamically non-contiguous memory system, it also produces challenges in the efficient mapping of parallel algorithms to amany-coresystem.[61]
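As a small illustration of the torus interconnects mentioned above (three-dimensional in Blue Gene/L and the Cray T3E, six-dimensional in the K computer's successor), the sketch below computes a node's nearest neighbours with wrap-around; the function and the example dimensions are illustrative and not taken from any actual system software.

```python
# Illustrative sketch: nearest neighbours of a node in a torus interconnect,
# where each node is linked to two neighbours per dimension and coordinates
# wrap around at the edges.
def torus_neighbors(coord, dims):
    """Return the 2*len(dims) neighbours of `coord` on a torus whose side
    lengths are given by `dims` (wrap-around in every dimension)."""
    neighbors = []
    for axis, size in enumerate(dims):
        for step in (-1, +1):
            n = list(coord)
            n[axis] = (n[axis] + step) % size  # wrap around the torus
            neighbors.append(tuple(n))
    return neighbors

# A node in an 8 x 8 x 8 three-dimensional torus has exactly six neighbours:
print(torus_neighbors((0, 3, 7), (8, 8, 8)))
```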
https://en.wikipedia.org/wiki/Supercomputer_architecture
Deep reinforcement learning (DRL) is a subfield of machine learning that combines principles of reinforcement learning (RL) and deep learning. It involves training agents to make decisions by interacting with an environment to maximize cumulative rewards, while using deep neural networks to represent policies, value functions, or environment models. This integration enables DRL systems to process high-dimensional inputs, such as images or continuous control signals, making the approach effective for solving complex tasks. Since the introduction of the deep Q-network (DQN) in 2015, DRL has achieved significant successes across domains including games, robotics, and autonomous systems, and is increasingly applied in areas such as healthcare, finance, and autonomous vehicles. Deep reinforcement learning (DRL) is a part of machine learning that combines reinforcement learning (RL) and deep learning. In DRL, agents learn how decisions are to be made by interacting with environments in order to maximize cumulative rewards, while using deep neural networks to represent policies, value functions, or models of the environment. This integration enables agents to handle high-dimensional input spaces, such as raw images or continuous control signals, making DRL a widely used approach for addressing complex tasks.[1] Since the development of the deep Q-network (DQN) in 2015, DRL has led to major breakthroughs in domains such as games, robotics, and autonomous systems. Research in DRL continues to expand rapidly, with active work on challenges like sample efficiency and robustness, as well as innovations in model-based methods, transformer architectures, and open-ended learning. Applications now range from healthcare and finance to language systems and autonomous vehicles.[2] Reinforcement learning (RL) is a framework in which agents interact with environments by taking actions and learning from feedback in the form of rewards or penalties. Traditional RL methods, such as Q-learning and policy gradient techniques, rely on tabular representations or linear approximations, which often do not scale to high-dimensional or continuous input spaces. DRL emerged as a solution to this limitation by integrating RL with deep neural networks. This combination enables agents to approximate complex functions and handle unstructured input data like raw images, sensor data, or natural language. The approach became widely recognized following the success of DeepMind's deep Q-network (DQN), which achieved human-level performance on several Atari video games using only pixel inputs and game scores as feedback.[3] Since then, DRL has evolved to include various architectures and learning strategies, including model-based methods, actor-critic frameworks, and applications in continuous control environments.[4] These developments have significantly expanded the applicability of DRL across domains where traditional RL was limited. Several algorithmic approaches form the foundation of deep reinforcement learning, each with different strategies for learning optimal behavior. One of the earliest and most influential DRL algorithms is the deep Q-network (DQN), which combines Q-learning with deep neural networks.
DQN approximates the optimal action-value function with a convolutional neural network and introduced techniques such as experience replay and target networks, which stabilize training.[5] Other methods include multi-agent reinforcement learning, hierarchical RL, and approaches that integrate planning or memory mechanisms, depending on the complexity of the task and environment. DRL has been applied to a wide range of domains that require sequential decision-making and the ability to learn from high-dimensional input data. One of the most well-known applications is in games, where DRL agents have demonstrated performance comparable to or exceeding human-level benchmarks. DeepMind's AlphaGo and AlphaStar, as well as OpenAI Five, are notable examples of DRL systems mastering complex games such as Go, StarCraft II, and Dota 2.[7] While these systems have demonstrated high performance in constrained environments, their success often depends on extensive computational resources and may not generalize easily to tasks outside their training domains. In robotics, DRL has been used to train agents for tasks such as locomotion, manipulation, and navigation in both simulated and real-world environments. By learning directly from sensory input, DRL enables robots to adapt to complex dynamics without relying on hand-crafted control rules.[8] Other growing areas of application include finance (e.g., portfolio optimization), healthcare (e.g., treatment planning and medical decision-making), natural language processing (e.g., dialogue systems), and autonomous vehicles (e.g., path planning and control). These applications show how DRL deals with real-world problems involving uncertainty, sequential reasoning, and high-dimensional data.[9] DRL faces several significant challenges that limit its broader deployment. One of the most prominent issues is sample inefficiency: DRL algorithms often require millions of interactions with the environment to learn effective policies, which is impractical in many real-world settings where data collection is expensive or time-consuming.[10] Another challenge is the sparse or delayed reward problem, in which feedback signals are infrequent, making it difficult for agents to attribute outcomes to specific decisions. Techniques such as reward shaping and exploration strategies have been developed to address this issue.[11] DRL systems also tend to be sensitive to hyperparameters and lack robustness across tasks or environments. Models trained in simulation often fail when deployed in the real world because of discrepancies between simulated and real-world dynamics, a problem known as the "reality gap." Bias and fairness in DRL systems have also emerged as concerns, particularly in domains like healthcare and finance where imbalanced data can lead to unequal outcomes for underrepresented groups. Additionally, concerns about safety, interpretability, and reproducibility have become increasingly important, especially in high-stakes domains such as healthcare or autonomous driving. These issues remain active areas of research in the DRL community. Recent developments in DRL have introduced new architectures and training strategies that aim to improve performance, efficiency, and generalization. One key area of progress is model-based reinforcement learning, where agents learn an internal model of the environment to simulate outcomes before acting. This kind of approach improves sample efficiency and planning.
An example is the Dreamer algorithm, which learns a latent-space model of the environment to train agents more efficiently in complex environments.[12] Another major innovation is the use of transformer-based architectures in DRL. Unlike traditional models that rely on recurrent or convolutional networks, transformers can model long-term dependencies more effectively. The Decision Transformer and similar models treat RL as a sequence-modeling problem, enabling agents to generalize better across tasks.[13] In addition, research into open-ended learning has led to the creation of generally capable agents that can solve a range of tasks without task-specific tuning. Systems such as those developed by OpenAI show that agents trained in diverse, evolving environments can generalize across new challenges, moving toward more adaptive and flexible intelligence.[14] As deep reinforcement learning continues to evolve, researchers are exploring ways to make algorithms more efficient, robust, and generalizable across a wide range of tasks. Improving sample efficiency through model-based learning, enhancing generalization with open-ended training environments, and integrating foundation models are among the current research goals. A related area of interest is safe and ethical deployment, particularly in high-risk settings like healthcare, autonomous driving, and finance. Researchers are developing frameworks for safer exploration, interpretability, and better alignment with human values. Ensuring that DRL systems promote equitable outcomes remains an ongoing challenge, especially where historical data may under-represent marginalized populations. The future of DRL may also involve deeper integration with other subfields of machine learning, such as unsupervised learning, transfer learning, and large language models, enabling agents that can learn from diverse data modalities and interact more naturally with human users.[15]
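Returning to the DQN ingredients described earlier, the sketch below illustrates an experience-replay buffer and a periodically synchronised target network; it is a minimal, framework-free illustration with stand-in weights, not DeepMind's implementation.

```python
# A minimal sketch (illustrative only) of two DQN ingredients: an
# experience-replay buffer and a periodically synchronised target network.
# Network weights are represented as a plain dict of numpy arrays standing
# in for a real Q-network.
import random
from collections import deque
import numpy as np

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)      # oldest transitions drop out

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the correlation between consecutive steps.
        return random.sample(self.buffer, batch_size)

def sync_target(online_weights, target_weights):
    """Copy the online network's weights into the target network. Doing this
    only every few thousand steps keeps the bootstrapped regression targets
    stable while the online network keeps changing."""
    for name, value in online_weights.items():
        target_weights[name] = value.copy()

# Toy usage with stand-in weights and random transitions:
online = {"w": np.random.randn(4, 2)}
target = {"w": np.zeros((4, 2))}
buffer = ReplayBuffer(capacity=1000)
for step in range(5000):
    buffer.push(np.random.randn(4), 0, 0.0, np.random.randn(4), False)
    if step % 1000 == 0:
        sync_target(online, target)
print(len(buffer.buffer), np.allclose(online["w"], target["w"]))  # 1000 True
```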
https://en.wikipedia.org/wiki/End-to-end_reinforcement_learning
DigiNotarwas a Dutchcertificate authority, established in 1998 and acquired in January 2011 byVASCO Data Security International, Inc.[1][2]The company was hacked in June 2011 and it issued hundreds of fakecertificates, some of which were used forman-in-the-middle attackson IranianGmailusers. The company was declared bankrupt in September 2011. On 3 September 2011, after it had become clear that a security breach had resulted in thefraudulentissuing ofcertificates, theDutch governmenttook over operational management of DigiNotar's systems.[3]That same month, the company was declared bankrupt.[4][5] An investigation into the hacking by Dutch-government appointed Fox-IT consultancy identified 300,000IranianGmailusers as the main target of the hack (targeted subsequently usingman-in-the-middleattacks), and suspected that the Iranian government was behind the hack.[6]While nobody has been charged with the break-in and compromise of the certificates (as of 2013[update]), cryptographerBruce Schneiersays the attack may have been "either the work of theNSA, or exploited by the NSA."[7]However, this has been disputed, with others saying the NSA had only detected a foreignintelligence serviceusing the fake certificates.[8]The hack has also been claimed by the so-called Comodohacker, allegedly a 21-year-old Iranian student, who also claimed to have hacked four other certificate authorities, includingComodo, a claim found plausible byF-Secure, although not fully explaining how it led to the subsequent "widescale interception of Iranian citizens".[9] After more than 500 fake DigiNotar certificates were found, major web browser makers reacted by blacklisting all DigiNotar certificates.[10]The scale of the incident was used by some organizations likeENISAandAccessNow.orgto call for a deeper reform ofHTTPSin order to remove the weakest link possibility that a single compromised CA can affect that many users.[11][12] DigiNotar's main activity was as acertificate authority, issuing two types of certificate. First, they issued certificates under their own name (where the root CA was "DigiNotar Root CA").[13]Entrustcertificates were not issued since July 2010, but some were still valid up to July 2013.[14][15]Secondly, they issued certificates for the Dutch government'sPKIoverheid("PKIgovernment") program. This issuance was via two intermediate certificates, each of which chained up to one of the two "Staat der Nederlanden" root CAs. National and local Dutch authorities and organisations offering services for the government who want to use certificates for secure internet communication can request such a certificate. Some of the most-used electronic services offered by Dutch governments used certificates from DigiNotar. Examples were the authentication infrastructureDigiDand the central car-registration organisationNetherlands Vehicle Authority[nl](RDW). DigiNotar's root certificates were removed from the trusted-root lists of all major web browsers and consumer operating systems on or around 29 August 2011;[16][17][18]the "Staat der Nederlanden" roots were initially kept because they were not believed to be compromised. However, they have since been revoked. DigiNotar was originally set up in 1998 by the DutchnotaryDick Batenburg fromBeverwijkand theKoninklijke Notariële Beroepsorganisatie[nl], the national body for Dutchcivil law notaries. 
The KNB offers all kind of central services to the notaries, and because many of the services that notaries offer are official legal procedures, security in communications is important. The KNB offered advisory services to their members on how to implement electronic services in their business; one of these activities was offering secure certificates. Dick Batenburg and the KNB formed the group TTP Notarissen (TTP Notaries), where TTP stands fortrusted third party. A notary can become a member of TTP Notarissen if they comply with certain rules. If they comply with additional rules on training and work procedures, they can become an accredited TTP Notary.[19] Although DigiNotar had been a general-purpose CA for several years, they still targeted the market for notaries and other professionals. On 10 January 2011 the company was sold to VASCO Data Security International.[1]In a VASCO press release dated 20 June 2011, one day after DigiNotar first detected an incident on their systems[20]VASCO's president andCOOJan Valcke is quoted as stating "We believe that DigiNotar's certificates are among the most reliable in the field."[21] On 20 September 2011 Vasco announced that its subsidiary DigiNotar was declared bankrupt after filing forvoluntary bankruptcyat theHaarlemcourt. Effective immediately the court appointed areceiver, a court-appointed trustee who takes over the management of all of DigiNotar's affairs as it proceeds through the bankruptcy process toliquidation.[4][22] Thecurator(court-appointed receiver) didn't want the report fromITSecto be published, as it might lead to additional claims towards DigiNotar.[citation needed]The report covered the way the company operated and details of the hack of 2011 that led to its bankruptcy.[citation needed] The report was made on request of the Dutch supervisory agencyOPTAwho refused to publish the report in the first place. In afreedom of information(Wet openbaarheid van bestuur[nl]) procedure started by a journalist, the receiver tried to convince the court not to allow publication of this report, and to confirm the OPTA's initial refusal to do so.[23] The report was ordered to be released, and was made public in October 2012. It shows a near total compromise of the systems. On 10 July 2011 an attacker with access to DigiNotar's systems issued awildcardcertificateforGoogle. This certificate was subsequently used by unknown persons inIranto conduct aman-in-the-middle attackagainst Google services.[24][25]On 28 August 2011 certificate problems were observed on multipleInternet service providersin Iran.[26]The fraudulent certificate was posted onPastebin.[27]According to a subsequent news release by VASCO, DigiNotar had detected an intrusion into its certificate authority infrastructure on 19 July 2011.[28]DigiNotar did not publicly reveal the security breach at the time. 
After this certificate was found, DigiNotar belatedly admitted dozens of fraudulent certificates had been created, including certificates for the domains ofYahoo!,Mozilla,WordPressandThe Tor Project.[29]DigiNotar could not guarantee all such certificates had beenrevoked.[30]Googleblacklisted247 certificates inChromium,[31]but the final known total of misissued certificates is at least 531.[32]Investigation byF-Securealso revealed that DigiNotar's website had been defaced by Turkish and Iranian hackers in 2009.[33] In reaction, Mozilla revoked trust in the DigiNotar root certificate in all supported versions of itsFirefoxbrowser andMicrosoftremoved the DigiNotar root certificate from its list of trusted certificates with its browsers on all supported releases of Microsoft Windows.[34][35]Chromium/Google Chromewas able to detect the fraudulent*.google.comcertificate, due to its "certificate pinning" security feature;[36]however, this protection was limited to Google domains, which resulted in Google removing DigiNotar from its list of trusted certificate issuers.[24]Operaalways checks the certificate revocation list of the certificate's issuer and so they initially stated they did not need a security update.[37][38]However, later they also removed the root from their trust store.[39]On 9 September 2011Appleissued Security Update 2011-005 forMac OS X10.6.8 and 10.7.1, which removes DigiNotar from the list of trusted root certificates and EV certificate authorities.[40]Without this update,Safariand Mac OS X do not detect the certificate's revocation, and users must use theKeychainutility to manually delete the certificate.[41]Apple did not patch iOS until 13 October 2011, with the release of iOS 5.[42] DigiNotar also controlled an intermediate certificate which was used for issuing certificates as part of theDutch government’spublic key infrastructure"PKIoverheid" program, chaining up to the official Dutch government certification authority (Staat der Nederlanden).[43]Once this intermediate certificate was revoked or marked as untrusted by browsers, thechain of trustfor their certificates was broken, and it was difficult to access services such as theidentity managementplatformDigiDand theTax and Customs Administration.[44]GOVCERT.NL[nl], the Dutchcomputer emergency response team, initially did not believe the PKIoverheid certificates had been compromised,[45]although security specialists were uncertain.[30][46]Because these certificates were initially thought not to be compromised by the security breach, they were, at the request of the Dutch authorities, kept exempt from the removal of trust[43][47]– although one of the two, the active "Staat der Nederlanden - G2" root certificate, was overlooked by the Mozilla engineers and accidentally distrusted in the Firefox build.[48]However, this assessment was rescinded after an audit by the Dutch government, and the DigiNotar-controlled intermediates in the "Staat der Nederlanden" hierarchy were also blacklisted by Mozilla in the next security update, and also by other browser manufacturers.[49]The Dutch government announced on 3 September 2011 that they would switch to a different firm as certificate authority.[50] After the initial claim that the certificates under the DigiNotar-controlled intermediate certificate in thePKIoverheidhierarchy weren't affected, further investigation by an external party, the Fox-IT consultancy, showed evidence of hacker activity on those machines as well. 
Consequently, the Dutch government decided on 3 September 2011 to withdraw their earlier statement that nothing was wrong.[51](The Fox-IT investigators dubbed the incident "Operation Black Tulip".[52]) The Fox-IT report identified 300,000 Iranian Gmail accounts as the main victims of the hack.[6] DigiNotar was only one of the available CAs in PKIoverheid, so not all certificates used by the Dutch government under their root were affected. When the Dutch government decided that they had lost their trust in DigiNotar, they took back control over the company's intermediate certificate in order to manage an orderly transition, and they replaced the untrusted certificates with new ones from one of the other providers.[51]The much-used DigiD platform now[when?]uses a certificate issued byGetronicsPinkRoccade Nederland B.V.[53]According to the Dutch government, DigiNotar gave them its full co-operation with these procedures. After the removal of trust in DigiNotar, there are now[when?]fourCertification Service Providers(CSP) that can issue certificates under thePKIoverheidhierarchy:[54] All four companies have opened special help desks and/or published information on their websites as to how organisations that have a PKIoverheid certificate from DigiNotar can request a new certificate from one of the remaining four providers.[55][56][57][58]
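The certificate pinning mechanism that let Chromium catch the fraudulent *.google.com certificate can be illustrated with a short sketch; the hostname and pinned fingerprint below are placeholders, and real deployments typically pin a hash of the public key rather than of the whole certificate.

```python
# Illustrative sketch only (not Chromium's implementation): the essence of
# certificate pinning is to compare the fingerprint of the certificate a
# server actually presents against a fingerprint known in advance, so that a
# certificate that chains to a trusted CA but was mis-issued is still rejected.
import hashlib
import socket
import ssl

def leaf_cert_fingerprint(host, port=443):
    """Return the SHA-256 fingerprint of the certificate presented by host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

# PINNED is a placeholder; a real client would ship the expected fingerprint
# with the application.
PINNED = "0000000000000000000000000000000000000000000000000000000000000000"
if leaf_cert_fingerprint("example.org") != PINNED:
    raise ssl.SSLError("certificate does not match the pinned fingerprint")
```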
https://en.wikipedia.org/wiki/DigiNotar
Rheology(/riːˈɒlədʒi/; fromGreekῥέω(rhéō)'flow'and-λoγία(-logia)'study of') is the study of the flow ofmatter, primarily in afluid(liquidorgas) state but also as "softsolids" or solids under conditions in which they respond withplasticflow rather than deformingelasticallyin response to an applied force.[1]Rheology is the branch ofphysicsthat deals with thedeformationand flow of materials, both solids and liquids.[1] The termrheologywas coined byEugene C. Bingham, a professor atLafayette College, in 1920 from a suggestion by a colleague,Markus Reiner.[2][3]The term was inspired by theaphorismofHeraclitus(often mistakenly attributed toSimplicius),panta rhei(πάντα ῥεῖ, 'everything flows'[4][5]) and was first used to describe the flow of liquids and the deformation of solids. It applies to substances that have a complex microstructure, such asmuds,sludges,suspensions, andpolymersand otherglass formers(e.g., silicates), as well as many foods and additives,bodily fluids(e.g., blood) and otherbiological materials, and other materials that belong to the class ofsoft mattersuch as food. Newtonian fluidscan be characterized by a single coefficient ofviscosityfor a specific temperature. Although this viscosity will change with temperature, it does not change with thestrain rate. Only a small group of fluids exhibit such constant viscosity. The large class of fluids whose viscosity changes with the strain rate (the relativeflow velocity) are callednon-Newtonian fluids. Rheology generally accounts for the behavior of non-Newtonian fluids by characterizing the minimum number of functions that are needed to relate stresses with rate of change of strain or strain rates. For example,ketchupcan have itsviscosityreduced by shaking (or other forms of mechanical agitation, where the relative movement of different layers in the material actually causes the reduction in viscosity), but water cannot. Ketchup is a shear-thinning material, likeyogurtandemulsionpaint(US terminologylatex paintoracrylic paint), exhibitingthixotropy, where an increase in relative flow velocity will cause a reduction in viscosity, for example, by stirring. Some other non-Newtonian materials show the opposite behavior,rheopecty(viscosity increasing with relative deformation), and are called shear-thickening ordilatantmaterials. Since SirIsaac Newtonoriginated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often calledNon-Newtonian fluid mechanics.[1] The experimental characterisation of a material's rheological behaviour is known asrheometry, although the termrheologyis frequently used synonymously with rheometry, particularly by experimentalists. Theoretical aspects of rheology are the relation of the flow/deformation behaviour of material and its internal structure (e.g., the orientation and elongation of polymer molecules) and the flow/deformation behaviour of materials that cannot be described by classical fluid mechanics or elasticity. In practice, rheology is principally concerned with extendingcontinuum mechanicsto characterize the flow of materials that exhibit a combination ofelastic,viscousandplasticbehavior by properly combiningelasticityand (Newtonian)fluid mechanics. It is also concerned with predicting mechanical behavior (on the continuum mechanical scale) based on the micro- or nanostructure of the material, e.g. themolecularsize and architecture ofpolymersin solution or the particle size distribution in a solid suspension. 
Materials with the characteristics of a fluid will flow when subjected to a stress, which is defined as the force per area. There are different sorts of stress (e.g. shear, torsional, etc.), and materials can respond differently under different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses, internal strain gradients, and flow velocities.[1][6][7][8] Rheology unites the seemingly unrelated fields of plasticity and non-Newtonian fluid dynamics by recognizing that materials undergoing these types of deformation are unable to support a stress (particularly a shear stress, since it is easier to analyze shear deformation) in static equilibrium. In this sense, a solid undergoing plastic deformation is a fluid, although no viscosity coefficient is associated with this flow. Granular rheology refers to the continuum mechanical description of granular materials. One of the major tasks of rheology is to establish by measurement the relationships between strains (or rates of strain) and stresses, although a number of theoretical developments (such as assuring frame invariance) are also required before using the empirical data. These experimental techniques are known as rheometry and are concerned with the determination of well-defined rheological material functions. Such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics. The characterization of flow or deformation originating from a simple shear stress field is called shear rheometry (or shear rheology). The study of extensional flows is called extensional rheology. Shear flows are much easier to study, and thus much more experimental data are available for shear flows than for extensional flows. On one end of the spectrum we have an inviscid or a simple Newtonian fluid, and on the other end a rigid solid; the behavior of all materials thus falls somewhere between these two ends. The difference in material behavior is characterized by the level and nature of the elasticity present in the material when it deforms, which takes the material behavior into the non-Newtonian regime. The non-dimensional Deborah number is designed to account for the degree of non-Newtonian behavior in a flow. The Deborah number is defined as the ratio of the characteristic time of relaxation (which depends purely on the material and other conditions like the temperature) to the characteristic time of the experiment or observation.[3][10] Small Deborah numbers represent Newtonian flow, non-Newtonian behavior (with both viscous and elastic effects present) occurs for intermediate-range Deborah numbers, and high Deborah numbers indicate an elastic/rigid solid. Since the Deborah number is a relative quantity, either the numerator or the denominator can alter the number; a very small Deborah number can be obtained for a fluid with an extremely small relaxation time or a very large experimental time, for example. In fluid mechanics, the Reynolds number is a measure of the ratio of inertial forces ($v_{s}\rho$) to viscous forces ($\mu/L$) and consequently it quantifies the relative importance of these two types of effect for given flow conditions. Under low Reynolds numbers viscous effects dominate and the flow is laminar, whereas at high Reynolds numbers inertia predominates and the flow may be turbulent.
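A toy numerical illustration of the Deborah number defined above follows; the relaxation times and observation time are assumed, order-of-magnitude values chosen for the sketch, not taken from the article.

```python
# Toy illustration (assumed values) of the Deborah number
# De = relaxation time / observation time, which gauges how solid-like or
# fluid-like a material behaves on the time scale of the experiment.
def deborah_number(relaxation_time_s, observation_time_s):
    return relaxation_time_s / observation_time_s

# Water relaxes on roughly picosecond time scales, so on a 1 s experiment it
# is Newtonian (De << 1); a polymer melt with a ~10 s relaxation time probed
# for 1 s behaves predominantly elastically (De >> 1).
for name, tau in [("water", 1e-12), ("polymer melt", 10.0)]:
    print(name, deborah_number(tau, observation_time_s=1.0))
```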
However, since rheology is concerned with fluids which do not have a fixed viscosity, but one which can vary with flow and time, calculation of the Reynolds number can be complicated. It is one of the most important dimensionless numbers in fluid dynamics and is used, usually along with other dimensionless numbers, to provide a criterion for determining dynamic similitude. When two geometrically similar flow patterns, in perhaps different fluids with possibly different flow rates, have the same values for the relevant dimensionless numbers, they are said to be dynamically similar. Typically it is given as Re = ρ vs L / μ, where ρ is the fluid density, vs is the mean flow speed, L is a characteristic length scale, and μ is the dynamic viscosity of the fluid.

Rheometers are instruments used to characterize the rheological properties of materials, typically fluids that are melts or solutions. These instruments impose a specific stress field or deformation on the fluid, and monitor the resultant deformation or stress. Instruments can be run in steady flow or oscillatory flow, in both shear and extension.

Rheology has applications in materials science, engineering, geophysics, physiology, human biology and pharmaceutics. Materials science is utilized in the production of many industrially important substances, such as cement, paint, and chocolate, which have complex flow characteristics. In addition, plasticity theory has been similarly important for the design of metal forming processes. The science of rheology and the characterization of viscoelastic properties in the production and use of polymeric materials has been critical for the production of many products for use in both the industrial and military sectors. The study of the flow properties of liquids is important for pharmacists working in the manufacture of several dosage forms, such as simple liquids, ointments, creams, and pastes. The flow behavior of liquids under applied stress is of great relevance in the field of pharmacy. Flow properties are used as important quality control tools to maintain the quality of the product and reduce batch-to-batch variations.

Examples may be given to illustrate the potential applications of these principles to practical problems in the processing[11] and use of rubbers, plastics, and fibers. Polymers constitute the basic materials of the rubber and plastic industries and are of vital importance to the textile, petroleum, automobile, paper, and pharmaceutical industries. Their viscoelastic properties determine the mechanical performance of the final products of these industries, and also the success of processing methods at intermediate stages of production. In viscoelastic materials, such as most polymers and plastics, the presence of liquid-like behaviour depends on, and so varies with, the rate of applied load, i.e., how quickly a force is applied. The silicone toy 'Silly Putty' behaves quite differently depending on the time rate of applying a force. Pull on it slowly and it exhibits continuous flow, similar to that evidenced in a highly viscous liquid. Alternatively, when hit hard and directly, it shatters like a silicate glass. In addition, conventional rubber undergoes a glass transition (often called a rubber-glass transition). For example, the Space Shuttle Challenger disaster was caused by rubber O-rings that were being used well below their glass transition temperature on an unusually cold Florida morning, and thus could not flex adequately to form proper seals between sections of the two solid-fuel rocket boosters.
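A minimal sketch of the Reynolds number Re = ρ vs L / μ discussed above, using illustrative water-like values; for a non-Newtonian fluid a representative apparent viscosity at the relevant shear rate would have to be substituted for the constant μ.

    # Minimal sketch (illustrative values): Reynolds number for a Newtonian fluid.

    def reynolds_number(density, velocity, length, viscosity):
        return density * velocity * length / viscosity

    # Water-like values: rho = 1000 kg/m^3, v = 1 m/s, L = 0.05 m, mu = 1e-3 Pa.s
    Re = reynolds_number(1000.0, 1.0, 0.05, 1e-3)
    print(Re)   # ~50,000: inertia-dominated, so the flow is likely turbulent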
With the viscosity of a sol adjusted into a proper range, both optical-quality glass fiber and refractory ceramic fiber can be drawn, which are used for fiber-optic sensors and thermal insulation, respectively. The mechanisms of hydrolysis and condensation, and the rheological factors that bias the structure toward linear or branched structures, are the most critical issues of sol-gel science and technology.

The scientific discipline of geophysics includes the study of the flow of molten lava and of debris flows (fluid mudslides). This disciplinary branch also deals with solid Earth materials which only exhibit flow over extended time-scales. Those that display viscous behaviour are known as rheids. For example, granite can flow plastically with a negligible yield stress at room temperature (i.e. a viscous flow). Long-term creep experiments (~10 years) indicate that the viscosity of granite and glass under ambient conditions is on the order of 10²⁰ poises.[12][13]

Physiology includes the study of many bodily fluids that have complex structure and composition, and thus exhibit a wide range of viscoelastic flow characteristics. In particular there is a specialist study of blood flow called hemorheology. This is the study of the flow properties of blood and its elements (plasma and formed elements, including red blood cells, white blood cells and platelets). Blood viscosity is determined by plasma viscosity, hematocrit (the volume fraction of red blood cells, which constitute 99.9% of the cellular elements) and the mechanical behaviour of red blood cells. Therefore, red blood cell mechanics is the major determinant of the flow properties of blood. (The ocular vitreous humor is subject to rheologic observations, particularly during studies of age-related vitreous liquefaction, or synaeresis.)[14]

The leading characteristic for hemorheology has been shear thinning in steady shear flow. Other non-Newtonian rheological characteristics that blood can demonstrate include pseudoplasticity, viscoelasticity, and thixotropy.[15] There are two current major hypotheses to explain blood flow predictions and shear thinning responses. The two models also attempt to demonstrate the drive for reversible red blood cell aggregation, although the mechanism is still being debated. There is a direct effect of red blood cell aggregation on blood viscosity and circulation.[16] The foundation of hemorheology can also provide information for the modeling of other biofluids.[15] The bridging or "cross-bridging" hypothesis suggests that macromolecules physically crosslink adjacent red blood cells into rouleaux structures. This occurs through adsorption of macromolecules onto the red blood cell surfaces.[15][16] The depletion layer hypothesis suggests the opposite mechanism.
The surfaces of the red blood cells are bound together by an osmotic pressure gradient that is created by depletion layers overlapping.[15]The effect of rouleaux aggregation tendency can be explained byhematocritand fibrinogen concentration in whole blood rheology.[15]Some techniques researchers use are optical trapping and microfluidics to measure cell interaction in vitro.[16] Changes to viscosity has been shown to be linked with diseases like hyperviscosity, hypertension, sickle cell anemia, and diabetes.[15]Hemorheologicalmeasurements and genomic testing technologies act as preventative measures and diagnostic tools.[15][17] Hemorheologyhas also been correlated with aging effects, especially with impaired blood fluidity, and studies have shown that physical activity may improve the thickening of blood rheology.[18] Many animals make use of rheological phenomena, for examplesandfishthat exploit the granular rheology of dry sand to "swim" in it orland gastropodsthat usesnail slimefor adhesivelocomotion. Certain animals produce specializedendogenouscomplex fluids, such as the sticky slime produced byvelvet wormsto immobilize prey or the fast-gelling underwater slime secreted byhagfishto deter predators.[19] Food rheologyis important in the manufacture and processing of food products, such as cheese[20]andgelato.[21]An adequate rheology is important for the indulgence of many common foods, particularly in the case of sauces,[22]dressings,[23]yogurt,[24]orfondue.[25] Thickening agents, or thickeners, are substances which, when added to an aqueous mixture, increase itsviscositywithout substantially modifying its other properties, such as taste. They provide body, increasestability, and improvesuspensionof added ingredients. Thickening agents are often used asfood additivesand incosmeticsandpersonal hygiene products. Some thickening agents aregelling agents, forming agel. The agents are materials used to thicken and stabilize liquid solutions,emulsions, andsuspensions. They dissolve in the liquid phase as acolloidmixture that forms a weakly cohesive internal structure. Food thickeners frequently are based on eitherpolysaccharides(starches,vegetable gums, andpectin), orproteins.[26][27] Concrete's andmortar's workability is related to the rheological properties of the freshcementpaste. The mechanical properties of hardened concrete increase if less water is used in the concrete mix design, however reducing the water-to-cement ratio may decrease the ease of mixing and application. To avoid these undesired effects,superplasticizersare typically added to decrease the apparent yield stress and the viscosity of the fresh paste. Their addition highly improves concrete and mortar properties.[28] The incorporation of various types offillersintopolymersis a common means of reducing cost and to impart certain desirable mechanical, thermal, electrical and magnetic properties to the resulting material. The advantages that filled polymer systems have to offer come with an increased complexity in the rheological behavior.[29] Usually when the use of fillers is considered, a compromise has to be made between the improved mechanical properties in the solid state on one side and the increased difficulty in melt processing, the problem of achieving uniformdispersionof the filler in the polymer matrix and the economics of the process due to the added step of compounding on the other. 
The rheological properties of filled polymers are determined not only by the type and amount of filler, but also by the shape, size and size distribution of its particles. The viscosity of filled systems generally increases with increasing filler fraction (a minimal numerical sketch is given below). This can be partially ameliorated by using broad particle size distributions (the Farris effect).[30] An additional factor is the stress transfer at the filler-polymer interface. The interfacial adhesion can be substantially enhanced via a coupling agent that adheres well to both the polymer and the filler particles. The type and amount of surface treatment on the filler are thus additional parameters affecting the rheological and material properties of filled polymeric systems. It is important to take wall slip into consideration when performing the rheological characterization of highly filled materials, as there can be a large difference between the actual strain and the measured strain.[31]

A rheologist is an interdisciplinary scientist or engineer who studies the flow of complex liquids or the deformation of soft solids. It is not a primary degree subject; there is no qualification of rheologist as such. Most rheologists have a qualification in mathematics, the physical sciences (e.g. chemistry, physics, geology, biology), engineering (e.g. mechanical, chemical, materials science and engineering, plastics engineering, or civil engineering), medicine, or certain technologies, notably materials or food. Typically, a small amount of rheology may be studied when obtaining a degree, but a person working in rheology will extend this knowledge during postgraduate research or by attending short courses and by joining a professional association.
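Returning to the filled-polymer discussion above, the growth of viscosity with filler volume fraction is often estimated with the Krieger–Dougherty relation; this is a standard relation from suspension rheology rather than one given in this article, and the maximum packing fraction and intrinsic viscosity used below are illustrative assumptions.

    # Minimal sketch (assumed parameters): the Krieger-Dougherty relation
    # eta_r = (1 - phi/phi_max) ** (-[eta] * phi_max) for the relative viscosity
    # of a suspension as a function of filler volume fraction phi.

    def relative_viscosity(phi, phi_max=0.64, intrinsic_viscosity=2.5):
        return (1.0 - phi / phi_max) ** (-intrinsic_viscosity * phi_max)

    for phi in (0.05, 0.2, 0.4, 0.6):
        print(phi, relative_viscosity(phi))   # viscosity rises steeply as phi nears phi_max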
https://en.wikipedia.org/wiki/Rheology
In statistics, additive smoothing, also called Laplace smoothing[1] or Lidstone smoothing, is a technique used to smooth count data, eliminating issues caused by certain values having 0 occurrences. Given a set of observation counts x = ⟨x1, x2, …, xd⟩ from a d-dimensional multinomial distribution with N trials, a "smoothed" version of the counts gives the estimator

θ̂i = (xi + α) / (N + αd),   i = 1, …, d,

where the smoothed count x̂i = N θ̂i, and the "pseudocount" α > 0 is a smoothing parameter, with α = 0 corresponding to no smoothing (this parameter is explained in § Pseudocount below). Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) xi/N and the uniform probability 1/d. Common choices for α are 0 (no smoothing), 1/2 (the Jeffreys prior), or 1 (Laplace's rule of succession),[2][3] but the parameter may also be set empirically based on the observed data.

From a Bayesian point of view, this corresponds to the expected value of the posterior distribution, using a symmetric Dirichlet distribution with parameter α as a prior distribution. In the special case where the number of categories is 2, this is equivalent to using a beta distribution as the conjugate prior for the parameters of the binomial distribution. Laplace came up with this smoothing technique when he tried to estimate the chance that the sun will rise tomorrow. His rationale was that even given a large sample of days with the rising sun, we still cannot be completely sure that the sun will still rise tomorrow (known as the sunrise problem).[4]

A pseudocount is an amount (not generally an integer, despite its name) added to the number of observed cases in order to change the expected probability in a model of those data, when not known to be zero. It is so named because, roughly speaking, a pseudocount of value α weighs into the posterior distribution similarly to each category having an additional count of α. If the frequency of each item i is xi out of N samples, the empirical probability of event i is

pi, empirical = xi / N,

but the posterior probability when additively smoothed is

pi, α-smoothed = (xi + α) / (N + αd),

as if to increase each count xi by α a priori. Depending on the prior knowledge, which is sometimes a subjective value, a pseudocount may have any non-negative finite value. It may only be zero (or the possibility ignored) if impossible by definition, such as the possibility of a decimal digit of π being a letter, or a physical possibility that would be rejected and so not counted, such as a computer printing a letter when a valid program for π is run, or excluded and not counted because of no interest, such as if only interested in the zeros and ones. Generally, there is also a possibility that no value may be computable or observable in a finite time (see the halting problem). But at least one possibility must have a non-zero pseudocount, otherwise no prediction could be computed before the first observation. The relative values of pseudocounts represent the relative prior expected probabilities of their possibilities. The sum of the pseudocounts, which may be very large, represents the estimated weight of the prior knowledge compared with all the actual observations (one for each) when determining the expected probability.
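A minimal sketch of the smoothed estimator θ̂i = (xi + α)/(N + αd) defined above; the counts are made up for illustration.

    # Minimal sketch (made-up counts): additive smoothing of category counts.

    def additive_smoothing(counts, alpha=1.0):
        """Return smoothed probability estimates for a list of category counts."""
        N = sum(counts)
        d = len(counts)
        return [(x + alpha) / (N + alpha * d) for x in counts]

    counts = [0, 3, 7]                              # one category was never observed
    print(additive_smoothing(counts, alpha=1.0))    # [1/13, 4/13, 8/13]: no zero probabilities
    print(additive_smoothing(counts, alpha=0.0))    # alpha = 0: raw relative frequencies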
In any observed data set orsamplethere is the possibility, especially with low-probabilityeventsand with small data sets, of a possible event not occurring. Its observed frequency is therefore zero, apparently implying a probability of zero. This oversimplification is inaccurate and often unhelpful, particularly in probability-basedmachine learningtechniques such asartificial neural networksandhidden Markov models. By artificially adjusting the probability of rare (but not impossible) events so those probabilities are not exactly zero,zero-frequency problemsare avoided. Also seeCromwell's rule. One common approach is to add 1 to each observed number of events, including the zero-count possibilities. This is sometimes called Laplace'srule of succession. This approach is equivalent to assuming a uniform prior distribution over the probabilities for each possible event (spanning the simplex where each probability is between 0 and 1, and they all sum to 1). Using theJeffreys priorapproach, a pseudocount of one half should be added to each possible outcome. Pseudocounts should be set to one or one-half only when there is no prior knowledge at all – see theprinciple of indifference. However, given appropriate prior knowledge, the sum should be adjusted in proportion to the expectation that the prior probabilities should be considered correct, despite evidence to the contrary – seefurther analysis. Higher values are appropriate inasmuch as there is prior knowledge of the true values (for a mint-condition coin, say); lower values inasmuch as there is prior knowledge that there is probable bias, but of unknown degree (for a bent coin, say). One way to motivate pseudocounts, particularly for binomial data, is via a formula for the midpoint of aninterval estimate, particularly abinomial proportion confidence interval. The best-known is due toEdwin Bidwell Wilson, inWilson (1927): the midpoint of theWilson score intervalcorresponding to⁠z{\displaystyle z}⁠standard deviations on either side is Takingz=2{\displaystyle z=2}standard deviations to approximate a 95% confidence interval (⁠z≈1.96{\displaystyle z\approx 1.96}⁠) yields pseudocount of 2 for each outcome, so 4 in total, colloquially known as the "plus four rule": This is also the midpoint of theAgresti–Coull interval(Agresti & Coull 1998). Often the bias of an unknown trial population is tested against a control population with known parameters (incidence rates)μ=⟨μ1,μ2,…,μd⟩.{\displaystyle {\boldsymbol {\mu }}=\langle \mu _{1},\mu _{2},\ldots ,\mu _{d}\rangle .}In this case the uniform probability1/d{\displaystyle 1/d}should be replaced by the known incidence rate of the control populationμi{\displaystyle \mu _{i}}to calculate the smoothed estimator: As a consistency check, if the empirical estimator happens to equal the incidence rate, i.e.μi=xi/N,{\displaystyle \mu _{i}=x_{i}/N,}the smoothed estimator is independent ofα{\displaystyle \alpha }and also equals the incidence rate. Additive smoothing is commonly a component ofnaive Bayes classifiers. In abag of words modelof natural language processing and information retrieval, the data consists of the number of occurrences of each word in a document. Additive smoothing allows the assignment of non-zero probabilities to words which do not occur in the sample. Studies have shown that additive smoothing is more effective than other probability smoothing methods in several retrieval tasks such as language-model-basedpseudo-relevance feedbackandrecommender systems.[5][6]
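The Wilson score midpoint mentioned above has the standard form (x + z²/2)/(n + z²) for x successes in n trials, and setting z = 2 gives the "plus four" estimate (x + 2)/(n + 4); the following is a minimal sketch with made-up counts.

    # Minimal sketch (made-up counts): the Wilson score midpoint and its z = 2
    # special case, the "plus four rule", both of which act like adding pseudocounts.

    def wilson_midpoint(successes, trials, z=2.0):
        return (successes + z * z / 2.0) / (trials + z * z)

    def plus_four(successes, trials):
        return (successes + 2) / (trials + 4)

    x, n = 0, 10                      # an event never observed in 10 trials
    print(x / n)                      # raw estimate: 0.0
    print(wilson_midpoint(x, n))      # 2/14, roughly 0.143
    print(plus_four(x, n))            # the same value, since z = 2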
https://en.wikipedia.org/wiki/Additive_smoothing
Morphological typologyis awayof classifying the languages of the world that groups languages according to their commonmorphologicalstructures. The field organizes languages on the basis of how those languages formwordsby combiningmorphemes.Analyticlanguages contain very littleinflection, instead relying on features likeword orderand auxiliary words to convey meaning.Syntheticlanguages, ones that are not analytic, are divided into two categories:agglutinativeandfusionallanguages. Agglutinative languages rely primarily on discrete particles (prefixes,suffixes, andinfixes) for inflection, while fusional languages "fuse" inflectional categories together, often allowing one word ending to contain several categories, such that the original root can be difficult to extract. A further subcategory of agglutinative languages arepolysyntheticlanguages, which takeagglutinationto a higher level by constructing entire sentences, includingnouns, as one word. Analytic, fusional, and agglutinative languages can all be found in many regions of the world. However, each category is dominant in some families and regions and essentially nonexistent in others. Analytic languages encompass theSino-Tibetanfamily, includingChinese, many languages in Southeast Asia, the Pacific, and West Africa, and a few of theGermanic languages. Fusional languages encompass most of theIndo-Europeanfamily—for example,French,Russian, andHindi—as well as theSemiticfamily and a few members of theUralicfamily. Most of the world's languages, however, are agglutinative, including theTurkic,Japonic,Dravidian, andBantulanguages and most families in the Americas, Australia, the Caucasus, and non-SlavicRussia.Constructed languagestake a variety of morphological alignments. The concept of discrete morphological categories has been criticized. Some linguists argue that most, if not all, languages are in a permanent state of transition, normally from fusional to analytic to agglutinative to fusional again. Others take issue with the definitions of the categories, arguing that they conflate several distinct, if related, variables. The field was first developed by brothersFriedrich von SchlegelandAugust von Schlegel.[citation needed] Analytic languages show a low ratio ofmorphemestowords; in fact, the correspondence is nearly one-to-one. Sentences in analytic languages are composed of independent root morphemes. Grammatical relations between words are expressed by separate words where they might otherwise be expressed by affixes, which are present to a minimal degree in such languages. There is little to no morphological change in words: they tend to be uninflected. Grammatical categories are indicated by word order (for example, inversion of verb and subject for interrogative sentences) or by bringing in additional words (for example, a word for "some" or "many" instead of a pluralinflectionlike English-s). Individual words carry a general meaning (root concept); nuances are expressed by other words. Finally, in analytic languages context and syntax are more important thanmorphology. Analytic languages include some of the majorEast Asian languages, such asChinese, andVietnamese. Note that theideographic writingsystems of these languages play a strong role in regimenting linguistic continuity according to an analytic, or isolating, morphology (cf.orthography).[citation needed] Additionally,Englishis moderately analytic, and it andAfrikaanscan be considered as some of the most analytic of all Indo-European languages. 
However, they are traditionally analyzed asfusional languages. A related concept is theisolating language, one in which there is only one, or on average close to one,morphemeper word. Not all analytic languages are isolating; for example, Chinese and English possess manycompound words, but contain few inflections for them. Synthetic languages form words by affixing a given number of dependent morphemes to a root morpheme. The morphemes may be distinguishable from the root, or they may not. They may be fused with it or among themselves (in that multiple pieces of grammatical information may potentially be packed into one morpheme). Word order is less important for these languages than it is for analytic languages, since individual words express the grammatical relations that would otherwise be indicated by syntax. In addition, there tends to be a high degree ofconcordance(agreement, or cross-reference between different parts of the sentence). Therefore, morphology in synthetic languages is more important than syntax. MostIndo-European languagesare moderately synthetic. There are two subtypes of synthesis, according to whether morphemes are clearly differentiable or not. These subtypes areagglutinativeandfusional(orinflectionalorflectionalin older terminology). Morphemes in fusional languages are not readily distinguishable from the root or among themselves. Several grammatical bits of meaning may be fused into one affix. Morphemes may also be expressed by internal phonological changes in the root (i.e.morphophonology), such asconsonant gradationandvowel gradation, or bysuprasegmentalfeatures such asstressortone, which are of course inseparable from the root. TheIndo-EuropeanandSemiticlanguages are the most typically cited examples of fusional languages.[1]However, others have been described. For example,Navajois sometimes categorized as a fusional language because its complex system of verbal affixes has become condensed and irregular enough that discerning individual morphemes is rarely possible.[2][3]SomeUralic languagesare described as fusional, particularly theSami languagesandEstonian. On the other hand, not all Indo-European languages are fusional; for example, English andAfrikaans, as well as someNorth Germanic languageslean more toward the analytic. Agglutinative languages have words containing several morphemes that are always clearly differentiable from one another in that each morpheme represents only one grammatical meaning and the boundaries between those morphemes are easily demarcated; that is, the bound morphemes are affixes, and they may be individually identified. Agglutinative languages tend to have a high number of morphemes per word, and their morphology is usually highly regular, with a notable exception beingGeorgian, among others. Agglutinative languages includeHungarian,Tamil,Telugu,Kannada,Malayalam,Turkish,Saho,Mongolian,Korean,Japanese,Swahili,ZuluandIndonesian. In 1836,Wilhelm von Humboldtproposed a third category for classifying languages, a category that he labeledpolysynthetic. (The termpolysynthesiswas first used in linguistics byPeter Stephen DuPonceauwho borrowed it from chemistry.) These languages have a high morpheme-to-word ratio, a highly regular morphology, and a tendency for verb forms to include morphemes that refer to several arguments besides the subject (polypersonalism). Another feature of polysynthetic languages is commonly expressed as "the ability to form words that are equivalent to whole sentences in other languages". 
The distinction between synthetic languages and polysynthetic languages is therefore relative: the place of one language largely depends on its relation to other languages displaying similar characteristics on the same scale. Many Amerindian languages are polysynthetic; indeed, most of the world's polysynthetic languages are native to North America.[4]Inuktitutis one example, for instance the word-phrase:tavvakiqutiqarpiitroughly translates to "Do you have any tobacco for sale?".[citation needed]However, it is a common misconception that polysynthetic morphology is universal among Amerindian languages.ChinookandShoshone, for instance, are simply agglutinative, as their nouns stand mostly separate from their verbs.[1] Oligosynthetic languages are ones in which very few morphemes, perhaps only a few hundred, combine as in polysynthetic languages.Benjamin WhorfcategorizedNahuatlandBlackfootas oligosynthetic, but most linguists disagree with this classification and instead label them polysynthetic or simply agglutinative. No known languages are widely accepted as oligosynthetic.[citation needed] Constructed languages(conlangs) take a variety of morphological alignments. Despite theIndo-Europeanfamily's typical fusional alignment, mostuniversal auxiliary languagesbased on the family have ended up being agglutinative morphologically because agglutination is more transparent than fusion and thus furthers various goals of the language creators. This pattern began withVolapük, which is strongly agglutinative, and was continued withEsperanto, which tends to be agglutinative as well.[5]Other languages inspired by Esperanto likeIdoandNovialalso tend to be agglutinative, although some examples likeInterlinguamight be considered more fusional.Zonal constructed languagessuch asInterslavictend to follow the language families they are based on. Fictional languagesvary amongJ. R. R. Tolkien's languages for theMiddle earthuniverse, for example,Sindarinis fusional whileQuenyais agglutinative.[6]Amongengineered languages,Toki Ponais completely analytic, as it contains only a limited set of words with no inflections or compounds.Lojbanis analytic to the extent that everygismu(basic word, not counting particles) involves pre-determined syntactical roles for everygismucoming after it in a clause, though it does involve agglutination of roots when formingcalques.[7]Ithkuil, on the other hand, contains both agglutination in its addition of affixes and extreme fusion in that these affixes often result from the fusion of numerous morphemes viaablaut.[8] While the above scheme of analytic, fusional, and agglutinative languages dominated linguistics for many years—at least since the 1920s—it has fallen out of favor more recently. A common objection has been that most languages display features of all three types, if not in equal measure, some of them contending that a fully fusional language would be completelysuppletive. Jennifer Garland of theUniversity of California, Santa BarbaragivesSinhalaas an example of a language that demonstrates the flaws in the traditional scheme: she argues that while its affixes,clitics, andpostpositionswould normally be considered markers of agglutination, they are too closely intertwined to the root, yet classifying the language as primarily fusional, as it usually is, is also unsatisfying.[9] R. M. W. Dixon(1998) theorizes that languages normally evolve in a cycle fromfusionaltoanalytictoagglutinativeto fusional again. 
He analogizes this cycle to a clock, placing fusional languages at 12:00, analytic languages at 4:00, and agglutinative languages at 8:00. Dixon suggests that, for example,Old Chinesewas at about 3:00 (mostly analytic with some fusional elements), while modern varieties are around 5:00 (leaning instead toward agglutination), and also guesses thatProto-Tai-Kadaimay have been fusional. On the other hand, he argues that modernFinno-UgricandDravidianlanguages are on the transition from agglutinative to fusional, with the Finno-Ugric family being further along. Dixon cites theEgyptian languageas one that has undergone the entire cycle in three thousand years.[10] Other linguists have proposed similar concepts. For instance,Elly van Gelderensees the regular patterns of linguistic change as a cycle. In the unidirectional cycles, older features are replaced by newer items. One example isgrammaticalization, where a lexical item became a grammatical marker. The markers may further grammaticalize, and a new marker may come in place to substitute the loss of meaning of the previous marker.[11][12] TheWorld Atlas of Language Structures(WALS) sees the categorization of languages as strictly analytic, agglutinative, or fusional as misleading, arguing that these categories conflate multiple variables. WALS lists these variables as: These categories allow to capture non-traditional distributions of typological traits. For example, high exponence for nouns (e.g., case + number) is typically thought of as a trait of fusional languages. However, it is absent in many traditionally fusional languages likeArabicbut present in many traditionally agglutinative languages likeFinnish,Yaqui, andCree.[14]
https://en.wikipedia.org/wiki/Morphological_richness
Instatistics,hypotheses suggested by a given dataset, when tested with the same dataset that suggested them, are likely to be accepted even when they are not true. This is becausecircular reasoning(double dipping) would be involved: something seems true in the limited data set; therefore we hypothesize that it is true in general; therefore we wrongly test it on the same, limited data set, which seems to confirm that it is true. Generating hypotheses based on data already observed, in the absence of testing them on new data, is referred to aspost hoctheorizing(fromLatinpost hoc, "after this"). The correct procedure is to test any hypothesis on a data set that was not used to generate the hypothesis. Testing a hypothesis suggested by the data can very easily result in false positives (type I errors). If one looks long enough and in enough different places, eventually data can be found to support any hypothesis. Yet, these positive data do not by themselves constituteevidencethat the hypothesis is correct. The negative test data that were thrown out are just as important, because they give one an idea of how common the positive results are compared to chance. Running an experiment, seeing a pattern in the data, proposing a hypothesis from that pattern, then using thesameexperimental data as evidence for the new hypothesis is extremely suspect, because data from all other experiments, completed or potential, has essentially been "thrown out" by choosing to look only at the experiments that suggested the new hypothesis in the first place. A large set of tests as described above greatly inflates theprobabilityoftype I erroras all but the data most favorable to thehypothesisis discarded. This is a risk, not only inhypothesis testingbut in allstatistical inferenceas it is often problematic to accurately describe the process that has been followed in searching and discardingdata. In other words, one wants to keep all data (regardless of whether they tend to support or refute the hypothesis) from "good tests", but it is sometimes difficult to figure out what a "good test" is. It is a particular problem instatistical modelling, where many different models are rejected bytrial and errorbefore publishing a result (see alsooverfitting,publication bias). The error is particularly prevalent indata miningandmachine learning. It also commonly occurs inacademic publishingwhere only reports of positive, rather than negative, results tend to be accepted, resulting in the effect known aspublication bias. All strategies for sound testing of hypotheses suggested by the data involve including a wider range of tests in an attempt to validate or refute the new hypothesis. These include: Henry Scheffé's simultaneous testof all contrasts inmultiple comparisonproblems is the most[citation needed]well-known remedy in the case ofanalysis of variance.[1]It is a method designed for testing hypotheses suggested by the data while avoiding the fallacy described above.
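As an illustration of the inflated false-positive rate described above, the following simulation (illustrative, not taken from the article) selects the variable with the largest apparent effect among many purely random ones, and then "tests" that hypothesis either on the same data (double dipping) or on fresh data; since every true effect is zero, any significant result is a false positive.

    # Minimal sketch (illustrative simulation): testing a hypothesis suggested by
    # the data on the same data versus on fresh data.

    import random
    import statistics

    def one_sample_t_significant(xs, crit=2.09):
        # crude two-sided t-test of mean 0 for n = 20 (critical value about 2.09)
        n = len(xs)
        t = statistics.mean(xs) / (statistics.stdev(xs) / n ** 0.5)
        return abs(t) > crit

    random.seed(0)
    n_vars, n_obs, trials = 20, 20, 500
    same_data_hits = fresh_data_hits = 0
    for _ in range(trials):
        data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]
        best = max(data, key=lambda xs: abs(statistics.mean(xs)))  # hypothesis suggested by the data
        if one_sample_t_significant(best):
            same_data_hits += 1
        fresh = [random.gauss(0, 1) for _ in range(n_obs)]         # new data for the same hypothesis
        if one_sample_t_significant(fresh):
            fresh_data_hits += 1

    print("false-positive rate, same data :", same_data_hits / trials)   # far above 0.05
    print("false-positive rate, fresh data:", fresh_data_hits / trials)  # close to 0.05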
https://en.wikipedia.org/wiki/Post_hoc_theorizing
Inmathematics, thefactorialof a non-negativeintegern{\displaystyle n},denotedbyn!{\displaystyle n!},is theproductof all positive integers less than or equalton{\displaystyle n}.The factorialofn{\displaystyle n}also equals the product ofn{\displaystyle n}with the next smaller factorial:n!=n×(n−1)×(n−2)×(n−3)×⋯×3×2×1=n×(n−1)!{\displaystyle {\begin{aligned}n!&=n\times (n-1)\times (n-2)\times (n-3)\times \cdots \times 3\times 2\times 1\\&=n\times (n-1)!\\\end{aligned}}}For example,5!=5×4!=5×4×3×2×1=120.{\displaystyle 5!=5\times 4!=5\times 4\times 3\times 2\times 1=120.}The value of 0! is 1, according to the convention for anempty product.[1] Factorials have been discovered in several ancient cultures, notably inIndian mathematicsin the canonical works ofJain literature, and by Jewish mystics in the Talmudic bookSefer Yetzirah. The factorial operation is encountered in many areas of mathematics, notably incombinatorics, where its most basic use counts the possible distinctsequences– thepermutations– ofn{\displaystyle n}distinct objects: therearen!{\displaystyle n!}.Inmathematical analysis, factorials are used inpower seriesfor theexponential functionand other functions, and they also have applications inalgebra,number theory,probability theory, andcomputer science. Much of the mathematics of the factorial function was developed beginning in the late 18th and early 19th centuries.Stirling's approximationprovides an accurate approximation to the factorial of large numbers, showing that it grows more quickly thanexponential growth.Legendre's formuladescribes the exponents of the prime numbers in aprime factorizationof the factorials, and can be used to count the trailing zeros of the factorials.Daniel BernoulliandLeonhard Eulerinterpolatedthe factorial function to a continuous function ofcomplex numbers, except at the negative integers, the (offset)gamma function. Many other notable functions and number sequences are closely related to the factorials, including thebinomial coefficients,double factorials,falling factorials,primorials, andsubfactorials. Implementations of the factorial function are commonly used as an example of differentcomputer programmingstyles, and are included inscientific calculatorsand scientific computing software libraries. Although directly computing large factorials using the product formula or recurrence is not efficient, faster algorithms are known, matching to within a constant factor the time for fastmultiplication algorithmsfor numbers with the same number of digits. The concept of factorials has arisen independently in many cultures: From the late 15th century onward, factorials became the subject of study by Western mathematicians. 
In a 1494 treatise, Italian mathematicianLuca Paciolicalculated factorials up to 11!, in connection with a problem of dining table arrangements.[12]Christopher Claviusdiscussed factorials in a 1603 commentary on the work ofJohannes de Sacrobosco, and in the 1640s, French polymathMarin Mersennepublished large (but not entirely correct) tables of factorials, up to 64!, based on the work of Clavius.[13]Thepower seriesfor theexponential function, with the reciprocals of factorials for its coefficients, was first formulated in 1676 byIsaac Newtonin a letter toGottfried Wilhelm Leibniz.[14]Other important works of early European mathematics on factorials include extensive coverage in a 1685 treatise byJohn Wallis, a study of their approximate values for large values ofn{\displaystyle n}byAbraham de Moivrein 1721, a 1729 letter fromJames Stirlingto de Moivre stating what became known asStirling's approximation, and work at the same time byDaniel BernoulliandLeonhard Eulerformulating the continuous extension of the factorial function to thegamma function.[15]Adrien-Marie LegendreincludedLegendre's formula, describing the exponents in thefactorizationof factorials intoprime powers, in an 1808 text onnumber theory.[16] The notationn!{\displaystyle n!}for factorials was introduced by the French mathematicianChristian Krampin 1808.[17]Many other notations have also been used. Another later notation|n_{\displaystyle \vert \!{\underline {\,n}}}, in which the argument of the factorial was half-enclosed by the left and bottom sides of a box, was popular for some time in Britain and America but fell out of use, perhaps because it is difficult to typeset.[17]The word "factorial" (originally French:factorielle) was first used in 1800 byLouis François Antoine Arbogast,[18]in the first work onFaà di Bruno's formula,[19]but referring to a more general concept of products ofarithmetic progressions. The "factors" that this name refers to are the terms of the product formula for the factorial.[20] The factorial function of a positive integern{\displaystyle n}is defined by the product of all positive integers not greater thann{\displaystyle n}[1]n!=1⋅2⋅3⋯(n−2)⋅(n−1)⋅n.{\displaystyle n!=1\cdot 2\cdot 3\cdots (n-2)\cdot (n-1)\cdot n.}This may be written more concisely inproduct notationas[1]n!=∏i=1ni.{\displaystyle n!=\prod _{i=1}^{n}i.} If this product formula is changed to keep all but the last term, it would define a product of the same form, for a smaller factorial. This leads to arecurrence relation, according to which each value of the factorial function can be obtained by multiplying the previous valuebyn{\displaystyle n}:[21]n!=n⋅(n−1)!.{\displaystyle n!=n\cdot (n-1)!.}For example,5!=5⋅4!=5⋅24=120{\displaystyle 5!=5\cdot 4!=5\cdot 24=120}. The factorialof0{\displaystyle 0}is1{\displaystyle 1},or in symbols,0!=1{\displaystyle 0!=1}.There are several motivations for this definition: The earliest uses of the factorial function involve countingpermutations: there aren!{\displaystyle n!}different ways of arrangingn{\displaystyle n}distinct objects into a sequence.[26]Factorials appear more broadly in many formulas incombinatorics, to account for different orderings of objects. 
For instance thebinomial coefficients(nk){\displaystyle {\tbinom {n}{k}}}count thek{\displaystyle k}-elementcombinations(subsets ofk{\displaystyle k}elements)from a set withn{\displaystyle n}elements,and can be computed from factorials using the formula[27](nk)=n!k!(n−k)!.{\displaystyle {\binom {n}{k}}={\frac {n!}{k!(n-k)!}}.}TheStirling numbers of the first kindsum to the factorials, and count the permutationsofn{\displaystyle n}grouped into subsets with the same numbers of cycles.[28]Another combinatorial application is in countingderangements, permutations that do not leave any element in its original position; the number of derangements ofn{\displaystyle n}items is thenearest integerton!/e{\displaystyle n!/e}.[29] Inalgebra, the factorials arise through thebinomial theorem, which uses binomial coefficients to expand powers of sums.[30]They also occur in the coefficients used to relate certain families of polynomials to each other, for instance inNewton's identitiesforsymmetric polynomials.[31]Their use in counting permutations can also be restated algebraically: the factorials are theordersof finitesymmetric groups.[32]Incalculus, factorials occur inFaà di Bruno's formulafor chaining higher derivatives.[19]Inmathematical analysis, factorials frequently appear in the denominators ofpower series, most notably in the series for theexponential function,[14]ex=1+x1+x22+x36+⋯=∑i=0∞xii!,{\displaystyle e^{x}=1+{\frac {x}{1}}+{\frac {x^{2}}{2}}+{\frac {x^{3}}{6}}+\cdots =\sum _{i=0}^{\infty }{\frac {x^{i}}{i!}},}and in the coefficients of otherTaylor series(in particular those of thetrigonometricandhyperbolic functions), where they cancel factors ofn!{\displaystyle n!}coming from then{\displaystyle n}th derivativeofxn{\displaystyle x^{n}}.[33]This usage of factorials in power series connects back toanalytic combinatoricsthrough theexponential generating function, which for acombinatorial classwithni{\displaystyle n_{i}}elements ofsizei{\displaystyle i}is defined as the power series[34]∑i=0∞xinii!.{\displaystyle \sum _{i=0}^{\infty }{\frac {x^{i}n_{i}}{i!}}.} Innumber theory, the most salient property of factorials is thedivisibilityofn!{\displaystyle n!}by all positive integers upton{\displaystyle n},described more precisely for prime factors byLegendre's formula. 
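Two of the combinatorial facts above are easy to check numerically: the binomial coefficient formula C(n, k) = n!/(k!(n−k)!) and the count of derangements as the nearest integer to n!/e. A minimal sketch:

    # Minimal sketch: binomial coefficients from factorials, and derangement
    # counts as the nearest integer to n!/e (valid for n >= 1).

    import math

    def binomial(n, k):
        return math.factorial(n) // (math.factorial(k) * math.factorial(n - k))

    def derangements(n):
        return round(math.factorial(n) / math.e)

    print(binomial(5, 2))                           # 10
    print([derangements(n) for n in range(1, 7)])   # [0, 1, 2, 9, 44, 265]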
It follows that arbitrarily largeprime numberscan be found as the prime factors of the numbersn!±1{\displaystyle n!\pm 1}, leading to a proof ofEuclid's theoremthat the number of primes is infinite.[35]Whenn!±1{\displaystyle n!\pm 1}is itself prime it is called afactorial prime;[36]relatedly,Brocard's problem, also posed bySrinivasa Ramanujan, concerns the existence ofsquare numbersof the formn!+1{\displaystyle n!+1}.[37]In contrast, the numbersn!+2,n!+3,…n!+n{\displaystyle n!+2,n!+3,\dots n!+n}must all be composite, proving the existence of arbitrarily largeprime gaps.[38]An elementaryproof of Bertrand's postulateon the existence of a prime in any interval of theform[n,2n]{\displaystyle [n,2n]},one of the first results ofPaul Erdős, was based on the divisibility properties of factorials.[39][40]Thefactorial number systemis amixed radixnotation for numbers in which the place values of each digit are factorials.[41] Factorials are used extensively inprobability theory, for instance in thePoisson distribution[42]and in the probabilities ofrandom permutations.[43]Incomputer science, beyond appearing in the analysis ofbrute-force searchesover permutations,[44]factorials arise in thelower boundoflog2⁡n!=nlog2⁡n−O(n){\displaystyle \log _{2}n!=n\log _{2}n-O(n)}on the number of comparisons needed tocomparison sorta set ofn{\displaystyle n}items,[45]and in the analysis of chainedhash tables, where the distribution of keys per cell can be accurately approximated by a Poisson distribution.[46]Moreover, factorials naturally appear in formulae fromquantumandstatistical physics, where one often considers all the possible permutations of a set of particles. Instatistical mechanics, calculations ofentropysuch asBoltzmann's entropy formulaor theSackur–Tetrode equationmust correct the count ofmicrostatesby dividing by the factorials of the numbers of each type ofindistinguishable particleto avoid theGibbs paradox. Quantum physics provides the underlying reason for why these corrections are necessary.[47] As a functionofn{\displaystyle n},the factorial has faster thanexponential growth, but grows more slowly than adouble exponential function.[48]Its growth rate is similartonn{\displaystyle n^{n}},but slower by an exponential factor. One way of approaching this result is by taking thenatural logarithmof the factorial, which turns its product formula into a sum, and then estimating the sum by an integral:ln⁡n!=∑x=1nln⁡x≈∫1nln⁡xdx=nln⁡n−n+1.{\displaystyle \ln n!=\sum _{x=1}^{n}\ln x\approx \int _{1}^{n}\ln x\,dx=n\ln n-n+1.}Exponentiating the result (and ignoring the negligible+1{\displaystyle +1}term) approximatesn!{\displaystyle n!}as(n/e)n{\displaystyle (n/e)^{n}}.[49]More carefully bounding the sum both above and below by an integral, using thetrapezoid rule, shows that this estimate needs a correction factor proportionalton{\displaystyle {\sqrt {n}}}.The constant of proportionality for this correction can be found from theWallis product, which expressesπ{\displaystyle \pi }as a limiting ratio of factorials and powers of two. The result of these corrections isStirling's approximation:[50]n!∼2πn(ne)n.{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\,.}Here, the∼{\displaystyle \sim }symbol means that, asn{\displaystyle n}goes to infinity, the ratio between the left and right sides approaches one in thelimit. 
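A minimal numerical check of Stirling's approximation stated above: the ratio of n! to √(2πn)(n/e)ⁿ approaches 1 as n grows.

    # Minimal sketch: comparing n! with Stirling's approximation.

    import math

    def stirling(n):
        return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

    for n in (5, 10, 20, 50):
        print(n, math.factorial(n) / stirling(n))   # ratios about 1.017, 1.008, 1.004, 1.002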
Stirling's formula provides the first term in anasymptotic seriesthat becomes even more accurate when taken to greater numbers of terms:[51]n!∼2πn(ne)n(1+112n+1288n2−13951840n3−5712488320n4+⋯).{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+{\frac {1}{288n^{2}}}-{\frac {139}{51840n^{3}}}-{\frac {571}{2488320n^{4}}}+\cdots \right).}An alternative version uses only odd exponents in the correction terms:[51]n!∼2πn(ne)nexp⁡(112n−1360n3+11260n5−11680n7+⋯).{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\exp \left({\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots \right).}Many other variations of these formulas have also been developed, bySrinivasa Ramanujan,Bill Gosper, and others.[51] Thebinary logarithmof the factorial, used to analyzecomparison sorting, can be very accurately estimated using Stirling's approximation. In the formula below, theO(1){\displaystyle O(1)}term invokesbig O notation.[45]log2⁡n!=nlog2⁡n−(log2⁡e)n+12log2⁡n+O(1).{\displaystyle \log _{2}n!=n\log _{2}n-(\log _{2}e)n+{\frac {1}{2}}\log _{2}n+O(1).} The product formula for the factorial implies thatn!{\displaystyle n!}isdivisibleby allprime numbersthat are atmostn{\displaystyle n},and by no larger prime numbers.[52]More precise information about its divisibility is given byLegendre's formula, which gives the exponent of each primep{\displaystyle p}in the prime factorization ofn!{\displaystyle n!}as[53][54]∑i=1∞⌊npi⌋=n−sp(n)p−1.{\displaystyle \sum _{i=1}^{\infty }\left\lfloor {\frac {n}{p^{i}}}\right\rfloor ={\frac {n-s_{p}(n)}{p-1}}.}Heresp(n){\displaystyle s_{p}(n)}denotes the sum of thebase-p{\displaystyle p}digitsofn{\displaystyle n},and the exponent given by this formula can also be interpreted in advanced mathematics as thep-adic valuationof the factorial.[54]Applying Legendre's formula to the product formula forbinomial coefficientsproducesKummer's theorem, a similar result on the exponent of each prime in the factorization of a binomial coefficient.[55]Grouping the prime factors of the factorial intoprime powersin different ways produces themultiplicative partitions of factorials.[56] The special case of Legendre's formula forp=5{\displaystyle p=5}gives the number oftrailing zerosin the decimal representation of the factorials.[57]According to this formula, the number of zeros can be obtained by subtracting the base-5 digits ofn{\displaystyle n}fromn{\displaystyle n}, and dividing the result by four.[58]Legendre's formula implies that the exponent of the primep=2{\displaystyle p=2}is always larger than the exponent forp=5{\displaystyle p=5},so each factor of five can be paired with a factor of two to produce one of these trailing zeros.[57]The leading digits of the factorials are distributed according toBenford's law.[59]Every sequence of digits, in any base, is the sequence of initial digits of some factorial number in that base.[60] Another result on divisibility of factorials,Wilson's theorem, states that(n−1)!+1{\displaystyle (n-1)!+1}is divisible byn{\displaystyle n}if and only ifn{\displaystyle n}is aprime number.[52]For any givenintegerx{\displaystyle x},theKempner functionofx{\displaystyle x}is given by the smallestn{\displaystyle n}for whichx{\displaystyle x}dividesn!{\displaystyle n!}.[61]For almost all numbers (all but a subset of exceptions withasymptotic densityzero), it coincides with the largest prime factorofx{\displaystyle x}.[62] The product of two factorials,m!⋅n!{\displaystyle m!\cdot 
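The p = 5 special case of Legendre's formula described above gives the number of trailing zeros of n! as (n − s₅(n))/4, where s₅(n) is the sum of the base-5 digits of n; a minimal sketch checking this against a direct count:

    # Minimal sketch: trailing zeros of n! via Legendre's formula for p = 5,
    # checked against counting the zeros of the decimal expansion directly.

    import math

    def digit_sum(n, base):
        s = 0
        while n:
            s += n % base
            n //= base
        return s

    def trailing_zeros(n):
        return (n - digit_sum(n, 5)) // 4

    def trailing_zeros_direct(n):
        s = str(math.factorial(n))
        return len(s) - len(s.rstrip("0"))

    for n in (10, 25, 100):
        print(n, trailing_zeros(n), trailing_zeros_direct(n))   # 2, 6 and 24 in both columns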
n!},always evenly divides(m+n)!{\displaystyle (m+n)!}.[63]There are infinitely many factorials that equal the product of other factorials: ifn{\displaystyle n}is itself any product of factorials, thenn!{\displaystyle n!}equals that same product multiplied by one more factorial,(n−1)!{\displaystyle (n-1)!}.The only known examples of factorials that are products of other factorials but are not of this "trivial" form are9!=7!⋅3!⋅3!⋅2!{\displaystyle 9!=7!\cdot 3!\cdot 3!\cdot 2!},10!=7!⋅6!=7!⋅5!⋅3!{\displaystyle 10!=7!\cdot 6!=7!\cdot 5!\cdot 3!},and16!=14!⋅5!⋅2!{\displaystyle 16!=14!\cdot 5!\cdot 2!}.[64]It would follow from theabcconjecturethat there are only finitely many nontrivial examples.[65] Thegreatest common divisorof the values of aprimitive polynomialof degreed{\displaystyle d}over the integers evenly dividesd!{\displaystyle d!}.[63] There are infinitely many ways to extend the factorials to acontinuous function.[66]The most widely used of these[67]uses thegamma function, which can be defined for positive real numbers as theintegralΓ(z)=∫0∞xz−1e−xdx.{\displaystyle \Gamma (z)=\int _{0}^{\infty }x^{z-1}e^{-x}\,dx.}The resulting function is related to the factorial of a non-negative integern{\displaystyle n}by the equationn!=Γ(n+1),{\displaystyle n!=\Gamma (n+1),}which can be used as a definition of the factorial for non-integer arguments. At all valuesx{\displaystyle x}for which bothΓ(x){\displaystyle \Gamma (x)}andΓ(x−1){\displaystyle \Gamma (x-1)}are defined, the gamma function obeys thefunctional equationΓ(n)=(n−1)Γ(n−1),{\displaystyle \Gamma (n)=(n-1)\Gamma (n-1),}generalizing therecurrence relationfor the factorials.[66] The same integral converges more generally for anycomplex numberz{\displaystyle z}whose real part is positive. It can be extended to the non-integer points in the rest of thecomplex planeby solving for Euler'sreflection formulaΓ(z)Γ(1−z)=πsin⁡πz.{\displaystyle \Gamma (z)\Gamma (1-z)={\frac {\pi }{\sin \pi z}}.}However, this formula cannot be used at integers because, for them, thesin⁡πz{\displaystyle \sin \pi z}term would produce adivision by zero. The result of this extension process is ananalytic function, theanalytic continuationof the integral formula for the gamma function. It has a nonzero value at all complex numbers, except for the non-positive integers where it hassimple poles. Correspondingly, this provides a definition for the factorial at all complex numbers other than the negative integers.[67]One property of the gamma function, distinguishing it from other continuous interpolations of the factorials, is given by theBohr–Mollerup theorem, which states that the gamma function (offset by one) is the onlylog-convexfunction on the positive real numbers that interpolates the factorials and obeys the same functional equation. 
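A minimal sketch of the relation n! = Γ(n + 1) stated above, using Python's math.gamma; it also shows the value the gamma function assigns to a non-integer "factorial".

    # Minimal sketch: the factorial as a special case of the gamma function.

    import math

    for n in range(6):
        print(n, math.factorial(n), math.gamma(n + 1))    # the two columns agree

    print(math.gamma(0.5 + 1), math.sqrt(math.pi) / 2)    # (1/2)! = Gamma(3/2) = sqrt(pi)/2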
A related uniqueness theorem ofHelmut Wielandtstates that the complex gamma function and its scalar multiples are the onlyholomorphic functionson the positive complex half-plane that obey the functional equation and remain bounded for complex numbers with real part between 1 and 2.[68] Other complex functions that interpolate the factorial values includeHadamard's gamma function, which is anentire functionover all the complex numbers, including the non-positive integers.[69][70]In thep-adic numbers, it is not possible to continuously interpolate the factorial function directly, because the factorials of large integers (a dense subset of thep-adics) converge to zero according to Legendre's formula, forcing any continuous function that is close to their values to be zero everywhere. Instead, thep-adic gamma functionprovides a continuous interpolation of a modified form of the factorial, omitting the factors in the factorial that are divisible byp.[71] Thedigamma functionis thelogarithmic derivativeof the gamma function. Just as the gamma function provides a continuous interpolation of the factorials, offset by one, the digamma function provides a continuous interpolation of theharmonic numbers, offset by theEuler–Mascheroni constant.[72] The factorial function is a common feature inscientific calculators.[73]It is also included in scientific programming libraries such as thePythonmathematical functions module[74]and theBoost C++ library.[75]If efficiency is not a concern, computing factorials is trivial: just successively multiply a variable initializedto1{\displaystyle 1}by the integers upton{\displaystyle n}.The simplicity of this computation makes it a common example in the use of different computer programming styles and methods.[76] The computation ofn!{\displaystyle n!}can be expressed inpseudocodeusingiteration[77]as or usingrecursion[78]based on its recurrence relation as Other methods suitable for its computation includememoization,[79]dynamic programming,[80]andfunctional programming.[81]Thecomputational complexityof these algorithms may be analyzed using the unit-costrandom-access machinemodel of computation, in which each arithmetic operation takes constant time and each number uses a constant amount of storage space. In this model, these methods can computen!{\displaystyle n!}in timeO(n){\displaystyle O(n)},and the iterative version uses spaceO(1){\displaystyle O(1)}.Unless optimized fortail recursion, the recursive version takes linear space to store itscall stack.[82]However, this model of computation is only suitable whenn{\displaystyle n}is small enough to allown!{\displaystyle n!}to fit into amachine word.[83]The values 12! and 20! are the largest factorials that can be stored in, respectively, the32-bit[84]and64-bitintegers.[85]Floating pointcan represent larger factorials, but approximately rather than exactly, and will still overflow for factorials larger than170!{\displaystyle 170!}.[84] The exact computation of larger factorials involvesarbitrary-precision arithmetic, because offast growthandinteger overflow. 
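A minimal sketch, in Python rather than the article's pseudocode, of the iterative and recursive computations of n! referred to above.

    # Minimal sketch: iterative and recursive factorial, following the
    # product formula and the recurrence n! = n * (n-1)!.

    def factorial_iterative(n):
        result = 1
        for i in range(2, n + 1):   # successively multiply by 2, 3, ..., n
            result *= i
        return result

    def factorial_recursive(n):
        if n == 0:                  # 0! = 1, the empty product
            return 1
        return n * factorial_recursive(n - 1)

    print(factorial_iterative(5), factorial_recursive(5))   # 120 120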
Time of computation can be analyzed as a function of the number of digits or bits in the result.[85]By Stirling's formula,n!{\displaystyle n!}hasb=O(nlog⁡n){\displaystyle b=O(n\log n)}bits.[86]TheSchönhage–Strassen algorithmcan produce ab{\displaystyle b}-bitproduct in timeO(blog⁡blog⁡log⁡b){\displaystyle O(b\log b\log \log b)},and fastermultiplication algorithmstaking timeO(blog⁡b){\displaystyle O(b\log b)}are known.[87]However, computing the factorial involves repeated products, rather than a single multiplication, so these time bounds do not apply directly. In this setting, computingn!{\displaystyle n!}by multiplying the numbers from 1ton{\displaystyle n}in sequence is inefficient, because it involvesn{\displaystyle n}multiplications, a constant fraction of which take timeO(nlog2⁡n){\displaystyle O(n\log ^{2}n)}each, giving total timeO(n2log2⁡n){\displaystyle O(n^{2}\log ^{2}n)}.A better approach is to perform the multiplications as adivide-and-conquer algorithmthat multiplies a sequence ofi{\displaystyle i}numbers by splitting it into two subsequences ofi/2{\displaystyle i/2}numbers, multiplies each subsequence, and combines the results with one last multiplication. This approach to the factorial takes total timeO(nlog3⁡n){\displaystyle O(n\log ^{3}n)}:one logarithm comes from the number of bits in the factorial, a second comes from the multiplication algorithm, and a third comes from the divide and conquer.[88] Even better efficiency is obtained by computingn!from its prime factorization, based on the principle thatexponentiation by squaringis faster than expanding an exponent into a product.[86][89]An algorithm for this byArnold Schönhagebegins by finding the list of the primes upton{\displaystyle n},for instance using thesieve of Eratosthenes, and uses Legendre's formula to compute the exponent for each prime. Then it computes the product of the prime powers with these exponents, using a recursive algorithm, as follows: The product of all primes up ton{\displaystyle n}is anO(n){\displaystyle O(n)}-bit number, by theprime number theorem, so the time for the first step isO(nlog2⁡n){\displaystyle O(n\log ^{2}n)}, with one logarithm coming from the divide and conquer and another coming from the multiplication algorithm. In the recursive calls to the algorithm, the prime number theorem can again be invoked to prove that the numbers of bits in the corresponding products decrease by a constant factor at each level of recursion, so the total time for these steps at all levels of recursion adds in ageometric seriestoO(nlog2⁡n){\displaystyle O(n\log ^{2}n)}.The time for the squaring in the second step and the multiplication in the third step are againO(nlog2⁡n){\displaystyle O(n\log ^{2}n)},because each is a single multiplication of a number withO(nlog⁡n){\displaystyle O(n\log n)}bits. Again, at each level of recursion the numbers involved have a constant fraction as many bits (because otherwise repeatedly squaring them would produce too large a final result) so again the amounts of time for these steps in the recursive calls add in a geometric seriestoO(nlog2⁡n){\displaystyle O(n\log ^{2}n)}.Consequentially, the whole algorithm takestimeO(nlog2⁡n){\displaystyle O(n\log ^{2}n)},proportional to a single multiplication with the same number of bits in its result.[89] Several other integer sequences are similar to or related to the factorials:
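A minimal sketch of computing n! from its prime factorization as described above: sieve the primes up to n, obtain each exponent from Legendre's formula, and multiply the prime powers together. This illustrates only the basic idea, not Schönhage's full recursive algorithm or its running-time guarantees.

    # Minimal sketch: n! from its prime factorization via Legendre's formula.

    import math

    def primes_up_to(n):
        sieve = [True] * (n + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
        return [p for p in range(2, n + 1) if sieve[p]]

    def legendre_exponent(n, p):
        e, q = 0, p
        while q <= n:
            e += n // q     # count multiples of p, p^2, p^3, ...
            q *= p
        return e

    def factorial_by_factorization(n):
        result = 1
        for p in primes_up_to(n):
            result *= p ** legendre_exponent(n, p)
        return result

    print(factorial_by_factorization(20) == math.factorial(20))   # True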
https://en.wikipedia.org/wiki/Factorial
Collective consciousness,collective conscience, orcollective conscious(French:conscience collective) is the set of shared beliefs, ideas, and moral attitudes which operate as a unifying force within society.[1]In general, it does not refer to the specifically moral conscience, but to a shared understanding of social norms.[2] The modern concept of what can be considered collective consciousness includessolidarityattitudes,memes, extreme behaviors likegroup-thinkandherd behavior, and collectively shared experiences during collective rituals, dance parties,[3]and the discarnate entities which can be experienced from psychedelic use.[4] Rather than existing as separate individuals, people come together as dynamic groups to share resources and knowledge. It has also developed as a way of describing how an entire community comes together to share similar values. This has also been termed "hive mind", "group mind", "mass mind", and "social mind".[5] The term was introduced by the FrenchsociologistÉmile Durkheimin hisThe Division of Labour in Societyin 1893. The French wordconsciencegenerally means "conscience", "consciousness", "awareness",[6]or "perception".[7]Given the multiplicity of definitions, translators of Durkheim disagree on which is most appropriate, or whether the translation should depend on the context. Some prefer to treat the word 'conscience' as an untranslatable foreign word or technical term, without its normal English meaning.[8]As for "collective", Durkheim makes clear that he is notreifyingorhypostasizingthis concept; for him, it is "collective" simply in the sense that it is common to many individuals;[9]cf.social fact. Scipio Sighelepublished ‘La Foule Criminele’ one year before Durkheim, in which he describes emergent characteristics of crowds that don’t appear in the individuals that form the crowd. He doesn’t call this collective consciousness, but ‘âme de la foule’ (soul of the crowd).[10]This term returns inSigmund Freud’s book about mass psychology and essentially overlaps with Durkheims concept of collective consciousness. Durkheim used the term in his booksThe Division of Labour in Society(1893),The Rules of the Sociological Method(1895),Suicide(1897), andThe Elementary Forms of Religious Life(1912). InThe Division of Labour, Durkheim argued that in traditional/primitive societies (those based around clan, family or tribal relationships),totemicreligion played an important role in uniting members through the creation of a common consciousness. In societies of this type, the contents of an individual's consciousness are largely shared in common with all other members of their society, creating amechanical solidaritythrough mutual likeness. The totality of beliefs and sentiments common to the average members of a society forms a determinate system with a life of its own. It can be termed the collective or common consciousness. InSuicide, Durkheim developed the concept ofanomieto refer to the social rather than individual causes of suicide. 
This relates to the concept of collective consciousness, as if there is a lack of integration or solidarity in society then suicide rates will be higher.[12] Antonio Gramscistates, “A collective consciousness, which is to say a living organism, is formed only after the unification of the multiplicity through friction on the part of the individuals; nor can one say that ‘silence’ is not a multiplicity.”[13]A form of collective consciousness can be formed from Gramsci's conception that the presence of ahegemonycan mobilize the collective consciousness of thoseoppressedby the ruling ideas ofsociety, or the ruling hegemony. Collective consciousness can refer to amultitudeof different individual forms of consciousness coalescing into a greater whole. In Gramsci's view, a unified whole is composed ofsolidarityamong its different constituent parts, and therefore, this whole cannot be uniformly the same. The unified whole can embrace different forms of consciousness (or individual experiences of social reality), which coexist to reflect the different experiences of themarginalizedpeoples in a given society. This agrees with Gramsci's theory of Marxism andclass struggleapplied to cultural contexts.Cultural Marxism(as distinguished from the right-wing use of the term) embodies the concept of collective consciousness. It incorporatessocial movementsthat are based on some sort of collective identity; these identities can include, for instance,gender,sexual orientation,race, andability, and can be incorporated by collective-based movements into a broader historical material analysis of class struggle. According to Michelle Filippini, “The nature and workings of collective organisms – not only parties, but also trade unions, associations and intermediate bodies in general – represent a specific sphere of reflection in the Prison Notebooks, particularly in regard to the new relationship between State and society that in Gramsci's view emerged during the age of mass politics.”[14]Collective organisms can express collective consciousness. Whether this form of expression finds itself in the realm of the state or the realm of society is up to the direction that the subjects take in expressing their collective consciousness. In Gramsci'sPrison Notebooks, the ongoing conflict betweencivil society, thebureaucracy, and the state necessitates the emergence of a collective consciousness that can often act as an intermediary between these different realms. The public organizations of protest, such aslabor unionsand anti-war organizations, are vehicles that can unite multiple types of collective consciousness. Although identity-based movements are necessary for the progress ofdemocracyand can generate collective consciousness, they cannot completely do so without a unifying framework. This is whyanti-warandlabor movementsprovide an avenue that has united various social movements under the banner of a multiple collective consciousness. This is also why future social movements need to have anethosof collective consciousness if they are to succeed in the long-term. Zukerfield states that “The different disciplines that have studied knowledge share an understanding of it as a product of human subjects – individual, collective, etc.”[15]Knowledgein a sociological sense is derived from social conditions andsocial realities. Collective consciousness also reflects social realities, and sociological knowledge can be gained through the adoption of a collective consciousness. 
Many different disciplines such asphilosophyandliteratureexamine collective consciousness from different lenses. These different disciplines reach a similar understanding of a collective consciousness despite their different approaches to the subject. The inherent humanness in the idea of collective consciousness refers to a shared way of thinking among human beings in the pursuit of knowledge. Collective consciousness can provide an understanding of the relationship betweenselfand society. As Zukerfeld states, “Even though it impels us, as a first customary gesture, to analyse the subjective (such as individual consciousness) or intersubjective bearers (such as the values of a given society), in other words those which Marxism and sociology examine, now we can approach them in an entirely different light.”[16]“Cognitive materialism”[15]is presented in the work by Zukerfeld as a sort of ‘third way’ between sociological knowledge and Marxism. Cognitive materialism is based on a kind of collective consciousness of themind. This consciousness can be used, with cognitive materialism as a guiding force, by human beings in order to critically analyze society and social conditions. Society is made up of various collective groups, such as the family, community, organizations, regions, nations which as Burns and Egdahl state "can be considered to possess agential capabilities: to think, judge, decide, act, reform; to conceptualize self and others as well as self's actions and interactions; and to reflect.".[17]It is suggested that these different national behaviors vary according to the different collective consciousness between nations. This illustrates that differences in collective consciousness can have practical significance. According to a theory, the character of collective consciousness depends on the type of mnemonic encoding used within a group (Tsoukalas, 2007). The specific type of encoding used has a predictable influence on the group's behavior and collective ideology. Informal groups, that meet infrequently and spontaneously, have a tendency to represent significant aspects of their community as episodic memories. This usually leads to strong social cohesion and solidarity, an indulgent atmosphere, an exclusive ethos and a restriction of social networks. Formal groups, that have scheduled and anonymous meetings, tend to represent significant aspects of their community as semantic memories which usually leads to weak social cohesion and solidarity, a more moderate atmosphere, an inclusive ethos and an expansion of social networks.[18] In a case study of a Serbian folk story, Wolfgang Ernst examines collective consciousness in terms of forms ofmedia, specifically collective oral and literary traditions. "Current discourse analysis drifts away from the 'culturalist turn' of the last two or three decades and its concern with individual and collective memory as an extended target of historical research".[19]There is still a collective consciousness present in terms of the shared appreciation offolk storiesandoral traditions. Folk stories enable the subject and the audiences to come together around a common experience and a shared heritage. In the case of the Serbian folk “gusle”,[20]the Serbian people take pride in this musical instrument of epic poetry and oral tradition and play it at social gatherings. Expressions ofartandcultureare expressions of a collective consciousness or expressions of multiple social realities. Works by Durkheim Works by others
https://en.wikipedia.org/wiki/Collective_consciousness
In mathematics, a multiple is the product of any quantity and an integer.[1] In other words, for the quantities a and b, it can be said that b is a multiple of a if b = na for some integer n, which is called the multiplier. If a is not zero, this is equivalent to saying that b/a is an integer. When a and b are both integers, and b is a multiple of a, then a is called a divisor of b. One also says that a divides b. If a and b are not integers, mathematicians generally prefer to use "integer multiple" instead of "multiple", for clarification. In fact, "multiple" is used for other kinds of product; for example, a polynomial p is a multiple of another polynomial q if there exists a third polynomial r such that p = qr.

14, 49, −21 and 0 are multiples of 7, whereas 3 and −6 are not. This is because there are integers that 7 may be multiplied by to reach the values of 14, 49, 0 and −21, while there are no such integers for 3 and −6. Each such product is the only way that the relevant number can be written as a product of 7 and another real number: for example, 14 = 7 × 2, 49 = 7 × 7, 0 = 7 × 0 and −21 = 7 × (−3), whereas 3 = 7 × (3/7) and −6 = 7 × (−6/7) require a non-integer second factor.

In some texts, "a is a submultiple of b" means that a is a unit fraction of b (a = b/n) or, equivalently, that b is an integer multiple n of a (b = na). This terminology is also used with units of measurement (for example by the BIPM[2] and NIST[3]), where a unit submultiple is obtained by prefixing the main unit; it is defined as the quotient of the main unit by an integer, mostly a power of 10³. For example, a millimetre is the 1000-fold submultiple of a metre.[2][3] As another example, one inch may be considered as a 12-fold submultiple of a foot, or a 36-fold submultiple of a yard.
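The divisibility test described above (b is a multiple of a exactly when b = na for some integer n, equivalently when b/a is an integer for nonzero a) can be written as a one-line check. The following Python sketch, with an illustrative function name, applies it to the examples in the text:

```python
def is_multiple(b, a):
    """Return True if b is a multiple of a, i.e. b == n * a for some integer n."""
    if a == 0:
        return b == 0  # 0 is the only multiple of 0
    return b % a == 0

# The examples from the text: 14, 49, -21 and 0 are multiples of 7; 3 and -6 are not.
print([x for x in (14, 49, -21, 0, 3, -6) if is_multiple(x, 7)])  # [14, 49, -21, 0]
```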
https://en.wikipedia.org/wiki/Submultiple
In mathematics, a pseudo-finite field F is an infinite model of the first-order theory of finite fields. This is equivalent to the condition that F is quasi-finite (perfect with a unique extension of every positive degree) and pseudo algebraically closed (every absolutely irreducible variety over F has a point defined over F). Every hyperfinite field is pseudo-finite and every pseudo-finite field is quasi-finite. Every non-principal ultraproduct of finite fields is pseudo-finite. Pseudo-finite fields were introduced by Ax (1968).
https://en.wikipedia.org/wiki/Pseudo-finite_field
The spectrum of a linear operator T that operates on a Banach space X is a fundamental concept of functional analysis. The spectrum consists of all scalars λ such that the operator T − λ does not have a bounded inverse on X. The spectrum has a standard decomposition into three parts: the point spectrum, the continuous spectrum, and the residual spectrum. This decomposition is relevant to the study of differential equations, and has applications to many branches of science and engineering. A well-known example from quantum mechanics is the explanation for the discrete spectral lines and the continuous band in the light emitted by excited atoms of hydrogen.

Let X be a Banach space, B(X) the family of bounded operators on X, and T ∈ B(X). By definition, a complex number λ is in the spectrum of T, denoted σ(T), if T − λ does not have an inverse in B(X). If T − λ is one-to-one and onto, i.e. bijective, then its inverse is bounded; this follows directly from the open mapping theorem of functional analysis. So, λ is in the spectrum of T if and only if T − λ is not one-to-one or not onto. One distinguishes three separate cases: if T − λ is not injective, then λ is an eigenvalue and belongs to the point spectrum σp(T); if T − λ is injective and has dense range but is not surjective, then λ belongs to the continuous spectrum σc(T); and if T − λ is injective but does not have dense range, then λ belongs to the residual spectrum σr(T). So σ(T) is the disjoint union of these three sets, σ(T) = σp(T) ∪ σc(T) ∪ σr(T). The complement of the spectrum σ(T) is known as the resolvent set ρ(T), that is, ρ(T) = C ∖ σ(T). In addition, when T − λ does not have dense range, whether it is injective or not, λ is said to be in the compression spectrum of T, σcp(T). The compression spectrum consists of the whole residual spectrum and part of the point spectrum. The spectrum of an unbounded operator can be divided into three parts in the same way as in the bounded case, but because the operator is not defined everywhere, the definitions of domain, inverse, etc. are more involved.

Given a σ-finite measure space (S, Σ, μ), consider the Banach space Lp(μ). A function h : S → C is called essentially bounded if h is bounded μ-almost everywhere. An essentially bounded h induces a bounded multiplication operator Th on Lp(μ): (Th f)(s) = h(s) · f(s). The operator norm of Th is the essential supremum of |h|. The essential range of h is defined in the following way: a complex number λ is in the essential range of h if for all ε > 0, the preimage of the open ball Bε(λ) under h has strictly positive measure. We will show first that σ(Th) coincides with the essential range of h and then examine its various parts.

If λ is not in the essential range of h, take ε > 0 such that h⁻¹(Bε(λ)) has zero measure. The function g(s) = 1/(h(s) − λ) is bounded almost everywhere by 1/ε. The multiplication operator Tg satisfies Tg · (Th − λ) = (Th − λ) · Tg = I. So λ does not lie in the spectrum of Th. On the other hand, if λ lies in the essential range of h, consider the sequence of sets {Sn = h⁻¹(B1/n(λ))}. Each Sn has positive measure. Let fn be the characteristic function of Sn. We can compute directly ‖(Th − λ)fn‖_p^p = ‖(h − λ)fn‖_p^p = ∫_{Sn} |h − λ|^p dμ ≤ (1/n^p) μ(Sn) = (1/n^p) ‖fn‖_p^p. This shows that Th − λ is not bounded below, and therefore not invertible. If λ is such that μ(h⁻¹({λ})) > 0, then λ lies in the point spectrum of Th, as follows. Let f be the characteristic function of the measurable set h⁻¹({λ}); then by considering two cases, we find that (Th f)(s) = λ f(s) for all s ∈ S, so λ is an eigenvalue of Th.
Anyλin the essential range ofhthat does not have a positive measure preimage is in the continuous spectrum ofTh. To show this, we must show thatTh−λhas dense range. Givenf∈Lp(μ), again we consider the sequence of sets{Sn=h−1(B1/n(λ))}. Letgnbe the characteristic function ofS−Sn. Definefn(s)=1h(s)−λ⋅gn(s)⋅f(s).{\displaystyle f_{n}(s)={\frac {1}{h(s)-\lambda }}\cdot g_{n}(s)\cdot f(s).} Direct calculation shows thatfn∈Lp(μ), with‖fn‖p≤n‖f‖p{\displaystyle \|f_{n}\|_{p}\leq n\|f\|_{p}}. Then by thedominated convergence theorem,(Th−λ)fn→f{\displaystyle (T_{h}-\lambda )f_{n}\rightarrow f}in theLp(μ) norm. Therefore, multiplication operators have no residual spectrum. In particular, by thespectral theorem,normal operatorson a Hilbert space have no residual spectrum. In the special case whenSis the set of natural numbers andμis the counting measure, the correspondingLp(μ) is denoted by lp. This space consists of complex valued sequences {xn} such that∑n≥0|xn|p<∞.{\displaystyle \sum _{n\geq 0}|x_{n}|^{p}<\infty .} For 1 <p< ∞,lpisreflexive. Define theleft shiftT:lp→lpbyT(x1,x2,x3,…)=(x2,x3,x4,…).{\displaystyle T(x_{1},x_{2},x_{3},\dots )=(x_{2},x_{3},x_{4},\dots ).} Tis apartial isometrywith operator norm 1. Soσ(T) lies in the closed unit disk of the complex plane. T*is the right shift (orunilateral shift), which is an isometry onlq, where 1/p+ 1/q= 1:T∗(x1,x2,x3,…)=(0,x1,x2,…).{\displaystyle T^{*}(x_{1},x_{2},x_{3},\dots )=(0,x_{1},x_{2},\dots ).} Forλ∈Cwith |λ| < 1,x=(1,λ,λ2,…)∈lp{\displaystyle x=(1,\lambda ,\lambda ^{2},\dots )\in l^{p}}andT x=λ x. Consequently, the point spectrum ofTcontains the open unit disk. Now,T*has no eigenvalues, i.e.σp(T*) is empty. Thus, invoking reflexivity and the theorem inSpectrum_(functional_analysis)#Spectrum_of_the_adjoint_operator(thatσp(T) ⊂σr(T*) ∪σp(T*)), we can deduce that the open unit disk lies in the residual spectrum ofT*. The spectrum of a bounded operator is closed, which implies the unit circle, { |λ| = 1 } ⊂C, is inσ(T). Again by reflexivity oflpand the theorem given above (this time, thatσr(T) ⊂σp(T*)), we have thatσr(T) is also empty. Therefore, for a complex numberλwith unit norm, one must haveλ∈σp(T) orλ∈σc(T). Now if |λ| = 1 andTx=λx,i.e.(x2,x3,x4,…)=λ(x1,x2,x3,…),{\displaystyle Tx=\lambda x,\qquad i.e.\;(x_{2},x_{3},x_{4},\dots )=\lambda (x_{1},x_{2},x_{3},\dots ),}thenx=x1(1,λ,λ2,…),{\displaystyle x=x_{1}(1,\lambda ,\lambda ^{2},\dots ),}which cannot be inlp, a contradiction. This means the unit circle must lie in the continuous spectrum ofT. So for the left shiftT,σp(T) is the open unit disk andσc(T) is the unit circle, whereas for the right shiftT*,σr(T*) is the open unit disk andσc(T*) is the unit circle. Forp= 1, one can perform a similar analysis. The results will not be exactly the same, since reflexivity no longer holds. Hilbert spacesare Banach spaces, so the above discussion applies to bounded operators on Hilbert spaces as well. A subtle point concerns the spectrum ofT*. For a Banach space,T* denotes the transpose andσ(T*) =σ(T). For a Hilbert space,T* normally denotes theadjointof an operatorT∈B(H), not the transpose, andσ(T*) is notσ(T) but rather its image under complex conjugation. For a self-adjointT∈B(H), theBorel functional calculusgives additional ways to break up the spectrum naturally. This subsection briefly sketches the development of this calculus. The idea is to first establish the continuous functional calculus, and then pass to measurable functions via theRiesz–Markov–Kakutani representation theorem. 
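For reference, the conclusions of the two computations above can be collected in a single display. This restates results already derived in the text, with Th the multiplication operator on Lp(μ), T the left shift and T* the right shift on lp for 1 < p < ∞:

```latex
% Multiplication operator T_h on L^p(\mu)
\sigma(T_h) = \operatorname{ess\,ran}(h), \qquad
\sigma_p(T_h) = \{\lambda : \mu(h^{-1}(\{\lambda\})) > 0\}, \qquad
\sigma_r(T_h) = \varnothing.

% Left shift T and right shift T^* on l^p, \ 1 < p < \infty
\sigma_p(T)   = \{\lambda : |\lambda| < 1\}, \quad \sigma_c(T)   = \{\lambda : |\lambda| = 1\}, \quad \sigma_r(T)   = \varnothing;
\sigma_p(T^*) = \varnothing, \quad \sigma_c(T^*) = \{\lambda : |\lambda| = 1\}, \quad \sigma_r(T^*) = \{\lambda : |\lambda| < 1\}.
```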
For the continuous functional calculus, the key ingredients are the following: The familyC(σ(T)) is aBanach algebrawhen endowed with the uniform norm. So the mappingP→P(T){\displaystyle P\rightarrow P(T)}is an isometric homomorphism from a dense subset ofC(σ(T)) toB(H). Extending the mapping by continuity givesf(T) forf∈ C(σ(T)): letPnbe polynomials such thatPn→funiformly and definef(T) = limPn(T). This is the continuous functional calculus. For a fixedh∈H, we notice thatf→⟨h,f(T)h⟩{\displaystyle f\rightarrow \langle h,f(T)h\rangle }is a positive linear functional onC(σ(T)). According to the Riesz–Markov–Kakutani representation theorem a unique measureμhonσ(T) exists such that∫σ(T)fdμh=⟨h,f(T)h⟩.{\displaystyle \int _{\sigma (T)}f\,d\mu _{h}=\langle h,f(T)h\rangle .} This measure is sometimes called thespectral measureassociated toh. The spectral measures can be used to extend the continuous functional calculus to bounded Borel functions. For a bounded functiongthat is Borel measurable, define, for a proposedg(T)∫σ(T)gdμh=⟨h,g(T)h⟩.{\displaystyle \int _{\sigma (T)}g\,d\mu _{h}=\langle h,g(T)h\rangle .} Via thepolarization identity, one can recover (sinceHis assumed to be complex)⟨k,g(T)h⟩.{\displaystyle \langle k,g(T)h\rangle .}and thereforeg(T)hfor arbitraryh. In the present context, the spectral measures, combined with a result from measure theory, give a decomposition ofσ(T). Leth∈Handμhbe its corresponding spectral measure onσ(T). According to a refinement ofLebesgue's decomposition theorem,μhcan be decomposed into three mutually singular parts:μh=μac+μsc+μpp{\displaystyle \mu _{h}=\mu _{\mathrm {ac} }+\mu _{\mathrm {sc} }+\mu _{\mathrm {pp} }}whereμacis absolutely continuous with respect to the Lebesgue measure,μscis singular with respect to the Lebesgue measure and atomless, andμppis a pure point measure.[1][2] All three types of measures are invariant under linear operations. LetHacbe the subspace consisting of vectors whose spectral measures are absolutely continuous with respect to theLebesgue measure. DefineHppandHscin analogous fashion. These subspaces are invariant underT. For example, ifh∈Hacandk=T h. Letχbe the characteristic function of some Borel set inσ(T), then⟨k,χ(T)k⟩=∫σ(T)χ(λ)⋅λ2dμh(λ)=∫σ(T)χ(λ)dμk(λ).{\displaystyle \langle k,\chi (T)k\rangle =\int _{\sigma (T)}\chi (\lambda )\cdot \lambda ^{2}d\mu _{h}(\lambda )=\int _{\sigma (T)}\chi (\lambda )\;d\mu _{k}(\lambda ).}Soλ2dμh=dμk{\displaystyle \lambda ^{2}d\mu _{h}=d\mu _{k}}andk∈Hac. Furthermore, applying the spectral theorem givesH=Hac⊕Hsc⊕Hpp.{\displaystyle H=H_{\mathrm {ac} }\oplus H_{\mathrm {sc} }\oplus H_{\mathrm {pp} }.} This leads to the following definitions: The closure of the eigenvalues is the spectrum ofTrestricted toHpp.[3][nb 1]Soσ(T)=σac(T)∪σsc(T)∪σ¯pp(T).{\displaystyle \sigma (T)=\sigma _{\mathrm {ac} }(T)\cup \sigma _{\mathrm {sc} }(T)\cup {{\bar {\sigma }}_{\mathrm {pp} }(T)}.} A bounded self-adjoint operator on Hilbert space is, a fortiori, a bounded operator on a Banach space. Therefore, one can also apply toTthe decomposition of the spectrum that was achieved above for bounded operators on a Banach space. Unlike the Banach space formulation,[clarification needed]the unionσ(T)=σ¯pp(T)∪σac(T)∪σsc(T){\displaystyle \sigma (T)={{\bar {\sigma }}_{\mathrm {pp} }(T)}\cup \sigma _{\mathrm {ac} }(T)\cup \sigma _{\mathrm {sc} }(T)}need not be disjoint. It is disjoint when the operatorTis of uniform multiplicity, saym, i.e. 
ifTis unitarily equivalent to multiplication byλon the direct sum⨁i=1mL2(R,μi){\displaystyle \bigoplus _{i=1}^{m}L^{2}(\mathbb {R} ,\mu _{i})}for some Borel measuresμi{\displaystyle \mu _{i}}. When more than one measure appears in the above expression, we see that it is possible for the union of the three types of spectra to not be disjoint. Ifλ∈σac(T) ∩σpp(T),λis sometimes called an eigenvalueembeddedin the absolutely continuous spectrum. WhenTis unitarily equivalent to multiplication byλonL2(R,μ),{\displaystyle L^{2}(\mathbb {R} ,\mu ),}the decomposition ofσ(T) from Borel functional calculus is a refinement of the Banach space case. The preceding comments can be extended to the unbounded self-adjoint operators since Riesz-Markov holds forlocally compactHausdorff spaces. Inquantum mechanics, observables are (often unbounded)self-adjoint operatorsand their spectra are the possible outcomes of measurements. Thepure point spectrumcorresponds tobound statesin the following way: A particle is said to be in a bound state if it remains "localized" in a bounded region of space.[6]Intuitively one might therefore think that the "discreteness" of the spectrum is intimately related to the corresponding states being "localized". However, a careful mathematical analysis shows that this is not true in general.[7]For example, consider the function This function is normalizable (i.e.f∈L2(R){\displaystyle f\in L^{2}(\mathbb {R} )}) as Known as theBasel problem, this series converges toπ26{\textstyle {\frac {\pi ^{2}}{6}}}. Yet,f{\displaystyle f}increases asx→∞{\displaystyle x\to \infty }, i.e, the state "escapes to infinity". The phenomena ofAnderson localizationanddynamical localizationdescribe when the eigenfunctions are localized in a physical sense. Anderson Localization means that eigenfunctions decay exponentially asx→∞{\displaystyle x\to \infty }. Dynamical localization is more subtle to define. Sometimes, when performing quantum mechanical measurements, one encounters "eigenstates" that are not localized, e.g., quantum states that do not lie inL2(R). These arefree statesbelonging to the absolutely continuous spectrum. In thespectral theorem for unbounded self-adjoint operators, these states are referred to as "generalized eigenvectors" of an observable with "generalized eigenvalues" that do not necessarily belong to its spectrum. Alternatively, if it is insisted that the notion of eigenvectors and eigenvalues survive the passage to the rigorous, one can consider operators onrigged Hilbert spaces.[8] An example of an observable whose spectrum is purely absolutely continuous is theposition operatorof a free particle moving on the entire real line. Also, since themomentum operatoris unitarily equivalent to the position operator, via theFourier transform, it has a purely absolutely continuous spectrum as well. The singular spectrum correspond to physically impossible outcomes. It was believed for some time that the singular spectrum was something artificial. However, examples as thealmost Mathieu operatorandrandom Schrödinger operatorshave shown, that all types of spectra arise naturally in physics.[9][10] LetA:X→X{\displaystyle A:\,X\to X}be a closed operator defined on the domainD(A)⊂X{\displaystyle D(A)\subset X}which is dense inX. Then there is a decomposition of the spectrum ofAinto adisjoint union,[11]σ(A)=σess,5(A)⊔σd(A),{\displaystyle \sigma (A)=\sigma _{\mathrm {ess} ,5}(A)\sqcup \sigma _{\mathrm {d} }(A),}where
https://en.wikipedia.org/wiki/Decomposition_of_spectrum_(functional_analysis)
The AND gate is a basic digital logic gate that implements the logical conjunction (∧) from mathematical logic – AND gates behave according to their truth table. A HIGH output (1) results only if all the inputs to the AND gate are HIGH (1). If any of the inputs to the AND gate are not HIGH, a LOW (0) output results. The function can be extended to any number of inputs by connecting multiple gates in a chain. There are three symbols for AND gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. Additional inputs can be added as needed. For more information see the Logic gate symbols article. It can also be denoted by the symbol "^" or "&".

The AND gate with inputs A and B and output C implements the logical expression C = A·B. This expression may also be denoted as C = A∧B or C = A & B. As of Unicode 16.0.0, the AND gate is also encoded in the Symbols for Legacy Computing Supplement block as U+1CC16 𜰖 LOGIC GATE AND.

In logic families like TTL, NMOS, PMOS and CMOS, an AND gate is built from a NAND gate followed by an inverter; in a typical CMOS implementation, four transistors realize the NAND gate and two further transistors the inverter. The need for an inverter makes AND gates less efficient than NAND gates. AND gates can also be made from discrete components and are readily available as integrated circuits in several different logic families. The analytical representation of the AND gate is f(a, b) = a·b, so that f(0, 0) = 0, f(0, 1) = 0, f(1, 0) = 0 and f(1, 1) = 1.

If no specific AND gates are available, one can be made from NAND or NOR gates, because NAND and NOR gates are "universal gates"[1] meaning that they can be used to make all the others. AND gates with multiple inputs are designated with the same symbol, with more lines leading in.[2] While direct implementations with more than four inputs are possible in logic families like CMOS, these are inefficient. More efficient implementations use a cascade of NAND and NOR gates rather than a cascade of AND gates.[3]
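The behavior described by the truth table and by the analytical representation f(a, b) = a·b can be mimicked in software. The following Python sketch, with an illustrative function name, models an AND gate with any number of inputs:

```python
def and_gate(*inputs):
    """Multi-input AND gate: the output is 1 (HIGH) only if every input is 1."""
    return int(all(bit == 1 for bit in inputs))

# Two-input truth table, matching f(a, b) = a * b.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b))  # 0 0 0 / 0 1 0 / 1 0 0 / 1 1 1
```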
https://en.wikipedia.org/wiki/AND_gate
In game theory, a trigger strategy is any of a class of strategies employed in a repeated non-cooperative game. A player using a trigger strategy initially cooperates but punishes the opponent if a certain level of defection (i.e., the trigger) is observed. The level of punishment and the sensitivity of the trigger vary with different trigger strategies.
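As a concrete illustration, one well-known member of this class is the "grim trigger", which cooperates until a single defection is observed and then punishes forever. The following Python sketch is a minimal illustration under an assumed iterated prisoner's dilemma framing, with moves encoded as 'C' (cooperate) and 'D' (defect); the function name and threshold parameter are illustrative:

```python
def grim_trigger(opponent_history, threshold=1):
    """Cooperate until the opponent's defections reach the trigger threshold,
    then defect forever (maximal punishment, maximally sensitive trigger)."""
    defections = opponent_history.count("D")
    return "D" if defections >= threshold else "C"

# Example: the opponent defects once in round 3, so punishment starts in round 4.
opponent_moves = ["C", "C", "D", "C", "C"]
my_moves = [grim_trigger(opponent_moves[:i]) for i in range(len(opponent_moves) + 1)]
print(my_moves)  # ['C', 'C', 'C', 'D', 'D', 'D']
```

Raising the threshold or limiting the length of the punishment phase yields other, more forgiving trigger strategies.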
https://en.wikipedia.org/wiki/Trigger_strategy
In mathematics, the quantum Markov chain is a reformulation of the ideas of a classical Markov chain, replacing the classical definitions of probability with quantum probability. Very roughly, the theory of a quantum Markov chain resembles that of a measure-many automaton, with some important substitutions: the initial state is to be replaced by a density matrix, and the projection operators are to be replaced by positive operator valued measures. More precisely, a quantum Markov chain is a pair (E, ρ) with ρ a density matrix and E a quantum channel such that E : B ⊗ B → B is a completely positive trace-preserving map, with B a C*-algebra of bounded operators. The pair must obey the quantum Markov condition Tr ρ(b1 b2) = Tr ρ E(b1 ⊗ b2) for all b1, b2 ∈ B.
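Although the definition above is stated at the level of C*-algebras, its basic ingredients, a density matrix and a completely positive trace-preserving map, can be illustrated concretely in finite dimensions. The following NumPy sketch is a generic illustration under that assumption, using a single-qubit dephasing channel as the example channel; it is not an implementation of the pair (E, ρ) construction itself.

```python
import numpy as np

# A density matrix: positive semidefinite with trace one. Here, the pure state |+><+|.
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
rho = plus @ plus.conj().T

# A quantum channel in Kraus form E(rho) = sum_k K_k rho K_k^dagger,
# completely positive and trace-preserving when sum_k K_k^dagger K_k = I.
# Example: a phase-damping (dephasing) channel with parameter p.
p = 0.3
kraus = [np.sqrt(1 - p) * np.eye(2),
         np.sqrt(p) * np.diag([1.0, 0.0]),
         np.sqrt(p) * np.diag([0.0, 1.0])]

def apply_channel(rho, kraus_ops):
    """Apply a CPTP map, given by its Kraus operators, to a density matrix."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Trace preservation: sum_k K_k^dagger K_k equals the identity, so Tr E(rho) = Tr rho = 1.
completeness = sum(K.conj().T @ K for K in kraus)
print(np.allclose(completeness, np.eye(2)))                   # True
print(np.isclose(np.trace(apply_channel(rho, kraus)), 1.0))   # True
```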
https://en.wikipedia.org/wiki/Quantum_Markov_chain
Disinformation attacksare strategic deception campaigns[1]involvingmedia manipulationandinternet manipulation,[2]to disseminatemisleading information,[3]aiming toconfuse, paralyze, andpolarizeanaudience.[4]Disinformationcan be considered an attack when it involves orchestrated and coordinated efforts[5]to build an adversarial narrative campaign that weaponizes multiple rhetorical strategies and forms of knowing—including not only falsehoods but also truths,half-truths, andvalue-ladenjudgements—to exploit and amplify identity-driven controversies.[6]Disinformation attacks use media manipulation to targetbroadcast medialike state-sponsored TV channels and radios.[7][8]Due to the increasing use of internet manipulation onsocial media,[2]they can be considered acyber threat.[9][10]Digital tools such asbots,algorithms, andAI technology, along with human agents includinginfluencers, spread and amplify disinformation tomicro-targetpopulations on online platforms likeInstagram,Twitter,Google,Facebook, andYouTube.[11][6] According to a 2018 report by theEuropean Commission,[12]disinformation attacks can pose threats todemocratic governance, by diminishing the legitimacy of the integrity ofelectoral processes. Disinformation attacks are used by and againstgovernments,corporations, scientists, journalists,activists, and other private individuals.[13][14][15][16]These attacks are commonly employed to reshape attitudes and beliefs, drive a particular agenda, or elicit certain actions from a target audience. Tactics include circulating incorrect or misleading information, creating uncertainty, and undermining the legitimacy of official information sources.[17][18][19] An emerging area ofdisinformation researchfocuses on the countermeasures to disinformation attacks.[20][21][19]Technologically, defensive measures includemachine learning applicationsandblockchain technologiesthat can flag disinformation ondigital platforms.[22][18]Socially, educational programs are being developed to teach people how to better discern between facts and disinformation online.[23]Journalists publish recommendations for assessing sources.[24]Commercially, revisions to algorithms,advertising, and influencer practices on digital platforms are proposed.[2]Individual interventions include actions that can be taken by individuals to improve their own skills in dealing with information (e.g.,media literacy), and individual actions to challenge disinformation. Disinformation attacks involve the intentional spreading of false information, with end goals of misleading, confusing, and encouraging violence,[25]and gaining money, power, or reputation.[26]Disinformation attacks may involve political, economic, and individual actors. They may attempt to influence attitudes and beliefs, drive a specific agenda, get people to act in specific ways, or destroy credibility of individuals or institutions. The presentation of incorrect information may be the most obvious part of a disinformation attack, but it is not the only purpose. The creation of uncertainty and the undermining of both correct information and the credibility of information sources are often intended as well.[17][18][19] If individuals can be convinced of something that is factually incorrect, they may make decisions that will run counter to the best interests of themselves and those around them. 
If the majority of people in a society can be convinced of something that is factually incorrect, the misinformation may lead to political and social decisions that are not in the best interest of that society. This can have serious impacts at both individual and societal levels.[27] In the 1990s, a British doctor who held a patent on a single-shot measles vaccine promoted distrust of combinedMMR vaccine. Hisfraudulent claimswere meant to promote sales of his own vaccine. The subsequent media frenzy increased fear and many parents chose not to immunize their children.[28]This was followed by a significant increase in cases, hospitalizations and deaths that would have been preventable by the MMR vaccine.[29][30]It also led to the expenditure of substantial money on follow-up research that tested the assertions made in the disinformation,[31]and on public information campaigns attempting to correct the disinformation. The fraudulent claim continues to be referenced and to increasevaccine hesitancy.[32] In the case of the2020 United States presidential election, disinformation was used in an attempt to convince people to believe something that was not true and change the outcome of the election.[33][34]Repeated disinformation messages about the possibility of election fraud were introduced years before the actual election occurred, as early as 2016.[35][36]Researchers found that much of the fake news originated in domestic right-wing groups. The nonpartisan Election Integrity Partnership reported prior to the election that "What we're seeing right now are essentially seeds being planted, dozens of seeds each day, of false stories... They're all being planted such that they could be cited and reactivated ... after the election."[37]Groundwork was laid through multiple and repeated disinformation attacks for claims that voting was unfair and to delegitimize the results of the election once it occurred.[37]Although the2020 United States presidential electionresults were upheld, some people still believe the "big lie".[34] People who get information from a variety of news sources, not just sources from a particular viewpoint, are more likely to detect disinformation.[38]Tips for detecting disinformation include reading reputable news sources at a local or national level, rather than relying on social media. Beware of sensational headlines that are intended to attract attention and arouse emotion. Fact-check information broadly, not just on one usual platform or among friends. Check the original source of the information. Ask what was really said, who said it, and when. Consider possible agendas or conflicts of interest on the part of the speaker or those passing along the information.[39][40][41][42][43] Sometimes undermining belief in correct information is a more important goal of disinformation than convincing people to hold a new belief. In the case of combined MMR vaccines, disinformation was originally intended to convince people of a specific fraudulent claim and by doing so promote sales of a competing product.[28]However, the impact of the disinformation became much broader. The fear that one type of vaccine might pose a danger fueled general fears that vaccines might pose a risk. 
Rather than convincing people to choose one product over another, belief in a whole area of medical research was eroded.[32] There is widespread agreement that disinformation is spreading confusion.[44]This is not just a side effect; confusing and overwhelming people is an intentional objective.[45][46]Whether disinformation attacks are used against political opponents or "commercially inconvenient science", they sow doubt and uncertainty as a way of undermining support for an opposing position and preventing effective action.[47] A 2016 paper describes social media-driven political disinformation tactics as a "firehose of falsehood" that "entertains, confuses and overwhelms the audience."[48]Four characteristics were illustrated with respect to Russian propaganda. Disinformation is used in a way that is 1) high-volume and multichannel 2) continuous and repetitive 3) ignores objective reality and 4) ignores consistency. It becomes effective by creating confusion and obscuring, disrupting and diminishing the truth. When one falsehood is exposed, "the propagandists will discard it and move on to a new (though not necessarily more plausible) explanation."[48]The purpose is not to convince people of a specific narrative, but to "Deny, deflect, distract".[49] Countering this is difficult, in part because "It takes less time to make up facts than it does to verify them."[48]There is evidence that false information "cascades" travel farther, faster, and more broadly than truthful information, perhaps due to novelty and emotional loading.[50]Trying to fight amany-headed hydraof disinformation may be less effective than raising awareness of how disinformation works and how to identify it, before an attack occurs.[48]For example, Ukraine was able to warm citizens and journalists about the potential use of state-sponsoreddeepfakesin advance of an actual attack, which likely slowed its spread.[51] Another way to counter disinformation is to focus on identifying and countering its real objective.[48]For example, if disinformation is trying to discourage voters, find ways to empower voters and elevate authoritative information about when, where and how to vote.[52]If claims of voter fraud are being put forward, provide clear messaging about how the voting process occurs, and refer people back to reputable sources that can address their concerns.[53] Disinformation involves more than just a competition between inaccurate and accurate information. Disinformation, rumors and conspiracy theories call into question underlying trust at multiple levels. Undermining of trust can be directed at scientists, governments and media and have very real consequences. Public trust in science is essential to the work of policymakers and to good governance, particularly for issues in medicine, public health, and the environmental sciences. It is essential that individuals, organizations and governments have access to accurate information when making decisions.[14][15] An example is disinformation around COVID-19 vaccines. 
Disinformation has targeted the products themselves, the researchers and organizations who develop them, the healthcare professionals and organizations who administer them, and the policy-makers that have supported their development and advised their use.[14][54][55]Countries where citizens had higher levels of trust in society and government appear to have mobilized more effectively against the virus, as measured by slower virus spread and lower mortality rates.[56] Studies of people's beliefs about the amount of disinformation and misinformation in the news media suggest that distrust of traditional news media tends to be associated with reliance on alternate information sources such as social media. Structural support for press freedoms, a stronger independent press, and evidence of the credibility and honesty of the press can help to restore trust in traditional media as a provider of independent, honest, and transparent information.[57][46] A major tactic of disinformation is to attack and attempt to undermine the credibility of people and organizations who are in a position to oppose the disinformation narrative due to their research or position of authority.[58]This can include politicians, government officials, scientists, journalists, activists, human rights defenders and others.[16] For example, aNew Yorkerreport in 2023 revealed details about the campaign run by theUAE, under which the Emirati PresidentMohamed bin Zayedpaid millions of euros to a Swiss businessman, Mario Brero, for "dark PR" against their targets. Brero and his company Alp Services used the UAE money to create damning Wikipedia entries and publish propaganda articles against Qatar and those with ties to theMuslim Brotherhood. Targets included the company Lord Energy, which eventually declared bankruptcy following unproven allegations of links to terrorism.[59]Alp was also paid by the UAE to publish 100 propaganda articles a year against Qatar.[60] Disinformation attacks on scientists and science, including attacks funded by the tobacco and fossil fuels industries, have been painstakingly documented in books such asMerchants of Doubt,[58][61][62]Doubt Is Their Product,[63][64]andThe Triumph of Doubt: Dark Money and the Science of Deception(2020).[65][66]While scientists, doctors and teachers are considered the most trustworthy professionals globally[15]scientists are concerned about whether confidence in science has decreased.[15][55]Sudip Parikh, CEO of theAmerican Association for the Advancement of Science(AAAS) in 2022 is quoted as saying "We now have a significant minority of the population that's hostile to the scientific enterprise... We're going to have to work hard to regain trust."[55]That said, at the same time that disinformation poses a threat, the widespread use of social media by scientists offers an unprecedented opportunity for scientific communication and engagement between scientists and the public, with the potential to increase public knowledge.[15][67] TheAmerican Council on Science and Healthhas advice for scientists facing a disinformation campaign, and notes that disinformation campaigns often incorporate some elements of truth to make them more convincing. 
The five recommendations include identifying and acknowledging any parts of the story that are actually true; explaining why other parts are untrue, out of context or manipulated; calling out motivations that may be behind the disinformation, such as financial interests or power; preparing an "accusation audit" in anticipation of further attacks; and maintaining calm and self-control.[68]Others recommend educating oneself about the platforms one uses and the privacy tools that platforms offer to protect personal information and to mute, block, and report online participants. Disinformers and online trolls are unlikely to engage in reasoned discussion or interact in good faith, and responding to them is rarely useful.[69] Studies clearly document the harassment of scientists, personally and in terms of scientific credibility. In 2021, aNaturesurvey reported that nearly 60% of scientists who had made public statements about COVID-19 had their credibility attacked. Attacks disproportionately affected those in nondominant identity groups such as women, transgender people, and people of color.[69]A highly visible example isAnthony S. Fauci. He is deeply respected nationally and internationally as an expert on infectious diseases. He also has been subjected to intimidation, harassment and death threats fueled by disinformation attacks and conspiracy theories.[70][71][72]Despite those experiences, Fauci encourages early-career scientists "not to be deterred, because the satisfaction and the degree of contribution you can make to society by getting into public service and public health is immeasurable."[73] Individual decisions, like whether or not to smoke, are major targets for disinformation. So are policymaking processes such as the formation of public health policy, the recommendation and adoption of policy measures, and the acceptance or regulation of processes and products. Public opinion and policy interact: public opinion and the popularity of public health measures can strongly influence government policy and the creation and enforcement of industry standards. Disinformation attempts to undermine public opinion and prevent the organization of collection actions, including policy debates, government action, regulation and litigation.[47] An important type of collective activity is the act of voting. In the2017 Kenyan general election, 87% of Kenyans surveyed reported encountering disinformation before the August election, and 35% reported being unable to make an informed voting decision as a result.[8]Disinformation campaigns often target specific groups such as black or Latino voters to discourage voting andcivic engagement. Fake accounts and bots are used to amplify uncertainty about whether voting really matters, whether voters are "appreciated", and whose interests politicians care about.[74][75]Microtargeting can present messages precisely designed for a chosen population, while geofencing can pinpoint people based on where they go, like churchgoers. In some cases, voter suppression attacks have circulated incorrect information about where and when to vote.[76]During the 2020 U.S. 
Democratic primaries, disinformation narratives arose around the use of masks and the use of mail-in ballots, relating to whether and how people would vote.[77] Disinformation strikes at the foundation of democratic government: "the idea that the truth is knowable and that citizens can discern and use it to govern themselves."[78]Disinformation campaigns are designed by both foreign and domestic actors to gain political and economic advantage. The undermining of functional government weakens the rule of law and can enable both foreign and domestic actors to profit politically and economically. At home and abroad, the goal is to weaken opponents. Elections are an especially critical target, but the day-to-day ability to govern is also undermined.[78][79] The Oxford Internet Institute atOxford Universityreports that in 2020, organized social media manipulation campaigns were active in 81 countries, an increase from 70 countries in 2019. 76 of those countries used disinformation attacks. The report describes disinformation as being produced globally "on an industrial scale".[80] A Russian operation known as theInternet Research Agency(IRA) spent thousands on social media ads to influence the2016 United States presidential election, confuse the public on key political issues and sow discord. These political ads leveraged user data to micro-target certain populations and spread misleading information, with an end goal of exacerbatingpolarizationand eroding public trust in political institutions.[10][81][20]TheComputational PropagandaProject at theOxford Internet Institutefound that the IRA's ads specifically sought to sow mistrust towards the U.S. government amongMexican Americansand discourage voter turnout amongAfrican Americans.[82] An examination of twitter activity prior to the2017 French presidential electionindicates that 73% of the disinformation flagged byLe Mondewas traceable to two political communities: one associated withFrançois Fillon(right-wing, with 50.75% of the fake link shares) and another withMarine Le Pen(extreme-right wing, 22.21%). 6% of accounts in the Fillon community and 5% of the Le Pen community were early spreaders of disinformation. Debunking of the disinformation came from other communities, and was most often related toEmmanuel Macron(39.18% of debunks) andJean-Luc Mélenchon(14% of debunks).[83] Another analysis, of the2017 #MacronLeaksdisinformation campaign, illustrates frequent patterns of election-related disinformation campaigns. Such campaigns often peak 1–2 days before an election. The scale of a campaign like #MacronLeaks can be comparable to the volume of regular discussion in that time period, suggesting that it can obtain considerable collective attention. About 18 percent of the users involved in #MacronLeaks were identifiable as bots. Spikes in bot content tended to occur slightly ahead of spikes in human-created content, suggesting bots were able to trigger cascades of disinformation. Some bot accounts showed a pattern of previous use: creation shortly before the 2016 U.S. presidential election, brief usage then, and no further activity until early May 2017, prior to the French election. 
Alt-right media personalities including Britain'sPaul Joseph Watsonand AmericanJack Posobiecprominently shared MacronLeaks content prior to the French election.[84]Experts worry that disinformation attacks will increasingly be used to influence national elections and democratic processes.[10] InA Lot of People Are Saying: The New Conspiracism and the Assault on Democracy(2020)Nancy L. RosenblumandRussell Muirheadexamine the history and psychology ofconspiracy theoriesand the ways in which they are used to de-legitimize the political system. They distinguish between classical conspiracy theory in which actual issues and events (such as theassassination of John F. Kennedy) are examined and combined to create a theory, and a new form of "conspiracism without theory" that relies on repeating false statements and hearsay without factual grounding.[85][86] Such disinformation exploits human bias towards accepting new information. Humans constantly share information and rely on others to provide information they cannot verify for themselves. Much of that information will be true, whether they ask if it is cold outside or cold in Antarctica. As a result, they tend to believe what they hear. Studies show an "illusory truth effect": the more often people hear a claim, the more likely they are to consider it true. This is the case even when people identify a statement as false the first time they see it; they are likely to rank the probability that it is true higher after multiple exposures.[86][87]Social media is particularly dangerous as a source of disinformation because robots and multiple fake accounts are used to repeat and magnify the impact of false statements. Algorithms track what users click on and recommend content similar to what users have chosen, creatingconfirmation biasandfilter bubbles. In more tightly focused communities anecho chambereffect is enhanced.[88][89][86][90][91][20] Autocratshave employed domestic voter disinformation attacks to cover upelectoral corruption. Voter disinformation can include public statements that assert local electoral processes are legitimate and statements that discredit electoral monitors. Public-relations firms may be hired to execute specialized disinformation campaigns, including media advertisements and behind-the-sceneslobbying, to push the narrative of an honest and democratic election.[92]Independent monitoring of the electoral process is essential to combatting electoral disinformation. Monitoring can include both citizen election monitors and international observers, as long as they are credible. Norms for accurate characterization of elections are based on ethical principles, effective methodologies, and impartial analysis. 
Democratic norms emphasize the importance of open electoral data, the free exercise of political rights, and protection for human rights.[92] Disinformation attacks can increase political polarization and alter public discourse.[91]Foreign manipulation campaigns may attempt to amplify extreme positions and weaken a target society, while domestic actors may try to demonize political opponents.[78]States with highly polarized political landscapes and low public trust in local media and government are particularly vulnerable to disinformation attacks.[93][94] There is concern that Russia will employ disinformation, propaganda, and intimidation to destabilizeNATOmembers, such as theBaltic statesand coerce them into accepting Russian narratives and agendas.[82][93]During theRusso-Ukrainian Warof 2014, Russia combined traditional combat warfare with disinformation attacks in a form ofhybrid warfarein its offensive strategy, to sow doubt and confusion among enemy populations and intimidate adversaries, erode public trust in Ukrainian institutions, and boost Russia's reputation and legitimacy.[95]Since escalating theRusso-Ukrainian Warwith the2022 Russian invasion of Ukraine, Russia's pattern of disinformation has been described byCBC Newsas "Deny, deflect, distract".[49] Thousands of stories have been debunked, including doctored photographs and deepfakes. At least 20 main "themes" are being promoted by Russia propaganda, targeting audiences far beyond Ukraine and Russia. Many of these try to reinforce ideas that Ukraine is somehow Nazi-controlled, that its military forces are weak, and that damage and atrocities are due to Ukrainian, not Russian, actions.[49]Many of the images they examine are shared onTelegraph. Government organizations and independent journalistic groups such asBellingcatwork to confirm or deny such reports, often using open-source data and sophisticated tools to identify where and when information has originated and whether claims are legitimate. Bellingcat works to provide an accurate account of events as they happen and to create a permanent, verified, longer-term record.[96] Fear-mongering and conspiracy theories are used to encourage polarization, to promote exclusionary narratives, and to legitimize hate speech and aggression.[54][89]As has been painstakingly documented, the period leading up tothe Holocaustwas marked by repeated disinformation and increasing persecution by theNazi government,[97][98]culminating in the mass murder[99]of 165,200 German Jews[100]by a "genocidal state".[99]Populations in Africa, Asia, Europe and South America today are considered to be at serious risk for human rights abuses.[8]Changing conditions in the United States have also been identified as increasing risk factors for violence.[94] Elections are particularly tense political transition points, emotionally charged at any time, and increasingly targeted by disinformation. These conditions increase the risk of individual violence, civil unrest, and mass atrocities. Countries such asKenyawhose history has involved ethnic or election-related violence, foreign or domestic interference, and a high reliance on the use of social media for political discourse, are considered to be at higher risk. The United Nations Framework of Analysis for Atrocity Crimes identifies elections as an atrocity risk indicator: disinformation can act as a threat multiplier foratrocity crime. 
Recognition of the seriousness of this problem is essential, to mobilize governments, civic society, and social media platforms to take steps to prevent both online and offline harm.[8] Disinformation attacks target the credibility of science, particularly in areas ofpublic health[26]andenvironmental science.[101][15]Examples include denying the dangers ofleaded gasoline,[102][103]smoking,[104][105][106]andclimate change.[61][107][21][108] A pattern for disinformation attacks involving scientific sources developed in the 1920s. It illustrates tactics that continue to be used.[109]As early as 1910, industrial toxicologistAlice Hamiltondocumented the dangers associated with exposure tolead.[110][111]In the 1920s,Charles Kettering,Thomas Midgley Jr.andRobert A. Kehoeof the Ethyl Gasoline Corporation introduced lead into gasoline. Following the sensational madness and deaths of workers at their plants, a Public Health Service conference was held in 1925, to review the use oftetraethyllead(TEL). Hamilton and others warned of leaded gasoline's potential danger to people and the environment. They questioned the research methodology used by Kehoe, who claimed that lead was a "natural" part of the environment and that high lead levels in workers were "normal".[112][113][102]Kettering, Midgley and Kehoe emphasized that a gas additive was needed, and argued that until "it is shown ... that an actual danger to the public is had as a result",[110]the company should be allowed to produce its product. Rather than requiring industry to show that their product was safe before it could be sold, the burden of proof was placed on public health advocates to show uncontestable proof that harm had occurred.[110][114]Critics of TEL were described as "hysterical".[115]With industry support, Kehoe went on to became a prominent industry expert and advocate for the position that leaded gasoline was safe, holding "an almost complete monopoly" on research in the area.[116]It would be decades before his work was finally discredited.[102]In 1988, theEPAestimated that over the previous 60 years, 68 million children suffered high toxic exposure to lead from leaded fuels.[117]A 2022 review reported that the use of lead in gasoline was linked to neurodevelopmental disabilities in children and to neurobehavioral deficits, cardiovascular and kidney disease, and premature deaths in adults.[118] By the 1950s, the production and use of biased "scientific" research was part of a consistent "disinformation playbook", used by companies in the tobacco,[119]pesticide[120]and fossil fuels industries.[61][107][121]In many cases, the same researchers, research groups, and public relations firms were hired by multiple industries. They repeatedly argued that products were safe while knowing that they were unsafe. When assertions of safety were challenged, it was argued that the products were necessary.[106]Through coordinated and widespread campaigns, they worked to influence public opinion and to manipulate government officials and regulatory agencies, to prevent regulatory or legal action that might interfere with profits.[47] Similar tactics continue to be used by scientific disinformation campaigns. When proof of harm is presented, it is argued that the proof is not sufficient. The argument that more proof is needed is used to put off action to some future time. Delays are used to block attempts to limit or regulate industry, and to avoid litigation, while continuing to profit. 
Industry-funded experts carry out research that all too often can be challenged on methodological grounds as well as over conflicts of interest. Disinformers use bad research as a basis for claiming that scientists are not in agreement, and to generate specific claims as part of a disinformation narrative. Opponents are often attacked on a personal level as well as in terms of their scientific work.[122][47][123] A tobacco industry memo summarized this approach by saying "Doubt is our product".[122]Scientists generally consider a question in terms of the likelihood that a conclusion is supported, given the weight of the best available scientific evidence. Evidence tends to involve measurement, and measurement introduces a potential for error. A scientist may say that available evidence is sufficient to support a conclusion about a problem, but will rarely claim that a problem is fully understood or that a conclusion is 100% certain. Disinformation rhetoric tries to undermine science and sway public opinion by using a "doubt strategy". Reframing the normal scientific process, disinformation often suggests that anything less than 100% certainty implies doubt, and that doubt means there is no consensus about an issue. Disinformation attempts to undermine both certainty about a particular issue and about science itself.[122][47]Decades of disinformation attacks have considerably eroded public belief in science.[47] Scientific information can become distorted as it is transferred among primary scientific sources, the popular press, and social media. This can occur both intentionally and unintentionally. Some features of current academic publishing like the use of preprint servers make it easier for inaccurate information to become public, particularly if the information reported is novel or sensational.[39] Steps to protect science from disinformation and interference include both individual actions on the part of scientists, peer reviewers, and editors, and collective actions via research, granting, and professional organizations, and regulatory agencies.[47][124][125] Traditional media channels can be used to spread disinformation. For example,Russia Todayis a state-funded news channel that is broadcast internationally. It aims to boost Russia's reputation abroad and also depictWesternnations, such as the U.S., in a negative light. It has served as a platform to disseminate propaganda andconspiracy theoriesintended to mislead and misinform its audience.[7] Within the United States, sharing of disinformation and propaganda has been associated with the development of increasingly "partisan" media, most strongly in right-wing sources such asBreitbart,The Daily Caller, andFox News.[126]As local news outlets have declined, there has been an increase in partisan media outlets that "masquerade" as local news sources.[127][128]The impact of partisanship and its amplification through the media is documented. For example, attitudes to climate legislation were bipartisan in the 1990s but became intensely polarized by 2010. While media messaging on climate from Democrats increased between 1990 and 2015 and tended to support the scientific consensus on climate change, Republican messaging around climate decreased and became more mixed.[21] A "gateway belief" that affects people's acceptance of scientific positions and policies is their understanding of the extent of scientific agreement on a topic. Undermining scientific consensus is therefore a frequent disinformation tactic. 
Indicating that there is a scientific consensus (and explaining the science involved) can help to counter misinformation.[21] Indicating the broad consensus of experts can help to align people's perceptions and understandings with the empirical evidence.[129] Presenting messages in a way that aligns with someone's cultural frame of reference makes them more likely to be accepted.[21] It is important to avoid false balance, in which opposing claims are presented in a way that is out of proportion to the actual evidence for each side. One way to counter false balance is to present a weight-of-evidence statement that explicitly indicates the balance of evidence for different positions.[129][130] Perpetrators primarily use social media channels as a medium to spread disinformation, using a variety of tools.[131] Researchers have compiled multiple actions through which disinformation attacks occur on social media, which are summarized in the table below.[2][132][133] An app called "Dawn of Glad Tidings", developed by Islamic State members, assists in the organization's efforts to rapidly disseminate disinformation in social media channels. When users download the app, they are prompted to link it to their Twitter account and grant the app access to tweet from their personal account. This app allows automated tweets to be sent out from real user accounts and helps create trends across Twitter that amplify disinformation produced by the Islamic State on an international scale.[82] In many cases, individuals and companies in different countries are paid to create false content and push disinformation, sometimes earning both payments and advertising revenue by doing so.[131][2] "Disinfo-for-hire actors" often promote multiple issues, or even multiple sides of the same issue, solely for material gain.[146] Others are motivated politically or psychologically.[147][2] More broadly, the monetization practices of social media and online advertising can be exploited to amplify disinformation.[148] Social media's business model can be used to spread disinformation: media outlets (1) provide content to the public at little or no cost, (2) capture and refocus public attention, and (3) collect, use and resell user data. Advertising companies, publishers, influencers, brands, and clients may benefit from disinformation in a variety of ways.[2] In 2022, the Journal of Communication published a study of the political economy underlying disinformation around vaccines. Researchers identified 59 English-language "actors" that provided "almost exclusively anti-vaccination publications". Their websites monetized disinformation through appeals for donations, sales of content-based media and other merchandise, third-party advertising, and membership fees. Some maintained a group of linked websites, attracting visitors with one site and appealing for money and selling merchandise on others. In how they gained attention and obtained funding, their activities displayed a "hybrid monetization strategy". They attracted attention by combining eye-catching aspects of "junk news" and online celebrity promotion. At the same time, they developed campaign-specific communities to publicize and legitimize their position, similar to radical social movements.[147] Emotion is used and manipulated to spread disinformation and false beliefs.[20] Arousing emotions can be persuasive.
When people feel strongly about something, they are more likely to see it as true.[87] Emotion can also cause people to think less clearly about what they are reading and the credibility of its source. Content that appeals to emotion is more likely to spread quickly on the internet. Fear, confusion, and distraction can all interfere with people's ability to think critically and make good decisions.[149] Human psychology is leveraged to make disinformation attacks more potent and viral.[20] Psychological phenomena, such as stereotyping, confirmation bias, selective attention, and echo chambers, contribute to the virality and success of disinformation on digital platforms.[140][150][6] Disinformation attacks are often considered a type of psychological warfare because of their use of psychological techniques to manipulate populations.[151][27] Perceptions of identity and a sense of belonging are manipulated so as to influence people.[20] Feelings of social belonging are reinforced to encourage affiliation with a group and discourage dissent. This can make people more susceptible to an influencer or leader who may encourage his "engaged followership" to attack others. This type of behavior has been compared to the collective behavior of mobs and is similar to dynamics within cults.[69][152][153] As has been noted by the Knight First Amendment Institute at Columbia University, "The misinformation problem is social and not just technological or legal."[154] It raises serious ethical issues about how we engage with each other.[155] The 2023 Summit on "Truth, Trust, and Hope", held by the Nobel Committee and the US National Academy of Sciences, identified disinformation as more dangerous than any other crisis because of the way in which it hampers the addressing and resolution of all other problems.[156] Defensive measures against disinformation can occur at a wide variety of levels, in diverse societies, under different laws and conditions. Responses to disinformation can involve institutions, individuals, and technologies, including government regulation, self-regulation, monitoring by third parties, the actions of private actors, the influence of crowds, and technological changes to platform architecture and algorithmic behaviors.[157][158] Advanced systems that involve blockchain technologies, crowd wisdom and artificial intelligence have been developed to fight online disinformation.[22] It is also important to develop and share best practices for countering disinformation and building resilience against it.[78] Existing social, legal and regulatory guidelines may not apply easily to actions in an international virtual world, where private corporations compete for profitability, often on the basis of user engagement.[157][2] Ethical concerns apply to some of the possible responses to disinformation, as people debate issues of content moderation, free speech, the right to personal privacy, human identity, human dignity, suppression of human rights and religious freedom, and the use of data.[155] The scope of the problem means that "Building resilience to and countering manipulative information campaigns is a whole-of-society endeavor."[78] While authoritarian regimes have chosen to use disinformation attacks as a policy tool, their use poses specific dangers for democratic governments: using equivalent tactics will further deepen public distrust of political processes and undermine the basis of democratic and legitimate government.
"Democracies should not seek to covertly influence public debate either by deliberately spreading information that is false or misleading or by engaging in deceptive practices, such as the use of fictitious online personas."[159]Further, democracies are encouraged to play to their strengths, including rule of law, respect for human rights, cooperation with partners and allies,soft power, and technical capability to address cyber threats.[159] The constitutional norms that govern a society are needed both to makegovernanceeffective and to averttyranny.[154]Providing accurate information and countering disinformation are legitimate activities of government. TheOECDsuggests that public communication of policy responses should followopen governmentprinciples of integrity, transparency, accountability and citizen participation.[160]A discussion of the US government's ability to legally respond to disinformation argues that responses should be based on principles of transparency and generality. Responses should avoidad hominemattacks, racial appeals, or selectivity in the person responded to. Criticism should focus first on providing correct information and secondarily on explaining why the false information is wrong, rather than focusing on the speaker or repeating the false narrative.[154][149][161] In the case of theCOVID-19pandemic, multiple factors created "space for misinformation to proliferate". Government responses to thispublic healthissue indicate several areas of weakness including gaps in basic public health knowledge, lack of coordination in government communication, and confusion about how to address a situation involving significant uncertainty. Lessons from the pandemic include the need to admit uncertainty when it exists, and to distinguish clearly between what is known and what is not yet known. Science is a process, and it is important to recognize and communicate that scientific understanding and related advice will change over time on the basis of new evidence.[160] Regulation of disinformation raises ethical issues. Therighttofreedom of expressionis recognized as ahuman rightin theUniversal Declaration of Human Rightsandinternational human rights lawby theUnited Nations. Many countries haveconstitutional lawthat protects free speech. A country's laws may identify specific categories of speech that are or are not protected, and specific parties whose actions are restricted.[157] TheFirst Amendment to the United States Constitutionprotects both freedom of speech andfreedom of the pressfrom interference by theUnited States Congress. 
As a result, the regulation of disinformation in the United States tends to be left to private rather than government action.[157] The First Amendment does not protect speech used to incite violence or break the law,[162]or "obscenity, child pornography, defamatory speech, false advertising, true threats, and fighting words".[163]With these exceptions, debating matters of "public or general interest" in a way that is "uninhibited, robust and wide-open" is expected to benefit a democratic society.[164] The First Amendment tends to rely on counterspeech as a workable corrective measure, preferring refutation of falsehood to regulation.[157][154]There is an underlying assumption that identifiable parties will have the opportunity to share their views on a relatively level playing field, where a public figure being drawn into a debate will have increased access to the media and a chance of rebuttal.[164]This may no longer hold true when rapid, massive disinformation attacks are deployed against an individual or group through anonymous or multiple third parties, where "A half-day's delay is a lifetime for an online lie."[154] Other civil and criminal laws are intended to protect individuals and organizations in cases where speech involvesdefamation of character(libelorslander) orfraud. In such cases, being incorrect is not sufficient to justify legal or governmental action. Incorrect information must demonstrably cause harm to others or enable the liar to gain an unjustified benefit. Someone who has knowingly spread disinformation and used that disinformation to gain money may be chargeable with fraud.[165]The extent to which these existing laws can be effectively applied against disinformation attacks is unclear.[157][154][166]Under this approach, a subset of disinformation, which is not only untrue but "communicated for the purpose of gaining profit or advantage by deceit and causes harm as a result" could be considered "fraud on the public",[33]and no longer considered a type of protected speech. Much of the speech that constitutes disinformation would not meet this test.[33] TheDigital Services Act(DSA) is aRegulationinEU lawthat establishes a legal framework within the European Union for the management of content on intermediaries, including illegal content, transparent advertising, and disinformation.[167][168]The European Parliament approved the DSA along with theDigital Markets Acton 5 July 2022.[169]The European Council gave its final approval to the Regulation on a Digital Services Act on 4 October 2022.[170]It was published in theOfficial Journal of the European Unionon the 19 October 2022. Affected service providers will have until 1 January 2024 to comply with its provisions.[169]DSA aims to harmonise differing laws at the national level in the European Union[167]including Germany (NetzDG), Austria ("Kommunikationsplattformen-Gesetz") and France ("Loi Avia").[171]Platforms with more than 45 million users in theEuropean Union, includingFacebook,YouTube,TwitterandTikTokwould be subject to the new obligations. 
Companies failing to meet those obligations could risk fines of up to 10% of their annual turnover.[172] As of April 25, 2023, Wikipedia was one of 17 platforms to be designated a Very Large Online Platform (VLOP) by the EU Commission, with regulations taking effect as of August 25, 2023.[173]In addition to any steps taken by the Wikimedia Foundation, Wikipedia's compliance with the Digital Services Act will be independently audited, on a yearly basis, beginning in 2024.[174] It has been suggested that China and Russia are jointly portraying the United States and the European Union in an adversarial way in terms of the use of information and technology. This narrative is then used by China and Russia to justify the restriction of freedom of expression, access to independent media, and internet freedoms. They have jointly called for the "internationalization of internet governance", meaning distribution of control of the internet to individual sovereign states. In contrast, calls for global internet governance emphasize the existence of a free and open internet, whose governance involves citizens and civil society.[175][78]Democratic governments need to be aware of the potential impact of measures used to restrict disinformation both at home and abroad. This is not an argument that should block legislation, but it should be taken into consideration when forming legislation.[78] In the United States, the First Amendment limits the actions of Congress, not those of private individuals, companies and employers.[162]Private entities can establish their own rules (subject to local and international laws) for dealing with information.[163]Social media platforms like Facebook, Twitter and Telegram could legally establish guidelines for moderation of information and disinformation on their platforms. Ideally, platforms should attempt to balance free expression by their users against the moderation or removal of harmful and illegal speech.[40][176] Sharing of information through broadcast media and newspapers has been largely self-regulating. It has relied on voluntary self-governance and standard-setting by professional organizations such as the USSociety of Professional Journalists(SPJ). The SPJ has acode of ethicsfor professional accountability, which includes seeking and reporting truth, minimizing harm, accountability and transparency.[177]The code states that "whoever enjoys a special measure of freedom, like a professional journalist, has an obligation to society to use their freedoms and powers responsibly."[178]Anyone can write a letter to the editor of theNew York Times, but theTimeswill not publish that letter unless they choose to do so.[179] Arguably, social media platforms are treated more like the post office—which passes along information without reviewing it—than they are like journalists and print publishers who make editorial decisions and are expected to take responsibility for what they publish. 
The kinds of ethical, social and legal frameworks that journalism and print publishing have developed have not been applied to social media platforms.[180] It has been pointed out that social media platforms like Facebook and Twitter lack incentives to control disinformation or to self-regulate.[177][157][181] To the extent that platforms rely on advertising for revenue, it is to their financial benefit to maximize user engagement, and the attention of users is demonstrably captured by sensational content.[20][182] Algorithms that push content based on user search histories, frequent clicks and paid advertising lead to unbalanced, poorly sourced, and actively misleading information. This approach is also highly profitable.[177][181][183] When countering disinformation, the use of algorithms for monitoring content is cheaper than employing people to review and fact-check content, but people are more effective at detecting disinformation. People may also bring their own biases (or their employer's biases) to the task of moderation.[180] Privately owned social media platforms such as Facebook and Twitter can legally develop regulations, procedures and tools to identify and combat disinformation on their platforms.[184] For example, Twitter can use machine learning applications to flag content that does not comply with its terms of service and identify extremist posts encouraging terrorism. Facebook and Google have developed a content hierarchy system where fact-checkers can identify and de-rank possible disinformation and adjust algorithms accordingly.[10] Companies are considering using procedural legal systems to regulate content on their platforms as well. Specifically, they are considering using appellate systems: posts may be taken down for violating terms of service and posing a disinformation threat, but users can contest this action through a hierarchy of appellate bodies.[136] Blockchain technology has been suggested as a potential defense mechanism against internet manipulation.[22][185] While blockchain was originally developed to create a ledger of transactions for the digital currency bitcoin, it is now widely used in applications where a permanent record or history of assets, transactions, and activities is desired. It provides a potential for transparency and accountability.[186] Blockchain technology could be applied to make data transport more secure in online spaces and Internet of Things networks, making it difficult for actors to alter or censor content and carry out disinformation attacks.[187] Applying techniques such as blockchain and keyed watermarking on social media and messaging platforms could also help to detect and curb disinformation attacks. The density and rate of forwarding of a message could be observed to detect patterns of activity that suggest the use of bots and fake accounts in disinformation attacks. Blockchain could support both backtracking and forward tracking of events that involve the spreading of disinformation. If content is deemed dangerous or inappropriate, its spread could be curbed immediately.[185] Understandably, methods for countering disinformation that involve algorithmic governance raise ethical concerns.
The use of technologies that track and manipulate information raises questions about "who is accountable for their operation, whether they can create injustices and erode civic norms, and how we should resolve their (un)intended consequences".[177][188][189] A study from thePew Research Centerreports that public support for restriction of disinformation by both technology companies and government increased among Americans from 2018 to 2021. However, views on whether government and technology companies should take such steps became increasingly partisan and polarized during the same time period.[190] Cyber security experts claim that collaboration between public and private sectors is necessary to successfully combat disinformation attacks.[20]Recommended cooperative defense strategies include: However, in the United States, the Republican party is actively opposing both disinformation research and government involvement in fighting disinformation. Republicans gained a majority in the House in January 2023. Since then, the House Judiciary Committee has used legal action to send letters, subpoenas, and threats of legal action to researchers, demanding notes, emails and other records from researchers and even student interns, dating back to 2015. Institutions affected include theStanford Internet ObservatoryatStanford University, theUniversity of Washington, theAtlantic Council's Digital Forensic Research Lab and the social media analytics firm Graphika. Projects include the Election Integrity Partnership, formed to identify attempts "to suppress voting, reduce participation, confuse voters or delegitimize election results without evidence"[192]and the Virality Project, which has examined the spread of false claims about vaccines. Researchers argue that they haveacademic freedomto study social media and disinformation as well asfreedom of speechto report their results.[192][193][194]Despite conservative claims that the government acted to censor speech online, "no evidence has emerged that government officials coerced the companies to take action against accounts".[192] At the state level, state governments that were politically aligned with anti-vaccine activists successfully sought apreliminary injunctionto prevent theBiden Administrationfrom urging social media companies to fight misinformation about public health. The order issued byUnited States Court of Appeals for the Fifth Circuitin 2023 "severely limits the ability of the White House, the surgeon general, [and] the Centers for Disease Control and Prevention... to communicate with social media companies about content related to COVID-19... that the government views as misinformation".[195] Reports on disinformation inArmenia[54]andAsia[78]identify key issues and make recommendations. These can be applied to many other countries, particularly those experiencing "both profound disruption and an opportunity for change".[54]The report emphasizes the importance of strengthening civil society by protecting the integrity of elections and rebuilding trust in public institutions. Steps to support the integrity of elections include: ensuring a free and fair process, allowing independent observation and monitoring, allowing independent journalistic access, and investigating electoral infractions. 
Other suggestions include rethinking state communication strategies to enable all levels of government to more effectively communicate and to address disinformation attacks.[54] National dialogue bringing together diverse public, community, political, state and nonstate actors as stakeholders is recommended for effective long-term strategic planning. Creating a unified strategy for legislation to deal with information spaces is recommended. Balancing concerns about freedom of expression with protections for individuals and democratic institutions is critical.[54][196][197] Another concern is the development of a healthy information environment that supports fact-based journalism, truthful discourse, and independent reporting at the same time that it rejects information manipulation and disinformation. Key issues for the support of resilient independent media include transparency of ownership, financial viability, editorial independence, media ethics and professional standards, and mechanisms for self-regulation.[54][196][197][198][78] During the2018 Mexican general election, the collaborative journalism project Verificado 2018 was established to address misinformation. It involved at least eighty organizations, including local and national media outlets, universities and civil society and advocacy groups. The group researched online claims and political statements and published joint verifications. During the course of the election, they produced over 400 notes and 50 videos documenting false claims and suspect sites, and tracked instances where fake news went viral.[199]Verificado.mx received 5.4 million visits during the election, with its partner organizations registering millions more.[200]: 25To deal with the sharing of encrypted messages viaWhatsApp, Verificado set up a hotline where WhatsApp users could submit messages for verification and debunking. Over 10,000 users subscribed to Verificado's hotline.[199] Organizations promoting civil society and democracy, independent journalists, human rights defenders, and other activists are increasingly targets of disinformation campaigns and violence. Their protection is essential. Journalists, activists and organizations can be key allies in combating false narratives, promoting inclusion, and encouraging civic engagement. Oversight and ethics bodies are also critical.[54][201]Organizations that have developed resources and trainings to better support journalists against online and offline violence andviolence against womeninclude theCoalition Against Online Violence,[202][203]Knight Center for Journalism in the Americas,[204]International Women's Media Foundation,[205]UNESCO,[201][204]PEN America,[206]First Draft,[207]and others.[208] Media literacy education and information on how to identify and combat disinformation is recommended for public schools and universities.[54]In 2022, countries in theEuropean Unionwere ranked on a Media Literacy Index to measure resilience against disinformation.Finland, the highest ranking country, has developed an extensive curriculum that teaches critical thinking and resistance to information warfare, and integrated it into its public education system. 
Finns also rank high in trust in government authorities and the media.[209][210][211] Organizations such as Faktabaari and Mediametka develop tools and resources around information, media and voter literacy.[211] Following a 2007 cyberattack that included disinformation tactics, Estonia focused on improving its cyberdefenses and made media literacy education a major focus from kindergarten through to high school.[212][213] In 2018, the Executive Vice President of the European Commission for A Europe Fit for the Digital Age gathered a group of experts to produce a report with recommendations for teaching digital literacy. Proposed digital literacy curricula familiarize students with fact-checking websites such as Snopes and FactCheck.org. These curricula aim to equip students with critical thinking skills to discern between factual content and disinformation online.[23] Suggested areas to focus on include skills in critical thinking,[214] information literacy,[215][216] science literacy[217] and health literacy.[26] Another approach is to build interactive games such as the Cranky Uncle game, which teaches critical thinking and inoculates players against techniques of disinformation and science denial. The Cranky Uncle game is freely available and has been translated into at least 9 languages.[218][219] Videos for teaching critical thinking and addressing disinformation can also be found online.[220][221] Training and best practices for identifying and countering disinformation are being developed and shared by groups of journalists, scientists, and others (e.g. Climate Action Against Disinformation,[24] PEN America,[222][223][224] UNESCO,[46] Union of Concerned Scientists,[225][226] Young African Leaders Initiative[227]). Research suggests that a number of tactics have proven useful against scientific disinformation around climate change. These include: 1) providing clear explanations about why climate change is occurring; 2) indicating that there is scientific consensus about the existence of climate change and about its basis in human actions; 3) presenting information in ways that are culturally aligned with the listener; and 4) "inoculating" people by clearly identifying misinformation (ideally before a myth is encountered, but also later through debunking).[21][19] A "Toolbox of Interventions Against Online Misinformation and Manipulation" reviews research into individually-focused interventions to combat misinformation and their possible effectiveness. Tactics include:[228][229]
https://en.wikipedia.org/wiki/Disinformation_attack
In mathematics, Chebyshev distance (or Tchebychev distance), maximum metric, or L∞ metric[1] is a metric defined on a real coordinate space where the distance between two points is the greatest of their differences along any coordinate dimension.[2] It is named after Pafnuty Chebyshev. It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board.[3] For example, the Chebyshev distance between f6 and e2 equals 4.

The Chebyshev distance between two vectors or points x and y, with standard coordinates x_i and y_i, respectively, is

D_{\mathrm{Chebyshev}}(x,y) = \max_{i} |x_i - y_i|.

This equals the limit of the L_p metrics:

\lim_{p \to \infty} \left( \sum_{i=1}^{n} |x_i - y_i|^{p} \right)^{1/p},

hence it is also known as the L∞ metric. Mathematically, the Chebyshev distance is a metric induced by the supremum norm or uniform norm. It is an example of an injective metric.

In two dimensions, i.e. plane geometry, if the points p and q have Cartesian coordinates (x_1, y_1) and (x_2, y_2), their Chebyshev distance is

D_{\mathrm{Chebyshev}}(p,q) = \max\bigl(|x_2 - x_1|,\, |y_2 - y_1|\bigr).

Under this metric, a circle of radius r, which is the set of points with Chebyshev distance r from a center point, is a square whose sides have the length 2r and are parallel to the coordinate axes. On a chessboard, where one is using a discrete Chebyshev distance, rather than a continuous one, the circle of radius r is a square of side lengths 2r, measuring from the centers of squares, and thus each side contains 2r+1 squares; for example, the circle of radius 1 on a chess board is a 3×3 square.

In one dimension, all L_p metrics are equal – they are just the absolute value of the difference. The two-dimensional Manhattan distance has "circles", i.e. level sets, in the form of squares, with sides of length √2 r, oriented at an angle of π/4 (45°) to the coordinate axes, so the planar Chebyshev distance can be viewed as equivalent by rotation and scaling to (i.e. a linear transformation of) the planar Manhattan distance. However, this geometric equivalence between L1 and L∞ metrics does not generalize to higher dimensions. A sphere formed using the Chebyshev distance as a metric is a cube with each face perpendicular to one of the coordinate axes, but a sphere formed using Manhattan distance is an octahedron: these are dual polyhedra, but among cubes, only the square (and 1-dimensional line segment) are self-dual polytopes. Nevertheless, it is true that in all finite-dimensional spaces the L1 and L∞ metrics are mathematically dual to each other.

On a grid (such as a chessboard), the points at a Chebyshev distance of 1 from a point are the Moore neighborhood of that point. The Chebyshev distance is the limiting case of the order-p Minkowski distance, when p reaches infinity.

The Chebyshev distance is sometimes used in warehouse logistics,[4] as it effectively measures the time an overhead crane takes to move an object (as the crane can move on the x and y axes at the same time but at the same speed along each axis). It is also widely used in electronic computer-aided manufacturing (CAM) applications, in particular, in optimization algorithms for these. For the sequence space of infinite-length sequences of real or complex numbers, the Chebyshev distance generalizes to the ℓ∞-norm; this norm is sometimes called the Chebyshev norm.
For the space of (real or complex-valued) functions, the Chebyshev distance generalizes to theuniform norm.
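As a concrete illustration of the definitions above, the following short Python sketch (using NumPy; the function name chebyshev is purely illustrative) computes the Chebyshev distance as the largest coordinate-wise difference and shows numerically that the L_p distances approach it as p grows:

```python
import numpy as np

def chebyshev(x, y):
    """Chebyshev (L-infinity) distance: the largest coordinate-wise difference."""
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return float(np.max(diff))

# Chessboard example: squares f6 and e2 as (file, rank) coordinates with a = 1.
print(chebyshev((6, 6), (5, 2)))      # 4.0, the number of king moves

# The L_p distances approach the Chebyshev distance as p grows.
x, y = np.array([1.0, 5.0, -2.0]), np.array([4.0, 1.0, 0.0])
for p in (1, 2, 8, 64):
    print(p, np.sum(np.abs(x - y) ** p) ** (1.0 / p))
print("max |x_i - y_i| =", chebyshev(x, y))   # 4.0, the limiting value
```

For everyday use, the same metric is also available directly as scipy.spatial.distance.chebyshev.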
https://en.wikipedia.org/wiki/Chebyshev_distance
Incryptography,forward secrecy(FS), also known asperfect forward secrecy(PFS), is a feature of specifickey-agreement protocolsthat gives assurances thatsession keyswill not be compromised even if long-term secrets used in the session key exchange are compromised, limiting damage.[1][2][3]ForTLS, the long-term secret is typically theprivate keyof the server. Forward secrecy protects past sessions against future compromises of keys or passwords. By generating a unique session key for every session a user initiates, the compromise of a single session key will not affect any data other than that exchanged in the specific session protected by that particular key. This by itself is not sufficient for forward secrecy which additionally requires that a long-term secret compromise does not affect the security of past session keys. Forward secrecy protects data on thetransport layerof a network that uses common transport layer security protocols, includingOpenSSL,[4]when its long-term secret keys are compromised, as with theHeartbleedsecurity bug. If forward secrecy is used, encrypted communications and sessions recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future, even if the adversary actively interfered, for example via aman-in-the-middle (MITM) attack. The value of forward secrecy is that it protects past communication. This reduces the motivation for attackers to compromise keys. For instance, if an attacker learns a long-term key, but the compromise is detected and the long-term key is revoked and updated, relatively little information is leaked in a forward secure system. The value of forward secrecy depends on the assumed capabilities of an adversary. Forward secrecy has value if an adversary is assumed to be able to obtain secret keys from a device (read access) but is either detected or unable to modify the way session keys are generated in the device (full compromise). In some cases an adversary who can read long-term keys from a device may also be able to modify the functioning of the session key generator, as in the backdooredDual Elliptic Curve Deterministic Random Bit Generator. If an adversary can make the random number generator predictable, then past traffic will be protected but all future traffic will be compromised. The value of forward secrecy is limited not only by the assumption that an adversary will attack a server by only stealing keys and not modifying the random number generator used by the server but it is also limited by the assumption that the adversary will only passively collect traffic on the communications link and not be active using a man-in-the-middle attack. Forward secrecy typically uses an ephemeralDiffie–Hellman key exchangeto prevent reading past traffic. The ephemeral Diffie–Hellman key exchange is often signed by the server using a static signing key. If an adversary can steal (or obtain through a court order) this static (long term) signing key, the adversary can masquerade as the server to the client and as the client to the server and implement a classic man-in-the-middle attack.[5] The term "perfect forward secrecy" was coined by C. G. 
Günther in 1990[6]and further discussed byWhitfield Diffie,Paul van Oorschot, and Michael James Wiener in 1992,[7]where it was used to describe a property of the Station-to-Station protocol.[8] Forward secrecy has also been used to describe the analogous property ofpassword-authenticated key agreementprotocols where the long-term secret is a (shared)password.[9] In 2000 theIEEEfirst ratifiedIEEE 1363, which establishes the related one-party and two-party forward secrecy properties of various standard key agreement schemes.[10] An encryption system has the property of forward secrecy if plain-text (decrypted) inspection of the data exchange that occurs during key agreement phase of session initiation does not reveal the key that was used to encrypt the remainder of the session. The following is a hypothetical example of a simple instant messaging protocol that employs forward secrecy: Forward secrecy (achieved by generating new session keys for each message) ensures that past communications cannot be decrypted if one of the keys generated in an iteration of step 2 is compromised, since such a key is only used to encrypt a single message. Forward secrecy also ensures that past communications cannot be decrypted if the long-term private keys from step 1 are compromised. However, masquerading as Alice or Bob would be possible going forward if this occurred, possibly compromising all future messages. Forward secrecy is designed to prevent the compromise of a long-term secret key from affecting the confidentiality of past conversations. However, forward secrecy cannot defend against a successfulcryptanalysisof the underlyingciphersbeing used, since a cryptanalysis consists of finding a way to decrypt an encrypted message without the key, and forward secrecy only protects keys, not the ciphers themselves.[11]A patient attacker can capture a conversation whose confidentiality is protected through the use ofpublic-key cryptographyand wait until the underlying cipher is broken (e.g. largequantum computerscould be created which allow thediscrete logarithm problemto be computed quickly), a.k.a.harvest now, decrypt laterattacks. This would allow the recovery of old plaintexts even in a system employing forward secrecy. Non-interactive forward-secure key exchange protocols face additional threats that are not relevant to interactive protocols. In amessage suppressionattack, an attacker in control of the network may itself store messages while preventing them from reaching the intended recipient; as the messages are never received, the corresponding private keys may not be destroyed or punctured, so a compromise of the private key can lead to successful decryption. Proactively retiring private keys on a schedule mitigates, but does not eliminate, this attack. In amalicious key exhaustionattack, the attacker sends many messages to the recipient and exhausts the private key material, forcing a protocol to choose between failing closed (and enablingdenial of serviceattacks) or failing open (and giving up some amount of forward secrecy).[12] Most key exchange protocols areinteractive, requiring bidirectional communication between the parties. 
A protocol that permits the sender to transmit data without first needing to receive any replies from the recipient may be callednon-interactive, orasynchronous, orzero round trip(0-RTT).[13][14] Interactivity is onerous for some applications—for example, in a secure messaging system, it may be desirable to have astore-and-forwardimplementation, rather than requiring sender and recipient to be online at the same time; loosening the bidirectionality requirement can also improve performance even where it is not a strict requirement, for example at connection establishment or resumption. These use cases have stimulated interest in non-interactive key exchange, and, as forward security is a desirable property in a key exchange protocol, in non-interactive forward secrecy.[15][16]This combination has been identified as desirable since at least 1996.[17]However, combining forward secrecy and non-interactivity has proven challenging;[18]it had been suspected that forward secrecy with protection againstreplay attackswas impossible non-interactively, but it has been shown to be possible to achieve all three desiderata.[14] Broadly, two approaches to non-interactive forward secrecy have been explored,pre-computed keysandpuncturable encryption.[16] With pre-computed keys, many key pairs are created and the public keys shared, with the private keys destroyed after a message has been received using the corresponding public key. This approach has been deployed as part of theSignal protocol.[19] In puncturable encryption, the recipient modifies their private key after receiving a message in such a way that the new private key cannot read the message but the public key is unchanged.Ross J. Andersoninformally described a puncturable encryption scheme for forward secure key exchange in 1997,[20]andGreen & Miers (2015)formally described such a system,[21]building on the related scheme ofCanetti, Halevi & Katz (2003), which modifies the private key according to a schedule so that messages sent in previous periods cannot be read with the private key from a later period.[18]Green & Miers (2015)make use ofhierarchical identity-based encryptionandattribute-based encryption, whileGünther et al. (2017)use a different construction that can be based on any hierarchical identity-based scheme.[22]Dallmeier et al. (2020)experimentally found that modifyingQUICto use a 0-RTT forward secure and replay-resistant key exchange implemented with puncturable encryption incurred significantly increased resource usage, but not so much as to make practical use infeasible.[23] Weak perfect forward secrecy (Wpfs) is the weaker property whereby when agents' long-term keys are compromised, the secrecy of previously established session-keys is guaranteed, but only for sessions in which the adversary did not actively interfere. This new notion, and the distinction between this and forward secrecy was introduced by Hugo Krawczyk in 2005.[24][25]This weaker definition implicitly requires that full (perfect) forward secrecy maintains the secrecy of previously established session keys even in sessions where the adversarydidactively interfere, or attempted to act as a man in the middle. 
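To make the ephemeral key exchange idea discussed above concrete, here is a minimal, hedged sketch in Python using the third-party cryptography package (the function name new_session_key is illustrative, and authentication of the exchanged public keys with long-term identity keys is omitted). Each session derives its key from freshly generated ephemeral X25519 key pairs that are discarded afterwards, which is what keeps a later compromise of long-term keys from exposing past sessions:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def new_session_key():
    # Each party generates a fresh (ephemeral) key pair for this session only.
    alice_ephemeral = X25519PrivateKey.generate()
    bob_ephemeral = X25519PrivateKey.generate()
    # In a real protocol the public halves travel over the network and are
    # authenticated (for example, signed) with long-term identity keys.
    shared_secret = alice_ephemeral.exchange(bob_ephemeral.public_key())
    # Derive a symmetric session key; the ephemeral private keys are then discarded.
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"session key").derive(shared_secret)

# Every session gets an independent key: compromising the long-term keys (or one
# session key) later reveals nothing about the other sessions.
key_1, key_2 = new_session_key(), new_session_key()
assert key_1 != key_2
```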
Forward secrecy is present in several protocol implementations, such asSSHand as an optional feature inIPsec(RFC 2412).Off-the-Record Messaging, a cryptography protocol and library for many instant messaging clients, as well asOMEMOwhich provides additional features such as multi-user functionality in such clients, both provide forward secrecy as well asdeniable encryption. InTransport Layer Security(TLS),cipher suitesbased onDiffie–Hellmankey exchange (DHE-RSA, DHE-DSA) andelliptic curve Diffie–Hellmankey exchange (ECDHE-RSA, ECDHE-ECDSA) are available. In theory, TLS can use forward secrecy since SSLv3, but many implementations do not offer forward secrecy or provided it with lower grade encryption.[26]TLS 1.3 removed support for RSA for key exchange, leaving Diffie-Hellman (with forward-secrecy) as the sole algorithm for key exchange.[27] OpenSSLsupports forward secrecy usingelliptic curve Diffie–Hellmansince version 1.0,[28]with a computational overhead of approximately 15% for the initial handshake.[29] TheSignal Protocoluses theDouble Ratchet Algorithmto provide forward secrecy.[30] On the other hand, among popular protocols currently in use,WPA Personaldid not support forward secrecy before WPA3.[31] Since late 2011, Google provided forward secrecy with TLS by default to users of itsGmailservice,Google Docsservice, and encrypted search services.[28]Since November 2013,Twitterprovided forward secrecy with TLS to its users.[32]Wikishosted by theWikimedia Foundationhave all provided forward secrecy to users since July 2014[33]and are requiring the use of forward secrecy since August 2018. Facebook reported as part of an investigation into email encryption that, as of May 2014, 74% of hosts that supportSTARTTLSalso provide forward secrecy.[34]TLS 1.3, published in August 2018, dropped support for ciphers without forward secrecy. As of February 2019[update], 96.6% of web servers surveyed support some form of forward secrecy, and 52.1% will use forward secrecy with most browsers.[35] At WWDC 2016, Apple announced that all iOS apps would need to use App Transport Security (ATS), a feature which enforces the use of HTTPS transmission. Specifically, ATS requires the use of an encryption cipher that provides forward secrecy.[36]ATS became mandatory for apps on January 1, 2017.[37] TheSignalmessaging application employs forward secrecy in its protocol, notably differentiating it from messaging protocols based onPGP.[38] Forward secrecy is supported on 92.6% of websites on modern browsers, while 0.3% of websites do not support forward secrecy at all as of May 2024.[39]
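The pre-computed-key approach to non-interactive forward secrecy mentioned earlier can be sketched in the same style. This is a toy illustration only, not the Signal protocol's actual prekey handling; the Recipient class and helper names are invented for the example. The recipient publishes a batch of one-time public keys in advance; the sender performs a Diffie–Hellman exchange against one of them without waiting for a reply, and the recipient deletes the matching private key after use, so the message key cannot be re-derived after a later device compromise:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_message_key(shared_secret):
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"prekey message").derive(shared_secret)

class Recipient:
    def __init__(self, n=10):
        # Pre-compute n one-time key pairs and publish the public halves.
        self._private = {i: X25519PrivateKey.generate() for i in range(n)}
        self.published = {i: key.public_key() for i, key in self._private.items()}

    def receive(self, prekey_id, sender_public):
        # Use the matching private key exactly once, then delete it.
        one_time_private = self._private.pop(prekey_id)
        return derive_message_key(one_time_private.exchange(sender_public))

recipient = Recipient()
prekey_id, prekey_public = next(iter(recipient.published.items()))

# Sender side is non-interactive: no reply from the recipient is needed.
sender_ephemeral = X25519PrivateKey.generate()
sender_key = derive_message_key(sender_ephemeral.exchange(prekey_public))
# ...the sender encrypts with sender_key and transmits prekey_id plus its public key...

recipient_key = recipient.receive(prekey_id, sender_ephemeral.public_key())
assert sender_key == recipient_key            # both ends hold the same message key
assert prekey_id not in recipient._private    # the one-time private key is gone
```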
https://en.wikipedia.org/wiki/Forward_secrecy
Insoftware engineering,inversion of control(IoC) is a design principle in which custom-written portions of acomputer programreceive theflow of controlfrom an external source (e.g. aframework). The term "inversion" is historical: asoftware architecturewith this design "inverts" control as compared toprocedural programming. In procedural programming, a program's custom codecallsreusable libraries to take care of generic tasks, but with inversion of control, it is the external code or framework that is in control and calls the custom code. Inversion of control has been widely used by application development frameworks since the rise of GUI environments[1][2]and continues to be used both in GUI environments and inweb server application frameworks. Inversion of control makes the frameworkextensibleby the methods defined by the application programmer.[3] Event-driven programmingis often implemented using IoC so that the custom code need only be concerned with the handling of events, while theevent loopand dispatch of events/messages is handled by the framework or the runtime environment. In web server application frameworks, dispatch is usually called routing, and handlers may be called endpoints. The phrase "inversion of control" has separately also come to be used in the community of Java programmers to refer specifically to the patterns ofdependency injection(passing services to objects that need them) that occur with "IoC containers" in Java frameworks such as theSpring Framework.[4]In this different sense, "inversion of control" refers to granting the framework control over the implementations of dependencies that are used by application objects[5]rather than to the original meaning of granting the frameworkcontrol flow(control over the time of execution of application code, e.g. callbacks). As an example, with traditional programming, themain functionof an application might make function calls into a menu library to display a list of availablecommandsand query the user to select one.[6]The library thus would return the chosen option as the value of the function call, and the main function uses this value to execute the associated command. This style was common intext-based interfaces. For example, anemail clientmay show a screen with commands to load new mail, answer the current mail, create new mail, etc., and the program execution would block until the user presses a key to select a command. With inversion of control, on the other hand, the program would be written using asoftware frameworkthat knows common behavioral and graphical elements, such aswindowing systems, menus, controlling the mouse, and toolbars. The custom code "fills in the blanks" for the framework, such as supplying a table of menu items and registering a code subroutine for each item, but it is the framework that monitors the user's actions and invokes the subroutine when a menu item is selected. In the mail client example, the framework could follow both the keyboard and mouse inputs and call the command invoked by the user by either means and at the same time monitor thenetwork interfaceto find out if new messages arrive and refresh the screen when some network activity is detected. The same framework could be used as the skeleton for a spreadsheet program or a text editor. Conversely, the framework knows nothing about Web browsers, spreadsheets, or text editors; implementing their functionality takes custom code. 
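A minimal sketch of this contrast, in Python rather than any particular GUI toolkit (the ToyFramework class and the event names are invented for illustration): in the traditional style the application code owns the loop and calls into a library, while under inversion of control the application only registers handlers and the framework decides when to call them.

```python
# Traditional style: the application owns the control flow and calls library code.
def traditional_mail_client(commands):
    while True:
        choice = input(f"Command {sorted(commands)} or 'quit': ")
        if choice == "quit":
            break
        commands[choice]()                 # application code calls into the library

# Inversion of control: a (toy) framework owns the loop and calls back into
# whatever handlers the application registered with it.
class ToyFramework:
    def __init__(self):
        self._handlers = {}

    def register(self, event, handler):
        self._handlers[event] = handler    # the application "fills in the blanks"

    def run(self, events):
        for event in events:               # the framework dispatches the events
            self._handlers.get(event, lambda: None)()

framework = ToyFramework()
framework.register("new_mail", lambda: print("loading new mail"))
framework.register("reply", lambda: print("replying to current mail"))
framework.run(["new_mail", "reply"])       # "don't call us, we'll call you"
```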
Inversion of control carries the strong connotation that the reusable code and the problem-specific code are developed independently even though they operate together in an application.Callbacks,schedulers,event loops, and thetemplate methodare examples ofdesign patternsthat follow the inversion of control principle, although the term is most commonly used in the context ofobject-oriented programming. (Dependency injectionis an example of the separate, specific idea of "inverting control over the implementations of dependencies" popularised by Java frameworks.)[4] Inversion of control is sometimes referred to as the "Hollywood Principle: Don't call us, we'll call you," reflecting how frameworks dictate execution flow.[1] Inversion of control is not a new term in computer science.Martin Fowlertraces the etymology of the phrase back to 1988,[7]but it is closely related to the concept ofprogram inversiondescribed byMichael Jacksonin hisJackson Structured Programmingmethodology in the 1970s.[8]Abottom-up parsercan be seen as an inversion of atop-down parser: in the one case, the control lies with the parser, while in the other case, it lies with the receiving application. The term was used by Michael Mattsson in a thesis (with its original meaning of a framework calling application code instead of vice versa)[9]and was then taken from there[10]by Stefano Mazzocchi and popularized by him in 1999 in a defunct Apache Software Foundation project, Avalon, in which it referred to a parent object passing in a child object's dependencies in addition to controlling execution flow.[11]The phrase was further popularized in 2004 byRobert C. MartinandMartin Fowler, the latter of whom traces the term's origins to the 1980s.[7] In traditional programming, theflowof thebusiness logicis determined by objects that arestatically boundto one another. With inversion of control, the flow depends on the object graph that is built up during program execution. Such a dynamic flow is made possible by object interactions that are defined through abstractions. Thisrun-time bindingis achieved by mechanisms such asdependency injectionor aservice locator. In IoC, the code could also be linked statically during compilation, but finding the code to execute by reading its description fromexternal configurationinstead of with a direct reference in the code itself. In dependency injection, a dependentobjector module is coupled to the object it needs atrun time. Which particular object will satisfy the dependency during program execution typically cannot be known atcompile timeusingstatic analysis. While described in terms of object interaction here, the principle can apply to other programming methodologies besidesobject-oriented programming. In order for the running program to bind objects to one another, the objects must possess compatibleinterfaces. For example, classAmay delegate behavior to interfaceIwhich is implemented by classB; the program instantiatesAandB, and then injectsBintoA. Web browsers implement inversion of control for DOM events in HTML. The application developer usesdocument.addEventListener()to register a callback. This example code for an ASP.NET Core web application creates a web application host, registers an endpoint, and then passes control to the framework:[12]
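The C# listing referred to at the end of the paragraph above is not reproduced here. As a rough analogue of the same pattern, under the assumption that a small Python web framework example is an acceptable stand-in for ASP.NET Core, the following sketch uses Flask: the application creates the host, registers an endpoint handler, and then hands control to the framework, which receives requests and decides when that handler runs.

```python
# Minimal web-framework analogue (requires the third-party Flask package).
from flask import Flask

app = Flask(__name__)        # create the application host

@app.route("/")              # register custom code (an endpoint) with the framework
def home():
    return "Hello from the endpoint handler"

if __name__ == "__main__":
    app.run()                # hand control to the framework's request loop
```

The same inversion appears in the browser example above: document.addEventListener() registers a callback, and the browser's event loop decides when it is invoked.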
https://en.wikipedia.org/wiki/Inversion_of_control